My mistakes on the path to impact by Denise_Melchin
16:02
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: My mistakes on the path to impact, published by Denise_Melchin on the effective altruism forum.
Doing a lot of good has been a major priority in my life for several years now. Unfortunately I made some substantial mistakes which have lowered my expected impact a lot, and I am on a less promising trajectory than I would have expected a few years ago. In the hope that other people can learn from my mistakes, I thought it made sense to write them up here! I will attempt to list the mistakes which lowered my impact most over the past several years in this post and then analyse their causes. Writing this post and previous drafts has also been very personally useful to me, and I can recommend undertaking such an analysis.
Please keep in mind that my analysis of my mistakes is likely at least a bit misguided and incomprehensive.
It would have been nice to condense the post a bit more and structure it better, but having already spent a lot of time on it and wanting to move on to other projects, I thought it would be best not to let the perfect be the enemy of the good!
To put my mistakes into context, I will give a brief outline of what happened in my career-related life in the past several years before discussing what I consider to be my main mistakes.
Background
I came across the EA Community in 2012, a few months before I started university. Before that point my goal had always been to become a researcher. Until early 2017, I did a mathematics degree in Germany and received a couple of scholarships. I did a lot of ‘EA volunteering’ over the years, mostly community building and large-scale grantmaking. I also did two unpaid internships at EA orgs, one during my degree and one after graduating, in summer 2017.
After completing my summer internship, I started to try to find a role at an EA org. I applied to ~7 research and grantmaking roles in 2018. I got to the last stage 4 times, but received no offers. The closest I got was receiving a 3-month trial offer as a Research Analyst at Open Phil, but it turned out they were unable to provide visas. In 2019, I worked as a Research Assistant for a researcher at an EA-aligned university institution on a grant for a few hundred hours. I stopped as there seemed to be no route to a secure position and the role did not seem like a good fit.
In late 2019 I applied for jobs suitable for STEM graduates with no experience. I also stopped doing most of my EA volunteering. In January 2020 I began to work in an entry-level data analyst role in the UK Civil Service, which I have been really happy with. In November, after 6.5 months of full-time-equivalent work, I received a promotion to a more senior role with management responsibility and a significant pay rise.
First I am going to discuss what I think I did wrong from a first-order practical perspective. Afterwards I will explain which errors in my decision-making process I consider the likely culprits for these mistakes: the patterns of behaviour which need to be changed to avoid similar mistakes in the future.
A lot of the following seems pretty silly to me now, and I struggle to imagine how I ever fully bought into the mistakes and systematic errors in my thinking in the first place. But here we go!
What did I get wrong?
I did not build broad career capital, nor did I keep my options open. During my degree, I mostly focused on EA community building efforts as well as making good donation decisions. I made few attempts to build skills for the type of work I was most interested in doing (research) or skills that would be particularly useful for higher earning paths (e.g. programming), especially later on. My only internships were at EA organisations in research roles. I also stopped trying to do well in my degree later on, and stopped my previously substantial involvement in political work.
In my firs...
Dec 12, 2021
Growth and the case against randomista development by HaukeHillebrandt, John G. Halstead
57:20
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Growth and the case against randomista development, published by HaukeHillebrandt, John G. Halstead on the effective altruism forum.
Update, 3/8/2021: I (Hauke) gave a talk at Effective Altruism Global on this post:
Summary
Randomista development (RD) is a form of development economics which evaluates and promotes interventions that can be tested by randomised controlled trials (RCTs). It is exemplified by GiveWell (which primarily works in health) and the randomista movement in economics (which primarily works in economic development).
Here we argue for the following claims, which we believe to be quite weak:
1. Prominent economists make plausible arguments which suggest that research on and advocacy for economic growth in low- and middle-income countries is more cost-effective than the things funded by proponents of randomista development.
2. Effective altruists have devoted too little attention to these arguments.
3. Assessing the soundness of these arguments should be a key focus for current generation-focused effective altruists over the next few years.
We hope to start a conversation on these questions, and potentially to cause a major reorientation within EA.
We also believe the following stronger claims:
4. Improving health is not the best way to increase growth.
5. A ~4 person-year research effort will find donation opportunities working on economic growth in LMICs which are substantially better than GiveWell’s top charities from a current generation human welfare-focused point of view.
However, economic growth is not all that matters. GDP misses many crucial determinants of human welfare, including leisure time, inequality, foregone consumption from investment, public goods, social connection, life expectancy, and so on. A top priority for effective altruists should be to assess the best way to increase human welfare outside of the constraints of randomista development, i.e. allowing interventions that have not been or cannot be tested by RCTs.
We proceed as follows:
We define randomista development and contrast it with research and advocacy for growth-friendly policies in low- and middle-income countries.
We show that randomista development is overrepresented in EA, and that, in contradistinction, research on and advocacy for growth-friendly economic policy (we refer to this as growth throughout) is underrepresented.
We then show why some prominent economists believe that, a priori, growth is much more effective than most RD interventions.
We present a quantitative model that tries to formalize these intuitions and allows us to compare global development interventions with economic growth interventions. The model suggests that under plausible assumptions a hypothetical growth intervention can be thousands of times more cost-effective than typical RD interventions such as cash transfers. However, when these assumptions are relaxed and growth interventions are compared to the very good RD interventions, the two are on a similar level of effectiveness. (A rough numerical illustration of this kind of comparison follows this list.)
We consider various possible objections and qualifications to our argument.
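To make the shape of that comparison concrete, here is a minimal back-of-envelope sketch in the spirit of the model described above. Every number is an assumption chosen purely for illustration; none is taken from the post or its model.

```python
# Purely illustrative cost-effectiveness comparison: a direct cash transfer vs. a
# hypothetical research/advocacy effort aimed at growth-friendly policy.
# All inputs are assumptions for illustration, not figures from the post.

transfer_cost = 1_000_000          # $ spent on a cash-transfer programme
income_gained_transfer = 830_000   # $ of extra consumption reaching recipients

advocacy_cost = 1_000_000          # $ spent on growth research and advocacy
p_success = 0.001                  # assumed chance the effort shifts policy
people_affected = 50_000_000       # population of a mid-sized LMIC
income_gain_per_person = 100       # assumed present-value income gain per person, $

expected_gain_advocacy = p_success * people_affected * income_gain_per_person

print("income gained per $ spent, cash transfer: ",
      income_gained_transfer / transfer_cost)
print("income gained per $ spent, growth advocacy:",
      expected_gain_advocacy / advocacy_cost)
# With these made-up inputs the advocacy route looks ~6x better per dollar;
# more optimistic (or pessimistic) assumptions swing the ratio by orders of
# magnitude, which is exactly the sensitivity the post's model explores.
```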
Acknowledgements
Thanks to Stefan Schubert, Stephen Clare, Greg Lewis, Michael Wiebe, Sjir Hoeijmakers, Johannes Ackva, Gregory Thwaites, Will MacAskill, Aidan Goth, Sasha Cooper, and Carl Shulman for comments. Any mistakes are our own. Opinions are ours, not those of our employers.
Marinella Capriati at GiveWell commented on this piece, and the piece does not represent her views or those of GiveWell.
1. Defining Randomista Development
We define randomista development (RD) as an approach to development economics which investigates, evaluates and recommends only interventions which can be tested by randomised controlled trials (RCTs).
RD can take low-risk or more “hits-based” forms. Effective altruists have especially focused on the low-risk for...
Dec 12, 2021
Announcing my retirement by Aaron Gertler
01:58
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Announcing my retirement, published by Aaron Gertler on the effective altruism forum.
A few sharp-eyed readers noticed my imminent departure from CEA in our last quarterly report. Gold stars all around!
My last day as our content specialist — and thus, my last day helping to run the Forum — is December 10th. The other moderators will continue to handle the basics, and we’re in the process of hiring my replacement. (Let me know if anyone comes to mind!)
Managing this place was fun. It wasn’t always fun, but — on the whole, a good time.
I’ve enjoyed giving feedback to a few hundred people, organizing some interesting AMAs, running a writing contest, building up the Digest, hosting workshops for EA groups around the world, and deleting a truly staggering number of comments advertising escort services (I’ll spare you the link).
More broadly, I’ve felt a continual sense of admiration for everyone who cares about the Forum and tries to make it better — by reading, voting, posting, crossposting, commenting, tagging, Wiki-editing, bug-reporting, and/or moderating. Collectively, you’ve put in tens of thousands of hours of work to develop our strange, complicated, unique website, with scant compensation besides karma.
(Now that I’m leaving, it’s time to be honest — despite the rumors, our karma isn’t the kind that gets you a better afterlife.)
Thank you for everything you’ve done to make this job what it was.
What’s next?
In January, I’ll join Open Philanthropy as their communications officer, working to help their researchers publish more of their work.
I’ll also be joining Effective Giving Quest as their first partnered streamer. Wish me luck: moderating this place sometimes felt like herding cats, but it’s nothing compared to Twitch chat.
My Forum comments will be less frequent, but probably spicier.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
Dec 12, 2021
My current impressions on career choice for longtermists by Holden Karnofsky
44:13
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: My current impressions on career choice for longtermists, published by Holden Karnofsky on the effective altruism forum.
This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there.
Edited to add: see below for why I chose to focus on longtermism in this post.
While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by contrast, I emphasize "aptitudes" one can build in a wide variety of roles and causes (including non-effective-altruist organizations) and then apply to a wide variety of longtermist-relevant jobs (often with options working on more than one cause). Example aptitudes include: "helping organizations achieve their objectives via good business practices," "evaluating claims against each other," "communicating already-existing ideas to not-yet-sold audiences," etc.
(Other frameworks for career choice include starting with causes (AI safety, biorisk, etc.) or heuristics ("Do work you can be great at," "Do work that builds your career capital and gives you more options."). I tend to feel people should consider multiple frameworks when making career choices, since any one framework can contain useful insight, but risks being too dogmatic and specific for individual cases.)
For each aptitude I list, I include ideas for how to explore the aptitude and tell whether one is on track. Something I like about an aptitude-based framework is that it is often relatively straightforward to get a sense of one's promise for, and progress on, a given "aptitude" if one chooses to do so. This contrasts with cause-based and path-based approaches, where there's a lot of happenstance in whether there is a job available in a given cause or on a given path, making it hard for many people to get a clear sense of their fit for their first-choice cause/path and making it hard to know what to do next. This framework won't make it easier for people to get the jobs they want, but it might make it easier for them to start learning about what sort of work is and isn't likely to be a fit.
I’ve tried to list aptitudes that seem to have relatively high potential for contributing directly to longtermist goals. I’m sure there are aptitudes I should have included and didn’t, including aptitudes that don’t seem particularly promising from a longtermist perspective now but could become more so in the future.
In many cases, developing a listed aptitude is no guarantee of being able to get a job directly focused on top longtermist goals. Longtermism is a fairly young lens on the world, and there are (at least today) a relatively small number of jobs fitting that description. However, I also believe that even if one never gets such a job, there are a lot of opportunities to contribute to top longtermist goals, using whatever job and aptitudes one does have. To flesh out this view, I lay out an "aptitude-agnostic" vision for contributing to longtermism.
Some longtermism-relevant aptitudes
"Organization building, running, and boosting" aptitudes[1]
Basic profile: helping an organization by bringing "generally useful" skills to it. By "generally useful" skills, I mean skills that could help a wide variety of organizations accomplish a wide variety of different objectives. Such skills could include:
Business operations and project management (including setting objectives, metrics, etc.)
People management and management coaching (some manager jobs require specialized skills, but some just require general management-associated skills)
Executive leadership (setting and enfo...
Dec 12, 2021
After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation by EA applicant
06:44
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation, published by EA applicant on the effective altruism forum.
(I am writing this post under a pseudonym because I don’t want potential future non-EA employers to find this with a quick google search. Initially my name could be found on the CV linked in the text, but after this post was shared much more widely than I had expected, I got cold feet and removed it.)
In the past 12 months, I applied for 20 positions in the EA community. I didn’t get any offer. At the end of this post, I list all those positions, and how much time I spent in the application process. Before that, I write about why I think more posts like this could be useful.
Please note: The positions were all related to long-termism, EA movement building, or meta-activities (e.g. grant-making). To stress this again, I did not apply for any positions in e.g. global health or animal welfare, so what I’m going to say might not apply to these fields.
Costs of applications
Applying has considerable time-costs. Below, I estimate that I spent 7-8 weeks of full-time work in application processes alone. I guess it would be roughly twice as much if I factored in things like searching for positions, deciding which positions to apply for, or researching visa issues. (Edit: Some organisations reimburse for time spent in work tests/trials. I got paid in 4 of the 20 application processes. I might have gotten paid in more processes if I had advanced further).
At least for me, handling multiple rejections was mentally challenging. Additionally, the process may foster resentment towards the EA community. I am aware the following statement is super inaccurate and no one is literally saying that, but sometimes this is the message I felt I was getting from the EA community:
“Hey you! You know, all these ideas that you had about making the world a better place, like working for Doctors without Borders? They probably aren’t that great. The long-term future is what matters. And that is not funding constrained, so earning to give is kind of off the table as well. But the good news is, we really, really need people working on these things. We are so talent constrained. (20 applications later…) Yeah, when we said that we need people, we meant capable people. Not you. You suck.”
Why I think more posts like this would have been useful for me
Overall, I think it would have helped me to know just how competitive jobs in the EA community (long-termism, movement building, meta-stuff) are. I think I would have been more careful in selecting the positions I applied for and I would probably have started exploring other ways to have an impactful career earlier. Or maybe I would have applied to the same positions, but with lower expectations and less of a feeling of being a total loser who will never contribute anything towards making the world a better place after being rejected once again 😊
Of course, I am just one example, and others will have different experiences. For example, I could imagine that it is easier to get hired by an EA organisation if you have work experience outside of research and hospitals (although many of the positions I applied for were in research or research-related).
However, I don’t think I am a very special case. I know several people who fulfil all of the following criteria:
- They studied/are studying at postgraduate level at a highly competitive university (like Oxford) or in a highly competitive subject (like medical school)
- They are within the top 5% of their course
- They have impressive extracurricular activities (like leading a local EA chapter, having organised successful big events, peer-reviewed publications while studying, …)
- They are very motivated and EA aligned
- They applied for at least 5 positi...
Dec 12, 2021
EAF's ballot initiative doubled Zurich’s development aid by Jonas Vollmer
23:48
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: EAF's ballot initiative doubled Zurich’s development aid, published by Jonas Vollmer on the effective altruism forum.
Summary
In 2016, the Effective Altruism Foundation (EAF), then based in Switzerland, launched a ballot initiative asking to increase the city of Zurich’s development cooperation budget and to allocate it more effectively.
In 2018, we coordinated a counterproposal with the city council that preserved the main points of our original initiative and had a high chance of success.
In November 2019, the counterproposal passed with a 70% majority. Zurich’s development cooperation budget will thus increase from around $3 million to around $8 million per year. The city will aim to allocate it “based on the available scientific research on effectiveness and cost-effectiveness.” This seems to be the first time that Swiss legislation on development cooperation mentions effectiveness requirements.
The initiative cost around $25,000 in direct expenses and around $190,000 in opportunity costs. Depending on the assumptions, it raised a present value of $20–160 million in development funding.
EAs should consider launching similar initiatives in other Swiss cities and around the world.
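As a rough check on the $20–160 million figure, here is a present-value sketch built from the ~$5 million per year budget increase quoted above. The discount rates and horizons are assumptions chosen for illustration; the post's own model may use different ones.

```python
# Rough present-value check on the quoted $20-160 million range.
# The ~$5M/year figure comes from the summary above ($3M -> $8M per year);
# the discount rates and horizons below are illustrative assumptions.

extra_budget_per_year = 5_000_000  # $ per year of additional development funding

def present_value(annual_amount, rate, years):
    """Present value of a constant annual cash flow over a fixed horizon."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

for rate, years in [(0.10, 10), (0.05, 20), (0.03, 50)]:
    pv = present_value(extra_budget_per_year, rate, years)
    print(f"discount {rate:.0%}, horizon {years} years: PV of roughly ${pv / 1e6:.0f} million")
# Depending on how long the increase persists and how it is discounted, this
# gives roughly $30-130 million, in the same ballpark as the quoted range.
```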
Initial proposal and signature collection
In spring 2016, the Effective Altruism Foundation (EAF), then still based in Basel, Switzerland, launched a ballot initiative asking for the city of Zurich’s development cooperation budget to be increased and to be allocated more effectively. (For information on EAF’s current focus, see this article.) We chose Zurich due to its large budget and leftist/centrist majority. I published an EA Forum post introducing the initiative and a corresponding policy paper (see English translation). (Note: In the EA Forum post, I overestimated the publicity/movement-building benefits and the probability that the original proposal would pass. I overemphasized the quantitative estimates, especially the point estimates, which don’t adequately represent the uncertainty. I underestimated the success probability of a favorable counterproposal. Also, the policy paper should have had a greater focus on hits-based, policy-oriented interventions because I think these have a chance of being even more cost-effective than more “straightforward” approaches and also tend to be viewed more favorably by professionals.)
We hired people and coordinated volunteers (mostly animal rights activists we had interacted with before) to collect the required 3,000 signatures (plus 20% safety margin) over six months to get a binding ballot vote. Signatures had to be collected in person in handwritten form. For city-level initiatives, people usually collect about 10 signatures per hour, and paying people to collect signatures costs about $3 per signature on average.
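Using only the figures in that paragraph, the implied effort for the signature drive is modest. This is a rough illustration; the actual split between hired collectors and volunteers is not given here.

```python
# Implied effort for the signature drive, using only the figures quoted above.
required_signatures = 3000
safety_margin = 0.20
collected = required_signatures * (1 + safety_margin)   # ~3,600 signatures

person_hours = collected / 10    # ~10 signatures collected per person-hour
fully_paid_cost = collected * 3  # ~$3 per signature if collection were outsourced

print(f"signatures to collect: {collected:.0f}")
print(f"person-hours if collected by hand: ~{person_hours:.0f}")
print(f"cost if every signature were paid for: ~${fully_paid_cost:,.0f}")
# Roughly 360 person-hours spread over six months, or about $11,000 if fully
# outsourced; the actual drive used a mix of hired collectors and volunteers.
```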
Picture: Start of signature collection on 25 May 2016.
Picture: Submission of the initiative at Zurich’s city hall on 22 November 2016.
The legislation we proposed (see the appendix) focused too strongly on Randomized Controlled Trials (RCTs) and demanded too much of a budget increase (from $3 million to $87 million per year). We made these mistakes because we had internal disagreements about the proposal and did not dedicate enough time to resolving them. This led to negative initial responses from the city council and influential charities (who thought the budget increase was too extreme, were pessimistic about the odds of success, and disliked the RCT focus), implying a <1% success probability at the ballot because public opinion tends to be heavily influenced by the city council’s official vote recommendation. At that point, we planned to retract the initiative before the vote to prevent negative PR for EA, while still aiming for a favorable counterproposal.
Counterproposal
As is common for Swiss ballot initiatives, the city d...
Dec 12, 2021
Is effective altruism growing? An update on the stock of funding vs. people by Benjamin_Todd
35:02
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Is effective altruism growing? An update on the stock of funding vs. people, published by Benjamin_Todd on the effective altruism forum.
This is a cross-post from 80,000 Hours. See part 2 on the allocation across cause areas.
In 2015, I argued that funding for effective altruism – especially within meta and longtermist areas – had grown faster than the number of people interested in it, and that this was likely to continue. As a result, there would be a funding ‘overhang’, creating skill bottlenecks for the roles needed to deploy this funding.
A couple of years ago, I wondered if this trend was starting to reverse. There hadn’t been any new donors on the scale of Good Ventures (the main partner of Open Philanthropy), which meant that total committed funds were growing slowly, giving the number of people a chance to catch up.
However, the spectacular asset returns of the last few years and the creation of FTX seem to have shifted the balance back towards funding. Now the funding overhang seems even larger in both proportional and absolute terms than in 2015.
In the rest of this post, I make some rough guesses at total committed funds compared to the number of interested people, to see how the balance of funding vs. talent might have changed over time.
This will also serve as an update on whether effective altruism is growing – with a focus on what I think are the two most important metrics: the stock of total committed funds, and of committed people.
This analysis also led me to make a small update in favour of giving now vs. investing to give later.
Here’s a summary of what’s coming up:
How much funding is committed to effective altruism (going forward)? Around $46 billion.
How quickly have these funds grown? About 37% per year since 2015, with much of the growth concentrated in 2020–2021.
How much is being donated each year? Around $420 million, which is just over 1% of committed capital, and has grown maybe about 21% per year since 2015.
How many committed community members are there? About 7,400 active members and 2,600 ‘committed’ members, growing 10–20% per year 2018–2020, and growing faster than that 2015–2017.
Has the funding overhang grown or shrunk? Funding seems to have grown faster than the number of people, so the overhang has grown in both proportional and absolute terms.
What might be the implications for career choice? Skill bottlenecks have probably increased for people able to think of ways to spend lots of funding effectively, run big projects, and evaluate grants.
To caveat, all of these figures are extremely rough, and are mainly estimated off the top of my head. I haven’t checked them with the relevant donors, so they might not endorse these estimates. However, I think they’re better than what exists currently, and thought it was important to try to give some kind of rough update on how my thinking has changed. There are likely some significant mistakes; I’d be keen to see a more thorough version of this analysis. Overall, please treat this more like notes from a podcast than a carefully researched article.
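For readers who want to see where the 'overhang' framing comes from, here is the arithmetic implied by the rounded figures above. Since all inputs are the post's rough estimates, treat the outputs as order-of-magnitude only.

```python
# Quick arithmetic on the headline estimates quoted above; all inputs are the
# rounded figures from the summary, so the outputs are order-of-magnitude only.

committed_funds = 46e9      # total committed funding, $
annual_donations = 420e6    # donated per year, $
active_members = 7_400
committed_members = 2_600

print(f"share of committed capital deployed per year: "
      f"{annual_donations / committed_funds:.1%}")
print(f"committed funds per active member:    ${committed_funds / active_members / 1e6:.1f}M")
print(f"committed funds per committed member: ${committed_funds / committed_members / 1e6:.1f}M")
# Roughly 1% of the capital is deployed each year, and there are several million
# dollars of committed funding per engaged person: the sense in which funding,
# rather than people, is the larger stock.
```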
Which growth metrics matter?
Broadly, the future[1] impact of effective altruism depends on the total stock of:
The quantity of committed funds
The number of committed people (adjusted for skills and influence)
The quality of our ideas (which determine how effectively funding and labour can be turned into impact)
(In economic growth models, this would be capital, labour, and productivity.)
You could consider other resources like political capital, reputation, or public support as well, though we can also think of these as being a special type of labour.
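For readers who want the growth-model analogy spelled out, a textbook Cobb-Douglas form is one way to write it. This is offered only as an illustration; the post does not commit to any particular functional form.

```latex
% Illustrative only: impact Y produced from committed funding K, committed
% labour L, and idea quality A, with diminishing returns to each input.
Y = A \, K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1
```

On this reading, a growing funding overhang corresponds to K rising faster than L, which (given diminishing returns to each input) lowers the marginal value of additional funding relative to additional people.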
In this post, I’m going to focus on funding and labour. (To do an equivalent analysis for ideas, which could easily matter more, we could try to estimate whether the expected return of our best way...
Dec 12, 2021
Announcing "Naming What We Can"! by GidonKadosh, EdoArad, Davidmanheim, ShayBenMoshe, sella, Guy Raveh, Asaf Ifergan
04:49
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Announcing "Naming What We Can"!, published by GidonKadosh, EdoArad, Davidmanheim, ShayBenMoshe, sella, Guy Raveh, Asaf Ifergan on the effective altruism forum.
We hereby announce a new meta-EA institution - "Naming What We Can".
Vision
We believe in a world where every EA organization and any project has a beautifully crafted name. We believe in a world where great minds are free from the shackles of the agonizing need to name their own projects.
Goal
To name and rename every EA organization, project, thing, or person. To alleviate any suffering caused by name-selection decision paralysis.
Mission
Using our superior humor and language articulation prowess, we will come up with names for stuff.
About us
We are a bunch of revolutionaries who believe in the power of correct naming. We translated over a quintillion distinct words from English to Hebrew. Some of us have read all of Unsong. One of us even read the whole Bible. We spent countless fortnights debating the ins and outs of our own org’s title - we Name What We Can.
What Do We Do?
We're here for the service of the EA community. Whatever you need to rename - we can name. Although we also rename whatever we can. Even if you didn't ask.
Examples
As a demonstration, we will now see some examples where NWWC has a much better name than the one currently used.
80,000 Hours => 64,620 Hours. Better fits the data and more equal toward women, two important EA virtues.
Charity Entrepreneurship => Charity Initiatives. (We don't know anyone who can spell entrepreneurship on their first try. Alternatively, own all of the variations: Charity Enterpeneurship, Charity Entreprenreurshrip, Charity Entrepenurship, Charity Entepenoorship, …)
Global Priorities Institute => Glomar Priorities Institute. We suggest including the dimension of time, making our globe a glome.
OpenPhil => Doing Right Philanthropy. Going by Dr. Phil would give a lot more clicks.
EA Israel => זולתנים יעילים בארץ הקודש (roughly, “Effective Altruists in the Holy Land”)
ProbablyGood => CrediblyGood. Because in EA we usually use credence rather than probability.
EA Hotel => Centre for Enabling EA Learning & Research.
Giving What We Can => Guilting Whoever We Can. Because people give more when they are feeling guilty about being rich.
Cause Prioritization => Toby Ordering.
Max Dalton => Max Delta. This represents the endless EA effort to maximize our ever-marginal utility.
Will MacAskill => will McAskill. Evidently a more common usage.
Peter Singer & Steven Pinker should be the same person, to avoid confusion.
OpenAI => ProprietaryAI. Followed by ClosedAI, UnalignedAI, MisalignedAI, and MalignantAI.
FHI => Bostrom's Squad.
GiveWell => Don'tGivePlayPumps. We feel that the message could be stronger this way.
Doing Good Better => Doing Right Right.
Electronic Arts, also known as EA, should change its name to Effective Altruism. They should also change all of their activities to Effective Altruism activities.
Impact estimation
Overall, we think the impact of the project will be net negative on expectation (see our Guesstimate model). That is because we think that the impact is likely to be somewhat positive, but there is a really small tail risk that we will cause the termination of the EA movement. However, as we are risk-averse we can mostly ignore high tails in our impact assessment so there is no need to worry.
Call to action
As a first step, we offer our services freely here on this very post! This is done to test the fit of the EA community to us. All you need to do is to comment on this post and ask us to name or rename whatever you desire.
Additionally, we hold a public recruitment process here on this very post! If you want to apply to NWWC as a member, comment on this post with a name suggestion of your choosing! Due to our current lack of diversity in our team, we particularly encourage women, people of color, ...
Dec 12, 2021
Major UN report discusses existential risk and future generations (summary) by finm, Avital Balwit
20:40
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Major UN report discusses existential risk and future generations (summary), published by finm, Avital Balwit on the effective altruism forum.
Co-written with Avital Balwit.
Introduction and Key Points
On September 10th, the Secretary General of the United Nations released a report called “Our Common Agenda”. This report seems highly relevant for those working on longtermism and existential risk, and appears to signal unexpectedly strong interest from the UN. It explicitly uses longtermist language and concepts, and suggests concrete proposals for institutions to represent future generations and manage catastrophic and existential risks. In this post we've tried summarising the report for an EA audience.
Some notable features of the report:
It explicitly discusses “future generations”, “long-termism”, and “existential risk”
It highlights biorisks, nuclear weapons, advanced technologies, and environmental disasters/climate change as extreme or even existential risks
It recommends the “regulation of artificial intelligence to ensure that this is aligned with shared global values”
It proposes several instruments for protecting future generations:
A Futures Lab for futures impact assessments and “regularly reporting on megatrends and catastrophic risks”
A Special Envoy for Future Generations to assist on “long-term thinking and foresight” and explore various international mechanisms for representing future generations, including...
Repurposing the Trusteeship Council to represent the interests of future generations (a major but long-inactive organ of the UN)
A Declaration on Future Generations
It proposes instruments for addressing major risks:
An Emergency Platform to convene key actors in response to complex global crises
A Strategic Foresight and Global Risk Report to be released every 5 years
It also calls for a 2023 Summit of the Future to discuss topics including these proposals addressing major risks and future generations
Other topics discussed which might be of interest:
Protecting and regulating the ‘digital commons’ and an internet-enabled ‘infodemic’
The governance of outer space
Lethal autonomous weapons
Improving pandemic response and preparedness
Developing well-being indices to complement GDP
Context
A year ago, on the 75th anniversary of the formation of the UN, member nations asked the Secretary General, António Guterres, to produce a report with recommendations to advance the agenda of the UN. This report is his response.
The report also coincides with Guterres’ re-election for his second term as Secretary General, which will begin in January 2022 and will likely last 5 years.
The report was informed by consultations, listening exercises, and input from outside experts. Toby Ord (author of The Precipice) was asked to contribute to the report as such an ‘outside expert’. Among other things he underlined that ‘future generations’ does not (just) mean ‘young people’, and that international institutions should begin to address risks even more severe than COVID-19, up to and including existential risks.
All of the new instruments and institutions described in the report are proposals made to the General Assembly of member nations. It remains to be seen how many of them will ultimately be implemented, and in what eventual form.
Summary of the Report
The report is divided into five main sections, with sections 3 and 4 being of greatest relevance from an EA or longtermist perspective. The first section situates the report in the context of the pandemic, suggesting that now is an unusually “pivotal moment” between “breakdown” and “breakthrough”. It highlights major past successes (the Montreal Protocol, the eradication of smallpox) and notes how the UN was established in the aftermath of WWII to “save succeeding generations” from war. It then calls for a “new globa...
Dec 12, 2021
Don't Be Bycatch by AllAmericanBreakfast
04:57
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Don't Be Bycatch, published by AllAmericanBreakfast on the effective altruism forum.
It's a common story. Someone who's passionate about EA principles, but has little in the way of resources, tries and fails to do EA things. They write blog posts, and nothing happens. They apply to jobs, and nothing happens. They do research, and don't get that grant. Reading articles no longer feels exciting, but like a chore, or worse: a reminder of their own inadequacy. Anybody who comes to this place, I heartily sympathize, and encourage them to disentangle themselves from this painful situation any way they can.
Why does this happen? Well, EA has two targets.
Subscribers to EA principles who the movement wants to become big donors or effective workers.
Big donors and effective workers who the movement wants to subscribe to EA principles.
I won't claim what weight this community and its institutions give to (1) vs. (2). But when we set out to catch big fish, we risk turning the little fish into bycatch. The technical term for this is churn.
Part of the issue is the planner's fallacy. When we're setting out, we underestimate how long and costly it will be to achieve an impact, and overestimate what we'll accomplish. The higher above average you aim for, the more likely you are to fall short.
And another part is expectation-setting. If the expectation right from the get-go is that EA is about quickly achieving big impact, almost everyone will fail, and think they're just not cut out for it. I wish we had a holiday that was the opposite of Petrov Day, where we honored somebody who went a little bit out of their comfort zone to try and be helpful in a small and simple way. Or whose altruistic endeavor was passionate, costly, yet ineffective, and who tried it anyway, changed their mind, and valued it as a learning experience.
EA organizations and writers are doing us a favor by presenting a set of ideas that speak to us. They can't be responsible for addressing all our needs. That's something we need to figure out for ourselves. EA is often criticized for its "think global" approach. But EA is our local, our global local. How do we help each other to help others?
From one little fish in the sEA to another, this is my advice:
Don't aim for instant success. Aim for 20 years of solid growth. Alice wants to maximize her chance of a 1,000% increase in her altruistic output this year. Zahara's trying to maximize her chance of a 10% increase in her altruistic output. They're likely to do very different things to achieve these goals. Don't be like Alice. Be like Zahara.
Start small, temporary, and obvious. Prefer the known, concrete, solvable problem to the quest for perfection. Yes, running an EA book club or, gosh darn it, picking up trash in the park is a fine EA project to cut our teeth on. If you donate 0% of your income, donating 1% of your income is moving in the right direction. Offer an altruistic service to one person. Interview one person to find out what their needs are.
Ask, don't tell. When entrepreneurs do market research, it's a good idea to avoid telling the customer about the idea. Instead, they should ask the customer about their needs and problems. How do they solve their problems right now? Then they can go back to the Batcave and consider whether their proposed solution would be an improvement.
Let yourself become something, just do it a little more gradually. It's good to keep your options open, but EA can be about slowing and reducing the process of commitment, increasing the ability to turn and bend. It doesn't have to be about hard stops and hairpin turns. It's OK to take a long time to make decisions and figure things out.
Build each other up. Do zoom calls. Ask each other questions. Send a message to a stranger whose blog posts you like. Form relationships, and...
Dec 12, 2021
Reducing long-term risks from malevolent actors by David_Althaus, Tobias_Baumann
01:22:42
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Reducing long-term risks from malevolent actors, published by David_Althaus, Tobias_Baumann on the effective altruism forum.
Summary
Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. (More)
Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. (More)
Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks. (More)
We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future.
The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. (More)
We also explore possible future technologies that may offer unprecedented leverage to mitigate against malevolent traits. (More)
Selecting against psychopathic and sadistic tendencies in genetically enhanced, highly intelligent humans might be particularly important. However, risks of unintended negative consequences must be handled with extreme caution. (More)
We argue that further work on reducing malevolence would be valuable from many moral perspectives and constitutes a promising focus area for longtermist EAs. (More)
What do we mean by malevolence?
Before we make any claims about the causal effects of malevolence, we first need to explain what we mean by the term. To this end, consider some of the arguably most evil humans in history—Hitler, Mao, and Stalin—and the distinct personality traits they seem to have shared.[1]
Stalin repeatedly turned against former comrades and friends (Hershman & Lieb, 1994, ch. 15, ch. 18), gave detailed instructions on how to torture his victims, ordered their loved ones to watch (Glad, 2002, p. 13), and deliberately killed millions through various atrocities. Likewise, millions of people were tortured and murdered under Mao’s rule, often according to his detailed instructions (Dikötter, 2011; 2016; Chang & Halliday, ch. 8, ch. 23, 2007). He also took pleasure in watching acts of torture and imitating what his victims went through (Chang & Halliday, ch. 48, 2007). Hitler was not only responsible for the deaths of millions; he also engaged in personal sadism. On his specific instructions, the plotters of the 1944 assassination attempt were hung by piano wires and their agonizing deaths were filmed (Glad, 2002). According to Albert Speer, “Hitler loved the film and had it shown over and over again” (Toland, 1976, p. 818). Hitler, Mao, and Stalin—and most other dictators—also poured enormous resources into the creation of personality cults, manifesting their colossal narcissism (Dikötter, 2019). (The section Malevolent traits of Hitler, Mao, Stalin, and other dictators in Appendix B provides more evidence.)
Many scientific constructs of human malevolence could be used to summarize the relevant psychological traits shared by Hitler, Mao, Stalin, and other malevolent individuals in positions of power. We focus on the Dark Tetrad traits (Paulhus, 2014) because they seem especially relevant and have been studied extensively by psychologists. The Dark Tetrad comprises the following four traits—the more well-known Dark Triad (Paulhus & Williams, 2002) refers to the first three traits:
Machiavellianism is characterized by manipulating and deceiving others to further one’s own interests, indifference to morality, and obsession with achieving power or wealth.
Narcissism involves an inflated sense of one’s importance and abilities, an excessive need for admiration, a lack of emp...
Dec 12, 2021
Problem areas beyond 80,000 Hours' current priorities by Ardenlk
26:46
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Problem areas beyond 80,000 Hours' current priorities, published by Ardenlk on the effective altruism forum.
Why we wrote this post
At 80,000 Hours we've generally focused on finding the most pressing issues and the best ways to address them.
But even if some issue is 'the most pressing'—in the sense of being the highest impact thing for someone to work on if they could be equally successful at anything—it might easily not be the highest impact thing for many people to work on, because people have various talents, experience, and temperaments.
Moreover, the more people involved in a community, the more reason there is for them to spread out over different issues. There will eventually be diminishing returns as more people work on the same set of issues, and both the value of information and the value of capacity building from exploring more areas will be greater if more people are able to take advantage of that work.
We're also pretty uncertain which problems are the highest impact things to work on—even for people who could work on anything equally successfully.
For example, maybe we should be focusing much more on preventing great power conflict than we have been. After all, the first plausible existential risk to humanity was the creation of the atom bomb; it's easy to imagine that wars could incubate other, even riskier technological advancements.
Or maybe there is some dark horse cause area—like research into surveillance—that will turn out to be way more important for improving the future than we thought.
Perhaps for these reasons, many of our advisors guess that it would be ideal if 5-20% of the effective altruism community's resources were focused on issues that the community hasn't historically been as involved in, such as the ones listed below. We think we're currently well below this fraction, so it's plausible some of these areas might be better for some people to go into right now than our top priority problem areas.
Who is best suited to work on these other issues? Pioneering a new problem area from an effective altruism perspective is challenging, and in some ways harder than working on a priority area, where there is better training and infrastructure. Working on a less-researched problem can require a lot of creativity and critical thinking about how you can best have a positive impact by working on the issue. For example, it likely means working out which career options within the area are the most promising for direct impact, career capital, and exploration value, and then pursuing them even if they differ from what most other people in the area tend to value or focus on. You might even eventually need to 'create your own job' if pre-existing positions in the area don't match your priorities. The ideal person would therefore be self-motivated, creative, and willing to chart the waters for others, as well as have a strong interest or relevant experience in one of these less-explored issues.
We compiled the following lists by combining suggestions from 6 of our advisors with our own ideas, judgement, and research. We were looking for issues that might be very important, especially for improving the long-term future, and which might be currently neglected by people thinking from an effective altruism perspective. If something was suggested twice, we took that as a presumption in favor of including it.
We're very uncertain about the value of working on any one of these problems, but we think it's likely that there are issues on these lists (and especially the first one) that are as pressing as our highest priority problem areas.
What are the pros and cons of working in each of these areas? Which are less tractable than they appear, or more important? Which are already being covered adequately by existing groups we don't know enough about? What potentia...
Dec 12, 2021
The case of the missing cause prioritisation research by weeatquince
24:56
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: The case of the missing cause prioritisation research, published by weeatquince on the effective altruism forum.
Introduction / summary
In 2011 I came across Giving What We Can, which shortly blossomed into effective altruism. Call me a geek if you like but I found it exciting, like really exciting. Here were people thinking super carefully about the most effective ways to have an impact, to create change, to build a better world. Suddenly a boundless opportunity to do vast amounts of good opened up before my eyes. I had only just got involved, and by giving to fund bednets I had already magnified my impact on the world 100 times.
And this was just the beginning. Obviously bednets were not the most effective charitable intervention; they were just the most effective we had found to date – with just a tiny amount of research. Imagine what topics could be explored next: the long-run effects of interventions, economic growth, political change, geopolitics, conflict studies, etc. We could work out how to compare charities across vastly different cause areas, or how to do good beyond donations (some people were already starting to talk about career choices). Some people said we should care about animals (or AI risk); I didn’t buy it (back then), but imagine: we could work out which causes different value sets lead to and the best charities for each.
As far as I could tell the whole field of optimising for impact seemed vastly under-explored. This wasn’t too surprising – most people don’t seem to care that much about doing charitable giving well, and anyway it was only just coming to light how truly bad our intuitions were at making charitable choices (with the early 2000s aid skepticism movement).
Looking back, I was optimistic. Yet in some regards my optimism was well-placed. In terms of spreading ideas, my small group of geeky uni friends went on to create something remarkable, to shift £m if not £bn of donations to better causes, to help 1,000s maybe 100,000s of people make better career decisions. I am no longer surprised if a colleague, Tinder date or complete stranger has heard of effective altruism (EA) or gives money to AMF (a bednet charity).
However, in terms of the research I was so excited about, of developing the field of how to do good, there has been minimal progress. After nearly a decade, bednets and AI research still seem to be at the top of everyone’s Christmas donations wish list. I think I assumed that someone had got this covered, that GPI or FHI or whoever will have answers, or at least progress on cause research sometime soon. But last month, whilst trying to review my career, I decided to look into this topic, and, oh boy, there just appears to be a massive gaping hole. I really don’t think it is happening.
I don’t particularly want to shift my career to do cause prioritisation research right now. So I am writing this piece in the hope that I can either have you, my dear reader, persuade me this work is not of utmost importance, or have me persuade you to do this work (so I don’t have to).
A. The importance of cause prioritisation research
What is your view on the effective altruism community and what it has achieved? What is the single most important idea to come out of the community? Feel free to take a moment to reflect. (Answers on a postcard, or comment).
It seems to me (predictably given the introduction) that far and away the most valuable thing EA has done is the development of and promotion of cause prioritisation as a concept. This idea seems (shockingly and unfortunately) unique to EA.[1] It underpins all EA thinking, guides where EA aligned foundations give and leads to people seriously considering novel causes such as animal welfare or longtermism.
This post mostly focuses on the current progress of and neglectedness of this work ...
Dec 12, 2021
Lessons from my time in Effective Altruism by richard_ngo
12:13
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Lessons from my time in Effective Altruism, published by richard_ngo on the effective altruism forum.
I’ll start with an overview of my personal story, and then try to extract more generalisable lessons. I got involved in EA around the end of 2014, when I arrived at Oxford to study Computer Science and Philosophy. I’d heard about EA a few years earlier via posts on Less Wrong, and so already considered myself EA-adjacent. I attended a few EAGx conferences, became friends with a number of EA student group organisers, and eventually steered towards a career in AI safety, starting with a masters in machine learning at Cambridge in 2017-2018.
I think it’s reasonable to say that, throughout that time, I was confidently wrong (or at least unjustifiably confident) about a lot of things. In particular:
I dismissed arguments about systemic change which I now find persuasive, although I don’t remember how - perhaps by conflating systemic change with standard political advocacy, and arguing that it’s better to pull the rope sideways.
I endorsed earning to give without having considered the scenario which actually happened, of EA getting billions of dollars of funding from large donors. (I don’t know if this possibility would have changed my mind, but I think that not considering it meant my earlier belief was unjustified.)
I was overly optimistic about utilitarianism, even though I was aware of a number of compelling objections; I should have been more careful to identify as "utilitarian-ish" rather than rounding off my beliefs to the most convenient label.
When thinking about getting involved in AI safety, I took for granted a number of arguments which I now think are false, without actually analysing any of them well enough to raise red flags in my mind.
After reading about the talent gap in AI safety, I expected that it would be very easy to get into the field - to the extent that I felt disillusioned when given (very reasonable!) advice, e.g. that it would be useful to get a PhD first.
As it turned out, though, I did have a relatively easy path into working on AI safety - after my masters, I did an internship at FHI, and then worked as a research engineer on DeepMind’s safety team for two years. I learned three important lessons during that period. The first was that, although I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in foundational premises of the field. The second was that the job simply wasn’t a good fit for me (for reasons I’ll discuss later on). The third was that I’d been dramatically underrating “soft skills” such as knowing how to make unusual things happen within bureaucracies.
Due to a combination of these factors, I decided to switch career paths. I’m now a PhD student in philosophy of machine learning at Cambridge, working on understanding advanced AI with reference to the evolution of humans. By now I’ve written a lot about AI safety, including a report which I think is the most comprehensive and up-to-date treatment of existential risk from AGI. I expect to continue working in this broad area after finishing my PhD as well, although I may end up focusing on more general forecasting and futurism at some point.
Lessons
I think this has all worked out well for me, despite my mistakes, but often more because of luck (including the luck of having smart and altruistic friends) than my own decisions. So while I’m not sure how much I would change in hindsight, it’s worth asking what would have been valuable to know in worlds where I wasn’t so lucky. Here are five such things.
1. EA is trying to achieve something very difficult.
A lot of my initial attraction towards EA was because it seemed like a slam-dunk case: here’s an obvious i...
Dec 12, 2021
EA needs consultancies by lukeprog
21:28
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: EA needs consultancies, published by lukeprog on the effective altruism forum.
Problem
EA organizations like Open Phil and CEA could do a lot more if we had access to more analysis and more talent, but for several reasons we can't bring on enough new staff to meet these needs ourselves, e.g. because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations.[1] This also contributes to there being far more talented EAs who want to do EA-motivated work than there are open roles at EA organizations.[2]
A partial solution?
In the public and private sectors, one common solution to this problem is consultancies. They can be think tanks like the National Academies or RAND,[3] government contractors like Booz Allen or General Dynamics, generalist consulting firms like McKinsey or Deloitte, niche consultancies like The Asia Group or Putnam Associates, or other types of service providers such as UARCs or FFRDCs.
At the request of their clients, these consultancies (1) produce decision-relevant analyses, (2) run projects (including building new things), (3) provide ongoing services, and (4) temporarily "loan" their staff to their clients to help with a specific project, provide temporary surge capacity, provide specialized expertise that it doesn't make sense for the client to hire themselves, or fill the ranks of a new administration.[4] (For brevity, I'll call these "analyses," "projects," "ongoing services," and "talent loans," and I'll refer to them collectively as "services.")
This system works because even though demand for these services can fluctuate rapidly at each individual client, in aggregate across many clients there is a steady demand for the consultancies' many full-time employees, and there is plenty of useful but less time-sensitive work for them to do between client requests.
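To make the pooling point concrete, here is a toy simulation in Python (every number is invented for illustration and is not data about any actual EA organization or consultancy). It shows that while each individual client's month-to-month demand is highly volatile, total demand summed across many clients is comparatively steady:

```python
import numpy as np

rng = np.random.default_rng(1)
months, clients = 24, 30

# Each client's monthly demand for consultant-hours: a modest baseline plus
# occasional large, project-driven spikes (all parameters are made up).
baseline = rng.poisson(lam=20, size=(months, clients))
spikes = rng.binomial(n=1, p=0.1, size=(months, clients)) * rng.poisson(lam=200, size=(months, clients))
demand = baseline + spikes

def cv(x):
    """Coefficient of variation: standard deviation relative to the mean."""
    return x.std() / x.mean()

per_client = np.mean([cv(demand[:, i]) for i in range(clients)])
aggregate = cv(demand.sum(axis=1))
print(f"typical per-client volatility: {per_client:.2f}")
print(f"aggregate volatility:          {aggregate:.2f}")
```

With parameters like these, per-client volatility comes out several times higher than aggregate volatility, which is the statistical reason a consultancy can keep a stable roster of full-time staff busy across fluctuating clients.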
Current state of EA consultancies
Some of these services don't require EA talent, and can thus be provided for EA organizations by non-EA firms, e.g. perhaps accounting firms. But what about analyses and services that require EA talent, e.g. because they benefit from lots of context about the EA community, or because they benefit from habits of reasoning and moral intuitions that are far more common in the EA community than elsewhere?[5]
Rethink Priorities (RP) has demonstrated one consultancy model: producing useful analyses specifically requested by EA organizations like Open Philanthropy across a wide range of topics.[6] If their current typical level of analysis quality can be maintained, I would like to see RP scale as quickly as they can. I would also like to see other EAs experiment with this model.[7]
BERI offers another consultancy model, providing services that are difficult or inefficient for clients to handle themselves through other channels (e.g. university administration channels).
There may be a few other examples, but I think not many.[8]
Current demand for these services
All four models require sufficient EA client demand to be sustainable. Fortunately, my guess is that demand for ≥RP-quality analysis from Open Phil alone (but also from a few other EA organizations I spoke to) will outstrip supply for the foreseeable future, even if RP scales as quickly as they can and several RP clones capable of ≥RP-quality analysis are launched in the next couple years.[9] So, I think more EAs should try to launch RP-style "analysis" consultancies now.
However, for EAs to get the other three consultancy models off the ground, they probably need clearer evidence of sufficiently large and steady aggregate demand for those models from EA organizations. At least at first, this probably means that these models will work best for services that demand relatively "generalist" talent, perhaps corresponding ...
|
Dec 12, 2021 |
The Cost of Rejection by Daystar Eld
12:58
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: The Cost of Rejection, published by Daystar Eld on the effective altruism forum.
For those who don't know, I've worked as a therapist for the rationality and EA community for over two years now, first part time, then full time from early 2020. I often get asked about my observations and thoughts on what sorts of issues are particularly prevalent or unique to the community, and while any short answer would oversimplify the myriad issues I've treated, I do feel comfortable saying that "concern with impact" is a theme that runs pretty wide and deep no matter what people come to sessions to talk about.
Seeing how this plays out in various ways has motivated me to write on some aspects of it, starting with this broad generalization: rejection hurts. Specifically, rejection from a job that's considered high impact (which, for many, implicitly includes all jobs with EA organizations) hurts a lot. And I think that hurt has a negative impact that goes beyond the suffering involved.
In addition to basing this post off of my own observations, I’ve written it with the help of/on behalf of clients who have been affected by this, some of whom reviewed and commented on drafts.
I. Premises
There are a few premises that I’m taking for granted that I want to list out in case people disagree with any specific ones:
The EA population is growing, as are EA organizations in number and size.
This seems overall to be a very good thing.
In absolute numbers, EA organizations are growing more slowly than, or at pace with, the overall EA population.
Even with massive increases in funding this seems inevitable, and also probably good? There are many high impact jobs outside of EA orgs that we would want people in the community to have.
(By EA orgs I specifically mean organizations headed by and largely made up of people who self-identify as Effective Altruists, not just those using evidence-and-reason-to-do-the-most-good)
((Also there’s a world in which more people self-identify as EAs and therefore more organizations are considered EA and by that metric it’s bad that EA orgs are growing slower than overall population, but that’s also not what I mean))
Even with more funding being available, there will continue to be many more people applying to EA jobs than getting them.
I don’t have clear numbers for this, but asking around at a few places got me estimates of between ~47 and ~124 applications for specific positions (one of which noted that ~¾ of them were from people clearly within and familiar with the EA community), and hundreds of applications for specific grants (at least once breaking a thousand).
This is good for the organizations and community as a whole, but has bad side effects, such as:
Rejection hurts, and that hurt matters.
For many people, rejection is easily accepted as part of trying new things, shooting for the moon, and challenging oneself to continually grow.
For many others, it can be incredibly demoralizing, sometimes to the point of reducing motivation to continue even trying to do difficult things.
So when I say the hurt matters, I don’t just mean that it’s suffering and we should try to reduce suffering wherever we can. I also mean that as the number of EAs grows faster than the number of positions in EA orgs, the knock-on effects of rejection will slow community and org growth, particularly since:
The number of EAs who receive rejections from EA orgs will likely continue to grow, both absolutely and proportionally.
Hence, this article.
II. Models
There are a number of models I have for all of this that could be totally wrong. I think it’s worth spelling them out a bit more so that people can point to more bits and let me know if they are important, or why they might not be as important as I think they are.
Difficulty in Self Organization
First, I think it’s import...
|
Dec 12, 2021 |
Concerns with ACE's Recent Behavior by Hypatia
19:07
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Concerns with ACE's Recent Behavior, published by Hypatia on the effective altruism forum.
Epistemic Status: I feel pretty confident that the core viewpoint expressed in this post is correct, though I'm less confident in some specific claims. I have not shared a draft of this post with ACE, and so it’s possible I’ve missed important context from their perspective.
EDIT: ACE board member Eric Herboso has responded with his personal take on this situation. He believes some points in this post are wrong or misleading. For example, he disputes my claim that ACE (as an organization) attempted to cancel a conference speaker.
EDIT: Jakub Stencel from Anima International has posted a response. He clarifies a few points and offers some context regarding the CARE conference situation.
Background
In the past year, there has been some concern in EA surrounding the negative impact of “cancel culture”[1] and worsening discourse norms. Back in October, Larks wrote a post criticizing EA Munich's decision to de-platform Robin Hanson. The post was generally well-received, and there have been other posts on the forum discussing potential risks from social-justice oriented discourse norms. For example, see The Importance of Truth-Oriented Discussions in EA and EA considerations regarding increasing political polarization.
I'm writing this post because I think some recent behavior from Animal Charity Evaluators (ACE) is a particularly egregious example of harmful epistemic norms in EA. This behavior includes:
Making (in my view) poorly reasoned statements about anti-racism and encouraging supporters to support or donate to anti-racist causes and organizations of dubious effectiveness
Attempting to cancel an animal rights conference speaker because of his views on Black Lives Matter, withdrawing from that conference because the speaker's presence allegedly made ACE staff feel unsafe, and issuing a public statement supporting its staff and criticizing the conference organizers
Penalizing charities in reviews for having leadership and/or staff who are deemed to be insufficiently progressive on racial equity, and stating it won't offer movement grants funding to those who disagree with its views on diversity, equity, and inclusion[2].
Because I'm worried that this post could hurt my future ability to get a job in EAA, I'm choosing to remain anonymous.
My goal here is to:
a) Describe ACE's behavior in order to raise awareness and foster discussion, since this doesn't seem to have attracted much attention, and
b) Give a few reasons why I think ACE's behavior has been harmful, though I’ll be brief since I think similar points have been better made elsewhere
I also want to be clear that I don't think ACE is the only bad actor here, as other areas of the EAA community have also begun to embrace harmful social-justice derived discourse norms[3]. However, I'm focusing my criticism on ACE here because:
It positions itself as an effective altruism organization, rather than a traditional animal advocacy organization
It is well known and generally respected by the EA community
It occupies a powerful position within the EAA movement, directing millions of dollars in funding each year and conducting a large fraction of the movement's research
And before I get started, I'd also like to make a couple caveats:
I think ACE does a lot of good work, and in spite of this recent behavior, I think its research does a lot to help animals. I'm also not trying to “cancel” ACE or any of its staff. But I do think the behavior outlined in this post is bad enough that ACE supporters should be vocal about their concerns and consider withholding future donations.
I am not suggesting that racism, discrimination, inequality, etc. shouldn't be discussed, or that addressing these important problems isn't EA-worthy. The EA commu...
|
Dec 12, 2021 |
Introducing Probably Good: A New Career Guidance Organization by omernevo, sella
08:49
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Introducing Probably Good: A New Career Guidance Organization, published by omernevo, sella on the effective altruism forum.
We’re excited to announce the launch of Probably Good, a new organization that provides career guidance intended to help people do as much good as possible.
Context
For a while, we have felt that there was a need for a more generalist careers organization than 80,000 Hours — one which is more agnostic regarding different cause areas and might provide a different entry point into the community to people who aren’t a good fit for 80K’s priority areas. Following 80,000 Hours’ post about what they view as gaps in the careers space, we contacted them about how a new organization could effectively fill some of those gaps.
After a few months of planning, asking questions, writing content, and interviewing experts, we’re almost ready to go live (we aim to start putting our content online in 1-2 months) and would love to hear more from the community at large.
How You Can Help
The most important thing we’d like from you is feedback. Please comment on this post, send us personal messages on the Forum, email us (omer at probablygood dot org, sella at probablygood dot org), or set up a conversation with us via videoconference. We would love to receive as much feedback as we can get.
We’re particularly interested in hearing about things that you, personally, would actually read // use // engage with, but would appreciate absolutely any suggestions or feedback.
Probably Good Overview
The most updated version of the overview is here.
Following is the content of the overview at the time this announcement is posted.
Overview
Probably Good is a new organization that provides career guidance intended to help people do as much good as possible. We will start by focusing on online content and a small number of 1:1 consultations. We will later consider other forms of career guidance such as a job board, scaling up the 1:1 consultations, more in-depth research, etc.
Our approach to guidance is focused on how to help each individual maximize their career impact based on their values, personal circumstances, and motivations. This means that we will accommodate a wide range of preferences (for example, different cause areas), as long as they’re consistent with our principles, and try to give guidance in accordance with those preferences.
Therefore, we’ll be looking at a wide range of impactful careers under different views on what to optimize for or under various circumstantial constraints, such as how to maximize impact within specific career paths, within specific geographic regions, through earning to give, or within more specific situations (e.g. making an impact from within a large corporation).
There are other organizations in this space, the most well-known being 80,000 Hours. We think our approach is complementary to 80,000 Hours’ current approach: Their guidance mostly focuses on people aiming to work on their priority problem areas, and we would be able to guide high quality candidates who aren’t. We would direct candidates to 80,000 Hours or other specialized organizations (such as Animal Advocacy Careers) if they’re a better fit for their principles and priority paths.
This characterization of our target audience is very broad; this has two main motivations. First, as part of our experimental approach: we are interested in identifying which cause areas currently have the most unserved demand. By providing preliminary value in multiple areas of expertise, we hope to more efficiently identify where our investment would be most useful, and we may specialize (in a more informed manner) in the future. The second motivation for this is that one possibility for specialization is as a “router” interface - helping individuals make preliminary decisions tailored to the...
|
Dec 12, 2021 |
What Makes Outreach to Progressives Hard by Cullen_OKeefe
14:39
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: What Makes Outreach to Progressives Hard, published by Cullen_OKeefe on the effective altruism forum.
This post summarizes some of my conclusions on things that can make EA outreach to progressives hard, as well as some tentative recommendations on techniques for making such outreach easier.
To be clear, this post does not argue or assume that outreach to progressives is harder than outreach to other political ideologies.[1] Rather, the point of this post is to highlight identifiable, recurring memes/thought patterns that cause progressives to reject or remain skeptical of EA.
My Background (Or, Why I am Qualified to Talk About This)
Nothing in here is based on systematic empirical analysis. It should therefore be treated as highly uncertain. My analysis here draws on two sources:
Reflecting on my personal journey as someone who transitioned from a very social-justice-y worldview to a more EA-aligned one (and therefore understands the former well), who is still solidly left-of-center, and who still retains contacts in the social justice (SJ) world; and
My largely failed attempts as former head of Harvard Law School Effective Altruism to get progressive law students to make very modest giving commitments to GiveWell charities.
Given that the above all took place in America, this post is most relevant to American political dynamics (especially at elite universities), and may very well be inapplicable elsewhere.[2]
Readers may worry that I am being a bit uncharitable here. However, I am not trying to present the best progressive objections to EA (so as to discover the truth), but rather the most common ones (so as to persuade people better). In other words, this post is about marketing and communications, not intellectual criticisms. Since I think many of the common progressive objections to EA are bad, I will attempt to explain them in (what I take to be) their modal or undifferentiated form, not steelman them.
Relatedly, when I say "progressives" through the rest of this post, I am mainly referring to the type of progressive who is skeptical of EA, not all progressives. There are many amazing progressive EAs, who do not see these two ideologies to be in conflict whatsoever. And many non-EA progressives will believe few of these things. Nevertheless, I do think I am pointing to a real set of memes that are common—but definitely not universal—among the American progressive left as of 2021. This is sufficient for understanding the messaging challenges facing EAs within progressive institutions.
Reasons Progressives May Not Like EA
Legacy of Paternalistic International Aid
Many progressives have a strong prior against international aid, especially private international aid. Progressives are steeped in—and react to—stories of paternalistic international aid,[3] much in the way that EAs are steeped in stories of ineffective aid (e.g., Playpumps).
Interestingly, EAs and progressives will often (in fact, almost always) agree on what types of aid are objectionable. However, we tend to take very different lessons away from this.
EAs will generally take away the lesson that we have to be super careful about which interventions to fund, because funding the wrong intervention can be ineffective or actively harmful. We put the interests of our intended beneficiaries first by demanding that charities demonstrably advance their beneficiaries' interests as cost-effectively as possible.
Progressives tend to take a very different lesson from this. They tend to see this legacy as objectionable due to the very nature of the relationship between aid donors and recipients. Roughly, they may believe that the power differential between wealthy donors from the Global North and aid recipients in developing countries makes unobjectionable foreign aid either impossible or, at the very least, extr...
|
Dec 12, 2021 |
Snapshot of a career choice 10 years ago by Julia_Wise
04:57
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Snapshot of a career choice 10 years ago, published by Julia_Wise on the effective altruism forum.
Here’s a little episode from EA’s history, about how much EA career advice has changed over time.
Ten years ago, I wrote an angsty LessWrong post called “Career choice for a utilitarian giver.” (“Effective altruism” and “earning to give” didn’t exist as terms at that point.)
At the time, there was a lot less funding in EA, and the emphasis was very much on donation rather than direct work. Donation was the main way I was hoping to have an impact. I was studying to become a social worker, but I had become really worried that I should try for some higher-earning career so I could donate more.
I thought becoming a psychiatrist was my best career option, since it paid significantly more than the social work career I was on track towards, and I thought I could be good at it. I prioritized donation very highly, and I estimated that going into medicine would allow me to earn enough to save 2500 lives more than I could by staying on the same path. (That number is pretty far wrong, but it’s what I thought at the time.) The other high-earning options I could think of seemed to require quantitative skills I didn’t have, or a level of ambition and drive I didn’t have.
A few people did suggest that I might work on movement building, but for some reason it didn’t seem like a realistic option to me. There weren’t existing projects that I could slot into, and I’m not much of an entrepreneurial type.
The post resulted in me talking to a career advisor from a project that would eventually become 80,000 Hours. The advisor and I talked about how I might switch fields and try to get into medical school. I was trying not to be swayed by the sunk cost of the social work education I had already completed, but I also just really didn’t want to go through medical school and residency.
My strongest memory of that period is lying on the grass at my grad school, feeling awful about not being willing to put the years of work into earning more money. There were lives at stake. I was going to let thousands of people die from malaria because I didn’t want to work hard and go to medical school. I felt horribly guilty. And I also thought horrible guilt was not going to be enough to motivate me through eight years of intense study and residency.
After a few days of crisis, I decided to stop thinking about it all the time. I didn’t exactly make a conclusive decision, but I didn’t take any steps to get into med school, and after a few more weeks it was clear to me that I had no real intention to change paths. So I continued to study social work, with the belief that I was doing something seriously wrong. (To be clear, nobody was telling me I should feel this way, but I wasn’t living up to my own standards.)
In the meantime, I started writing Giving Gladly and continued hosting dinners at my house where people could discuss this kind of thing. The Boston EA group grew out of that.
It didn’t occur to me that I could work for an EA organization without moving cities. But four years later, CEA was looking for someone to do community work in EA and was willing to consider someone remote. Because of my combination of EA organizing, writing, and experience in social work, I turned out to be a good fit. I was surprised that they were willing to hire someone remote. Although I struggled at first to work out what exactly I should be doing, over time it was clear to me that I could be much more useful here than either in social work or earning to give.
I don’t think there’s a clear moral of the story about what this means other people should do, but here are some reflections:
I look back on this and think, wow, we had very little idea how to make good use of a person like me. I wonder how many other square pegs are ou...
|
Dec 12, 2021 |
Lessons from Running Stanford EA and SERI by kuhanj
37:51
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Lessons from Running Stanford EA and SERI, published by kuhanj on the effective altruism forum.
Introduction and summary
Who knew a year of work could turn a 1-organizer EA group into one of the largest EA groups in the world? Especially considering that the person spearheading this growth had little experience running much of anything relevant, and very suboptimal organization skills (It’s me, welp).
I definitely didn’t, when I started running Stanford EA in mid-2019. But I did know it was worth a shot; many universities are absolutely insane opportunities for multiplying your impact--where else can you find such dense clusters of people with the values, drive, talent, time, and career flexibility to dedicate their careers to tackling the world’s most pressing, difficult, large-scale problems?
Stanford EA had effectively one real organizer (Jake McKinnon), and for a few years our only real programming was weekly discussions (which weren't very well attended) and the occasional one-off talk. This was the case until 2019, when Jake started prioritizing succession, spending lots of time talking with a few intrigued-but-not-yet-highly-involved members (like me!) about getting more involved and about the potential impact we could have through community building and for the world more broadly. Within a year, Stanford EA grew to be one of the largest groups in EA, and in the second year since, I’ve been super proud of what our team has accomplished:
Getting SERI (the Stanford Existential Risks Initiative) off the ground (which wouldn’t have been possible without our faculty directors and Open Phil’s support), which has inspired other x-risk initiatives at Cambridge and (coming this year) at Harvard/MIT.
Running all of CEA’s Virtual Programs for their first global iterations, introducing over 500 people to key concepts in EA
Getting ~10 people to go from little knowledge of EA to being super dedicated to pursuing EA-guided careers, and boosting the networks, motivation, and knowledge of 5+ more who were already dedicated (At Stanford, and hopefully much more outside of Stanford since we ran a lot of global programming)
Running a global academic conference, together with other SERI organizers.
Running a large majority of all x-risk/longtermist internships in the world this year, together with other SERI organizers (though this is in part due to FHI being unable to run their internship this summer)
Founding the Stanford Alt. Protein Project, which recently ran a well-received, nearly 100-person class on alternative proteins, and has also set up connections/grants between three Stanford professors and the Good Food Institute to conduct alternative protein research.
Helping several other EA groups get off the ground, and running intro to EA talks and fellowships with them
I say this, not to brag, but because I think it shows several important things:
There’s an incredible amount of low-hanging fruit in this area. The payoffs to doing good community-building work are huge.
See also these posts for additional evidence and discussion.
Maybe you think becoming a hardcore EA is over-determined, i.e. that these people would have ended up just as dedicated anyway? I don’t think so; perhaps half of the hardcore EAs in our group don’t seem to have been over-determined (and this proportion could be even higher with more and better community building).
You (yes, you!) can do similar things. We’re not that special--we’re mostly an unorganized team of students who care a lot.
We still have so much to learn, but I think we got some things right. What’s the sEAcret sauce? I try to distill it in this post, as a mix of mindsets, high-level goals, and tactics.
Here’s the short version/summary:
EA groups have lots of room for growth and improvement, as evidenced by the rapid growth of Stanford EA (despite it still being very suboptimally run and lots o...
|
Dec 12, 2021 |
The motivated reasoning critique of effective altruism by Linch
40:26
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: The motivated reasoning critique of effective altruism, published by Linch on the effective altruism forum.
Epistemic status: Half-baked at best
I have often been skeptical of the value of a) critiques against effective altruism and b) fully general arguments that seem like they can apply to almost anything. However, as I am also a staunch defender of hypocrisy, I will now hypocritically attempt to make the case for applying a fully general critique to effective altruism.
In this post, I will claim that:
Motivated reasoning inhibits our ability to acquire knowledge and form reasoned opinions.
Selection bias in who makes which arguments significantly exacerbates the problem of motivated reasoning
Effective altruism should not be assumed to be above these biases. Moreover, there are strong reasons to believe that incentive structures and institutions in effective altruism exacerbate rather than alleviate these biases.
Observed data and experiences in effective altruism support this theory; they are consistent with an environment where motivated reasoning and selection biases are rampant.
To the extent that these biases (related to motivated reasoning) are real, we should expect the harm done to our ability to form reasoned opinions to also seriously harm the project of doing good.
I will use the example of cost-effectiveness analyses as a springboard for this argument. (I understand that effective altruism, especially outside of global health and development, has largely moved away from explicit expected value calculations and cost-effectiveness analyses. However, I do not believe this change invalidates my argument (see Appendix B)).
I also list a number of tentative ways to counteract motivated reasoning and selection bias in effective altruism:
Encourage and train scientific/general skepticism in EA newcomers.
Try marginally harder to accept newcomers, particularly altruistically motivated ones with extremely high epistemic standards
As a community, fund and socially support external (critical) cost-effectiveness analyses and impact assessments of EA orgs
Within EA orgs, encourage and reward dissent of various forms
Commit to individual rationality and attempts to reduce motivated reasoning
Maybe encourage a greater number of people to apply and seriously consider jobs outside of EA or EA-adjacent orgs
Maintain or improve the current culture of relatively open, frequent, and vigorous debate
Foster a bias towards having open, public discussions of important concepts, strategies, and intellectual advances
Motivated reasoning: What it is, why it’s common, why it matters
By motivated reasoning, I roughly mean what Julia Galef calls “soldier mindset” (H/T Rob Bensinger):
In directionally motivated reasoning, often shortened to "motivated reasoning", we disproportionately put our effort into finding evidence/reasons that support what we wish were true.
Or, from Wikipedia:
emotionally biased reasoning to produce justifications or make decisions that are most desired rather than those that accurately reflect the evidence
I think motivated reasoning is really common in our world. As I said in a recent comment:
My impression from my interactions with approximately every entity outside of EA that perceives itself as directly doing good is that they are not seeking truth, and this systematically corrupts them in important ways. Non-random examples that come to mind include public health (on covid, vaping, nutrition), bioethics, social psychology, developmental econ, climate change, vegan advocacy, religion, the US Democratic party, and diversity/inclusion. Moreover, these problems aren't limited to particular institutions: they are instantiated in academia, activist groups, media, regulatory groups and "mission-oriented" companies.
What does motivated reasoning loo...
|
Dec 12, 2021 |
What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? by Luisa_Rodriguez
01:09:12
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?, published by Luisa_Rodriguez on the effective altruism forum.
Epistemic transparency: Confidence in conclusions varies throughout. I give rough indicators of my confidence at the section level by indicating the amount of time I spent researching/thinking about each particular subtopic, plus a qualitative description of the types of sources I rely on. In general, I consider it a first step toward understanding this threat from civilizational collapse — not a final or decisive one.
Acknowledgements
This research was funded by the Forethought Foundation. It was written by Luisa Rodriguez under the supervision of Arden Koehler and Lewis Dartnell. Thanks to Arden Koehler, Max Daniel, Michael Aird, Matthew van der Merwe, Rob Wiblin, Howie Lempel, and Kit Harris who provided valuable comments. Thanks also to William MacAskill for providing guidance and feedback on the larger project.
Summary
In this post, I explore the probability that if various kinds of catastrophe caused civilizational collapse, this collapse would fairly directly lead to human extinction. I don’t assess the probability of those catastrophes occurring in the first place, the probability they’d lead to indefinite technological stagnation, or the probability that they’d lead to non-extinction existential catastrophes (e.g., unrecoverable dystopias). I hope to address the latter two outcomes in separate posts (forthcoming).
My analysis is organized into case studies: I take three possible catastrophes, defined in terms of the direct damage they would cause, and assess the probability that they would lead to extinction within a generation. There is a lot more someone could do to systematically assess the probability that a catastrophe of some kind would lead to human extinction, and what I’ve written up is certainly not conclusive. But I hope my discussion here can serve as a starting point as well as lay out some of the main considerations and preliminary results.
Note: Throughout this document, I’ll use the following language to express my best guess at the likelihood of the outcomes discussed:
[Table omitted: it maps the qualitative likelihood terms used below (e.g. “exceedingly unlikely”, “very unlikely”) to approximate probability ranges.]
Case 1: I think it’s exceedingly unlikely that humanity would go extinct (within ~a generation) as a direct result of a catastrophe that causes the deaths of 50% of the world’s population, but causes no major infrastructure damage (e.g. damaged roads, destroyed bridges, collapsed buildings, damaged power lines, etc.) or extreme changes in the climate (e.g. cooling). The main reasons for this are:
Although civilization’s critical infrastructure systems (e.g. food, water, power) might collapse, I expect that several billions of people would survive without critical systems (e.g. industrial food, water, and energy systems) by relying on goods already in grocery stores, food stocks, and fresh water sources.
After a period of hoarding and violent conflict over those supplies and other resources, I expect those basic goods would keep a smaller number of remaining survivors alive for somewhere between a year and a decade (which I call the grace period, following Lewis Dartnell’s The Knowledge).
After those supplies ran out, I expect several tens of millions of people to survive indefinitely by hunting, gathering, and practicing subsistence agriculture (having learned during the grace period any necessary skills they didn’t possess already).
Case 2: I think it’s very unlikely that humanity would go extinct as a direct result of a catastrophe that caused the deaths of 90% of the world’s population (leaving 800 million survivors), major infrastructure damage, and severe climate change (e.g. nuclear winter/asteroid impact).
While I expect that millions would starve to death in the wake of something like a globa...
|
Dec 12, 2021 |
Seven things that surprised us in our first year working in policy - Lead Exposure Elimination Project by Jack, LuciaC
11:34
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Seven things that surprised us in our first year working in policy - Lead Exposure Elimination Project, published by Jack, LuciaC on the effective altruism forum.
Following the interest in our post announcing the launch of Lead Exposure Elimination Project (LEEP) eight months ago, we are now sharing an update unpacking seven findings that have surprised us in our experiences so far. We hope these will be relevant to others interested in policy change or starting a new project or charity.
For those who are not familiar with LEEP, we are a Charity Entrepreneurship-incubated NGO advocating for lead paint regulation in countries with large and growing burdens of lead poisoning from paint. The reasons we focus on reducing lead exposure are outlined in our introduction post. In short, we believe the problem to be neglected and tractable, and that the intervention has the potential to improve lives with a high level of cost-effectiveness.
1. The speed of progress with government has been less of a limiting factor than expected
In our first target country, Malawi, we had a number of uncertainties about how quickly progress could be made. We were unsure if we would be able to get in touch with the relevant government officials, if they would be willing to engage, and whether our advocacy would lead to action in a reasonable timeframe. We found that stakeholders were far more willing to engage than we had expected. Even without existing connections or introductions, government officials replied to our emails and agreed to meetings. Beyond getting initial meetings, the tractability of achieving change was also higher than expected. After we carried out a study demonstrating high levels of lead in paint, the Malawi Bureau of Standards agreed to begin monitoring and enforcement for lead content of paint (using pre-existing but unimplemented standards), and have since confirmed that they have begun. This change occurred within three months of beginning advocacy in Malawi - significantly faster than our expected timeframe of 1-2 years.
We also found a surprising willingness to cooperate from the local paint industry. Since we presented our findings of lead in paint and the benefits of switching to non-lead to the paint manufacturers, they have engaged with us and, with our support, are identifying non-lead alternative ingredients. We will be carrying out a repeat paint study in a few months to measure how this progress relates to levels of lead paint available on the market.
There are a number of factors that we think contributed to this faster traction and high level of stakeholder engagement. One is the new country-specific data that we were able to generate through a small paint sampling study. We believe that this data provided an effective opener to communications and also convincingly demonstrated that lead paint is a problem in Malawi. Our government contacts confirmed that this Malawi-specific evidence was key for their decision to take action. Generating new country-specific data through small-scale local studies seems to be an effective advocacy strategy that may be cross-applicable to other areas of policy.
Other reasons why stakeholder engagement has been greater than expected might be specific to lead paint regulation advocacy. For example, lead paint regulation is not particularly expensive for governments to implement or for paint manufacturers to comply with, reducing the barrier to action for both stakeholder groups. Also, there is a strong and established evidence-base for the harms of childhood lead poisoning, increasing consensus on the issue. As well as this, there is a growing awareness of a global movement towards lead paint regulation, including examples of neighbouring countries and the support of respected international bodies such as the WHO. This may facilitate ...
|
Dec 12, 2021 |
Why I'm concerned about Giving Green by alexrjl
22:10
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Why I'm concerned about Giving Green, published by alexrjl on the effective altruism forum.
Disclosure
I am a forecaster, and occasional independent researcher. I also work in a volunteer capacity for SoGive, which has included some analysis of the climate space in order to provide advice to some individual donors interested in this area. This work has involved ~20 hours of conference calls over the last year with donors and organisations, one of which was the Clean Air Task Force, although for the last few months my primary focus has been internal work on moral weights. I began the research for this piece in a personal capacity, and the opinions below are my own, not those of SoGive.
I received input on some early drafts, for which I am extremely grateful, from Sanjay Joshi (SoGive’s founder), as well as Aaron Gertler and Linch Zhang; however, I again want to emphasise that the opinions expressed, and especially any mistakes in the below, are mine alone. I'm also very grateful to Giving Green for taking the time to have a call with me about my thinking here. I provided a copy of the post to them in advance, and they have indicated that they'll be providing a response to the below.
Overview
Big potential
I think that Giving Green has the potential to be incredibly impactful, not just on the climate but also on the EA/Effective Giving communities. Many people, especially young people, are extremely concerned about climate change, and very excited to act to prevent it. Meta-analysis of climate charities has the chance to therefore have large first-order effects, by redirecting donations to the most effective organisations within the climate space. It also, if done well, has the potential to have large second-order effects, by introducing people to the huge multiplier on their impact that cost-effectiveness research can have, and through that to the wider EA movement. I note that at least one current CEA staff member took this exact path into EA. With this said, I am concerned about some aspects of Giving Green in its current form, and having discussed these concerns with them, felt it was worth publishing the below.
Concerns about research quality
Giving Green’s evaluation process involves substantial evidence collection and qualitative evaluation, but eschews quantitative modelling in favour of a combination of metrics which do not have a simple relationship to cost-effectiveness. In three cases, detailed below, I have reservations about the strength of Giving Green’s recommendations. Giving Green also currently recommends the Clean Air Task Force, which I enthusiastically endorse but which Founders Pledge had already identified as promising before Giving Green’s founding, and Tradewater, which I have not evaluated. What this boils down to is that in every case where I investigated an original recommendation made by Giving Green, I was concerned by the analysis to the point where I could not agree with the recommendation.
Despite the unusual approach, especially compared to standard EA practice, the research and methodology are presented by Giving Green in a way which implies a level of concreteness comparable to major existing charity evaluators such as GiveWell. As well as the quantitative aspect mentioned above, major evaluators are notable for the high degree of rigour in their modelling, with arguments being carefully connected to concrete outcomes, and explicit consideration of downside risks and ways that they could be wrong. One important part of the more usual approach is that it makes research much easier to critique, as causal reasoning is laid out explicitly, and key assumptions are identified and quantified. When research lacks this style, not only does the potential for error increase, but it becomes much more difficult and time-intensive to critique, meaning errors...
|
Dec 12, 2021 |
Good news on climate change by John G. Halstead, jackva
23:00
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Good news on climate change, published by John G. Halstead, jackva on the effective altruism forum.
This post is about how much warming we should expect on current policy and assuming emissions stop at 2100. We argue the risk of extreme warming (>6 degrees) conditional on these assumptions now looks much lower than it once did.
Crucially, the point of this post is about the direction of an update, not an absolute assessment of risk -- indeed, the two of us disagree a fair amount on the absolute risk, but strongly agree on the direction and relative magnitude of the update.
The damage of climate change depends on three things:
How much we emit
The warming we get, conditional on emissions
The impact of a given level of warming.
The late and truly great economist Martin Weitzman argued for many years that the catastrophic risk from climate change was greater than commonly recognised. In 2015, Weitzman, along with Gernot Wagner, an economist now at New York University, released Climate Shock, which argued that the chance of more than 6 degrees of warming is worryingly high. Using the International Energy Agency’s estimate of the most likely level of emissions on current policy, and the IPCC’s estimate of climate sensitivity, Wagner and Weitzman estimated that the chance of more than 6 degrees of warming is 11%.[1]
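To illustrate the structure of such an estimate (this is a minimal sketch, not Wagner and Weitzman's actual model; every distribution and parameter below is an invented placeholder), one can combine an emissions-driven CO2 concentration distribution with a right-skewed climate-sensitivity distribution and read off the probability of exceeding 6 degrees:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Placeholder distribution for the ratio of atmospheric CO2 in 2100 to the
# pre-industrial level under current policy (illustrative, not the IEA figure).
co2_ratio = rng.normal(loc=2.0, scale=0.3, size=n)

# Placeholder climate-sensitivity distribution: equilibrium warming (deg C) per
# doubling of CO2. A right-skewed lognormal stands in for fat-tailed estimates;
# these parameters are invented, not the IPCC's.
sensitivity = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)

# Equilibrium warming scales roughly with log2 of the concentration ratio.
warming = sensitivity * np.log2(np.clip(co2_ratio, 1.0, None))

print(f"P(warming > 6 deg C) ~ {np.mean(warming > 6.0):.1%}")
```

The headline probability such a sketch produces depends entirely on the placeholder inputs; the point is only that a fat right tail in climate sensitivity is what drives estimates like the 11% figure.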
In recent years, the chance of more than 6 degrees of warming on current policy has fallen quite substantially for two reasons:
Emissions now look likely to be lower
The right tails of climate sensitivity have become thinner
1. Good news on emissions
For a long time the climate policy and impacts community was focused on one possible ‘business as usual’ emissions scenario known as Representative Concentration Pathway 8.5 (RCP8.5), a worst case against which climate action would be compared. Each representative concentration pathway can be paired with a socioeconomic story of how the world will develop in key areas such as population, income, inequality and education. These are known as ‘shared socioeconomic pathways’ (SSPs).
The latest IPCC report outlines five shared socioeconomic pathways. The only one that is compatible with RCP8.5 is a high economic growth fossil fuel-powered future called Shared Socioeconomic Pathway 5 (SSP5). In combination, SSP5 and RCP8.5 is called ‘SSP5-8.5’. On SSP5-8.5, we would emit a further 2.2 trillion tonnes of carbon by 2100, on top of the 0.65 trillion tonnes we have emitted so far.[2] For reference, we currently put about 10 billion tonnes of carbon into the atmosphere from fossil fuel burning and industry.[3] The other emissions pathways are shown below:
IPCC, Climate Change 2021: The Physical Science Basis, Sixth Assessment Report, Summary for Policymakers: Figure SPM.4
However, for a variety of reasons, SSP5-RCP8.5 now looks increasingly unlikely as a ‘business as usual’ emissions pathway. There are several reasons for this. Firstly, the costs of renewables and batteries have declined extremely quickly. Historically, models have been too pessimistic on cost declines for solar, wind and batteries: out of nearly 3,000 Integrated Assessment Models, none projected that solar investment costs (different to the levelised costs shown below) would decline by more than 6% per year between 2010 and 2020. In fact, they declined by 15% per year.[4]
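As a quick back-of-the-envelope check on how much that gap compounds over the decade (using only the two rates quoted above):

```python
# Cumulative cost decline over 2010-2020 implied by each annual rate of decline.
for annual_decline in (0.06, 0.15):
    remaining = (1 - annual_decline) ** 10
    print(f"{annual_decline:.0%}/year for 10 years -> {1 - remaining:.0%} total decline")
# Roughly a 46% total decline at 6%/year versus about 80% at 15%/year.
```

So the models' most optimistic assumption still implied solar costs only roughly halving over the decade, while in reality they fell by around four fifths.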
This means that renewables will play an increasing role in energy supply in the future. In part for this reason, energy systems models now suggest that high fossil fuel futures are much less likely. For example, the chart below shows emissions on current policies and pledged policies, according to the International Energy Agency.
Source: Hausfather and Peters, ‘Emissions – the ‘business as usual’ story is misleading’, Nature, 2020.
The chart above from Hausfather and Peters (2020) relies...
|
Dec 12, 2021 |
Small and Vulnerable by deluks917
06:28
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Small and Vulnerable, published by deluks917 on the effective altruism forum.
Anyone who is dedicating the majority of their time or money to Effective Altruism needs to ask themselves why. Why not focus on enjoying life and spending your time doing what you love most? Here is my answer:
I have a twin sister but neither of us had many other friends growing up. From second to fifth grade we had none. From sixth to eighth we had one friend. As you might guess I was bullied quite badly. Multiple teachers contributed to this. Despite having no friends my parents wanted us to be normal. They pressured me to play sports with the boys in the neighborhood. I was unable to play with an acceptable level of skill and was not invited to the games anyway. But we were still forced to go 'play outside' after school. We had to find ways to kill time. Often we literally rode our bicycles in a circle in a parking lot. We were forced to 'play outside' for hours most days and even longer on weekends. I was not even allowed to bring a book outside though sometimes I would hide them outside at night and find them the next day. Until high school, I had no access to the internet. After dinner, I could watch TV, read and play video games. These were the main sources of joy in my childhood.
Amazingly my mom made fun of her children for being weirdos. My sister used to face a wall and stim with her fingers when she was overwhelmed. For some reason, my mom interpreted this as 'OCD'. So she made up a song titled 'OCD! Do you mean me?' It had several verses! This is just one, especially insane, example.
My dad liked to 'slap me around'. He usually did not hit me very hard but he would slap me in the face all the time. He also loved to call me 'boy' instead of my name. He claims he got this idea from Tarzan. It took me years to stop flinching when people raised their hands or put them anywhere near my face. I have struggled with gender since childhood. My parents did not tolerate even minor gender nonconformity like growing my hair out. I would get hit reasonably hard if I insisted on something as 'extreme' as crossing my legs 'like a girl' in public. I recently started HRT and already feel much better. My family is a lot of the reason I delayed transitioning.
If you go by the checklist I have quite severe ADHD. 'Very often' seemed like an understatement for most of the questions. My ADHD was untreated until recently. I could not focus on school or homework so trying to do my homework took way too much time. I was always in trouble in school and considered a very bad student. It definitely hurts when authority figures constantly, and often explicitly, treat you like a fuck up and a failure who can't be trusted. But looking back it seems amazing I was considered such a bad student. I love most of the subjects you study in school! When I finally got access to the internet I spent hours per day reading Wikipedia articles. I still spend a lot of time listening to lectures on all sorts of subjects, especially history. Why were people so cruel to a little child who wanted to learn things?
Luckily things improved in high school. Once I had more freedom and distance from my parents, my social skills improved a huge amount. In high school, I finally had internet access which helped an enormous amount. My parents finally connected our computer at home to the internet because they thought my sister and I needed it for school. I also had access to the computers in the high school library. By my junior year in high school, I was not really unpopular. Ironically my parents' overbearing pressure to be a 'normal kid' probably prevented me from having a social life until I got a little independence. Sadly I was still constantly in trouble in school throughout my high school years.
The abuse at home was very bad. But,...
|
Dec 12, 2021 |
Some quick notes on "effective altruism" by Jonas Vollmer
05:22
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Some quick notes on "effective altruism", published by Jonas Vollmer on the effective altruism forum.
Introduction
I have some concerns about the "effective altruism" branding of the community.
I recently posted them as a comment, and some people encouraged me to share them as a full post instead, which I'm now doing.
I think this conversation is most likely not particularly useful or important to have right now, but there's some small chance it could be pretty valuable.
This post is based on my personal intuition and anecdotal evidence. I would put more trust in well-run surveys of the right kinds of people or other more reliable sources of evidence.
"Effective Altruism" sounds self-congratulatory and arrogant to some people:
Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies, e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism don't sound as arrogant in this way to me.
Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals in relevant areas. E.g., during the Zurich ballot initiative, officials at the city of Zurich asked me, unprompted, why I consider them "ineffective", indicating that the EA label basically implied to them that they were doing a bad job. I've also heard other professionals in different contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions.
"Effective altruism" sounds like a strong identity:
Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as both a moral commitment, a set of ideas, and a community. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se.
Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may play a role for several individuals.
In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it's often unclear whether one identifies as part of the community or agrees with its ideas.
Some further, less important points:
"Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.
A lot of people don't know what "altruism" means.
"Effective altruism" often sounds pretty awkward when translated to other languages. That said, this issue also affects a lot of the alternatives.
We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes leads to confusion among people who first hear about the term.
Taking action on EA issues doesn't strictly require altruism. While I think it’s important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if th...
|
Dec 12, 2021 |
All Possible Views About Humanity's Future Are Wild by Holden Karnofsky
15:59
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: All Possible Views About Humanity's Future Are Wild, published by Holden Karnofsky on the effective altruism forum.
This is a linkpost for/
Audio version is here
Summary:
In a series of posts starting with this one, I'm going to argue that the 21st century could see our civilization develop technologies allowing rapid expansion throughout our currently-empty galaxy. And thus, that this century could determine the entire future of the galaxy for tens of billions of years, or more.
This view seems "wild": we should be doing a double take at any view that we live in such a special time. I illustrate this with a timeline of the galaxy. (On a personal level, this "wildness" is probably the single biggest reason I was skeptical for many years of the arguments presented in this series. Such claims about the significance of the times we live in seem "wild" enough to be suspicious.)
But I don't think it's really possible to hold a non-"wild" view on this topic. I discuss alternatives to my view: a "conservative" view that thinks the technologies I'm describing are possible, but will take much longer than I think, and a "skeptical" view that thinks galaxy-scale expansion will never happen. Each of these views seems "wild" in its own way.
Ultimately, as hinted at by the Fermi paradox, it seems that our species is simply in a wild situation.
Before I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is "wild." I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes.
My view
This is the first in a series of pieces about the hypothesis that we live in the most important century for humanity.
In this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a "technologically mature"[1] civilization. That would mean that:
We'd be able to start sending spacecraft throughout the galaxy and beyond.
These spacecraft could mine materials, build robots and computers, and construct very robust, long-lasting settlements on other planets, harnessing solar power from stars and supporting huge numbers of people (and/or our "digital descendants").
See Eternity in Six Hours for a fascinating and short, though technical, discussion of what this might require.
I'll also argue in a future piece that there is a chance of "value lock-in" here: whoever is running the process of space expansion might be able to determine what sorts of people are in charge of the settlements and what sorts of societal values they have, in a way that is stable for many billions of years.[2] If that ends up happening, you might think of the story of our galaxy[3] like this. I've marked major milestones along the way from "no life" to "intelligent life that builds its own computers and travels through space."
Thanks to Ludwig Schubert for the visualization. Many dates are highly approximate and/or judgment-prone and/or just pulled from Wikipedia (sources here), but plausible changes wouldn't change the big picture. The ~1.4 billion years to complete space expansion is based on the distance to the outer edge of the Milky Way, divided by the speed of a fast existing human-made spaceship (details in spreadsheet just linked); IMO this is likely to be a massive overestimate of how long it takes to expand throughout the whole galaxy. See footnote for why I didn't use a logarithmic axis.[4]
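As a sanity check on that figure, here is a rough reproduction of the calculation under assumed inputs of my own (the post's spreadsheet may use different numbers): the distance to the galaxy's outer edge divided by the speed of a Voyager-class probe.

```python
# Rough reproduction of the ~1.4 billion year figure (assumed inputs, not the post's spreadsheet values).
distance_ly = 80_000          # assumed distance to the outer edge of the Milky Way, in light-years
probe_speed_km_s = 17         # roughly Voyager 1's speed leaving the solar system
light_speed_km_s = 299_792

speed_as_fraction_of_c = probe_speed_km_s / light_speed_km_s
travel_time_years = distance_ly / speed_as_fraction_of_c
print(f"{travel_time_years:.2e} years")   # ~1.4e9, i.e. on the order of 1.4 billion years
```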
??? That's crazy! According to me, there's a dec...
|
Dec 12, 2021 |
Some personal thoughts on EA and systemic change by CarlShulman
10:32
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Some personal thoughts on EA and systemic change, published by CarlShulman on the effective altruism forum.
DavidNash requested that I repost my comment below, on what to make of discussions about EA neglecting systemic change, as a top-level post. These are my off-the-cuff thoughts and no one else's. In summary (to be unpacked below):
Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times
The great majority of critics of EA invoking systemic change fail to present the simple sort of quantitative analysis given above for the interventions they claim excel, and frequently when such analysis is done the intervention does not look competitive by EA lights
Nonetheless, my view is that historical data do show that the most efficient political/advocacy spending, particularly aiming at candidates and issues selected with an eye to global poverty or the long term, does have higher returns than GiveWell top charities (even ignoring nonhumans and future generations or future technologies); one can read the systemic change critique as a position in intramural debates among EAs about the degree to which one should focus on highly linear, giving-as-consumption-type interventions
EAs who are willing to consider riskier and less linear interventions are mostly already pursuing fairly dramatic systemic change, in areas with budgets that are small relative to political spending (unlike foreign aid)
As funding expands in focused EA priority issues, diminishing returns there will eventually equalize with returns for broader political spending, and activity in the latter area could increase enormously: since broad political impact per dollar is fairly flat over a large range, political spending should be either a very small or a very large portion of EA activity
In full:
Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times
Estimates based on empirical data about the impact of votes and the effectiveness of lobbying and campaign spending work out without any need for fancy decision theory or assumptions of increasing marginal returns
E.g. Andrew Gelman's data on US Presidential elections shows that, given polling and forecasting uncertainty, a marginal vote in a swing state averages something like a 1 in 10 million chance of swinging an election over multiple elections (and one can save to make campaign contributions)
80,000 Hours has a page (there have been a number of other such posts and discussion, note that 'worth voting' and 'worth buying a vote through campaign spending or GOTV' are two quite different thresholds) discussing this data and approaches to valuing differences in political outcomes between candidates; these suggest that a swing state vote might be worth tens of thousands of dollars of income to rich country citizens
But if one thinks that charities like AMF do 100x or more good per dollar by saving the lives of the global poor so cheaply, then these figures are compatible with a vote being worth only a few hundred dollars (see the rough arithmetic sketch at the end of this list)
If one thinks that some other interventions, such as gene drives for malaria eradication, animal advocacy, or existential risk interventions are much more cost-effective than AMF, that would lower the value further except insofar as one could identify strong variation in more highly-valued effects
Experimental data on the effects of campaign contributions suggest a cost of a few hundred dollars per marginal vote (see, e.g. Gerber's work on GOTV experiments)
Prediction markets and polling models give a good basis for assessing the chance of billions of dollars of campaign funds swinging an election
If there are increasing returns to scale from large-scale spending, small donors can convert their funds into a smal...
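To make the arithmetic in the bullets above concrete, here is a rough back-of-the-envelope sketch. The 1-in-10-million probability comes from the post; the dollar value of the difference between electoral outcomes is a hypothetical figure of my own, chosen only to show how "tens of thousands of dollars to rich-country citizens" becomes "a few hundred dollars" once a ~100x multiplier for AMF-style giving is applied.

```python
# Rough sketch of the vote-value arithmetic (illustrative numbers, not Shulman's exact figures).
p_swing = 1 / 10_000_000        # chance a marginal swing-state vote decides the election
outcome_value_usd = 300e9       # hypothetical value difference between candidates' policies

vote_value_rich_country = p_swing * outcome_value_usd
print(vote_value_rich_country)  # 30000.0 -> "tens of thousands of dollars" of rich-country income

amf_multiplier = 100            # if AMF does ~100x more good per dollar than rich-country consumption
print(vote_value_rich_country / amf_multiplier)  # 300.0 -> "a few hundred dollars" in AMF-equivalent terms
```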
|
Dec 12, 2021 |
Reality is often underpowered by Gregory_Lewis
08:27
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Reality is often underpowered, published by Gregory_Lewis on the effective altruism forum.
Introduction
When I worked as a doctor, we had a lecture by a paediatric haematologist, on a condition called Acute Lymphoblastic Leukaemia. I remember being impressed that very large proportions of patients were being offered trials randomising them between different treatment regimens, currently in clinical equipoise, to establish which had the edge. At the time, one of the areas of interest was, given the disease tended to have a good prognosis, whether one could reduce treatment intensity to reduce the long term side-effects of the treatment whilst not adversely affecting survival.
On a later rotation I worked in adult medicine, and one of the patients admitted to my team had an extremely rare cancer,[1] with a (recognised) incidence of a handful of cases worldwide per year. It happened the world authority on this condition worked as a professor of medicine in London, and she came down to see them. She explained to me that treatment for this disease was almost entirely based on first principles, informed by a smattering of case reports. The disease unfortunately had a bleak prognosis, although she was uncertain whether this was because it was an aggressive cancer to which current medical science has no answer, or whether there was an effective treatment out there if only it could be found.
I aver that many problems EA concerns itself with are closer to the second story than the first. That in many cases, sufficient data is not only absent in practice but impossible to obtain in principle. Reality is often underpowered for us to wring the answers from it we desire.
Big units of analysis, small samples
The main driver of this problem for ‘EA topics’ is that the outcomes of interest have units of analysis for which the whole population (let alone any sample from it) is small-n: e.g. outcomes at the level of a whole company, or a whole state, or whole populations. For these big unit of analysis/small sample problems, RCTs face formidable in-principle challenges:
Even if by magic you could get (e.g.) all countries on earth to agree to randomly allocate themselves to policy X or Y, this is merely a sample size of ~200. If you’re looking at companies relevant to cage-free campaigns, or administrative regions within a given state, this can easily fall another order of magnitude.
These units of analysis tend to be highly heterogeneous, almost certainly in ways that affect the outcome of interest. Although the key ‘selling point’ of the RCT is that it implicitly controls for all confounders (even ones you don’t know about), this statistical control is a (convex) function of sample size, and isn’t hugely impressive at ~100 per arm: it is well within the realms of possibility for the randomisation to happen to give arms with unbalanced allocation of any given confounding factor (see the power sketch after this list).
‘Roughly’ (in expectation) balanced intervention arms are unlikely to be good enough in cases where the intervention is expected to have much less effect on the outcome than other factors (e.g. wealth, education, size, whatever), thus an effect size that favours one arm or the other can be alternatively attributed to one of these.
Supplementing this raw randomisation by explicitly controlling for confounders you suspect (cf. block randomisation, propensity matching, etc.) has limited value when you don’t know all the factors which plausibly ‘swamp’ the likely intervention effect (i.e. you don’t have a good predictive model for the outcome but-for the intervention tested). In any case, these methods tend to trade off against the already scarce resource of sample size.
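As a quick illustration of how little power such a study has, here is a minimal sketch using statsmodels. The numbers are my own assumptions: an intervention worth 0.1 standard deviations against a noisy, heterogeneous outcome, with roughly 100 units per arm (about all the countries on earth split in two).

```python
# Minimal power calculation for a "whole countries as units" RCT (illustrative assumptions only).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.1   # assumed intervention effect, in standard deviations (Cohen's d)
n_per_arm = 100     # ~all countries on earth, split into two arms

power = analysis.power(effect_size=effect_size, nobs1=n_per_arm, ratio=1.0, alpha=0.05)
print(f"Power with n={n_per_arm} per arm: {power:.2f}")   # roughly 0.11

n_needed = analysis.solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
print(f"n per arm needed for 80% power: {n_needed:.0f}")  # roughly 1570 per arm
```

In other words, even with every country on earth enrolled, a realistic effect size would usually go undetected; reality does not supply enough units.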
These ‘small sample’ problems aren’t peculiar to RCTs, but endemic to all other empirical approaches. The wealth of econometric and quasi-experimental methods (e.g. IVs, ...
|
Dec 12, 2021 |
Big List of Cause Candidates by NunoSempere
58:57
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Big List of Cause Candidates, published by NunoSempere on the effective altruism forum.
Many thanks to Ozzie Gooen for suggesting this project, to Marta Krzeminska for editing help and to Michael Aird and others for various comments.
In the last few years, there have been many dozens of posts about potential new EA cause areas, causes and interventions. Searching for new causes seems like a worthy endeavour, but on their own, the submissions can be quite scattered and chaotic. Collecting and categorizing these cause candidates seemed like a clear next step.
We —Ozzie Gooen of the Quantified Uncertainty Research Institute and I— might later be interested in expanding this work and eventually using it for forecasting —e.g., predicting whether each candidate would still seem promising after much more rigorous research. At the same time, we feel like this list itself can be useful already.
Further, as I kept adding more and more cause candidates, I realized that aiming for completeness was a fool's errand, or at least too big a task for an individual working alone.
Below is my current list with a simple categorization, as well as an occasional short summary which paraphrases or quotes key points from the posts linked. See the last appendix for some notes on nomenclature. If there are any entries I missed (and there will be), please say so in the comments and I'll add them. I also created the "Cause Candidates" tag on the EA Forum and tagged all of the listed posts there. They are also available in a Google Sheet.
Animal Welfare and Suffering
Pointer: This cause has its various EA Forum tags (farmed animal welfare, wild animal welfare, meat alternatives), where more cause candidates can be found. Brian Tomasik et al.'s Essays on Reducing Suffering are also a gift that keeps on giving for this and other cause areas.
1. Wild Animal Suffering Caused by Fires
Related categories: Politics: System change, targeted change, policy reform.
Wild animal suffering caused by fires and ways to prevent it: a noncontroversial intervention (@Animal_Ethics)
An Animal Ethics grantee designed a protocol aimed at helping animals during and after fires. The protocol contains specific suggestions, but the path to turning these into policy is unclear.
2. Invertebrate Welfare
Invertebrate Welfare Cause Profile (@Jason Schukraft)
The scale of direct human impact on invertebrates (@abrahamrowe)
"In this post, we apply the standard importance-neglectedness-tractability framework to invertebrate welfare to determine, as best we can, whether this is a cause area that is worth prioritizing. We conclude that it is."
Note: See also Brian Tomasik's Do Bugs Feel Pain.
3. Humane Pesticides
Humane Pesticides as the Most Marginally Effective Cause (@JeffMJordan)
Improving Pest Management for Wild Insect Welfare (@Wild_Animal_Initiative)
The post argues that insects experience consciousness, and that there are a lot of them, so we should give them significant moral weight (comments contain a discussion on this point). The post goes on to recommend subsidization of less-painful pesticides, an idea initially suggested by Brian Tomasik, who "estimates this intervention to cost one dollar per 250,000 less-painful deaths." The second post goes into much more depth.
4. Diet Change
Is promoting veganism neglected and if so what is the most effective way of promoting it? (@samuel072)
Animal Equality showed that advocating for diet change works. But is it cost-effective? (@Peter_Hurford, @Marcus_A_Davis)
Cost-effectiveness analysis of a program promoting a vegan diet (@nadavb, @sella, @GidonKadosh, @MorHanany)
Measuring Change in Diet for Animal Advocacy (@Jacob_Peacock)
The first post is a stub. The second post looks at a reasonably high-powered study on individual outreach. It concludes that, based on reasonable assum...
|
Dec 12, 2021 |
Killing the ants by Joe_Carlsmith
12:17
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Killing the ants, published by Joe_Carlsmith on the effective altruism forum.
(Cross-posted from Hands and Cities)
I. The ants
Recently, my housemates and I started seeing a lot of ants in the house. They marched in long lines along the edges of the basement and the bathrooms. A few showed up in the drawers. My girlfriend put out some red pepper, which was supposed to deter them from one of their routes, but they cut a line straight through.
We thought maybe they were sheltering from the rain, which had become more frequent. We had had ants before; we’d talked, then, about whether to do something about it; but we hadn’t, and eventually they disappeared. We thought maybe this would happen again.
It didn’t. Over weeks, the problem got worse. There were hundreds of ants in the upstairs bathroom. They started to show up much more in the kitchen. We threw out various things, sealed various things. They showed up in beds. Kitchen drawers were now ant territory.
We talked about what to do. We were reluctant to kill them, which was part of why we had waited. But a number of people in the house felt that the situation was getting out of hand, and that we were on track for something much harder to control. I thought of a house I had stayed at, where the ants swarmed over the coffee maker every morning, and efforts (I’m not sure how extreme) to get rid of them had failed.
The most effective killing method is to poison the colony as a whole. The ants are lured into a sugary liquid that also contains borax, which is poisonous for ants, but relatively safe for humans. They then track the poison back to the colony. We talked about how bad this would be for the ants — and in particular, the fact that the poison is slow-acting. Crushing them directly, we thought, might be more humane; though it would also be more time-consuming, and less likely to solve the problem.
Eventually, though without resolving all disagreements amongst housemates, we put out the poison baits (my girlfriend also tried cloves, coffee grounds, and lemon juice around that time, as well as luring the ants to some peanut butter and honey outside, away from the house). The ants in the kitchen disappeared. There are still a few in the upstairs bathroom; and inside the clear plastic baits, you can see ant bodies, in the syrup.
II. Owning it
At one point, on the topic of the ants, I said, in passing, something like: “may we be forgiven.” My girlfriend responded seriously, saying something like: “We won’t be. There’s no forgiveness.”
Something about her response made me realize that the choice to kill the ants had had, for me, a quality of unreality. I had exerted some limited advocacy, in the direction of some hazy set of norms, but with no real sense of responsibility for what I was doing. There was something performative and disengaged about it — a type of disengagement in which one, for example, “feels bad” about killing the ants — and the question of whether we were doing the “right thing” was part of that. I was looking at the concepts. I was hoping for some kind of conformity, some kind of “pass” from the moral “authorities.” But I wasn’t looking down my arm, at the world I was creating, and the ants that were dying as a result. I wasn’t owning it.
Regardless of whether our choice was right or wrong (I’m still not sure), we chose for these ants to die. We killed them. What we got, when we chose, was not a “good job” or “bad job” from the universe: what we got was this world, and not another. And this world was right there, in front of me, whether we should be “forgiven” or no.
Not owning the choice was made easier, I think, by the fact that the death of the ants would mostly occur offscreen; outside of my “zone”, and not, directly, by my own hand. Indeed, I had declined to crush the ants myself, and I hadn’t bee...
|
Dec 12, 2021 |
Cultured meat predictions were overly optimistic by Neil_Dullaghan
07:04
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Cultured meat predictions were overly optimistic, published by Neil_Dullaghan on the effective altruism forum.
In a 2021 MotherJones article, Sinduja Rangarajan, Tom Philpott, Allison Esperanza, and Alexis Madrigal compiled and visualized 186 publicly available predictions about timelines for cultured meat (made primarily by cultured meat companies and a handful of researchers). I added 11 additional predictions ACE had collected, and 76 other predictions I found in the course of a forthcoming Rethink Priorities project.
Check out our dataset
Of the 273 predictions collected, 84 have resolved - nine resolving correctly, and 75 resolving incorrectly. Additionally, another 40 predictions should resolve at the end of the year and look to be resolving incorrectly. Overall, the state of these predictions suggests very systematic overconfidence. Cultured meat seems to have been perpetually just a few years away since as early as 2010, and this track record plausibly should make us skeptical of future claims from producers that cultured meat is just a few years away.
Here I am presenting the results of predictions that have resolved, keeping in mind they are probably not a representative sample of publicly available predictions, nor assembled from a systematic search. Many of these are so vaguely worded that it’s difficult to resolve them positively or negatively with high confidence. Few offer confidence ratings, so we can’t measure calibration.
Below is the graphic made in the MotherJones article. It is interactive in the original article.
The first sale of a ~70% cultured meat chicken nugget occurred in a restaurant in Singapore on 19 December 2020, at S$23 (~$17 USD) for two nugget dishes at the 1880 private members' club, created by Eat Just at a loss to the company. (Update 2021 Oct 15: "1880 has now stopped offering the chicken nuggets, owing to 'delays in production,' but hopes to put them back on menus by the end of the year" (Aronoff, 2021). We have independently tried to acquire the products ourselves from the restaurant and via delivery but have been unsuccessful so far.)
65 predictions about cultured meat being available on the market or in supermarkets specifically can now be resolved. 56 resolved negatively, all in the same direction: overly optimistic (update: the original post said 52). None resolved negatively for being overly pessimistic. These could resolve differently depending on your exact interpretation, but I don't think there is an order of magnitude difference between interpretations. The nine that plausibly resolved positively are listed below (I also listed nine randomly chosen predictions that resolved negatively).
In 2010 "At least another five to 10 years will pass, scientists say, before anything like it will be available for public consumption". (A literal reading of this resolves correct, even though one might interpret the meaning as a product will be available soon after ten years)
Mark Post of Maastricht University & Mosa Meat in 2014 stated he “believes a commercially viable cultured meat product is achievable within seven years." (It’s debatable if the Eat Just nugget is commercially viable as it is understood to be sold at a loss for the company).
Peter Verstate of Mosa Meat in 2016 predicted that premium priced cultured products should be available in 5 years (ACE 2017)
Mark Post in 2017 "says he is happy with his product, but is at least three years from selling one" (A literal reading of this resolves correct, even though one might interpret the meaning as a product will be available soon after three years)
Bruce Friedrich of the Good Food Institute in March 2018 predicted “clean-meat products will be available at a high price within two to three years”
Unnamed scientists in December 2018 “say that you can buy it [meat in a labor...
|
Dec 12, 2021 |
Make a $100 donation into $200 (or more) by WilliamKiely
03:20
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Make a $100 donation into $200 (or more), published by WilliamKiely on the effective altruism forum.
Latest Update: On Nov 24 at 1:01pm PT, the matching fund pool was increased to $600,000. Check the realtime dashboard to see how much is still available to allocate.
Of the first $250,000 in matching funds, more than 82% went to nonprofits you all donated to:
Donation Match Terms
This year, starting on November 1, Every.org is offering a very attractive $250,000 true counterfactual donation match. (Realtime dashboard of remaining funds.)
/@william.kiely?c=gg25
Every.org will match the first donation you make to each US 501(c)(3) nonprofit you give to 1:1 up to $100 per donor per nonprofit.
Currently, Every.org will contribute an extra $10 to your donation if you click to share your donation after donating. This might change (what it was originally).
The Match Terms in Every.org's words:
A donor can support multiple nonprofits, but only the first donation they make to each of those nonprofits will be matched. If someone makes two $50 donations to the same organization, then only the first $50 would be matched. If someone makes a $1000 donation, then only the first $100 is matched. If someone makes ten $100 donations to different organizations, then all ten donations will be matched.
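For concreteness, here is a minimal sketch (my own illustration, not Every.org code) of the matching rule quoted above: only the first donation to each nonprofit is matched, 1:1, capped at $100 per donor per nonprofit.

```python
# Sketch of the stated match terms (illustrative, not Every.org's implementation).
def match_amount(donations):
    """donations: list of (nonprofit, amount) pairs in the order they were made.
    Returns the total matching dollars a single donor would generate."""
    already_matched = set()
    total = 0.0
    for nonprofit, amount in donations:
        if nonprofit not in already_matched:      # only the first donation to each nonprofit counts
            total += min(amount, 100.0)           # matched 1:1, capped at $100
            already_matched.add(nonprofit)
    return total

print(match_amount([("AMF", 50), ("AMF", 50)]))             # 50.0  (second $50 not matched)
print(match_amount([("AMF", 1000)]))                        # 100.0 (only the first $100 matched)
print(match_amount([(f"org{i}", 100) for i in range(10)]))  # 1000.0 (ten different nonprofits)
```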
Steps to Participate
Join with:/@william.kiely?c=gg25 (If you're a new user, this will give you and me $25 in giving credit in addition to the match described above (Update: I believe this new user incentive was removed by Nov 24), plus help me track how many EAs participate in the match so I can share the information with the community.)
Check the live dashboard to see if there are remaining matching funds.
If so, donate $100[1] to a nonprofit of your choice (to get your donation automatically matched 1:1)
After donating, click one of the links to share your donation (to get the extra share incentive, currently +$10)
Repeat steps 3 and 4 for every nonprofit you want to support!
FAQ Answers
Everyone can participate, regardless of country, even if you already joined last year.
Fees are low, so donate by card if it's easier for you. Or if you'd prefer to eliminate all fees you can do so by connecting your bank account.
Tax receipts: You can get these easily in your account on your My Giving page.
If this sounds familiar...
It's because 198 of you participated in a previous donation match sponsored by the same Every.org after seeing the post Make a $10 donation into $35 in December 2020.
We successfully directed $4,950 in matching funds to highly effective nonprofits during that match. It was quite popular because it only took ~3 minutes for each person to direct $25 in matching funds. I'm hopeful that even more of you will participate in Every.org's current match since it's just as easy and yet the limits are much higher.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Are we living at the most influential time in history? by WilliamKiely
03:34
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Are we living at the most influential time in history?, published by WilliamKiely on the effective altruism forum.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
2018-2019 Long-Term Future Fund Grantees: How did they do? by NunoSempere
08:29
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: 2018-2019 Long-Term Future Fund Grantees: How did they do?, published by NunoSempere on the effective altruism forum.
Introduction
At the suggestion of Ozzie Gooen, I looked at publicly available information around past LTF grantees. We've been investigating the potential to have more evaluations of EA projects, and the LTFF grantees seemed to represent some of the best examples, as they passed a fairly high bar and were cleanly delimited.
For this project, I personally investigated each proposal without consulting many others. This work was clearly limited by not reaching out to others directly, but requesting external involvement would have increased costs significantly. We were also partially interested in finding how much we could figure out with this limitation.
Background
During the LTF Fund's first two rounds (round 1, round 2), under the leadership of Nick Beckstead, grants went mostly to established organizations and didn't have informative write-ups.
The next few rounds, under the leadership of Habryka et al., have more informative write-ups and a higher volume of grants, which are generally more speculative. At the time, some of the grants were scathingly criticised in the comments. The LTF at this point feels like a different, more active beast than under Nick Beckstead. I evaluated its grants from the November 2018 and April 2019 rounds, meaning that the grantees have had at least two years to produce some legible output. Commenters pointed out that the 2018 LTFF is pretty different from the 2021 LTFF, so it’s not clear how much to generalize from the projects reviewed in this post.
Despite the trend towards longer writeups, the reasoning for some of these grants is sometimes opaque to me, or the grant makers sometimes have more information than I do, and choose not to publish it.
Summary
By outcome
| Flag | Number of grants | Funding ($) |
|---|---|---|
| More successful than expected | 6 (26%) | $178,500 (22%) |
| As successful as expected | 5 (22%) | $147,250 (18%) |
| Not as successful as hoped for | 3 (13%) | $80,000 (10%) |
| Not successful | 3 (13%) | $110,000 (13%) |
| Very little information | 6 (26%) | $287,900 (36%) |
| Total | 23 | $803,650 |
Not included in the totals or in the percentages are 5 grants, worth a total of $195,000, which I did not evaluate because of a perceived conflict of interest.
Method
I conducted a brief Google, LessWrong and EA forum search of each grantee, and attempted to draw conclusions from the search. However, quite a large fraction of grantees don't have much of an internet presence, so it is difficult to see whether the fact that nothing is findable under a quick search is because nothing was produced, or because nothing was posted online. Overall, one could spend a lot of time with an evaluation. I decided to not do that, and go for an “80% of value in 20% of the time”-type evaluation.
Grantee evaluation examples
A private version of this document goes by grantees one by one, and outlines what public or semi-public information there is about each grant, what my assessment of the grant’s success is, and why. I did not evaluate the grants where I had personal information which people gave me in a context in which the possibility of future evaluation wasn't at play. I shared it with some current LTFF fund members, and some reported finding it at least somewhat useful.
However, I don’t intend to make that version public, because I imagine that some people will perceive evaluations as unwelcome, unfair, stressful, an infringement of their desire to be left alone, etc. Researchers who didn’t produce an output despite getting a grant might feel bad about it, and a public negative review might make them feel worse, or have other people treat them poorly. This seems undesirable because I imagine that most grantees were taking risky bets with a high expected value, even i...
|
Dec 12, 2021 |
Is EA Growing? EA Growth Metrics for 2018 by Peter Wildeford
13:45
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Is EA Growing? EA Growth Metrics for 2018, published by Peter Wildeford on the effective altruism forum.
Is EA growing? Rather than speculating from anecdotes, I decided to collect some data. This is a continuation of the analysis started last year. For each trend, I collected the raw data and also highlighted in green where the highest point was reached (though this may be different from the period with the largest growth depending on which derivative you are looking at). You can download the raw data behind these tables here.
Implications
This year, I decided to separate growth stats into a few different categories, looking at how growth changes when we talk about people learning about EA through reading; increasing their commitment through joining a newsletter, joining a Facebook group, joining the EA Forum, or subscribing to a podcast; increasing engagement by committing -- self-identifying as EA on the EA Survey and/or taking a pledge; and having an impact by doing something, like donating or changing their careers[33].
When looking at this, it appears that there has been a decline of people searching and looking for EA (at least in the ways we track), with the exception of 80,000 Hours pageviews and EA Reddit page subscriptions which continued to grow but at a much lower pace. When we look at the rate of change, we can see a fairly clear decline across all metrics:
We can also see that when it comes to driving initial EA readership and engagement, 80,000 Hours is very clearly leading the pack while other sources of learning about EA are declining a bit:
In fact, the two sources of learning about EA that seem to best represent natural search -- Google interest and Wikipedia pageviews -- appear somewhat correlated and are now both declining together.
However, there are more people consuming EA in closer ways (what I termed “joining”) -- while growth rate in the EA Newsletter and 80K Newsletter has slowed down, the EA FB is more active, the EA Reddit and total engagement from 80K continues to grow, and new avenues like Vox's Future Perfect and 80K's podcast have opened up. However, this view of growth can change depending on which derivative you look at. Looking at the next derivative makes clear that there was a large explosion of interest in 2017 in the EA Reddit and the EA Newsletter that wasn’t repeated in 2018:
Additionally, Founder's Pledge continues to grow and OFTW has had explosive growth, though GWWC has stalled out a bit. The EA Survey has also recovered from a sluggish 2017 to break records in 2018. Looking at the rate of change shows Founder's Pledge clearly increasing, GWWC decreasing, and OFTW having fairly rapid growth in 2018 after a slowdown in 2017.
Lastly, the part we care about most seems to be doing the strongest -- while tracking the actual impact of the EA movement is really hard and very sensitive to outliers, nearly every doing/impact metric we do track was at its strongest in either 2017 or 2018, with only GiveWell and 80K seeing a slight decline in 2018 relative to 2017. However, looking at the actual rate of change shows a bleaker picture that we may be approaching a plateau.
Conclusion
Like last year, it still remains a bit difficult to infer broad trends given that a decline for one year might be the start of a true plateau or decline (as appears to be the case for GWWC) or may just be a one-time blip prior to a bounce back (as appears to be the case for the EA Survey[34]).
Overall, the decline in people first discovering EA (reading) and the growth of donations / career changes (doing) makes sense, as it is likely the result of the intentional effort across several groups and individuals in EA over the past few years to focus on high-fidelity messaging and growing the impact of pre-existing EAs and deliberate decisions to stop mas...
|
Dec 12, 2021 |
Why I find longtermism hard, and what keeps me motivated by Michelle_Hutchinson
10:31
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Why I find longtermism hard, and what keeps me motivated, published by Michelle_Hutchinson on the effective altruism forum.
[Cross-posted from the 80,000 Hours blog]
I find working on longtermist causes to be — emotionally speaking — hard: There are so many terrible problems in the world right now. How can we turn away from the suffering happening all around us in order to prioritise something as abstract as helping make the long-run future go well?
A lot of people who aim to put longtermist ideas into practice seem to struggle with this, including many of the people I’ve worked with over the years. And I myself am no exception — the pull of suffering happening now is hard to escape. For this reason, I wanted to share a few thoughts on how I approach this challenge, and how I maintain the motivation to work on speculative interventions despite finding that difficult in many ways.
This issue is one aspect of a broader issue in EA: figuring out how to motivate ourselves to do important work even when it doesn’t feel emotionally compelling. It’s useful to have a clear understanding of our emotions in order to distinguish between feelings and beliefs we endorse and those that we wouldn’t — on reflection — want to act on.
What I’ve found hard
First, I don’t want to claim that everyone finds it difficult to work on longtermist causes for the same reasons that I do, or in the same ways. I’d also like to be clear that I’m not speaking for 80,000 Hours as an organisation.
My struggles with the work I’m not doing tend to centre around the humans suffering from preventable diseases in poor countries. That’s largely to do with what I initially worked on when I came across effective altruism. For other people, it’s more salient that they aren’t actively working to prevent the barbarity of some factory farming practices. I’m not going to talk about all of the ways in which people might find it hard to focus on the long-run future — for the purposes of this article, I’m going to focus specifically on my own experience.
I feel a strong pull to help people now
A large part of the suffering in the world today simply shouldn’t exist. People are suffering and dying for want of cheap preventative measures and cures. Diseases that rich countries have managed to totally eradicate still plague millions around the world. There’s strong evidence for the efficacy of cheap interventions like insecticide-treated anti-malaria bed nets. Yet many of us in rich countries are well off financially, and spend a significant proportion of our income on non-necessity goods and services. In the face of this absurd and preventable inequity, it feels very difficult to believe that I shouldn’t be doing anything to ameliorate it.
Likewise, it often feels hard to believe that I shouldn’t be helping people geographically close to me — such as homeless people in my town, or people who are being illegitimately incarcerated in my country. It’s hard to deal with there being visible and preventable suffering that I’m not doing anything to combat.
For me, putting off helping people alive today in favour of helping those in the future is even harder than putting off helping those in my country in favour of those on the other side of the world. This is in part due to the sense that if we don’t take actions to improve the future, there are others coming after us who can. By contrast, if we don’t take action to help today’s global poor, those coming after us cannot step in and take our place. The lives we fail to save this year are certain to be lost and grieved for.
Another reason this is challenging is that wealth seems to be sharply increasing over time. This means that we have every reason to believe that people in the future will be far richer than people today, and it would seem to follow that people in the future d...
|
Dec 12, 2021 |
Introducing the Legal Priorities Project by jonasschuett
03:16
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Introducing the Legal Priorities Project, published by jonasschuett on the effective altruism forum.
We’re excited to introduce a new EA organization: the Legal Priorities Project.
About us
The Legal Priorities Project is an independent, global research project founded by researchers from Harvard University. Our mission is to conduct legal research that tackles the world’s most pressing problems. This currently leads us to focus on the protection of future generations.
The project is led by Christoph Winter (ITAM/Harvard); the other founding team members are Cullen O’Keefe (OpenAI/GovAI), Eric Martínez (MIT), and Jonas Schuett (Goethe University Frankfurt). For more information about our team, visit our website.
The idea was born at the EA group at Harvard Law School in Fall 2018. Since then, we raised two rounds of funding from Ben Delo on the advice of Effective Giving, built a highly motivated and mission-aligned core team, registered as a 501(c)(3) nonprofit, hosted a seminar at Harvard Law School, and organized our first summer research fellowship. Besides that, we worked on our research agenda and a number of other research projects.
We’re currently assessing the desirability and feasibility of a formal affiliation with a university. We are considering founding a center or institute at a leading law school in the US or UK within the next two years.
Our research
We aim to establish “legal priorities research” as a new research field. At the meta-level, we determine which problems legal scholars should work on in order to tackle the world’s most pressing problems. At the object-level, we conduct legal research on the identified problems.
Our approach to legal priorities research is influenced by the longtermism paradigm. Consequently, we are currently focusing on the following cause areas: (1) improving the governance of advanced artificial intelligence, (2) mitigating risks from synthetic biology, (3) mitigating extreme risks from climate change, and (4) improving institutional design and decision-making.
Legal priorities research can be viewed as a subset of global priorities research. While global priorities research is located at the intersection of philosophy and economics, legal priorities research focuses primarily on legal studies, although it is still highly interdisciplinary.
We are currently working on a research agenda for legal priorities research. The agenda will be divided by cause areas and will contain a list of promising research projects for legal scholars. We hope to publish the agenda in December 2020. Sign up to our newsletter if you want to receive an email when it gets published.
We are also working on a number of object-level research projects. Please get in touch if you want to collaborate with us on future research projects. You may also want to fill out our expression of interest form.
Further information
Website: legalpriorities.org
Email: hello@legalpriorities.org
LinkedIn: linkedin.com/company/legalpriorities
Twitter: twitter.com/legalpriority
Facebook: facebook.com/legalpriorities
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Can I have impact if I’m average? by Fabienne
03:26
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Can I have impact if I’m average?, published by Fabienne on the effective altruism forum.
A friend of mine who is into EA said a few days ago that he thinks most people cannot have an impact, because to have an impact you need to be among the 0.1%-1% best in your field. I have encountered this thought in quite a few people in/interested in EA, some of whom say that this thought has dragged them down a lot. When I led an EA career workshop for students who received the Studienstiftung scholarship, one of my participants who had just realised he could have a lot more impact if he switched career paths, said to me something along the lines of: “Oh man, what I have been doing was worthless”. I replied: “Ehm no it wasn’t? :) You seem to have improved lives noticeably. The fact that there are better opportunities than what you did does not take away that value. In fact, it’s because improving even a single life is valuable that the best opportunities are so incredibly valuable.” 80k (probably rightly so) seeks to focus on the top 1%, but that does not mean that you cannot have (a lot of) impact if you are less good at what you do.
Here is what I think is going on when people despair about their impact. I think our ability to feel what “unusually high impact” means is very limited. Our head knows that there is a big difference between saving a few people and saving a multitude of people, but our heart doesn’t quite get it. So what some people in EA then seem to do is this: They assign the value level “maximally valuable” to the most impactful thing someone could do - so far, so good. But then when they encounter a lower level of impact (such as saving one life), they reduce their value judgment by however lower the impact is compared to the highest possible impact. This leaves them with an inappropriately low judgement of value for this impact, because our judgement of value for the highest impact possible was way too low to start with. It's the opposite to what people outside of EA tend to do - (correctly) give a lot of value to saving one life but not scaling this judgement up appropriately. I think it's possible to avoid both mistakes - at least, I think that I am able to avoid them both.
I think underestimating the value of significant but non-maximal impact is a problem. For one thing, it’s a misconception and misconceptions are rarely helpful. Second, I think this is probably bad for the mental health and productivity of our movement, because it de-motivates and saddens people. Third, it probably affects not only people who have “average” talent, whatever that means, but also those who are in fact excellent at something but who think of themselves as average. There seem to be a lot of people like this in EA. Fourth, I think it’s bad for public relations because it can make people feel useless and can come across as arrogant.
How do we fix this misconception? I hope that this post helps a bit with that - what follows are some other ideas. Perhaps the idea I’m presenting here, or related ones, could be included in the mental health workshops at EA conferences together with CBT and ACT methods for those who want more help emotionally distancing themselves from this or other unhelpful thoughts. Movement builders could watch out for this misunderstanding and correct it, like I did at the workshop I mentioned above. Maybe EA-related websites could include the idea somewhere, such as in their FAQs. I don’t know how much these things would help, but my personal experience with clarifying this misconception to people has been positive.
Thanks to Rob Long for helping me improve this post!
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
A new strategy for broadening the appeal of effective giving (GivingMultiplier.org) by Lucius_Caviola
11:33
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: A new strategy for broadening the appeal of effective giving (GivingMultiplier.org), published by Lucius_Caviola on the effective altruism forum.
In this post, I introduce an ongoing research project with the aim of bringing effective giving to a wider range of altruists. The strategy combines 1) donation bundling (splitting between your favorite and an effective charity), 2) asymmetrical matching (offering higher matching rates for allocating more to an effective charity), and 3) a form of donor coordination (to provide the matching). After conducting a series of experiments, we will test our strategy in the real world using our new website GivingMultiplier.org. This project is a collaboration with Prof Joshua Greene and is supported by an EA Funds grant.
Background
It is difficult to motivate people to give more effectively. Presenting people with information about charity effectiveness can increase effective giving to some extent (Caviola, Schubert, et al., 2020a; 2020b). However, the effect is limited because most people prefer to give to their favorite charity even when they know that other charities are more effective (Berman et al., 2018). This is because people are motivated by the ‘warm glow’ of giving (Andreoni, 1990), which isn’t a good proxy for effectiveness. Another issue is that most people aren’t motivated to proactively seek out information about the most effective charities. But making people care more about effectiveness is difficult. In multiple studies I have found that presenting people with moral arguments makes little to no difference. (Though moral arguments might work for some people and under the right circumstances, cf. Lindauer et al., 2020; Schwitzgebel et al., 2020.) Therefore, the approach we take here is to work with people’s preferences instead of trying to change them.
The strategy
Below is a short summary of the set of techniques our strategy relies on. In our experiments, 2,000 (Amazon MechanicalTurk) participants made probabilistically implemented decisions involving real money. If you are interested in more details about our studies and results, you can find an early working draft here.
1) Donation bundling
We found that donations to effective charities can be increased by up to 75% when people are offered the option to split their donation between their favorite and a highly effective charity (Study 1). We call this technique donation bundling. Most donors find such bundle options appealing because they enjoy nearly all the warm-glow of giving exclusively to their favorite charity, but also gain the satisfaction of giving more effectively and fairly (Study 2). Likewise, we find that third-parties perceive bundle donors as both highly warm and highly competent, as compared to donors who give exclusively to an emotionally appealing charity (warm, but less competent) or exclusively to a highly effective charity (competent, but less warm) (Study 3).
2) Asymmetrical matching
The bundling technique can be enhanced by offering matching funds in an asymmetrical way, i.e. the matching rate increases as more is allocated to the effective charity. In our studies, participants were offered higher matching rates the more they would give to the effective charity as opposed to their favorite charity. For example, they might get a 10% matching rate for giving 50% to their favorite and 50% to the effective charity, but a 20% matching rate for giving 100% to the effective charity. We found that asymmetrical matching can increase donations to effective charities by an additional 55% (Study 4). A key advantage of offering donation matching is that it gives people with no prior interest in effective giving a reason to visit the site and choose to support a highly effective charity.
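Here is a minimal sketch of how such an asymmetrical match might be computed, assuming a simple linear rate schedule that reproduces the example above (10% match at a 50/50 split, 20% match when everything goes to the effective charity); the actual GivingMultiplier schedule may differ.

```python
# Illustrative asymmetric matching schedule (my own assumption, not GivingMultiplier's exact rules).
def asymmetric_match(total_donation: float, effective_share: float) -> float:
    """Return the match amount for a donation of `total_donation`,
    of which `effective_share` (0..1) goes to the effective charity."""
    match_rate = 0.20 * effective_share   # linear: 10% at a 50/50 split, 20% at 100% effective
    return total_donation * match_rate

print(asymmetric_match(100, 0.5))  # 10.0 -> $10 match on a 50/50 split
print(asymmetric_match(100, 1.0))  # 20.0 -> $20 match when all $100 goes to the effective charity
```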
3) Matching as donor coordination
Where does the matching funding come from? We ...
|
Dec 12, 2021 |
SHIC Will Suspend Outreach Operations by cafelow
11:05
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: SHIC Will Suspend Outreach Operations, published by cafelow on the effective altruism forum.
A Q1 update and 2018 in review
By Baxter Bullock and Catherine Low
Since launching in 2016, Students for High-Impact Charity (SHIC), a project of Rethink Charity, has focused on educational outreach for high school students (primarily ages 16-18) through our interactive content. In January 2018, we began implementing instructor-led workshops, mostly in the Greater Vancouver area. Below, we summarize our experiences of 2018 and explain why we are choosing to suspend our outreach operations.
Summary
2018 saw strong uptake, but difficulty securing long-term engagement - Within a year of instructor-led workshops, we presented 106 workshops, reaching 2,580 participants at 40 (mostly high school) institutions. We experienced strong student engagement and encouraging feedback from both teachers and students. However, we struggled to get students to opt into advanced programming, which was our behavioral proxy for further engagement.
By the end of April, SHIC outreach will function in minimal form, requiring very little staff time - Over the next two months, our team will gradually wind down delivered workshops at schools. We plan on maintaining a website with resources and fielding inquiries through a contact form for those who are looking for information on how best to implement EA education.
The most promising elements of SHIC may be incorporated into other high-impact projects - The SHIC curriculum could likely be repurposed for other high-impact projects within the wider Rethink Charity umbrella. For example, it could be a tool for engaging potential high-net-worth donors, or as content to provide local group leaders.
We believe in the potential of educational outreach and hope to revisit this in the future - While we acknowledge the possibility that poor attendance at advanced workshops is indicative of general interest level in our program and/or EA in general, it's also possible that the methods we used to facilitate long term engagement were inadequate. We think that under the right circumstances, educational outreach could be more fruitful.
SHIC will release an exhaustive evaluation of our experience with educational outreach in the coming months.
2018 in review
In late 2017 we made a strategic shift towards a high-fidelity model of student engagement through instructor-led workshops. We tested this model throughout 2018, with our instructors visiting schools in Greater Vancouver, Canada[1].
Most students (56%) participated in a single-session workshop lasting approximately 80 minutes. These workshops consisted of a giving game[2], followed by an overview of the core ideas of effective altruism[3], including coverage of key cause areas. The remaining 44% participated in multi-session (typically three sessions), in-depth workshops which usually included a giving game, interactive explorations of the topics mentioned above, a cause prioritization activity, and a discussion of effective career paths.
Our goal for the second half of 2018 was to identify high-potential students from our school visits, and engage them further with supplementary advanced workshops at a central location in Vancouver. To gauge interest initially, we began with an opt-in approach for all interested students who provided an email address in order to obtain more information. We ran a workshop in November which primarily consisted of an in-depth activity on cause prioritization, and a workshop in December focused on effectively creating online fundraisers for the holidays.
Our results
The metrics we identified to gauge our success were:
Teachers and school uptake
Student survey results indicating shifts of opinion and/or behavior
The number of students who continue to engage with the material...
|
Dec 12, 2021 |
Avoiding Munich's Mistakes: Advice for CEA and Local Groups by Larks
01:02:22
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Avoiding Munich's Mistakes: Advice for CEA and Local Groups, published by Larks on the effective altruism forum.
If all mankind minus one, were of one opinion, and only one person were of contrary opinion, mankind would be no more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind.
John Stuart Mill, On Liberty, p23
We strive to base our actions on the best available evidence and reasoning about how the world works. We recognise how difficult it is to know how to do the most good, and therefore try to avoid overconfidence, to seek out informed critiques of our own views, to be open to unusual ideas, and to take alternative points of view seriously. ...
We are a community united by our commitment to these principles, not to a specific cause. Our goal is to do as much good as we can, and we evaluate ways to do that without committing ourselves at the outset to any particular cause. We are open to focusing our efforts on any group of beneficiaries, and to using any reasonable methods to help them. If good arguments or evidence show that our current plans are not the best way of helping, we will change our beliefs and actions.
Excerpted from The Guiding Principles of Effective Altruism
Introduction
This post argues that Cancel Culture is a significant danger to the potential of the EA project, discusses the mistakes that were made by EA Munich and CEA in their deplatforming of Robin Hanson, and provides advice on how to avoid such issues in the future.
As ever, I encourage you to use the navigation pane to jump to the parts of the article that are most relevant to you. In particular, if you are already convinced you might skip the 'examples' and 'quotes' sections.
Background
The Nature of Cancel Culture
In the past couple of years, there’s been much damage done to the norms around free speech and inquiry, in substantial part due to what’s often called cancel culture. Of relevance to the EA community is that there have been an increasing number of highly public threats and attacks on scientists and public intellectuals, where researchers are harassed online, disinvited from conferences, have their papers retracted, and are fired, because of mass online mobs reacting to an accusation over slight wording on topics of race, gender, and other issues of identity, or guilt-by-association with other people who have also been attacked by such mobs.
This is colloquially called ‘cancelling’, after the hashtags that have formed saying #CancelX or #xisoverparty, where X is some person, company or other entity, hashtags which are commonly trending on Twitter.
While such mobs cannot attack every person who speaks in public, they can attack any person who speaks in public, leading to chilling effects where nobody wants to talk about the topics that can lead to cancelling.
Cancel Culture essentially involves the following steps:
A victim, often a researcher, says or does something that irks someone online.
This critic then harshly criticises the person using attacks that are hard to respond to in our culture - the accusation of racism is a common one. The goal of this attack is to signal to a larger mob that they should pile on, with the hope of causing massive damage to the person’s private and professional lives.
Many more people then join in the attack online, including (often) contacting their employer.
People who defend the victim are attacked as also being guilty of a similar crime.
Seeing this dynamic, many associates of the victim prefer to sever their relationship, rather than be subject to this abuse. This may also include their employer, for whom the loss of one employee seems a relatively small cost for maintaining PR.
The online crowd may swiftly move on; however, the victim now lives under a cloud of suspicion that is hard to...
|
Dec 12, 2021 |
Thoughts on whether we're living at the most influential time in history by Buck
14:03
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Thoughts on whether we're living at the most influential time in history, published by Buck on the effective altruism forum.
(thanks to Claire Zabel, Damon Binder, and Carl Shulman for suggesting some of the main arguments here, obviously all flaws are my own; thanks also to Richard Ngo, Greg Lewis, Kevin Liu, and Sidney Hough for helpful conversation and comments.)
Will MacAskill has a post, Are we living at the most influential time in history?, about what he calls the “hinge of history hypothesis” (HoH), which he defines as the claim that “we are living at the most influential time ever.” Whether HoH is true matters a lot for how longtermists should allocate their effort. In his post, Will argues that we should be pretty skeptical of HoH.
EDIT: Will recommends reading this revised article of his instead of his original post.
I appreciate Will’s clear description of HoH and its difference from strong longtermism, but I think his main argument against HoH is deeply flawed. The comment section of Will’s post contains a number of commenters making some of the same criticisms I’m going to make. I’m writing this post because I think the rebuttals can be phrased in some different, potentially clearer ways, and because I think that the weaknesses in Will’s argument should be more widely discussed.
Summary: I think Will’s arguments mostly lead to believing that you aren’t an “early human” (a human who lives on Earth before humanity colonizes the universe and flourishes for a long time) rather than believing that early humans aren’t hugely influential, so you conclude that either humanity doesn’t have a long future or that you probably live in a simulation.
I sometimes elide the distinction between the concepts of “x-risk” and “human extinction”, because it doesn’t matter much here and the abbreviation is nice.
(This post has a lot of very small numbers in it. I might have missed a zero or two somewhere.)
EDIT: Will's new post
Will recommends reading this revised article of his instead of his original post. I believe that his new article doesn't make the assumption about the probability of civilization lasting for a long time, which means that my criticism "This argument implies that the probability of extinction this century is almost certainly negligible" doesn't apply to his new post, though it still applies to the EA Forum post I linked. I think that my other complaints are still right.
The outside-view argument
This is the argument that I have the main disagreement with.
Will describes what he calls the “outside-view argument” as follows:
1. It’s a priori extremely unlikely that we’re at the hinge of history
2. The belief that we’re at the hinge of history is fishy
3. Relative to such an extraordinary claim, the arguments that we’re at the hinge of history are not sufficiently extraordinarily powerful
Given 1, I agree with 2 and 3; my disagreement is with 1, so let’s talk about that. Will phrases his argument as:
The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.
The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.
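As a minimal formalization of the prior being described (the notation below is mine, not Will's; it simply restates the two sentences above):

```latex
% Will's stated conditional prior: given that Earth-originating civilization
% lasts n centuries, each century is equally likely to be the most influential.
\[
  P(\text{this century is most influential} \mid \text{civilization lasts } n \text{ centuries}) = \frac{1}{n}
\]
% The unconditional prior then averages this over one's credences q(n)
% about how long civilization will last:
\[
  P(\text{this century is most influential}) = \sum_{n} q(n)\,\frac{1}{n}
\]
```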
I have several disagreements with this argument.
This argument implies that the probability of exti...
|
Dec 12, 2021 |
Opinion: Digital marketing is under-utilized in EA by JSWinchell
05:55
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Opinion: Digital marketing is under-utilized in EA, published by JSWinchell on the effective altruism forum.
In this post I will make the case that digital marketing is under-utilized by EA orgs as well as provide some example use cases.
My hope is that this post leads to EA orgs testing the below or similar strategies.
A large part of what Effective Altruism is trying to do is to change people’s beliefs and behaviors. Digital advertising is one tool for achieving this goal. The fact that corporations, governments, and nonprofits repeatedly invest millions of dollars in digital marketing programs is evidence of their efficacy.
A couple notes:
I work at Google/YouTube helping large advertisers run Google and YouTube Ads. For that reason this post does not touch on Facebook/Instagram/Twitter/TikTok, but I am sure there are large opportunities there as well.
This post is focused on paid advertising.
Cost estimates are based on previous experience and industry benchmarks, but costs vary based on the geography, season, tactic etc.
If you plan on running any of the strategies described below, please reach out to me so we can coordinate with other charities that are planning on running similar strategies.
If your EA org would like to explore running a digital marketing campaign please don’t hesitate to reach out to me at j.s.winchell@gmail.com.
Search Ads
Every year, millions of people ask Google questions related to charity, poverty, animal welfare, AI safety, etc. If we can direct those people to EA websites, they will get EA answers to their questions.
Google gives registered charities $10K/month in free advertising credits. Of a sample of ~10 EA charities, only one was fully using this Google Ads Grant.
With a little extra knowledge, spending the Google Ads Grant becomes much easier than described in previous posts (1, 2).
If you work at an EA org and would like help spending your full Google Ads Grant please fill out this survey.
Image/display Ads
If your organization has a target audience and you know of websites that audience visits, display ads could be a very cost-effective way for your charity to achieve its goals.
Example use case: Founders Pledge wants to spread the word about their pledge. They identify three websites frequented by founders and advertise on those websites. A standard benchmark for an image/display ad is $2 per 1,000 impressions (an impression is when an ad is served). This strategy would break even at one pledge per 1,000,000,000 impressions served. (Assumes advertising cost of $2 per 1,000 impressions and an average pledge size of $2M, which is based on ~$3B pledged and ~1,500 pledgers).
While the optimal number of impressions served will be much less than 1,000,000,000, it is very likely higher than 0.
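As a quick sanity check on the arithmetic above, here is the break-even calculation written out; the $2 CPM and $2M average pledge are the post's illustrative assumptions, not measured values:

```python
# Back-of-the-envelope break-even for the Founders Pledge display-ad example.
# The inputs are the post's illustrative assumptions, not measured values.
cost_per_1000_impressions = 2.0     # typical display-ad benchmark, in dollars
avg_pledge_value = 2_000_000.0      # ~$3B pledged / ~1,500 pledgers

cost_per_impression = cost_per_1000_impressions / 1000          # $0.002
breakeven_impressions = avg_pledge_value / cost_per_impression  # 1e9

print(f"Break-even at one pledge per {breakeven_impressions:,.0f} impressions")
```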
In addition to sending founders to the Founders Pledge website, this tactic would also increase awareness of the pledge thereby making it easier for their outreach team to sign on new members.
Video Ads
YouTube Ads are a powerful and inexpensive way to deliver visual and audio messages to targeted audiences.
You can target users using any combination of the following:
Geography (radius targeting, zip/postal code, state)
Household income (top 10%, 11-20%, etc.)
Search history on Google and YouTube (e.g. users that searched for “best charity” or “factory farming” in the last 30 days)
Types of websites visited (e.g. users that have visited the websites of large nonprofits)
YouTube channels being watched at the time the impression is served (contextual targeting)
Demographics (age, gender)
Example use cases:
Org A wants to boost enrollment in EA University groups. They have a student member record a simple 6-second selfie video inviting students to the group. They target 18-24 year-olds within a 10 mile radius of their target universities. They could...
|
Dec 12, 2021 |
Collection of good 2012-2017 EA forum posts by saulius
05:13
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Collection of good 2012-2017 EA forum posts, published by saulius on the effective altruism forum.
I feel that older EA forum posts are not read nearly as much as they should be. Hence, I collected the ones that seemed to be the most useful and still relevant today. I recommend going through this list in the same way you would go through the frontpage of this forum: reading the titles and clicking on the ones that seem interesting and relevant to you. Note that you can hover over links to see more details about each post.
Also note that many of these posts have lower karma scores than most posts posted nowadays. This is in large part because until September 2018, all votes were worth only one karma point, and before September 2014 there was no karma system at all. Furthermore, the forum readership used to be lower. Hence, I don’t advise paying too much attention to karma when choosing which of these posts to read. Most of these posts had a significantly higher karma than other posts posted around the same time.
To create this list, I skimmed through the titles (and sometimes the contents) of all posts posted between 2012 and 2017. I relied on my intuition to decide which posts to include. Undoubtedly, I missed some good ones. Please feel free to point them out in the comments.
Also note that in some cases the information in these posts might be outdated, or no longer reflect the opinions of their authors.
Communication
Supportive Scepticism
See also: Supportive scepticism in practice
Some Thoughts on Public Discourse
Six Ways To Get Along With People Who Are Totally Wrong
If you don't have good evidence one thing is better than another, don't pretend you do
You have a set amount of "weirdness points". Spend them wisely.
General reasoning
In defence of epistemic modesty
Integrity for consequentialists
Beware surprising and suspicious convergence
Cause-prioritization
Why I'm skeptical about unproven causes (and you should be too)
Follow up: Where I've Changed My Mind on My Approach to Speculative Causes
How we can make it easier to change your mind about cause areas
Cause prioritization for downside-focused value systems
Five Ways to Handle Flow-Through Effects
Post series by Michael Dickens
Long-term Future
AI Safety Literature reviews by Larks: 2016, 2017, 2018, 2019
Will we eventually be able to colonize other stars? Notes from a preliminary review
The timing of labour aimed at reducing existential risk
Cognitive Science/Psychology As a Neglected Approach to AI Safety
Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk
Why might the future be good?
Personal thoughts on careers in AI policy and strategy
My current thoughts on MIRI's "highly reliable agent design" work
Improving disaster shelters to increase the chances of recovery from a global catastrophe
Why long-run focused effective altruism is more common sense
Advice on how to think about altruism
Effective Altruism is a Question (not an ideology)
Response: Effective Altruism is an Ideology, not (just) a Question
Cheerfully
Effective Altruism is Not a Competition
Personal consumption changes as charity (a suggestion about how to decide whether to buy more expensive but more ethical products)
Aim high, even if you fall short
An embarrassment of riches
Parenthood and effective altruism
How much does it cost to have a child?
EA is the best choice I've ever made
Room for Other Things: How to adjust if EA seems overwhelming
On everyday altruism and the local circle
Helping other altruists
Effective altruism as the most exciting cause in the world
For more advice on how to think about altruism, see excellent blogs Minding our way (by Nate Soares) and Giving Gladly (by Julia Wise)
Movement strategy
Keeping the effective altruist movement welcoming
The perspectives...
|
Dec 12, 2021 |
Objections to Value-Alignment between Effective Altruists by CarlaZoeC
31:03
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Objections to Value-Alignment between Effective Altruists, published by CarlaZoeC on the effective altruism forum.
With this post I want to encourage an examination of value-alignment between members of the EA community. I lay out reasons to believe strong value-alignment between EAs can be harmful in the long-run.
The EA mission is to bring more value into the world. This is a rather uncertain endeavour and many questions about the nature of value remain unanswered. Errors are thus unavoidable, which means the success of EA depends on having good feedback mechanisms in place to ensure mistakes can be noticed and learned from. Strong value-alignment can weaken feedback mechanisms.
EAs prefer to work with people who are value-aligned because they set out to maximise impact per resource expended. It is efficient to work with people who agree. But a value-aligned group is likely intellectually homogeneous and prone to breed implicit assumptions or blind spots.
I also noticed particular tendencies in the EA community (elaborated in section: homogeneity, hierarchy and intelligence), which generate additional cultural pressures towards value-alignment, make the problem worse over time and lead to a gradual deterioration of the corrigibility mechanisms around EA.
Intellectual homogeneity is efficient in the short-term, but counter-productive in the long-run. Value-alignment allows for short-term efficiency, but the true goal of EA – to be effective in producing value in the long term – might not be met.
Disclaimer
All of this is based on my experience of EA over the timeframe 2015-2020. Experiences differ and I share this to test how generalisable my experiences are. I used to hold my views lightly and I still give credence to other views on developments in EA. But I am getting more, not less worried over time, particularly because other members have expressed similar views and worries to me but have not spoken out about them because they fear losing respect or funding. This is precisely the erosion of critical feedback mechanisms that I point out here. I have a solid but not unshakable belief that the theoretical mechanism I outline is correct, but I do not know to what extent it takes effect in EA. But I’m also not sure whether those who will disagree with me will know to what extent this mechanism is at work in their own community. What I am sure of however (on the basis of feedback from people who have read this post pre-publication) is that my impressions of EA are shared by others within the community, and that they are the reason why some have left EA or never quite dared to enter. This alone is reason for me to share this - in the hope that a healthy approach to critique and a willingness to change in response to feedback from the external world are still intact.
I recommend the impatient reader skip forward to the section on Feedback Loops and Consequences.
Outline
I will outline reasons that lead EAs to prefer value-alignment and search for definitions of value-alignment. I then describe cultural traits of the community which play a role in amplifying this preference, and finally evaluate what effect value-alignment might have on EA's feedback loops and goals.
Axiomaticity
Movements make explicit and obscure assumptions. They make explicit assumptions: they stand for something and exist with some purpose. An explicit assumption is, by my definition, one that was examined and consciously agreed upon.
EA explicitly assumes that one should maximise the expected value of one’s actions in respect to a goal. Goals differ between members but mostly do not diverge greatly. They may be a reduction of suffering, the maximisation of hedons in the universe or the fulfilment of personal preferences, and others. But irrespective of individual goals EAs mostly agree that resources sh...
|
Dec 12, 2021 |
List of ways in which cost-effectiveness estimates can be misleading by saulius
23:18
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: List of ways in which cost-effectiveness estimates can be misleading, published by saulius on the effective altruism forum.
In my cost-effectiveness estimate of corporate campaigns, I wrote a list of all the ways in which my estimate could be misleading. I thought it could be useful to have a more broadly-applicable version of that list for cost-effectiveness estimates in general. It could perhaps serve as a checklist to confirm that no important considerations were missed when cost-effectiveness estimates are made or interpreted.
The list below is probably very incomplete. If you know of more items that should be added, please comment. I tried to optimize the list for skimming.
How cost estimates can be misleading
Costs of work of others. Suppose a charity purchases a vaccine. This causes the government to spend money distributing that vaccine. It's unclear whether the costs of the government should be taken into account. Similarly, it can be unclear whether to take into account the costs that patients have to spend to travel to a hospital to get vaccinated. This is closely related to concepts of leverage and perspective. More on it can be read in Byford and Raftery (1998), Karnofsky (2011), Snowden (2018), and Sethu (2018).
It can be unclear whether to take into account the fixed costs from the past that will not have to be spent again. E.g., costs associated with setting up a charity that are already spent and are not directly relevant when considering whether to fund that charity going forward. However, such costs can be relevant when considering whether to found a similar charity in another country. Some guidelines suggest annualizing fixed costs. When fixed costs are taken into account, it's often unclear how far to go. E.g., when estimating the cost of distributing a vaccine, even the costs of roads that were built partly to make the distribution easier could be taken into account.
Not taking future costs into account. E.g., an estimate of corporate campaigns may take into account the costs of winning corporate commitments, but not future costs of ensuring that corporations will comply with these commitments. Future costs and effects may have to be adjusted for the possibility that they don't occur.
Not taking past costs into account. In the first year, a homelessness charity builds many houses. In the second year, it finds homeless people to live in those houses. In the first year, the impact of the charity could be calculated as zero. In the second year, it could be calculated to be unreasonably high. But the charity wouldn't be able to sustain the cost-effectiveness of the second year.
Not adjusting past or future costs for inflation.
Not taking overhead costs into account. These are costs associated with activities that support the work of a charity. It can include operational, office rental, utilities, travel, insurance, accounting, administrative, training, hiring, planning, managerial, and fundraising costs.
Not taking costs that don't pay off into account. Nothing But Nets is a charity that distributes bednets that prevent mosquito-bites and consequently malaria. One of their old blog posts, Sauber (2008), used to claim that "If you give $100 of your check to Nothing But Nets, you've saved 10 lives." While it may be true that it costs around $10 or less[1] to provide a bednet, and some bednets save lives, costs of bednets that did not save lives should be taken into account as well. According to GiveWell's estimates, it currently costs roughly $3,500 for a similar charity (Against Malaria Foundation) to save one life by distributing bednets.
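A toy calculation, using only the figures quoted in the item above, of why dividing a donation by the per-net cost overstates lives saved; the implied nets-per-life ratio below is just arithmetic on those two numbers, not a GiveWell estimate:

```python
# Illustrative only: per-net cost vs. per-life cost, using the figures quoted above.
cost_per_net = 10.0            # rough cost to provide one bednet
cost_per_life_saved = 3500.0   # GiveWell-style estimate for a bednet charity

nets_per_life = cost_per_life_saved / cost_per_net   # ~350 nets per life saved

donation = 100.0
naive_lives = donation / cost_per_net                # 10 "lives" (the old claim)
better_estimate = donation / cost_per_life_saved     # ~0.03 lives

print(f"~{nets_per_life:.0f} nets per life; naive: {naive_lives}, better: {better_estimate:.3f}")
```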
Wiblin (2017) describes a survey in which respondents were asked "How much do you think it would cost a typical charity working in this area on average to prevent one child in a poor country from dying unnecessarily, by ...
|
Dec 12, 2021 |
Introducing High Impact Athletes by Marcus Daniell
01:41
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Introducing High Impact Athletes, published by Marcus Daniell on the effective altruism forum.
Hi all,
A while back I posted on here asking if there were any other pro athlete aspiring EAs. The response (while not including other pro athletes) was amazing, and the conversations and contacts that manifested from this forum were myriad. Thank you deeply for being such an awesome community!
Now I am very pleased to say that High Impact Athletes has launched.
We are an EA-aligned non-profit run by pro athletes. HIA aims to channel donations to the most effective, evidence-based charities in the world in the areas of Global Health & Poverty and Environmental Impact. We will harness the wealth, fame, and social influence of professional athletes to bring as many new people into the effective altruism framework as possible and create the biggest possible snowball of donations to the places where they can do the most good.
You can poke around on the website to learn more at/
Feedback is welcomed, and even more welcome is a follow on any of the socials. I'm terrible at social media and could use all the help I can get to build an audience.
Instagram: high.impact.athletes
Twitter: HIAorg
Facebook: @HIAorg
On that note, if anyone is interested in helping out with the social media side of things or knows anyone who would be please do get in touch either on here or at marcus@highimpactathletes.com
Thank you once again, you're all awesome.
Cheers, Marcus Daniell
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Parenting: Things I wish I could tell my past self by Michelle_Hutchinson
13:25
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Parenting: Things I wish I could tell my past self, published by Michelle_Hutchinson on the effective altruism forum.
I have a baby who’s nearly 10 months old. I’ve been thinking about what I’d like to be able to go back and tell myself before I embarked on this journey. I suspect that some of the differences between how I experienced it and what I had read in books correlates with ways that other effective altruists might also experience things. I also generally felt that finding decent no-nonsense information about parenting was hard, and that the signal to noise ratio when googling for answers was peculiarly bad. Probably the most useful advice I got was from EA friends with kids. So I thought it might be useful to jot down some thoughts for other EAs likely to have kids soon (or hoping to support others who are!).
Note that these are just my experiences. I’ve been surprised how easily, when it comes to mothering, I hear ‘this is how I did it’ as ‘if you’re not doing the same you’re doing it wrong’. I mean no such implication! Your mileage may vary on all of the below.
Things I was surprised about:
Not changing much as a person: The biggest uncertainty I had starting out was how much my interests and priorities would change when I had a baby. Various people I talked to confidently expected they would substantially change once the baby came along, for example that I would find being at home looking after a baby more interesting than it sounded in the abstract. A lot of the advice I read on the internet likewise indicated that people tended to want more maternity leave than they expected, and to be more inclined to go part time after having children. For those reasons, I roughly planned to take 3 months of maternity leave, but to be prepared for actually wanting more leave. In the actual event, I was really surprised by how little my inclinations changed. Far from wanting more maternity leave than I expected, I was keen throughout to be in touch with my colleagues and hear how things were going in the office, and wanted to get back to doing bits of work really quite soon after having Leo. This seemed in marked contrast with the other mothers I was meeting at baby groups, who had expected to want to hear about what was happening in their offices, but actually weren’t at all interested once the baby came along. I think I did too much assuming that when I had a baby I’d turn into a different kind of person, and not enough simply thinking about ‘given the kind of person I am, how do I expect having a baby to interface with that?’. Also, I did too much looking at the average of how people change, rather than noticing that people react in widely differing ways, which include ‘not changing much at all’. Overall it’s rather a relief to feel I’m still the same person, but now with a cute small person to spend time with.
Finding childcare was harder than I expected. When I got to it, I wanted to go back to work before three months. My husband had committed to finishing various pieces of work before starting paternity leave (3 months in). For that reason, we were keen to arrange some child care for Leo when he was younger than three months. That turned out to be more difficult than I expected. Nurseries don’t tend to take kids that young and the agency we wrote to had trouble finding us someone who would be short term (and took a while to get back to us at each step). We got a recommendation for someone on care.com, which almost worked out, except they found out their current contract precluded them from also working for us. The process also felt intimidating, at a time when we were already learning a lot of new things, which slowed down how well we did at it. I think I should have approached it more with the mindset of ‘we need to hire someone, and hiring is hard!’ than I d...
|
Dec 12, 2021 |
AI Governance: Opportunity and Theory of Impact by Allan Dafoe
22:43
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: AI Governance: Opportunity and Theory of Impact, published by Allan Dafoe on the effective altruism forum.
Note: We have recently opened up roles for researchers and a project manager at the Centre for the Governance of Artificial Intelligence, part of the Future of Humanity Institute, University of Oxford
AI governance concerns how humanity can best navigate the transition to a world with advanced AI systems.[1] It relates to how decisions are made about AI,[2] and what institutions and arrangements would help those decisions to be made well.
I believe advances in AI are likely to be among the most impactful global developments in the coming decades, and that AI governance will become among the most important global issue areas. AI governance is a new field and is relatively neglected. I’ll explain here how I think about this as a cause area and my perspective on how best to pursue positive impact in this space. The value of investing in this field can be appreciated whether one is primarily concerned with contemporary policy challenges or long-term risks and opportunities (“longtermism”); this piece is primarily aimed at a longtermist perspective. Differing from some other longtermist work on AI, I emphasize the importance of also preparing for more conventional scenarios of AI development.
Contemporary Policy Challenges
AI systems are increasingly being deployed in important domains: for many kinds of surveillance; by authoritarian governments to shape online discourse; for autonomous weapons systems; for cyber tools and autonomous cyber capabilities; to aid and make consequential decisions such as for employment, loans, and criminal sentencing; in advertising; in education and testing; in self-driving cars and navigation; in social media. Society and policy makers are rapidly trying to catch up, to adapt, to create norms and policies to guide these new areas. We see this scramble in contemporary international tax law, competition/antitrust policy, innovation policy, and national security motivated controls on trade and investment.
To understand and advise contemporary policymaking, one needs to develop expertise in specific policy areas (such as antitrust/competition policy or international security) as well as in the relevant technical aspects of AI. It is also important to build a community jointly working across these policy areas, as these phenomena interact, and are often driven by similar technical developments, involve similar tradeoffs, and benefit from similar insights. For example, AI-relevant antitrust/competition policy is shaping and being shaped by great power rivalry, and these fields benefit from understanding AI’s character and trajectory.
Long-term Risks and Opportunities
Longtermists are especially concerned with the long-term risks and opportunities from AI, and particularly existential risks, which are risks of extinction or other destruction of humanity’s long-term potential (Ord 2020, 37).
Superintelligence Perspective
Many longtermists come to the field of AI Governance from what we can call the superintelligence perspective, which typically focuses on the challenge of having an AI agent with cognitive capabilities vastly superior to those of humans. Given how important intelligence is---to the solving of our global problems, to the production and allocation of wealth, and to military power---this perspective makes clear that superintelligent AI would pose profound opportunities and risks. In particular, superintelligent AI could pose a threat to human control and existence that dwarfs our other natural and anthropogenic risks (for a weighing of these risks, see Toby Ord’s The Precipice).[3] Accordingly, this perspective highlights the imperative that AI be safe and aligned with human preferences/values. The field of AI Safety is in part m...
|
Dec 12, 2021 |
What gives me hope by Michelle_Hutchinson
05:03
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: What gives me hope, published by Michelle_Hutchinson on the effective altruism forum.
Fast forward to a few months ago when I listened to an interview with Peter Singer on the Lex Fridman podcast and he mentioned 80,000 Hours. Over many evenings with my newborn son asleep on me on the sofa I read your key ideas pages and problem profiles and started to think about how I can change my career to maximise its long term impact.
Sometimes being an effective altruist feels really tough. There is so much suffering in the world, and no guarantee that people in the future will even get to live. There’s a lot of work to be done to try to counteract these things, and the work is usually not easy - whether you’re working a high pressure job to earn money to donate, needing to make stressful judgement calls about the ways to solve big problems or applying for roles to try to figure out how you can best contribute.
One of the things that I find most heartening when effective altruism feels rough is reading through the applications we get to speak with our team. Reading through so many people’s plans for their careers and journeys to effective altruism is by turns humbling, heartwarming and inspiring. It reminds me how many of us are working together on improving the world, and how dedicated and caring my fellow travellers are. It makes attacking the world’s problems feel a bit more manageable.
I wanted to try to share some of the feeling I get from reading these stories. I’m not intending them as endorsements of specific actions. In particular, I think something EA does well is avoid glorifying sacrifice and making clear that the less effort you need to expend to help someone, the better. But people being willing to do what’s needed even at great personal cost does mean we’re more likely to be able to pull off difficult things. I also find it deeply moving.
There are so many different ways in which people’s stories are touching. Some people have had a really tough time themselves growing up, and yet somehow got through that with a determination to use their time to help others. Others have a plethora of options that would net them riches and prestige, yet decide to spend their labour figuring out how to help others as much as possible instead.
I want to share a few stories of people I've come across (with details removed for anonymity). I hope they give you a taste of the awe and feeling of connection to others I feel when I read about them.
The medical student nearing the end of their training who plans on radically switching career track. Having gotten into EA, they're willing to take a role in a totally different field despite all the hard work they put into their degree and how unintelligible their choice will be to their family.
The introvert willing to spend their free time fundraising door to door despite how horrible it is asking people for money
The vegan who donates 10% and gave a stranger a kidney, yet (crazily!) summarises their contributions so far as if they were no big deal and just what anyone might do
The student who chose their university based on being able to finish their degree as fast as possible in order to be out in the world helping others
The professor, many years into their career, willing to switch research focus entirely or leave academia because they read The Precipice and realised their skills are well suited to increasing humanity’s resilience
The early retiree who deliberately saved enough to retire by 40, but is willing to work a difficult corporate job in order to make money to give away
The winner of one of the toughest international maths competitions with an interest in a pure maths academic career, but who is instead working on the applied problems they think will most help others
The student already donating a significant proportion of the mon...
|
Dec 12, 2021 |
Taboo "Outside View" by kokotajlod
12:00
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Taboo "Outside View", published by kokotajlod on the effective altruism forum.
No one has ever seen an AGI takeoff, so any attempt to understand it must use these outside view considerations.
[Redacted for privacy]
What? That’s exactly backwards. If we had lots of experience with past AGI takeoffs, using the outside view to predict the next one would be a lot more effective.
My reaction
Two years ago I wrote a deep-dive summary of Superforecasting and the associated scientific literature. I learned about the “Outside view” / “Inside view” distinction, and the evidence supporting it. At the time I was excited about the concept and wrote: “...I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined.”
Now that I have more experience, I think the concept is doing more harm than good in our community. The term is easily abused and its meaning has expanded too much. I recommend we permanently taboo “Outside view,” i.e. stop using the word and use more precise, less confused concepts instead. This post explains why.
What does “Outside view” mean now?
Over the past two years I’ve noticed people (including myself!) do lots of different things in the name of the Outside View. I’ve compiled the following lists based on fuzzy memory of hundreds of conversations with dozens of people:
Big List O’ Things People Describe As Outside View:
Reference class forecasting, the practice of computing a probability of an event by looking at the frequency with which similar events occurred in similar situations. Also called comparison class forecasting. [EDIT: Eliezer rightly points out that sometimes reasoning by analogy is undeservedly called reference class forecasting; reference classes are supposed to be held to a much higher standard, in which your sample size is larger and the analogy is especially tight.]
Trend extrapolation, e.g. “AGI implies insane GWP growth; let’s forecast AGI timelines by extrapolating GWP trends.”
Foxy aggregation, the practice of using multiple methods to compute an answer and then making your final forecast be some intuition-weighted average of those methods.
Bias correction, in others or in oneself, e.g. “There’s a selection effect in our community for people who think AI is a big deal, and one reason to think AI is a big deal is if you have short timelines, so I’m going to bump my timelines estimate longer to correct for this.”
Deference to wisdom of the many, e.g. expert surveys, or appeals to the efficient market hypothesis, or to conventional wisdom in some fairly large group of people such as the EA community or Western academia.
Anti-weirdness heuristic, e.g. “How sure are we about all this AI stuff? It’s pretty wild, it sounds like science fiction or doomsday cult material.”
Priors, e.g. “This sort of thing seems like a really rare, surprising sort of event; I guess I’m saying the prior is low / the outside view says it’s unlikely.” Note that I’ve heard this said even in cases where the prior is not generated by a reference class, but rather from raw intuition.
Ajeya’s timelines model (transcript of interview, link to model)
...and probably many more I don’t remember
Big List O’ Things People Describe As Inside View:
Having a gears-level model, e.g. “Language data contains enough structure to learn human-level general intelligence with the right architecture and training setup; GPT-3 + recent theory papers indicate that this should be possible with X more data and compute.”
Having any model at all, e.g. “I model AI progress as a function of compute and clock time, with the probability distribution over how much compute is needed shifting 2 OOMs lower each decade.”
Deference to wisdom of the few, e.g. “the people I trust most on this matter seem to think.”
Intuition-...
|
Dec 12, 2021 |
The psychology of population ethics by Lucius_Caviola
05:40
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: The psychology of population ethics, published by Lucius_Caviola on the effective altruism forum.
In a new paper, David Althaus, Andreas Mogensen, Geoffrey Goodwin, and I, investigate people's population ethical intuitions. Across nine experiments (N = 5,776), we studied how lay people judge the moral value of hypothetical human populations that differ in their size and in the quality of the individual lives that comprise them. Our investigation aimed to answer three questions:
Do people weigh happiness and suffering symmetrically?
Do people focus more on the average or total welfare of a given population?
Do people account only for currently existing lives, or also lives that could yet exist?
Here is a very brief summary of the key findings (more details can be found in the linked paper):
1. People, on average, weigh suffering more than happiness
Participants, on average, believed that more happy than unhappy people were needed in order for the whole population to be net positive (Studies 1a-c). Judgments about the acceptable proportion of happy and unhappy people in a population matched judgments about the acceptable proportion of happiness and unhappiness within a single individual’s lifetime. The precise trade ratio between happiness and suffering depended on the intensity levels of happiness and suffering, such that a greater proportion of happiness was required as intensity levels increased (Study 1b). Study 1c clarified that, on average, participants continued to believe that more happiness than suffering was required even when the happiness and suffering units were exactly equally intense. This suggests that people generally weigh suffering more than happiness in their moral assessments, above and beyond perceiving suffering to be more intense than happiness. However, our studies also made clear that there are individual differences and that a substantial proportion of participants weighed happiness and suffering equally strongly, in line with classical utilitarianism.
In Study 1c, we asked participants what proportion of people in a population (or what proportion of all the moments in an individual life) needed to be happy vs unhappy for the whole population (or life) to be net positive. Horizontal lines represent mean value.
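One way to make the implied trade-off explicit (this formalization is mine, not the paper's): if each happy person contributes welfare h and each unhappy person contributes suffering of magnitude s, the proportion p of happy people needed for a net-positive population is:

```latex
% My formalization, not the paper's. With happy-person welfare h and
% unhappy-person suffering magnitude s, a population with happy proportion p
% is net positive exactly when
\[
  p\,h - (1 - p)\,s > 0
  \quad\Longleftrightarrow\quad
  p > \frac{s}{h + s}.
\]
% Requiring p > 1/2 even when h and s are stipulated to be equally intense
% corresponds to weighting suffering (s) more heavily than happiness (h).
```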
2. People have both an averagist and a totalist preference
Participants had a preference both for populations with greater total and greater average welfare (Study 3a-d). In Study 3a, we found that participants preferred populations with better total levels (i.e., higher levels in the case of happiness and lower levels in the case of suffering) when the average levels were held constant. In Study 3b, we found that participants preferred populations with better average levels when the total levels were held constant. In Study 3c, we found that most participants’ preferences lay in between the recommendations of these two principles when they conflict, suggesting that participants applied both preferences simultaneously in such cases. Further, their focus on average welfare even led them (remarkably) to judge it preferable to add new suffering people to an already miserable world, as long as this increased average welfare (Study 3d). But, when prompted to reflect, participants’ preference for the population with the better total welfare became stronger.
In Study 3d, we asked participants which out of two populations they consider better: a population consisting of 1,000 very happy (unhappy) people or a population of 2,000 people consisting of 1,000 very happy (unhappy) people and an additional 1,000 people who are also happy (unhappy) but to a weaker extent than the first 1,000—either on level ±10, ±50, ±90. Depending on the condition, participants were prompted to think reflectively or to rely on their intuition. The responses in the unh...
|
Dec 12, 2021 |
Information security careers for GCR reduction by ClaireZabel, lukeprog
14:55
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Information security careers for GCR reduction, published by ClaireZabel, lukeprog on the effective altruism forum.
Update 2019-12-14: There is now a Facebook group for discussion of infosec careers in EA (including for GCR reduction); join here
This post was written by Claire Zabel and Luke Muehlhauser, based on their experiences as Open Philanthropy Project staff members working on global catastrophic risk reduction, though this post isn't intended to represent an official position of Open Phil.
Summary
In this post, we summarize why we think information security (preventing unauthorized users, such as hackers, from accessing or altering information) may be an impactful career path for some people who are focused on reducing global catastrophic risks (GCRs).
If you'd like to hear about job opportunities in information security and global catastrophic risk, you can fill out this form created by 80,000 Hours, and their staff will get in touch with you if something might be a good fit.
In brief, we think:
Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity.
More generally, security expertise may be useful for those attempting to reduce GCRs, because such work sometimes involves engaging with information that could do harm if misused.
We have thus far found it difficult to hire security professionals who aren't motivated by GCR reduction to work with us and some of our GCR-focused grantees, due to the high demand for security experts and the unconventional nature of our situation and that of some of our grantees.
More broadly, we expect there to continue to be a deficit of GCR-focused security expertise in AI and biosecurity, and that this deficit will result in several GCR-specific challenges and concerns being under-addressed by default.
It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they found them).
It’s plausible that some people focused on high-impact careers (as many effective altruists are) would be well-suited to helping meet this need by gaining infosec expertise and experience and then moving into work at the relevant organizations.
If people who try this don’t get a direct work job but gain the relevant skills, they could still end up in a highly lucrative career in which their skillset would be in high demand.
We explain below.
Risks from Advanced AI
As AI capabilities improve, leading AI projects will likely be targets of increasingly sophisticated and well-resourced cyberattacks (by states and other actors) which seek to steal AI-related intellectual property. If these attacks are not mitigated by teams of highly skilled and experienced security professionals, then such attacks seem likely to (1) increase the odds that TAI / AGI is first deployed by malicious or incautious actors (who acquired world-leading AI technology by theft), and also seem likely to (2) exacerbate and destabilize potential AI technology races which could lead to dangerously hasty deployment of TAI / AGI, leaving insufficient time for alignment research, robustness checks, etc.[1]
As far as we know, this is a common view among those who have studied questions of TAI / AGI alignment and strategy for several years, though there remains much disagreement about the details, and about the relative magnitudes of different risks.
Given this, we think a member of such a security team could do a lot of good, if they are better than their replacement and/or they understand the full nature of the AI safety and security challenge better than their replacement (e.g. because they have spent many years thinking about AI from a GCR-reduction angle). Furthermo...
|
Dec 12, 2021 |
COVID: How did we do? How can we know? by Ghost_of_Li_Wenliang
17:29
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: COVID: How did we do? How can we know?, published by Ghost_of_Li_Wenliang on the effective altruism forum.
Obviously it was a triumph: the fastest vaccine development, approval, and rollout in history (by a factor of 5). We're up to 2.5bn doses in arms, out of say 12bn. And we got several good ones! Huzzah!
Obviously it was a catastrophe: through dumb inaction and a comedy of errors, we squandered the chance to suppress the virus. 4 million people are confirmed to have died of (or with) COVID - and given unbelievable underreporting that might be actually 12 million - and given Delta's momentum 20 million by the end is not unlikely. This is despite this virus being easy mode: unlike 1918, very few of the deaths were among the young frontline workers keeping the healthcare and delivery systems working; unlike SARS 1, post-viral disability is relatively rare. This is just the present pandemic and doesn't count future deaths from letting the thing become a permanent fixture.
How did we do? How can we even answer that question?
When I talk about whether a given country's response to COVID was a success or a failure, smart friends reply "governments had to balance the tradeoffs, so what looks like failure is really just compromise between multiple objectives (like economic activity)", "it's easy to say the optimal response in hindsight", that "it's difficult to compare different countries because of the different distances from China, wealth, state capacity".
For instance, they think the UK did ok. They can think this because they choose to compare to the average actual response (never mind that the UK is top 20 in deaths per capita). But what would the best possible response look like? What did our institutions stop us from getting?
Vax
Any self-respecting COVID rant must foreground vaccination. It is the solution, where other policies just buy time, or else consume old or disabled people.
We underinvested, and prevented market investment.
The EU paid $14 per Pfizer dose. What was it really worth? The current black market price for Pfizer is about $500. But that's a gross underestimate of the shadow price, since you get almost zero quality assurance or liability from darknet dealers. (You might still get the travel passport, depending on how weak your country's infosec is.) One proper estimate of the per-vaccine social benefit is $6000.
So we should have spent trillions in massive pre-purchase, on every credible vaccine. (The much-praised Operation Warp Speed and its equivalents elsewhere only pre-purchased about 2bn out of the necessary 12-14bn, and did so shockingly late, in August 2020.)
$3000 x 14 bn doses pays for a lot of overtime on microlipid machine assembly (which was the bottleneck on mRNA vaccine supply last year).
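Spelling out the multiplication the post gestures at, using only its own figures (a rough illustration, not an independent cost estimate):

```latex
% Rough arithmetic on the post's own figures, not an independent estimate:
% paying $3,000 per dose across roughly 14bn doses would still come to less,
% per dose, than the quoted $6,000 per-vaccine social benefit.
\[
  \$3{,}000 \times 14 \times 10^{9}\ \text{doses} \approx \$4.2 \times 10^{13},
  \qquad
  \$6{,}000 \times 14 \times 10^{9}\ \text{doses} \approx \$8.4 \times 10^{13}.
\]
```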
(That's assuming that you continue to ban vaccine markets, believing, as you apparently do, that fairness is worth the early death of millions. Another way to fund supply expansion for the global south is to just not get in our own way.)
The rich world defected, duh
16% of the world bought 70% of the vaccines. What force on earth could stop them? None, so we needed the massive supply increases, which were effectively banned.
This was not even good selfishness: it guaranteed the emergence of new strains in the global south.
This is the real evil of the EU procurement. They want to harm their own by delaying 4 weeks, to look strong? Well, that's one thing. But had they done a pre-purchase in March 2020, then global supply could have scaled up, so that the inevitable snatch away from the global south would have been completely balanced out by expanded supply.
What did we do? Overall, about 2bn doses ordered by August 2020, i.e. 5 times too little, 5 months too late.
The strange death of human challenge trials
Probably the biggest mistake was not intentionally infec...
|
Dec 12, 2021 |
Open Philanthropy is seeking proposals for outreach projects by abergal, ClaireZabel
17:22
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Open Philanthropy is seeking proposals for outreach projects, published by abergal, ClaireZabel on the effective altruism forum.
Open Philanthropy is seeking proposals from applicants interested in growing the community of people motivated to improve the long-term future via the kinds of projects described below.[1]
Apply to start a new project here; express interest in helping with a project here.
We hope to draw highly capable people to this work by supporting ambitious, scalable outreach projects that run for many years. We think a world where effective altruism, longtermism, and related ideas are routine parts of conversation in intellectual spaces is within reach, and we’re excited to support projects that work towards that world.
In this post, we describe the kinds of projects we’re interested in funding, explain why we think they could be very impactful, and give some more detail on our application process.
Proposals we are interested in
Programs that engage with promising young people
We are seeking proposals for programs that engage with young people who seem particularly promising in terms of their ability to improve the long-term future (and may have interest in doing so).
Here, by “particularly promising”, we mean young people who seem well-suited to building aptitudes that have high potential for improving the long-term future. Examples from the linked post include aptitudes for conducting research, advancing into top institutional roles, founding or supporting organizations, communicating ideas, and building communities of people with similar interests and goals, among others. Downstream, we hope these individuals will be fits for what we believe to be priority paths for improving the long-term future, such as AI alignment research, technical and policy work reducing risks from advances in synthetic biology, career paths involving senior roles in the national security community, and roles writing and speaking about relevant ideas, among others.
We’re interested in supporting a wide range of possible programs, including summer or winter camps, scholarship or fellowship programs, seminars, conferences, workshops, and retreats. We think programs with the following characteristics are most likely to be highly impactful:
They engage people ages 15-25 who seem particularly promising in terms of their ability to improve the long-term future, for example people who are unusually gifted in STEM, economics, philosophy, writing, speaking, or debate.
They cover effective altruism (EA), rationality, longtermism, global catastrophic risks, or related topics.
They involve having interested young people interact with people currently working to improve the long-term future.
Examples of such programs that Open Philanthropy has supported include SPARC, ESPR, the SERI and FHI summer research programs, and the recent EA Debate Championship. However, we think there is room for many more such programs.
We especially encourage program ideas which:
Have the potential to engage a large number of people (hundreds to tens of thousands) per year, though we think starting out with smaller groups can be a good way to gain experience with this kind of work.
Engage with groups of people who don’t have many ways to enter relevant intellectual communities (e.g. they are not in areas with high concentrations of people motivated to improve the long-term future).
Include staff who have experience working with members of the groups they hope to engage with—in particular, experience talking with young people about new ideas while being respectful of their intellectual autonomy and encouraging independent intellectual development.
We encourage people to have a low bar for submitting proposals to our program, but note that we view this as a sensitive area: we think programs like these have t...
|
Dec 12, 2021 |
EffectiveAltruismData.com: A Website for Aggregating and Visualising EA Data by Hamish Huggard
07:05
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: EffectiveAltruismData.com: A Website for Aggregating and Visualising EA Data, published by Hamish Huggard on the effective altruism forum.
There’s lots of cool data floating around in EA: grant databases, survey results, growth metrics, etc. I’m a data scientist and enjoy data visualisation, so I thought it would be a fun project to build a website which aggregates EA data into interactive plots.
The website is now live at EffectiveAltruismData.com. Source code is available on GitHub.
This project is still a work in progress: the data is pretty out of date and I’ve got lots of future work planned. But it’s far enough along that I’d like some public feedback.
The website is responsive and should look good on any desktop or tablet screen. If you’re viewing on a phone it will probably look ok, but you may need to alternate between portrait and landscape.
Here are a few screenshots:
Implementation Details
The website is mostly coded in Python. The main libraries I used were:
Pandas for data handling.
Plotly for creating the interactive plots.
Dash for the web framework.
I also wrote a bunch of vanilla CSS for the frontend styling.
The web server is currently deployed with Heroku, which costs $7/month.
I have vague ambitions to re-implement the frontend with D3.js or Chart.js. This should cut down the loading time and give me more control over how the visualisations work.
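To make the stack concrete, here is a minimal sketch of a Dash app that serves one Plotly chart built from a Pandas DataFrame. It is illustrative only: the data, column names, and chart are invented and are not taken from EffectiveAltruismData.com.

```python
# Minimal sketch of the Pandas + Plotly + Dash stack described above.
# Illustrative only: the data below is invented, not from the real site.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

# Invented example data standing in for a grants table.
df = pd.DataFrame({
    "year": [2018, 2019, 2020, 2021],
    "grants_millions": [120, 200, 230, 310],
})

fig = px.line(df, x="year", y="grants_millions",
              title="Example: grant totals by year (made-up numbers)")

app = Dash(__name__)
app.layout = html.Div([
    html.H1("Example dashboard"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run_server(debug=True)  # a host like Heroku would run this via a WSGI server instead
```

Dash generates the JavaScript frontend for you, which is convenient but adds load time; that is presumably the motivation for eventually rewriting the frontend directly in D3.js or Chart.js.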
Design Philosophy
I aimed to follow the data visualisation principles laid out in Information Dashboard Design and Storytelling with Data. These include:
Minimise the “ink-to-data” (or “pixel-to-data”) ratio to avoid distracting clutter.
Encoding data in length or distance is much higher fidelity than area or angle.
Avoid pie charts, stacked area plots, radar charts, violin plots.
Stick to bar charts, scatter plots, and line graphs as much as possible.
Don’t make the reader rotate their head.
Use horizontal bar charts rather than vertical ones.
Minimise the total length the eye has to travel to take in all the data.
Avoid legends.
On line graphs, put the labels directly on the ends of the lines.
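As a rough illustration of a few of these principles (horizontal bars, no legend, values labelled directly on the marks), here is a small Plotly sketch; the categories and numbers are made up.

```python
# Small illustration of the principles above: horizontal bars, no legend,
# values labelled directly on the bars. The numbers are invented.
import plotly.graph_objects as go

causes = ["Global health", "Animal welfare", "Longtermism", "Meta"]
donations = [420, 95, 180, 60]  # made-up figures, $ millions

fig = go.Figure(go.Bar(
    x=donations,
    y=causes,
    orientation="h",          # horizontal bars: no head-rotation needed
    text=donations,           # label each bar directly instead of using a legend
    textposition="outside",
))
fig.update_layout(
    title="Example: donations by cause area (made-up numbers)",
    xaxis_title="$ millions",
    showlegend=False,
)
fig.show()
```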
Why I Did This
I said earlier that this project was motivated by my fancy for data visualisation. But I do think there’s a lot of scope for valuable data visualisation and data wrangling work within Effective Altruism.
For example, I’ve found it difficult to get a sense of the scale of donations within EA. Are total donations basically Open Philanthropy plus a rounding error? Or do donations from all the little guys like me actually make a difference in the big picture?
This isn’t just an interesting question in itself: it also informs my life decisions. If the total donations of people in my reference class are enough to make a noticeable change to AMF funding, then I’m more likely to steadily earn-to-give on a moderately affluent career path. If my reference class is totally overwhelmed by a handful of mega-donors, then I’m more likely to drop everything and spend a year figuring out if I can contribute to AI safety.
This was some of the motivation behind the first panel of EffectiveAltruismData.com. Ultimately, I’d like to have a plot which puts all the major stocks and flows of EA money on a common scale and puts my personal earning-to-give into perspective.
Another example: I have a vague sense that EAs are getting more diverse over time. Is this true? Currently, answering this question would require going through all the EA survey reports, reading numbers from images of plots, and typing them into a spreadsheet. It would be nice if all the data was easily accessible from some central repository and ready for analysis.
Data visualisation is a great tool for getting lots of people quickly up to speed on quantitative facts. Gapminder and Our World in Data do this to great effect. If we want EA to be an efficient ...
|
Dec 12, 2021 |
What we learned from a year incubating longtermist entrepreneurship by Rebecca Kagan, Jade Leung, imben
30:06
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: What we learned from a year incubating longtermist entrepreneurship, published by Rebecca Kagan, Jade Leung, imben on the effective altruism forum.
This post is a retrospective on the Longtermist Entrepreneurship (LE) Project, which ran for a year and explored ways to incubate new longtermist entrepreneurship. If you’re in a hurry, we recommend reading key lessons learned, what we’d be excited about, and what it takes to work in this space.
Thanks to Markus Anderljung, Aaron Gertler, Sam Hilton, Josh Jacobson and Jonas Vollmer for reviewing, as well as many others who reviewed an earlier draft of the document. All opinions and mistakes are our own.
Intro
The Longtermist Entrepreneurship (LE) Project ran from April 2020 through May 2021, with the aim of testing ways to support the creation of new longtermist nonprofits, companies, and projects. During that time, we did market sizing, user interviews, and ran three pilot programs on how to support longtermist entrepreneurship, including a fellowship. The LE Project was run by Jade Leung, Ben Clifford, and Rebecca Kagan, and funded by Open Philanthropy. The project shut down after a year because of staffing reasons, but also because of some uncertainty about the project’s direction and value.
We never had a public internet presence, so this may be the first time that many people on the EA Forum are hearing about our work. This post describes the history of the project, our pilot programs, and our lessons learned. It also describes what we’d support seeing in the future, and what our concerns are about this space, and ways to learn more.
Overall, we think that supporting longtermist entrepreneurship is important and promising work, and we expect people will continue to work in this space in the coming years. However, we aren't publishing this post because we want to encourage lots of people to start longtermist incubators. We think doing longtermist startup incubation is incredibly difficult, and requires specific backgrounds. We wanted to share what we’ve learned transparently and widely to help people learn from our successes and mistakes, and to think carefully about what future efforts should be made in this direction.
If you’re considering starting an LE incubator[1], we’d love to hear about it so we can offer advice and coordination with others interested in working in this space. Please fill out this Google form if you’re interested in founding programs in LE incubation.
Key lessons learned:
Overall, it’s likely that one or multiple organizations should be doing LE incubation. We need more longtermist organizations, and the current ecosystem doesn’t seem poised to fix this problem. Our fellowship and matchmaking pilots were promising, suggesting that there’s more we can do to start new organizations.
There’s interest in LE programs, but a limited talent pool that has strong backgrounds in both longtermism and entrepreneurship. Talent is likely to be a significant bottleneck. Hundreds of people expressed interest in doing LE, but a very small number of these (1-3 dozen) had backgrounds in both longtermism and entrepreneurship. There were few people that we thought could pull off very ambitious projects.
The idea pool is more limited and less developed than we expected. There are existing lists of ideas, but almost no ideas are fleshed out and have broad support. There are no clear "highest priority ideas" that are obviously good to pursue and have been carefully vetted. Instead, most people we spoke to thought that the most promising ideas depended on the available talent. We found almost no longtermist ideas for traditional startup-minded people to pursue.
Funders are worried about downside risks of some new projects, but often more open to funding short-runway projects with frequent checkpoints. Funders do want to see m...
|
Dec 12, 2021 |
Patient vs urgent longtermism has little direct bearing on giving now vs later by Owen_Cotton-Barratt
10:09
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Patient vs urgent longtermism has little direct bearing on giving now vs later, published by Owen_Cotton-Barratt on the effective altruism forum.
This post is a response to having heard multiple people express something like "I'm persuaded by the case for patient longtermism, so I want to save money rather than give now", or otherwise implicitly assuming that patient longtermism is obviously more in favour of saving money than urgent longtermism (e.g. Ben Todd says "Even the people who are most into patient longtermism still think we should spend some on object-level things today. It’s just maybe they would only give half a percent of the portfolio as opposed to 4%." in his podcast episode on varieties of longtermism).
This view is understandable to me, especially given:
Trammell's paper arguing that "patient philanthropists" should invest rather than spend down their capital (note that I tentatively agree with this in the context of global poverty, to which he applies the framework in the paper);
The commonsense meanings of "urgent" and "patient", and of "spend" and "invest".
Nonetheless, I think it is mistaken and there is no direct implication that "patient longtermists" should be less willing to spend money now than "urgent longtermists". Rather, I think which position should be more in favour of saving money now is an open question that will depend on a lot of messy empirics (about giving opportunities). My current guess is to recommend spending rather than saving money at current margins to both patient and urgent longtermists. Neither recommendation feels robust; however, I'm actually a little more pro-saving for "urgent" longtermists than for "patient" ones.
Note that I do think that considering which timescales we want to exert influence over is an extremely fruitful lens (although I'd usually think of a natural timescale as attaching to an activity rather than an overall view), and it has a great deal of relevance for deciding what to fund and hence indirect bearing on whether to give now or later.
[With apologies for a lack of careful scholarship: I suspect these points are largely written up elsewhere, and appreciated in large part already by Trammell and Todd.]
So what's going on? Why doesn't the argument for patient philanthropy apply straightforwardly in the longtermist case?
The argument is in favour of investing (so that you have more resources available later), rather than spending (so that you have fewer resources available). You might think that giving money away should naturally be considered as spending; and considered from the perspective of an individual donor it probably is. But from the perspective of the longtermist community, most "spending" of money now is actually investment. It pays for research or career development or book-writing or websites or community-building (etc.); and the hope is that resources invested in these things now will return more resources meaningfully aligned with important parts of the longtermist worldview down the line (whether more money, or more people willing to act on the principles, or more broad sympathy to and influence for the ideas).
I think the best of these activities are almost certainly good investments; for instance I think that longtermism (broadly understood) has vastly outperformed the stock market over the last twenty years in terms of the resources it has amassed. I then think that individual decisions about giving now vs later should largely be driven by whether the best identifiable marginal opportunities are still good investments.
There's a lot of nuance that can (and should!) modulate that statement, for instance:
If an individual is still increasing their understanding of what good opportunities look like fast enough, they could be better waiting
But deferring to more-informed others or ...
|
Dec 12, 2021 |
Shallow evaluations of longtermist organizations by NunoSempere
01:00:32
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Shallow evaluations of longtermist organizations, published by NunoSempere on the effective altruism forum.
Introduction
This document reviews a number of organizations in the longtermist ecosystem, and poses and answers a number of questions which would have to be answered to arrive at a numerical estimate of their impact. My aim was to see how useful a "quantified evaluation" format in the longtermist domain would be.
In the end, I did not arrive at GiveWell-style numerical estimates of the impact of each organization, which could be used to compare and rank them. To do this, one would have to resolve and quantify the remaining uncertainties for each organization, and then convert each organization's impact to a common unit [1, 2].
In the absence of fully quantified evaluations, messier kinds of reasoning have to be used and are being used to prioritize among those organizations, and among other opportunities in the longtermist space. But the hope is that reasoning and reflection built on top of quantified predictions might prove more reliable than reasoning and reflection alone.
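To make the "common unit" idea concrete, here is a minimal sketch of how quantified answers might be combined into comparable impact estimates via Monte Carlo sampling. Everything in it is a placeholder: the probabilities, value ranges, and the unit itself are hypothetical and do not describe any organization discussed below.

```python
# Hypothetical sketch of a quantified evaluation: sample uncertain inputs,
# multiply them into a common unit, and compare distributions across orgs.
# All numbers are placeholders, not estimates of real organizations.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def org_impact(p_scenario_relevant, value_if_relevant_low, value_if_relevant_high):
    """Impact in an arbitrary common unit, as a Monte Carlo sample."""
    relevant = rng.random(N) < p_scenario_relevant
    value = rng.uniform(value_if_relevant_low, value_if_relevant_high, N)
    return relevant * value

org_a = org_impact(p_scenario_relevant=0.05, value_if_relevant_low=0.1, value_if_relevant_high=2.0)
org_b = org_impact(p_scenario_relevant=0.30, value_if_relevant_low=0.01, value_if_relevant_high=0.5)

for name, samples in [("Org A", org_a), ("Org B", org_b)]:
    print(f"{name}: mean={samples.mean():.3f}, "
          f"90% interval=({np.percentile(samples, 5):.3f}, {np.percentile(samples, 95):.3f})")
```

In practice one would use richer distributions and a tool such as Guesstimate, but the basic move of resolving each question into a distribution and multiplying through to a common unit is the same.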
In practice, the evaluations below are at a fairly early stage, and I would caution against taking them too seriously and using them in real-world decisions as they are. By my own estimation, of two similar past posts, 2018-2019 Long Term Future Fund Grantees: How did they do? had 2 significant mistakes, as well as half a dozen minor mistakes, out of 24 grants, whereas Relative Impact of the First 10 EA Forum Prize Winners had significant errors in at least 3 of the 10 posts it evaluated.
To make the scope of this post more manageable, I mostly did not evaluate organizations included in Lark's yearly AI Alignment Literature Review and Charity Comparison posts, nor meta-organizations [3].
Evaluated organizations
Alliance to Feed the Earth in Disasters
Epistemic status for this section: Fairly sure about the points related to ALLFED's model of its own impact. Unsure about the points related to the quality of ALLFED's work, given that I'm relying on impressions from others.
Questions
With respect to the principled case for an organization to be working on the area:
What is the probability of a (non-AI) catastrophe which makes ALLFED's work relevant (i.e., which kills 10% or more of humanity, but not all of humanity) over the next 50 to 100 years?
How much does the value of the future diminish in such a catastrophe?
How does this compare to work in other areas?
With respect to the execution details:
Is ALLFED making progress in its "feeding everyone no matter what" agenda?
Is that progress on the lobbying front, or on the research front?
Is ALLFED producing high-quality research? On a Likert scale of 1-5, how strong are their papers and public writing?
Is ALLFED cost-effective?
Given that ALLFED has a large team, is it a positive influence on its team members? How would we expect employees and volunteers to rate their experience with the organization?
Tentative answers
Execution details about ALLFED in particular
Starting from a quick review as a non-expert, I was inclined to defer to ALLFED's own expertise in this area, i.e., to trust their own evaluation that their own work was of high value, at least compared to other possible directions which could be pursued within their cause area. Per their ALLFED 2020 Highlights, they are researching ways to quickly scale alternative food production, at the lowest cost, in the case of large catastrophes, i.e., foods which could be produced for several years if there was a nuclear war which blotted out the sun.
However, when talking with colleagues and collaborators, some had the impression that ALLFED was not particularly competent, nor its work high quality. I would thus be curious to see an assessment by independent experts about how valuable their w...
|
Dec 12, 2021 |
Thoughts on doing good through non-standard EA career pathways by Buck
18:10
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Thoughts on doing good through non-standard EA career pathways, published by Buck on the effective altruism forum.
(Thanks to Beth Barnes, Asya Bergal, and especially Joshua Teperowski-Monrad for comments. A lot of the ideas in this post originated in friends of mine who didn’t want to write them up; they deserve most of the credit for good ideas, I just organized the ideas and wrote it up.)
80000 Hours writes (under the heading “Apply an unusual strength to a needed niche”):
If there’s any option in which you might excel, it’s usually worth considering, both for the potential impact and especially for the career capital; excellence in one field can often give you opportunities in others.
This is even more likely if you’re part of a community that’s coordinating or working in a small field. Communities tend to need a small number of experts covering each of their main bases.
For instance, anthropology isn’t the field we’d most often recommend someone learn, but it turned out that during the Ebola crisis, anthropologists played a vital role, since they understood how burial practices might affect transmission and how to change them. So, the biorisk community needs at least a few people with anthropology expertise.
I think that there are many people who do lots of good through pursuing a career that they were a particularly good fit for, rather than by trying to fit themselves into a top-rated EA career. But I also think it’s pretty easy to pursue such paths in a way that isn’t very useful. In this post I’m going to try to build on this advice to describe some features of how I think these nonstandard careers should be pursued in order to maximize impact.
I’m going to misleadingly use the term “nonstandard EA career” to mean “a career that isn’t one of 80K’s top suggestions”. (I’m going to abbreviate 80,000 Hours as 80K.)
I’m not very confident in my advice here, but even if the advice is bad, hopefully the concepts and examples are thought provoking.
Doing unusual amounts of good requires unusual actions
If you want to do an unusual amount of good, you probably need to take some unusual actions. (This isn’t definitionally true, but I think most EAs should agree on it–at the very least, most EAs think you can do much more good than most people do by donating an affordable but unusual share of your income to GiveWell recommended nonprofits.)
One approach to this is working in a highly leveraged job on a highly leveraged problem. This is the approach suggested by the 80K career guide. They came up with a list of career options, like doing AI safety technical research, or working at the CDC on biosecurity, or working at various EA orgs, which they think are particularly impactful and which they think have room for a bunch of EAs.
Another classic choice is donating to unusually effective nonprofits, a plan where you don't have to choose a particularly specific career path (though taking a specific career path is extremely helpful); the unusual effectiveness comes from the choice to donate to an unusually effective place.
The nice thing about taking one of those paths is that you might not need to do anything else unusual in order to have a lot of impact.
There are also some reasons to consider doing something that isn’t EtG or an 80K recommendation. For example:
There might be lower hanging fruit in that field, because it doesn’t have as many EAs in it.
Comparative advantage: The thing that you’re best at probably isn’t a top recommended EA career, just on priors.
You can’t get a job in one of the top recommended EA careers, maybe because you’re not in 80K’s target audience of graduates aged 20-35 who have lots of career options. I care about people in this group, but I’m not going to address this situation much in the rest of this piece, because the relevant con...
|
Dec 12, 2021 |
The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020) by Aaron Gertler
05:36
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020), published by Aaron Gertler on the effective altruism forum.
Context
My job is about helping people get involved in effective altruism, so I pay attention to how this happens.
I'm not sure I've ever seen any piece of content not named "Doing Good Better" get as much positive attention as Sam Harris's two podcast episodes with Will MacAskill.
After the first episode, Sam was deeply affected, and pledged to donate $3500/month in podcast proceeds to the Against Malaria Foundation.
After the second episode, Sam joined Giving What We Can and pledged 10% of profits from his Waking Up app (well over $3500/month) to effective charities.
Impact
Both episodes seem to have caused a spike in GWWC memberships, and the second may have boosted EA engagement more generally. Some notes on that:
GWWC estimates that over 600 people have taken the pledge in part because of the episodes (with another ~600 signing up for Try Giving). To break this down:
~800 people who finished the sign-up survey mentioned a podcast as one way they found GWWC (the average person chose 1.8 sources).
Of the 123 people who said which podcast it was, 107 said Sam Harris (87%).
Extrapolating a similar rate to the ~700 who didn't say which podcast gives another ~600 referrals on top of the original 107.
The "podcast" option was only added to the form in October 2020, before the second episode but after the first; another ~500 people who filled it out before then mentioned Sam somewhere.
I'd guess that most of these were coming from the first episode with Will, though he may have mentioned his giving in other episodes, and GWWC by extension.
An extremely engaged community builder told me in February 2021: "I feel like most new EAs I've met in the last year came in through Sam Harris."
My subjective impression in the weeks after the second episode came out was that most of the ambient "positive EA chatter" I heard on Twitter (people tweeting out random EA endorsements who normally talked about other things) included mentions of the podcast.
Why was this so impactful?
Some factors I think were important:
Sam set an example.
One of the most persuasive ways to promote something is to do it yourself.
One of Sam's explicit goals on the podcast is to get listeners to make ethical decisions, and I'd imagine that many listeners seek him out for ethical advice. This isn't as much the case for podcasters like Tim Ferriss or Joe Rogan, or other sources of publicity (TED, op-eds, etc.)
From the transcript below: "The question that underlies all of this, really, is: How can we live a morally beautiful life? That is more and more what I care about, and what the young Will MacAskill is certainly doing."
Sam made a rare endorsement.
Sam took several minutes to explain why he thinks giving is important, and gives GWWC a strong recommendation. This is a rare thing for him to do; most of his guests aren't selling anything (save maybe a book), and he doesn't advertise on his podcast.
Comparatively, Tim Ferriss (another major podcaster who had Will as a guest) has ~5 minutes of long-form advertising on every episode, and generally recommends lots of things every time a guest comes on. On the writeup of Will's episode, GWWC was the 23rd item on a bullet list of "selected links".
Tim's podcast referred 8 people to GWWC. This is actually a solid number, given that the "where you heard about us" question wasn't added until more than a year after that episode came out. But I think the true impact of the episode was still much lower than that of the Sam episodes, despite Tim's larger audience.
The conversation is really good.
I listened to the second episode soon after it came out, before I knew anything about its impact, and was almost immediately struck by how good Wi...
|
Dec 12, 2021 |
A do-gooder's safari by Owen_Cotton-Barratt
05:04
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: A do-gooder's safari, published by Owen_Cotton-Barratt on the effective altruism forum.
Doing good things is hard.
We’re gonna look at some deep tensions that attach to trying to do really good stuff. To keep it relatable(?!), I’ve included badly-drawn animals.
The mole pursues goals which are within comprehension and reach. At best the mole knows the immediate challenges extremely well and does a great job at dealing with them. At worst, the mole is digging in a random direction.
The giraffe looks into the distance, focusing on the big picture, and perhaps on challenges that will come up later but aren’t even apparent today. At best the giraffe identifies crucial directions to steer in. At worst, the giraffe doesn’t look where they’re going and trips over, or has ideas which are dumb because they don’t engage with details.
Moles have much more direct feedback loops than giraffes, so it’s harder to be a good giraffe than a good mole. When there’s a well-specified achievable goal, you can set a mole at it. Consequently many industries are structured with lots of mole-shaped roles. Idealists are often giraffes.
The beaver is industriously focused on the task at hand. The beaver rejects distractions and gets things done. At best, they are extremely productive. At worst, they miss big improvements in how they could go about things, or execute on a subtly wrong version of the task that misses most of the value.
The elephant is always asking how things are going, and whether the task is the right one. At their best, the elephant reorients things in better directions or finds big systemic improvements. At their worst, the elephant fails to get anything done because they can’t settle on what they’re even trying to do.
The mole and beaver are cousins, as are the giraffe and elephant. But you certainly get mole-elephants (applying lots of meta but only to local goals), or giraffe-beavers (just focused on the object-level of the big-picture).
The owl is a perfectionist. They have high standards for things, and want everything to meet those. At their best, they make things excellent, and perfectly crafted. If you want to produce the best X in the world, you probably need at least one owl involved. At their worst, they stall on projects because there’s something not quite right that they can’t see how to fix, and it’s unsatisfying.
The hare likes to ship things. They feel urgency all the time, and hate letting the perfect be the enemy of the good. At their best, they just make things happen! The hare can also be a good learner because they charge at things — sometimes they bounce off and get things wrong, but they get loads of experience so loads of chances to learn. At their worst, the hare produces loads of stuff but it’s all junk.
The dog is very socially governed / approval-seeking. They are excited to do things that people (particularly the cool people) will think are cool. At their best, they provide a social fabric which makes coordination simple (if someone else wants a thing done, they’re happy to do it without slowing everything down by making sure they understand the deep reasons why it’s desired). They also make sure gaps are filled — they jump onto projects and help out with things, or pick up balls that everyone agrees are important. At their worst, they chase after hype and status without providing any meaningful checks on whether the ideas they’re following are actually good.
The cat doesn’t give tuppence for what anyone else thinks. They’re just into working out what seems good and going for it. All new ideas involve at least a bit of cat. At their best, cats head into the wilderness and bring back something amazing. At their very worst they do something stupid and damaging that proper socialisation would have stopped. The more normal cat failure modes are to wander t...
|
Dec 12, 2021 |
SHOW: A framework for shaping your talent for direct work by RyanCarey
13:01
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: SHOW: A framework for shaping your talent for direct work, published by RyanCarey on the effective altruism forum.
By Ryan Carey, cowritten with Tegan McCaslin (this post represents our own opinions, and not those of our current or former employers)
TLDR: If your career as an EA has stalled, you’ll eventually break through if you do one (or more) of four things: gaining skills outside the EA community, assisting the work of more senior EAs, finding valuable projects that EAs aren’t willing to do, or finding projects that no one is doing yet.
Let’s say you’ve applied to, and been rejected, from several jobs in high-impact areas over the past year (a situation that is becoming more common as the size of the movement grows). At first you thought you were just unlucky, but it looks increasingly likely that your current skillset just isn’t competitive for the positions you’re applying to. So what’s your next move?
I propose that there are four good paths open to you now:
Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.
Get Humble: Amplify others’ impact from a more junior role.
Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.
Get Weird: Find things no one is doing.
I’ve used or strongly considered all of these strategies myself, so before I outline each in more depth I’ll discuss the role they’ve played in my career. (And I encourage readers who resonate with SHOW to do the same in the comments!) Currently I do AI safety research for FHI. But when I first came to the EA community 5 years ago, my training was as a doctor, not as a researcher. So when I had my first professional EA experience, as an intern for 80,000 Hours, my work was far from extraordinary. As the situation stood, I was told that I would probably be more useful as a funder than as a researcher.
I figured that in the longer term, my greatest chance at having a substantial impact lay in my potential as a researcher, but that I would have to improve my maths and programming skills to realize that. I got skilled by pursuing a master’s degree in bioinformatics, thinking I might contribute to work on genomics or brain emulations.
But when I graduated, I realized I still wouldn’t be able to lead research on these topics; I didn’t yet have substantial experience with the research process. So I got humble and reached out to MIRI to see if they could use a research assistant. There, I worked under Jessica Taylor for a year, until the project I was involved in wound down. After that I reached out to several places to continue doing AI safety work, and was accepted as an intern and ultimately a full-time researcher at FHI.
Right now, I feel like I have plenty of good AI safety projects to work on. But will the ideas keep flowing? If not, that’s totally fine: I can get outside and work on security and policy questions that EA hasn’t yet devoted much time to, or I could dive into weird problems like brain emulation or human enhancement that few people anywhere are working on.
The fact is that EA is made up in some large part by a bunch of talented generalists trying to squeeze into tiny fields, with very little supervision to go around. For most people, trying to do direct work will mean that you repeatedly hit career walls like I did, and there’s no shame in that. If anything, the personal risk you incur through this process is honorable and commendable. Hopefully, the SHOW framework will just help you go about hitting walls a little more efficaciously.
1. Get Skilled (outside of EA)
This is common advice for a reason: it’s probably the safest and most accessible path for the median EA. When you consider that skills are generally learned more easily with supervision, and that most skills are transferable between EA and non-EA contexts, getting training...
|
Dec 12, 2021 |
Cultured meat: A comparison of techno-economic analyses by Linch, Neil_Dullaghan
41:52
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Cultured meat: A comparison of techno-economic analyses, published by Linch, Neil_Dullaghan on the effective altruism forum.
For cultured meat to move the needle on climate, a sequence of as-yet-unforeseen breakthroughs will still be necessary. We’ll need to train cells to behave in ways that no cells have behaved before. We’ll need to engineer bioreactors that defy widely accepted principles of chemistry and physics. We’ll need to build an entirely new nutrient supply chain using sustainable agricultural practices, inventing forms of bulk amino acid production that are cheap, precise, and safe. Investors will need to care less about money. Germs will have to more or less behave. It will be work worthy of many Nobel prizes—certainly for science, possibly for peace. And this expensive, fragile, infinitely complex puzzle will need to come together in the next 10 years.
On the other hand, none of that could happen.
That is the takeaway from a new article by Joe Fassler (2021) in The Counter that draws heavily on two techno-economic analyses (TEA) of cultured meat (CE Delft 2021 & Humbird 2020). For full disclosure, we at Rethink Priorities were independently reviewing these TEAs (plus a third by Risner, et al. 2020) and in the process of writing a summary and comparison of them with our main takeaways.
The article addresses many of the issues we also noticed, and supplements them with interviews from industry experts. Here we want to acknowledge that they beat us to the punch somewhat, add a few relevant things we think the article left out of the comparison, and describe what the next steps are in our project.
The main cruxes of disagreement across the TEAs are:
Approach to the research question
Investor payback timelines
Food grade versus pharmaceutical grade bioreactors
The costs of media (growth factors and amino acids) at scale
The limits of cell-engineering needed to reduce media consumption needs
First though, we provide our quick summaries of the three TEAs so readers have a background before diving into the comparisons.
As we are investigating a scientific question that sometimes hinges on deep technical expertise (which neither Neil nor I have), we will likely have some errors in the summaries and (especially) personal takeaways. In addition, this report is less thoroughly checked than usual for Rethink Priorities reports. It should best be viewed as our current tentative understanding of the existing literature, rather than a final definitive summary of the existing literature.
tl;dr: We reviewed 3 TEAs on cultured meat. Our summary is that Humbird is very high quality and suggests cultured meat cost-competitiveness is hard and needs everything to go right. CE Delft outlines some of what will need to go right, but doesn't provide much evidence that any of it is possible, has internal validity errors, and arguably has too much motivated reasoning. Risner, et al. is decent, within the narrow limits it sets itself, but too many details are underspecified for it to reflect the full costs and challenges of scaling up cultured meat. Reading the TEAs and doing surrounding research has turned Linch from a cultured meat optimist into a broad pessimist. Neil wants to be more agnostic until further research from Rethink Priorities and others.
Our TEA Summaries
As part of a project forecasting the potential for cost-competitive cultured meat to displace conventional meat, my colleague Neil and I investigated three techno-economic analyses (TEAs) estimating the economic feasibility of cost-competitive cultured meat ($2.50-$8/kg, akin to different estimates for existing wholesale meat prices).
A quick note on terms. The studies we looked at (and us) are only investigating “cultured meat”, that is, animal cell based meat of target conventional meats (usually beef). They do ...
|
Dec 12, 2021 |
Desperation Hamster Wheels by Nicole_Ross
09:29
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Desperation Hamster Wheels, published by Nicole_Ross on the effective altruism forum.
This is a linkpost for
In my first few jobs, I felt desperate to have an impact. I was often filled with anxiety that I might not be able to. My soul ached. Fear propelled me to action. I remember sitting in a coffee shop one Saturday trying to read a book that I thought would help me learn about biosafety, an impactful career path I wanted to explore. While I found the book interesting, I had to force myself to read each page because I was worn out. Yet I kept chugging along because I thought it was my lifeline, even though I was making extremely little progress. I thought: If I don’t do this excellently, I’ll be a failure.
There were three critical factors that, taken together, formed a “desperation hamster wheel,” a cycle of desperation, inadequacy, and burn out that got me nowhere:
Self-worth -- I often acted and felt as if my self-worth was defined wholly by my impact, even though I would give lip service to self-worth being more than that.
Insecurity/inadequacy -- I constantly felt not skilled or talented enough to have an impact in the ways I thought were most valuable.
Black and white thinking -- I thought of things in binary. E.g. I was either good enough or not, I was smart or not, I would have an impact or not.
Together, these factors manifested as a deep, powerful, clawing desire for impact. They drove me to work as hard as possible, and fight with all I had. It backfired.
This “desperation hamster wheel” led me to think too narrowly about what opportunities were available for impact and what skills I had or could learn. For example, I only thought about having an impact via the organization I was currently working at, instead of looking more broadly. I only considered the roles most lauded in my community at the time, instead of thinking outside the box about the best fit for me.
I would have been much happier and much more impactful had I taken a more open, relaxed, and creative approach.
Instead, I kept fighting against my weaknesses -- against reality -- rather than leaning into my strengths. (1) It led me to worse work habits and worse performance, creating a vicious cycle, as negative feedback and lack of success fueled my desperation. For example, I kept trying to do research because I thought that that work was especially valuable. But, I hadn’t yet developed the skills necessary to do it well, and my desperation made the inherent vulnerability and failure involved in learning feel like a deadly threat. Every mistake felt like a severe proclamation against my ability to have an impact.
I’m doing a lot better now and don’t feel this desperation anymore. Now, I can lean into my strengths and build on my weaknesses without having my whole self-worth on the line. I can approach the questions of having an impact with my career openly and with curiosity, which has led me to a uniquely well-suited role. I can try things and make mistakes, learning from those experiences, and becoming better.
I feel unsure about what helped me change. Here are some guesses, in no particular order:
Taking anxiety and depression medication
Changing roles to find one that played more to my strengths
I’m sad that I’m not better or smarter than I grew up hoping I might be. It took time to grieve that and come to terms with it on both a personal and impact level (how many people could I have helped?) (2)
Digging into the slow living movement (3) and trying to live more intentionally and with more reflection
Having a partner who loves me and who doesn’t care about the impact I have, except so far as he knows I’d like to have an impact
Reconnecting with my younger, child self, who felt curious and excited. In the worst moments on my desperation hamster wheel, I no longer felt those things. I foun...
|
Dec 12, 2021 |
EA Funds is more flexible than you might think by Jonas Vollmer
03:40
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: EA Funds is more flexible than you might think, published by Jonas Vollmer on the effective altruism forum.
I’ve noticed that some people seem to have misconceptions about what kinds of grants EA Funds can make, so I put together a quick list of things we can fund that may be surprising to some of you.
(Reminder: our funding deadline for this round is March 7, though you can apply at any time of the year.)
EA Funds will consider building longer-term funding relationships, not just one-off grants.
Even though we typically make one-off grant decisions and have some turnover among fund managers, we can consider commitments to provide longer-term funding. We are also happy to otherwise help with the predictability of funding, e.g. by sharing our thoughts on how easy or hard we expect it to be to get funding in the future.
EA Funds can provide academic scholarships and teaching buy-outs.
We haven’t received a lot of applications for scholarships in the past, but the Long-Term Future Fund (LTFF) and EA Infrastructure Fund (EAIF) would be very excited about funding more of them.
Few graduate students and postdocs seem to be aware that they can be bought out of teaching duties, but sometimes this can be a great way to make more time for research.
EA Funds will consider funding organizations, including medium-sized ones, not just small projects.
The LTFF and EAIF are still unsure whether they will want to fund larger organizations in the longer term. Until then, they will consider funding organizations as long as they don’t have a comparative disadvantage for doing so. If Open Philanthropy has not seriously considered funding you, we will consider you, at least for now.
EA Funds will consider making large grants.
We have made grants larger than $250,000 in the past and will continue to consider them (potentially referring to other funders along with our evaluation). We think our comparative advantage is evaluating grants that other funders aren’t aware of or don’t have the capacity to evaluate, which typically are small grants, but we are flexible and willing to consider exceptions.
The EAIF and LTFF can make grants at any time of the year and on short notice.
We run funding rounds because it saves us some effort per application. But if your project needs funding within a month and the next decision deadline is three months away, we can still make it happen.
If there were a project that would have a very large impact if funded within three days, and no impact otherwise, there’s a high chance that we would get it funded.
The EAIF and LTFF can make anonymized grants.
As announced here, we can get you funded without disclosing personal information about you in our public payout reports.
EA Funds can pass on applications to other funders.
In cases where we aren’t the right funder (e.g., because we don’t have sufficient funding, or it’s a for-profit start-up, or there is some other issue), we are open to passing along applications when we think it might be a good fit. We are in touch with Open Philanthropy, EA-aligned for-profit investors, and other funders, and they have expressed interest in receiving interesting applications from us.
In general, we will listen to the arguments rather than rigidly following self-imposed rules. We have few constraints and are good at finding workarounds for the ones we have (except for some legal ones). We want to help great projects succeed and will do what it takes to make that happen. If you are unsure whether EA Funds can fund something, the best working hypothesis is that it can.
Reminder: the current EA Funds round closes March 7th. Apply here, and see this article for more information.
(Note that the Global Health and Development Fund does not accept funding applications, so this post does not apply to it.)
thanks for listening. to help us...
|
Dec 12, 2021 |
Aligning Recommender Systems as Cause Area by IvanVendrov
23:26
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Aligning Recommender Systems as Cause Area, published by IvanVendrov on the effective altruism forum.
by Ivan Vendrov and Jeremy Nixon
Disclaimer: views expressed here are solely our own and not those of our employers or any other organization.
Most recent conversations about the future focus on the point where technology surpasses human capability. But they overlook a much earlier point where technology exceeds human vulnerabilities.
The Problem, Center for Humane Technology.
The short-term, dopamine-driven feedback loops that we have created are destroying how society works.
Chamath Palihapitiya, former Vice President of user growth at Facebook.
The most popular recommender systems - the Facebook news feed, the YouTube homepage, Netflix, Twitter - are optimized for metrics that are easy to measure and improve, like number of clicks, time spent, number of daily active users, which are only weakly correlated with what users care about. One of the most powerful optimization processes in the world is being applied to increase these metrics, involving thousands of engineers, the most cutting-edge machine learning technology, and a significant fraction of global computing power.
The result is software that is extremely addictive, with a host of hard-to-measure side effects on users and society including harm to relationships, reduced cognitive capacity, and political radicalization.
Update 2021-10-18: As Rohin points out in a comment below, the evidence for concrete harms directly attributable to recommender systems is quite weak and speculative; the main argument of the post does not strongly depend on the last paragraph.
In this post we argue that improving the alignment of recommender systems with user values is one of the best cause areas available to effective altruists, particularly those with computer science or product design skills.
We’ll start by explaining what we mean by recommender systems and their alignment. Then we’ll detail the strongest argument in favor of working on this cause, the likelihood that working on aligned recommender systems will have positive flow-through effects on the broader problem of AGI alignment. We then conduct a (very speculative) cause prioritization analysis, and conclude with key points of remaining uncertainty as well as some concrete ways to contribute to the cause.
Cause Area Definition
Recommender Systems
By recommender systems we mean software that assists users in choosing between a large number of items, usually by narrowing the options down to a small set. Central examples include the Facebook news feed, the YouTube homepage, Netflix, Twitter, and Instagram. Less central examples are search engines, shopping sites, and personal assistant software which require more explicit user intent in the form of a query or constraints.
Aligning Recommender Systems
By aligning recommender systems we mean any work that leads widely used recommender systems to align better with user values. Central examples of better alignment would be recommender systems which
optimize more for the user’s extrapolated volition - not what users want to do in the moment, but what they would want to do if they had more information and more time to deliberate.
require less user effort to supervise for a given level of alignment. Recommender systems often have facilities for deep customization (for instance, it's possible to tell the Facebook News Feed to rank specific friends’ posts higher than others) but the cognitive overhead of creating and managing those preferences is high enough that almost nobody uses them.
reduce the risk of strong undesired effects on the user, such as seeing traumatizing or extremely psychologically manipulative content.
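To make the trade-off concrete, here is a toy re-ranking sketch. It is purely illustrative and not a proposal from the post: the engagement, reflective-value, and risk scores are assumed to come from hypothetical models, and the weights are arbitrary.

```python
# Toy illustration of re-ranking items by a blend of predicted engagement
# and a (hypothetical) estimate of reflective user value, with a penalty
# for high-risk content. Not a real system; all scores are made up.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    engagement: float         # e.g. predicted click/watch probability
    reflective_value: float   # hypothetical "would endorse on reflection" score
    risk: float               # hypothetical probability of serious harm

def aligned_score(item: Item, value_weight: float = 0.7, risk_penalty: float = 5.0) -> float:
    """Blend engagement with reflective value and penalise risky items."""
    return ((1 - value_weight) * item.engagement
            + value_weight * item.reflective_value
            - risk_penalty * item.risk)

items = [
    Item("Outrage bait", engagement=0.9, reflective_value=0.2, risk=0.10),
    Item("Long explainer", engagement=0.4, reflective_value=0.8, risk=0.01),
    Item("Friend's update", engagement=0.6, reflective_value=0.7, risk=0.00),
]

for item in sorted(items, key=aligned_score, reverse=True):
    print(f"{item.title}: {aligned_score(item):.2f}")
```

Raising value_weight shifts the ranking toward what the user would endorse on reflection, at the cost of short-term engagement; the hard part, of course, is estimating something like reflective_value at all.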
What interventions would best lead to these improvements? Prioritizing specific interventions is out of scope ...
|
Dec 12, 2021 |
[Podcast] Having a successful career with anxiety, depression, and imposter syndrome by Aaron Gertler
03:58
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: [Podcast] Having a successful career with anxiety, depression, and imposter syndrome, published by Aaron Gertler on the effective altruism forum.
This is a linkpost for/
Note: 80,000 Hours wants to keep the interviewee's name off of Google in this context. You can use "Howie" in comments, but please don't use his surname. We'll redact the surname if any comment happens to use it.
I'm linking to one of my favorite 80,000 Hours podcast episodes ever — not only because the topic seems important and broadly relevant, but because some of the interviewee's advice was directly applicable to problems I was experiencing and problems I've been working through with friends who are struggling.
I'd honestly recommend the episode to basically anyone in or out of EA, with a few caveats:
As 80,000 Hours notes, the conversation gets pretty intense. See the end of the quote below for advice on working around the most intense bits.
The podcast goes on at length about the ways in which the EA community can be deeply supportive of people struggling with mental illness. I can imagine parts of it being a difficult listen for people who either (a) spend a lot of their time in other communities that aren't so supportive, or (b) haven't gotten the same kind of support from within the EA community. In either case, I can imagine some of the advice not being very applicable, and this being frustrating.
Points that especially resonated for me:
Your friends may not be available all the time, but they'll almost always want to help in some way. Consider ways that even a few minutes per day of a friend's time could be good for you.
If you're struggling to get something done, and you're about to spend hours worrying about it or frantically churning out a terrible version of it before some arbitrary deadline... well, I'll quote the podcast:
Sometimes you can literally just write a one sentence email that’s like, “Hey, I’m not going to get to this for a week. I’m sorry,” and that clears the whole thing. The person writes back, “That’s totally fine,” and you don’t have to feel bad about it at all anymore. And nobody was ever upset.
Episode description
Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes.
We also hope that the episode will:
Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles.
Give insight into what it’s like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully.
Several early listeners have even made specific behavioral changes due to listening to the episode — including people who generally have good mental health but were convinced it’s well worth the low cost of setting up a plan in case they have problems in the future.
So we think this episode will be valuable for:
People who have experienced mental health problems or might in future;
People who have had troubles with stress, anxiety, low mood, low self esteem, imposter syndrome and similar issues, even if their experience isn’t well described as ‘mental illness’;
People who have never experienced these problems but want to learn about what it’s like, so they can better relate to and assist family, friends or colleagues who do.
In other words, we think this episode could be worthwhile for almost every...
|
Dec 12, 2021 |
Ask Me Anything! by William_MacAskill
05:20
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Ask Me Anything!, published by William_MacAskill on the effective altruism forum.
Thanks for all the questions, all - I’m going to wrap up here! Maybe I'll do this again in the future, hopefully others will too!
Hi,
I thought that it would be interesting to experiment with an Ask Me Anything format on the Forum, and I’ll lead by example. (If it goes well, hopefully others will try it out too.)
Below I’ve written out what I’m currently working on. Please ask any questions you like, about anything: I’ll then either respond on the Forum (probably over the weekend) or on the 80k podcast, which I’m hopefully recording soon (and maybe as early as Friday). Apologies in advance if there are any questions which, for any of many possible reasons, I’m not able to respond to.
If you don't want to post your question publicly or non-anonymously (e.g. you're asking “Why are you such a jerk?” sort of thing), or if you don’t have a Forum account, you can use this Google form.
What I’m up to
Book
My main project is a general-audience book on longtermism. It’s coming out with Basic Books in the US, Oneworld in the UK, Volante in Sweden and Gimm-Young in South Korea. The working title I’m currently using is What We Owe The Future.
It’ll hopefully complement Toby Ord’s forthcoming book. His is focused on the nature and likelihood of existential risks, and especially extinction risks, arguing that reducing them should be a global priority of our time. He describes the longtermist arguments that support that view but doesn't rely heavily on them.
In contrast, mine is focused on the philosophy of longtermism. On the current plan, the book will make the core case for longtermism, and will go into issues like discounting, population ethics, the value of the future, political representation for future people, and trajectory change versus extinction risk mitigation. My goal is to make an argument for the importance and neglectedness of future generations in the same way Animal Liberation did for animal welfare.
Roughly, I’m dedicating 2019 to background research and thinking (including posting on the Forum as a way of forcing me to actually get thoughts into the open), and then 2020 to actually writing the book. I’ve given the publishers a deadline of March 2021 for submission; if so, then it would come out in late 2021 or early 2022. I’m planning to speak at a small number of universities in the US and UK in late September of this year to get feedback on the core content of the book.
My academic book, Moral Uncertainty, (co-authored with Toby Ord and Krister Bykvist) should come out early next year: it’s been submitted, but OUP have been exceptionally slow in processing it. It’s not radically different from my dissertation.
Global Priorities Institute
I continue to work with Hilary and others on the strategy for GPI. I also have some papers on the go:
The case for longtermism, with Hilary Greaves. It’s making the core case for strong longtermism, arguing that it’s entailed by a wide variety of moral and decision-theoretic views.
The Evidentialist’s Wager, with Aron Vallinder, Carl Shulman, Caspar Oesterheld and Johannes Treutlein, arguing that if one aims to hedge under decision-theoretic uncertainty, one should generally go with evidential decision theory over causal decision theory.
A paper, with Tyler John, exploring the political philosophy of age-weighted voting.
I have various other draft papers, but have put them on the back burner for the time being while I work on the book.
Forethought Foundation
Forethought is a sister organisation to GPI, which I take responsibility for: it’s legally part of CEA and independent from the University. We had our first class of Global Priorities Fellows this year, and will continue the program into future years.
Utilitarianism.net
Darius Meissner and I (w...
|
Dec 12, 2021 |
Many Undergrads Should Take Light Courseloads by Mauricio
04:58
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Many Undergrads Should Take Light Courseloads , published by Mauricio on the effective altruism forum.
Thanks to Kuhan Jeyapragasan for comments and relevant/motivating conversation. In the spirit of saving time, I wrote this one relatively quickly.
Summary / Introduction
When I started undergrad, I decided to take almost as many classes as I could that term—wouldn’t want to miss out on limited opportunities to learn, right? I spent that whole term hunched over books and stressing over how behind I was (so, not meeting people, not getting research or job experience, not figuring out what problems to prioritize, not getting others into high-impact careers), only to later realize my classes had taught me little of value.
That doesn’t seem to have been a unique experience; many impact-driven undergrads fill their schedule with many classes (often, time-consuming ones). Consider not doing that—I think taking many classes each term is one of the most costly, easily avoidable mistakes that impact-driven undergrads tend to make. After all, it seems that, for most promising career goals:
Extra classes are not that valuable.
Some things you could be doing with that time instead are extremely valuable.
If you spend much unnecessary time on classes (e.g. get more than one degree in undergrad, take almost as many classes as you’re allowed to take each term, take especially time-consuming classes, etc), please at least have it be an intentional choice—one in which you recognize the opportunity cost. The opportunity cost is too massive to accept on the basis of the high school mindset that the most important choices we make are our curricula—there are so many other ways we can pursue our goals.
Caveats
This is written in the context of US universities, where students tend to have substantial flexibility to choose their courseloads. It may be much less applicable to other contexts.
No, I’m not saying you should let your grades plummet or drop out of college.
Some people’s productivity benefits a lot from oversight/accountability/deadlines. If that’s the case for you, consider alternatives to classes which provide that structure, e.g. part-time supervised work or collaborations.
Related/supporting thoughts
If you want to go into research, research experience / track record and good grades (e.g. GPA of ~3.8+ for CS) on classes you took are major assets; having taken extra classes won’t be as noticed (and will burn time that, if spent differently, could better prepare you for graduate school / research).
At least Dan Hendrycks (CS PhD student researching ML safety at UC Berkeley) agrees this strongly applies to AI safety technical research / CS grad school. He advises: "Avoid tough needless courses and take easy courses... [Unnecessary, tough classes are] the easiest way people burn time. [...] for CS grad school, research is what matters. [...] I can’t see much reason for a double or triple major since those will force you to take many more courses." [Edited to add details]
There’s a ton of low-hanging fruit you could be picking by getting other undergrads to strategically dedicate their careers to highly pressing problems.
Extra classes have very little value for going into policy or many industry jobs (from two policy professionals’ advice in conversation).
Classes are very unlikely to teach you much about a bunch of very important things, like what you value and what cause prioritization / career paths will best allow you to realize those values.
Taking more classes trades off against your ability to deeply understand and/or get good grades in the classes you do take.
Being very stressed and sad over a packed courseload doesn’t just suck—it also probably makes you less productive and less charismatic (which, editing to clarify, seems useful for building connections and getting peo...
|
Dec 12, 2021 |
Please Take the 2020 EA Survey by Peter Wildeford
01:49
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Please Take the 2020 EA Survey, published by Peter Wildeford on the effective altruism forum.
The 2020 Effective Altruism Survey is now live at the following link:
If you would like to share the EA Survey with others, please share this link:
The survey will remain open through the end of the year.
What is the EA Survey?
The EA Survey provides valuable information about the demographics of the EA community, how people get involved, how they donate, what causes they prioritise, their experiences of EA, and more.
The estimated average completion time for the main section of this year’s survey is 20 minutes. There is also an ‘Extra Credit’ section at the end of the survey, if you are happy to answer some more questions.
What's new this year?
There are two important changes regarding privacy and sharing permissions this year:
1) This year, all responses to the survey (including personal information such as name and e-mail address) will be shared with the Centre for Effective Altruism unless you opt out on the first page of the survey.
2) Rethink Priorities will not be making an anonymised data set available to the community this year. We will, however, consider requests for us to provide additional aggregate analyses which are not included in our main series of posts.
Also the Centre for Effective Altruism has generously donated a prize of $500 USD that will be awarded to a randomly selected respondent to the EA Survey, for them to donate to any of the organizations listed on EA Funds. Please note that to be eligible, you need to provide a valid e-mail address so that we can contact you.
We would like to express our gratitude to the Centre for Effective Altruism for supporting our work.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Hinge of History Refuted (April Fools' Day) by Thomas Kwa
10:44
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Hinge of History Refuted (April Fools' Day), published by Thomas Kwa on the effective altruism forum.
Introduction
Is the present the hingiest time in history? The answer to this question is enormously important to altruists, and as such has attracted the attention of many philosophers and researchers: Nick Beckstead, Phil Trammell, Toby Ord, Aron Vallinder, Allan Dafoe, Matt Wage, Holden Karnofsky, Carl Shulman, and Will MacAskill. I attempt to answer the question through a novel method of hinginess analysis.
Flawed Hinginess Measures
To determine whether one era of history is hingier than another, we must have some way to measure hinginess. The naive method used by many previous researchers is to simply count the number of hinges in each time period. There are several problems with this idea.
Suppose that a heavy door is held up by two hinges. Replacing these with four hinges, each half the size, would double the number of hinges. However, since the range of motion of the door is the same in each case, the hinginess has not changed.
Taken to its conclusion, a naive hinge count leads to the absurd conclusion that the hinginess of the world is dominated by its insect population. It is estimated that the world contains on the order of 1 million billion ants; each ant has six legs, with each leg containing three hinges. Even disregarding the antennae and mandibles and necks of ants, we end up with an estimate of 1.8×10^16 hinges belonging to ants, which dwarfs all of humanity's hinges by orders of magnitude. [1] Despite the fact that ant joints are millions of times less hingey than some other hinges, many people believe that underground ant colonies contribute the vast majority of the world's hinginess-- the so-called "underfoot myth". In reality, the proportion is much smaller, which we will examine in the next section.
Estimating the total mass or volume of hinges is not much better. While past hinges were limited to natural materials, advances in materials science have allowed us to manufacture hinges of many different metals: "copper, brass, nickel, bronze, stainless steel, chrome, and steel" (source), allowing for much better quality, more hingey hinges of the same mass or volume. A reasonable hinginess measure should take into account not only the number of hinges in the universe, but also the hinginess capacity of each individual hinge.
Estimating Hinginess
I claim that the hinginess of a given hinge depends on three factors, and can be estimated as scale × tractability × neglectedness:
Scale: How big a door can the hinge swing, and through what distance?
Tractability: When carrying this door, how low is the hinge's friction torque, compared to the torque required to twist the hinge off-axis? (In short, how good of a hinge is it?)
Neglectedness: How long has the hinge functioned without maintenance? [5]
Some hinges rank poorly in scale and are not very neglected (e.g. ant tarsal joints), while others are large in scale and very neglected (e.g. the doors of ancient temples). Similarly, some hinges are very tractable (e.g. in a precision-engineered bank vault) while some are intractable (a living hinge, like that on a plastic ketchup bottle lid, is only a few times easier to open than to twist or break). This is summarized in the table below.
| | Scale | Tractability | Neglectedness |
|---|---|---|---|
| High | Panama Canal gate | Bank vault door | Ancient temple door |
| Low | Ant leg joint | Ketchup bottle | Newly manufactured part |
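Read literally, the estimate is just a product of the three factors. Here is a minimal sketch of that calculation in Python, using invented illustrative numbers for the example hinges (none of these figures come from the post):

```python
# Tongue-in-cheek sketch of the hinginess estimate described above:
# hinginess = scale * tractability * neglectedness.
# All numeric values are invented for illustration; they are not from the post.

def hinginess(scale: float, tractability: float, neglectedness: float) -> float:
    """Estimate a hinge's hinginess as the product of the three factors."""
    return scale * tractability * neglectedness

example_hinges = {
    "Panama Canal gate": hinginess(scale=1000.0, tractability=10.0, neglectedness=1.0),
    "Ancient temple door": hinginess(scale=50.0, tractability=2.0, neglectedness=100.0),
    "Ant leg joint": hinginess(scale=0.001, tractability=5.0, neglectedness=0.1),
}

# Rank the example hinges from hingiest to least hingey.
for name, value in sorted(example_hinges.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:g}")
```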
Hinginess in the Past
Despite the low neglectedness of animal joints, they accounted for most of the hinginess in the world between the Cambrian explosion and the rise of agriculture. The reason is that there were simply no other hinges. [4]
The "underfoot myth" notwithstanding, insects account for comparatively little hinginess, mostly due to their small scale and neglectedness.2
Hinginess in the Prese...
|
Dec 12, 2021 |
COVID-19 brief for friends and family by eca
07:15
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: COVID-19 brief for friends and family, published by eca on the effective altruism forum.
People have been saying all kinds of wild stuff about the new coronavirus. I work in biosecurity and have been following the outbreak since the beginning. What follows is my best attempt to communicate what we know about the virus, and how to prepare, with my family and friends. I thought I would share in case others have been looking for a similar document.
This is NOT intended to be a detailed, rigorous justification of the preparation measures I've outlined, nor an authoritative statement on the best current estimates for epidemiological parameters. Instead, I try to be as straightforward as possible, cite only the action-relevant details, and align with the best recommendations I've heard from the EA biosecurity community as well as experts writ large.
Caveats aside, I'd be interested in feedback on this document, including whether I am missing sensible prep measures, have the right tone for sharing widely, or am wrong about the facts. I'm happy to provide more technical justification in comments.
Here is the draft. I'll be updating it as we have more info and I have more time to include sources, so go there for the most recent version. The first draft is included below for convenience.
If you'd like, please feel free to copy, modify, etc and share with your own family.
Coronavirus in brief (work in progress)
Bottom Line.
Coronavirus is significantly worse than the flu, but not the zombie apocalypse. No need to panic, but it probably makes sense to prepare.
It is going to affect day-to-day-life in western countries, including the U.S.
You and your family will probably face personal risk of illness by the end of the year.
You can prepare by
Stocking ~1 month of nonperishable food and other necessities, and 3 months of medications.
Relocating away from dense cities and/or shifting to working from home, if possible.
Learning how to properly wash your hands, and practicing not touching your face.
Avoiding travel after March of this year, and/or planning trips with a cancellation option.
Making plans to care for and protect the elderly from exposure to the virus.
Buying and carrying hand sanitizer, and using it frequently (every 30 min outside your home, before you eat or touch your face).
Wiping commonly contacted items (phone, keyboard, headphones etc) down with disinfectant regularly.
Avoiding crowded places (e.g. concerts, subways, theatres, buses, airports etc) without protection.
For essential travel, buying N95 respirators, if you can, and learning how to use them, including shaving facial hair.
What does the virus do?
The virus causes coughing, sneezing, fever, pneumonia, and in severe cases kidney failure and death.
80% of cases are relatively mild. The rest look like moderate to severe pneumonia.
Approximately 1% of people who catch the virus die.
After symptoms show, it takes 3 weeks - 1 month for severe cases to resolve.
Risk is much higher for people over 40.
Children appear to be relatively unaffected.
Men may be twice as susceptible as women, although it is too early to tell with confidence.
Immunity may not last long, and no-one has it to start with.
Where is the virus now (Feb 28)?
80,000+ cases worldwide, most in China. 2,800+ deaths.
23 countries have more than 10 cases outside of China.
Japan, Iran, Italy, and South Korea all had an exponential growth of cases from 10s to 100s in less than a week.
60 cases in the U.S. 1 case, in Northern California, is likely the first spread without link to China, suggesting the virus is spreading undetected in the United States.
What do we know about the virus?
It likely arose from a crossover, or “zoonosis” from animals in China, sometime in late November or early December of 2019.
It is most closely related to a virus called SARS which...
|
Dec 12, 2021 |
Finding equilibrium in a difficult time by Julia_Wise
10:24
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Finding equilibrium in a difficult time, published by Julia_Wise on the effective altruism forum.
To start: I don’t want to say that self-isolation is that bad in the scheme of things. People have lost their lives, they’ve lost loved ones. Healthcare workers are working hard, at their own risk, to protect us all. Some other workers don’t have a choice about continuing to work in person. And for some immunocompromised people and their families, self-isolation is the reality much or all of the time.
But I’m writing for those of us who aren’t physically ill, are doing some amount of self-isolation or social distancing because of the pandemic, and are not finding it easy. Most of this isn’t specific to EAs, but I hope it’s useful.
We are all having a hard time with this
I assume I’m not the only person who finished last week and realized I’d gotten very little work done.
We're all anxious about the situation in different ways. This is a hard, weird time. I don’t expect to have normal work weeks for a while, and you probably shouldn’t expect that either (especially if you’re newly working from home or if you have children who are suddenly out of school). And if you're affected by job loss, of course things are even more upside-down.
Focus on the basics: Sleep. Eat nourishing food. Get some exercise and sunshine. Connect with other people. These things are literally a public health measure — you’re protecting your immune system.
On information:
If you’re like me, you’ve found yourself reading more about this topic than is useful for any practical purpose. Think about diminishing marginal returns: what's the amount and kind of information about this that will benefit you? And when does it start to produce very little value?
Here’s the advice Gregory Lewis (a medical doctor and public health specialist who works on biorisk at the Future of Humanity Institute) gave to his colleagues:
I’d recommend some information hygiene. The typical person doesn’t need ‘up to the minute’ information on what is going on worldwide, and generally it takes time for instant reports to resolve into a clear picture.
Further, typical media reporting will tend to be biased in the very alarming direction (e.g., the typical ‘live feed’: “New case in A!” “New Case in B!” “Event C cancelled due to coronavirus fear!”). Social media tends not to be much better regarding bias, and worse with regard to reliability.
In other words, especially for those worried about this, staying glued to the screen can get a very high yield of anxiety for a very poor yield of useful action-relevant information.
Here are some good sources of information (which is the bulk of my information diet):
Generally:
WHO
Public health matters
Johns Hopkins Center for Health Security newsletter
For the data:
JH mapping dashboard
Worldometer (slightly easier to divvy out some time courses)
Typically good commentary/analysis/explanation
Tom Inglesby’s Twitter (both for itself, and for links to CHS’s other work)
John Campbell’s Youtube
Trevor Bedford’s Twitter for virology.
On working remotely:
When the Great Plague of London sent Isaac Newton and other Cambridge students home for a year in 1665, he did some of his best work including the famous falling-apple realization. Maybe once you settle in, you'll have a productive time in a different environment than usual.
If you’re used to working from a desk and switch to working from a couch or bed, you’re risking hurting your body. (After a two-week stretch of writing from bed a lot, my husband had serious wrist pain for weeks.) Please set up a good workspace where you can use your computer without putting your neck, back, and wrists in awkward positions.
More:
Wirecutter on equipment for working from home (though you can make an ergonomic setup for much less - here’s mine.)
Making profession...
|
Dec 12, 2021 |
Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being by MichaelPlant, JoelMcGuire
34:28
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being, published by MichaelPlant, JoelMcGuire on the effective altruism forum.
This is a cross-post from the website of the Happier Lives Institute.
TL;DR: We estimate that StrongMinds is 12 times (95% CI: 4, 24) more cost-effective than GiveDirectly in terms of subjective well-being. This puts it roughly on a par with the top deworming charities recommended by GiveWell.
[Edit 26/10/2021: Table 4 and accompanying text added]
1. Background and summary
In order to do as much good as possible, we need to compare how much good different things do in a single ‘currency’. At the Happier Lives Institute (HLI), we believe the best approach is to measure the effects of different interventions in terms of ‘units’ of subjective well-being (e.g. self-reports of happiness and life satisfaction).
In this post, we discuss our new research comparing the cost-effectiveness of psychotherapy to cash transfers. Before we get to that comparison, we should first highlight the advantage of doing it in terms of subjective well-being; to illustrate that, it will help to flag some alternative methods.
We could assess the effect each intervention has on wealth, but this would fail to capture the benefits of psychotherapy. It’s implausible to think that treating depression is only good insofar as it helps you to earn more. We could assess their effects using standard measures of health, such as a Disability-Adjusted Life-Year (DALY), but it’s similarly mistaken to think that alleviating extreme poverty is only good insofar as it helps you to become healthier. We could make some arbitrary assumptions about how much a given change in income and DALYs each contribute to well-being; this would allow us to ‘trade’ between them. But this would just be a guess and could be badly wrong. If we measure the effects on subjective well-being, how individuals feel and think about their lives (e.g. "Overall, how satisfied are you with your life, nowadays?" 0-10), we can provide an evidence-based comparison in units that more fully capture what we think really matters.
Efforts to work out the global priorities for improving subjective well-being are relatively new. Nevertheless, the recent push to integrate well-being in public policy-making in countries such as Scotland and New Zealand, as well as the reach of publications such as the World Happiness Report (which started in 2012), indicates that this is a viable approach.
Earlier work conducted by HLI’s Director, Michael Plant, suggested that using subjective well-being might reveal different priorities for individuals and organisations seeking to do the most good, with mental health standing out as one area that is crucial and potentially neglected. Plant’s (2018, 2019 ch. 7) prior back-of-the-envelope calculations indicated that StrongMinds, a mental health charity that treats women with depression in Africa, could be as cost-effective as GiveWell’s top charity recommendations.
These initial findings motivated us to do a much more rigorous analysis of the same interventions in terms of subjective well-being, so we undertook meta-analyses in each case. These aimed to address three questions:
Is assessing cost-effectiveness in terms of subjective well-being feasible: are there enough data that we can make these sorts of comparisons without making major assumptions to fill in the blanks?
Is this approach worthwhile: does it indicate new or different priorities?
Does this specific comparison between cash transfers and psychotherapy indicate that donors and decision-makers should change the way they allocate their resources, assuming they want to do the most good?
Our research focused specifically on studies in low...
|
Dec 12, 2021 |
Careers Questions Open Thread by Benjamin_Todd
03:12
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Careers Questions Open Thread, published by Benjamin_Todd on the effective altruism forum.
Hi everyone,
Many people in EA aren’t able to get as much career advice as they’d like, while at the same time, hundreds of EAs are happy to provide informal advice and mentoring within their career area.
Much of what we do in our one-on-one advice at 80,000 Hours is try to connect these two groups, but we’re not able to cover a significant number of people. At the same time, spaces like the EA careers discussion FB group don’t seem to have taken off as a place where people get concrete advice.
As an experiment, I thought we could try having an open career questions thread on the Forum.
By posting a reply here, anyone can post a question about their career, without having to make a top level post, and anyone on the forum can write an answer.
If it works well, we could do it each month or so.
To get things going, some of the 80,000 Hours team will be available from Monday onwards to write quick answers to topics they have views on (in an individual capacity rather than representing our official view), though our hope is that others will get involved.
For those with questions, I could imagine those ranging from high-level to practical:
I’m trying to choose whether to focus on global health or climate change, how should I decide?
I can either accept this job offer or go to graduate school, which seems best?
Which skills should I focus on learning in my spare time?
Where can I learn more about how to interview for jobs in policy?
I’m especially keen to see questions from people who haven’t posted much before.
The answers to your questions will probably be more useful if you can share a bit of background, though feel free to skip if it'll prevent you from asking at all! You can also skip if you're asking a very general question.
Here’s a short template to provide background – feel free to pick whichever parts seem most useful as context:
Which 2-5 problem areas do you intend to focus on?
What ideas for longer-term roles do you have?
What do you see as your strengths & most valuable career capital?
Some key facts on your experience / qualifications / achievements (or a link to your LinkedIn profile if you’re comfortable linking your name to the question).
Any important personal constraints to keep in mind (e.g. tied to a certain location)
What 2-5 next career moves are you considering? (i.e. specific jobs or educational opportunities you might take)
If you want to do a longer version, you could use our worksheet.
Just please bear in mind this will all be public on the internet for the long term. Don’t post things you wouldn’t want future employers to see, unless using an anonymous account. Even being frank about the pros and cons of different jobs can easily look bad.
As a reminder, we have more resources to help you write out and clarify your plan here.
For those responding to questions, bear in mind this thread might attract people who are newer to the forum, and careers can be a personal subject, so try to keep it friendly.
I’m looking forward to your questions and seeing how the thread unfolds!
Update 21 Dec: Thank you everyone for the questions and responses! The 80k team won't be able to post much more until Jan, but we'll try to respond after that.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Introducing High Impact Professionals by DevonFritz, Federico Speziali
16:04
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Introducing High Impact Professionals, published by DevonFritz, Federico Speziali on the effective altruism forum.
Overview
It is our pleasure to present to you our new EA organization, High Impact Professionals (HIP). Our vision is to enable EA working professionals to have the biggest positive impact possible. We imagine a community working together for the greater good, contributing not only through their money but also with their time and skills. We recently went through the 2021 Charity Entrepreneurship Incubation Program and received $100,000 in funding for our first full year of operations.
The rest of this document will outline:
The problem we are solving
How we are solving it
Our next steps
How you can get involved
Key Takeaways
We created HIP to maximize the impact of EA working professionals and solve several identified problems within the EA ecosystem.
We are going to pilot several promising interventions in year one to identify the most impactful path forward. We list them in the Pilots section of the post.
We have many ways you can get involved.
Our first pilot is workplace fundraising events. We have successfully run these in the past and are now helping EA working professionals (HIPs) host them at their organizations. If you are an EA working professional and want to increase your impact, become more engaged, and gain useful skills and EA career capital, register your interest to host a fundraising event. We will provide you with the resources and support needed to host a successful event.
We want to talk to more HIPs to get a better understanding of the needs of our community. Through our own giving of time and money, we have a sense of what might be needed but want to validate this with others in the community. If you are an EA working professional looking to have more impact please register your interest here. We encourage you to have a low bar for doing this: if you are considering it, just do it!
Please feel free to share with others you know who might be a good fit.
Thanks to Cillian Crosson, Ula Zarosa, Aaron Gertler, Jack Lewars, Jan-Willem Van Putten, and Brian Tan for their invaluable feedback on this announcement. All errors are our own.
Problem
Impact
According to the most recent EA Surveys, over half of all EAs are in the private sector, and earning to give is the most common career choice amongst EAs, with more than a third of votes cast. This group has resources like skills, time and money that they can contribute to high-impact organizations and through surveys have expressed a desire to do so. For example, the same survey revealed that a full 55% of EAs want to donate more, and more than one-third want to volunteer their time as a way to become more engaged. This means that the largest cohort of EAs is limited in their ultimate impact and seems to be facing a bottleneck.
EA Community Issues
In addition to impact problems, there have also been issues within the EA community that our organization can likely mitigate:
Funding diversification - a recent survey of 29 meta organizations revealed that a majority of those polled have a strong desire for more funding diversification for their organization.
Supply and demand issue with EA jobs - many EAs want to work for EA organizations but there are few positions available. This creates a lot of frustration for the many EAs that apply for an EA job but don’t get one. A recent forum post implies this could be a range of 47-124 people being rejected per role.
Insularity in EA - EA is quite insular and would likely benefit from stronger connections to the private sector, both to spread the ideas of EA more widely and to establish a more solid reputation for EA in wider circles.
Evidence
The rigorous research and selection process performed by Charity Entrepreneurship identified this idea as one of...
|
Dec 12, 2021 |
£4bn for the global poor: the UK's 0.7% by Sanjay
03:42
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: £4bn for the global poor: the UK's 0.7%, published by Sanjay on the effective altruism forum.
Background
The UK Chancellor of the Exchequer announced that the government will reduce the amount of spend on international development from 0.7% of GNI to 0.5%. (read more, e.g., here). This means that the government will spend £10bn on aid instead of £14bn.
This post sets out an attempt to undo this decision.
I'm hoping we can find more people to help with analysis and to donate to the campaign.
Plan
At a high level, the plan we have in mind is essentially taken straight from the tech startup playbook:
Identify the highest leverage constituencies (probably those with "moderate" Tory MPs)
Run Google and Facebook ads to identify people in those constituencies willing to write a letter or email to their MP
Recruit them and provide them with a template letter/email
I'm currently reaching out to a bunch of NGOs in my network to ask them
Are approaches like these already being used? (and if not, why not?)
Do any NGOs already have the analysis on which constituencies are the best targets?
Once we have recruited people, how best to look after them?
Even if we find that NGOs are already targeting the most strategic constituencies, we would want to do some work ourselves on analysing which constituencies are the most strategic, as this would help us to assess and potentially support the decision-making being done by the NGOs.
Who's working on this, and why do we need more people
The main people involved thus far are myself and a member of the EA community called Sahil.
We will need more people to help with various tasks, especially analysis to work out which MPs are the most strategic ones to reach out to. There are probably other tasks that I haven't thought through -- this post is being written quickly, as we may not have much time to act.
How would funds be used?
Funds are needed for Facebook/Google ads to reach strategically chosen constituents of the MPs who are most interested in supporting international development.
Given the considerations set out below, I would judge that this likely outperforms a donation to a GiveWell-recommended charity. (A confident/rigorous assessment of this claim would require a more detailed model than I have time for; timescales are likely short)
How good is more development spend?
DfID has in recent years been considered one of the top international development agencies, known for its focus on impact and its awareness of cost-effectiveness. Their Chief Economist was and still is Rachel Glennerster (early taker of the GWWC pledge). Something that's unclear is the extent to which DfID's effectiveness might change after the merger with the FCO (foreign office).
Having said that, at this particular time, the need is particularly high, so at the margin, reducing spend is likely to mean that much-needed programmes are brought to an abrupt halt right at the moment when work is most needed.
Reducing development spend in the short term strikes me as a clearly bad idea.
How effective is this campaign likely to be?
The change will require a change to the law, which means MPs will have to vote on it.
I get the impression that it's not clear that the government will win on this. This suggests that it's a good campaign to apply some effort to.
A Guardian political correspondent had this to say about the drop in the ODA percentage:
"This is a very politically tricky moment, as shown by the amount of time Sunak uses justifying it. A lot of Tory MPs are angry. The change potentially requires a Commons vote, and there is no guarantee the government will win. This one, as they say, could run and run."
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
FTX EA Fellowships by FTX Foundation
01:35
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: FTX EA Fellowships, published by FTX Foundation on the effective altruism forum.
Announcing a new program: FTX EA Fellowships!
FTX is a cryptocurrency exchange headquartered in the Bahamas, founded with the goal of making money to give to effective causes. We’re looking to
support people doing exciting work
kickstart an EA community in the Bahamas
To those ends, we’re looking for applications from people already working on EA jobs or projects that can be done from the Bahamas.
For fellowship recipients, we’ll provide:
travel to and from the Bahamas
housing in the Bahamas for up to 6 months
an EA coworking space
a stipend of $10,000
Round 1 applications close 11/15. We’ll get back with responses by 12/1, and accommodations will start in January. This is just an initial default schedule: happy to accept off-cycle applications or people who wouldn’t be able to move until later as well.
We plan to accept somewhere between 10-25 applicants in the first round, depending on interest and capacity constraints.
The application is here. (If you don’t need a fellowship but might want to come hang out in the Bahamas, fill out this form.) We may follow up to do an interview over video chat after reviewing initial applications. If you have any questions feel free to email fellowships@ftx.com.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Candy for Nets by Jeff_Kaufman
03:15
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Candy for Nets, published by Jeff_Kaufman on the effective altruism forum.
Yesterday morning my five-year-old daughter was asking me about mosquitos, and we got on to talking about malaria, nets, and how Julia and I donate to the AMF to keep other kids from getting sick and potentially dying. Lily took it very seriously, and proposed that when I retire she take my programming job and donate in my place.
I told her that she didn't need to wait until after I retired to start helping, and she decided she wanted to sell candy on the bike path as a fundraiser. I told her we could do this after naps if the weather was still nice, and the first thing she said when I got her up from her nap was that she wanted to go make a sign.
She dictated to me, "Lily is selling candy to raise money for malaria nets, $1" and I wrote the letters. She colored them in:
(It looks like she's posing with the sign here, but this is just how she happened to position herself for coloring. She has short arms.)
Once Anna was up from her (longer) nap I got out the wagon and brought them over to the bike path. Lily did all the selling; I just hung out to the side, leaning against a tree.
She's always been good at talking to adults, and did a good job selling the candy. She would explain that the candy was $1/each, that the money was going to buy malaria nets, and that malaria was a very bad disease that you got from mosquitoes. People were generous, and several people gave without taking candy, or put in an extra dollar. One person didn't have cash but wanted to give enough that they went home and came back with a dollar. As someone who grew up in a part of town with very little foot traffic, the idea that you can just walk a short distance from your house to somewhere where several people will pass per minute continually amazes me.
After about twenty minutes all the candy was sold and Lily had collected $20.75. She played in the park for a while, and then when we came home she asked how we would use the money to buy nets. I showed her pictures of distributions on the AMF website but she wanted to see pictures of the nets in use so we spent a while on image search:
I explained that we weren't going to distribute the nets ourselves, but that we would provide the money so other people could.
Initially she didn't want to donate the whole amount, but wanted to set aside half to buy more candy so she could do this again. I told her that I would be happy to buy the candy. Possibly I should have let her manage this herself, but I was worried that the money wouldn't end up donated which wouldn't have been fair to the people who'd bought the candy, and explained this to her. She gave me the $20.75 and I used my credit card to pay for the nets. [1]
Here's the message she dictated for the donation:
I want people to be safe in the world from biting mosquitoes. I don't want them getting hurt, and especially I don't want the kids like me to die.
I don't know how her relationship with altruism will change as she gets older, and I do think there are ways it will be hard for her to have parents who have strong unusual views. As we go I'm going to continue to try very hard not to pressure or manipulate her, while still giving advice and helping her explore her motivations here. I am, however, very proud of her today.
[1] I haven't listed this on our donations page and it doesn't count it towards our 50% goal because the donation was Lily's and not ours.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Listen to more EA content with The Nonlinear Library by Kat Woods
12:57
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Listen to more EA content with The Nonlinear Library, published by Kat Woods on the effective altruism forum.
Listen here: Spotify, Google Podcasts, Pocket Casts, Apple. Or, just search for "Nonlinear Library" in your preferred podcasting app.
We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.
In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.
Goal: increase the number of people who read EA research
An EA koan: if your research is high quality, but nobody reads it, does it have an impact?
Generally speaking, the theory of change of research is that you investigate an area, come to better conclusions, people read those conclusions, they make better decisions, all ultimately leading to a better world. So the answer is no. Barring some edge cases (1), if nobody reads your research, you usually won’t have any impact.
Research → Better conclusion → People learn about conclusion → People make better decisions → The world is better
Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA articles read, we’re increasing the impact of all of that content.
This is often relatively neglected because researchers typically prefer doing more research instead of promoting their existing output. Some EAs seem to think that if their article was promoted one time, in one location, such as the EA Forum, then surely most of the community saw it and read it. In reality, it is rare that more than a small percentage of the community will read even the top posts. This is an expected-value tragedy, when a researcher puts hundreds of hours of work into an important report which only a handful of people read, dramatically reducing its potential impact.
Here are some purely hypothetical numbers just to illustrate this way of thinking:
Imagine that you, a researcher, have spent 100 hours producing outstanding research that is relevant to 1,000 out of a total of 10,000 EAs.
Each relevant EA who reads your research will generate $1,000 of positive impact. So, if all 1,000 relevant EAs read your research, you will generate $1 million of impact.
You post it to the EA Forum, where posts receive 500 views on average. Let’s say, because your report is long, only 20% read the whole thing - that’s 100 readers. So you’ve created 100 × $1,000 = $100,000 of impact. Since you spent 100 hours and created $100,000 of impact, that’s $1,000 per hour - pretty good!
But if you were to spend, say 1 hour, promoting your report - for example, by posting links on EA-related Facebook groups - to generate another 100 readers, that would produce another $100,000 of impact. That’s $100,000 per marginal hour or ~$2,000 per hour taking into account the fixed cost of doing the original research.
Likewise, if another 100 EAs were to listen to your report while commuting, that would generate an incremental $100,000 of impact - at virtually no cost, since it’s fully automated.
In this illustrative example, you’ve nearly tripled your cost-effectiveness and impact with one extra hour spent sharing your findings and having a public system that turns it into audio for you.
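To make the arithmetic of this hypothetical explicit, here is a minimal sketch in Python that simply re-runs the illustrative numbers above (500 average views, 20% completion, $1,000 of impact per relevant reader, 100 hours of research); it is an illustration of the reasoning, not a real cost-effectiveness model:

```python
# Hypothetical impact arithmetic from the example above. All numbers are the
# post's illustrative assumptions, not real data.

hours_research = 100
impact_per_reader = 1_000  # dollars of impact per relevant reader

forum_readers = int(500 * 0.20)  # 500 average views, ~20% finish the report
promo_readers = 100              # extra readers from one hour of promotion
audio_readers = 100              # extra listeners via automated audio

baseline_impact = forum_readers * impact_per_reader
total_impact = (forum_readers + promo_readers + audio_readers) * impact_per_reader

print(f"Baseline: ${baseline_impact:,} over {hours_research} h "
      f"= ${baseline_impact / hours_research:,.0f}/hour")
print(f"With promotion and audio: ${total_impact:,} over {hours_research + 1} h "
      f"= ${total_impact / (hours_research + 1):,.0f}/hour")
```

Run as written, this prints a baseline of $1,000 per hour and roughly $2,970 per hour once the extra readers and listeners are included, which is where the "nearly tripled" figure comes from.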
Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long...
|
Dec 12, 2021 |
2019 AI Alignment Literature Review and Charity Comparison by Larks
01:58:59
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: 2019 AI Alignment Literature Review and Charity Comparison , published by Larks on the effective altruism forum.
Cross-posted to LessWrong here.
Introduction
As in 2016, 2017 and 2018, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments.
My aim is basically to judge the output of each organisation in 2019 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency.
I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) another existential risk capital allocation project, 2) the miracle of life and 3) computer games.
How to read this document
This document is fairly extensive, and some parts (particularly the methodology section) are the same as last year, so I don’t recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you.
If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well.
If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories.
Here are the un-scientifically-chosen hashtags:
Agent Foundations
AI_Theory
Amplification
Careers
CIRL
Decision_Theory
Ethical_Theory
Forecasting
Introduction
Misc
ML_safety
Other_Xrisk
Overview
Philosophy
Politics
RL
Security
Shortterm
Strategy
New to Artificial Intelligence as an existential risk?
If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper.
If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as, unlike this document, Jacob covers pre-2019 research and organises by topic, not organisation.
Research Organisations
FHI: The Future of Humanity Institute
FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here.
Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work.
In the past I have been very impressed with their work.
Research
Drexler's Reframing Superintelligence: Comprehensive AI Services as General Intelligence is a massive document arguing that superintelligent AI will be developed for individual discrete services for specific finite tasks, rather than as general-purpose agents. Basically the idea is that it makes more sense for people to develop specialised AIs, so these will happen first, and if/when we build AGI these services can help control it. To some extent this seems to match what is happening - we do have many specialised AIs - but on the other hand there are teams working directly on AGI, and often in ML 'build an ML system that does it...
|
Dec 12, 2021 |
What will 80,000 Hours provide (and not provide) within the effective altruism community? by Benjamin_Todd
08:37
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: What will 80,000 Hours provide (and not provide) within the effective altruism community?, published by Benjamin_Todd on the effective altruism forum.
There are many career services that would be useful to the effective altruism community, and unfortunately 80,000 Hours is not able to provide them all.
In this post, I aim to sum up what we intend to provide and what we can’t, to make it easier for other groups to fill these gaps.
80,000 Hours’ online content is also serving as one of the most common ways that people get introduced to the effective altruism community, but we’re not the ideal introduction for many types of people, which I also list in the section on online articles.
You can see our full plans in our annual review .
Target audience
Our aim is to do the most we can to fill the key skill gaps in the world’s most pressing problems. We think that is the best way we can help to improve the lives of others over the long term.
We think that the best way to do this is – given our small team – to initially specialise on a single target audience, and gradually expand the audience over time.
Given this, most of our effort (say 50%+) is on advice and support for English-speaking people age 20-35 who might be able to enter one of our current priority paths (say a 5%+ chance of success).
We also aim to put ~30% of our effort into other ways of addressing our priority problems (AI, biorisk, global priorities research, building EA, nuclear security, improving institutional decision-making, extreme climate risks) or potential priority problems, some of which we might class as priority paths in the future. 20% of our effort goes into a wider range of roles and problem areas.
(Edit: note that this allocation is in line with the views of EA Forum members and other proxies for 'core' EA community members.)
We think a wider range of people could consider our priority paths than is often assumed. For instance, some people have worried that this is aimed at too narrow an audience, say, people who attended one of the best 20 universities in the world. But one of our top paths is ‘AI policy’; we take this to include some junior roles in government and politics, and many people take these roles who haven’t attended a top 20 university. Another priority path is working at EA organisations, and the latest EA survey found that about half of the staff at those organisations did not attend a top 20 university.
We also aim to be useful to a broader range of readers than those who might pursue a priority path – as I’ll explain later – and we think much of our content is indeed relevant to them.
Still, this audience is clearly much narrower than everyone we’d like to get involved in effective altruism, and everyone already within effective altruism who could benefit from help with their career.
We also intend to expand the scope of this audience over time as our team grows (especially by extending the age range we focus on), though this will proceed gradually over a matter of years.
This means we need other groups to focus on other audiences. Some of the bigger gaps include:
People interested in problem areas pursued by those focused on near-term impacts, such as global health and factory farming; we don’t include career paths in these areas within our priority paths.
People who would prefer materials written in a language that’s not English, and those who want career advice specific to non-English speaking countries, especially outside the US and UK.
People older than 35, and especially over 40.
People younger than 20, and especially under 18.
People who are looking for advice on where to donate, or want to do part-time advocacy, but don't want to significantly change their careers.
People who could make an impactful career change but not within our priority paths or nearby options.
Ri...
|
Dec 12, 2021 |
Things I Learned at the EA Student Summit by Akash
10:29
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Things I Learned at the EA Student Summit, published by Akash on the effective altruism forum.
This weekend, I attended the EA Student Summit. Below, I’m summarizing some of my “key takeaways”—ideas that I found interesting, helpful, or thought-provoking. I’m dividing them into a few key themes and questions:
How to Learn About EA
How can we effectively learn about EA? Will Payne had some helpful suggestions, summarized below:
We don’t need to reinvent the wheel—there are already a lot of resources (and curated libraries of resources) out there. Oxford’s EA Introductory Fellowship reading list seems like a great place to start.
We should be explicit about updating our beliefs. If we change our mind about something, it’s useful for us to document it and share it with others.
We should strive to learn more effortfully. Will pointed out that it’s usually not enough to passively read an article or listen to a podcast—we’re more likely to remember information and benefit from it if we learn in more effortful ways. Some examples include summarizing an article in your own words, asking yourself reflection questions (and then answering them), or explaining an idea to others. This forum post is another example :)
Will’s talk also had one of my favorite suggestions from the summit. Instead of saying “I found this article interesting because it was about X”, Will suggests that we say “I found this article interesting because it suggests that we should do Y.” I think this is extremely clever, for at least two reasons.
First, I think it makes the person that we’re talking to more motivated and energized about the topic. There are thousands of interesting articles about interesting topics, but there are very few that directly try to make me think or act differently.
Second, I think it helps us recognize when things aren’t actually helpful. It’s easy to get lost thinking about interesting ideas that don’t actually have any impact on the choices we make. By asking “does this suggest that I should do something different or think about something differently?”, we might save ourselves a lot of time.
EA Resources that I Didn't Know About
I learned about a few useful EA resources:
The Center for Effective Altruism has a media specialist and a licensed social worker who serves as the EA community liaison. Anyone can contact them. Yes, that means you. Or me. Part of their job is literally to help us think, feel, and communicate better.
Aaron Gertler, who runs the EA forum, is willing to read/edit any potential forum posts.
Giving What We Can has a list of content ideas for blog posts.
These are just a few that I’ve learned about recently— I’m sure there are many more. This post from a few weeks ago has a more thorough list of EA-related organizations.
How often should you reach out to EAs and EA-related organizations?
I have an immense amount of respect for other EAs. These are literally people who are devoting their lives to solving the world’s biggest problems and finding the most effective ways to spend their time and money.
So naturally, I thought that these people—especially the “big name” people—would have far better things to do with their time than talk to me. I need to wait until I have a really impressive idea before I request the time of other EAs—I could distract them from discovering the next highly effective charity or the solution to AI safety!
Nearly all of my experiences at the summit went against this idea. People wanted to talk to me, and others, about raw, unpolished ideas. Almost every EA I spoke to—including the “big names”—seemed authentically and intrinsically motivated to talk to students about their interests and ideas. I honestly think this was my biggest surprise of the conference—there are so many EAs who would genuinely like to talk to you.
Now, there’s al...
|
Dec 12, 2021 |
Why I am probably not a longtermist by Denise_Melchin
12:13
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Why I am probably not a longtermist, published by Denise_Melchin on the effective altruism forum.
tl;dr: I am much more interested in making the future good, as opposed to long or big, as I neither think the world is great now nor am convinced it will be in the future. I am uncertain whether there are any scenarios which lock us into a world at least as bad as now that we can avoid or shape in the near future. If there are none, I think it is better to focus on “traditional neartermist” ways to improve the world.
I thought it might be interesting to other EAs why I do not feel very on board with longtermism, as longtermism is important to a lot of people in the community.
This post is about the worldview called longtermism. It does not describe a position on cause prioritisation. It is very possible for causes commonly associated with longtermism to be relevant under non-longtermist considerations.
I structured this post by crux and highlighted what kind of evidence or arguments would convince me that I am wrong, though I am keen to hear about others which I might have missed! I usually did not investigate my cruxes thoroughly. Hence, only ‘probably’ not a longtermist.
The quality of the long-term future
1. I find many aspects of utilitarianism uncompelling.
You do not need to be a utilitarian to be a longtermist. But I think depending on how and where you differ from total utilitarianism, you will probably not go ‘all the way’ to longtermism.
I very much care about handing the world off in a good state to future generations. I also care about people’s wellbeing regardless of when it happens. What I value less than a total utilitarian is bringing happy people into existence who would not have existed otherwise. This means I am not too fussed about humanity’s failure to become much bigger and spread to the stars. While creating happy people is valuable, I view it as much less valuable than making sure people are not in misery. Therefore I am not extremely concerned about the lost potential from extinction risks (but I very much care about their short-term impact), although that depends on how good and long I expect the future to be (see below).
What would convince me otherwise:
I not only care about pursuing my own values, but I would like to ensure that other people’s reflected values are implemented. For example, if it turned out that most people in the world really care about increasing the human population in the long term, I would prioritise it much more. However, I am less interested in the sum of individual preferences and more in the preferences of a wide variety of groups. This is to give more weight to rarer worldviews, and to avoid rewarding one group for outbreeding another or spreading its values in an imperialist fashion.
I also want to give the values of people who are suffering the most more weight. If they think the long-term future is worth prioritising over their current pain, I would take this very seriously.
Alternatively, convincing me of moral realism and the correctness of utilitarianism within that framework would also work. So far I have not seen a plain language explanation of why moral realism makes any sense, but it would probably be a good start.
If the world suddenly drastically improved and everyone had as good a quality of life as my current self, I would be happy to focus on making the future big and long instead of improving people’s lives.
2. I do not think humanity is inherently super awesome.
A recurring theme in a lot of longtermist worldviews seems to be that humanity is wonderful and should therefore exist for a long time. I do not consider myself a misanthrope; I expect my views to be average for Europeans. Humanity has many great aspects which I like to see thrive.
But I find the overt enthusiasm for humanity most longtermis...
|
Dec 12, 2021 |
We need alternatives to Intro EA Fellowships by Ashley Lin
20:33
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: We need alternatives to Intro EA Fellowships, published by Ashley Lin on the effective altruism forum.
Thanks to Kuhan Jeyapragasan, Akash Wasil, Olivia Jimenez, and Lizka Vaintrob for helpful comments / conversations.
TLDR: I think group organizers have become anchored on the idea of Intro EA Fellowships as 8-week small-group things, which might actually be a sub-optimal way to introduce promising students to the EA world. We need new alternatives that are exciting, immersive, and enable EA-interested students to move as quickly as they’d like through the EA funnel.
The Intro EA Fellowship (also known as the Arete Fellowship) is a program where a small group of fellows and a facilitator meet multiple times to learn about some aspect of effective altruism. Stanford EA’s virtual Intro EA Fellowship was the first structured EA program I was part of, and it was where I realized EA was a thriving community with real humans in it. I don’t think my experience is unique. In uni EA groups around the world, the Intro EA Fellowship is one of the core programs offered to students.
Some context on Intro EA Fellowships:
Fellowships are usually structured in cohorts of 2-5 fellows and a facilitator, who meet weekly for 6-10 weeks.
Each week, fellows do some amount of readings and participate in a discussion with their cohort.
There are often also social activities that allow fellows to get to know each other better and cultivate friendships.
Uni group organizers often use Intro EA Fellowships as an early/mid part of the funnel for highly-engaged EA students.
Intro EA Fellowships exist in so many places because they have some upsides, some of which I list below.
That said, I also think there are some strong downsides to the Intro EA Fellowship model as it currently exists. As a participant, I wasn’t particularly impressed by my Intro EA Fellowship experience -- I’m not sure if my fellowship cohort actually finished (people got busy), and I think there’s a chance I would’ve bounced off the EA community had I not attended a summer EA retreat for high school students a couple of months later. Now, as I help organize Penn EA’s Intro Fellowship cohort, I’m noticing that my frustrations as an Intro Fellowship participant weren’t unique to me.
In this post, I share some of those frustrations (downsides to the Intro Fellowship as I see them). I try to steelman the argument for why Intro EA Fellowships should exist. Finally, I introduce some Intro EA Fellowship alternatives that I'd be excited to see uni group organizers prototype. I’m personally planning to try some of these -- if you’d like to coordinate, please reach out at ashley11@wharton.upenn.edu!
Downsides of Intro Fellowships
Much of this is observed through my own experience being part of / facilitating fellowships. I also think Intro Fellowship experiences are highly variable depending on one’s cohort and facilitator, and I think there’s a good chance I’ve had a relatively worse Intro EA Fellowship experience than others:
The standard 8-week fellowship timeline is too slow for people who are really excited early on and want to move faster. In these scenarios, the fellowship might actually slow people down and cause their excitement to fade (an hour-long conversation and some readings each week is a pretty sluggish pace) -- worst-case scenario, it might cause some promising people to lose interest.
I want to cultivate a fast-paced vibe among fellows, instead of a discussion-group like vibe. (To me “fast-paced,” when applied to something that seems interesting, cultivates genuine excitement and deep curiosity). For example, when I first found 80,000 Hours, I pulled an all-nighter reading it and was absolutely ecstatic that people had spent so much time thinking about triaging problems and how one can do the most good in the world. In my early EA days...
|
Dec 12, 2021 |
500 Million, But Not A Single One More by jai
04:30
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: 500 Million, But Not A Single One More, published by jai on the effective altruism forum.
This is a linkpost for http://blog.jaibot.com/?p=413
We will never know their names.
The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.
It was in Ancient Egypt, where it attacked slave and pharaoh alike. In Rome, it effortlessly decimated armies. It killed in Syria. It killed in Moscow. In India, five million dead. It killed a thousand Europeans every day in the 18th century. It killed more than fifty million Native Americans. From the Peloponnesian War to the Civil War, it slew more soldiers and civilians than any weapon, any soldier, any army. (Not that this stopped the most foolish and empty souls from attempting to harness the demon as a weapon against their enemies.)
Cultures grew and faltered, and it remained. Empires rose and fell, and it thrived. Ideologies waxed and waned, but it did not care. Kill. Maim. Spread. An ancient, mad god, hidden from view, that could not be fought, could not be confronted, could not even be comprehended. Not the only one of its kind, but the most devastating.
For a long time, there was no hope — only the bitter, hollow endurance of survivors.
In China, in the 10th century, humanity began to fight back.
It was observed that survivors of the mad god’s curse would never be touched again: They had taken a portion of that power into themselves, and were so protected from it. Not only that, but this power could be shared by consuming a remnant of the wounds. There was a price, for you could not take the god’s power without first defeating it — but a smaller battle, on humanity’s terms.
By the 16th century, the technique spread to India, then across Asia, the Ottoman Empire and, in the 18th century, Europe. In 1796, a more powerful technique was discovered by Edward Jenner.
An idea began to take hold: Perhaps the ancient god could be killed.
A whisper became a voice; a voice became a call; a call became a battle cry, sweeping across villages, cities, nations. Humanity began to cooperate, spreading the protective power across the globe, dispatching masters of the craft to protect whole populations. People who had once been sworn enemies joined in a common cause for this one battle. Governments mandated that all citizens protect themselves, for giving the ancient enemy a single life would put millions in danger.
And, inch by inch, humanity drove its enemy back. Fewer friends wept; fewer neighbors were crippled; fewer parents had to bury their children.
At the dawn of the 20th century, for the first time, humanity banished the enemy from entire regions of the world. Humanity faltered many times in its efforts, but there were individuals who never gave up, who fought for the dream of a world where no child or loved one would ever fear the demon ever again. Viktor Zhdanov, who called for humanity to unite in a final push against the demon; the great tactician Karel Raška, who conceived of a strategy to annihilate the enemy; Donald Henderson, who led the efforts in those final days.
The enemy grew weaker. Millions became thousands, thousands became dozens. And then, when the enemy did strike, scores of humans came forth to defy it, protecting all those whom it might endanger.
The enemy’s last attack in the wild was on Ali Maow Maalin, in 1977. For months afterwards, dedicated humans swep...
|
Dec 12, 2021 |
Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement by ThomasWoodside, jessica_mccurdy
12:27
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement, published by ThomasWoodside, jessica_mccurdy on the effective altruism forum.
Written by Jessica McCurdy and Thomas Woodside
Overview
Yale has been one of the only groups advocating for a selective fellowship in the past. However, after we noticed a couple of instances of people who had barely been accepted to the fellowship becoming extremely engaged with the group, we decided to do an analysis of our application scores and fellows’ eventual engagement. We found no correlation.
We think this shows the possibility that some of the people we have rejected in the past could have become extremely engaged members, which seems like a lot of missed value. We are still doing more analysis using different metrics and methods. For now we are tentatively recommending that groups do not follow our previous advice about being selective if they have the capacity to take on more fellows. We recommend either guaranteeing future acceptance to those over a baseline or encouraging applicants to apply to EA virtual programs if limited by capacity. This is not to say that there is no good way of selecting fellows but rather that ours in particular was not effective.
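The post does not spell out the statistics behind "we found no correlation". As a rough, purely illustrative sketch -- with made-up scores and an assumed ordinal coding of the engagement ranking described later in the post -- a rank correlation check could look like this:

```python
# Purely illustrative sketch -- these scores and engagement codes are made up,
# and Yale EA's actual analysis may have used different data and methods.
from scipy.stats import spearmanr

# Hypothetical application scores (higher = stronger application)
scores = [7.5, 9.0, 6.0, 8.0, 5.5, 9.5, 6.5, 7.0]

# Hypothetical eventual engagement, coded ordinally (0 = none, ..., 4 = most engaged)
engagement = [1, 0, 3, 1, 4, 0, 2, 1]

rho, p_value = spearmanr(scores, engagement)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
```

With a small cohort, a near-zero rho and a large p-value would be consistent with the "no correlation" finding reported above.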
Rationale for Being Selective & Relevant Updates
These have been our reasons for being selective in the past, along with our updated thoughts:
Only the most excited applicants participate (less engaged fellows who have poor attendance or involvement can set unwanted norms)
By emphasizing the time commitment in the interviews and making it easy for applicants to postpone doing the fellowship, we hope to self-select for this.
Fellows are incentivized to show up and be actively engaged (since they know they are occupying a spot another person did not receive)
The application and interview process alone should create the feeling of selectiveness even if we don’t end up being that selective.
We only need a few moderators that we are confident will be friendly, welcoming, and knowledgeable about EA
We were lucky enough to have several previous fellows who fit this description.
Now that there is training available for facilitators we hope to skill up new ones quickly.
We made it a lot easier to become and be a facilitator by separating that role from the organizers role.
We create a stronger sense of community amongst Fellows
This is still a concern
Each Fellow can receive an appropriate amount of attention since organizers get to know each one individually
This is still a concern, though in the past Fellowship organizers were also taking on many different roles, and we now have one person whose only role is to manage the fellowship.
We don’t strain our organizing capacity and can run the Fellowship more smoothly
This is still a concern but the previous point also applies here
Overall, we still think these are good and important reasons for keeping the fellowship smaller. However, we are currently thinking that the possibility of rejecting an applicant who would have become really involved outweighs these concerns.
That said, there is an argument to be made that these people would have found a way to be involved anyway.
How we Measured Engagement and Why we Chose it
How we measured it
We brainstormed ways of being engaged with the group and estimated a general ranking for them. We ended up with:
Became YEA President >
Joined YEA Exec >
Joined the Board OR
Became a regular attendee of events and discussion groups OR
Became a mentor (after graduating) >
Became a Fellowship Facilitator (who is not also on the board) OR
Did the In-Depth Fellowship >
Became a top recruiter for the Fellowship OR
Had multiple 1-1s outside of the fellowship OR
Asked to be connected to the EA community in their post-graduation location OR
Attended the Student Summit >
Came to at least ...
|
Dec 12, 2021 |
Long-Term Future Fund: April 2019 grant recommendations by Habryka
01:15:46
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Long-Term Future Fund: April 2019 grant recommendations, published by Habryka on the effective altruism forum.
Please note that the following grants are only recommendations, as all grants are still pending an internal due diligence process by CEA.
This post contains our allocation and some explanatory reasoning for our Q1 2019 grant round. Earlier this year we opened an application for grant requests, which was open for about one month, after which we received a large, unanticipated donation of about $715k. This caused us to reopen the application for another two weeks. We then used a mixture of independent voting and consensus discussion to arrive at our current grant allocation.
What is listed below is only a set of grant recommendations to CEA, who will run these by a set of due-diligence tests to ensure that they are compatible with their charitable objectives and that making these grants will be logistically feasible.
Grant Recipients
Each grant recipient is followed by the size of the grant and their one-sentence description of their project.
Anthony Aguirre ($70,000): A major expansion of the Metaculus prediction platform and its community
Tessa Alexanian ($26,250): A biorisk summit for the Bay Area biotech industry, DIY biologists, and biosecurity researchers
Shahar Avin ($40,000): Scaling up scenario role-play for AI strategy research and training; improving the pipeline for new researchers
Lucius Caviola ($50,000): Conducting postdoctoral research at Harvard on the psychology of EA/long-termism
Connor Flexman ($20,000): Performing independent research in collaboration with John Salvatier
Ozzie Gooen ($70,000): Building infrastructure for the future of effective forecasting efforts
Johannes Heidecke ($25,000): Supporting aspiring researchers of AI alignment to boost themselves into productivity
David Girardo ($30,000): A research agenda rigorously connecting the internal and external views of value synthesis
Nikhil Kunapuli ($30,000): A study of safe exploration and robustness to distributional shift in biological complex systems
Jacob Lagerros ($27,000): Building infrastructure to give X-risk researchers superforecasting ability with minimal overhead
Lauren Lee ($20,000): Working to prevent burnout and boost productivity within the EA and X-risk communities
Alex Lintz ($17,900): A two-day, career-focused workshop to inform and connect European EAs interested in AI governance
Orpheus Lummis ($10,000): Upskilling in contemporary AI techniques, deep RL, and AI safety, before pursuing a ML PhD
Vyacheslav Matyuhin ($50,000): An offline community hub for rationalists and EAs
Tegan McCaslin ($30,000): Conducting independent research into AI forecasting and strategy questions
Robert Miles ($39,000): Producing video content on AI alignment
Anand Srinivasan ($30,000): Formalizing perceptual complexity with application to safe intelligence amplification
Alex Turner ($30,000): Building towards a “Limited Agent Foundations” thesis on mild optimization and corrigibility
Eli Tyre ($30,000): Broad project support for rationality and community building interventions
Mikhail Yagudin ($28,000): Giving copies of Harry Potter and the Methods of Rationality to the winners of EGMO 2019 and IMO 2020
CFAR ($150,000): Unrestricted donation
MIRI ($50,000): Unrestricted donation
Ought ($50,000): Unrestricted donation
Total distributed: $923,150
Grant Rationale
Here we explain the purpose for each grant and summarize our reasoning behind their recommendation. Each summary is written by the fund member who was most excited about recommending the relevant grant (subject to some constraints on who had time available to write up their reasoning). These differ a lot in length, based on how much available time the different fund members had to explain their reasoning.
Writeups by Helen Toner
Al...
|
Dec 12, 2021 |
When can I eat meat again? by clairey
39:29
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: When can I eat meat again?, published by clairey on the effective altruism forum.
By Claire Yip, co-founder of Cellular Agriculture UK. These views are my own.
Summary
Timeline: When we can expect highly similar cost-competitive alternatives to animal products
Timeline
There is a lot of uncertainty around when we will be able to eat meat grown from cells, and how we should divide our efforts between that, plant-based alternatives, and other forms of animal advocacy. This post seeks to give sensible, unbiased views on the future of alternative proteins.
However, these views are uncertain too: I have c.40-60% confidence. These estimates are not set in stone. Factors like investment and activity would quicken progress, but things will also probably take longer than we expect, for unexpected reasons, and we don’t know everything that we don’t know.
These are estimates for when alternatives become cost-competitive; we will be able to buy these products earlier than that.
In the next 5-10 years, expect to see plant-based versions of processed meats become more widespread and delicious. Yay, chicken nuggets! You’re also in luck if you want plant-based or animal-free scrambled eggs, omelettes, milk, cream, yoghurt, or whey protein powder.
These plant-based products will get even better in the next 10-20 years, especially as they’re blended e.g. with collagen (produced without animals), or real meat cultivated from cells. Decent animal-free butter, cheese, and whole egg products might become a reality! Pet food produced from animal and microorganism cells will also be more easily available.
If you want to eat unprocessed whole meat like bacon or sashimi without hurting too many animals or blowing your grocery budget, it looks like you’ll have to wait a few decades (30-50 years).
My time estimates are mostly based on private conversations with plant-based and cellular agriculture companies, at conferences and through writing a report on low-cost cell culture media for the Good Food Institute, as well as my understanding of the technical progress needed for each technology. However, my views do not represent those of GFI.
Actions you can take
Donors: If you want to donate to this space, promising recipients are the Good Food Institute and New Harvest (for open access research into cellular and acellular agriculture specifically).
Farmed Animal Funders has highly tentative suggestions on how philanthropists might allocate donations/funding to plant-based alternatives.
If you want to work in this space:
Most companies are hungry for scientific and engineering talent and will continue to hire.
Relevant disciplines/skills for plant-based meat include: biochemistry, food science, plant biology, chemical engineering.
Relevant disciplines/skills for a/cellular agriculture include: biochemistry, food science, plant biology, chemical engineering, tissue engineering, synthetic biology, bioreactor engineering, cell culture.
Software engineering will probably become more useful in automation and computational modelling.
Experience required varies: some ask for a few years of experience in a lab while others may only hire PhDs.
There are more job openings at plant-based companies, although public interest is peaking, so competition is also high (so replaceability might be a concern).
These are hugely varied, from working on production lines to operations, HR, data science, etc.
Non-scientific roles at acellular and cellular agriculture companies will be more widely available as they approach commercialisation, in 3-5 years.
CellAgri, GFI, and 80,000 Hours maintain lists of job opportunities.
Students: GFI has a guide for students which contains career profiles.
They also have quarterly career calls.
Researchers/scientists: Academic research seems valuable, partly because it is often open access while s...
|
Dec 12, 2021 |
Introducing LEEP: Lead Exposure Elimination Project by Jack, LuciaC
09:32
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Introducing LEEP: Lead Exposure Elimination Project, published by Jack, LuciaC on the effective altruism forum.
We are excited to announce the launch of Lead Exposure Elimination Project (LEEP), a new EA organisation incubated by Charity Entrepreneurship. Our mission is to reduce lead poisoning, which causes significant disease burden worldwide. We aim to achieve this by advocating for lead paint regulation in countries with large and growing burdens of lead poisoning from paint.
In this post, we make the case for lead exposure reduction as a priority, and outline our plan to address this problem.
The Problem
Others in the effective altruism community have already identified that working on lead poisoning could be a high-impact opportunity (see here, here, and here). Through the Importance, Tractability, Neglectedness framework, we unpack the reasoning for prioritising lead exposure interventions, and for our approach of advocating for the introduction of lead paint laws.
Importance
Lead poisoning has substantial health and economic costs, and lead paint is a primary contributor [1]. In terms of individual impacts, lead exposure has a number of effects. Even a low level of lead exposure can lead to mental disability and IQ loss, as well as increased rates of mental illness and psychopathology and significantly reduced lifetime earnings capacity [2, 3, 4]. Lead also has effects on behaviour and criminal tendencies; in particular, it has a large impact on the prevalence of violent crime [5]. In adults, lifetime lead exposure is an important risk factor for renal disease and cardiovascular disease, including hypertension and coronary artery disease [6, 7]. Higher levels of exposure can affect all organ systems, and even result in respiratory difficulties, seizures, coma, and death [5].
Lead poisoning primarily affects children, and does so at a massive scale. UNICEF reports that 815 million children have blood lead levels above 5 µg/dL - a sufficient level for neurodevelopmental effects and reduced IQ [8]. The vast majority live in low and middle-income countries. Put another way, one in three children are currently affected by lead poisoning to some degree.
In addition to disability, lead poisoning causes 1 million deaths per year. In total, it accounts for 22 million DALYs every year, which means that lead poisoning is responsible for approximately 1% of the global disease burden [9].
In terms of lost earnings, lead poisoning impacts the world economy to the level of approximately $1 trillion per year [4]. This amounts to a loss of 1.2% of world GDP. These losses are concentrated in low and middle-income countries, where they can amount to as much as 5-8% of GDP, suggesting that lead exposure can be a significant barrier to economic development and poverty reduction.
In short, the problem of lead poisoning is a significant one.
Neglectedness
At present, while all countries except for one have banned leaded petrol, 61% of countries have no lead paint regulations whatsoever [1]. In many of these primarily low and middle-income countries the burden of disease from lead poisoning is still significant. In high-income countries, this is a less severely neglected area, as most countries have introduced regulations banning leaded petrol and lead paint.
While there are some organisations working to address this issue in low and middle-income countries, including IPEN, ToxicsLink, and Pure Earth, many countries with significant lead burdens remain neglected by other actors. LEEP aims to fill this gap, and target these neglected countries.
Tractability
This is the most uncertain aspect of working on lead poisoning, given the uncertainty around the success of policy change interventions. However, there are several reasons in favour of the tractability of policy change to ban t...
|
Dec 12, 2021 |
2020 AI Alignment Literature Review and Charity Comparison by Larks
02:12:57
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: 2020 AI Alignment Literature Review and Charity Comparison, published by Larks on the effective altruism forum.
Write a Review
cross-posted to LW here.
Introduction
As in 2016, 2017, 2018, and 2019, I have attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to that which GiveWell performs for global health charities, and somewhat similar to a securities analyst with regards to possible investments.
My aim is basically to judge the output of each organisation in 2020 and compare it to their budget. This should give a sense of the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2021 budgets to get a sense of urgency.
I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I have overlooked or misconstrued. As ever I am painfully aware of the various corners I have had to cut due to time constraints from my job, as well as being distracted by 1) other projects, 2) the miracle of life and 3) computer games.
This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who all do a lot of work on other issues, which I attempt to cover but only very cursorily.
How to read this document
This document is fairly extensive, and some parts (particularly the methodology section) are largely the same as last year, so I don’t recommend reading from start to finish. Instead, I recommend navigating to the sections of most interest to you.
If you are interested in a specific research organisation, you can use the table of contents to navigate to the appropriate section. You might then also want to Ctrl+F for the organisation acronym in case they are mentioned elsewhere as well. Papers listed as ‘X researchers contributed to the following research led by other organisations’ are included in the section corresponding to their first author and you can Ctrl+F to find them.
If you are interested in a specific topic, I have added a tag to each paper, so you can Ctrl+F for a tag to find associated work. The tags were chosen somewhat informally so you might want to search more than one, especially as a piece might seem to fit in multiple categories.
Here are the un-scientifically-chosen hashtags:
AgentFoundations
Amplification
Capabilities
Corrigibility
DecisionTheory
Ethics
Forecasting
GPT-3
IRL
Misc
NearAI
OtherXrisk
Overview
Politics
RL
Strategy
Textbook
Transparency
ValueLearning
New to Artificial Intelligence as an existential risk?
If you are new to the idea of General Artificial Intelligence as presenting a major risk to the survival of human value, I recommend this Vox piece by Kelsey Piper, or for a more technical version this by Richard Ngo.
If you are already convinced and are interested in contributing technically, I recommend this piece by Jacob Steinhardt, as unlike this document Jacob covers pre-2019 research and organises by topic, not organisation; or this from Critch & Krueger; or this from Everitt et al., though it is a few years old now.
Research Organisations
FHI: The Future of Humanity Institute
FHI is an Oxford-based Existential Risk Research organisation founded in 2005 by Nick Bostrom. They are affiliated with Oxford University. They cover a wide variety of existential risks, including artificial intelligence, and do political outreach. Their research can be found here.
Their research is more varied than MIRI's, including strategic work, work directly addressing the value-learning problem, and corrigibility work - as well as work on other Xrisks.
They run a Research Scholars Program, where people can join them to do research at FHI. There is a f...
|
Dec 12, 2021 |
EA Forum Creative Writing Contest: $22,000 in prizes for good stories by Aaron Gertler
09:34
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: EA Forum Creative Writing Contest: $22,000 in prizes for good stories, published by Aaron Gertler on the effective altruism forum.
Update: The contest is now closed! All submissions made by 11:59 PST on Friday, October 29 will be considered. This includes a few posts whose authors had trouble submitting, but contacted me about before the deadline.
Stories are a key part of how EA has grown since its beginning. Some examples:
The Drowning Child and the Expanding Circle, which probably did more to launch the EA movement than any other piece of writing
Harry Potter and the Methods of Rationality, which introduced many readers to important ideas about good epistemics and AI risk
500 Million, But Not A Single One More, which has been read aloud in conference halls for hundreds of people and is one of the first essays in the EA Handbook
The Fable of the Dragon-Tyrant, which seems to have been extremely influential for one of the world’s most prominent entrepreneurs (who has since given tens of millions of dollars to various EA-adjacent causes)
There’s a lot of “rational fiction” out there — stories about people thinking clearly to solve problems. Many of those stories also incorporate EA themes. But they tend to reveal their ideas over dozens of chapters, making it hard for someone to pick up on those themes unless they’re willing to dedicate many hours of time.
We’d like to see creative work that “gets to the point” quickly — stories that, in a single sitting, might inspire someone to find out more about effective altruism, whether that means the whole movement or a single idea/cause area/intervention.
So we’re running a contest! We want to see you write or share stories and creative nonfiction with EA themes. And we’ve added prizes to sweeten the deal.[1]
Notably, your work doesn’t have to use EA jargon or cover a popular cause area, as long as it gets across the core idea of "using evidence and reason to help others effectively".
That said, it doesn't hurt if the work references popular EA topics in some way, or tries to directly inspire readers to find out more about EA. For example, HPMOR includes a note along the lines of “to learn what Harry knows, read the LessWrong Sequences”. We’d be happy to see stories that would justify the note “to learn what X knows, join a Virtual Program”.[2]
What kinds of content can I submit?
We’ll have two categories:
Fictional stories, like “The Fable of the Dragon-Tyrant”
Creative nonfiction, like “500 Million, But Not A Single One More”
No need to include the category in your submission.
What are the prizes?
Update: CEA initially funded $10,000 in prizes. However, a generous donor (Owen Cotton-Barratt) added another $12,000.
We’re now offering $22,000 in total prize money, with the following structure:
First prize (among all entries): $10,000
Two second prizes: $3,000 each
If first prize goes to a fiction entry, at least one second prize will go to a nonfiction entry, and vice-versa
Four third prizes: $1,000 each
Eight honorable mentions: $250 each
What’s the deadline?
Entries must be published on the EA Forum no later than 11:59 pm PST on Friday, October 29 (initial deadline extended by two weeks after a few people said it seemed short).
If you’d like to be kind to the judges and let us space out our reading over time, you can publish earlier :-)
How do I submit content?
We recommend publishing your entry on the EA Forum and tagging it with Creative Writing Contest.
We want lots of people to read and discuss your submissions — we think the Forum will be a really fun place if good stories start showing up. However, we won’t use upvotes or comments as part of our process for choosing a winner.
If you'd strongly prefer not to publish the work for any reason (including the desire to submit it elsewhere), you can submit it through this f...
|
Dec 12, 2021 |
Survey on AI existential risk scenarios by SamClarke, Alexis Carlier, jonasschuett
13:00
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Survey on AI existential risk scenarios, published by SamClarke, Alexis Carlier, jonasschuett on the effective altruism forum.
Cross-posted to LessWrong.
Summary
In August 2020, we conducted an online survey of prominent AI safety and governance researchers. You can see a copy of the survey at this link.[1]
We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts, CHAI, CLR, CSER, CSET, FHI, FLI, GCRI, MILA, MIRI, Open Philanthropy and PAI) and a number of independent researchers. We received 75 responses, a response rate of 56%.
The survey aimed to identify which AI existential risk scenarios[2] (which we will refer to simply as “risk scenarios”) those researchers find most likely, in order to (1) help with prioritising future work on exploring AI risk scenarios, and (2) facilitate discourse and understanding within the AI safety and governance community, including between researchers who have different views.
In our view, the key result is that there was considerable disagreement among researchers about which risk scenarios are the most likely, and high uncertainty expressed by most individual researchers about their estimates.
This suggests that there is a lot of value in exploring the likelihood of different AI risk scenarios in more detail, especially given the limited scrutiny that most scenarios have received. This could look like:
Fleshing out and analysing the scenarios mentioned in this post which have received less scrutiny.
Doing more horizon scanning or trying to come up with other risk scenarios, and analysing them.
At this time, we are only publishing this abbreviated version of the results. We have a version of the full results that we may publish at a later date. Please contact one of us if you would like access to this, and include a sentence on why the results would be helpful or what you intend to use them for.
We welcome feedback on any aspects of the survey.
Motivation
It has been argued that AI could pose an existential risk. The original risk scenarios were described by Nick Bostrom and Eliezer Yudkowsky. More recently, these have been criticised, and a number of alternative scenarios have been proposed. There has been some useful work exploring these alternative scenarios, but much of this is informal. Most pieces are only presented as blog posts, with neither the detail of a book, nor the rigour of a peer-reviewed publication. For further discussion of this dynamic, see work by Ben Garfinkel, Richard Ngo and Tom Adamczewski.
The result is that it is no longer clear which AI risk scenarios experts find most plausible. We think this state of affairs is unsatisfactory for at least two reasons. First, since many of the proposed scenarios seem underdeveloped, there is room for further work analyzing them in more detail. But this is time-consuming and there are a wide range of scenarios that could be analysed, so knowing which scenarios leading experts find most plausible is useful for prioritising this work. Second, since the views of top researchers will influence the views of the broader AI safety and governance community, it is important to make the full spectrum of views more widely available. The survey is intended to be a first step in this direction.
The survey
We asked researchers to estimate the probability of five AI risk scenarios, conditional on an existential catastrophe due to AI having occurred. There was also a catch-all “other scenarios” option.
These were the five scenarios we asked about, and the descriptions we gave in the survey:
"Superintelligence"
A single AI system with goals that are hostile to humanity quickly becomes sufficiently capable for complete world domination, and causes the future to contain very little of what we value, as described in “Superintelligenc...
|
Dec 12, 2021 |
How can we make Our World in Data more useful to the EA community? by EdMathieu
01:14
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: How can we make Our World in Data more useful to the EA community?, published by EdMathieu on the effective altruism forum.
I work at Our World in Data, where we try to make research and data on the world's largest problems more accessible and understandable.
I attended EA Global this past weekend, where I received very interesting input from many lovely people on potential improvements. But I thought it'd also be worth asking here to get wider feedback. I'm interested in all the following:
Low-hanging 'data fruits': simple datasets or charts that you know to be readily available somewhere and that would add significant value, but that aren't already listed here.
High-hanging fruits: things we could add to the website in the medium term with a lot more work (new subjects, larger datasets, data that needs a lot of cleaning, etc.)
Imaginary fruits: what you'd like to see on OWID in your wildest dreams (e.g. global population projections to the year 10,000 under various scenarios).
Thank you!
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Toby Ord’s ‘The Precipice’ is published! by matthew.vandermerwe
03:36
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Toby Ord’s ‘The Precipice’ is published!, published by matthew.vandermerwe on the effective altruism forum.
Write a Review
The Precipice: Existential Risk and the Future of Humanity is out today. I’ve been working on the book with Toby for the past 18 months, and I’m excited for everyone to read it. I think it has the potential to make a profound difference to the way the world thinks about existential risk.
How to get it
It's out in the UK on March 5 and in the US on March 24
An audiobook, narrated by Toby himself, is out March 24
You can buy it on Amazon now, or at theprecipice.com/purchase
You can download the opening chapters for free by signing up to the newsletter at www.theprecipice.com
What you can do
Read the book
Talk about it with your friends and family, or share quotes you like on social media
If you enjoy it, consider writing a review on Amazon or Goodreads
Summary of the book
Part One: The Stakes
Toby places our time within the broad sweep of human history: showing how far humanity has come in 2,000 centuries, and where we might go if we survive long enough. He outlines the major transitions in our past—the Agricultural, Scientific, and Industrial Revolutions. Each is characterised by dramatic increases in our power over the natural world, and together they have yielded massive improvements in living standards. During the twentieth century, with the detonation of the atomic bomb, humanity entered a new era. We gained the power to destroy ourselves, without the wisdom to ensure that we don’t. This is the Precipice, and how we navigate this period will determine whether humanity has a long and flourishing future, or no future at all. Toby introduces the concept of existential risk—risks that threaten to destroy humanity’s longterm potential. He shows how the case for safeguarding humanity from these risks draws support from a range of moral perspectives. Yet it remains grossly neglected—humanity spends more each year on ice cream than we do on protecting our future.
Part Two: The Risks
Toby explores the science behind the risks we face. In Natural Risks, he considers threats from asteroids & comets, supervolcanic eruptions, and stellar explosions. He shows how we can use humanity’s 200,000 year history to place strict bounds on how high the natural risk could be. In Anthropogenic Risks, he looks at risks we have imposed on ourselves in the last century, from nuclear war, extreme climate change, and environmental damage. In Future Risks, he turns to threats that are on the horizon from emerging technologies, focusing in detail on engineered pandemics, unaligned artificial intelligence, and dystopian scenarios.
Part Three: The Path Forward
Toby surveys the risk landscape and gives his own estimates for each risk. He also provides tools for thinking about how they compare and combine, and for how to prioritise between risks. He estimates that nuclear war and climate change each pose more risk than all the natural risks combined, and that risks from emerging technologies are higher still. Altogether, Toby believes humanity faces a 1 in 6 chance of existential catastrophe in the next century. He argues that it is in our power to end these risks today, and to reach a place of safety. He outlines a grand strategy for humanity, provides actionable policy and research recommendations, and shows what each of us can do. The book ends with an inspiring vision of humanity’s potential, and what we might hope to achieve if we navigate the risks of the next century.
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Everyday Longtermism by Owen_Cotton-Barratt
14:13
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Everyday Longtermism, published by Owen_Cotton-Barratt on the effective altruism forum.
This post is about a question:
What does longtermism recommend doing in all sorts of everyday situations?
I've been thinking (on and off) about versions of this question over the last year or two. Properly speaking, I don't want sharp answers which try to give the absolute best actions in various situations (which are likely to be extremely context dependent and perhaps also weird or hard to find), but good blueprints for longtermist decision-making in everyday situations: pragmatic guidance which will tend to produce good outcomes if followed.
The first part of the post explains why I think this is an important question to look into. The second part talks about my current thinking and some guess answers: that everyday longtermism might involve seeking to improve decision-making all around us (skewing to more important decision-making processes), while abiding by commonsense morality.
A lot of people provided some helpful thoughts in conversation or on old drafts; interactions that I remember as particularly helpful came from: Nick Beckstead, Anna Salamon, Rose Hadshar, Ben Todd, Eliana Lorch, Will MacAskill, Toby Ord. They may not endorse my conclusions, and in any case all errors, large and small, remain my own.
Motivations for the question
There are several different reasons for wanting an answer to this. The most central two are:
Strong longtermism says that the morally right thing to do is to make all decisions according to long-term effects. But for many many decisions it's very unclear what that means.
At first glance the strong longtermist stance seems like it might recommend throwing away all of our regular moral intuitions (since they're not grounded in long-term effects). This could leave some dangerous gaps; we should look into whether they get rederived from different foundations, or if something else should replace them.
More generally it just seems like if longtermism is important we should seek a deep understanding of it, and for that it's good to look at it from many angles (and everyday decisions are a natural and somewhat important class).
Having good answers to the question of everyday longtermism might be very important for the memetics / social dynamics of longtermism.
People encountering and evaluating an idea that seems like it's claiming broad scope of applicability will naturally examine it from lots of angles.
Two obvious angles are "what does this mean for my day-to-day life?" and "what would it look like if everyone was on board with this?".
Having good and compelling answers to these could be helpful for getting buy-in to the ideas.
I think an action-guiding philosophy is at an advantage in spreading if there are lots of opportunities for people to practice it, to observe when others are/aren’t following it, and to habituate themselves to a self-conception as someone who adheres to it.
For longtermism to get this advantage, it needs an everyday version. That shouldn't just provide a fake/token activity, but meaningful practice that is substantively continuous with the type of longtermist decision-making which might have particularly large/important long-term impacts.
If longtermism got to millions or tens of millions of supporters -- as seems plausible on timescales of a decade or three -- it could be importantly bottlenecked on what kind of action-guiding advice to give people.
A third more speculative motivation is that the highest-leverage opportunities may be available only at the scale of individual decisions, so having better heuristics to help identify them might be important. The logic is outlined in the diagram below. Suppose opportunities naturally arise at many different levels of leverage (value out per unit of effort in) and scales (how much effort...
|
Dec 12, 2021 |
Empirical data on value drift by Joey
08:04
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Empirical data on value drift, published by Joey on the effective altruism forum.
Write a Review
Why It’s Important to Know the Risk of Value Drift
The concept of value drift is that over time, people will become less motivated to do altruistic things. This is not to be confused with changing cause areas or methods of doing good. Value drift has a strong precedent in other related areas, both ethical commitments (such as being vegetarian) and things that generally take willpower (such as staying at a healthy weight).
Value drift seems very likely to be a concern for many EAs, and if it were a major concern, it would substantially affect career and donation plans.
For example, if value drift rarely happens, putting money into a savings account with the intent of donating it might be basically as good as putting it into a donor-advised fund. However, if the risk of value drift is higher, a dollar in a savings account is more likely to later be used for non-altruistic reasons and thus not nearly as good as a dollar put into a donor-advised fund, where it’s very hard not to donate it to a registered charity.
In a career context, a plan such as building career capital for 8 years and then moving into an altruistic job would be considered a much better plan if value drift were rare than if it were common. The more common value drift is, the stronger near-term focused impact plans are relative to longer-term focused impact plans. For example, you might get an entry-level position at a charity and build up capacity by getting work experience. This can be slower at building your CV than getting a degree or working in a low-impact but high-prestige field, though not always. However, it has impact right away, which matters more if the risk of value drift is high.
The Data
Despite the importance of value drift to important questions, it's rarely been talked about or studied. One of the reasons it is so under-studied is that it would take a long time to get good data.
I have been in the EA movement for ~5 years. I decided to pool some data from contacts who I met in my first year of EA. I only included people who would have called themselves EAs for 6 months or longer (I would not include someone who was only into EA for a month and then disappeared), and who took some sort of EA action (working for an EA org, taking the GWWC pledge, running an EA group). I also only included people who I knew and kept in touch with well enough to know what happened to them (even if they left the EA movement). It is ultimately a convenience sample, but it was based on working for 4 current EA orgs and living in 4 different countries over that time, so it’s not focused on a single location or organization.
I also broke the groups down into ~10% donors and ~50% donors, because many times I have heard people being more or less concerned about one of these groups vs the other. These broad groups are not just focused on people doing earning to give. Someone who is working heavy hours for an EA organization and making most of their life decisions with EA as their number one priority would be considered in the 50% group. Someone running an EA chapter who makes decisions with EA as a factor, but prioritizes other factors above it, would be put in the 10% group. The percentages are aimed at rough proxies of how important EA is in these people's lives, not strictly financial donations. I did not count changing cause areas as value drift (e.g. changing from donating 10% to MIRI to AMF) -- only different levels of overall altruistic involvement.
The results over 5 years are as follows:
16 people were ~50% donors → 9/16 stayed around 50%
22 people were ~10% donors → 8/22 stayed around 10%
No one moved from the 10% category to the 50% category, and I only counted fairly noticeabl...
|
Dec 12, 2021 |
Ingredients for creating disruptive research teams by stefan.torges
01:36:46
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Ingredients for creating disruptive research teams, published by stefan.torges on the effective altruism forum.
Write a Review
Introduction
This post tries to answer the question of what qualities make some research teams more effective than others. I was particularly interested in learning more about “disruptive” research teams, i.e. research teams that have an outsized impact on (1) the research landscape itself (e.g. by paving the way for new fields or establishing a new paradigm), and/or (2) society at large (e.g. by shaping technology or policy).[1] However, I expect the conclusions to be somewhat relevant for all research teams.
Research seems to have become increasingly important within the effective altruism community. In the past few years, GPI was founded, FHI started growing significantly, and Open Phil is expanding its research capacity. Will MacAskill even called effective altruism a “research program”. From this perspective, we should be both interested in creating new fields of research, or at least substantially influencing existing ones, as well as impacting society.
Acknowledgments:
I did some of the research presented here as part of my work at the Berlin-based Effective Altruism Foundation (EAF), a research group and grantmaker dedicated to preventing suffering in the long-term future. Thanks to Jonas Vollmer, Jan Dirk Capelle, Max Daniel, and Alfredo Parra for valuable comments on an early draft of this post.
Summary
I looked at the two most comprehensive and rigorous academic studies on productive research teams I could find after a shallow review of the available literature (one literature review, Bland & Ruffin (1992), and one meta-analysis, Hülsheger, Anderson & Salgado (2009)). Unfortunately, I could not find similarly comprehensive studies of disruptive research teams in particular. I complemented this with seven case studies of research teams I picked based on my own non-systematic judgment that they have been particularly disruptive. These are the RAND Corporation, the Santa Fe Institute, the Palo Alto Research Center (PARC), Bell Labs, Skunk Works, the Los Alamos Laboratory, and the partnership of Kahneman & Tversky.
The following are my key findings based on this research:
Particularly disruptive research teams always seem to contain a significant number of excellent researchers, and even those who are not brilliant are very capable. Teams seem to benefit from cognitive diversity but not demographic diversity.
Disruptive research teams seem to benefit from a purposeful vision that describes the kind of change they want to effect in the world. While more concrete goals are probably helpful, they seem difficult to set in this context.
Leaders likely have an outsized impact on how productive and disruptive a research group is. In almost all cases, relevant research expertise seems to be required for such a role. For some teams, a second administrative leadership role seems to be helpful for securing resources and managing external relationships.
Research teams seem more likely to realize their full disruptive potential if the researchers do not have to do anything but research and have easy access to all the resources they need.
Individual researchers in disruptive teams seem to thrive when given a large degree of autonomy, i.e., when they’re allowed to pursue projects and collaborations as they see fit. Instead of imposing metrics or incentives, it seems to work best to give them considerable freedom to work outside of usual incentive structures.
To facilitate internal communications outside of formal structures, teams seem to benefit from shared spaces that allow for these exchanges to occur. Establishing a shared physical space that encourages interaction seems to be most important.
Psychological safety, i.e., the feeling that voicing contro...
|
Dec 12, 2021 |
Long-term investment fund at Founders Pledge by SjirH
05:22
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Long-term investment fund at Founders Pledge, published by SjirH on the effective altruism forum.
Edit 27/10/21: See these posts 1 2 3 for the next steps in this project. The post below was originally published on 09/01/20.
At Founders Pledge, we are considering launching a long-term investment fund for our members. Contributions to this fund would by default be invested, potentially over centuries or millennia to come. Grants would only be made when there’s a strong case that a donation opportunity beats investment from a longtermist perspective.
This idea was prompted in large part by recent research in the EA community, most notably Phil Trammell’s initial work on patient philanthropy and Will MacAskill’s forum post and the ensuing discussion on outside-view longtermism.
We have just started this investigation, and don’t hold the views expressed below strongly. This post is mainly a call for input: we’d like to make the best possible use of the expertise and connections available in the larger EA community.
Why a long-term investment fund
In brief, we currently see three main potential ways in which investing to give later may be better than giving now:
By exploiting the pure time preference in the market, i.e. that non-patient people are willing to sell the future (and especially the long-term future) cheaply
By exploiting the risk premium in the market, to the extent that longtermist altruists should price risks differently to the market
By giving us more time to learn and get better at identifying high-impact giving opportunities to benefit the long term
There are also considerations that may counter (partially or in full) these three benefits:
We may be living at one of the most influential times in history
There are risks of expropriation, e.g. existential catastrophes or legal changes
There are risks of value or competency change in the wrong direction, e.g. governance ends up in the wrong hands or new moral and nonmoral insights are not incorporated
We have major uncertainty about all six factors, and intend to look into them further as part of this investigation. Assuming legal feasibility, we think it likely (>50%) that a well-governed long-term investment fund is among the highest-impact giving opportunities we currently know of from a longtermist perspective.
What the fund would (ideally) entail
Donations would be invested with the idea of growing the fund, potentially over centuries or even millennia to come. Money would only be deployed when there is a strong case that allocating to a funding opportunity is higher-impact from a longtermist perspective than keeping the money invested. This could happen, for instance, if our estimate of the expropriation rate rises greatly, legal and/or market changes make investing much less attractive, or we identify a truly extraordinary funding opportunity that we don’t expect to be filled by others.
We might decide to create a separate legal entity for the fund, to make it less dependent on what happens to Founders Pledge in the long term. If so, we’ll have to define a legally fixed objective. We think we should define this in pure moral terms to allow for strategic flexibility, e.g. it should not include anything about investing. It should also balance protection against value drift with flexibility to incorporate new moral insights. Our starting idea is “to provide maximum benefit to all sentient beings, regardless of where or when they exist”.
In addition to this fixed legal objective, we are thinking about the best way to structure the fund’s year-to-year governance. For instance, we could carefully select a board of trustees to guard the objective of the fund and update its strategy. They should embody the values of the fund and be strategically knowledgeable. This would allow a lot of the gover...
|
Dec 12, 2021 |
Improving Institutional Decision-Making: a new working group by IanDavidMoss, lauragreen, Vicky Clayton
17:18
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Improving Institutional Decision-Making: a new working group, published by IanDavidMoss, lauragreen, Vicky Clayton on the effective altruism forum.
By Ian David Moss, Vicky Clayton and Laura Green
Summary
This post describes recent and planned efforts to develop improving institutional decision-making (IIDM) as a cause area within and beyond the effective altruism movement.
Despite increasing interest in the topic over the past several years, IIDM remains underexplored compared to “classic” EA cause areas such as AI safety and animal welfare.
To help address some questions that have come up in our community-building work, we provide a working definition of IIDM, emphasizing its interdisciplinary nature and potential to bring together insights across professional, industry, and geographic boundaries.
We also describe a new meta initiative aiming to disentangle and make intellectual progress on IIDM over the next year. The initiative includes several research and community development projects intended to enable more confident funding recommendations and career guidance going forward.
You can get involved by volunteering to work on our projects, helping us secure funding, or giving us feedback on our plans.
Introduction
In 2017, 80,000 Hours published Jess Whittlestone’s problem profile on the topic of improving institutional decision-making (IIDM), which deemed the cause area “among the most pressing problems to work on” and suggested that “improving the quality of decision-making in important institutions could improve our ability to solve almost all other problems.” In the years since, we’ve seen signs of steadily increasing interest in IIDM within the EA community: IIDM-related talks, meetups and discussion channels have been included at most recent EA Global conferences, and a Facebook group founded to centralize discussion on the topic now has nearly 900 members. Today, 80,000 Hours continues to list IIDM among its priority problem areas and names “Building capacity to explore and solve problems,” a broad category that includes IIDM, as one of its top two overall priorities for career paths.
Still, IIDM remains underexplored compared to “classic” EA cause areas such as AI safety and animal welfare. Up until now, there has not been a formal, globally focused umbrella organization dedicated to IIDM within the effective altruism ecosystem, leaving a gap of coordination in the field. There are legitimate questions about the effectiveness and tractability of interventions in the space that need to be resolved in order to be able to direct donations or career tracks with confidence. And we know from conversations with others in the EA community that IIDM’s interdisciplinary nature can make the cause area feel fuzzy or overly broad to some.
For these reasons, the three of us are stepping up to act as a focal point for people interested in “disentangling” and making intellectual progress on IIDM in 2021. This work grows out of a year’s worth of informal exploration that has taken place since the first official IIDM meetup at EAG London 2019. In this article, we’ll share our working definition of IIDM and some key points from our recently developed operational plan.
What is improving institutional decision-making?
Decision-making at major institutions is shaped by a complex web of individual judgments, value systems, organizational structures and routines, leadership behaviors, incentives, social influences, and external conditions. As such, it’s worth taking a moment to define and explain what we mean by “improving institutional decision-making” a little more clearly.
Let’s focus first on the “decision-making” part. The classic decision problems taught in economics textbooks describe fairly straightforward analytical problems: given two or more mutually ex...
|
Dec 12, 2021 |
AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy by Aaron Gertler
05:56
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: AMA: Tim Ferriss, Michael Pollan, and Dr. Matthew W. Johnson on psychedelics research and philanthropy, published by Aaron Gertler on the effective altruism forum.
We're excited to bring you an AMA with three people who have done a lot to increase the profile and prospects of psychedelic research.
Effective altruism has a history of engaging with psychedelics (see these posts, for example) as a promising intervention for mental health issues — one which could sharply reduce the suffering of tens or hundreds of millions of people.
Between Tim, Michael, and Matt, we have many kinds of expertise here — nonprofit investing, journalism, medicine, and more. We hope the discussion is interesting, and useful for anyone who's thought about working or giving within this area.
We'll gather questions for a couple of days. Michael and Matt will answer questions on Sunday, May 16th. Tim will answer questions on Tuesday, May 18th (we've pushed his original date back by one day).
Author introductions
Tim Ferriss
Hi, everyone! I’m Tim Ferriss, and I’ll be doing an AMA here. More on me: I’m an author (The 4-Hour Workweek, Tools of Titans, etc.) and early-stage investor (Uber, Shopify, Duolingo, Alibaba, etc.).
Through my foundation and since circa 2015, I have committed at least $4-6 million to non-profit scientific research and clinical treatments of “intractable” psychiatric conditions such as treatment-resistant depression, opioid/opiate addiction, post-traumatic stress disorder (PTSD), and others. I believe (A) this research has the potential to revolutionize the treatment of mental health and addiction, which the data from studies thus far seem to support, and (B) I’m a case study. Psychedelics have saved my life several times over, including helping me to heal from childhood abuse.
Projects and institutions include the Centre for Psychedelic Research at Imperial College London (the first such center in the world); the Center for Psychedelic and Consciousness Research at Johns Hopkins University School of Medicine (the first such center in the US); MAPS (Phase 3 studies for MDMA-assisted psychotherapy); divisions and studies at UCSF (e.g., The Neuroscape Psychedelic Division); The University of Auckland (LSD microdosing); and others (e.g., pro bono launch of Trip of Compassion documentary on MDMA-assisted psychotherapy).
I evaluate non-profit and scientific initiatives in the same way I evaluate for-profit startups, and I believe some bets in this nascent field represent high-leverage, low-cost opportunities to bend the arc of history, much as Katharine McCormick did for the first birth control pill. Here is one blog post with more elaboration.
I am happy to answer any questions through the AMA. Dr. Matthew Johnson is no doubt better qualified to answer the scientific (and more), and Michael Pollan is no doubt more qualified to answer the journalistic (and more), but I will do my best to be helpful!
Michael Pollan
I'm a journalist and author who focuses on ways that the human and natural worlds intersect — including within our minds.
In 2015, I wrote a New Yorker article on psychotherapy, "The Trip Treatment", which profiled a number of cancer patients whose experiences with psilocybin had reduced or entirely banished their fear of death. This led me to embark on a two-year journey into the history of psychedelic policy and its potential for modern medicine, and to write a book: How to Change Your Mind: The New Science of Psychedelics. My forthcoming book, This is Your Mind on Plants, covers the strange contrast between the human experience with several plant drugs — opium, caffeine, and mescaline — and how we choose to define and regulate them.
I'd be glad to answer questions about anything I've written on the subject. Particular topics of interest:
The history of drug regulatio...
|
Dec 12, 2021 |
How to succeed as an early-stage researcher: the "lean startup" approach by tobyshevlane
14:05
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: How to succeed as an early-stage researcher: the "lean startup" approach, published by tobyshevlane on the effective altruism forum.
I am approaching the end of my AI governance PhD, and I’ve spent about 2.5 years as a researcher at FHI. During that time, I’ve learnt a lot about the formula for successful early-career research.
This post summarises my advice for people in the first couple of years. Research is really hard, and I want people to avoid the mistakes I’ve made.
My argument: At the early stage of your research career, you should think of yourself as an entrepreneur trying to build products (i.e. research outputs) that customers (i.e. the broader community) want to consume.
You might be thinking: but I thought I was trying to maximise my impact? Sure, but at this stage in your career, you don’t know what’s impactful. You should be epistemically humble and responsive to feedback from people you respect. You should be opportunistic, willing to pivot quickly.
I am calling this the “lean startup” approach to research. By now, everyone knows that most startup ideas are bad, and that founders should embrace this: testing minimal versions of the product, getting feedback from customers, iterating, and making dramatic changes where necessary. When you’re starting out in research, it’s the same.
Early-stage researchers have two big problems. Number one, all your project ideas are bad. Number two, once you’ve written something, nobody is going to read it. It’s like an app that nobody downloads. It is possible to avoid these pitfalls, but that requires active effort. I will list many strategies that I’ve found helpful. At the end of the post, I’ll give a few examples from my own career.
A lot of this advice is stolen from people who have helped me over the years. I encourage you to try it out.
EDIT: I am most familiar with AI governance. I'm not sure how well my views generalise to other fields. (Thanks to the commenters who raised this issue.)
Problem 1: your project ideas are bad
In the early stage of your research career, 80-100% of your project ideas are bad. You’ll feel like your favourite project idea is great, but then years later, you’ll ask yourself: “what was I thinking?”
Executing an idea requires a large time investment. You don’t want to waste that time on a bad idea.
By “project idea” I mean not just a topic, but some initial framing of the problem, and some promising ideas for what kind of arguments or results you might produce. So, how do you find a good one of those?
Solutions:
Ideally, someone senior tells you what to work on. But this is time-expensive for them, and they don’t want to give away their best ideas to somebody who might execute them badly. So, more realistically...
Write out at least 10 project ideas, and ask somebody more senior to rank the best few. Always keep this list and add to it over time. This is a tried-and-tested method and it works very well. If you are pushing just one, single project idea, you might be able to arouse some minor, polite interest from other people, but this is a much less meaningful feedback process.
Notice when people are genuinely interested. Sometimes you will get a cue that a person is actually interested in a puzzle or argument that you’ve formulated. You notice that they’ve been nerd sniped. That’s a very valuable feedback signal. It is also a reason to recentre the project around the exact question that nerd sniped them. Because you don’t yet have a well-developed sense of what issues are most interesting, you should update heavily on this kind of feedback. (As you get more experienced, you can allow yourself to get nerd sniped by your own ideas.)
Fit into an established paradigm. While at FHI I have gradually absorbed a sense of the implicit worldviews of senior people, and what kinds of problems they t...
|
Dec 12, 2021 |
What areas are the most promising to start new EA meta charities - A survey of 40 EAs by Joey
20:44
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: What areas are the most promising to start new EA meta charities - A survey of 40 EAs, published by Joey on the effective altruism forum.
Charity Entrepreneurship (CE) is researching EA meta as one of four cause areas in which we plan to launch charities in 2021. EA meta has always been an area we have been excited about and think holds promise (after all, CE is a meta charity). Historically we have not focused on meta areas for a few reasons. One of the most important is that we wanted to confirm the impact of the CE model in more measurable areas such as global health and animal advocacy. After two formal programs we are now sufficiently confident in the model to expand to more meta charities. We were also impressed by the progress and traction that Animal Advocacy Careers made in their first year. Founded based on our research into animal interventions, this organization works at a more meta level than other charities we have incubated.
In this document, I summarize the results of 40 interviews conducted with EA experts. These interviews constitute part of CE’s research into potentially effective new charities that could be founded in 2021 to improve the EA movement as a whole.
Methodology
In discussing meta charities, we are using a pretty broad definition. We include both charities that are one step removed from impact but in a single cause area (such as Animal Advocacy Careers), and more cross-cutting meta charities within the effective altruist movement.
Generally our first step when approaching an area would involve broad research and literature reviews. However, given the more limited resources focused on meta EA and our stronger baseline understanding of the area, we wanted to supplement this information with a broad range of interviews. We ended up speaking to about 40 effective altruists (EAs) across 16 different organizations and 8 current or former chapter leaders. We tried to pick people who had spent considerable time thinking about meta issues and could be considered EA experts, and overall aimed for a diverse range of perspectives.
The duration of the interviews ranged from 30 minutes to 2 hours, running about an hour on average. Not everyone answered every question but the majority of questions were answered by the majority of people. The average question got ~35 responses and none got fewer than 30. Interviewees were informed that an EA Forum post would be written containing the aggregated data but not individual responses. The background notes ended up being ~100 pages.
We broke down the questions asked into three sections:
Open questions
What meta ideas might be uniquely impactful?
What ideas might be uniquely unimpactful?
Crucial considerations
Expand vs improve
Time vs money vs information
Broad vs narrow
What do you think of the current EA community trajectory?
What do you think are the biggest flaws of the EA movement?
Specific sub areas
We took ideas that people had historically suggested on the EA Forum and organized them into around a dozen categories, providing examples for each.
For each category, we were interested in whether it was seen as above or below average, as well as if any specific ideas stood out as promising.
The descriptions below aim to reflect the aggregate responses I got, not what CE thinks or my impression after speaking to everyone (that will be a different post). The results constitute one (but not the only) piece of data CE will use when coming to recommendations for a new organization.
Results
1. Open questions
This was the hardest area to synthesize. It was surprising how much divergence there was between different people in terms of which ideas and concepts were seen as the most important.
Lots of ideas that came up in the open questions were covered in the category areas, but open question...
|
Dec 12, 2021 |
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] by Habryka, Buck
03:12
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22], published by Habryka, Buck on the Effective Altruism Forum.
We (Redwood Research and Lightcone Infrastructure) are organizing a bootcamp to bring people interested in AI Alignment up-to-speed with the state of modern ML engineering. We expect to invite about 20 technically talented effective altruists for three weeks of intense learning to Berkeley, taught by engineers working at AI Alignment organizations. The curriculum is designed by Buck Shlegeris (Redwood) and Ned Ruggeri (App Academy Co-founder). We will cover all expenses.
We aim to have a mixture of students, young professionals, and people who already have a professional track record in AI Alignment or EA, but want to brush up on their Machine Learning skills.
Dates are Jan 3 2022 - Jan 22 2022. Application deadline is November 15th. We will make application decisions on a rolling basis, but will aim to get back to everyone by November 22nd.
Apply here
AI-Generated image (VQGAN+CLIP) for prompt: "Machine Learning Engineering by Alex Hillkurtz", "aquarelle", "Tools", "Graphic Cards", "trending on artstation", "green on white color palette"
The curriculum is still in flux, but this list might give you a sense of the kinds of things we expect to cover (it’s fine if you don’t know all these terms):
Week 1: PyTorch — learn the primitives of one of the most popular ML frameworks, use them to reimplement common neural net architecture primitives, optimization algorithms, and data parallelism
Week 2: Implementing transformers — reconstruct GPT2, BERT from scratch, play around with the sub-components and associated algorithms (eg nucleus sampling) to better understand them
Week 3: Training transformers — set up a scalable training environment for running experiments, train transformers on various downstream tasks, implement diagnostics, analyze your experiments
(Optional) Week 4: Capstone projects
We’re aware that people start school/other commitments at various points in January, and so are flexible about you attending whatever prefix of the bootcamp works for you.
Logistics
The bootcamp takes place at Constellation, a shared office space in Berkeley for people working on long-termist projects. People from the following organizations often work from the space: MIRI, Redwood Research, Open Philanthropy, Lightcone Infrastructure, Paul Christiano’s Alignment Research Center and more.
As a participant, you’d attend communal lunches and events at Constellation and have a great opportunity to make friends and connections.
If you join the bootcamp, we’ll provide:
Free travel to Berkeley, for both US and international applicants
Free housing
Food
Plug-and-play, pre-configured desktop computer with an ML environment for use throughout the bootcamp
You can find a full FAQ and more details in this Google Doc.
Apply here
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
A Qualitative Analysis of Value Drift in EA by MarisaJurczyk
30:44
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A Qualitative Analysis of Value Drift in EA, published by MarisaJurczyk on the Effective Altruism Forum.
About a year ago, after working with the Effective Thesis Project, I started my undergraduate thesis on value drift in the effective altruism movement. I interviewed eighteen EAs about their experiences with value drift and used a grounded theory approach to identify common themes. This post is a condensed report of my results. The full, official version of the thesis can be found here.
Note that I’ve changed some of the terms used in my thesis in response to feedback, specifically “moral drift”, “internal value drift”, and “external value drift”. This post uses the most up-to-date terminology at the time of posting, though these terms are still a work in progress.
Summary
Value drift is a term EAs and rationalists use to refer to changes in our values over time, especially changes away from EA and other altruistic values.
We want to promote morally good value changes and avoid morally bad value changes, but distinguishing between the two can be difficult since we tend to be poor judges of our own morality.
EAs seem to think that value drift is most likely to affect the human population as a whole, less likely to affect the EA community, and even less likely to affect themselves. This discrepancy might be due to an overconfidence bias, so perhaps EAs ought to assume that we’re more likely to value drift than we intuitively think we are.
Being connected with the EA community, getting involved in EA causes, being open to new ideas, prioritizing a sustainable lifestyle, and certain personality traits seem associated with less value drift from EA values.
The study of EAs’ experiences with value drift is rather neglected, so further research is likely to be highly impactful and beneficial for the community.
Background
What is Value Drift?
As far as I can tell, “value drift” is an expression that was first used by the rationalist community, in reference to AI safety. It has not been discussed nor studied outside of the rationalist and EA communities - at least, not using the term “value drift.”
Value drift has been defined as broadly as changes in values and as narrowly as losing motivation to do altruistic things. People seem to see the former as the technical definition and the latter as what the term implies, as value drift away from EA values is often seen as the most concerning value drift to an EA who wants to remain an EA, or altruistic more generally. However, we can certainly experience value drift towards EA values or experience value drift that keeps us just as aligned with EA.
NB: Throughout this post, I use value drift to refer to a shift away from EA values, unless I specify otherwise.
I discuss a few different types of value drift throughout this post:
Hierarchical value drift: a change in one’s hierarchy of values in which a value is not lost or gained, but rather its priority is changed.
Transformative value drift: losing a previously-held value or gaining a new value.
Abstract value drift: changes in abstract values, such as happiness or non-suffering.
Concrete value drift: changes in concrete values, such as effective altruism, animal welfare, global health, or existential risk prevention.
Value drift can lead us to act less morally than we otherwise would. However, it’s possible that our behaviors can change without our values changing. Darius Meissner’s forum post, “Concrete Ways to Reduce Value Drift and Lifestyle Drift”, makes the important distinction between value drift and lifestyle drift, where value drift refers to changes in values and lifestyle drift refers to changes in behaviors, often as a result of circumstances. Some academic research highlights a similar phenomenon called ethical drift, which refers specifically to behavior chan...
|
Dec 12, 2021 |
Geographic diversity in EA by AmAristizábal
04:08
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Geographic diversity in EA, published by AmAristizábal on the Effective Altruism Forum.
Hi, I'm from Colombia (first post) and I want to share my thoughts on the lack of geographic diversity in EA.
I suspect that due to this lack of diversity, questions that could be relevant to EA have not been considered enough. Here I share some of the ones I deal with the most (although I don't have a strong position on most of these things, and they may well have been considered already without my being aware of it; in that case I would really appreciate links or recommendations):
-Whether giving locally could be better (or not) for donors in low and middle income countries:
Countries with weak currencies such as mine face high exchange rates (especially in hard times such as this pandemic). I have the intuition that with a volatile dollar price it doesn't always make sense to donate to EA-recommended charities, and perhaps donors could allocate their donations better by giving locally. In my case I just switch to saving and donating later (because I'm young), but what if I still want to donate a little bit to keep up motivation? Or what if I want to convince my friend's uncle to donate? I still want to have an informed opinion.
-Spot regional differences within countries when answering different types of questions: Even if my country's GDP is higher than many countries where effective donations according to EA are allocated, there are many regions within my country where poverty is extremely high, even higher than in richer cities from poorer countries. Those differences are hard to spot if EA spots “poverty” as a whole without zooming in geographical zones.
-Addressing the real potential of going into policy in LMICs: EA recommends policy careers but I suspect that it's an even more important path in LMICs, where policies are weaker, policymakers are even less evidence based and where institutions have a lot more potential to improve.
-Whether there is a chance to adapt EA to other cultural values:
Individualism vs collectivism: I feel that EA was born in cultures that value individualistic goals (even if the focus is on the world as a whole). For example, I see EA deeply linked to a "western" understanding of freedom, independence of thought, skepticism, mistrust of authority and social norms, etc. However, other cultures with more collectivist mindsets can struggle to link altruism to those specific values. In many cultures altruism is deeply linked to religion or family bonds, and giving is prioritized when you help those who surround you. Even if there is no rational argument to value a life in my country more than a life in sub-Saharan Africa, what if EA is losing an opportunity to take advantage of these cultural drives towards giving, for example by strengthening local networks of charities?
Nationalism: Even if I'm not fond of nationalism I do recognize it as a huge drive for altruism in my country (probably in many others as well). I won't convince my friend's uncle to donate to Against Malaria but I could convince him to donate to a colombian charity. Could we use those emotional bonds to promote doing good in an effective way at the same time?
-I wonder if there is a bias when EA talks about problems not being “neglected” enough when dismissing some cause areas or focus topics: an example that comes to my mind is gender inequality in governments or in the workplace. In EA there is a whole focus area on improving institutional decision making, which is great (actually there is where I want to focus); but what if there are easier and more urgent steps to be taken towards IIDM in LMICs such as focusing on women's access to governments (something that in high income countries is not that neglected and has been widely addressed, or at least a lot mo...
|
Dec 12, 2021 |
Why EA groups should not use “Effective Altruism” in their name by KoenSchoen
18:40
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Why EA groups should not use “Effective Altruism” in their name, published by KoenSchoen on LessWrong.
Starting a conversation about the name “Effective Altruism” for local and university groups.
Abstract: most EA groups’ names follow the recipe “Effective Altruism + [location/university]”. In 2020 we founded a university EA group whose name does not include the words “Effective Altruism”. We have grown rapidly, and it now seems more and more likely that our organization will stick around in years to come. We think our name played a non-negligible part in that. In fact, we believe that choosing an alternative name is one of the most cost-effective things you can do to make your group grow. In this article we argue that more (potential) groups should consider an alternative name. We propose a method for coming up with that name. Lastly, we propose that “part of the EA network” could serve as a common subtitle to unite all EA groups despite their various names. Scroll down to ‘summary’ for a quick overview of our arguments.
One of my teachers, a social entrepreneur, once told me: “when you are doing any kind of project, first make sure to give it a good name.” These words ran through my mind when I, together with five others, started a new EA student association at Erasmus University Rotterdam in the Netherlands.
At our second collective meeting we decided against the name “Effective Altruism Erasmus” and opted for “Positive Impact Society Erasmus” (PISE) instead. Now, 6 months in, we still believe this was a great decision. Our association is doing well, and we believe that our name has had some part in that.
As we speak, more Dutch EA groups are considering changing their name. Maastricht University’s chapter is already called “LEAP” (Local Effective Altruism Project) and the group at Wageningen University is also considering a name change.
We think we should have a movement wide conversation about “Effective Altruism” as the name for local and university groups. Below we have written down our thoughts on two questions: firstly, should local and university groups have a name other than “Effective Altruism X”? Secondly, if so, what should that name be? Lastly, we propose a common subtitle for all EA groups with an alternative name. Our thoughts are far from complete and we are uncertain on many accounts. We invite anyone to add to the discussion!
Before we start: how important is a name anyways?
How important is the name of your group? On the one hand a name is just a name. If you are delaying founding an EA chapter because you are fervently debating your groups name, you need to reconsider your priorities. However, you only get one chance at a first impression and sometimes your first impression makes a difference.
How much of a difference does it make? In the last 6 months our group grew from 6 active members to 24, all of whom now spend time every week organizing events and workshops, working on projects, etc. Many of them had never heard of EA before this year but have now taken a fellowship or read an EA book. If we had to make a conservative guess, we would say about 2-3 people would not have found us if we had had the traditional name (later we will give some examples of this).
In total, coming up with the name took us about 3.5 hours (which was about an hour longer than it should have taken). 2.5 hours of work for growing your organization by 2-3 extra active members (every 6 months) is a return on investment we haven't often seen elsewhere. Therefore we think more local and university groups should consider an alternative name.
Should local and university groups have a name other than “Effective Altruism + [location/university]”?
The name “Effective Altruism” was never meant to take off as a popular term. The marketing implications of the name ...
|
Dec 12, 2021 |
EA Debate Championship & Lecture Series by Dan Lahav, sella
20:50
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: EA Debate Championship & Lecture Series, published by Dan Lahav, sella on the Effective Altruism Forum.
Executive Summary
On October 23-25 2020, we hosted the inaugural online EA Debate Championship - a three-day debate championship with EA-themed topics.
The championship had 150+ participants, from roughly 25 countries, that span 6 continents.
The championship was supported by the World Universities Debating Championship, aka WUDC - one of the largest international student-driven events in the world.
There were a total of 7 debate rounds - 5 preliminary rounds and 2 knockout rounds. The knockout rounds were held in 2 different language proficiency categories to promote inclusivity. In total over the course of that weekend over 500 EA-related speeches were delivered.
The championship featured a Distinguished Lecture Series as non-mandatory preparation material - 9 lectures, 3 debate exercises and 1 Q&A session containing introductory EA materials (totalling ~10 hours), with top EA speakers including Ishaan Guptasarma, Joey Savoie, Karolina Sarek, Kat Woods, Lewis Bollard, Olivia Larsen, Nick Beckstead, and Will MacAskill. The debate exercises were filmed by world-renowned debate teams.
The championship included a research component to examine if debating on EA topics changes the stance of debaters towards EA values.
Most of the participants were not familiar with EA prior to the competition, or had limited exposure to core EA ideas. However, when asked after the tournament many were highly positive on the prospect of attending a future EA debating championship, and reported a strong willingness to continue their engagement with the EA community.
During the tournament, over $2,000 were donated to effective charities by the participants (with most of the funds going to the Against Malaria Foundation). The funds were doubled via donation matching provided by Open Philanthropy.
The competition was initiated and organized by members of EA Israel who are also debaters, with the support of several highly influential international debaters and the World Championship. This collaboration was possible due to the strong ties that exist between the debating community and the EA community in Israel. We think that there is room to build similar ties on a more global scale.
In the rest of the post we will explain our motivation to run the event, describe the program and its outcomes in detail, share what we have learned from the process, and discuss our next steps.
Organizing the tournament was an effort of a great many. We thank them all, and would like to stress that any mistakes or inaccuracies in the description are our own. In particular we would like to thank Adel Ahmed, Ameera Moore, Barbara Batycka, Bosung Baik, Chaerin Lee, Connor O’Brien, Dana Green, Emily Frizell, Enting Lee, Harish Natarajan, Ishaan Guptasarma, Jaeyoung Choi, Jessica Musulin, Joey Savoie, Kallina Basli, Karolina Sarek, Kat Woods, Lewis Bollard, Milos Marajanovic, Mubarrat Wassey, Nick Beckstead, Olivia Larsen, Omer Nevo, Sally Kwon, Salwaa Khan, Seoyoun, Seungyoun Lee, Sharmila Parmanand, Tricia Park, Will MacAskill and Yeaeun Shin for their contributions in running the tournament, filming lectures or creating exercises; to David Moss, David Reinstein and Stefan Schubert for their advice on running the tournament survey; and to the many incredibly qualified debate adjudicators & speakers that made the event possible
Motivation
We initiated this effort due to the impression that themed debating tournaments (along with matching preparation materials) can be a relatively broad yet high-fidelity outreach opportunity. We believe this is the case for several reasons:
The international debating community mostly consists of undergraduate students from around 50 countries (elite universities are represented ac...
|
Dec 12, 2021 |
Despite billions of extra funding, small donors can still have a significant impact by Benjamin_Todd
19:07
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Despite billions of extra funding, small donors can still have a significant impact, published by Benjamin_Todd on the Effective Altruism Forum.
I’ve written about how there’s now a lot more funding committed to effective altruism: about $50bn.
It’s natural to think this means small donors can no longer have much impact, and I’ve seen several cases of people saying they’re not sure whether their donations will do any good, because all the opportunities are being taken by large donors.
However, I think this isn’t right: more donations from small donors still have a significant impact. This means raising additional funding is still of value to the community, and I think earning to give and donating to e.g. the Long Term Future Fund, is a highly impactful thing to do – probably more impactful than the vast majority of careers.
I also think the increase in funding means there’s an opportunity to do even more good than earning to give, and that people earning to give currently should seriously consider switching to the kinds of opportunities flagged in my talk at EAG. But that doesn’t mean that small donations have no impact.
Instead:
What matters is not the total amount of available funding, but the current level of cost-effectiveness at the margin. This has likely declined, but is still high.
Small donors should be able to roughly match large donors in terms of cost-effectiveness by ‘funging’ with them.
Small donors can sometimes beat large donors in terms of cost-effectiveness, and I provide a list of some common ways to do this.
At the end, I’ll make some comments on where I think people should donate.
1. What matters is not total funding available but marginal cost-effectiveness
It’s true that as more funding becomes available, all else equal, we should expect more of the best opportunities to be taken, and for cost-effectiveness to decrease.
However, there is a force which limits the size of this effect: how quickly we’re able to discover new opportunities. Because effective altruism is still small and building capacity, it’s not obvious that cost-effectiveness will decline quickly.
While I think the very best opportunities involve taking a more hits based, longterm focused approach than GiveWell, their recommendations serve as a good starting point to examine these dynamics. GiveWell’s top recommendations probably constitute the ‘bar’ for neartermist work. In a recent post, Open Philanthropy’s Global Health and Wellbeing team expect to find many opportunities above this bar, but for marginal dollars to go to GiveWell.
Overall, GiveWell now seems to be targeting a cost-effectiveness of 8x GiveDirectly or higher for most donations, though about 20% of funds will go towards opportunities that are 5-8x as cost-effective as GiveDirectly, and so additional donations should be about this cost-effective.
GiveWell is unsure whether the margin will be closer to 5x than 8x. In the same post, Open Philanthropy says “we currently expect GiveWell’s marginal cost-effectiveness to end up around 7-8x GiveDirectly”.
They also say they believe that GiveWell’s margin has been around 10x GiveDirectly in recent years, so if it declines to 7x, that will be a 30% fall – this is only a modest decline and still very high.
To illustrate, they estimate that donating $4,250 to a charity that’s 8x GiveDirectly is as good as saving the life of a child under five.
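To make the multiplier arithmetic concrete, here is a minimal sketch, assuming cost-effectiveness scales linearly with the GiveDirectly multiple; the $4,250-per-life figure at 8x comes from the estimate above, and everything else is back-of-the-envelope:

```python
# Back-of-the-envelope arithmetic for marginal cost-effectiveness multiples.
COST_PER_LIFE_AT_8X = 4_250  # dollars, per the estimate quoted above

def cost_per_life(multiple: float) -> float:
    """Implied cost per equivalent life saved at a given multiple of GiveDirectly,
    assuming cost-effectiveness scales linearly with the multiple."""
    return COST_PER_LIFE_AT_8X * 8 / multiple

for m in (5, 7, 8, 10):
    print(f"{m}x GiveDirectly -> ~${cost_per_life(m):,.0f} per equivalent life saved")

# The "30% fall" mentioned above: the margin moving from ~10x to ~7x GiveDirectly.
print(f"Decline in marginal cost-effectiveness: {1 - 7 / 10:.0%}")
```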
With a lognormal distribution of cost-effectiveness, there should be many more opportunities at the 5x level than the 10x level, so it should be possible to deploy a lot more funds as the bar lowers. (Even setting aside the possibility of discovering new highly cost-effective interventions.)
In a worst case scenario, billions could be spent on cash transfers at a level of cost-effectiveness similar to or only a little below GiveDirectly. T...
|
Dec 12, 2021 |
Presenting: 2021 Incubated Charities (Charity Entrepreneurship) by Joey
13:06
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Presenting: 2021 Incubated Charities (Charity Entrepreneurship), published by Joey on the Effective Altruism Forum.
2021 was the third year that we at Charity Entrepreneurship held our annual Incubation Program. Interest in the program was very high, with over 2000 applications submitted. 27 participants representing 16 countries graduated from the 2-month intensive training program, including teams that will start new organizations, individuals that are being hired by high-impact organizations, regional groups that will conduct research under our mentorship, and a foundation that will focus on providing grants to high-impact interventions.
We are delighted to announce the launch of five new charities, and want to thank our funders: EA Funds, Open Philanthropy, and the CE Seed Network (a group of high-impact professionals) for their generous donations, which totaled $537,000 in grants offered to the charities this year.
The 2021 incubated charities are:
Training For Good - delivering a range of training programs to fill important EA capability gaps and raise the utilization rate of EA talent
High Impact Professionals - enabling working professionals to have the biggest positive impact possible
Shrimp Welfare Project - improving the lives of hundreds of millions of farmed shrimp in Southeast Asia
Healthier Hens - improving the welfare of farmed egg-laying hens via a cost-effective intervention focused on feed fortification
Center for Alcohol Policy Solutions - saving lives and promoting well-being through alcohol taxation
TRAINING FOR GOOD
Co-founders: Cillian Crosson, Jan-Willem van Putten, Steve Thompson
Website: trainingforgood.com
Contact: contact@trainingforgood.com
CE Incubation Grant: $175,000
Description of the intervention:
Training for Good (TFG) will upskill people to tackle the most pressing global problems. TFG will deliver a range of training programs to help solve skill bottlenecks in EA cause areas and raise the rate of talent utilization within the EA movement.
Background of the intervention:
Talent utilization: There are over 6,000 committed EAs, the majority of whom want to pursue impactful careers. Yet many are struggling to find concrete opportunities to implement EA in their lives. TFG aims to raise the rate of talent utilization within the EA movement by creating training programs that enable large numbers of people to enter impactful careers.
Skill bottlenecks: Funding for many EA cause areas has grown faster than the number of people interested in them. This has led to a “funding overhang” and an increase in certain skill bottlenecks. TFG aims to solve these skill bottlenecks in EA cause areas by developing targeted programs that fill current skill gaps and advance the capabilities needed to deploy funds effectively in the future.
Near-term plans:
TFG intends to experiment with different approaches to training before choosing where to narrow their focus. They will pilot the following four training programs within year one:
Salary Negotiation for Earning-to-Givers: In November 2021, TFG is launching a training program to help E2Gers maximize their donation potential. If you are interested in participating, please complete this application form by midnight on 7th November.
Effective Careers in the Civil Service: In January 2022, TFG will launch a training program for aspiring policymakers. If you are a current or aspiring policymaker within Europe (including UK and other non-EU countries), please complete this needs survey to help us identify the most important skill gaps.
EA for Experienced Professionals: Around May 2022, TFG will host a week-long retreat, training corporate executives with 10+ years experience in basic EA concepts and connecting them to the EA movement to fill management skill gaps in key EA organizations.
Grantmaking for Im...
|
Dec 12, 2021 |
Introducing Training for Good (TFG) by Cillian Crosson, Jan-WillemvanPutten, SteveThompson
20:26
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Introducing Training for Good (TFG), published by Cillian Crosson, Jan-WillemvanPutten, SteveThompson on the Effective Altruism Forum.
We are excited to announce the launch of a new effective altruism training organisation, Training for Good (TFG).
trainingforgood.com
TFG aims to upskill people to tackle the most pressing global problems. TFG will identify the most critical skill gaps in the EA movement and address them through training. TFG was incubated by Charity Entrepreneurship in 2021.
This post introduces TFG, provides an overview of the problems we seek to tackle and presents our immediate plans for addressing them. The following is structured into:
Overview of TFG
TFG’s short term plans
Decisions and underlying assumptions
How you can help
Ask us anything
We thank Brian Tan, Charles He, Devon Fritz, Isaac Dunn, James Ozden, and Sam Hilton for their invaluable feedback on this announcement. All errors and shortcomings are our own.
Overview of TFG
Why training?
Track record
Some EA organisations have experienced moderate success running training programmes and online courses.
Animal Advocacy Careers ran a ~9 week online course, teaching core content about effective animal advocacy, effective altruism, and impact-focused career strategy. They recently published the results of two longitudinal studies they ran comparing and testing the cost-effectiveness of this online course and their one-to-one advising calls. Their results weakly suggested that while one-to-one calls are slightly more effective per participant, online courses are a slightly more cost-effective service.
Charity Entrepreneurship’s two-month incubation programme aims to equip participants with the skills needed to found an effective non-profit. Through this programme, they have helped launch 16 effective organisations to date.
The Centre for Effective Altruism uses online courses as a high fidelity method of spreading EA ideas and growing the movement. They run an Introductory EA Programme which introduces the core ideas of effective altruism through 1-hour discussions over the course of eight weeks.
Other programmes offered by Peter Singer, the Centre for Applied Rationality, the Good Food Institute, and 80,000 Hours have also proved popular, suggesting that there is further demand for such courses.
Movement demographics
Movement demographics suggest that EAs are a promising audience for training. 80% are aged under 35 and a large proportion are still deciding what career to pursue or building up career capital. Over 50% of EAs also place career capital as a focus above direct impact. These demographic factors suggest a strong interest in gaining skills and participating in training programmes.
Cause neutral and flexible
Training is a cause neutral intervention. Cross-cutting programmes can be run which benefit several cause areas simultaneously or multiple targeted programmes can be run for different cause areas.
Flexibility is particularly important when we consider that EA is a relatively young movement and that there may be cause areas which deserve our attention that we are currently neglecting. If information arises to suggest that we should switch our attention to another cause area (even temporarily), TFG could easily do so. Moreover, we believe that such organisational flexibility could help enable movement flexibility, as it creates the space for intellectual exploration to take place.
Comparative advantage
Our co-founding team has a relative amount of expertise designing and delivering training programmes. In particular, Steve has extensive experience in both the design and facilitation of large scale training and development programmes. He has spent over ten years in the corporate sector training and coaching across multinational firms.
Cillian and Jan-Willem also have experience fac...
|
Dec 12, 2021 |
Notes on "Managing to Change the World" by Peter Wildeford
39:53
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Notes on "Managing to Change the World", published by Peter Wildeford on the effective altruism forum.
The book “Managing to Change the World: The Nonprofit Manager's Guide to Getting Results” by Alison Green and Jerry Hauser comes highly recommended from a wide variety of top executive directors of non-profits, and after reading it, I can say these positive recommendations are entirely justified.
Don’t let the title fool you - while talk of “changing the world” may sound pie-in-the-sky or even hippyish, this book was relentlessly practical. The principles also matter more than just for non-profits. I think anyone managing others should read this book, regardless of whether they are working in non-profits or not. So far this is my favorite management book that I have ever read.[1]
The density of information is amazing and it will be difficult for me to do the book justice with a summary, and the book is short enough that I encourage everyone to read the actual book cover-to-cover rather than just my summary here. Nevertheless, I will persist with summarizing.
Note there may be some things in this book that I disagree with, or at least don’t fully agree with. I’d be careful to read the book critically. There is also a lot of good advice that is not in this book. In these notes I mainly aim to summarize what I find as the key takeaways of the book, from my understanding and as applied to my personal context, rather than try to present my all-things-considered view on how best to run a non-profit organization. Also note that this post is a personal post and does not necessarily represent the views or practices at Rethink Priorities.
Summary of the Summary
Management is about getting things done through other people and your job as a manager is to get results.
Good managers set goals, are clear about what those goals are, hold people to those goals, help people meet those goals, are clear with people about when they aren’t meeting goals, and are not afraid to tell some employees they aren’t right for the job. Good managers ensure people are in roles where they will excel and get everyone aligned around a common purpose. Good managers delegate, but don’t disappear after - they don’t do the work themselves but do ensure implementation happens and help employees do their work.
Most managers should spend less time actually doing the work themselves, and more time guiding other people through their work, than they currently do.
The best way to ensure delegation goes successfully is to (1) be clear from the start about what you expect, (2) stay engaged enough along the way to make sure you and the employees are on the same page and to ensure the ongoing quality of the work, and (3) hold people accountable for what they deliver.
The most common way managers fail at delegation is by not staying involved throughout to check on progress. You should have a regular (typically weekly) 1-on-1 meeting with each employee you manage to connect personally, review progress against the plan, ask probing questions, provide feedback, help the employees adjust priorities, and create connections between employees.
When giving feedback, be specific. When asking questions, be specific.
Delegation usually starts by handing off specific tasks and projects, but the true power of delegation emerges when you can hand off broad responsibilities.
When interacting with your own boss (managing up), have empathy and remember they are a person. Guide them toward doing the right thing and make managing easy. When asking for input from your manager, apply the one hand rule - keep questions to yes/no or multiple choice, make an initial recommendation / default, and make everything clear upfront but provide background at the end as necessary.
What is management?
The point of management is to get more ...
|
Dec 12, 2021 |
A Red-Team Against the Impact of Small Donations by AppliedDivinityStudies
13:44
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A Red-Team Against the Impact of Small Donations, published by AppliedDivinityStudies on the Effective Altruism Forum.
In a comment on Benjamin Todd's article in favor of small donors, NunoSempere writes:
This article is kind of too "feel good" for my tastes. I'd also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.
I am naturally an angsty person, and I don't carry much reputational risk, so this seemed like a natural fit.
I agree with NunoSempere that Benjamin's epistemics might be suffering from the nobility of his message. It's a feel-good encouragement to give, complete with a sympathetic photo of a very poor person who might benefit from your generosity. Because that message is so good and important, it requires a different style of writing and thinking than "let's try very hard to figure out what's true."
Additionally, I see Benjamin's post as a reaction to some popular myths. This is great, but we shouldn't mistake "some arguments against X are wrong" for "X is correct".
So as not to bury the lede: I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.
Funny enough, although this is framed as a "red-team" post, I think that Benjamin mostly agrees with that advice. You can choose to take this as evidence that the advice is robust to worldview diversification, or as evidence that I'm really bad at red-teaming and falling prey to justification drift.
In terms of epistemic status: I take my own arguments here seriously, but I don't see them as definitive. Specifically, this post is meant to counterbalance Benjamin's, so you should read his first, or at least read it later as a counterbalance against this one.
1. Our default view should be that high-impact funding capacity is already filled.
Consider Benjamin's explanation for why donating to LTFF is so valuable:
I would donate to the Long Term Future Fund over the global health fund, and would expect it to be perhaps 10-100x more cost-effective (and donating to global health is already very good). This is mainly because I think issues like AI safety and global catastrophic biorisks are bigger in scale and more neglected than global health.
I absolutely agree that those issues are very neglected, but only among the general population. They're not at all neglected within EA. Specifically, the question we should be asking isn't "do people care enough about this", but "how far will my marginal dollar go?"
To answer that latter question, it's not enough to highlight the importance of the issue, you would have to argue that:
There are longtermist organizations that are currently funding-constrained,
Such that more funding would enable them to do more or better work,
And this funding can't be met by existing large EA philanthropists.
It's not clear to me that any of these points are true. They might be, but Benjamin doesn't take the time to argue for them very rigorously. Lacking strong evidence, my default assumption is that the funding capacity of extremely high-impact organizations well aligned with EA ideology will be filled by existing donors.
Benjamin does admirably clarify that there are specific programs he has in mind:
there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic.
At face value, CEPI seems great. But at the meta-level, I still have to ask, if CEPI is a good use of funds, why doesn't OpenPhil just fund it?
In general, my default...
|
Dec 12, 2021 |
You have more than one goal, and that's fine by Julia_Wise
00:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: You have more than one goal, and that's fine, published by Julia_Wise on the Effective Altruism Forum.
Write a Review
This version of the essay has been lightly edited. You can find the original here.
When people come to an effective altruism event for the first time, the conversation often turns to projects they’re pursuing or charities they donate to. They often have a sense of nervousness around this, a feeling that the harsh light of cost-effectiveness is about to be turned on everything they do. To be fair, this is a reasonable thing to be apprehensive about, because many youngish people in EA do in fact have this idea that everything in life should be governed by cost-effectiveness. I've been there.
Cost-effectiveness analysis is a very useful tool. I wish more people and institutions applied it to more problems. But like any tool, this tool will not be applicable to all parts of your life. Not everything you do is in the “effectiveness” bucket. I don't even know what that would look like.
I have lots of goals. I have a goal of improving the world. I have a goal of enjoying time with my children. I have a goal of being a good spouse. I have a goal of feeling connected in my friendships and community. Those are all fine goals, but they’re not the same. I have a rough plan for allocating time and money between them: Sunday morning is for making pancakes for my kids. Monday morning is for work. It doesn’t make sense to mix these activities, to spend time with my kids in a way that contributes to my work or to do my job in a way that my kids enjoy.
If I donate to my friend’s fundraiser for her sick uncle, I’m pursuing a goal. But it’s the goal of “support my friend and our friendship,” not my goal of “make the world as good as possible.” When I make a decision, it’s better if I’m clear about which goal I’m pursuing. I don’t have to beat myself up about this money not being used for optimizing the world — that was never the point of that donation. That money is coming from my "personal satisfaction" budget, along with money I use for things like getting coffee with friends.
I have another pot of money set aside for donating as effectively as I can. When I'm deciding what to do with that money, I turn on that bright light of cost-effectiveness and try to make as much progress as I can on the world’s problems. That involves looking at the research on different interventions and choosing what I think will do the most to bring humanity forward in our struggle against pointless suffering, illness, and death. The best cause I can find usually ends up being one that I didn’t previously have any personal connection to, and that doesn’t nicely connect with my personal life. And that’s fine, because personal meaning-making is not my goal here. I can look for personal meaning in the decision afterward, but that's not what drives the decision.
When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal. It’s fine to support your local arts organization because their work gives you joy, because you want to be active in your community, or because they helped you and you want to reciprocate. If you also have a goal of improving the world as much as you can, decide how much time and money you want to allocate to that goal, and try to use those resources as effectively as you can.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Is Democracy a Fad? by Ben Garfinkel
28:53
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Is Democracy a Fad?, published by Ben Garfinkel on the Effective Altruism Forum.
This cross-post from my personal blog explains why I think democracy will probably recede within the next several centuries, supposing people are still around.
The key points are that: (1) Up until the past couple centuries, nearly all states have been dictatorships. (2) There are many examples of system-wide social trends, including the rise of democracy in Ancient Greece, that have lasted for a couple centuries and then been reversed. (3) If certain popular theories about democratization are right, then widespread automation would negate recent economic changes that have allowed democracy to flourish.
This prediction might have some implications for what people who are trying to improve the future should do today (although I'm not sure what these implications are). It might also have some implications for how we should imagine the future more broadly. For example, it might give us stronger reasons to doubt that future generations will take inclusive approaches to any consequential decisions they face.[1]
Introduction
There’s a strange new trend that’s been sweeping the world. In recent centuries, you may have noticed, it has become more and more common for people to choose their own leaders. Five thousand years after states first emerged, democracy has been taking off in a big way.
The average state’s level of democracy over the past two hundred years. States with sub-zero scores are more autocratic than democratic.[2]
If you follow politics, then you’ve probably already heard a lot about democracy. Still, though, a quick definition might be useful. In a proper democracy, the state’s most important figures are at least indirectly chosen through elections. A large portion of the people ruled by the state are allowed to vote, these votes are counted more-or-less accurately and more-or-less equally, and there’s no truly serious funny business.[3]
Proper democracies are something new. For most of the past five thousand years, dictatorship has been the standard model for states. We don’t know much about the first state, Uruk, but the most common theory is that it was a theocracy ruled by a small priestly class. Monarchy emerged a bit later, spread across the broader Near East, and then stuck around in one form or another for thousands of years.
Many archeologists suspect these little bowls were used to ration out grain to people doing forced labor. They are also by far the most common artifact found around Uruk, which is often taken as an ominous sign.
In other parts of the world, small states with noteworthy democratic elements have emerged from time to time. Certain small states in Greece, as the most famous example, were borderline-proper democracies for a couple hundred years. However, if there was any trend at all, then the trend was toward more consistent and complete dictatorship. States with noteworthy democratic elements tended to lose these elements over time, as they either expanded or fell under the influence of larger states.[4] No sensible person living one thousand years ago would have predicted the recent democratic surge.
It’s natural to wonder: Will this rise in democracy last? Or will democracy turn out to be only a passing fad—something like the Ice Bucket Challenge of regime types?[5]
Let’s suppose, to be more specific, that one thousand years from now people and states still at least kind of exist. How surprised should we be if democracy is no more common then than it was in the year 1000AD?
An Outside View
One way to approach this question is to think hard about history, political science, economics, the future of technology, and all that. Another way to approach the question is just to look at the long-run trend.
The trend, again, is roughly this: Democracy was very ...
|
Dec 12, 2021 |
I scraped all public "Effective Altruists" Goodreads reading lists by MaxRa
05:32
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: I scraped all public "Effective Altruists" Goodreads reading lists, published by MaxRa on the Effective Altruism Forum.
A couple of weeks ago I mentioned the idea of scraping the reading lists of the members of the Effective Altruists Goodreads group. The initial motivation was around the idea that EAs might be reading too much of the same books, and we might improve this by finding out which books are read relatively little compared to how many EAs proclaim that they want to read them. I got some positive feedback and got to work. Besides helping a little with improving our exploration of literature, I think the results also serve as an interesting survey of the reading behavior of EAs. Though we might want to keep in mind a possible selection bias for EAs and EA-adjacent people that share their reading behavior on Goodreads.
For those who don’t know Goodreads, it’s a social network where you can share ratings and reviews of the books you’ve read, and organize books in shelves like I have read this! or I want to read this!. It’s quite fun, many EAs are on there and I wholeheartedly recommend joining.
In total, there were 349 people in the Effective Altruists Goodreads group, and 275 of them had their privacy settings set to completely public, allowing anyone to inspect their reading lists even without being logged in. I checked the Goodreads scraping rules and was good to go.
Before you continue, I invite you to predict the following:
3 from the 10 most read books, except Doing Good Better
a book that relatively many EAs want to read, but few have actually read
Finally, if you have any further ideas for analysis, leave a comment and I’ll be happy to see what I can do. If you want access to the csv file or the Python script I used, I uploaded them here. In this screenshot you see the types of data I have.
Most read books
Here are the books that our community has already explored a bunch. I would not have expected 1984 and Superintelligence to make it into the Top 5. HPMOR being the least read Harry Potter novel is a slight disappointment.
Most planned to read
Many classics on people’s I want to read this! lists, maybe overall slightly lengthier & more difficult books? Though Superforecasting is not too long and very readable and very excellent in my opinion, so feel free to read this one.
Highest planned to read / have read ratio
These are the books that might be more useful to be read by more EAs, as many say they want to read them, but in proportion the fewest people have actually read them. Of course, there are good reasons why some of those books are read less, e.g. some of them, like The Rise and Fall of American Growth, Probability Theory or The Feynman Lectures on Physics would take me enormously more time to read compared to, say, 1984 (which still took me, a relatively slow reader, something on the order of 10 to 20 hours). Also, the vast majority of the books in this list have only been read by one person, so a score of 11 can be interpreted as one person having read the book and 11 people wanting to read it. Additionally, as of now this list excludes books that have never been read by any EA, as the ratio would be infinite. For those books, see the next section.
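For illustration, computing such a ratio from the shared csv file might look roughly like the sketch below. This is not the author's actual script, and the file name and column names ("title", "read_count", "want_to_read_count") are assumptions.

```python
# Sketch only -- not the author's actual script; file and column names are assumed.
import pandas as pd

books = pd.read_csv("ea_goodreads_books.csv")  # hypothetical file name

# Exclude never-read books (the ratio would be infinite), then compute the ratio
read = books[books["read_count"] >= 1].copy()
read["ratio"] = read["want_to_read_count"] / read["read_count"]

ranked = read.sort_values("ratio", ascending=False)
print(ranked[["title", "read_count", "want_to_read_count", "ratio"]].head(10))

# Requiring at least 2 reads (as in the list that follows) filters out books
# whose high ratio comes from a single reader
print(ranked[ranked["read_count"] >= 2].head(10))
```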
If we only allow books with at least 2 reads, we get this list:
Most commonly planned to read books that have not been read by anyone yet
I’ll consider it a big success of this project if some people have read Julia Galef's The Scout Mindset or Energy and Civilization next time I check.
Highest rated books
Here are the highest rated of all books that were read at least 10 times. Not too many surprises here, EAs know what's good!
Lowest rated books
Here the same, but for the lowest rated books. Before any fandom feels too ostracized (speaking as somebody who absolutely loved the Eragon saga), I should info...
|
Dec 12, 2021 |
Five New EA Charities with High Potential for Impact by Joey
12:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Five New EA Charities with High Potential for Impact, published by Joey on the Effective Altruism Forum.
Write a Review
Cross-posted from Charity Entrepreneurship blog.
Hundreds of ideas researched, thousands of applications considered, and a two-month intensive Incubation Program culminated in five new charities being founded. Each of these charities has the potential to have a large impact on the world and to become one of the most cost-effective in their field.
We’re delighted to announce the five new charities who have just launched through our 2020 Incubation Program:
Lead Exposure Elimination Project (LEEP) - advocating for lead paint regulation to reduce lead poisoning and improve the health, well-being, and potential of children worldwide.
Animal Ask - maximizing farmed animal asks through dedicated research.
Family Empowerment Media (FEM) - enabling informed family planning and birth spacing decisions through clear, compelling, and accurate radio-based communication.
Giving Green - directing dollars towards evidence-backed projects that combat the climate crisis.
Canopie - bridging the mental health care gap for pre- and postpartum women through cost-effective, scalable, and evidence-based solutions.
LEAD EXPOSURE ELIMINATION PROJECT (LEEP)
Co-founders: Lucia Coulter, Jack Rafferty
Website: leadelimination.org
Contact: contact@leadelimination.org
CE Incubation Grant: $60,000
Room for more funding: $25,000
Donation page: leadelimination.org/donate/
Description of the intervention:
LEEP advocates for lead paint regulation to reduce lead poisoning and improve the health, well-being, and potential of children worldwide.
Background of the intervention:
One in three children has dangerous levels of lead in their bloodstream. This lead acts as a powerful toxin that causes irreversible harm to their brains and vital organs. It results in reduced intelligence, lower educational attainment, behavioral disorders, increased tendencies for violent crime, cardiovascular disease, and reduced lifetime earnings.
The impact on cognitive development is responsible for an estimated $1 trillion of income loss per year in LMICs alone, while the health effects cause 1 million deaths and 22.4 million DALYs per year, accounting for 1% of the global burden of disease. A primary cause of lead exposure is lead paint, which is widespread and unregulated in over 100 countries.
LEEP advocates for regulation of lead paint in countries with large and growing burdens of lead poisoning from paint, where no-one else is working on the issue. Their approach is to identify these countries, create incentives through awareness, and support governments to develop and introduce lead paint laws.
For more details on LEEP, read their introductory post on the EA forum. To hear about their progress, sign up for their newsletter.
Near-term plans:
LEEP’s first priority is country selection to ensure they target tractable, high-burden, and neglected countries. They have so far identified Malawi as their most promising country on this basis. Over the next two months, LEEP will be testing the levels of lead in new paints on the market in Malawi and building relationships with stakeholders and decision-makers. Depending on findings and progress from this stage, they will either pilot their advocacy campaign in Malawi to introduce lead paint regulation, or pivot to another promising country.
ANIMAL ASK
Co-founders: George Bridgwater, Amy Odene
Website: www.animalask.org
Contact: info@animalask.org
CE Incubation Grant: $100,000
Room for more funding: In early 2021 when we have evaluated our organizational worth to the movement, we may seek additional funding.
Description of the intervention:
Animal Ask was founded with the express aim to assist animal advocacy organizations in their efforts to reduce farmed an...
|
Dec 12, 2021 |
Can EA leverage an Elon-vs-world-hunger news cycle? by Jackson Wagner
04:00
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Can EA leverage an Elon-vs-world-hunger news cycle?, published by Jackson Wagner on the Effective Altruism Forum.
Summary: Elon Musk promises to donate six billion dollars if the UN can explain how this would truly solve world hunger (it would probably be much more expensive). Regardless of whether the donation happens or not, a major news cycle about the cost-effectiveness of international charitable donations seems like a great opportunity to raise the public profile of effective altruism.
Details of the Billionaire-Bashing Drama
US senators are currently debating a large bill that will probably include some form of tax increase on the rich. Elon Musk, now the world's richest man, voiced his opposition to a proposed tax on unrealized capital gains. He framed his opposition specifically in terms of government inefficiency, saying:
"Who is best at capital allocation -- government or entrepeneurs -- is indeed what it comes down to."
More recently, a CNN headline asserted that just 2% of Elon Musk's ~$300B net worth could "solve world hunger" by feeding the 42 million people who suffer from malnutrition -- people who are otherwise "literally going to die".
Inevitably, this claim turns out to be somewhat hyperbolic/innumerate -- 6 billion dollars divided by 42 million people is around $140 per person, which would be far cheaper per life than GiveWell's most effective interventions (~$5,000 per life saved). Maybe this billionaire-bashing CNN interview is revealing an astounding, hitherto unknown charitable opportunity. But more likely, most of the people are not literally going to die and/or the effort to alleviate the problem would cost much more than $6 billion (at the very least, if we need to keep giving people food each year, the real cost would be $6 billion repeating annually). I haven't yet looked into the details of the situation too closely.
Now, Elon has offered to indeed donate $6 billion, on the (presumably impossible) condition that the UN provide a realistic plan for how the problem of world hunger could legitimately be solved on that budget. For scale, six billion dollars devoted to EA cause areas would represent more than a 10% increase on the ~$42B total funds currently committed to the movement. Right now, EA organizations spend about $0.2 billion on GiveWell-style global health charities each year.
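For readers who want to check the arithmetic, here is a minimal sketch reproducing the figures quoted above; the inputs are the round numbers from the post and the news coverage, not independently verified.

```python
# Back-of-the-envelope arithmetic using the figures quoted in the post (not verified)
donation = 6e9               # Elon's conditional pledge, in dollars
people = 42e6                # people CNN described as at risk

per_person = donation / people
print(f"${per_person:,.0f} per person")                      # ~$143, i.e. "around $140"

givewell_cost_per_life = 5_000                               # rough GiveWell figure
print(f"{givewell_cost_per_life / per_person:.0f}x cheaper per life than "
      "GiveWell's top charities, if each ~$140 really averted a death")

ea_funds_committed = 42e9                                    # ~total committed EA funds
print(f"{donation / ea_funds_committed:.0%} of total committed EA funds")  # ~14%
```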
This Seems Like A Good Time For EA To Shine
This conversation is already distinct from most billionaire-related discourse for its focus on cost-effectiveness and international aid for the world's poorest, rather than the usual arguing over the fairness of allowing rich people to exist at all and the desire to increase taxes in order to fund more social services in the developed world.
In short, for a brief moment in time, a major news cycle is focused on how one can do the most good to save the most lives per dollar. This obviously seems like a great time to introduce the ideas of effective altruism to more people. I can only imagine that Kelsey Piper is already busily drafting up an article about this for Future Perfect. But what else can EA do to capitalize on this news cycle? Should GiveWell try to outline how they would attempt to spend six billion dollars? Surely their current top charities would run out of room-for-more-funding? Would it be wiser to stay on-message with a relatively simple theme, like promoting GiveWell's expertise in cost-effective global health and development spending? Or should we try to fire off a bunch of thinkpieces climbing the counterintuitiveness ladder from typical disaster aid to growth economics, and from there onwards to longtermism, x-risk reduction, etc? What should EA's general strategy be around these news cycles -- the movement generally tries to avoid political polarization, but surely some events are go...
|
Dec 12, 2021 |
If you like a post, tell the author! by Aaron Gertler
02:47
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: If you like a post, tell the author!, published by Aaron Gertler on the Effective Altruism Forum.
Write a Review
I wonder whether I should write more comments pointing out what I liked in a post even if I don't have anything to criticise instead of just silently upvoting.
- Denise Melchin
I've heard this question quite a few times, and the answer is: Yes! Absolutely yes! Tell authors when you like something they've written!
Imaginary case study
Consider the experience of a Forum author who writes a post most readers like, in a world where people only comment if they have a critique.
They go to the Forum and see a string of comments:
"You're wrong about A."
"You're wrong about B."
"Why didn't you mention C?"
The post could have dozens of upvotes, but if it looks like anyone who closely engaged with it found something to criticize, the author may not feel great about their work.
(This doesn't mean that criticism isn't valuable: If you find something to criticize, you should also probably tell the author.)
In a world where people share what they like about posts, the comments might be:
"You're wrong about A."
"I see what the above poster means about point A1, but I thought point A2 was actually an interesting take, and could be correct under assumption Q."
"I hadn't read this post you linked to — thanks for the reference!"
"You're wrong about B."
"I really liked your discussion of B!"
"Why didn't you mention C?"
"Your points about D and E were really helpful for a project I'm working on."
The criticism still exists, but I'd expect the author to feel better about responding if they know the post was valuable to some readers.
Also, positive reactions are useful feedback in their own right!
Frequently asked questions
What if my positive comment is just "thanks, I enjoyed this?"
Still good! Even a generic nice comment will be much more salient to most authors than a silent upvote.
What if my positive comment just takes up space in a way that distracts from more important critical discussion and intellectual progress and whatnot?
This is paraphrased from things I've actually heard when talking to Forum users.
While I understand the concern, I must emphasize that the Forum exists on the Internet, a system of interconnected computer networks where space is effectively unlimited. We also offer the "scrollbar," a feature people can use to skip over comments they don't want to read or discuss.
If someone finds your positive comment distracting, they can scroll past it. But there's at least one person who probably won't find it distracting — the author.
Conclusion
If you like a post, tell the author!
If you don't like a post, it's also fine to tell the author!
But at the very least, let's try to make sure authors don't get a negatively-skewed view of how people think about their posts.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Matt Levine on the Archegos failure by Kelsey Piper
06:29
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Matt Levine on the Archegos failure, published by Kelsey Piper on the Effective Altruism Forum.
Matt Levine is a finance writer with a very entertaining free newsletter, also available on Bloomberg to subscribers. Today's newsletter struck me as a fairly remarkable failure analysis of a very expensive failure, in which Credit Suisse lost $5.5 billion when the hedge fund Archegos collapsed. That doesn't usually happen, and banks are, of course, very incentivized to avoid it. When it happened, Credit Suisse commissioned a very thorough investigation into what went wrong.
Some background: Archegos was a hedge fund, founded in 2013, that defaulted spectacularly this spring. The Wall Street Journal estimates that they lost $8 billion in 10 days. Levine wrote at the time:
The basic story of Archegos is that it extracted as much leverage as possible from a half dozen Wall Street banks to buy a concentrated portfolio of tech and media stocks (apparently partially hedged with short index positions[2]), and those stocks went up a lot, before going down a lot.
If you merely own some stocks, and they go way up and then way down, you'll end with approximately the money you started with and everything will be fine. But if you have taken out loans to buy stocks, then when they go up your wealth has increased. And if you then use your increased wealth to borrow lots more money and buy more stocks, then when they go down you will lose $8 billion in ten days.
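To make that dynamic concrete, here is a toy sketch with invented round numbers (not Archegos's actual positions or leverage) showing how re-leveraging after a rally can turn a round trip in prices into a wipeout.

```python
# Toy illustration with invented numbers -- NOT Archegos's actual figures.
capital = 1.0               # your own money
leverage = 5.0              # total position is 5x your equity

position = capital * leverage          # 5.0 of stock
debt = position - capital              # 4.0 borrowed

# Stocks rise 50%: the debt is unchanged, so equity balloons
position *= 1.5                        # 7.5
equity = position - debt               # 3.5
print(f"after the rally, equity = {equity:.2f}")

# Re-leverage the new, larger equity at the same 5x
position = equity * leverage           # 17.5
debt = position - equity               # 14.0

# Stocks fall by a third, back to where they started: equity is wiped out
position *= 2 / 3                      # ~11.67
equity = position - debt               # ~ -2.33  -> default
print(f"after the fall, equity = {equity:.2f}")
```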
None of this is unknown to bankers, so it's confusing that the bankers let Archegos do this. In the immediate aftermath, there was a lot of theorizing about how the banks might have had inaccurate or incomplete information about how heavily leveraged Archegos was. Levine:
When the Archegos story came out this spring, there was a sense, from the outside, that the banks had missed something, that there was some structural component of Archegos’s trades that caused the banks to underestimate the risks they were taking. For instance, there was a widespread theory that, because Archegos did most of its trades in the form of total return swaps (rather than owning stocks directly), it didn’t have to disclose its positions publicly, and because it did those swaps with multiple banks, none of the banks knew how big and concentrated Archegos’s total positions were, so they didn’t know how bad it would be if Archegos defaulted.
But, nope, absolutely not, Credit Suisse was entirely plugged in to Archegos’s strategy and how much trading it was doing with other banks, and focused clearly on this risk.
So what went wrong? According to the report Credit Suisse commissioned from a law firm on the whole mess, what went wrong is that Credit Suisse determined that Archegos was overleveraged, and that they needed more collateral, and they called Archegos to that effect, and Archegos responded "hey sorry I've been swamped this week, can we talk later?" and that was that.
No, really, that's pretty much it.
The report:
On February 23, 2021, the PSR analyst covering Archegos reached out to Archegos’s Accounting Manager and asked to speak about dynamic margining. Archegos’s Accounting Manager said he would not have time that day, but could speak the next day. The following day, he again put off the discussion, but agreed to review the proposed framework, which PSR sent over that day. Archegos did not respond to the proposal and, a week-and-a-half later, on March 4, 2021, the PSR analyst followed up to ask whether Archegos “had any thoughts on the proposal.” His contact at Archegos said he “hadn’t had a chance to take a look yet,” but was hoping to look “today or tomorrow.”
Of course, when your counterparty is refusing to give you more collateral, you can pull all their loans. But Credit Suisse was kind of reluctant to pull that lever given that it wa...
|
Dec 12, 2021 |
Movement Collapse Scenarios by rebecca_baron
15:30
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Movement Collapse Scenarios, published by rebecca_baron on the Effective Altruism Forum.
Epistemic status: I mostly want to provide a starting point for discussion, not make any claims with high confidence.
Introduction and summary
It’s 2024. The effective altruism movement no longer exists, or is no longer doing productive work, for reasons our current selves wouldn’t endorse. What happened, and what could we have done about it in 2019?
I’m concerned that I don’t hear this question discussed more often (though CEA briefly speculates on it here). It’s a prudent topic for a movement to be thinking about at any stage of its life cycle, but our small, young, rapidly changing community should be taking it especially seriously—it’s very hard to say right now where we’ll be in five years. I want to spur thinking on this issue by describing four plausible ways the movement could collapse or lose much of its potential for impact. This is not meant to be an exhaustive list of scenarios, nor is it an attempt to predict the future with any sort of confidence—it’s just an exploration of some of the possibilities, and what could logically lead to what.
Sequestration: The EAs closest to leadership become isolated from the rest of the community. They lose a source of outside feedback and a check on their epistemics, putting them at a higher risk of forming an echo chamber. Meanwhile, the rest of the movement largely dissolves.
Attrition: Value drift, burnout, and lifestyle changes cause EAs to drift away from the movement one by one, faster than they can be replaced. The impact of EA tapers, though some aspects of it may be preserved.
Dilution: The movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and “effective altruism” becomes a meaningless term, making the original ideas impossible to communicate.
Distraction: The community becomes engrossed in concerns tangential to impact, loses sight of the object level, and veers off track of its goals. Resources are misdirected and the best talent goes elsewhere.
Below, I explore each scenario in greater detail.
Sequestration
To quote CEA’s three-factor model of community building,
Some people are likely to have a much greater impact than others. We certainly don’t think individuals with more resources matter any more as people, but we do think that helping direct their resources well has a higher expected value in terms of moving towards CEA’s ultimate goals.
However,
good community building is about inclusion, whereas good prioritization is about exclusion
and
It might be difficult in practice for us to be elitist about the value someone provides whilst being egalitarian about the value they have, even if the theoretical distinction is clear.
I don’t want to be seen as arguing for any position in the debate about whether and how much to prioritize those who appear most talented—a sufficiently nuanced writeup of my thoughts would distract from my main point here. However, I do want to highlight a possible risk of too much elitism that I haven’t really seen talked about. The terms “core” and “middle” are commonly used here, but I generally find their use conflates level of involvement or commitment with level of prominence or authority. In this post I’ll be using the following definitions:
Group 1 EAs are interested in effective altruism and may give effectively or attend the occasional meetup, but don’t spend much time thinking about EA or consider it a crucial part of their identities and their lives.
Group 2 EAs are highly dedicated to the community and its project of making the world a better place; they devour EA content online and/or regularly attend meetups. However, they are not in frequent contact with EA decision-makers.
Group 3 EA...
|
Dec 12, 2021 |
Some promising career ideas beyond 80,000 Hours' priority paths by Ardenlk
24:40
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Some promising career ideas beyond 80,000 Hours' priority paths, published by Ardenlk on the Effective Altruism Forum.
Write a Review
This is a sister post to "Problem areas beyond 80,000 Hours' current priorities".
Introduction
In this post, we list some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future.
Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven't written full profiles for them yet—for example policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective.
Others, like information security, we think might be as promising for many people as our priority paths, but because we haven't investigated them much we're still unsure.
Still others seem like they'll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit—for example research management.
Finally some—like becoming a public intellectual—clearly have the potential for a lot of impact, but we can't recommend them widely because they don't have the capacity to absorb a large number of people, are particularly risky, or both.
We compiled this list by asking 6 advisers about paths they think more people in the effective altruism community should explore, and which career ideas they think are currently undervalued—including by 80,000 Hours. In particular, we were looking for paths that seem like they may be promising from the perspective of positively shaping the long-term future, but which aren't already captured by aspects of our priority paths. If something was suggested twice and also met those criteria, we took that as a presumption in favor of including it. We then spent a little time looking into each one and put together a few thoughts and resources for those that seemed most promising. The result is the list below.
We'd be excited to see more of our readers explore these options, and plan on looking into them more ourselves.
Who is best suited to pursue these paths? Of course the answer is different for each one, but in general pursuing a career where less research has been done on how to have a large impact within it—especially if few of your colleagues will share your perspective on how to think about impact—may require you to think especially critically and creatively about how you can do an unusual amount of good in that career. Ideal candidates, then, would be self-motivated, creative, and inclined to think rigorously and often about how they can steer toward the highest impact options for them—in addition to having strong personal fit for the work.
What are the pros and cons of each of these paths? Which are less promising than they might at first appear? What particular routes within each one are the most promising and which are the least? What especially promising high-impact career ideas is this list missing?
We're excited to read people's reactions in the comments. And we hope that for people who want to pursue paths outside those we talk most about, this list can give them some fruitful ideas to explore.
Career ideas we're particularly excited about beyond our priority paths
Become a historian focusing on large societal trends, inflection points, progress, or collapse
We think it could be high-impact to study subjects relevant to the long-term arc of history—e.g., economic, intellectual, or moral progress from a long-term perspective, the history of social movements or philanthropy, or the history of wellbeing. Better understanding long trends and key inflection points, such as the industrial revolution, may help us understand what could cause other im...
|
Dec 12, 2021 |
Estimates of global captive vertebrate numbers by saulius
01:59:05
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Estimates of global captive vertebrate numbers, published by saulius on the Effective Altruism Forum.
Write a Review
In this article, I list all the estimates I could find for numbers of vertebrates that are farmed or kept in captivity for various purposes. I also describe some groups of captive vertebrates for which I found no estimates. For some bigger groups of animals that are less well-known amongst animal activists, I also describe trends and main welfare concerns.
The purpose of the article is to make it easier to find and compare estimates. Hopefully, this can also help animal advocates decide which issues to focus on. Note that I chose to focus on captive vertebrates simply to limit the scope of the article.
Summary tables
Estimates are summarized in the tables below. Numbers can also be explored in this spreadsheet. The rest of the article provides sources and explanations for these numbers.
All figures are for the whole world unless otherwise specified. For brevity, I use M for a million (10^6) and B for a billion (10^9).
Reptiles and amphibians
table
Fish
table
Note that all the numbers above exclude shellfish. I’ve found no estimates for:
fish used as live bait in commercial fishing,
fish trapped in nets and traps,
fish hooked on hooks in commercial and recreational fisheries,
food fish transported alive,
other species of wild-caught fish suffocating in the air after landing.
Chickens
table
Most of the 68.8B slaughtered chickens were raised specifically for meat, but the figure seems to include at least some slaughtered chickens from the egg-laying industry (see the appendix in Šimčikas (2019a)). According to FAOSTAT, in total there were 23.7B chickens alive in 2018.
Other birds
table
I haven’t found estimates for the number of:
ducks and geese live-plucked for their feathers and down,
swiftlets farmed for their nests,
ostriches farmed for meat
Mammals
table
I haven’t tried finding the number of:
animals raised in other pet mills (kitten mills, rabbit mills, etc.),
animals raised in more humane pet breeding facilities,
household rodent pests caught in traps,
civets used to make civet coffee,
elk farmed for food.
Mixed species
table
I haven’t estimated the number of animals who are:
kept alive in food markets,
captured or captive-bred to be released into the wild as a Buddhist ritual to earn good karma (fangsheng),
raised to be hunted in countries other than the UK,
used in circuses outside of Europe,
used for racing, fighting, and other forms of entertainment,
kept in wildlife rehabilitation clinics,
land animals bred in captivity to be released into the wild for species reintroduction programs.
various species of animals kept in wildlife farms (see Standaert (2020))
Pets
The table below presents Euromonitor International data, which has estimates of the number of pets in 53 countries,[1] which account for about 70% of the world’s human population.
table
If estimates in tables above seem difficult to compare and comprehend, it may be useful to look at the appendix where I convert estimates into units of time. Estimates can also be explored in this spreadsheet.
Explanation of uncertainty levels
In the ‘Uncertainty’ columns in the tables above, I describe the uncertainty for each estimate as low, moderate, high, or very high. Here is roughly what I mean by these words:
Low - the estimate comes from a trustworthy source that explains how it arrived at the estimate. I’d be surprised if the estimate was off by a factor of 1.5 or more. In cases I provide a point estimate (e.g., “1M”), this means that I’d be surprised if the actual number was more than 1.5 times smaller or larger than the estimate. In cases where I provide a range (e.g., “1M–2M”), this means that I’d be surprised if the real number was more than 1.5 times larger than the upper bound or more th...
|
Dec 12, 2021 |
Does Economic History Point Toward a Singularity? by Ben Garfinkel
05:40
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Does Economic History Point Toward a Singularity?, published by Ben Garfinkel on the Effective Altruism Forum.
Write a Review
I’ve ended up spending quite a lot of time researching premodern economic growth, as part of a hobby project that got out of hand. I’m sharing an informal but long write-up of my findings here, since I think they may be relevant to other longtermist researchers and I am unlikely to write anything more polished in the near future. Click here for the Google document.[1]
Summary
Over the next several centuries, is the economic growth rate likely to remain steady, radically increase, or decline back toward zero? This question has some bearing on almost every long-run challenge facing the world, from climate change to great power competition to risks from AI.
One way to approach the question is to consider the long-run history of economic growth. I decided to investigate the Hyperbolic Growth Hypothesis: the claim that, from at least the start of the Neolithic Revolution up until the 20th century, the economic growth rate has tended to rise in proportion with the size of the global economy.[2] This claim is made in a classic 1993 paper by Michael Kremer. Beyond influencing other work in economic growth theory, it has also recently attracted significant attention within the longtermist community, where it is typically regarded as evidence in favor of further acceleration.[3] An especially notable property of the hypothesized growth trend is that, if it had continued without pause, it would have produced infinite growth rates in the early twenty-first century.
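To see why that hypothesized trend implies a finite-time singularity, note that a growth rate proportional to the economy's size means dY/dt = c·Y², whose solution diverges at a finite time. The sketch below is a toy calculation with arbitrary constants, not Kremer's actual model.

```python
# Toy model only, not Kremer's specification: if the growth *rate* is proportional
# to the size of the economy Y, then dY/dt = c * Y**2, with exact solution
# Y(t) = Y0 / (1 - c*Y0*t), which blows up at the finite time t = 1 / (c*Y0).
c, Y0 = 1e-4, 10.0                  # arbitrary illustrative constants
t_singularity = 1 / (c * Y0)        # = 1000 time units in this toy example

for t in (0, 500, 900, 990, 999):
    Y = Y0 / (1 - c * Y0 * t)       # exact solution of dY/dt = c * Y**2
    print(f"t = {t:4d}: Y = {Y:12,.1f}")

print(f"Y grows without bound as t approaches {t_singularity:.0f}")
```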
I spent time exploring several different datasets that can be used to estimate pre-modern growth rates. This included a number of recent archeological datasets that, I believe, have not previously been analyzed by economists. I wanted to evaluate both: (a) how empirically well-grounded these estimates are and (b) how clearly these estimates display the hypothesized pattern of growth.
Ultimately, I found very little empirical support for the Hyperbolic Growth Hypothesis. While we can confidently say that the economic growth rate did increase over the centuries surrounding the Industrial Revolution, there is approximately nothing to suggest that this increase was the continuation of a long-standing hyperbolic trend. The alternative hypothesis that the modern increase in growth rates constituted a one-off transition event is at least as consistent with the evidence.
The premodern growth data we have is mostly extremely unreliable: For example, so far as I can tell, Kremer’s estimates for the period between 10,000BC and 400BC ultimately derive from a single speculative paragraph in a book published decades earlier. Putting aside issues of reliability, the various estimates I considered also, for the most part, do not clearly indicate that pre-modern growth was hyperbolic. The most empirically well-grounded datasets we have are at least weakly in tension with the hypothesis. Overall, though, I think we are in a state of significant ignorance about pre-modern growth rates.
Beyond evaluating these datasets, I also spent some time considering the growth model that Kremer uses to explain and support the Hyperbolic Growth Hypothesis. One finding is that if we use more recent data to estimate a key model parameter, the model may no longer predict hyperbolic growth: the estimation method that we use matters. Another finding, based on some shallow reading on the history of agriculture, is that the model likely overstates the role of innovation in driving pre-modern growth.
Ultimately, I think we have less reason to anticipate a future explosion in the growth rate than might otherwise be supposed.[4][5]
EDIT: See also this addendum comment for an explanation of why I think the alternative "phase transition" ...
|
Dec 12, 2021 |
List of EA funding opportunities by MichaelA
12:51
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: List of EA funding opportunities, published by MichaelA on the Effective Altruism Forum.
This is a quickly written post listing opportunities for people to apply for funding from funders that are part of the EA community.
I'm probably forgetting some opportunities relevant to longtermist and EA movement building work, and many opportunities relevant to other cause areas. Please comment if you know of things I’m missing!
Maybe a list like this already exists; if so, please comment to mention it! Two somewhat similar things are the 80,000 Hours job board and Effective Thesis’s list of funding opportunities.
I strongly encourage people to consider applying for one or more of these things. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren’t very high. (I think the same basic logic applies to job applications.)
It'd probably be useful for someone to make a separate collection of non-EA funding opportunities that would be well-suited to EA-aligned projects. But such things are not included in this post.
I follow the name of each funding opportunity with some text from the linked page.
I wrote this post in a personal capacity, not as a representative of any of the orgs mentioned.
See also Things I often tell people about applying to EA Funds.
EDIT: JJ Hepburn has now created an Airtable with similar info, which you can view the outputs of here. That currently complements this post, and could supersede this post if someone takes ownership of adding things there, updating the info, and ironing out potential glitches. Please contact me if you're interested in doing that.
Currently open Open Phil funding opportunities
Request for proposals for growing the community of people motivated to improve the long-term future
“We are seeking proposals from applicants interested in growing the community of people motivated to improve the long-term future via the kinds of projects described below.
Apply to start a new project here; express interest in helping with a project here.
Applications are open until further notice and will be assessed on a rolling basis. If we plan to stop accepting applications, we will indicate it on this page at least a month ahead of time.
See this post for additional details about our thinking on these projects.”
Open Philanthropy Undergraduate Scholarship
“Apply here (see below for details regarding application deadlines).
This program aims to provide support for highly promising and altruistically-minded students who are hoping to start an undergraduate degree at one of the top universities in the USA or UK (see below for details) and who do not qualify as domestic students at these institutions for the purposes of admission and financial aid.”
Open Philanthropy Course Development Grants
“This program aims to provide grant support to academics for the development of new university courses (including online courses). At present, we are looking to fund the development of courses on a range of topics that are relevant to certain areas of Open Philanthropy’s grantmaking that form part of our work to improve the long-term future (potential risks from advanced AI, biosecurity and pandemic preparedness, other global catastrophic risks), or to issues that are of cross-cutting relevance to our work. We are primarily looking to fund the development of new courses, but we are also accepting proposals from applicants who are looking for funding to turn courses they have already taught in an in-person setting into freely-available online courses.
Applications are open until further notice and will be assessed on a rolling basis.
APPLY HERE”
Early-career funding for individuals interested in improving the long-term future
“This program aims to provide support - p...
|
Dec 12, 2021 |
Effective Altruism is a Question (not an ideology) by Helen
06:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Effective Altruism is a Question (not an ideology), published by Helen on the Effective Altruism Forum.
Write a Review
What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist?
I don’t think that any of these questions make sense.
It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement.
But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.)
Effective Altruism isn’t like this. Effective Altruism is asking a question, something like:
“How can I do the most good, with the resources available to me?”
There are some excellent introductions to Effective Altruism out there. They often outline common conclusions that Effective-Altruism-style thinking leads to: things like earning to give, or favouring interventions in poorer countries over those in richer countries. This makes sense - Effective Altruism does seem to imply that those things are a good idea - but it doesn't make the conclusions part of the core of the movement.
What does this mean for how we think and talk about Effective Altruism?
Reframing Effective Altruism as a question has some pretty significant implications. These aren’t necessarily new – some people already act on the points below. But I think they are worth thinking about explicitly.
1. We should try to avoid calling ourselves “effective altruists”
Feminist, secularist, Islamist, environmentalist... it’s not surprising that people who think Effective Altruism is interesting and important want to switch the “-ism” into an “-ist”, and use it to refer to themselves. The linguistic part of our brain does it automatically.
But there’s a big problem with this. “Effective Altruism” is a carefully and cleverly chosen name, and it describes its own core question succinctly. But it does this by combining a common adjective with a common noun, which means that changing the last syllable gives you not an identifier, but a truth claim.
“I am an effective altruist” may sound to the speaker like “I think Effective Altruism is really important”, but to the listener, it sounds like “I perform selfless acts in a manner that is successful, efficient, fruitful or efficacious.” (Thesauruses are fun!)
Effective Altruism is already a slightly impudent name, since its claim to be a ground-breaking idea rests on the premise that other altruism is ineffective.
Calling oneself an effective altruist is much worse. As well as provoking scepticism or hostility, it automatically leads into questions like “Can I [x] and still be an effective altruist?” “How much do I have to donate to be an effective altruist?” “How does an effective altruist justify spending money on anything beyond bare survival?” These questions feel like they should have meaningful answers, but trying to answer them probably won't get us very far.
Alternative descriptors include “aspiring effective altruist”, “interested in Effective Altruism”, “member of the Effective Altruism movement”. What do you think of those options? Do you have others? When could it still be appropriate to use “effective altruist”?
2. Our suggested actions and causes are best guesses, not core ideas
It’s extremely important that Effective Altruism does get translated into actions in the real world. To d...
|
Dec 12, 2021 |
My personal cruxes for working on AI safety by Buck
01:04:42
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: My personal cruxes for working on AI safety, published by Buck on the Effective Altruism Forum.
The following is a heavily edited transcript of a talk I gave for the Stanford Effective Altruism club on 19 Jan 2020. I had rev.com transcribe it, and then Linchuan Zhang, Rob Bensinger and I edited it for style and clarity, and also to occasionally have me say smarter things than I actually said. Linch and I both added a few notes throughout. Thanks also to Bill Zito, Ben Weinstein-Raun, and Howie Lempel for comments.
I feel slightly weird about posting something so long, but this is the natural place to put it.
Over the last year my beliefs about AI risk have shifted moderately; I expect that in a year I'll think that many of the things I said here were dumb. Also, very few of the ideas here are original to me.
After all those caveats, here's the talk:
Introduction
It's great to be here. I used to hang out at Stanford a lot, fun fact. I moved to America six years ago, and then in 2015, I came to Stanford EA every Sunday, and there was, obviously, a totally different crop of people there. It was really fun. I think we were a lot less successful than the current Stanford EA iteration at attracting new people. We just liked having weird conversations about weird stuff every week. It was really fun, but it's really great to come back and see a Stanford EA which is shaped differently.
Today I'm going to be talking about the argument for working on AI safety that compels me to work on AI safety, rather than the argument that should compel you or anyone else. I'm going to try to spell out how the arguments are actually shaped in my head. Logistically, we're going to try to talk for about an hour with a bunch of back and forth and you guys arguing with me as we go. And at the end, I'm going to do miscellaneous Q and A for questions you might have.
And I'll probably make everyone stand up and sit down again because it's unreasonable to sit in the same place for 90 minutes.
Meta level thoughts
I want to first very briefly talk about some concepts I have that are about how you want to think about questions like AI risk, before we actually talk about AI risk.
Heuristic arguments
When I was a confused 15-year-old browsing the internet around 10 years ago, I ran across arguments about AI risk, and I thought they were pretty compelling. The arguments went something like, "Well, sure seems like if you had these powerful AI systems, that would make the world be really different. And we don't know how to align them, and it sure seems like almost all goals they could have would lead them to kill everyone, so I guess some people should probably research how to align these things." This argument was about as sophisticated as my understanding went until a few years ago, when I was pretty involved with the AI safety community.
I in fact think this kind of argument leaves a lot of questions unanswered. It's not the kind of argument that is solid enough that you'd want to use it for mechanical engineering and then build a car. It's suggestive and heuristic, but it's not trying to cross all the T's and dot all the I's. And it's not even telling you all the places where there's a hole in that argument.
Ways heuristic arguments are insufficient
The thing which I think is good to do sometimes, is instead of just thinking really loosely and heuristically, you should try to have end-to-end stories of what you believe about a particular topic. And then if there are parts that you don't have answers to, you should write them down explicitly with question marks. I guess I'm basically arguing to do that instead of just saying, "Oh, well, an AI would be dangerous here." And if there's all these other steps as well, then you should write them down, even if you're just going to have your just...
|
Dec 12, 2021 |
List of EA-related organisations
19:14
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: List of EA-related organisations, published by JamieGittins on the Effective Altruism Forum.
Update: After some suggestions in the comments I made a wiki, which anyone can add to. Hopefully this will enable us to keep the list up to date.
I’ve constantly been discovering new, exciting organisations in the years since I got involved in EA. Recently, I came across WANBAM (Women and Non-Binary Altruism Mentors) and wished that I’d known about it sooner, so I could’ve started recommending it to people who I thought would be interested. It then occurred to me that people probably have similar experiences all the time in EA. I couldn’t find a comprehensive list of EA-related organisations that already existed, so I decided to make one. By collecting as many EA-related organisations as I can into one place, I hope I can help some people to discover some exciting orgs that they wouldn’t have otherwise!
What this is: A list of organisations that are aligned with some of EA’s key principles. The organisations I have included tend to meet at least one of these criteria:
Have explicitly aligned themselves with EA
Are currently recommended by GiveWell or Animal Charity Evaluators
Were incubated by Charity Entrepreneurship
Have engaged with the EA community (e.g. by posting on the EA Forum or attending EA Global)
What this is not: A comprehensive list of every organisation which should be considered ‘EA-aligned’. I don’t think this is a useful or even possible distinction to make, since many organisations lie on a continuum of commitment to EA values. As is obvious from scrolling the 80,000 Hours job board, there are many thousands of organisations out there which do effective work in EA cause areas.
I have not included EA projects which don’t hire staff, or which are national/local EA groups (some of which hire paid staff) to keep the list more straightforward. Despite my best efforts, I imagine I’ve accidentally left off some organisations which should be on here (or potentially added some which shouldn’t be), so I welcome any comments with suggestions of changes!
Infrastructure
80,000 Hours – Does research into how people can have greater impact with their careers. Also maintains a high impact jobs board and produces a podcast.
Animal Advocacy Careers – Seeks to address the career and talent bottlenecks in the animal advocacy movement, especially the farmed animal movement, by providing career services and advice. Incubated by Charity Entrepreneurship.
Animal Charity Evaluators (ACE) – Evaluates and recommends the most effective animal charities.
Ayuda Efectiva - Promotes effective giving in Spain. Their Global Health Fund routes donations to a selection of GiveWell's recommended charities, providing tax deductibility for Spanish donors. They plan to launch similar funds for other cause areas in the near future.
Centre for Effective Altruism (CEA) – Helps to grow and support the EA community.
Charity Entrepreneurship – Does research into the most effective interventions and incubates charities to implement these interventions.
Doebem - A Brazilian-based donation platform which recommends effective charities according to EA principles.
Donational - A donation platform which recommends effective charities to users, and helps them to pledge and allocate a proportion of their income to those charities.
Effective Altruism Foundation (EAF) – Implements projects aimed at doing the most good in terms of reducing suffering. Once initiated, projects are carried forward by EAF with differing degrees of independence and in some cases become autonomous organisations. Projects have included Raising for Effective Giving (REG) and the Centre on Long-Term Risk (CLR).
Effective Giving UK/Netherlands – Helps major donors to find and fund the most promising solutions to the world’s m...
|
Dec 12, 2021 |
Humanities Research Ideas for Longtermists by Lizka
22:56
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Humanities Research Ideas for Longtermists, published by Lizka on the Effective Altruism Forum.
Summary
This post lists 10 longtermism-relevant project ideas for people with humanities interests or backgrounds. Most of these ideas are for research projects, but some are for summaries, new outreach content, etc. (See below for what I mean by “humanities.”)
The ideas, in brief:
Study future-oriented beliefs in certain religions or groups
Study the ways in which incidental qualities become essential to institutions
Explore fiction as a tool for moral circle expansion
Study how longtermists use different forms of media and how this might be improved
Study how non-EAs actually view AI safety issues, and how we got here
Produce anthropological/ethnographic studies of unusually relevant groups
Apply insights from education, history, and development studies to creating a post-societal-collapse recovery plan
Study notions of utopias
Analyze social media (and online forums) in the context of longtermism
Use tools from non-history humanities fields to aid history-oriented projects relevant for longtermism
Why it might be helpful to produce lists of projects for people with humanities backgrounds (or interests) to work on
Deliberately looking for and studying topics that are humanities-oriented could be a way to discover longtermist interventions that are hard to notice or tackle from other angles (e.g., a STEM angle), improve our views on known causes and interventions, and find topics that are better fits for some people than existing (non-humanities) project ideas would be.
If it is relatively easy to produce such lists, it suggests that we are systematically missing humanities ideas and tools from our reasoning, and that this gap is not explainable by a natural disconnect between longtermist values or concerns and non-STEM areas.[1] (If we had exhausted humanities approaches to longtermism, it would probably be hard to find previously unnoticed topics that seem reasonable.) It seems valuable to have diversity in backgrounds and perspectives, and the existence of this gap suggests that supporting humanities projects might be a way to improve on that front.
Collections like this can consolidate existing ideas and resources in one place, making it easier to find projects and collaborate as a community.
I am aware of talented people who have been put off EA (and longtermism) due to their general sense that the humanities are considered worthless. My sense is that EAs do see value in the humanities, and it might be worth making this clearer.
(Personal note) this project was helpful for me as a way to explore longtermist research.
Scope and disclaimers
The focus of the post is on the humanities disciplines most neglected in EA and longtermism, so I excluded history, philosophy, and psychology. (Those might also be neglected in the community, but there has been at least some mention of how they could be relevant for longtermism in places like the Forum.)[2] My use of the word “humanities” is loose—for this project, I accepted some fields that might be considered social sciences instead. In practice, I think the ideas listed here are most related to anthropology, archival studies, area studies, art history, (comparative) literature, (comparative) religion studies/theology, education, and media studies.
The list is not meant to be exhaustive by any means; in particular, the selection of topics here is heavily influenced by my own academic background (literature, sort-of-history, art, math). Some of the ideas are ideas for bringing existing research into EA rather than ideas for producing totally new research. It is also important to note that I have very little background in most of the areas involved in this list, and I wouldn't be surprised if deeper research discovered that some ...
|
Dec 12, 2021 |
A personal take on longtermist AI governance by lukeprog
11:23
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A personal take on longtermist AI governance, published by lukeprog on the Effective Altruism Forum.
Several months ago, I summarized Open Philanthropy's work on AI governance (which I lead) for a general audience here. In this post, I elaborate my thinking on AI governance in more detail for people who are familiar with effective altruism, longtermism, existential risk, and related topics. Without that context, much of what I say below may be hard to understand, and easy to misunderstand.
These are my personal views, and don't necessarily reflect Open Phil's views, or the views of other individuals at Open Phil.
In this post, I
briefly recap the key points of my previous post,
explain what I see as the key bottlenecks in the space, and
share my current opinions about how people sympathetic to longtermism and/or AI existential risk mitigation can best contribute today.
Open Phil's AI governance work so far (a recap)
First, some key points from my previous post:
In practice, Open Phil's grantmaking in Potential Risks from Advanced Artificial Intelligence is split in two:
One part is our grantmaking in "AI alignment," defined here as "the problem of creating AI systems that will reliably do what their designers want them to do even when AI systems become much more capable than their designers across a broad range of tasks."[1]
The second part, which I lead, is our grantmaking in "AI governance," defined here as "local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems."
Our AI focus area is part of our longtermism-motivated portfolio of grants,[2] and we focus on AI alignment and AI governance grantmaking that seems especially helpful from a longtermist perspective. On the governance side, I sometimes refer to this longtermism-motivated subset of work as "transformative AI governance" for relative concreteness, but a more precise concept for this subset of work is "longtermist AI governance."[3]
It's difficult to know which “intermediate goals” we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from transformative AI (from a longtermist perspective). As such, our grantmaking so far tends to focus on:
Research that can help clarify how AI technologies may develop over time, and which intermediate goals are worth pursuing.
Research and advocacy aimed at the few intermediate goals we've come to think are clearly worth pursuing, such as particular lines of AI alignment research, and creating greater awareness of the difficulty of achieving high assurance in the safety and security of increasingly complex and capable AI systems.
Broad field-building activities, for example scholarships, career advice for people interested in the space, professional networks, etc.[4]
Training, advice, and other support for actors with plausible future impact on transformative AI outcomes, as always with a focus on work that seems most helpful from a longtermist perspective.
Key bottlenecks
Since our AI governance grantmaking began in ~2015,[5] we have struggled to find high-ROI grantmaking opportunities that would allow us to move grant money into this space as quickly as we'd like to.[6] As I see it, there are three key bottlenecks to our AI governance grantmaking.
Bottleneck #1: There are very few longtermism-sympathetic people in the world,[7] and even fewer with the specific interests, skills, and experience to contribute to longtermist AI governance issues.
As a result, the vast majority of our AI governance grantmaking has supported work by people who are (as far as I know) not sympathetic to longtermism (and may have never heard of it). However, it's been difficult to find high-ROI grantmaking opportunities of this...
|
Dec 12, 2021 |
A cause can be too neglected by saulius
02:43
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A cause can be too neglected, published by saulius on the Effective Altruism Forum.
The Effective Altruism movement often uses a scale-neglectedness-tractability framework. As a result of that framework, when I discovered issues like baitfish, fish stocking, and rodents fed to pet snakes, I thought it was an advantage that they are almost maximally neglected (seemingly no one is working on them). Now I think that it’s also a disadvantage because there are set-up costs associated with starting work on a new cause. For example:
You first have to bridge the knowledge gap. There are no shoulders of giants you can stand on, you have to build up knowledge from scratch.
Then you probably need to start a new organization where all employees will be beginners. No one will know what they are doing because no one has worked on this issue before. It takes a while to build expertise, especially when there are no mentors.
If you need support, you usually have to somehow make people care about a problem they’ve never heard of before. And it could be a problem which only a few types of minds are passionate about because it was neglected all this time (e.g. insect suffering).
Now let’s imagine that someone did a cost-effectiveness estimate and concluded that some well-known intervention (e.g. suicide hotline) was very cost-effective. We wouldn’t have any of the problems outlined above:
Many people already know how to effectively do the intervention and can teach others.
We could simply fund existing organizations that do the intervention.
If we found new organizations, it might be easier to fundraise from non-EA sources. If you talk about a widely known cause or intervention, it’s easier to make people understand what you are doing and probably easier to get funding.
Note that we can still use EA-style thinking to make interventions more cost-effective. E.g. fund suicide hotlines in developing countries because they have lower operating costs.
Conclusions
We don’t want to create too many new causes with high set-up costs. We should consider finding and filling gaps within existing causes instead. However, I’m not writing this to discourage people from working on new causes and interventions. This is only a minor argument against doing it, and it can be outweighed by the value of information gained about how promising the new cause is.
Furthermore, this post shows that if we don’t see any way to immediately make a direct impact when tackling a new issue (e.g. baitfish), it doesn’t follow that the cause is not promising. We should consider how much impact could be made after set-up costs are paid and more resources are invested.
Opinions are my own and not the views of my employer.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Our plans for hosting an EA wiki on the Forum by Aaron Gertler
07:30
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Our plans for hosting an EA wiki on the Forum, published by Aaron Gertler on the Effective Altruism Forum.
We are in the process of implementing a major project on the Forum — turning our current system of tags into a full-fledged “EA wiki”.
Under this system, many of the tags used for posts will also serve as articles in the wiki, and many articles in the wiki will also serve as tags that can be applied to posts.
However, there are exceptions in both directions. Some tags don’t make sense as wiki articles (for example, “EA London Update”). And some articles are too narrow to be useful tags (for example, Abhijit Banerjee). These will be marked as “wiki only” — they can be found with the Forum’s search engine, but can’t be used to tag posts.
The project is made possible by the work of Pablo Stafforini, who received an EA Infrastructure Fund grant to create an initial set of articles.
Why is an EA wiki useful?
EA content mostly takes the form of essays, videos, research papers, and other long-form content. If you want to find a simple definition of a term, or a brief summary of a cause area, you often have to find a particular blog post from 2012 or ask someone in the community.
A wiki can serve as a collection of definitions and brief explanations that help people efficiently develop their understanding of EA’s ideas and community.
It can also host more detailed articles that wouldn’t have a reason to exist elsewhere. Even if no one person wants to e.g. summarize all the major arguments to give now vs. later, ten people can each contribute a small portion of an article and get the same result.
CEA tried to do some of this with EA Concepts, a proto-wiki with well-written articles on a range of topics. But the project was deprioritized at one point and never picked back up — mostly because it takes a lot of time and energy to create and maintain anything close to a complete list of important concepts in effective altruism.
The EA Forum seems like a better way to do this, for a few reasons.
Why host this on the EA Forum?
There have been multiple attempts to create an EA wiki before, including EA Concepts, but none have really taken off. It’s hard to get the necessary volume of volunteer work to compile a strong encyclopedia on a topic as broad and complex as effective altruism.
Building a new wiki on the EA Forum has a few advantages over using a separate website:
Constant attention: Hundreds of people visit the Forum every day. While tag pages don’t get many edits now, we’ll be doing a lot of work to promote them over the next few months. Anyone who visits the Forum will be prompted to contribute; if even a small number decide to help, I think we’ll have a larger collection of volunteers than any past EA wiki project.
Strong SEO: The effectivealtruism.org domain gets a lot of traffic, which means it tends to show up in search engines for relevant terms. Once we’ve set up a collection of heavily cross-linked wiki articles, I expect that the articles will begin to draw a lot of new visitors to the site.
Automatic updates through tagging: While the whole “articles are tags” thing can feel weird at times, it also means that many articles will be attached to an ever-growing list of relevant posts.
Professional support: The Forum is run by CEA and draws from the technical work of developers from both CEA and LessWrong. We constantly add new features, and if something breaks, we have the resources to fix it. CEA’s resources also allow us to provide incentives for Wiki editing (more news on that soon, but not in this post).
What are the next steps?
Pablo’s approach involves setting up as many relatively short articles as possible, to give editors something to work from. He started by creating ~150 “stubs”, or one-sentence articles, that he expects to develop in the coming months...
|
Dec 12, 2021 |
The Long-Term Future Fund has room for more funding, right now by abergal
02:10
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Long-Term Future Fund has room for more funding, right now, published by abergal on the Effective Altruism Forum.
The Long-Term Future Fund is on track to approve $1.5M - $2M of grants this round. This is 3 - 4x what we’ve spent in any of our last five grant rounds and most of our current fund balance.
We received 129 applications this round, desk rejected 33 of them, and are evaluating the remaining 96. Looking at our preliminary evaluations, I’d guess we’ll fund 20 - 30 of these.
In our last comparable grant round, April 2019, we received 91 applications and funded 13, for a total of $875,150. Compared to that round (see the quick check after this list):
We’ve received more applications. (42% more than in April.)
We’re likely to distribute more money per applicant, because several applications are for larger grants, and requested salaries have gone up. (The average grant request is ~$80K this round vs. ~$50K in April, and the median is ~$50K vs. ~$25K in April.)
We’re likely to fund a slightly greater percentage of applications. (16% - 23% vs. 14% in April.)
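A quick check of the figures quoted in this excerpt (a minimal sketch in Python, using only the counts stated above; nothing here comes from outside the post):
```python
# Minimal sketch: reproduce the quoted percentages from the counts stated in the post.
apps_now, apps_april = 129, 91      # applications this round vs. April 2019
funded_low, funded_high = 20, 30    # expected number of grants this round
funded_april = 13                   # grants funded in April 2019

growth = (apps_now / apps_april - 1) * 100    # ~42% more applications
rate_low = funded_low / apps_now * 100        # ~16% of applications funded (low end)
rate_high = funded_high / apps_now * 100      # ~23% (high end)
rate_april = funded_april / apps_april * 100  # ~14% in April 2019

print(f"Application growth: {growth:.0f}%")                         # 42%
print(f"Funding rate this round: {rate_low:.0f}%-{rate_high:.0f}%")  # 16%-23%
print(f"Funding rate in April 2019: {rate_april:.0f}%")             # 14%
```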
We’ve recently changed parts of the fund’s infrastructure and composition, and it’s possible that these changes have caused us to unintentionally lower our standards for funding. My personal sense is that this isn’t the case; I think the increased spending reflects an increase in the number of quality applications submitted to us, as well as changing applicant salaries.
If you were considering donating to the fund in the past but were unsure about its room for more funding, now could be a particularly impactful time to give. I don’t know if my perceived increase in quality applications will persist, but I no longer think it’s implausible for the fund to spend $4M - $8M this year while maintaining our previous bar for funding. This is up from my previous guess of $2M.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
More EAs should consider “non-EA” jobs by abergal
02:08
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: More EAs should consider “non-EA” jobs, published by abergal on the Effective Altruism Forum.
|
Dec 12, 2021 |
Announcing the Patient Philanthropy Fund by SjirH
01:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Announcing the Patient Philanthropy Fund, published by SjirH on the Effective Altruism Forum.
Today, after nearly two years of preparations, Founders Pledge is launching the Patient Philanthropy Fund (PPF): a grantmaking vehicle that invests to give, in order to safeguard and improve the long-term future.
The PPF is incubated as a special trust within Founders For Good - the Founders Pledge UK entity. It is managed by a Management Committee consisting of purpose-aligned experts on timing of giving. Our aim is to further develop and grow it over the coming 10 years and eventually spin it out as a separate charitable entity.
The Fund is open for contributions by non-FP-members via EA Funds. We are launching the PPF with $1m in pre-seed funding contributed by a list of Founding Partners and Supporters, including many EA community members.
Please refer to the Fund's website for more information on its plans, governance structure, Management Committee, grant-making policy etc.
And please share any feedback or questions you have on the PPF in the comments on this post, or reach out to sjir@founderspledge.com!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
EA is vetting-constrained by toonalfrink
05:54
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: EA is vetting-constrained, published by toonalfrink on the Effective Altruism Forum.
Re: What to do with people? and After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
Epistemic status: just my personal impression. Please prove me wrong.
So we know that:
1) In aggregate, there are billions of dollars in EA
2) There are lots of surplus talented people looking for EA work, that can't get it
and I would like to add:
3) I estimate that there are at least 10-20 budding organisations that would love to use this money to get these people to work, and scale beyond their current size, and properly tackle the problem they aim to solve. I know at least 5 founders like that personally.
So with all the ingredients in place for amazing movement growth, why isn't the magic happening?
Knowing who to delegate to
I agree with the idea that if you want to solve problems, you need to organise your system in a hierarchy, with some kind of divide-and-conquer strategy where a principal figures out the subproblems of a problem and delegates them to some agents, and recurse.
One problem here is that, even if the agent is aligned, the principal needs some way to tell that the agent is capable of solving a problem to a certain standard.
Different systems solve this problem in different ways. A company might have some standards for hiring and structurally review the performance of their employees. Academia relies on prestige and some metrics that are proxies of quality. The market gives the most money to those that sell the most popular products. Communities kick out members that cross boundaries, and deprive uninteresting people of attention.
EA, by which I mean the established organisations of EA, does this kind of thing in two ways: by hiring directly, and by vetting projects. For the latter there are grantmakers. As professionals that have thought a lot about what projects need to happen, they take a long hard look at an application by a startup founder, and if they expect it to work out well, they fund it. Simple enough.
The state of vetting in EA
I want to clarify that none of the following is meant to be accusatory. Grantmaking sounds like one of the hardest jobs in the world, and projects are by no means entitled to EA money just because they call themselves EA. I hope that this post keeps a spirit of high trust, which I think is very important.
So why aren't we seeing more new EA organisations getting funding? Two hypotheses come to mind:
The “high standards” hypothesis. Grantmakers think that these new organisations just aren't up to standard, and they would therefore cause damage. Perhaps their model is that EA should retain a very high standard to make sure that the prestige of the movement stays intact. After all that's what the movement might need to influence big institutions like academia and government.
The "vetting bottleneck" hypothesis. Grantmaking organisations are just way understaffed. It's not that they're sure that these organisations don't meet the bar, it's just that they can't verify that in time, so the safest option is to hold off on funding, or fund a more established organisation instead.
In reality, it is probably a combination of both of these. Some anecdotal evidence:
When one startup got rejected by a grantmaking organisation and pressed for feedback, they were told that "We do not possess the domain expertise to evaluate scalable existential risk reduction projects in the way that [other org] would be better placed to do." And "as such, we rely more on the strength and quality of references when modeling out the potential impact of projects." This was after they were invited to the interview stage. It suggests that grantmakers fall back on prestige because they don’t always have the...
|
Dec 12, 2021 |
There's Lots More To Do by Jeff_Kaufman
03:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: There's Lots More To Do, published by Jeff_Kaufman on the Effective Altruism Forum.
Benjamin Hoffman recently wrote a post arguing that "drowning children are rare":
Stories such as Peter Singer's "drowning child" hypothetical frequently imply that there is a major funding gap for health interventions in poor countries, such that there is a moral imperative for people in rich-countries to give a large portion of their income to charity. There are simply not enough excess deaths for these claims to be plausible.
As far as I can see, this pretty much destroys the generic utilitarian imperative to live like a monk and give all your excess money to the global poor or something even more urgent. Insofar as there's a way to fix these problems as a low-info donor, there's already enough money. Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense. Spend money on taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits.
Imagine that the best intervention out there was direct cash transfers to globally poor people. The amount of money that could be productively used here is very large: it would cost at least $1T to give $1k to each of the 1B poorest people in the world. This is very far from foundations already having more than enough money. That there are extremely poor people who can do far more with my money than I can is enough for me to give.
While I also think there are other ways to spend money altruistically that have more benefit per dollar than cash transfers, this only strengthens the argument for helping.
How does Ben reach the opposite conclusion? Reading his post several times it looks to me like two things:
He's looking at "saving lives via preventing communicable, maternal, neonatal, and nutritional diseases" as the only goal. While it's a category of intervention that people in the effective altruism movement have talked about a lot, it's definitely not the only way to help people. If you were to completely eliminate deaths in this category it would be amazing and hugely beneficial, but there would still be people dying from other diseases, suffering in many non-fatal ways, and generally having poverty limit their options and potential. And that's without considering more speculative options like trying to keep us from killing ourselves off or generally trying to make the long-term future go as well as possible.
He's setting a threshold of $5k for how much we'd be willing to pay to avert a death, which is much too low. I do agree there is some threshold at which you'd be very reasonable to stop trying to help others and just do what makes you happy. Where this threshold is depends on many things, especially how well-off you are, but I would expect it to be more in the $100k range than the $5k range for rich-country effective altruists. By comparison, the US Government uses ~$9M.
I do think the "drowning children" framing isn't great, primarily because it puts you in a frame of mind where you expect that things will be much cheaper than they actually are (familiar), but also because it depends on being in a situation where only you can help and where you must act immediately. There's enough actual harm in the world that we don't need thought experiments to show why we should help. So while there aren't that many "drowning children", there is definitely a lot of work to do.
(Crossposted from jefftk.com)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Announcing the launch of the Happier Lives Institute by MichaelPlant
07:31
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Announcing the launch of the Happier Lives Institute, published by MichaelPlant on the Effective Altruism Forum.
Following months of work by a dedicated team of volunteers, I am pleased to announce the launch of the Happier Lives Institute, a new EA organisation which seeks to answer the question: ‘What are the most effective ways we can use our resources to make others happier?’
Summary
The Happier Lives Institute is pioneering a new way of thinking about the central question of effective altruism - how can we benefit others as much as possible? We are approaching this through a ‘happiness lens’, using individuals’ reports of their subjective well-being as the measure of benefit. Adopting this approach indicates potential new priorities, notably that mental health emerges as a large and neglected problem.
Our vision is a world where everyone lives their happiest life.
Our mission is to guide the decision-making of those who want to use their resources to most effectively make lives happier.
We aim to fulfill our mission by:
1. Searching for the most effective giving opportunities in the world for improving happiness. We are starting by investigating mental health interventions in low-income countries.
2. Assessing which careers allow individuals to have the greatest counterfactual impact in terms of promoting happier lives.
Our approach
Our work is driven by three beliefs.
1) We should do the most good we can
We should use evidence and reason to determine how we can use our resources to benefit others the most. We follow the guiding principles of effective altruism: commitment to others, scientific mindset, openness, integrity, and collaborative spirit.
2) Happiness is what ultimately matters
Philosophers use the word ‘well-being’ to refer to what is ultimately good for someone. We think well-being consists in happiness, defined as a positive balance of enjoyment over suffering. Understood this way, this means that when we reduce misery, we increase happiness. Further, we believe well-being is the only thing which is intrinsically good, that is, that matters in and of itself. Other goods, such as wealth, health, justice, and equality are instrumentally valuable: they are not valuable in themselves, but because and to the extent that they increase happiness.
3) Happiness can be measured
The last few decades have seen an explosion of research into ‘subjective well-being’ (SWB), with about 170,000 books and articles published in the last 15 years. SWB is measured using self-reports of people’s emotional states and global evaluations of life satisfaction; these measures have been shown to be valid and reliable. We believe SWB scores are the best available measure of happiness; therefore, we should use these scores, rather than anything else (income, health, education, etc.) to determine what makes people happier.
Specifically, we expect to rely on life satisfaction as the primary metric. This is typically measured by asking “Overall, how satisfied are you with your life nowadays?” (0 - 10). While we think measures of emotional states are closer to an ideal measure of happiness, far less data of this type is available. A longer explanation of our approach to measuring happiness can be found here.
When we take these three beliefs together, the question: “How can we do the most good?” becomes, more specifically: “What are the most cost-effective ways to increase self-reported subjective well-being?”
Our strategy
Social scientists have collected a wealth of data on the causes and correlates of happiness. While there are now growing efforts to determine how best to increase happiness through public policy, no EA organisation has yet attempted to translate this information into recommendations about what the most effective ways are for private actors to make l...
|
Dec 12, 2021 |
In praise of unhistoric heroism by rosehadshar
05:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: In praise of unhistoric heroism, published by rosehadshar on the Effective Altruism Forum.
Dorothea Brooke as an example to follow
I once read a post by an effective altruist about how Dorothea Brooke, one of the characters in George Eliot’s Middlemarch, was an EA. There’s definitely something interesting about looking at the story like this, but for me this reading really missed the point when it concluded that Dorothea’s life had been “a tragic failure”.[1] I think that Dorothea’s life was in many ways a triumph of light over darkness, and that her success and not her failure is the thing we should take as a pattern.
Dorothea dreamed big: she wanted to alleviate rural poverty and right the injustice she saw around her. In the end, those schemes came to nothing. She married a Radical MP, and in the process forfeited the wealth she could have given to the poor. She spent her life in small ways, “feeling that there was always something better which she might have done, if she had only been better and known better.” But she made the lives of those around her better, and she did good in the ways which were open to her. I think that the way in which Dorothea’s life is an example to us is best captured in the final lines of Middlemarch:
“Her finely touched spirit had still its fine issues, though they were not widely visible. Her full nature, like that river of which Cyrus broke the strength, spent itself in channels which had no great name on the earth. But the effect of her being on those around her was incalculably diffusive: for the growing good of the world is partly dependent on unhistoric acts; and that things are not so ill with you and me as they might have been, is half owing to the number who lived faithfully a hidden life, and rest in unvisited tombs.”
If this was said of me after I died, I’d think I’d done a pretty great job of things.
A related critique of EA
I think many EAs would not be particularly pleased if that was ‘all’ that could be said for them after they died, and I think that there is something worrying about this.
One of the very admirable things about EAs is their commitment to how things actually go. There’s a recognition that big talk isn’t enough, that good intentions aren’t enough, that what really counts is what ultimately ends up happening. I think this is important and that it helps make EA a worthwhile project. But I think that when people apply this to themselves, things often get weird. I don’t spend that much time with my ear to the grapevine, but from my anecdotal experience it seems not uncommon for EAs to:
obsess about their own personal impact and how big it is
neglect comparative advantage and chase after the most impactful whatever
conclude that they are a failure because their project is a failure or lower status than some other project
generally feel miserable about themselves because they’re not helping the world more, regardless of whether they’re already doing as much as they can
An example of a kind of thing I’ve heard several people say is ‘aw man, it sucks to realise that I’ll only ever have a tiny fraction of the impact Carl Shulman has’. There are many things I dislike about this, but in this context the thing that seems most off is that being Carl Shulman isn’t the game. Being you is the game, doing the good you can do is the game, and for this it really doesn’t matter at all how much impact Carl has.
Sure, there’s a question of whether you’d prefer to be Carl or Dorothea, if you could choose to be either one.[2] But you are way more likely to end up being Dorothea.[3] You should expect to live and die in obscurity, you should expect to undertake no historic acts, you should expect most of your work to come to nothing in particular. The heroism of your life isn’t that you single-handedly press the wor...
|
Dec 12, 2021 |
Rethink Priorities - 2021 Impact and 2022 Strategy by Peter Wildeford
26:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Rethink Priorities - 2021 Impact and 2022 Strategy, published by Peter Wildeford on the Effective Altruism Forum.
Summary
Our purpose and people
Rethink Priorities is a research organization that conducts research to inform policymakers and major foundations about how to best help people and nonhuman animals in both the present and the long-term future. Our work informs key stakeholders of the effective altruism community regarding decisions around funding, interventions, and research time allocation worth millions of dollars every year.
This year we hired 14 new people. We now have a staff of 28 people, corresponding to 24.75 full-time equivalents (with 19.75 FTE focused on research and 5 FTE on operations). We’ll have spent about $2.1M USD in 2021.
Our 2021 impact
In 2021, we achieved some tangible impact from our work, such as:
Improving animal welfare strategies in the European Union (moving several million dollars to what we believe are more effective approaches).
Setting up an ambitious project to study the capacity for welfare of different species.
Starting to staff up a team to address AI Governance and Strategy.
Investigating lead reduction as a possible intervention competitive with GiveWell’s top charities.
Running an intern program that successfully got new researchers involved in effective altruism and led to some interns getting permanent jobs in organizations like the Centre for Effective Altruism, Founders Pledge, and here at Rethink Priorities.
Our plans for 2022
Among other projects in 2022, we’re especially excited to:
Begin work with our newly expanded AI Governance and Strategy team and Global Health and Development team, both of which are directly focused on identifying high-impact giving opportunities.
Build a larger longtermist research team to explore longtermist work and interventions more broadly.
Tentatively conclude our intensive work on interspecies comparisons of moral weight, which could help us better prioritize across many cause areas.
Help solve the funding overhang in EA and unlock tons of impact by identifying interventions across cause areas that can take lots of money while still meeting a high bar for cost-effectiveness.
Our funding goals
If better funded, we would be able to do more high-quality work and employ more talented researchers than we otherwise would.
Currently, our goal is to raise $5,435,000 by the end of 2022. This consists of the following gaps (see the quick sum check after this list):
$2,230,000 for animal-focused research
$1,410,000 for longtermism research
$1,275,000 for EA movement research and surveying
$520,000 for global health and development research
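A quick sum check of the gaps listed in this excerpt (a minimal sketch in Python, using only the figures stated above):
```python
# Minimal sketch: confirm the four stated funding gaps add up to the overall 2022 goal.
gaps = {
    "animal-focused research": 2_230_000,
    "longtermism research": 1_410_000,
    "EA movement research and surveying": 1_275_000,
    "global health and development research": 520_000,
}
total = sum(gaps.values())
print(f"Total funding gap: ${total:,}")  # $5,435,000, matching the stated goal
```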
However, we believe that if we were maximally ambitious and expanded as much as is feasible, we could effectively spend the funds if we raised up to $12,900,000 in 2022.
If you’d like to support our work, you can donate to us as part of Facebook’s donation matching on Giving Tuesday on November 30, or donate directly to us here. We do accept and track restricted funds by cause area if that is of interest. If you have questions about tax-deductibility in your country or are interested in making a major gift, please contact our Director of Development Janique Behman.
Ask us more
We’re running an AMA on the EA Forum this Friday, November 19. Ask us any questions you may have!
Our path to impact
Rethink Priorities achieves impact by improving the decisions made by grantmakers and on-the-ground organizations, multiplying the impact of their work. The effective altruism movement currently allocates hundreds of millions of dollars and millions of hours of work every year — we aim to improve that allocation.
We work independently to uncover new insights while also collaborating with existing groups and funders to ensure the most effective actions are taken based on rigorous research.
Our organization can be understoo...
|
Dec 12, 2021 |
Update on CEA's EA Grants Program by Nicole_Ross, Centre for Effective Altruism
10:06
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Update on CEA's EA Grants Program, published by Nicole_Ross, Centre for Effective Altruism on the Effective Altruism Forum.
In December, I (Nicole Ross) joined CEA to run the EA Grants program, which gives relatively small grants (usually under $60,000 per grant) to individuals and start-up projects within EA cause areas. Before joining CEA, I worked at the Open Philanthropy Project and GiveWell doing both research and grants operations.
When I joined CEA, the EA Grants program had been running since 2017. Upon my initial review, it had a mixed track record. Some grants seemed quite exciting, some seemed promising, others lacked the information I needed to make an impact judgment, and others raised some concerns.
Additionally, the program had a history of operational and strategic challenges. I've spent the majority of the last nine months working to improve the overall functioning of the program. I'm now planning the future of EA Grants, and trying to determine whether some version of the program ought to exist moving forward.
In this brief update, I’ll describe some of the program’s past challenges, a few things I’ve worked on, and some preliminary thoughts about the future of the program. I’ll also request feedback on the current EA funding landscape, and what value EA Grants might be able to add if we decide to maintain the program going forward.
Note on early 2019 EA Grants round
Last year, we publicly stated that we “expect the next round after this one to be early next year [2019] but we want to review lessons from this round before committing to a date.” When it became clear that we would not hold a round in early 2019, we did not update the previous statement. We regret any confusion we may have caused by failing to provide a clear update on our plans.
Issues with the program
EA Grants began in 2017. From June 2017 to December 2018 (when I joined CEA), grant management was a part-time responsibility of various staff members who also had other roles. As a result, the program did not get as much strategic and evaluative attention as it needed. Additionally, CEA did not appropriately anticipate the operational systems and capacity needed to run a grantmaking operation, and we did not have the full infrastructure and capacity in place to run the program.
Because everyone involved recognized the importance of the program, CEA eventually began to take steps to resolve broader issues related to this lack of attention, including establishing the full-time Grants role for which I was hired and hiring an operations contractor to process grants. We believe it was a mistake that we didn’t act more quickly to improve the program, and that we weren’t more transparent during this process.
My first responsibility in my new role was to investigate these issues, with support from staff who had worked on the EA Grants program in the past. I am grateful for the many hours current and former staff have spent helping me get up to speed and build a consolidated picture of the EA Grants program.
Below are what I view as the most important historical challenges with the EA Grants program:
1) Lack of consolidated records and communications
We did not maintain well-organized records of individuals applying for grants, grant applications under evaluation, and records of approved or rejected applications. We sometimes verbally promised grants without full documentation in our system. As a result, it was difficult for us to keep track of outstanding commitments, and of which individuals were waiting to hear back from CEA. This resulted in us spending much longer preparing for our independent audit than would have been ideal.
2) Lack of clarification about the role EA Grants played in the funding ecosystem
While we gave information about the types of projects EA Grants woul...
|
Dec 12, 2021 |
The case for building more and better epistemic institutions in the effective altruism community by stefan.torges
09:20
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The case for building more and better epistemic institutions in the effective altruism community, published by stefan.torges on the Effective Altruism Forum.
As a community, we should think more about how to create and improve our collective epistemic institutions. By that, I mean the formalized ways of creating and organizing knowledge in the community beyond individual blogs and organizations. Examples are online platforms like the EA Forum and Metaculus, events like EA Global and the Leaders Forum, and surveys like the EA survey and the survey at the Leaders Forum. This strikes me as a neglected form of community-building that might be particularly high-leverage.
The case for building more and better epistemic institutions
Epistemic progress is crucial for the success of this community.
Effective altruism is about finding out how to do the most good and then doing it. For that, epistemic progress is important. Will MacAskill has even referred to effective altruism as a “research project.” Since people in this community have changed their views about how to do the most good substantially over the last ten years, we should expect that we’re still wrong about many things.
Some institutions facilitate or significantly accelerate epistemic progress.
People in this community are probably more aware of the research showing this than many other people. Ironically, we even recommend working on improving the decision-making of other organizations or communities. Aggregative forecasting is talked about most often and it seems to have solid evidence behind it. Still, it has limitations. For instance, it cannot help us with conceptual work, improving our reasoning and arguments directly, and inherently vague concepts. There is some evidence on other instruments like certain forms of expert elicitation or structured analytic techniques (e.g., devil’s advocate), but the evidence base seems less sound. It might still be worth experimenting with them. Peer review seems to be another valuable institution facilitating epistemic progress. I’m not sure if this has ever been investigated properly but it has a lot of prima facie plausibility to it.
I don’t want to argue that we already know all the institutions that facilitate epistemic progress but there are at least some that do. If we think this is sufficiently important and there are more such institutions to be designed, experimenting and expanding the research base might be among the most important things we could do.
We are not close to the perfect institutional setup.
I don’t want to overstate the case. We have already built a number of great institutions in this regard, probably much better than other communities. Again, forecasting has probably seen the most attention (e.g., Metaculus, Foretold). The other examples I mentioned at the top, however, are also important and many have improved over the last few years.
Still, I’m confident we can do better. Starting from the evidence base I sketched out above, we might start experimenting with the following institutions:
Institutionalizing devil’s advocates: So far, we have had to rely on the initiative and courage of individuals to come forward with criticism of cause areas or certain paradigms within them (e.g., here, here). Perhaps there are ways to incentivize or institutionalize such work even more or even earlier. For instance, we could set up prizes for the best critique of apparently common assumptions or priorities.
Expert surveys/elicitation: Grace et al. (2017) did one for AI timelines. The Leaders Forum survey is focused on EA-related questions. If possible, we could experiment with validating the participants or systematizing participant selection in other ways. We could also just explore many more questions this way in order to get a sense of what the most...
|
Dec 12, 2021 |
Effective Altruism is an Ideology, not (just) a Question by Fods12
27:37
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Effective Altruism is an Ideology, not (just) a Question, published by Fods12 on the Effective Altruism Forum.
This is a linkpost for/
Introduction
In a widely-cited article on the EA forum, Helen Toner argues that effective altruism is a question, not an ideology. Here is her core argument:
What is the definition of Effective Altruism? What claims does it make? What do you have to believe or do, to be an Effective Altruist?
I don’t think that any of these questions make sense.
It’s not surprising that we ask them: if you asked those questions about feminism or secularism, Islamism or libertarianism, the answers you would get would be relevant and illuminating. Different proponents of the same movement might give you slightly different answers, but synthesising the answers of several people would give you a pretty good feeling for the core of the movement.
But each of these movements is answering a question. Should men and women be equal? (Yes.) What role should the church play in governance? (None.) What kind of government should we have? (One based on Islamic law.) How big a role should government play in people’s private lives? (A small one.)
Effective Altruism isn’t like this. Effective Altruism is asking a question, something like:
“How can I do the most good, with the resources available to me?”
In this essay I will argue that this view of effective altruism as a question and not an ideology is incorrect. In particular, I will argue that effective altruism is an ideology, meaning that it has a particular (if somewhat vaguely defined) set of core principles and beliefs, and associated ways of viewing the world and interpreting evidence. After first explaining what I mean by ideology, I proceed to discuss the ways in which effective altruists typically express their ideology, including by privileging certain questions over others, applying particular theoretical frameworks to answer these questions, and privileging particular answers and viewpoints over others. I should emphasise at the outset that my purpose in this article is not to disparage effective altruism, but to try to strengthen the movement by helping EAs to better understand the actual intellectual underpinnings of the movement.
What is an ideology?
The first point I want to explain is what I mean when I talk about an ‘ideology’. Basically, an ideology is a constellation of beliefs and perspectives that shape the way adherents of that ideology view the world. To flesh this out a bit, I will present two examples of ideologies: feminism and libertarianism. Obviously these will be simplified since there is considerable heterogeneity within any ideology, and there are always disputes about who counts as a ‘true’ adherent of any ideology. Nevertheless, I think these quick sketches are broadly accurate and helpful for illustrating what I am talking about when I use the word ‘ideology’.
First consider feminism. Feminists typically begin with the premise that the social world is structured in such a manner that men as a group systematically oppress women as a group. There is a richly structured theory about how this works and how this interacts with different social institutions, including the family, the economy, the justice system, education, health care, and so on. In investigating any area, feminists typically focus on gendered power structures and how they shape social outcomes. When something happens, feminists ask ‘what effect does this have on the status and place of women in society?’ Given these perspectives, feminists typically are uninterested in and highly sceptical of any accounts of social differences between men and women based on biological differences, or attempts to rationalise differences on the basis of social stability or cohesion. This way of looking at thing...
|
Dec 12, 2021 |
Review of FHI's Summer Research Fellowship 2020 by rosehadshar
34:17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Review of FHI's Summer Research Fellowship 2020, published by rosehadshar on the Effective Altruism Forum.
This post reviews the Future of Humanity Institute’s Summer Research Fellowship 2020 in detail. If you’re in a hurry, we recommend reading the summary of activities, lessons learned, and comparing costs and benefits sections for a quicker take.
Thanks to Owen Cotton-Barratt, Max Daniel, Eliana Lorch, Tanya Singh and the summer fellows for reviewing and improving this post.
Summary of activities
27 fellows spent 6 weeks on a remote fellowship programme.
We got ~300 applications and interviewed ~50 people.
Backgrounds ranged from undergraduate students to postdocs.
By default, each fellow had a mentor and worked on a research project (though there were some exceptions, and many fellows also spent significant time exploring).
The mentors were researchers and research managers at FHI, CSER, OpenPhil, OpenAI, GPI, and MIRI.
Approximately, the projects related to AI governance (13 fellows), technical AI safety (3 fellows), biosecurity (4 fellows), macrostrategy broadly construed (5 fellows), and policy (2 fellows).
This fellowship grew out of the CEA summer fellowship (which CEA wanted to stop running), though was significantly different in that a) it was remote, and b) we took more fellows (in 2019, the CEA fellowship took 11 fellows).
Lessons learned
Meta: we’re stating these confidently to make clear which direction the summer fellowship has updated us in. Obviously this was just one summer programme, and in many cases there might be important differences which mean that these takeaways wouldn’t generalise. The takeaways are in rough order of importance according to us.
Remote fellowships can provide substantial value to fellows and organizers, and are worth considering where in person options are not available, or are especially costly.
Initially we put some probability on the whole fellowship being awkward and unproductive, but in fact fellows had good experiences and some good things seem to have come from the programme. Remoteness also allowed us to take more fellows, and in the particular case of Covid-19, also came with lower opportunity costs for many fellows and organisers.
In spite of remoteness, it was possible to create a positive, friendly and open culture. Many fellows remarked on this and appreciated it, and we guess that it helped to increase engagement, support those experiencing difficulties, and deepen fellows’ experiences. Modelling open and authentic communication on the part of the organisers seems to have been a major contributing factor to this culture.
That said, we still think that there are many ways in which an in person fellowship would have been better, particularly in terms of culture, networking and some kinds of logistics.
There is more interest in mentoring people amongst researchers in a range of different EA organisations than we expected. We successfully found 18 mentors for 24 fellows (3 fellows didn’t have formal mentors for various reasons), and we contacted at least a further 10 people who we think would have mentored if there had been a fellow working on the right topic. We also didn’t do a comprehensive search for mentors, and instead reached out ad hoc to people in our network, so we expect we would have found more interest if we had searched a bit harder.
There were more promising applicants for this programme than we expected. In 2019 the CEA fellowship received ~90 applications, and this fellowship received ~300. We think that the number of strong applicants was correspondingly larger.
Minor difficulties with mental health were common, and more serious ones not extremely rare (although this is confounded by remoteness and Covid-19). 1-1s and other conversations seemed to help support people with this. We t...
|
Dec 12, 2021 |
Does climate change deserve more attention within EA? by Louis_Dixon
25:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Does climate change deserve more attention within EA?, published by Louis_Dixon on the Effective Altruism Forum.
I have been an 80,000 Hours Podcast listener and active in EA for about eighteen months, and I have shifted from a focus on climate change, to animal welfare, and now to x-risk and s-risk, which seem to be highly promising from an EA perspective. Along the way, I wonder if some parts of EA might have underplayed climate change, and if more engagement and content on the topic could be valuable.
While I was thinking about sustainability and ethics, I was frustrated by how limited the coverage of the topic was in the 80,000 Hours Podcast episodes, so I emailed the team. Rob Wiblin responded and suggested that I write up an EA forum post.
Thanks to Alexrjl, John Bachelor, and David Nash for their suggestions and edits.
Edited 29/10/2019 to remove a misquotation of 80,000 Hours, and a few other cases where I want to rectify some over-simplifications.
Summary
While it is true that EA and 80,000 Hours are effective in drawing attention to highly neglected areas, my view is that they have unjustly neglected coverage of climate change. There are several reasons why I believe climate change deserves more attention within EA. Firstly, some key opinion-shapers in EA appear to have recently updated towards higher weightings on the severity of climate change. Secondly, though climate change is probably not an existential risk itself, it could be treated as an existential risk factor or multiplier. Thirdly, there are limitations to a crude application of the ITN framework and a short-termist approach to altruism. Fourthly, climate change mitigation and resilience may be more tractable than previously argued. Finally, by failing to show a sufficient appreciation of the severity of climate change, EA may risk losing credibility and alienating potential effective altruists.
Changing perceptions of climate change among key individuals in EA
1. Assessment of climate change in Doing Good Better, 2015
The view taken in this book, foundational to EA, mostly equates climate change to a year of lost growth, and assigns a 'small but significant risk' that temperature rises are above 4C.
Will MacAskill: Economists tended to assess climate change as not all that bad. Most estimate that climate change will cost only around 2% of global GDP. The thought that climate change would do the equivalent of putting us back one year economically isn’t all that scary - 2013 didn’t seem that much worse than 2014. So the social cost of one American’s greenhouse gas emissions is about $670 every year. Again, that’s not a significant cost, but it’s also not the end of the world.
However, this standard economic analysis fails to faithfully use expected value reasoning. The standard analysis looks only at the effects from the most likely scenario: a 2-4C rise in temperature. But there is a small but significant risk of a temperature increase that’s much greater than 2-4C.
The IPCC gives more than 5% probability to temperature rises greater than 6C and even acknowledges a small risk of catastrophic climate change of 10C or more. To be clear, I’m not saying that this is at all likely, in fact it’s very unlikely. But it is possible, and if it were to happen, the consequences would be disastrous, potentially resulting in a civilisational collapse. It’s difficult to give a meaningful answer of how bad that would be, but if we think it’s potentially catastrophic, then we need to revise our evaluation of the importance of mitigating climate change. In that case, the true expected social cost of carbon could be much higher than $32 per metric ton, justifying much more extensive efforts to reduce emissions than the estimates the economists first suggested.
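To make the expected-value point concrete, here is a minimal Python sketch of the reasoning. Only the $32 per tonne baseline and the "more than 5% probability of a rise greater than 6C" figure come from the passage; the tail-damage number is a hypothetical placeholder, so the output is purely illustrative rather than MacAskill's calculation.

```python
# Illustrative only: expected social cost per tonne of CO2 when a
# low-probability catastrophic scenario is included alongside the
# standard estimate. The $32/tonne baseline and the ~5% tail probability
# come from the passage above; the $500/tonne tail damage is a made-up
# placeholder, not a figure from the book.
scenarios = [
    (0.95, 32),   # "most likely" 2-4C warming, standard economic estimate
    (0.05, 500),  # hypothetical catastrophic (>6C) warming with much larger damages
]

expected_cost = sum(p * damage for p, damage in scenarios)
print(f"Expected social cost: ${expected_cost:.1f} per tonne")  # -> $55.4 per tonne
```

Even with a modest placeholder for tail damages, the catastrophic scenario contributes nearly as much to the expectation as the central case, which is the point the quoted passage is making.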
The main text, and the later table of cause ...
|
Dec 12, 2021 |
Why and how to start a for-profit company serving emerging markets by Ben_Kuhn
10:35
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Why and how to start a for-profit company serving emerging markets, published by Ben_Kuhn on the Effective Altruism Forum.
This is a linkpost for
Wave (1) is a for-profit, venture backed startup building cheap, instant money transfer to and within Africa. Since launching in 2015, we’ve become by far the biggest remitter to Kenya and Ghana, saving our users and recipients over $100 million so far. Our biggest source of expected future impact is building mobile money systems within Africa, which will have an orders-of-magnitude bigger impact if it succeeds.
Wave’s mission is to improve the world, not to make money. Despite that, we operate more like a tech company than a social enterprise. Our investors are venture capitalists trying to make a high return, and they hold us to the same standards of growth rate and unit economics as any developed-world startup. This might seem like a downside (surely it would be easier to directly optimize for impact rather than have pressure from investors to make money?), but for us it’s actually increased our impact in two ways. First, the pressure to grow quickly forces us to make our product better and scale faster, so we help more people by a larger amount. Second, since we’ve done really well by for-profit investors’ standards, we can raise much more money than a nonprofit or social enterprise.
In my opinion, Wave’s path—importing the US startup playbook to developing countries—was predictably high-expected-impact ex ante. First, starting a company generally has high expected impact: the social benefit of innovation is usually a large multiple of the private return. Serving an emerging market adds another multiplier, because the problems you could work on are much worse. (Providing someone $5 of value means a lot more when $5 is their day’s wages!) Finally, there’s more low-hanging fruit for companies to work on in developing countries, because the supply of skilled entrepreneurs is smaller.
On the margin, then, more altruists with experience working in the developed world should try this approach. I feel safe saying this because very few people (altruistic or not) currently seem to.
This is surprising, since lots of developing countries now have the infrastructure to support tech companies. In big cities like Dakar, Nairobi, Addis or Lagos, there’s mostly-reliable electricity, decent internet, high smartphone penetration, driveable roads and so on. But there aren’t many great startups taking advantage of them. (According to our investors who pay the most attention to Africa, Wave is by far the most promising.)
Why isn’t the space more crowded? I’d guess it’s because creating a great product requires two things: being maniacally perfectionist, and deeply understanding your users. To be maniacally perfectionist, you need to be immersed in a culture with really high product standards (for instance, Silicon Valley). To understand your users in Africa, you need to live in Africa. The intersection of these two groups is practically no one, because most people who could live in Silicon Valley would much rather not move to, say, a former tank base in the middle of the desert (where many of my coworkers lived for years).
One way to think of Wave is as an importer of high standards. For instance, in most mobile money systems in Africa, if you try to make a large withdrawal, your local agent may not have enough cash—it could take them hours or days to come up with the money. At Wave, we realized this made users sad, so we started predicting how much cash our agents would need and working with them to make sure they never ran out. This was a lot of extra work and risk for us, but led to massive adoption from traders—with funds available instantly from Wave, they could often turn over inventory literally twice as fast. Every wa...
|
Dec 12, 2021 |
Minimal-trust investigations by Holden Karnofsky
18:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Minimal-trust investigations, published by Holden Karnofsky on the Effective Altruism Forum.
This piece is about the single activity ("minimal-trust investigations") that seems to have been most formative for the way I think.
Most of what I believe is mostly based on trusting other people.
For example:
I brush my teeth twice a day, even though I've never read a study on the effects of brushing one's teeth, never tried to see what happens when I don't brush my teeth, and have no idea what's in toothpaste. It seems like most reasonable-seeming people think it's worth brushing your teeth, and that's about the only reason I do it.
I believe climate change is real and important, and that official forecasts of it are probably reasonably close to the best one can do. I have read a bunch of arguments and counterarguments about this, but ultimately I couldn't tell you much about how the climatologists' models actually work, or specifically what is wrong with the various skeptical points people raise.1 Most of my belief in climate change comes from noticing who is on each side of the argument and how they argue, not what they say. So it comes mostly from deciding whom to trust.
I think it's completely reasonable to form the vast majority of one's beliefs based on trust like this. I don't really think there's any alternative.
But I also think it's a good idea to occasionally do a minimal-trust investigation: to suspend my trust in others and dig as deeply into a question as I can. This is not the same as taking a class, or even reading and thinking about both sides of a debate; it is always enormously more work than that. I think the vast majority of people (even within communities that have rationality and critical inquiry as central parts of their identity) have never done one.
Minimal-trust investigation is probably the single activity that's been most formative for the way I think. I think its value is twofold:
It helps me develop intuitions for what/whom/when/why to trust, in order to approximate the views I would hold if I could understand things myself.
It is a demonstration and reminder of just how much work minimal-trust investigations take, and just how much I have to rely on trust to get by in the world. Without this kind of reminder, it's easy to casually feel as though I "understand" things based on a few memes or talking points. But the occasional minimal-trust investigation reminds me that memes and talking points are never enough to understand an issue, so my views are necessarily either based on a huge amount of work, or on trusting someone.
In this piece, I will:
Give an example of a minimal-trust investigation I've done, and list some other types of minimal-trust investigations one could do.
Discuss a bit how I try to get by in a world where nearly all my beliefs ultimately need to come down to trusting someone.
Example minimal-trust investigations
The basic idea of a minimal-trust investigation is suspending one's trust in others' judgments and trying to understand the case for and against some claim oneself, ideally to the point where one can (within the narrow slice one has investigated) keep up with experts.2 It's hard to describe it much more than this other than by example, so next I will give a detailed example.
Detailed example from GiveWell
I'll start with the case that long-lasting insecticide-treated nets (LLINs) are a cheap and effective way of preventing malaria. I helped investigate this case in the early years of GiveWell. My discussion will be pretty detailed (but hopefully skimmable), in order to give a tangible sense of the process and twists/turns of a minimal-trust investigation.
Here's how I'd summarize the broad outline of the case that most moderately-familiar-with-this-topic people would give:3
People sleep under LLINs, which are mosqu...
|
Dec 12, 2021 |
Opinion: Estimating Invertebrate Sentience by Jason Schukraft
38:56
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Opinion: Estimating Invertebrate Sentience, published by Jason Schukraft on the Effective Altruism Forum.
Introduction
Between May 2018 and June 2019 Rethink Priorities completed a large project on the subject of invertebrate sentience.[1] We investigated the best methodology to approach the question, outlined some philosophical difficulties inherent in the project, described the features most relevant to invertebrate sentience, compiled the extant scientific literature on the topic, summarized our results, and ultimately produced an invertebrate welfare cause profile. We are currently in the process of identifying concrete interventions to improve invertebrate wellbeing, with a report on the welfare of managed honey bees due out in mid-November and a report on the welfare of farmed snails nearing completion.
One thing we did not do was publish explicit numerical estimates of the probability that various groups of invertebrates are sentient.
Our team discussed publishing such estimates many times, but these discussions generated considerable internal disagreement. Two members of the (four person) team believed that publishing explicit sentience estimates was a bad idea. The other two members felt that it was a good idea. In the end, we settled on the following compromise: several months after the completion of the invertebrate sentience project, we would publish an unofficial opinion piece in which each of us could share her/his own reasoning on the subject and, if so desired, her/his own estimates.[2]
This post fulfills that compromise. In it, the four members of Rethink Priorities’ invertebrates team—Daniela R. Waldhorn, Marcus A. Davis, Peter Hurford, and Jason Schukraft—outline their views on the value, feasibility, and danger of quantitative estimates of invertebrate sentience. Marcus and Peter provide numerical estimates of sentience for each of the taxa we investigated for our invertebrate sentience project, Daniela offers a qualitative ranking of the same taxa, and Jason argues that we are not yet in a position to deliver estimates that are actionable and robust enough to outweigh the (slight but non-negligible) harm that publishing such estimates prematurely might engender.
What follows are the personal opinions of individual researchers. Officially, Rethink Priorities does not have a position on the explicit probability that various invertebrates are sentient.
Daniela R. Waldhorn
1. Vertebrates
There is an ample and detailed body of empirical data which justifies believing that non-human vertebrates are sentient. In particular, there is solid neuro-anatomical, physiological and behavioral evidence that vertebrates like cows and chickens are conscious. There is also a growing trend to recognize that these animals do not only experience physical suffering (and pleasure) but also have emotional lives (see e.g. Marino, 2017; Proctor et al., 2013).
Based on existing evidence and the generalized acceptance of the Cambridge Declaration on Consciousness (2012)[3], my overall conclusions regarding the probabilities of consciousness for these animals are presented as follows:
2. Invertebrates
When we consider invertebrates, the debate about whether they are conscious becomes much more complex. First, it must be conceded that the numerous invertebrate species and their diversity impose severe constraints to justifiable generalizations about the presence of consciousness in this group of animals. Second, the scientific literature about sentience in invertebrates is not only scarce but fragmentary–that is to say, the extent to which invertebrates have been investigated varies. Thus, there are some particular species about which there is a comparatively great deal of knowledge (e.g., fruit flies), whereas much less research has focused on individuals of ot...
|
Dec 12, 2021 |
Donor Lottery Debrief by TimothyTelleenLawton
08:27
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Donor Lottery Debrief, published by TimothyTelleenLawton on the Effective Altruism Forum.
Good news, I've finally allocated the rest of the donor lottery funds from the 2016-2017 Donor Lottery (the first one in our community)! It took over 3 years but I'm excited about the two projects I funded. It probably goes without saying, but this post is about an independent project and does not represent CFAR (where I work).
This post contains several updates related to the donor lottery:
My first donation: $25k to The Czech Association for Effective Altruism (CZEA) in 2017
My second (and final) donation: $23.5k to Epidemic Forecasting (EpiFor) in 2020
Looking back on the Donor Lottery
Call for more projects like these to fund
CZEA
My previous comments on the original donor lottery post share the basics of how the first $25k was used for CZEA (this was $5k more than I was originally planning to donate due to transfer efficiency considerations). Looking back now, I believe that donation likely had a strong impact on EA community building.
My donation was the largest that CZEA had received (I think they previously had received one other large donation—about half the size) and it was enough for CZEA to transition from a purely volunteer organization into a partially-professional organization (1 FTE, plus volunteers). Based on conversations with Jan Kulveit, I believe it would have taken at least 8 more months for CZEA to professionalize otherwise. I believe that in the time they bought with the donation, they were able to more easily secure substantial funding from CEA and other funders, as well as scale up several compelling initiatives: co-organizing Human-aligned AI Summer School, AI Safety Research Program, and a Community Building Retreat (with CEA).
I also have been glad to see a handful of people get involved with EA and Rationality through CZEA, and I think the movement is stronger with them. To pick an example familiar to me, several CZEA leaders were recently part of CFAR's Instructor Training Program: Daniel Hynk (Co-founder of CZEA), Jan Kulveit (Senior Research Scholar at FHI), Tomáš Gavenčiak (Independent Researcher who has been funded by EA Grants), and Irena Kotíková (President of CZEA).
For more detail on CZEA's early history and the impact of the donor lottery funds (and other influences), see this detailed account.
EpiFor
In late April 2020, I heard about Epidemic Forecasting—a project launched by people in the EA/Rationality community to inform decision makers by combining epidemic modeling with forecasting. I learned of the funding opportunity through my colleague and friend, Elizabeth Garrett.
The pitch was immediately compelling to me as a 5-figure donor: A group of people I already believed to be impressive and trustworthy were launching a project to use forecasting to help powerful people make better decisions about the pandemic. Even though it seemed likely that nothing would come of it, it seemed like an excellent gamble to make, based on the following possible outcomes:
Prevent illness, death, and economic damage by helping governments and other decision makers handle the pandemic better, especially governments that couldn't otherwise afford high-quality forecasting services
Highlight the power of—and test novel applications of—an underutilized tool: forecasting (see the book Superforecasting for background on this)
Test and demonstrate the opportunity for individuals and institutions to do more good for important causes by thinking carefully (Rationality/EA) rather than relying on standard experts and authorities alone
Engage members of our community in an effort to change the world for the better, in a way that will give them some quick feedback—thus leading to deeper/faster learning
Cross-pollinate our community with professional fie...
|
Dec 12, 2021 |
EA Birthday Posts: An Alternative to Fundraisers by kuhanj
12:15
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: EA Birthday Posts: An Alternative to Fundraisers, published by kuhanj on the Effective Altruism Forum.
For my birthday earlier this year, I spent a fair amount of time writing an EA-themed birthday post (reproduced at the bottom of this write-up). I think that this post did fairly well - 5 messages and subsequent calls about career plan changes (!), and 170 reactions on Facebook. As such, I'd be excited for more EAs to make similar posts, especially other highly involved university organizers with experience communicating about EA. In this post I share my thought-process for making this birthday post, what I could’ve done differently, some considerations for other EAs interested in doing the same, and the post itself. I’d love to hear feedback on the post and the ideas in this writeup, and similar successful examples of leveraging birthdays and other occasions for EA purposes.
Choosing the content of the post:
I initially decided to make an EA-themed birthday fundraiser, thinking it might be a uniquely strong meta-EA opportunity. At first I spent a lot of time trying to figure out what cause area I wanted to raise awareness for. In the end, however, I went down a different route than typical birthday fundraisers. Rather than picking a particular charity and asking for donations, I decided instead to make a different ask - to consider high impact careers (in particular, by highlighting 80,000 Hours and its Key Ideas page). I did this for three main reasons:
A change in career choice is far more impactful than making a donation (and will probably lead to donations later on anyway). Even one person counterfactually making a career pivot towards addressing the most pressing problems as a result of reading my post could be worth upwards of hundreds of thousands of dollars in donations - several orders of magnitude more successful than I would expect for a regular birthday fundraiser.
Birthdays are a good opportunity to ask people to do costly things that would make me happy, like donating to a charity I care about or, in my case, reading a lengthy FB post and website.
I wanted to counter common misconceptions about EA. Many people I meet at Stanford who have heard of it already think EA = EArning (sorry) to give, or that it only focuses on donations to evidence-backed short-term well-being focused interventions. I thought my birthday post provided a good opportunity to address these misconceptions.
Ideas I wanted to convey through the post:
What we choose to do with our careers is likely the highest impact decision we’ll make.
80000hours.org is a really great resource for (especially young) people trying to figure out what problems are most pressing, and how we can use our career to tackle them.
Addressing common misconceptions about EA: That it is not just about donations or earning to give, or solely about global health interventions, or any single cause area for that matter.
Short descriptions of the current cause areas prioritized in EA and why they’re prioritized. I wasn’t thrilled with the wording I ended up with for each of the cause area descriptions, but it’s a start. I’d be excited to hear others’ thoughts on how to best communicate about these causes to non-EA audiences.
Impact of the post:
Most excitingly, five people so far have taken me up on my offer to talk to them about changing careers/career plans. I’ve had calls and planned follow up with each of them to discuss their thoughts on the 80,000 Hours Key Ideas page and applying the content to their careers. A few others messaged me saying they’d read the 80,000 Hours Key Ideas page and really liked it. The post has received 170 likes and 5 shares, which is my most well-received Facebook post so far by quite some margin (nearly twice as much engagement as my second most popular post...
|
Dec 12, 2021 |
Introducing Metaforecast: A Forecast Aggregator and Search Tool by NunoSempere, Ozzie Gooen
07:47
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Introducing Metaforecast: A Forecast Aggregator and Search Tool, published by NunoSempere, Ozzie Gooen on the Effective Altruism Forum.
Introduction
The last few years have seen a proliferation of forecasting platforms. These platforms differ in many ways, and provide different experiences, filters, and incentives for forecasters. Some platforms, like Metaculus and Hypermind, use volunteers with prizes; others, like PredictIt and Smarkets, are formal betting markets.
Forecasting is a public good, providing information to the public. While the diversity among platforms has been great for experimentation, it also fragments information, making the outputs of forecasting far less useful. For instance, different platforms ask similar questions using different wordings. The questions may or may not be organized, and the outputs may be distributions, odds, or probabilities.
Fortunately, most of these platforms either have APIs or can be scraped. We’ve experimented with pulling their data to put together a listing of most of the active forecasting questions and most of their current estimates in a coherent and more easily accessible platform.
Metaforecast
Metaforecast is a free & simple app that shows predictions and summaries from 10+ forecasting platforms. It shows simple summaries of the key information; just the immediate forecasts, no history. Data is fetched daily. There’s a simple string search, and you can open the advanced options for some configurability. Currently between all of the indexed platforms we track ~2100 active forecasting questions, ~1200 (~55%) of which are on Metaculus. There are also 17,000 public models from Guesstimate.
One obvious issue that arose was the challenge of comparing questions among platforms. Some questions have results that seem more reputable than others. Obviously a Metaculus question with 2000 predictions seems more robust than one with 3 predictions, but less obvious is how a Metaculus question with 3 predictions compares to one from Good Judgement Superforecasters where the number of forecasters is not clear, or to estimates from a Smarkets question with £1,000 traded. We believe that this is an area that deserves substantial research and design experimentation. In the meantime we use a star rating system. We created a function that estimates reputability as “stars” on a 1-5 system using the forecasting platform, forecast count, and liquidity for prediction markets. The estimation came from volunteers acquainted with the various forecasting platforms. We’re very curious for feedback here, both on what the function should be, and how to best explain and show the results.
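For a sense of what such a function might look like, here is a hedged Python sketch that encodes only the per-platform rules quoted in the Data Sources table below; the function name and structure are mine, and the real Metaforecast implementation may well differ.

```python
# Sketch of a platform- and volume-based star rating, following the rules
# listed in the Data Sources table below. Not Metaforecast's actual code.
def stars(platform: str, num_forecasts: int = 0) -> int:
    """Return a rough 1-5 'reputability' rating for a forecasting question."""
    if platform == "Metaculus":
        if num_forecasts > 300:
            return 4
        if num_forecasts > 100:
            return 3
        return 2
    if platform == "Foretell (CSET)":
        return 2 if num_forecasts >= 100 else 1
    if platform == "Hypermind":
        return 3
    if platform == "Good Judgement":  # superforecaster dashboards
        return 4
    if platform == "Good Judgement Open":
        return 3 if num_forecasts >= 100 else 2
    return 2  # placeholder default for platforms not covered here

print(stars("Metaculus", num_forecasts=2000))  # -> 4
```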
Metaforecast is being treated as an experimental endeavor of QURI. We have spent a few weeks on it so far, after developing technologies and skill sets that made it fairly straightforward. We're currently expecting to support it for at least a year and provide minor updates. We’re curious to see what interest is like and respond accordingly.
Metaforecast is being led by Nuño Sempere, with support from Ozzie Gooen, who also wrote much of this post.
Select Search Screenshots
[Screenshots of example searches for “Charity” and “GiveWell” omitted]
Data Sources
| Platform | Information used in Metaforecast | Robustness |
|---|---|---|
| Metaculus | Active questions only. The current aggregate is shown for binary questions, but not for continuous questions. | 2 stars if it has fewer than 100 forecasts, 3 stars when between 101 and 300, 4 stars if over 300 |
| Foretell (CSET) | All active questions | 1 star if a question has fewer than 100 forecasts, 2 stars if it has more |
| Hypermind | Questions on various dashboards | 3 stars |
| Good Judgement | We use various superforecaster dashboards. You can see them here and here | 4 stars |
| Good Judgement Open | All active questions | 2 stars if a question has fewer than 100 forecasts, 3 stars if it has more |
| Smarkets | Only take the polit... | |
|
Dec 12, 2021 |
[New org] Canning What We Give by Louis_Dixon
03:25
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: [New org] Canning What We Give, published by Louis_Dixon on the Effective Altruism Forum.
Epistemic status: 30% (plus or minus 50%). Further details at the bottom.
In the 2019 EA Cause Prioritisation survey, Global Poverty remains the most popular single cause across the sample as a whole. But after more engagement with EA, around 42% of people change their cause area, and of those, a majority (54%) move towards the Long Term Future/Catastrophic and Existential Risk Reduction.
While many people find that donations help them stay engaged (and continue to be a great thing to do), there has been much discussion of other ways people can contribute positively.
In thinking about the long-run future, one area of research has been improving humanity's resilience to disasters. A 2014 paper looked at global refuges, and more recently ALLFED, among others, have studied ways to feed humanity in disaster scenarios.
Much work has been done, and much more is needed, to directly reduce risks, for example through pandemic preparedness, improving nuclear treaties, and improving the functioning of international institutions.
But we believe that there are still opportunities to increase resilience in disaster scenarios. Wouldn't it be great if there was a way to directly link the simplicity of donations with effective methods for the recovery of civilisation?
Photo credit to Facebook and Wikipedia - cans shown are illustrative only
Canning what we give
In The Knowledge by Lewis Dartnell (p. 40), an estimate is given of how long a supermarket would be able to feed a single person:
So if you were a survivor with an entire supermarket to yourself, how long could you subsist on its contents? Your best strategy would be to consume perishable goods for the first few weeks, and then turn to dried pasta and rice... A single average-sized supermarket should be able to sustain you for around 55 years - 63 if you eat the canned cat and dog food as well.
But in thinking about a population, there would be fewer resources to go around per person.
The UK Department for Environment, Food and Rural Affairs (DEFRA) estimated in 2010 that there was a national stock reserve of 11.8 days of 'ambient slow-moving groceries'. (ibid, p.40)
It seems clear that there aren't enough canned goods.
Our proposal
We propose that:
We try to expand both the range of things that are canned, and find ways to bury them deep in the earth (ideally beyond the reach of the mole people)
Donors to GWWC instead consider CWWG
Donors put valuable items in cans which they would want in a disaster scenario, e.g. fruit salads, Worcester sauce, marmalade
EA funds provides a donation infrastructure to support sending cans
A mock-up of the CWWG dashboard
Risks
We are concerned that:
Cluelessness could lead to uncertain outcomes
The production of too many cans could make things too shiny and there could be a shortage of sunglasses
There might not be enough can openers
Focusing industrial production on making cans could lead to a global arms race to make more cans
Further information
Partial credit to this goes to Harri Besceli - we came up with the idea together.
This was a joke. Happy April fools.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
My Q1 2019 EA Hotel donation by vipulnaik
19:33
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: My Q1 2019 EA Hotel donation, published by vipulnaik on the Effective Altruism Forum.
On March 31, 2019, I donated 3200 GBP to the EA Hotel fundraiser via GoFundMe. The donation cost me $4,306.14 USD. My decision was based mainly on the information in the EA Hotel page and the documents linked from the donations list website page on EA Hotel, which include the recent Effective Altruism Forum posts.
In this post, I describe the reasons that influenced my decision to donate. I didn't draft the post before donating, so some of the elaboration includes aspects that didn't (at least consciously) influence my donation decision.
I limited the time I spent writing the post, and will most likely not be able to respond to comments. But please feel free to comment with your thoughts in response to my post or other comments!
NOTE: I have no affiliation with the EA Hotel. I have never visited it, nor have I closely collaborated with anybody living there. I did not show this post to anybody affiliated with the EA Hotel before posting. Nothing here should be taken as an official statement about the EA Hotel.
The sections of the post:
I like the idea of the EA Hotel
I like the skin-in-the-game of the key players
I like the execution so far
I see institutional risk reasons for lack of institutional funding: These reasons don't apply to individual donors, so I don't see the lack of institutional funding as a reason to dissuade me from donating
I have not been dissuaded by the reasons against donating that I have seen so far
I find the value of marginal donations high and easy to grasp
How I decided to donate and determined the donation amount
I like the idea of the EA Hotel
My interpretation of the fundamental problem the EA Hotel is trying to solve: provide low-cost and optimized transient living arrangements to people engaged in self-study or early stages of projects. The hotel's low-cost living arrangements are further subsidized so that long-term residents don't have to pay anything at all, and in fact, get a stipend to cover some living expenses. This means that residents can pursue projects with single-minded focus without burning through savings or having to do additional jobs just to keep themselves financially afloat.
The backdrop of the problem, as I understand:
EA communities have congregated in some of the most expensive places in the world, such as the San Francisco Bay Area, Boston/Cambridge, New York City, and London. Even outside of these, most places with significant numbers of EAs tend to be cities, and these tend to have higher costs of living.
Most EA projects have trouble raising enough money to cover costs of living in these places, even after they get funding. Moreover, most EA organizations, which are also based in these areas, do not pay enough of a premium for people to build savings that would allow them to comfortably spend months working on such projects in these expensive locations.
Tendencies within EA to donate large fractions of one's personal wealth may have further exacerbated people's lack of adequate savings to pursue EA projects.
These problems, especially the first one, have been widely acknowledged. Attempts to figure out a new, lower-cost city for EAs and build group housing in that city started as far back as 2014, when the Coordination of Rationality/EA/SSC Housing Project group was created. Browsing through the archives of that Facebook group is interesting because it shows the amount of effort that has gone in over the years in identifying lower-cost living places for EAs. This is the group where EA Hotel founder Greg Colbourn first announced his intention to buy a hotel in Blackpool.
Side note: Peter McCluskey's comment suggests that complaining about the high rent in major hubs is a signal of low status, because the mo...
|
Dec 12, 2021 |
Why Nations Fail and the long-termist view of global poverty by Ben_Kuhn
22:25
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Why Nations Fail and the long-termist view of global poverty, published by Ben_Kuhn on the Effective Altruism Forum.
This is a linkpost for
Within the effective altruism community, people often talk about “long-termist” vs “short-termist” worldviews. The official distinction between the two is that short-termists prioritize problems by how they affect people alive today, while long-termists prioritize problems by how they could affect humanity’s entire future trajectory. In practice, people usually treat this as synonymous with prioritizing either existential risk reduction (if long-termist), or scaling up proven global health interventions (if short-termist).
It’s a bit surprising that each worldview should have exactly one favorite cause area, though. Couldn’t you have short-termist work on existential risk, or long-termist work on global poverty? In reality, these supposedly discrete worldviews seem more like correlated clusters of various different beliefs:
"Short-termist"
High time discount rate
Prefers highly robust “outside view” type arguments
Extrapolates existing effects or trends
Skeptical of prima facie bizarre claims
Focuses on fixing known, concrete problems
Fast feedback loops are critical to making progress
"Long-termist"
Low or no discount rate
More open to “inside view” that this case might be different
Reasons about the future from first principles
Takes weird-sounding ideas more seriously
Focuses on preventing hypothetical, nebulous risks
Fast feedback is helpful, but not the most important thing
It’s understandable why some of these are correlated, but there must be a lot of people who fall through the cracks between the clusters. What if you share short-termists’ skepticism of weird claims and hypothetical risks, but you’re willing to focus on first-principles reasoning and work on a long time scale?
You’d still want to focus on something that’s a problem today, so you’d probably want to work on global poverty. But you’d dismiss GiveWell’s top charities as treating a symptom and not a cause. Why do these countries need charity in the first place? South Korea used to be just as poor as anywhere in Africa, but today it’s incredibly prosperous, while sub-Saharan Africa has made way less progress. If we could move the lowest-growth countries from their current trajectory onto South Korea’s, we’d have done much more than any single malaria-eradication campaign could.
If that were your worldview, you’d really enjoy Why Nations Fail, one of the best attempts I’ve seen at getting a first-principles understanding of what affects countries’ long-term economic growth.
First of all, what is economic growth? It’s when people produce more (or more valuable) stuff with the same effort[1]. The first and least controversial point in Why Nations Fail is that for a nation to keep on doing more with less, its individual citizens need to be incentivized to become more productive. In particular, the state should not set up systems where, whenever someone gets more productive, other people come and take away the extra stuff they produced. Those systems are what the authors call extractive economic institutions, and they include things like slavery, serfdom, indentured servitude, roving bandits, guilds, collectivized agriculture, nationalization of private assets, officials requiring bribes, kangaroo courts, banana republics, and other [animal or vegetable] [civic institution].
You might think you could grow your economy under extractive institutions by forcing people to become more productive even if they won’t get to keep the surplus. This does often work in the short term (and the short term can last a surprisingly long time)–for instance, Soviet Russia grew at about 5% annually from 1930-1970,[2] despite having extremely extractive institutions. But ...
|
Dec 12, 2021 |
How are resources in EA allocated across issues? by Benjamin_Todd
15:00
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: How are resources in EA allocated across issues?, published by Benjamin_Todd on the effective altruism forum.
This is a cross-post from 80,000 Hours.
How are the resources in effective altruism allocated across cause areas?
Knowing these figures, for both funding and labour, can help us spot gaps in the current allocation. In particular, I’ll suggest that broad longtermism seems like the most pressing gap right now.
This is a follow on from my first post, where I estimated the total amount of committed funding and people, and briefly discussed how many resources are being deployed now vs. invested for later.
These estimates are for how the situation stood in 2019. I made them in early 2020, and made a few more adjustments when I wrote this post. As with the previous post, I recommend that readers take these figures as extremely rough estimates, and I haven’t checked them with the people involved. I’d be keen to see additional and more thorough estimates.
Update Oct 2021: I mistakenly said the number of people reporting 5 for engagement was ~2300, but actually this was the figure for people reporting 4 or 5.
Allocation of funding
Here are my estimates:
| Cause Area | $ millions per year in 2019 | % |
|---|---|---|
| Global health | 185 | 44% |
| Farm animal welfare | 55 | 13% |
| Biosecurity | 41 | 10% |
| Potential risks from AI | 40 | 10% |
| Near-term U.S. policy | 32 | 8% |
| Effective altruism/rationality/cause prioritisation | 26 | 6% |
| Scientific research | 22 | 5% |
| Other global catastrophic risk (incl. climate tail risks) | 11 | 3% |
| Other long term | 1.8 | 0% |
| Other near-term work (near-term climate change, mental health) | 2 | 0% |
| Total | 416 | 100% |
What it’s based on:
Using Open Philanthropy’s grants database, I averaged the allocation to each area 2017–2019 and made some minor adjustments. (Open Phil often makes 3yr+ grants, and the grants are lumpy, so it’s important to average.) At a total of ~$260 million, this accounts for the majority of the funding. (Note that I didn’t include the money spent on Open Phil’s own expenses, which might increase the meta line by around $5 million.)
I added $80 million to global health for GiveWell using the figure in their metrics report for donations to GiveWell-recommended charities excluding Open Philanthropy. (Note that this figure seems like it’ll be significantly higher in 2020, perhaps $120 million, but I’m using the 2019 figure.)
GiveWell says their best guess is that the figures underestimate the money they influence by around $20 million, so I added $20 million. These figures also ignore what’s spent on GiveWell’s own expenses, which could be another $5 million to meta.
For longtermist and meta donations that aren’t Open Philanthropy, I guessed $30 million per year. This was based on roughly tallying up the medium-sized donors I know about and rounding up a bit. I then roughly allocated them across cause areas based on my impressions. This figure is especially uncertain, but seems small compared to Open Philanthropy, so I didn’t spend too long on it.
Neartermist donations outside of Open Phil and GiveWell are the most uncertain.
I decided to exclude donors who don’t explicitly donate under the banner of effective altruism, or else we might have to include billions of dollars spent on cost-effective global health interventions, pandemic prevention, climate change etc. I excluded the Gates Foundation too, though they have said some nice things about EA. This is a very vague boundary.
For animal welfare, about $9 million has been donated to the EA Animal Welfare Fund, compared to $11.6 million to the Long Term Future Fund and the Meta Fund (now called the Infrastructure Fund). If the total amount to longtermist and meta causes is $30 million per year, and this ratio holds more broadly, it would imply $23 million per year to EA animal welfare (excluding OP) in total. This seems plausible considering that Ani...
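The extrapolation above is simple to check; here is the arithmetic as a short Python snippet using the rough figures quoted in this section, so the result is an order-of-magnitude estimate rather than a precise number.

```python
# Scale the ~$30M/yr non-Open-Phil longtermist+meta guess by the ratio of
# EA Animal Welfare Fund donations to Long Term Future + Meta Fund donations.
animal_fund = 9.0              # $M to the EA Animal Welfare Fund
longtermist_meta_funds = 11.6  # $M to the Long Term Future Fund + Meta Fund
longtermist_meta_total = 30.0  # $M/yr rough guess for all non-OP longtermist/meta giving

implied_animal_total = longtermist_meta_total * animal_fund / longtermist_meta_funds
print(f"Implied EA animal welfare giving (excl. Open Phil): ~${implied_animal_total:.0f}M/yr")
# -> ~$23M/yr, matching the figure in the text
```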
|
Dec 12, 2021 |
Frank Feedback Given To Very Junior Researchers by NunoSempere
07:57
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Frank Feedback Given To Very Junior Researchers, published by NunoSempere on the effective altruism forum.
Over the last year, I have found myself giving feedback on various drafts, something that I'm generally quite happy to do. Recently, I got to give two variations of this same feedback in quick succession, so I noticed the commonalities, and then realized that these commonalities were also present on past pieces of feedback. I thought I'd write the general template up, in case others might find it valuable.
High level comments
You are working at the wrong level of abstraction and depth / you are biting more than you can chew / being too ambitious.
In particular, the questions that you analyze are likely to have many cruxes, i.e, factors that might change the conclusion completely. But you only identify a few such cruxes, and thus your analysis doesn't seem likely to be that robust.
I guess that the opposite error is possible—focus too much on one specific scenario which isn't that likely to happen. I just haven't seen it as much, and it doesn't seem as crippling when it happens.
Because you're being too ambitious, you don't have the tools necessary to analyze what you want to analyze, and to some extent those tools may not exist.
Compare with: Forecasting transformative AI timelines using biological anchors, Report on Semi-informative Priors on Transformative AI or Invertebrate Sentience: Summary of findings, which are much more constrained and have specific technical/semi-technical intellectual tools suited to the job (comparison with biological systems, variations on Laplace's law and other priors, markers of consciousness like reaction to harmful stimuli). You don't have an equivalent technical tool.
There is a missing link between the individual facts you outline, and the conclusions you reach (e.g., about [redacted] and [redacted]). I think that the correct thing to do here is to sit with the uncertainty, or to consider a range of scenarios, rather than to reach one specific conclusion. Alternatively, you could highlight that different dynamics could still be possible, but that on the balance of probabilities, you personally think that your favored hypothesis is more likely.
But in that case, it'd be great if you more clearly defined your concepts and then expressed your certainty in terms of probabilities, because those are easier to criticize or put to the test, and make it easier to notice that there is a disagreement to be had.
Judgment calls
I get the impression that you rely too much on secondary sources, rather than on deeply understanding what you're talking about.
You are making the wrong tradeoff between formality and ¿clarity of thought?
Your report was difficult to read because of the trappings of scholarship (formal tone, long sentences and paragraphs, etc.). An index would have helped.
Your classification scheme is not exhaustive, and thus less useful.
This seems particularly important when considering intelligent adversaries.
I get the impression that you are not deeply familiar with the topic you are talking about. For example, when giving your overview, you don't consider [redacted], which is really the company working on this space.
In particular, I expect that the funders or decision-makers (for instance, Open Philanthropy) whom you might be attempting to influence or inform will be more familiar with the topic than you, and would thus not outsource their intellectual labor to your report.
I don't really know whether you are characterizing the literature faithfully, whether you're just citing the top few most salient experts that you found, or whether there are other factors at play. For instance, maybe the people who [redacted] don't want to be talking about it. Even if you are representing the academic consensus fairly, I don't know how much to trust it....
|
Dec 12, 2021 |
Cause profile: mental health by MichaelPlant
48:58
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Cause profile: mental health, published by MichaelPlant on the effective altruism forum.
Introduction
In this piece, I argue that mental illness may be one of the world’s most pressing problems.
Here is a summary of the key points:
Not only does mental illness seem to cause as much total worldwide unhappiness as global poverty, if not more, it also seems far more neglected.
Effective mental health interventions exist currently. These have been improving over time and we can expect further improvements.
I estimate the cost-effectiveness of a particular mental health organisation, StrongMinds, and claim it is (at least) four times more effective per dollar than GiveDirectly, a GiveWell-recommended top charity. This assumes we understand cost-effectiveness in terms of happiness, as measured by self-reported life satisfaction.
I explain why it’s unclear whether StrongMinds is better than all the other GiveWell-recommended life-improving charities (due to inconsistent evidence regarding negative spillovers from wealth increases) and life-saving charities (due to methodological issues about where on a 0-10 life satisfaction scale the ‘neutral point’ equivalent to being dead lies).
I make some initial suggestions for the highest-impact careers, as well as alternative donation opportunities. No thorough analysis has yet been done to compare these.
While mental health has the most obvious appeal for those who believe we ought to be maximising the happiness of people alive today, I explain that belief isn’t necessary to conclude it is of the highest priority: someone could, in principle, value what happens to all possible sentient life and still reasonably decide this cause is where they’ll do the most good. I raise, but do not seek to resolve, the many crucial considerations here.
In order to get a sense of how important work on this area is, I examine (i) the scale, (ii) neglectedness, and (iii) tractability of the problem in turn. Ultimately, tractability - which I understand as cost-effectiveness - is what really matters and the preceding two sections should be seen as providing helpful background. I then set out why someone might - and might not - think this cause is their top priority and what they could do next if they decided it is.
Scale (how many suffer and by how much?)
Section summary: mental illness causes more suffering than poverty in developed countries, seems to cause roughly as much suffering worldwide as poverty does and, unlike poverty, is not shrinking.
The 2013 Global Burden of Disease (GBD) report estimated that depression affects approximately 350 million people annually, while anxiety afflicts another 250 million.[1] By comparison, the report estimated that malaria affects 146 million people, while a 2015 World Bank report estimated that 702 million people live on less than $1.25 a day.[2] While poverty affects many more people than mental illness, the share of the world population living in absolute poverty is falling rapidly: there were 1.76 billion people in absolute poverty in 1999, so the total has since fallen by about 1 billion.[3] By contrast, severe mental illnesses are on the rise.[4] As one example, in the UK the proportion of people reporting severe symptoms of common mental disorders rose by 34.7% between 1993 and 2014 (from 6.9% to 9.3% of the population).[5] It’s unlikely this is solely due to increased reporting: an American birth cohort analysis running from 1938 to 2010 found large increases in all psychopathologies after using standard methods to control for possible increases in reporting.[6]
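One point worth flagging for readers: the 34.7% figure is a relative rise in the prevalence rate, not a rise of 34.7 percentage points. A minimal sketch of that calculation, using the rounded prevalence figures quoted above (the small gap from 34.7% comes from rounding in the source):

```python
# Relative rise in the share of the UK population reporting severe symptoms
# of common mental disorders, using the rounded figures quoted in the text.
prevalence_1993 = 6.9  # percent of the population
prevalence_2014 = 9.3  # percent of the population

relative_rise = (prevalence_2014 - prevalence_1993) / prevalence_1993
print(f"Relative rise: {relative_rise:.1%}")  # ~34.8% with these rounded inputs; the post quotes 34.7%
```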
To properly assess scale we also need to know how much suffering each causes: if poverty makes people miserable but mental illnesses are only mildly bad, poverty will be larger in scale. In a recent analysis of self-reported happiness scores, the World Happiness Rep...
|
Dec 12, 2021 |
Existential Risk and Economic Growth by leopold
01:40
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Existential Risk and Economic Growth, published by leopold on the effective altruism forum.
As a summer research fellow at FHI, I’ve been working on using economic theory to better understand the relationship between economic growth and existential risk. I’ve finished a preliminary draft; see below. I would be very interested in hearing your thoughts and feedback!
Draft: leopoldaschenbrenner.com/xriskandgrowth
Abstract:
Technological innovation can create or mitigate risks of catastrophes—such as nuclear war, extreme climate change, or powerful artificial intelligence run amok—that could imperil human civilization. What is the relationship between economic growth and these existential risks? In a model of endogenous and directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend much on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity's survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity. Nevertheless, if the scale effect of existential risk is large and the returns to research diminish rapidly, it may be impossible to avert an eventual existential catastrophe.
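The abstract only describes the inverted U-shape qualitatively; the actual model (endogenous, directed technical change) is in the linked draft. Purely as an illustration of that qualitative shape, here is a hypothetical toy hazard function in which raw danger grows with wealth but safety spending grows faster, so per-period existential risk first rises and then falls. None of the functional forms or parameters below come from the paper:

```python
# Toy illustration of a "time of perils": NOT the paper's model, just a
# hypothetical hazard that rises and then falls as wealth grows.
def toy_hazard(wealth, danger_scale=1.0, safety_share=0.02):
    danger = danger_scale * wealth           # more advanced technology -> more raw danger
    safety = safety_share * wealth ** 1.5    # safety spending grows faster than wealth
    return danger / (1.0 + safety)           # net per-period hazard

for w in (1, 5, 10, 50, 200):
    print(f"wealth={w:>3}  hazard={toy_hazard(w):.2f}")
# The hazard rises at low wealth, peaks, and then declines: an inverted U.
```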
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Some thoughts on deference and inside-view models by Buck
14:16
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Some thoughts on deference and inside-view models, published by Buck on the effective altruism forum.
TL;DR:
It's sometimes reasonable to believe things based on heuristic arguments, but it's useful to be clear with yourself about when you believe things for heuristic reasons as opposed to having strong arguments that take you all the way to your conclusion.
A lot of the time, I think that when you hear a heuristic argument for something, you should be interested in converting this into the form of an argument which would take you all the way to the conclusion except that you haven't done a bunch of the steps--I think it's healthy to have a map of all the argumentative steps which you haven't done, or which you're taking on faith.
I think that all the above can be combined to form a set of attitudes which are healthy on both an individual and community level. For example, one way that our community could be unhealthy would be if people felt inhibited from saying when they don't feel persuaded by arguments. But another unhealthy culture would be if we acted like you're a chump if you believe things just because people who you trust and respect believe them. We should have a culture where it's okay to act on arguments without having verified every step for yourself, and you can express confusion about individual steps without that being an act of rebellion against the conclusion of those arguments.
I wrote this post to describe the philosophy behind the schedule of a workshop that I ran in February. The workshop is kind of like AIRCS, but aimed at people who are more hardcore EAs, less focused on CS people, and with a culture which is a bit less like MIRI and more like the culture of other longtermist EAs.
Thanks to the dozens of people who I've talked to about these concepts for their useful comments; thanks also to various people who read this doc for their criticism. Many of these ideas came from conversations with a variety of EAs, in particular Claire Zabel, Anna Salamon, other staff of AIRCS workshops, and the staff of the workshop I’m going to run.
I think this post isn't really insightful enough or well-argued enough to justify how expansive it is. I posted it anyway because it seemed better than not doing so, and because I thought it would be useful to articulate these claims even if I don't do a very good job of arguing for them.
I tried to write the following without caveating every sentence with "I think" or "It seems", even though I wanted to. I am pretty confident that the ideas I describe here are a healthy way for me to relate to thinking about EA stuff; I think that these ideas are fairly likely to be a useful lens for other people to take; I am less confident but think it's plausible that I'm describing ways that the EA community could be different that would be very helpful.
Part 1: ways of thinking
Proofs vs proof sketches
When I first heard about AI safety, I was convinced that AI safety technical research was useful by an argument that was something like "superintelligence would be a big deal; it's not clear how to pick a good goal for a superintelligence to maximize, so maybe it's valuable to try to figure that out." In hindsight this argument was making a bunch of hidden assumptions. For example, here are three objections:
It's less clear that superintelligence can lead to extinction if you think that AI systems will increase in power gradually, and that before we have AI systems as capable as the whole of humanity, we will have AI systems as capable as dozens of humans.
Maybe some other crazy thing (whole brain emulation, nanotech, technology-enabled totalitarianism) is likely to happen before superintelligence, which would make working on AI safety seem worse in a bunch of ways.
Maybe it's really hard to work on technical AI ...
|
Dec 12, 2021 |
Are there any other pro athlete aspiring EAs? by Marcus Daniell
00:53
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Are there any other pro athlete aspiring EAs?, published by Marcus Daniell on the effective altruism forum.
I'm starting an EA aligned non-profit called High Impact Athletes that is aiming to funnel donations from current and retired pro athletes and their fans towards the most effective orgs in the world.
It's still early stage but I wondered if there were any pro or ex athletes hiding in the EA Forum who might be interested in supporting the idea? I believe pro sport is a relatively untapped space for EA and potentially has huge pulling power if the athletes get their fans on board.
Many thanks,
Marcus Daniell
thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Rethink Priorities 2020 Impact and 2021 Strategy by Marcus_A_Davis
19:36
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Rethink Priorities 2020 Impact and 2021 Strategy, published by Marcus_A_Davis on the effective altruism forum.
Summary
Rethink Priorities is an EA research organization focused on influencing funders and key decision-makers to improve decisions within EA and EA-aligned organizations.
This year we expanded our operations team and hired multiple researchers across multiple causes, allowing us to expand and improve our animal welfare and EA movement building work, and build a dedicated longtermism team.
Rethink Priorities currently has a staff of 16 people, corresponding to 13 full-time equivalents (including 3 FTE operations staff). This year we spent 72% of our time working on research relevant to farmed and wild animal welfare, 9% on movement building, 8% on longtermism, and 11% on other research projects. By the end of the year, we’ll have spent about $833K in 2020.
We track our impact in multiple ways. This year we found qualitative interviews with key decision makers and leaders at EA organizations particularly helpful in understanding how to improve. Given the feedback we’ve received, we think two areas for improvement are more focus on communication with key funders and groups and also improving the visual communication of our work.
Over the next few years we plan to expand our work in animal welfare, relaunch our work in longtermism, and continue our work in movement building.
We continue to be constrained by a lack of funding to hire talented researchers and execute promising projects, having to turn down or delay very high-value projects. In particular, our non-animal growth is constrained by depending too heavily on EA Funds. We would strongly benefit from new individual donors to support our work and diversify our funder base.
If funded we would hire 1-2 additional researchers to tackle our ambitious research agenda and the opportunities we have to work with more groups. We would also create an intern program. In addition to resulting in additional directly valuable research, this program would also benefit both our future growth and the growth of other EA-aligned research efforts, by helping us identify new talented researchers, helping them build their skills, and helping our existing staff develop their management skills.
Currently, our goal is to raise $1.57M by the end of 2021. This consists of gaps of $757K for animal research, $503K for longtermism research, $261K for meta and movement building, and $46K for other research. We do accept and track restricted funds by cause area if that is of interest.
If you’d like to support our work, you can donate to us as part of Facebook’s donation matching on Giving Tuesday or donate directly to us here. If you’re interested in supporting our work with a major gift, contact Director of Development Janique Behman.
Our Mission
Our mission is to help funders make better grants and help organizations do higher impact work. We accomplish this by doing and communicating research that analyzes existing interventions, broadens and refines the scope of possible consideration, and deepens our understanding of what interventions are possible and effective.
Rethink Priorities Theory of Change
Organizational Structure
Staff
Over the course of 2020, we made a number of hires to improve our team. Thanks to support in 2019, Peter Hurford, Co-Executive Director, became full-time in March 2020, and in May we hired a Director of Operations, Abraham Rowe. Janique Behman joined as our Director of Development in November 2020.
We also expanded our research team, hiring five researchers (~3.75 FTE) to continue and expand our work across animal welfare, movement building, and longtermism.[1]
Michael Aird - Associate Researcher - Previously did longtermist and macrostrategy research for Convergence Analysis and the Center...
|
Dec 12, 2021 |
2017 Donor Lottery Report by AdamGleave
28:11
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: 2017 Donor Lottery Report, published by AdamGleave on the effective altruism forum.
I am the winner of the 2017 donor lottery. This write-up documents my decision process. The primary intended audience are other donors: several of the organisations I decided to donate to still have substantial funding gaps. I also expect this to be of interest to individuals considering working for one of the organisations reviewed.
To recap, in a donor lottery many individuals make small contributions. The accumulated sum is then distributed to a randomly selected participant. Your probability of winning is proportional to the amount donated, such that the expected amount of donations you control is the same as the amount you contribute. This is advantageous because the winner (given the extra work, arguably the "loser") of the lottery can justify spending substantially more time evaluating organisations than they could if they were directing only their smaller personal donations.
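A minimal sketch of the property described above: because the winning probability is proportional to the contribution, the expected amount of money you end up directing equals what you put in. The $100,000 block size matches the post; the $2,000 contribution is illustrative only:

```python
import random

pot = 100_000          # one $100,000 block, as in the 2017 CEA lottery
my_contribution = 2_000  # illustrative contribution, not from the post

p_win = my_contribution / pot
expected_directed = p_win * pot
print(p_win, expected_directed)  # 0.02 and 2000.0 -- expected value equals the contribution

# One simulated draw, with the winner chosen in proportion to contribution size:
contributions = {"me": my_contribution, "everyone else": pot - my_contribution}
winner = random.choices(list(contributions), weights=list(contributions.values()), k=1)[0]
print("winner:", winner)
```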
In 2017, the Centre for Effective Altruism ran a donor lottery, and I won one of the two blocks of $100,000. After careful deliberation, I recommended that CEA make the following regrants:
$70,000 to ALLFED.
$20,000 to the Global Catastrophic Risk Institute (GCRI).
$5,000 to AI Impacts.
$5,000 to Wild Animal Suffering Research.
In the remainder of this document, I describe the selection process I used, and then provide detailed evaluations of each of these organisations.
Selection Process
I am a CS PhD student at UC Berkeley, working to develop reliable artificial intelligence. Prior to starting my PhD, I worked in quantitative finance. This document is independent work and is not endorsed by CEA, the organisations evaluated, or by my current or previous employers.
I assign comparable value to future and present lives, place significant weight on animal welfare (with high uncertainty) and am risk neutral. I have some moral uncertainty but would endorse these statements with >90% probability. Moreover, I largely endorse the standard arguments regarding the overwhelming importance of the far future.
Since I am mostly in agreement with major donors, notably Open Philanthropy, I tried to focus on areas that are the comparative advantage of smaller donors. In particular, I focused my investigation on small organisations with a significant funding gap.
To generate an initial list of possible organisations, I (a) wrote down organisations that immediately came to mind, (b) solicited recommendations from trusted individuals in my network, and (c) reviewed the list of 2017 EA grant recipients. I shortlisted four organisations from a superficial review of the longlist. Ultimately all the organisations on my shortlist were also organisations that immediately came to my mind in (a). This indicates either that I already had a good understanding of the space or that I am poor at updating my opinions.
I then conducted a detailed review of each of the shortlisted organisations. This included reading a representative sample of their published work, soliciting comments from individuals working in related areas, and discussion with staff at the organisation until I felt I had a good understanding of their strategy. In the next section, I summarise my current views on the shortlisted organisations.
The organisations evaluated were provided with a draft of this document and given 14 days to respond prior to publication. I have corrected any mistakes brought to my attention, and have also included a statement from ALLFED; other organisations were provided with the option to include a statement but chose not to do so. Some confidential details have been withheld, either at the request of the organisation or the individual who provided the information.
Summary of conclusions
I ranked ALLFED above GCRI as I view their research ...
|
Dec 12, 2021 |
How to generate research proposals by Jsevillamol
13:36
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: How to generate research proposals, published by Jsevillamol on the effective altruism forum.
Epistemic status: The advice I give is based on anecdotal experience, and what works well for me might not transfer well to others, but my past self would have found this article quite useful.
As part of the application to FHI’s Summer Fellows Program I had to submit a research proposal, and once I finished my original proposal I had to engage again with the process to select my next research project.
In this article I will explain the particular approach I chose for this task, and provide some commentary on what worked well and what did not.
I expect you will get a lot of mileage out of this article if you are an early career researcher struggling to generate research ideas or if you are considering pursuing a career in research and would like to peek into how an important chunk of a researcher’s time is spent.
In short, my top advice for early career researchers is:
Create a pipeline that allows you to note down interesting research questions without committing to them and share them with others to receive feedback
When explicitly generating research questions, read research agendas and talk to other people to find disagreements.
Think about what makes a good research project, to develop taste. My personal take is that what matters most is having concrete research questions, being able to devise a good methodology to answer those questions, having a clear audience and goal for the output you intend to produce, and your environment and background being a good fit for the question.
When selecting between ideas, try to fail fast, and don’t be afraid to discard ideas. To do this, try writing up outlines of the projects and schedule conversations with other researchers with overlapping interests.
To avoid spending too much time on the meta level, preallocate time to making the decision. If you have trouble deciding between concrete projects, just pick the one you are most excited about and work on it until you find a roadblock.
Generating and prioritizing research proposals seems to be a critical part of strategic research, and my informal impression is that systematic approaches are quite underexplored.
The overall process I used is split into three chunks: generating research ideas, curating the ideas, and operationalizing the ideas. We will cover each of these parts in separate sections of this article.
The outcome of this process is a detailed outline of a research project, which clearly explains the methodology and value of the project and which can be reused as an introduction to a potential publication.
The whole process took ~2 weeks. Progress was uneven: on some days I progressed a lot on the research proposal generation, and on others I just focused on other projects.
Brainstorming ideas
For the brainstorming phase I set up a brainstorming document where I collected questions that had drawn my attention, together with useful context and notes on what inspired the idea in a bullet point format.
To fill the list I resorted to the following strategies:
Reading research agendas
Talking to other people and finding disagreements
Creating taxonomies of areas of interest
Reading research agendas proved to be quite fruitful. By research agendas I mostly mean collections of open questions compiled by other researchers. If you already have a topic of interest you can search for research agendas focused precisely on that; if not, they can provide an excellent introduction to new areas of research.
Some examples of research agendas are Allan Dafoe’s AI Governance Research Agenda [REF], Luke Muehlhauser’s How to study superintelligence strategy [REF] or OPP’s Important unresolved research questions in macroeconomic policy [REF].
Talking to people and finding disagreements was...
|
Dec 12, 2021 |
The ITN framework, cost-effectiveness, and cause prioritisation by John G. Halstead
24:51
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: The ITN framework, cost-effectiveness, and cause prioritisation, published by John G. Halstead on the effective altruism forum.
From reading EA material, one might get the impression that the Importance, Tractability and Neglectedness (ITN) framework is the (1) only, or (2) best way to prioritise causes. For example, in EA concepts’ two entries on cause prioritisation, the ITN framework is put forward as the only or leading way to prioritise causes. Will MacAskill’s recent TED talk leaned heavily on the ITN framework as the way to make cause prioritisation decisions. Open Philanthropy Project explicitly prioritises causes using an informal version of the ITN framework.
In this post, I argue that:
Extant versions of the ITN framework are subject to conceptual problems.
A new version of the ITN framework, developed here, is preferable to extant versions.
Non-ITN cost-effectiveness analysis is, when workable, superior to ITN analysis for the purposes of cause prioritisation.
This is because:
Marginal cost-effectiveness is what we ultimately care about.
If we can estimate the marginal cost-effectiveness of work on a cause without estimating the total scale of a problem or its neglectedness, then we should do that, in order to save time.
Marginal cost-effectiveness analysis does not require the assumption of diminishing marginal returns, which may not characterise all problems.
ITN analysis may be useful when it is difficult to produce intuitions about the marginal cost-effectiveness of work on a problem. In that case, we can make progress by zooming out and carrying out an ITN analysis.
In difficult high stakes cause prioritisation decisions, we have to get into the weeds and consider in-depth the arguments for and against different problems being cost-effective to work on. We cannot bypass this process through simple mechanistic scoring and aggregation of the three ITN factors.
For this reason, the EA movement has thus far significantly over-relied on the ITN framework as a way to prioritise causes. For high stakes cause prioritisation decisions, we should move towards in-depth analysis of marginal cost-effectiveness.
[update - my footnotes didn't transfer from the googledoc, so I am adding them now]
1. Outlining the ITN framework
Importance, tractability and neglectedness are three factors which are widely held to be correlated with cost-effectiveness; if one cause is more important, tractable and neglected than another, then it is likely to be more cost-effective to work on, on the margin. ITN analyses are meant to be useful when it is difficult to estimate directly the cost-effectiveness of work on different causes.
Informal and formal versions of the ITN framework tend to define importance and neglectedness in the same way. As we will see below, they differ on how to define tractability.
Importance or scale = the overall badness of a problem, or correspondingly, how good it would be to solve it. So for example, the importance of malaria is given by the total health burden it imposes, which you could measure in terms of a health or welfare metric like DALYs.
Neglectedness = the total amount of resources or attention a problem currently receives. So for example, a good proxy for the neglectedness of malaria is the total amount of money that currently goes towards dealing with the disease.[^1]
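For reference, the factorisation that formal versions of the framework typically have in mind multiplies three ratios so that the intermediate units cancel, leaving marginal cost-effectiveness. This is the common 80,000 Hours-style presentation, shown here only as background; it is one candidate formalisation, not necessarily the version this post goes on to defend:

```latex
% Common factorisation of marginal cost-effectiveness into the ITN factors
% (units cancel across the three ratios; shown for background only).
\[
\underbrace{\frac{\text{good done}}{\text{extra \$}}}_{\text{marginal cost-effectiveness}}
=
\underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times
\underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times
\underbrace{\frac{\text{\% increase in resources}}{\text{extra \$}}}_{\text{neglectedness}}
\]
```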
Extant informal definitions of tractability
Tractability is harder to define and harder to quantify than importance and neglectedness. In informal versions of the framework, tractability is sometimes defined in terms of cost-effectiveness. However, this does not make that much sense because, as mentioned, the ITN framework is meant to be most useful when it is difficult to estimate the marginal cost-effectiveness of work on a particular cause. There would be no reas...
|
Dec 12, 2021 |
Coronavirus Research Ideas for EAs by Peter Wildeford
38:43
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Coronavirus Research Ideas for EAs, published by Peter Wildeford on the effective altruism forum.
COVID-19 is a tragedy with more everyday social implications in the developed world than anything since World War II. Many EAs are wondering what, if anything, to do about COVID to help the world. To try to investigate further, I am helping articulate possible research ideas for further discussion and consideration.
The kind of research we need to do in this situation is very different from the kind of research EA is used to doing well. We normally spend several months carefully researching a single topic that doesn’t change very much. With COVID, everything about this is reversed—the situation currently requires us to rapidly get up to speed and produce research in a matter of days in a situation that is rapidly changing. There is an exponentially growing “speed premium”—much more than we’ve ever seen. As such, please note that we have waived our normal review and quality check standards to get this out ASAP and there may be considerable mistakes in this article.
On the other hand, I do urge some degree of caution and humility. Let’s not all collectively lose our minds. We should worry at least some about armchair epidemiology from non-experts (though also see this) and properly recognize what our skills are and aren’t, where we can contribute and where we shouldn’t. We should also be careful that research done at breakneck speed is more likely to be wrong.
I'm a bit worried that many people will want to work on this topic just so they don't feel helpless in the face of the pandemic, or because there's a lot of attention being paid to it now, or because it feels high-status and urgent, or for many other reasons unconnected to EA-related impact. It can be really tough to see your community, friends, family, and self hurting and not feel like there's much you can do. However, the work EA was doing before this pandemic still remains important now. If you can contribute to the anti-COVID effort, that is great, but it is also fine (sometimes even preferable!) to continue to research what you were researching before.
Naturally, these research questions were put together rapidly and may continue to evolve rapidly. I numbered the questions to make them easier to reference and discuss. They’re grouped by topic. Note that the numbering system may get a bit weird as I add new questions without wanting to renumber all the other questions. I am trying to keep numbers stable so they can be referred to as shorthand.
Questions vary a lot in their urgency, importance, tractability, and neglectedness. If you’re already up to speed, I think questions 1, 3, 4, 14, 15, 19, 21, 22 and 33 are particularly important and urgent to look into right now.
I’m not sure how best to coordinate around these and other questions... the question of coordination itself should be the subject of active research (see question 3). For now, feel free to comment here, or centralize on LessWrong and the “Effective Altruism Coronavirus Discussion” FB group.
If you intend to dedicate significant effort to any of these questions, it might also be worth you joining our cross-org Slack group—comment here or email me (peter@rethinkpriorities.org) to be added.
I do think we should try to communicate early and often about our progress and aim to produce results quickly and share with others.
Note that this isn’t the only compilation of coronavirus research questions. I also know of LessWrong’s agenda and this list of questions.
I will try to keep this all up to date as rapidly as I can.
Meta-Research
1.) Just how bad are things right now? How bad might we expect it to get? What is the current state of play and what are various plausible scenarios forward? [PRIORITY]
The COVID-19 pandemic is a rapidly escalat...
|
Dec 12, 2021 |
Introducing Animal Advocacy Africa by AnimalAdvocacyAfrica
11:26
welcome to the nonlinear library, where we use text-to-speech software to convert the best writing from the rationalist and ea communities into audio.
this is: Introducing Animal Advocacy Africa, published by AnimalAdvocacyAfrica on the effective altruism forum.
We’re excited to introduce a new EA project: Animal Advocacy Africa!
Animal Advocacy Africa (AAA) aims to develop a collaborative and effective animal advocacy movement in Africa. We plan to do this by engaging organisations and individual advocates within farmed animal advocacy in Africa and using this engagement to seek cost-effective opportunities to help animals. We aim to achieve this via a two-stage process:
Six-month research phase: Identifying which barriers a...
|