EA - EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report by Tereza Flidrova
17:02
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report, published by Tereza Flidrova on June 6, 2023 on The Effective Altruism Forum.
TL;DR
In this post, I will share the work I have done on the topic of civilisational shelters (1), (2), over the last year as an architecture master's student. I will share my dissertation on improving the social dynamics of confined spaces, including a practical design guide that can be used to design new or evaluate and improve existing confined spaces. I will also share the Shelters Precedents Report Draft I worked on last spring.
Key links from this post include:
My dissertation in pdf or flipbook formats
Link to the Wellbeing Worksheet, an interactive design guide proposed in my dissertation
Video summarising the research and findings (especially useful if you want to learn about my design proposal and the design guide)
Link to the Shelters Precedents Report Draft
Outline
Since last spring, I have explored ways to get involved in EA with my skills as an architect. So far, I wrote this and this article about my ideas and journey of becoming the "EA Architect", and have also started to help anyone with an architectural or planning background get involved through the EA Architects and Planners group. One of the key areas I got involved in was civilisational shelters. This summer, I am going to Zambia to intern with the Charter Cities Institute.
This post has two parts:
Part 1: My architectural research-led dissertation on "Improving the Social Dynamics of Confined Spaces Located in Extreme Environments";
Part 2: Sharing the Shelters Precedents Report Draft I developed last spring and so far only shared internally.
Part 1: Improving the Social Dynamics of Confined Spaces Located in Extreme Environments
After co-organising the SHELTER Weekend last summer (see this post by Janne for a summary of what has been discussed), as well as studying various precedents and talking to many experts, I concluded that the best way I can contribute to the shelters work is by understanding what influences the social dynamics of very confined spaces. Hence, I chose this as my master's thesis at Oxford Brookes.
Why I did it
Global catastrophes, such as nuclear wars, pandemics, asteroid collisions or biological risks, threaten the very existence of mankind (Beckstead, 2015). These challenges have caused people to consider distant locations such as polar regions, deep sea, outer space, and even underground facilities as potential locations to seek safety during such crises (Beckstead, 2015; Jebari, 2015). However, living in confined spaces for prolonged periods brings prominent social challenges that might prevent their long-term success (Jebari, 2015). To ensure the successful habitation of confined spaces, special attention needs to be given to their design, allowing humans to survive and thrive long-term.
While there is existing research on the design of specific confined spaces, like the design of research stations in polar regions (Bannova, 2014; Palinkas, 2003), space stations (Basner, Dinges, et al., 2014; Harrison et al., 1985), prisons (Karthaus et al., 2019; Lily Bernheimer, Rachel O'Brien, Richard Barnes, 2017), biospheres testing space habitation (Nelson et al., 1994; Zabel et al., 1999) or nuclear bunkers (Graff, 2017; NPR, 2011), there seems to be a lack of a comprehensive architectural framework that can be utilised by designers of confined spaces in extreme environments to help improve their liveability. This is despite the fact there has been much research on the impacts of the physical environment (Klitzman and Stellman, 1989), including staying indoors (Rashid and Zimring, 2008), thermal comfort (Levin, 1995), the impact of light (Basner, Babisch, et al., 2014) and noise (Levin, 1995) on ...
Jun 06, 2023
EA - Transformative AGI by 2043 is <1% likely by Ted Sanders
10:10
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transformative AGI by 2043 is <1% likely, published by Ted Sanders on June 6, 2023 on The Effective Altruism Forum.
Abstract
The linked paper is our submission to the Open Philanthropy AI Worldviews Contest. In it, we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%.
Specifically, we argue:
The bar is high: AGI as defined by the contest (something like AI that can perform nearly all valuable tasks at human cost or less), which we will call transformative AGI, is a much higher bar than merely massive progress in AI, or even the unambiguous attainment of expensive superhuman AGI or cheap but uneven AGI.
Many steps are needed: The probability of transformative AGI by 2043 can be decomposed as the joint probability of a number of necessary steps, which we group into categories of software, hardware, and sociopolitical factors.
No step is guaranteed: For each step, we estimate a probability of success by 2043, conditional on prior steps being achieved. Many steps are quite constrained by the short timeline, and our estimates range from 16% to 95%.
Therefore, the odds are low: Multiplying the cascading conditional probabilities together, we estimate that transformative AGI by 2043 is 0.4% likely. Reaching >10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.
Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.
Executive summary
For AGI to do most human work for <$25/hr by 2043, many things must happen.
We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:
Event (forecast by 2043 or TAGI, conditional on prior steps):
We invent algorithms for transformative AGI: 60%
We invent a way for AGIs to learn faster than humans: 40%
AGI inference costs drop below $25/hr (per human equivalent): 16%
We invent and scale cheap, quality robots: 60%
We massively scale production of chips and power: 46%
We avoid derailment by human regulation: 70%
We avoid derailment by AI-caused delay: 90%
We avoid derailment from wars (e.g., China invades Taiwan): 70%
We avoid derailment from pandemics: 90%
We avoid derailment from severe depressions: 95%
Joint odds: 0.4%
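Reading the table as a chain of cascading conditional probabilities, the joint estimate is simply their product. As a sanity check on the arithmetic, here is a minimal illustrative sketch in Python (not the authors' code) that multiplies the ten forecasts above and recovers roughly 0.4%.

```python
# Minimal sketch: multiply the ten stated conditional probabilities
# from the table above and confirm the joint odds are ~0.4%.
import math

steps = {
    "Invent algorithms for transformative AGI": 0.60,
    "Invent a way for AGIs to learn faster than humans": 0.40,
    "AGI inference costs drop below $25/hr (per human equivalent)": 0.16,
    "Invent and scale cheap, quality robots": 0.60,
    "Massively scale production of chips and power": 0.46,
    "Avoid derailment by human regulation": 0.70,
    "Avoid derailment by AI-caused delay": 0.90,
    "Avoid derailment from wars": 0.70,
    "Avoid derailment from pandemics": 0.90,
    "Avoid derailment from severe depressions": 0.95,
}

# Product of the cascading conditional probabilities.
joint = math.prod(steps.values())
print(f"Joint probability of transformative AGI by 2043: {joint:.2%}")  # ~0.40%
```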
If you think our estimates are pessimistic, feel free to substitute your own here. You'll find it difficult to arrive at odds above 10%.
Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.
So a good skeptic must ask: Is our framework fair?
There are two possible errors to beware of:
Did we neglect possible parallel paths to transformative AGI?
Did we hew toward unconditional probabilities rather than fully conditional probabilities?
We believe we are innocent of both sins.
Regarding failing to model parallel disjunctive paths:
We have chosen generic steps that don't make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technology
One opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this belief
Regarding failing to really grapple with conditional probabilities:
Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will:
Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)
Have invented very cheap and effi...
Jun 06, 2023
EA - Cause area report: Antimicrobial Resistance by Akhil
09:22
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause area report: Antimicrobial Resistance, published by Akhil on June 6, 2023 on The Effective Altruism Forum.
This post is a summary of some of my work as a field strategy consultant at Schmidt Futures' Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed here.
AMR is a global health priority
Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and quality of human life since penicillin and other antimicrobials were first developed in the early and mid 20th century.
Antimicrobial resistance, or AMR, occurs when bacteria, viruses, fungi, and parasites evolve resistance to antimicrobials. As a result, antimicrobial medicine such as antibiotics and antifungals become ineffective and unable to fight infections in the body.
AMR is responsible for millions of deaths each year, more than HIV or malaria (ARC 2022). The AMR Visualisation Tool, produced by Oxford University and IHME, visualises IHME data which finds that 1.27 million deaths per year are attributable to bacterial resistance and 4.95 million deaths per year are associated with bacterial resistance, as shown below.
Figure 1: Composition of global bacterial infection related deaths, from AMR Visualisation Tool
This burden does not include that of non-bacterial infections, such as those caused by fungi or other pathogens, which might increase this burden several times over. For instance, every year, there are 150 million cases of severe fungal infections, which result in 1.7 million deaths annually (Kainz et al 2020). Unlike for bacterial infections, we do not have good estimates of how many of those are associated with or attributable to resistance.
Concerningly, AMR is escalating at an increasing rate (USA data, Swiss data, Mhondoro et al 2019, Indian Council of Medical Research 2021). One prominent report estimates that AMR will result in 10 million deaths every year by 2050 (Jim O'Neill report, 2014).
Even more concerningly, we may be at a critical juncture, where if we do not drastically change our current trajectory, we could run out of effective antimicrobials. This would mean that our ability to perform surgery, give cancer patients chemotherapy, or manage chronic diseases like cystic fibrosis and asthma, all of which hinge on the effectiveness of antimicrobials, would be significantly impacted. The very foundations of modern medicine could be threatened; the WHO has warned that we could return to a pre-antibiotic age, which would result in the average human life expectancy going down from 70 to 50 (WHO, 2015).
Beyond the health effects, there is a profound economic cost to AMR: for patients, healthcare systems and the economy. In the USA, the CDC estimates that the cost of AMR is $55 billion every year (Dadgostar 2019). Studies show that as a result of AMR, the annual global GDP could decrease by 1% and there would be a 5-7% loss in low and middle income countries by 2050 (Dadgostar 2019). In conjunction, the World Bank states that AMR might limit gains in poverty reduction, push more people into extreme poverty and have significant labour market effects.
The importance of AMR is recognised by major governments and multilateral organisations. The WHO calls AMR one of the greatest public health threats facing humanity, the UK government lists AMR on its National Risk Register, and both GAVI and the United Nations Foundation term AMR a "silent pandemic".
AMR is a neglected field
Although there has been some global response to AMR, it has not been proportional to its threat to healthcare systems and the economy. Despite many governments developing National Action Plans (NAPs) in response to the WHO call for the same in ...
Jun 06, 2023
EA - ALTER Israel - 2023 Mid-Year Update by Davidmanheim
05:54
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: ALTER Israel - 2023 Mid-Year Update, published by Davidmanheim on June 6, 2023 on The Effective Altruism Forum.
ALTER is an organization in Israel that works on several EA priority areas and causes. This semiannual update is intended to inform the community of what we have been doing, and provide a touchpoint for those interested in engaging with us. Since the last update at the beginning of 2023, we have made progress on a number of areas, and have ambitious ideas for future projects.
Progress to Date
Since its founding, ALTER has started and run a number of projects.
Organized and managed an AI safety conference in Israel, AISIC 2022 hosted at the Technion, bringing in several international speakers including Stuart Russell, to highlight AI Safety focused on existential-risk and global-catastrophic-risk, to researchers and academics in Israel. This was successful in raising the profile of AI safety here in Israel, and in helping identify prospective collaborators and researchers.
Support for Vanessa Kosoy's Learning-Theoretic Safety Agenda, including an ongoing prize competition, and work to hire researchers working in the area.
Worked with Israel's foreign ministry, academics here in Israel, and various delegations to and organizations at the Biological Weapons Convention to find avenues to promote Israel's participation.
Launched our project to get the Israeli government to iodize salt, to mitigate or eliminate the current iodine deficiency that we estimate causes an expected 4-IQ point loss to the median child born in Israel today.
Worked on mapping the current state of metagenomic sequencing usage in Israel, in order to prepare for a potential use of widespread metagenomic monitoring for detecting novel pathogens.
Organized and hosted a closed Q&A with Eliezer Yudkowsky while he was visiting Israel, for 20 people in Israel working on or interested in contributing to AI safety. This was followed by a larger LessWrong meetup with additional attendees.
Current and Ongoing Work
We have a number of ongoing projects related to both biorisk and AI safety.
Fellowship program. We have started this program to support researchers interested in developing research agendas relevant to AI safety. Ram Rahum is our inaugural funded AI safety fellow, who was found via our AI Safety conference. Since then, he has co-organized a conference in London on rebellion and disobedience in AI jointly with academics in Israel, the US, and the UK. As a fellow, he is also continuing to work with academics in Israel as well as a number of researchers at DeepMind on understanding strategic deception and multi-agent games and dynamics for ML systems. His research home is here and monthly updates are here. Rona Tobolsky is a policy fellow, and is also working with us on policy, largely focused on biorisk and iodization.
Support for Vanessa Kosoy's Learning-Theoretic AI Safety Agenda. To replace the former FTX funding, we have been promised funding from an EA donor lottery to fund a researcher working on the learning-theoretic safety agenda. We are working on recruiting a new researcher, and are excited about expanding this. Relatedly, we are helping support a singular learning theory workshop.
Biosecurity. David Manheim and Rona Tobolsky attended the Biological Weapons Convention - Ninth Review Conference, and have continued looking at ways to push for greater participation by Israel, which is not currently a member. David will also be attending a UNIDIR conference on biorisk in July. We are also continuing to explore additional pathways for Israel to contribute to global pandemic preparedness, especially around PPE and metagenomic biosurveillance.
AI field building. Alongside other work to build AI-safety work in Israel, ALTER helped initiate a round of the AGI Safety Fundamentals 101 program...
Jun 06, 2023
EA - National EA groups shouldn't focus on city groups by DavidNash
06:17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: National EA groups shouldn't focus on city groups, published by DavidNash on June 5, 2023 on The Effective Altruism Forum.
Summary
National EA groups have a variety of strategies available to them, but many seem to focus on supporting local city groups as the main activity with less consideration of other interventions. I think this leads to neglecting more impactful activities for national groups. Potentially this is because they are following more established groups/resources where city groups are given as a default example.
Most people interested in EA are not joining local EA groups, and most people who could get more involved in EA don't necessarily want to do that via joining a local group first
From EA London attendance data for 2016-18, out of ~1300 people roughly 75% attended just 1 event, and only 10% attended 4 or more, which suggests that most weren't aiming to become regular members
There is an unseen majority of people who know about EA and want to have more impact who are neglected by a city-first strategy
EA should attract more people than those also looking for community
Community is still important, but should be seen as additional rather than a main focus
Community can mean a lot of different things but I'm defining community in this post as a more densely connected subset of a network based around a location
In practice this means a community is more likely to involve social gatherings, daily/weekly in person touchpoints
A network will involve conferences, mentorship, newsletters/social media, monthly/yearly touchpoints
There is probably value to having some city organisers if there is a critical mass of people interested in EA and the city has strong comparative advantages
Alternative strategies could include cause specific field building, career advising, supporting professional networks nationally, organisation incubation, translation
The Unseen Majority
When most people hear about EA for the first time, it's usually via an online resource (80,000 Hours, GWWC, podcast) or word of mouth. The message they receive is that EA cares about having more impact and that EA as a movement is trying to help people have more impact.
This can contrast to the experience of going along to a local group (which is regularly suggested as a good way to get more involved with EA), and experiencing the main message as "join our community", with less focus on helping that person have impact. This could lead to people who are focused on generating a lot of impact bouncing away from EA. Anecdotally I have heard people say that they don't find that much value from attending local group events but are still interested in EA and focus on having an impact in their career.
For the subset of people who are looking for community, local groups can be great. But for a lot of people who do not have that preference/have other life circumstances, this isn't what they are looking for. People already have communities they are a part of (family, friends, professional, hobbies) and often don't have time for many more. Anecdotally from conversations with other organisers the people most likely to join are those looking for a community - students, recent graduates or people who are new to the city.
This can be self-reinforcing as the people who are likely to keep on attending meetups are the ones with spare time and lacking community. We often use neglectedness when choosing cause areas, leading to support of unseen majorities - people in poorer parts of the world, animals and future beings. But when it comes to movement building there is less thought paid to those who aren't visible. A lot of strategies I have seen are about increasing attendance or engagement at events rather than providing value to people who may not be as interested in attending lots of events each year but still wan...
Jun 05, 2023
EA - I made a news site based on prediction markets by vandemonian
06:56
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I made a news site based on prediction markets, published by vandemonian on June 5, 2023 on The Effective Altruism Forum.
Introduction
"News through prediction markets"
The Base Rate Times is a nascent news site that incorporates prediction markets prominently into its coverage.
Please see current iteration: www.baseratetimes.com
Twitter: www.twitter.com/base_rate_times
What problem does it solve?
Forecasts are underutilized by the media
Prediction markets are more accurate than pundits, yet the media has made limited use of their forecasts. This is a big problem: one of the most rigorous information sources is being omitted from public discourse!
The Base Rate Times creates prediction markets content, substituting for inferior news sources. This improves the epistemics of its audience.
Forecasts are dispersed, generally inconvenient to consume
Prediction markets are dispersed among many different platforms, fragmenting the information forecasters provide. For example, different platforms ask similar questions in different ways. Furthermore, platforms' UX is orientated towards forecasters, not information consumers. Overall, trying to use prediction markets as a "news replacement" is cumbersome.
There is value in aggregating and curating forecasts from various platforms. We need engaging ways of sharing prediction marketsâ insights. The Base Rate Times aims to make prediction markets easily digestible to the general public.
How does it work?
News media (emotive narrative) vs Base Rate Times (actionable odds)
For example, this is a real headline from a reputable newspaper: "Taiwan braces for China's fury over Pelosi visit". Emotive and incendiary, it does not help you form an accurate model of the situation.
By contrast, The Base Rate Times: "China-Taiwan conflict risk 14%, up 2x from 7% after Pelosi visit". That's an actionable insight. It can inform your decision on whether to stay in Taiwan or to flee, for example.
News aggregation, summarizing prediction markets
Naturally, the probabilities in the example above come from prediction markets. The Base Rate Times presents what prediction markets are telling us about news in an engaging way.
Stories that shift market odds are highlighted. And if a seemingly important story doesn't shift market odds, that also tells you something.
On The Base Rate Times, right now you can see the latest odds on:
Putin staying in power
Russian territorial gains in Ukraine
Escalation risk of NATO involvement
and more...
By glancing at a few charts, you can form a more accurate model (in less time) of Russia-Ukraine than reading countless narrative-based news stories.
Inspiration
A key inspiration was Scott Alexander's Prediction Market FAQ:
I recently had to read many articles on Elon Musk's takeover of Twitter, which all repeated that "rumors said" Twitter was about to go down because of his mass firing. Meanwhile, there were several prediction markets on whether this would happen, and they were all around 40%. If some journalist had thought to check the prediction markets and cite them in their article, they could have not only provided more value (a clear percent chance instead of just "there are some rumors saying this"), but also been right when everyone else was wrong.
Also Scott's 'Mantic Monday' posts and Zvi's blog.
This simple chart by @ClayGraubard was another inspiration. Wanted something like this, but for all major news stories. Couldn't find it, so making it myself. (Clay is making geopolitics videos and podcasts now, check it out.)
Goals
Like 538, but for prediction markets
The Base Rate Times is a bet that forecasts can be popularized, as opinion polls have been, and improve society's models of the world.
Goal: Longshot probability of going mainstream, e.g. like 538.
If highly successful in scaling, we'd be effectively run...
Jun 05, 2023
EA - Abolishing factory farming in Switzerland: Postmortem by naoki
04:17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abolishing factory farming in Switzerland: Postmortem, published by naoki on June 5, 2023 on The Effective Altruism Forum.
The Initiative to abolish factory farming was a nationwide ballot in Switzerland, instigated by Sentience Politics. The contents of the postmortem below were written by Philipp Ryf, then co-president at Sentience Politics and co-lead of the campaign, for the 2022 annual report of Sentience Politics. The author of this post is Naoki Peter, co-president at Sentience Politics.
The initiative at a glance
The initiative demanded the abolition of factory farming in Switzerland, granting a maximum transitional period of 25 years.
It aimed at anchoring stricter animal welfare guidelines as a new minimum standard in the Swiss constitution. These standards would have granted cows, pigs and chickens regular access to the outdoors and considerably more space.
On behalf of Swiss farmers, the initiative included import regulations that take account of the new Swiss standards.
The initiative was rejected by a 62.9% majority of the voters in September 2022.
For more information, see the initiative text (in German, French and Italian) and the ballot results.
Postmortem
The living conditions of animals in agriculture have never been discussed so widely and publicly. Hundreds of thousands of people have engaged with the initiative beyond their usual scope, asking themselves the question: "What does my consumption mean for animals, people, and the environment?" The voting result has proven that the vision of a dignified, location-appropriate agriculture mobilises and moves people far beyond the base of the supporting parties.
For the first time a broad alliance of Swiss animal protection, agricultural, and environmental organisations joined forces and stood up to the agricultural lobby to advocate for an animal welfare cause.
Considering the high cost and resources required to launch an initiative, we aimed at including as many demands as possible. However, we now question whether a shorter, more targeted package of demands would have garnered more support. We will consider this for any future initiatives.
In 2018 Sentience Politics launched the Initiative to abolish factory farming single-handedly. We should have prioritised alliance-building earlier. This would have enabled us to tap into the resources of more organisations.
The opposition commanded resources which significantly exceeded our own. In order to have a realistic chance at the ballot box, any future initiative would likely require a substantially larger budget and campaign team.
In order to inspire the majority of the population for a cause, it is essential to highlight the urgency of the matter and exclude any room for doubt. We could have done this better. For future campaigns, we must ensure that the actual conditions in animal farming are made transparent sooner. Possibilities include a greater focus on highlighting scandals, publishing our own investigations, and doing more to attract media exposure.
Regional volunteer networks are very challenging to build. Here, we could have involved organisations that already have structures in place to manage regional leafleting, stand campaigns and billboards more effectively and at an earlier stage.
We built up a large network of influential personalities from society. However, we lacked credible ambassadors from agriculture who could more readily be trusted as experts in their field until a late stage in the campaign. The clearest take-home message from the follow-up surveys was that policy change in agriculture does not work without the producers. In the future, we need to involve stakeholders within agriculture in our campaigns at an earlier stage.
Alliance against factory farming
The initiative to abolish factory farming was not successful at the ballo...
Jun 05, 2023
EA - A Double Feature on The Extropians by Maxwell Tabarrok
01:12
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Double Feature on The Extropians, published by Maxwell Tabarrok on June 4, 2023 on The Effective Altruism Forum.
Link-post for two pieces I just wrote on the Extropians.
The Extropians were an online group of techno-optimist transhumanist libertarians active in the 90s who influence a lot of online intellectual culture today, especially in EA and Rationalism. Prominent members include Eliezer Yudkowsky, Nick Bostrom, Robin Hanson, Eric Drexler, Marvin Minsky and all three of the likely candidates for Satoshi Nakamoto (Hal Finney, Wei Dai, and Nick Szabo).
The first piece is a deep dive into the archived Extropian forum. It was super fun to write and I was constantly surprised by how much of the modern discourse on AI and existential risk had already been covered in 1996.
The second piece is a retrospective on predictions made by Extropians in 1995. Eric Drexler, Nick Szabo and 5 other Extropians give their best estimates for when we'll have indefinite biological lifespans and reproducing asteroid eaters.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 04, 2023
EA - Podcast Interview with David Thorstad on Existential Risk, The Time of Perils, and Billionaire Philanthropy by Nick Anyos
01:37
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Podcast Interview with David Thorstad on Existential Risk, The Time of Perils, and Billionaire Philanthropy, published by Nick Anyos on June 4, 2023 on The Effective Altruism Forum.
I have released a new episode of my podcast, EA Critiques, where I interview David Thorstad. David is a researcher at the Global Priorities Institute and also writes about EA on his blog, Reflective Altruism.
In the interview we discuss three of his blog post series:
Existential risk pessimism and the time of perils: Based on his academic paper of the same name, David argues that there is a surprising tension between the idea that there is a high probability of extinction (existential risk pessimism) and the idea that the expected value of the future, conditional on no existential catastrophe this century, is astronomically large.
Exaggerating the risks: David argues that the probability of an existential catastrophe from any source is much lower than many EAs believe. At the time of recording, the series only covered risks from climate change, but future posts will make the same case for nuclear war, pandemics, and AI.
Billionaire philanthropy: Finally, we talk about the potential issues with billionaires using philanthropy to have an outsized influence, and how both democratic societies and the EA movement should respond.
As always, I would love feedback, on this episode or the podcast in general, and guest suggestions. You can write a comment here, send me a message, or use this anonymous feedback form.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 04, 2023
EA - EA and Longtermism: not a crux for saving the world by ClaireZabel
15:58
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA and Longtermism: not a crux for saving the world, published by ClaireZabel on June 3, 2023 on The Effective Altruism Forum.
This is partly based on my experiences working as a Program Officer leading Open Phil's Longtermist EA Community Growth team, but it's a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.
Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I more strongly hold most of the conclusions than I did when I originally wrote it.
Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction ("we" or "us" in this post, though I know it won't apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the "most important century" hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.
A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use "EAs" or "longtermists" as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I'm concerned that this is a reason we're failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people who are doing full-time work on existential risk reduction on AI and biosecurity who have been drawn in by just the "existential risk reduction" frame [this seemed more true in 2022 than 2023].
This is in the vein of Neel Nanda's "Simplify EA Pitches to 'Holy Shit, X-Risk'" and Scott Alexander's "Long-termism vs. Existential Risk", but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.
EA and longtermism: not a crux for doing the most important work
Right now, my priority in my professional life is helping humanity navigate the imminent creation of potential transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that's likely the most important thing anyone can do these days. And I don't think EA or longtermism is a crux for this prioritization anymore.
A lot of us (EAs who currently prioritize x-risk reduction) were "EA-first": we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about the importance of the far future and potential technologies and other changes that could influence it.
Some of us were "longtermists-second"; we prioritized making the far future as good as possible regardless of whether we thought we were in an exceptional position to do this, and that existential risk reduction would be one of the core activities for doing it.
For most of the last decade, I think that most of us have emphasized EA ideas when trying to discuss X-risk with people outside our circles. And locally, this worked pretty well; some people (a whole bunch, actually) found these ideas compelling and ended up prioritizing similarly. I think t...
Jun 03, 2023
EA - Large Study Examining the Effects of Cash Transfer Programs on Population-Level Mortality Rates by nshaff3r
01:44
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Large Study Examining the Effects of Cash Transfer Programs on Population-Level Mortality Rates, published by nshaff3r on June 3, 2023 on The Effective Altruism Forum.
The study was published in Nature on May 31st, 2023.
Key Points:
Cash transfer programs had the following observed effects:
Deaths among women fell by 20%
Largely driven by decreases in pregnancy-related deaths
Deaths among children less than 5 fell by 8%
No association between cash transfer programs and mortality among men
Temporal analyses suggest reduction in mortality among men over time, and specific subgroup analysis (rather than population wide) found a 14% mortality reduction among men aged 18-40
37 low and middle income countries studied, population wide
4,325,484 in the adult dataset
2,867,940 in the child dataset
No apparent differences between the effects of unconditional and conditional cash transfers
Factors that lead to larger reductions in mortality:
Programs with higher coverage and larger cash transfer amounts
Countries with higher regulatory quality ratings
Countries with lower health expenditures per capita
Stronger association in sub-Saharan Africa relative to outside sub-Saharan Africa
Citation: Richterman, A., Millien, C., Bair, E.F. et al. The effects of cash transfers on adult and child mortality in low- and middle-income countries. Nature (2023).
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 03, 2023
EA - Applications open for AI Safety Fundamentals: Governance Course by Jamie Bernardi
04:03
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Applications open for AI Safety Fundamentals: Governance Course, published by Jamie Bernardi on June 2, 2023 on The Effective Altruism Forum.
We are looking for people who currently work or might want to work in AI Governance and policy. If you have networks in or outside of EA who might be interested, we would appreciate you sharing this course with them.
Full announcement
The AI Safety Fundamentals (AISF): Governance Course is designed to introduce the key ideas in AI Governance for reducing extreme risks from future AI systems.
Alongside the course, you will be joining our AI Safety Fundamentals Community. The AISF Community is a space to discuss AI Safety with others who have the relevant skills and background to contribute to AI Governance, whilst growing your network and awareness of opportunities.
The last time we ran the AI Governance course was in January 2022, then under Effective Altruism Cambridge. The course is now run by BlueDot Impact, founded by members of the same team (and now based in London).
We are excited to relaunch the course now, when AI Governance is a focal point for the media and political figures. We feel this is a particularly important time to support high-fidelity discussion of ideas to govern the future of AI.
Note we have also renamed the website from AGI Safety Fundamentals to AI Safety Fundamentals. We'll release another post within the next week to explain our reasoning, and we'll respond to any discussion about the rebrand there.
Time commitment
The course will run for 12 weeks from July-September 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week project.
The time commitment is around 5 hours per week, so you can engage with the course and community alongside full-time work or study. The split will be 2-3 hours of preparatory work, and a 1.5-2 hour live session.
Course structure
The course is 12 weeks long and takes around 5 hours a week to participate in.
For the first 8 weeks, participants will work through 2-3 hours of structured content to prepare for a weekly, facilitated small discussion group of 1.5-2 hours. Participants will be grouped depending on their current career stage and policy expertise. The facilitator will be knowledgeable about AI Governance, and can help to answer participants' questions and point them to further resources.
The final 4 weeks are project weeks. Participants can use this time to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career.
The course content is designed with input from a wide range of the community thinking about the governance of advanced AI. The curriculum will be updated before the course launches in mid-July.
Target audience
We think this course will particularly be able to help participants who:
Have policy experience, and are keen to apply their skills to reducing risk from AI.
Have a technical background, and want to learn about how they can use their skills to contribute to AI Governance.
Are early in their career or a student who is interested in exploring a future career in policy to reduce risk from advanced AI.
We expect at least 25% of the participants will not fit any of these descriptions. There are many skills, backgrounds and approaches to AI Governance we haven't captured here, and we will consider all applications accordingly.
Within the course, participants will be grouped with others based on their existing policy expertise and existing familiarity with AI safety. This means that admissions can be broad, and discussions will be pitched at the correct level for all participants.
Apply now!
If you would like to be considered for the next round of the courses, starting in July 20...
Jun 02, 2023
EA - Lincoln Quirk has joined the EV UK board by Howie Lempel
02:01
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lincoln Quirk has joined the EV UK board, published by Howie Lempel on June 2, 2023 on The Effective Altruism Forum.
I'm excited to share that Lincoln Quirk has joined the board of Effective Ventures Foundation (UK). This follows the addition of Zach Robinson and Eli Rose to the EV US board about two months ago.
Lincoln is a co-founder and Head of Product at Wave, a technology company that aims to make finance more accessible in Africa through digital infrastructure. Wave spun off from SendWave, a remittance platform which Lincoln also cofounded and which was acquired in 2021. He has maintained a deep interest in effective altruism and been a part of the EA community for over a decade.
In his own words, "I'm excited to join the EV UK board. I've been trying to help the world and have called myself part of the EA community for 10+ years; EV is playing one of the most important roles in this community and correspondingly holds a lot of responsibility. I'm looking forward to helping figure out the best ways EV can contribute to making the world a better place through enabling EA community and EA projects."
The EV UK trustees and I are excited to have Lincoln join and look forward to working with him. Lincoln impressed us during the interview process with his strategic insight and dedication to the role. I also think his experience founding and growing Wave and Sendwave will be a particularly useful perspective to add to the board.
We are continuing to look for candidates to add to the boards of both EV UK and EV US, especially candidates with diverse backgrounds and experiences, and who have experience in accounting, law, finance, risk management or other management at large organizations. We recruited Lincoln via directly reaching out to him, and plan to continue to source candidates this way and via our open call. If you are interested or know of a great candidate, the linked forum post includes instructions for applying or nominating someone. Applications and nominations will be accepted until June 4th.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 02, 2023
EA - Things I Learned by Spending Five Thousand Hours In Non-EA Charities by jenn
12:53
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Things I Learned by Spending Five Thousand Hours In Non-EA Charities, published by jenn on June 2, 2023 on The Effective Altruism Forum.
From late 2020 to last month, I worked at grassroots-level non-profits in operational roles. Over that time, I've seen surprisingly effective deployments of strategies that were counter-intuitive to my EA and rationalist sensibilities.
I spent 6 months being the on-shift operations manager at one of the five largest food banks in Toronto (~50 staff/volunteers), and 2 years doing logistics work at Samaritans (fake name), a long-lived charity that was so multi-armed that it was basically operating as a supplementary social services department for the city it was in (~200 staff and 200 volunteers). Both orgs were well-run, though both dealt with the traditional non-profit double whammy of being underfunded and understaffed.
Neither place was super open to many EA concepts (explicit cost-benefit analyses, the ITN framework, geographic impartiality, the general sense that talent was the constraining factor instead of money, etc). Samaritans in particular is a spectacular non-profit, despite(?) having basically anti-EA philosophies, such as:
Being very localist; Samaritans was established to help residents of the city it was founded in, and is now very specialized in doing that.
Adherence to faith; the philosophy of The Catholic Worker Movement continues to inform the operating choices of Samaritans to this day.
A big streak of techno-pessimism; technology is first and foremost seen as a source of exploitation and alienation, and adopted only with great reluctance when necessary.
Not treating money as fungible. The majority of funding came from grants or donations tied to specific projects or outcomes. (This is a system that the vast majority of nonprofits operate in.)
Once early on I gently pushed them towards applying to some EA grants for some of their more EA-aligned work, and they were immediately turned off by the general vibes of EA upon visiting some of its websites. I think the term "borg-like" was used.
Over this post, I'll be largely focusing on Samaritans as I've worked there longer and in a more central role, and it's also a more interesting case study due to its stronger anti-EA sentiment.
Things I Learned
Long Term Reputation is Priceless
Non-Profits Shouldn't Be Islands
Slack is Incredibly Powerful
Hospitality is Pretty Important
For each learning, I have a section for sketches for EA integration. I hesitate to call them anything as strong as recommendations, because the point is to give more concrete examples of what it could look like integrated in an EA framework, rather than saying that it's the correct way forward.
1. Long Term Reputation is Priceless
Institutional trust unlocks a stupid amount of value, and you can't buy it with money. Lots of resources (amenity rentals; the mayor's endorsement; business services; pro-bono and monetary donations) are priced/offered based on tail risk. If you can establish that you're not a risk by having a longstanding, unblemished reputation, costs go way down for you, and opportunities way up. This is the world that Samaritans now operate in.
Samaritans had a much better, easier time at city hall compared to newer organizations, because of a decades-long productive relationship where we were really helpful with issues surrounding unemployment and homelessness. Permits get back to us really fast, applications get waved through with tedious steps bypassed, and fees are frequently waived. And it made sense that this was happening! Cities also deal with budget and staffing issues, why waste more time and effort than necessary on someone who you know knows the proper procedure and will ethically follow it to the letter?
It's not just city hall. A few years ago, a local church offered up th...
Jun 02, 2023
EA - An Earn to Learn Pledge by Ben West
02:10
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: An Earn to Learn Pledge, published by Ben West on June 2, 2023 on The Effective Altruism Forum.
tl;dr: create an equivalent of GWWC for building career capital. We've thought about this idea for ~15 minutes and are unlikely to do something ourselves, but wanted to share it because we think it might be good.
Many people's greatest path to impact is through changing their career.
But for a lot of these people, particularly those earlier in their career, it doesn't make sense to immediately apply to impact-oriented jobs. Instead, it's better for them to build career capital at non-impact-oriented workplaces, i.e. "earning to learn".
It would be nice if there was some equivalent of the Giving What We Can pledge for this.
It could involve something like pledging to:
Spend at least one day per year updating your career plan with an eye towards impact
Apply to at least x impact-oriented jobs per year, even if you expect to get rejected
And some sort of dashboard checking people's adherence to this, and nudging them to adhere better
Some potential benefits:
Many people who have vague plans of "earning to learn" just end up drifting away after entering the mainstream workforce; this can help them stay engaged
It might relieve some of the pressure around being rejected from "EA jobs": making clear that Official Fancy EA People endorse career paths beyond "work at one of this small list of organizations" puts less pressure on people who aren't a good fit for one of those small list of organizations
Relatedly, it gives community builders a thing to suggest to a relatively broad set of community members which is robustly good
Next steps:
I think the MVP here requires ~0 technology: come up with the pledge, get feedback on it, and if people are excited throw it into a Google form
It's probably worth reading criticisms of the GWWC pledge (e.g. this) to understand some of the failure modes here and be sure you avoid those
It also requires thinking through some of the risks, e.g. you might not want a fully public pledge since that could hurt people's job prospects
If you are interested in taking on this project, please contact one of us and we can try to help
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 01, 2023
EA - Probably tell your friends when they make big mistakes by Chi
09:27
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably tell your friends when they make big mistakes, published by Chi on June 1, 2023 on The Effective Altruism Forum.
Big mistakes = Doing something that is actively harmful or useless by their own lights and values, i.e. doesn't help them achieve their life goals. (Not: Doing something that isn't in line with your values and goals.)
A lot of people think that others in the EA-ish community are trying to do something impactful but end up doing something harmful or useless. Sometimes they also work on something that they are just not very good at or make other big mistakes. A lot of people never end up telling the other person that they think they are making big mistakes. Sometimes people also just have one particular argument for why the other might do harmful or useless work but not be sure whether it's bad overall. This also often goes unsaid.
I think that's understandable and also bad or at least very costly.
Epistemic status: Speculation/rant. I know of another person who might post something on this topic that is much more rigorous and has actual background research.
Upsides of telling others you think they are making big mistakes, wasting their time, or doing harm:
It's good on a community level because people get information that's useful to decide how to achieve their goals (among them, having impact), so people end up working on less suboptimal things and the community has better impact overall.
It's good on a community level because it pushes towards good intellectual conversations and progress.
I and probably others find it stressful because I can't rely on others telling me if they think I'm doing a bad job, so I have to try to read between the lines. (I find it much less stressful now but when I was more insecure about my competence, I found it really stressful. I think one of my main concerns was others thinking and saying I'm "meh" or "fine" (with an unenthusiastic tone) but only behind my back.)
Note that anxiety works differently for different people though and some people might find the opposite is true for them. See reasons against telling people that you think they are wasting their time or worse.
I and probably others find it pretty upsetting that I can't rely on others being honest with me. It's valuable information and I would like people to act in a way that helps me achieve my stated goals (in this case, doing good), especially if their motivation for not being honest with me is protecting my wellbeing.
That said, I often don't do a great job at this myself and think telling others you think their efforts would be better spent elsewhere also has significant costs, both personal and on a community level.
Downsides of telling others you think they are making big mistakes, wasting their time, or doing harm:
Hearing that somebody thinks you're doing harmful or useless work can be extremely discouraging and can lead people to over-update, especially if they are insecure anyway. (Possibly because people do it so rarely, so the signal can be interpreted as stronger than it's intended.)
At the same time, we often have noisy information about how good another person's work is, especially how good a fit they are or could be.
Criticising someone's work could lead to an awkward relationship with them. They might also get angry at you or start talking badly about you. This is especially costly if you have a friendly and/or a professional relationship.
An increase in people telling each other what they think about each other's work could create or amplify a culture in which everyone constantly feels like they have to orient themselves towards impact all the time and justify their decisions. This could lead to feelings of guilt, shame, judgement, higher risk-aversion, and an over-emphasis on doing things that are mainstream approved.
That said, I thin...
Jun 01, 2023
EA - Global Innovation Fund projects its impact to be 3x GiveWell Top Charities by jh
01:52
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global Innovation Fund projects its impact to be 3x GiveWell Top Charities, published by jh on June 1, 2023 on The Effective Altruism Forum.
The Global Innovation Fund (GIF) is a non-profit, impact-first investment fund headquartered in London that primarily works with mission-aligned development agencies (USAID, SIDA, Global Affairs Canada, UKAID). Through grants, loans and equity investments, they back innovations with the potential for social impact at a large scale, whether these are new technologies, business models, policy practices or behavioural insights.
Recently, they made a bold but little publicized projection in their 2022 Impact Report (page 18): "We project every dollar that GIF has invested to date will be three times as impactful as if that dollar had been spent on long-lasting, insecticide-treated bednets... This is three times higher than the impact per dollar of Givewell's top-rated charities, including distribution of anti-malarial insecticide-treated bednets. By Givewell's estimation, their top charities are 10 times as cost-effective as cash transfers." The report says they have invested $112m since 2015.
This is a short post to highlight GIF's projection to the EA community and to invite comments and reactions.
Here are a few initial points:
It's exciting to see an organization with relatively traditional funders comparing its impact to GiveWell's top charities (as well as cash transfers).
I would want to see more information on how they did their calculations before taking a view on their projection.
In any case, based on my conversations with GIF, and what I've understood about their methodology, I think their projection should be taken seriously. I can see many ways it could be either an overestimate or an underestimate.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
|
Jun 01, 2023 |
EA - Beyond Cost-Effectiveness: Insights for Effective Altruism from Health Economics by TomDrake
01:07
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Beyond Cost-Effectiveness: Insights for Effective Altruism from Health Economics, published by TomDrake on June 1, 2023 on The Effective Altruism Forum.

Hi everyone - partly inspired by attending the recent EA Global London conference a couple of weeks ago, I've written a CGD blog with some thoughts on EA's approach to prioritisation and methods in health economics (specifically Health Technology Assessment). This is a link post, and as CGD staff I have to post on our platform, but since the key target audience is EAs, I'd be delighted to hear thoughts from this community. I'll be sure to monitor the comments section, and perhaps the discussion will feed into future work.

The differences between EA and health econ I highlight include:
1. Approaches to generalising cost-effectiveness evidence
2. Going beyond cost-effectiveness in determining value
3. Deliberative appraisal
4. Institutionalisation of a participatory process

Please click through for the full blog.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
|
Jun 01, 2023 |
EA - Taxing Tobacco: the intervention that got away (happy World No Tobacco Day) by Yelnats T.J.
12:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taxing Tobacco: the intervention that got away (happy World No Tobacco Day), published by Yelnats T.J. on June 1, 2023 on The Effective Altruism Forum.

TL;DR
The death toll of tobacco dwarfs traditional EA global health focuses (e.g. malaria).
Taxing tobacco is the most effective and yet most neglected form of reducing tobacco consumption.
Multiple EA orgs have run cost-effectiveness numbers that would put a tobacco taxation NGO on GiveWell's list (both in expected value and contingent on success), and even at the top in some cases.
Despite this, no enduring tobacco-taxation advocacy organization has emerged from EA (which would be the only advocacy organization in the world exclusively dedicated to tobacco taxation).
Join Joel Tan (founder of CEARCH) and J.T. Stanley, who as incubatees of Charity Entrepreneurship closely examined tobacco taxation, on June 10th (Saturday) at 14:00 UTC for a virtual discussion about the intervention and what's next to make it a reality.

Thanks to Moritz von Knebel for providing feedback on the draft. Yes, I cite WHO a lot; not all WHO citations are the same WHO link, FYI.

Problem space
Tobacco kills over 8 million individuals a year; that's 13x malaria (WHO) (CDC). Another way of framing it: annually, more people are killed by tobacco use than by malaria, HIV, and neonatal deaths combined, twice over. And while the death toll of the latter three has been decreasing over time, deaths from tobacco are increasing. Of the 8 million deaths, 1.2 million are bystanders killed by secondhand smoke (WHO).

Facts not related to death:
Tobacco consumption displaces productive forms of spending. Tobacco consumes 1.5 to 17% (with a rough median around 4.5%) of a person's income, depending on country and socioeconomic status (table of results) (de Beyer et al., 2001). Spending on tobacco typically displaces spending on education and nutrition in low-income households (John, 2008) (Nonnemaker and Sur, 2007) (Pu et al., 2008). For example, smoking households spent 46% less on education than non-smoking households in surveyed townships in rural China (Wang et al., 2006). Another study states, "Average male cigarette smokers [in Bangladesh] spend more than twice as much on cigarettes as per capita expenditure on clothing, housing, health and education combined" (Efroymson et al., 2001).
Smoking decreases productivity (Tobacconomics, 2019) (Halpern et al., 2001). One study in the United States found that it cost 1,807 USD per year per smoker in lost productivity compared to non-smokers (Bunn III et al., 2006).
Tobacco increases individual healthcare costs (Tobacconomics, 2019). A study in China found that individual medical spending attributable to smoking increased the poverty rate by 1.5% in urban areas and 0.7% in rural areas (Liu et al., 2006).
Tobacco strains the healthcare sector, and taxpayers foot the bill (Wunsch and Brodie, 2021) (Tobacconomics, 2019).
The annual costs of tobacco from healthcare expenditures and losses in productivity amount to 1.8% of world GDP, breaking down to 1.1% to 1.7% of LMICs' GDPs (Goodchild et al., 2018) (Tobacconomics, 2019).
One last fact: without intervention, tobacco is on track to kill a billion individuals during the 21st century (WHO, 2008) (Savedoff and Alwang, 2015) (Jha, 2012).

The intervention
The scientific literature on tobacco consumption is lengthy. The interventions that have the most significant effect on reducing tobacco consumption have been formalized in WHO's MPOWER framework (Kaleta et al., 2009). Of the MPOWER tobacco control measures, taxing tobacco has consistently been demonstrated in the scientific literature to be the most effective intervention (WHO) (NIH). Tobacco demand has a price elasticity of around -0.5, meaning that for a 10% increase in the retail price of tobacco, consumption decreases by 5%...
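As a rough illustration of that elasticity claim, here is a minimal arithmetic sketch; the constant-elasticity functional form and the example numbers are assumptions for illustration, not from the post:

```python
def consumption_change(price_increase_pct, elasticity=-0.5):
    """Approximate % change in consumption for a given % increase in retail price,
    using the linear (point-elasticity) approximation: %dQ ~= elasticity * %dP."""
    return elasticity * price_increase_pct

# A 10% price rise with elasticity -0.5 gives roughly a 5% fall in consumption.
print(consumption_change(10))        # -5.0

# The exact constant-elasticity form gives a similar answer for small changes:
print((1.10 ** -0.5 - 1) * 100)      # about -4.65
```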
|
Jun 01, 2023 |
EA - New Video: What to Eat in a Global Catastrophe by Christian Pearson
01:29
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New Video: What to Eat in a Global Catastrophe, published by Christian Pearson on June 1, 2023 on The Effective Altruism Forum.

We are excited to share our second video here at Insights for Impact, a YouTube channel that aims to communicate key insights from research that we think could have an especially high positive impact in the world. There are major threats to our food supply globally, both now and in the future. The good news is, there are also plenty of viable food solutions. What are the most promising ways to feed the world cheaply, quickly and nutritiously?

Our target audience is laypeople, along with effective altruists who don't yet have much understanding of the given topic - either because they've never heard of it before or because they don't have time to delve into long/technical papers! The idea is to facilitate knowledge gain and pique interest by communicating key insights from valuable research. We hope that some viewers will be interested enough to dig deeper or share the ideas, and this may ultimately spark positive change in the world. We also think our videos could be useful for organisations to share their work with potential donors and other stakeholders.

Going forward, we are continuing to explore a range of EA-relevant cause areas in video form. We collaborate with researchers to ensure their work is accurately portrayed. If you are a researcher wanting to give your work a voice outside of the forum, please get in touch!

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
|
Jun 01, 2023 |
EA - A moral backlash against AI will probably slow down AGI development by Geoffrey Miller
25:58
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A moral backlash against AI will probably slow down AGI development, published by Geoffrey Miller on May 31, 2023 on The Effective Altruism Forum.

Note: This is a submission for the 2023 Open Philanthropy AI Worldviews contest, due May 31, 2023. It addresses Question 1: "What is the probability that AGI is developed by January 1, 2043?"

Overview
People tend to view harmful things as evil, and treat them as evil, to minimize their spread and impact. If enough people are hurt, betrayed, or outraged by AI applications, or lose their jobs, professional identity, and sense of purpose to AI, and/or become concerned about the existential risks of AI, then an intense public anti-AI backlash is likely to develop. That backlash could become a global, sustained, coordinated movement that morally stigmatizes AI researchers, AI companies, and AI funding sources. If that happens, then AGI is much less likely to be developed by the year 2043. Negative public sentiment could be much more powerful in slowing AI than even the most draconian global regulations or a formal moratorium, yet it is a neglected factor in most current AI timelines.

Introduction
The likelihood of AGI being developed by 2043 depends on two main factors: (1) how technically difficult it will be for AI researchers to make progress on AGI, and (2) how many resources - in terms of talent, funding, hardware, software, training data, etc. - are available for making that progress. Many experts' 'AI timelines' for predicting AI development assume that AGI likelihood will be dominated by the first factor (technical difficulty), and assume that the second factor (available resources) will continue increasing.

In this essay I disagree with that assumption. The resources allocated to AI research, development, and deployment may be much more vulnerable to public outrage and anti-AI hatred than the current AI hype cycle suggests. Specifically, I argue that ongoing AI developments are likely to provoke a moral backlash against AI that will choke off many of the key resources for making further AI progress. This public backlash could deploy the ancient psychology of moral stigmatization against our most advanced information technologies. The backlash is likely to be global, sustained, passionate, and well-organized. It may start with grass-roots concerns among a few expert 'AI doomers' and among journalists concerned about narrow AI risks, but it is likely to become better organized over time as anti-AI activists join together to fight an emerging existential threat to our species.

(Note that this question of anti-AI backlash likelihood is largely orthogonal to the issues of whether AGI is possible, and whether AI alignment is possible.)

I'm not talking about a violent Butlerian Jihad. In the social media era, violence in the service of a social cause is almost always counter-productive, because it undermines the moral superiority and virtue-signaling strategies of righteous activists. (Indeed, a lot of 'violence by activists' turns out to be false flag operations funded by vested interests to discredit the activists that are fighting those vested interests.) Rather, I'm talking about a non-violent anti-AI movement at the social, cultural, political, and economic levels.

For such a movement to slow down the development of AGI by 2043 (relative to the current expectations of Open Philanthropy panelists judging this essay competition), it only has to arise sometime in the next 20 years, and to gather enough public, media, political, and/or investor support that it can handicap the AI industry's progress towards AGI, in ways that have not yet been incorporated into most experts' AI timelines. An anti-AI backlash could include political, religious, ideological, and ethical objections to AI, sparked by vivid, outrageous, newsworthy fai... |
Jun 01, 2023 |
EA - A compute-based framework for thinking about the future of AI by Matthew Barnett
36:32
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A compute-based framework for thinking about the future of AI, published by Matthew Barnett on May 31, 2023 on The Effective Altruism Forum.

How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:
The most salient impact of AI will be its ability to automate labor, which is likely to trigger a productivity explosion later this century, greatly altering the course of history.
The availability of useful compute is the most important factor that determines progress in AI, a trend which will likely continue into the foreseeable future.
AI performance is likely to become relatively predictable on most important, general measures of performance, at least when predicting over short time horizons.

While none of these ideas are new, my goal is to provide a single article that articulates and defends the framework as a cohesive whole. In doing so, I present the perspective that Epoch researchers find most illuminating about the future of AI. Using this framework, I will justify a value of 40% for the probability of Transformative AI (TAI) arriving before 2043.

Summary
The post is structured as follows. In part one, I will argue that what matters most is when AI will be able to automate a wide variety of tasks in the economy. The importance of this milestone is substantiated by simple models of the economy that predict AI could greatly accelerate the world economic growth rate, dramatically changing our world. In part two, I will argue that the availability of data is less important than compute for explaining progress in AI, and that compute may even play an important role in driving algorithmic progress. In part three, I will argue against a commonly held view that AI progress is inherently unpredictable, providing reasons to think that AI capabilities may be anticipated in advance. Finally, in part four, I will conclude by using the framework to build a probability distribution over the date of arrival of transformative AI.

Part 1: Widespread automation from AI
When discussing AI timelines, it is often taken for granted that the relevant milestone is the development of Artificial General Intelligence (AGI), or a software system that can do or learn "everything that a human can do." However, this definition is vague. For instance, it's unclear whether the system needs to surpass all humans, some upper decile, or the median human. Perhaps more importantly, it's not immediately obvious why we should care about the arrival of a single software system with certain properties. Plausibly, a set of narrow software programs could drastically change the world before the arrival of any monolithic AGI system (Drexler, 2019). In general, it seems more useful to characterize AI timelines in terms of the impacts AI will have on the world. But that still leaves open the question of what impacts we should expect AI to have and how we can measure those impacts.

As a starting point, it seems that automating labor is likely to be the driving force behind developing AI, providing huge and direct financial incentives for AI companies to develop the technology. The productivity explosion hypothesis says that if AI can automate the majority of important tasks in the economy, then a dramatic economic expansion will follow, increasing the rate of technological, scientific, and economic growth by at least an order of magnitude above its current rate (Davidson, 2021). A productivity explosion is a robust implication of simple models of economic growth, which helps explain why the topic is so important to study. What's striking is that the productivity explosion thesis appears to follow naturally from some standard assump... |
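To give a feel for why simple growth models predict this, here is a toy sketch; it is not the article's or Davidson's actual model, and the parameter values and the "AI systems as substitute workers" framing are assumptions chosen purely for illustration:

```python
# Toy model: output depends on "effective labour" = human workers + AI systems that
# can substitute for workers. If the AI stock grows far faster than the human
# workforce, output growth itself accelerates. All numbers are made up.

def simulate(years=60, humans=4e9, ai_workers=1e6,
             human_growth=0.01, ai_growth=0.30, labour_share=0.6):
    prev_output = None
    for year in range(years + 1):
        effective_labour = humans + ai_workers
        output = effective_labour ** labour_share  # diminishing returns to labour
        if prev_output is not None and year % 15 == 0:
            print(f"year {year}: annual output growth ~ {output / prev_output - 1:.1%}")
        prev_output = output
        humans *= 1 + human_growth
        ai_workers *= 1 + ai_growth

simulate()
# Early on, growth is ~0.6%/year (humans dominate effective labour); once AI workers
# dominate, growth approaches (1.30 ** 0.6 - 1), i.e. ~17%/year -- an order-of-magnitude speed-up.
```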
May 31, 2023 |
EA - Linkpost: Survey evidence on the number of vegans in the UK by Sagar K Shah
02:22
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Linkpost: Survey evidence on the number of vegans in the UK, published by Sagar K Shah on May 31, 2023 on The Effective Altruism Forum.

Stephen Walsh PhD recently carried out a review of different surveys estimating the number of vegans in the UK on behalf of the UK Vegan Society. This is the best review I've seen in the UK context that takes into account recent data. The review suggests:
Around 0.25% of UK adults were vegan in 2015. The proportion was probably stable around this level for at least 15 years.
The share increased to around 1% by 2018.
A best guess of a further increase to around 1.35% by 2022 (note this estimate is less certain and not directly comparable to earlier estimates).

The headline results are based on the Food and You (face-to-face) and Food and You 2 (online, postal) surveys commissioned by the UK Food Standards Agency, after comparison with results from other surveys, including consideration of the questions asked to identify vegans, survey mode, sampling method and sample size.

Stephen's article was originally published in the Vegan Society Magazine (only available to members). Given the potential wider interest in the results, I have received his permission to share a link to a copy of his article, and he is happy to answer any interesting questions that come through in the comments. I have copied below the chart summarising the results of different surveys offering a consistent time trend, and the questions used in the Food and You Survey (2010 to 2018).

Questions used in the Food and You Survey (2010 to 2018)
Question 2_7: Which, if any, of the following applies to you? Please state all that apply.
Completely vegetarian
Partly vegetarian
Vegan
Avoid certain food for religious or cultural reasons
None (SINGLE CODE ONLY)
IF Q2_7 = Vegan, VeganChk: Can I just check, do you eat any foods of animal origin? That is meat, fish, poultry, milk, milk products, eggs or any dishes that contain these?
1 Yes
2 No

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
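To put those percentages in rough headcount terms, here is a quick back-of-the-envelope sketch; the UK adult population figure of roughly 53 million is my assumption, not a number from the review:

```python
# Back-of-the-envelope conversion of vegan shares into approximate headcounts.
UK_ADULTS = 53_000_000  # assumed adult population; not a figure from the review

for year, share in [(2015, 0.0025), (2018, 0.01), (2022, 0.0135)]:
    print(f"{year}: ~{share * UK_ADULTS:,.0f} adult vegans ({share:.2%})")
# 2015: ~132,500   2018: ~530,000   2022: ~715,500 (rough, given survey uncertainty)
```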
May 31, 2023 |
EA - Updates from the Dutch EA community by James Herbert
18:37
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Dutch EA community, published by James Herbert on May 30, 2023 on The Effective Altruism Forum.

We wanted to post something on the Forum to share all of the amazing things the Dutch EA community has achieved in the past 18 months or so. But we also wanted to avoid spending too much time writing it. So please accept this very messy post and feel free to ask questions in the comments! Parts were co-written by ChatGPT with minimal editing, hence the sometimes overly braggadocious tone. We start with national-level updates and two quick lessons learnt, and then we have bullet-point summaries from some of the local groups. But first, an executive summary.

Executive summary
Over the past year, the Dutch EA community has seen impressive growth at both the national and local levels.

At the national level, the community has seen significant gains. The number of intro programme completions increased nearly tenfold, from 45 in 2021 to 400 in 2022. The number of city groups and university groups also grew, from 1 to 3 and 1 to 13 respectively. Notably, there was an influx of €700k donations via Doneer Effectief and an increase in EA Netherlands newsletter subscribers from 670 to around 1,500. Since hiring two full-time community builders in 2022, EAN has helped establish over a dozen new groups, which have collectively produced 350 intro programme graduates in 2022 alone. In addition to launching a new website and co-working space, EAN organized multiple retreats, conducted introductory talks, facilitated 'giving games', provided career counselling, hosted city meet-ups, and participated in a public debate on EA.

Effective altruism is gaining recognition in the Dutch media, with coverage in major Dutch publications and appearances by prominent figures like writer Rutger Bregman. However, there have also been a few critical pieces, to which the EAN board has responded. Other significant achievements include the successful launch of Doneer Effectief's online donation platform, the high-profile EAGxRotterdam 2022 conference, and the Tien Procent Club's successful events on effective giving.

Local EA groups across Dutch cities have also seen substantial growth. For example, the Amsterdam city and university groups have merged, and together they host weekly meetups and multiple programs and are developing a mental health program. At Utrecht, the student group has hatched an Alt Protein group with a grant from the university, has launched an AI Safety group, has hosted a big speaker event with Rutger Bregman, and runs introduction fellowships, socials, coworking sessions and other events. In The Hague, the group conducted weekly dinners, three rounds of intro fellowships, and two rounds of AI governance fellowships. The team at Delft has increased EA awareness through fellowships, book clubs, a retreat, and launching the Delft AI Safety Initiative. Eindhoven's group has engaged 31 people in Introduction Fellowships, has launched an AI safety team, and has collaborated with other groups on their university campus. Nijmegen's group has grown rapidly, with biweekly meetups and collaborations with other campus groups. The PISE group in Rotterdam hosts member-only weekly events, open book clubs, and four fellowship rounds this year. They also initiated EAGx Rotterdam. The Twente group has attended the university's career fair and organized meetups and an introductory talk. Wageningen University's group has hosted live events and completed an introductory fellowship.

Lessons learnt:
Do organising and mobilising (organisers invest in developing the capacities of people to engage with others in activism and become leaders; mobilisers focus on maximising the number of people involved without developing their capacity for civic action).
It's very valuable to have a public figure endorse... |
May 30, 2023 |
EA - Announcement: you can now listen to all new EA Forum posts by peterhartree
03:27
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcement: you can now listen to all new EA Forum posts, published by peterhartree on May 30, 2023 on The Effective Altruism Forum.

For the next few weeks, all new EA Forum posts will have AI narrations. We're releasing this feature as a pilot. We will collect feedback and then decide whether to keep the feature and/or roll it out more broadly (e.g. for our full post archive). This project is run by TYPE III AUDIO in collaboration with the EA Forum team.

How can I listen?
On post pages: You'll find narrations on post pages; you can listen to them by clicking on the speaker icon.
On our podcast feeds: During the pilot, posts that get >125 karma will also be released on the "EA Forum (Curated and Popular)" podcast feed:
EA Forum (Curated & Popular): Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. Subscribe: Apple Podcasts | Google Podcasts | Spotify | RSS
This feed was previously known as "EA Forum (All audio)". We renamed it for reasons. During the pilot phase, most "Curated" posts will still be narrated by Perrin Walker of TYPE III AUDIO.
Posts that get >30 karma will be released on the new "EA Forum (All audio)" feed:
EA Forum (All audio): Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. Subscribe: Apple Podcasts | Spotify | RSS | Google Podcasts (soon)

How is this different from Nonlinear Library?
The Nonlinear Library has made unofficial AI narrations of EA Forum posts available for the last year or so. The new EA Forum AI narration project can be thought of as "Nonlinear Library 2.0". We hope our AI narrations will be clearer and more engaging. Some specific improvements:
Audio notes to indicate headings, lists, images, etc.
Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.
We'll skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.
Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!
We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project.

What do you think?
We'd love to hear your thoughts! To give feedback on a particular narration, click the feedback button on the audio player, or go to t3a.is. We're keen to hear about even minor issues: we have control over most details of the narration system, and we're keen to polish it. The narration system, which is being developed by TYPE III AUDIO, will be rolled out for thousands of hours of EA-relevant writing over the summer. To share feature ideas or more general feedback, comment on this post or write to eaforum@type3.audio.

The reason for this mildly confusing update is that the vast majority of people subscribed to the existing "All audio" feed, but we think that most of them don't actually want to receive ~4 episodes per day. If you're someone who wants to max out the number of narrations in your podcast app, please subscribe to the new "All audio" feed. For everyone else: no action required.

Are you a writer with a blog, article or newsletter to narrate? Write to team@type3.audio and we'll make it happen.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
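To summarise the karma thresholds in one place, here is a tiny illustrative sketch of the routing rule described above; the function is hypothetical and not part of the actual pipeline, which presumably lives in TYPE III AUDIO's systems:

```python
def feeds_for_post(karma: int) -> list[str]:
    """Illustrative only: which podcast feeds a new EA Forum post lands on,
    based on the karma thresholds described in the announcement."""
    feeds = []
    if karma > 30:
        feeds.append("EA Forum (All audio)")
    if karma > 125:
        feeds.append("EA Forum (Curated & Popular)")
    return feeds

print(feeds_for_post(40))   # ['EA Forum (All audio)']
print(feeds_for_post(200))  # ['EA Forum (All audio)', 'EA Forum (Curated & Popular)']
```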
May 30, 2023 |
EA - The bullseye framework: My case against AI doom by titotal
26:03
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The bullseye framework: My case against AI doom, published by titotal on May 30, 2023 on The Effective Altruism Forum.

Introduction:
I've written quite a few articles casting doubt on several aspects of the AI doom narrative. (I've started archiving them on my substack for easier sharing.) This article is my first attempt to link them together to form a connected argument for why I find imminent AI doom unlikely.

I don't expect every one of the ideas presented here to be correct. I have a PhD and work as a computational physicist, so I'm fairly confident about aspects related to that, but I do not wish to be treated as an expert on other subjects such as machine learning, where I am familiar with the subject but not an expert. You should never expect one person to cover a huge range of topics across multiple different domains without making the occasional mistake. I have done my best with the knowledge I have available. I don't speculate about specific timelines here. I suspect that AGI is decades away at minimum, and I may reassess my beliefs as time goes on and technology changes.

In part 1, I will point out the parallel frameworks of values and capabilities. I show what happens when we entertain the possibility that at least some AGI could be fallible and beatable. In part 2, I outline some of my many arguments that most AGI will be both fallible and beatable, and not capable of world domination. In part 3, I outline a few arguments against the idea that "x-risk" safe AGI is super difficult, taking particular aim at the "absolute fanatical maximiser" assumption of early AI writing. In part 4, I speculate on how the above assumptions could lead to a safe navigation of AI development in the future.

This article does not speculate on AI timelines, or on the reasons why AI doom estimates are so high around here. I have my suspicions on both questions. On the first, I think AGI is many decades away; on the second, I think founder effects are primarily to blame. However, these will not be the focus of this article.

Part 1: The bullseye framework
When arguing for AI doom, a typical argument will involve the possibility space of AGI. Invoking the orthogonality thesis and instrumental convergence, the argument goes that in the possibility space of AGI, there are far more machines that want to kill us than those that don't. The argument is that the fraction is so small that AGI will be rogue by default: like the picture below.

As a sceptic, I do not find this, on its own, to be convincing. My rejoinder would be that AGIs are not being plucked randomly from possibility space. They are being deliberately constructed and evolved specifically to meet that small target. An AI that has the values of "scream profanities at everyone" is not going to survive long in development. Therefore, even if AI development starts in dangerous territory, it will end up in safe territory, following path A. (I will flesh this argument out more in part 3 of this article.)

To which the doomer will reply: yes, there will be some pressure towards the target of safety, but it won't be enough to succeed, because of things like deception, perverse incentives, etc. So it will follow something more like path B above, where our attempts to align it are not successful. Often the discussion stops there. However, I would argue that this is missing half the picture. Human extinction/enslavement does not just require that an AI wants to kill/enslave us all; it also requires that the AI is capable of defeating us all. So there's another, similar, target picture going on: the possibility space of AGIs includes countless AIs that are incapable of world domination. I can think of 8 billion such AGIs off the top of my head: human beings. Even a very smart AGI may still fail to dominate humanity, if it's locked... |
May 30, 2023 |
EA - Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures by Center for AI Safety
01:15
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures, published by Center for AI Safety on May 30, 2023 on The Effective Altruism Forum.

Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders. Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs - Sam Altman, Demis Hassabis, and Dario Amodei - as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

We hope this statement will bring AI x-risk further into the Overton window and open up discussion around AI's most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention toward this issue.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 30, 2023 |
EA - List of Masters Programs in Tech Policy, Public Policy and Security (Europe) by sarahfurstenberg
05:47
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of Masters Programs in Tech Policy, Public Policy and Security (Europe), published by sarahfurstenberg on May 29, 2023 on The Effective Altruism Forum.

We created this non-exhaustive list, which was inspired by and based on Konstantin's personal research into Masters programmes. We expanded it with the help of others across the policy community. It was created for the 2023 cohort of fellows of the EU Tech Policy Fellowship hosted by Training for Good and includes a list of Masters in Europe, the UK and the US. CLICK HERE for the list.

Limitations and Epistemic Status
The list is based on personal experience, research, and limited feedback from others in the community. It is curated from a European perspective; thus, the numbers and deadlines take European/EEA citizens as a reference point. Furthermore, whilst Masters from Europe, the UK and the US are listed, we have focussed on researching Masters in Europe, so the latter lists are currently very incomplete. It's important to emphasise that this list is not exhaustive and may not represent all options for Masters in this field. Additionally, the quality and relevance of each program may vary depending on individual needs, goals, and interests. Therefore, we recommend that individuals interested in pursuing a career in tech policy, or policy in general, conduct their own research, explore various programs, and consider multiple sources of information before making a decision! Ultimately, the decision to pursue a particular graduate program should be based on a thorough evaluation of individual goals, resources, and circumstances.

What this post is not
This post does not outline what to study and what to aim for in choosing your Masters degree. It is supposed to help people who have already decided that they want to pursue a Masters in Tech Policy, Security Studies or Public Policy, but it does not mean to imply that these are your only or even best options if you want to enter the tech policy field. A possibly safer and more classical approach to entering EU policy is to study basic law and economics subjects, as they still hold a high standing across departments and fields in policy (see this article on "Joining the EU bubble"). This would also give you more flexible career capital than tech policy degrees. To elaborate on these different paths, a detailed post (such as this one) outlining what to aim for in your studies if you want to contribute to tech policy would be incredibly valuable, and we encourage you to write this up and share your perspective if you have spent some time thinking about this!

Created for whom?
This list is aimed at people interested in working in public policy (especially in Europe) and in tech policy, with a potential to specialise in AI, but it only provides a very narrow selection of options. Degrees with "tech" or "AI"-related words in the name are helpful to quickly signal your relevance on these topics. Many of the Masters in this list are geared towards people with a non-technical undergraduate degree in social sciences, economics etc. Thus, it excludes many Masters in Artificial Intelligence and Tech Policy that require you to have a Computer Science or technical background. We wanted to share the list to help with some of the preliminary research involved in choosing a Masters programme.

The inclusion of Security Studies Masters programmes comes from the argument that it seems like a viable path from which to enter inter/national think tanks or institutions working on relevant AI policy without having technical specialisations beforehand.

Other considerations
Besides studying in Europe, studying in the US can be a great and high-impact option, since many degrees are both highly regarded in Europe and potentially allow you to work in US policy. We highly encourage you to read this post on worki... |
May 29, 2023 |
EA - Obstacles to the Implementation of Indoor Air Quality Improvements by JesseSmith
12:19
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Obstacles to the Implementation of Indoor Air Quality Improvements, published by JesseSmith on May 29, 2023 on The Effective Altruism Forum.

1. Tl;dr
Many reports indicate that indoor air quality (IAQ) interventions are likely to be effective at reducing respiratory disease transmission. However, to date there's been very little focus on the workforce that will implement these interventions. I suggest that the US Heating, Ventilation and Air Conditioning (HVAC) and building maintenance workforces have already posed a significant obstacle to these interventions, and broad uptake of IAQ measures will be significantly hindered by them in the future. The impact will vary in predictable ways depending on the nature of the intervention and its implementation. We should favor simple techniques with improved oversight and outsource or crosscheck technically complex work to people outside of the current HVAC workforce. We should also make IAQ conditions and devices as transparent as possible to both experts and building occupants. To skip my bio and the technical horrors section, proceed to the recommendations in section 4.

2. Who am I? Why do I Think This? How Certain am I?
I began working in construction in 1991. I did a formal carpentry apprenticeship in Victoria, BC in the mid-90s and moved to the US in '99. Around 2008 I started taking greater interest in HVAC because - despite paying top dollar to local subcontractors - our projects had persistent HVAC problems. Despite protestations that they were following exemplary practices, our projects were plagued with high humidity, loud noise, frequent mechanical failure, and room-to-room temperature differences. This led me to first learn all aspects of system design and controls, and culminated in full system installations. Along the way I obtained a NJ Master HVAC license, performed the thermal work of ~2k light-duty energy retrofits, obtained multiple certifications in HVAC and low-energy design, and became a regional expert in building diagnostics. Since 2010 I've worked as a contractor or consultant to roughly a dozen major HVAC contractors and hundreds of homeowners.

I'm reasonably certain that the baseline competence of the HVAC workforce is insufficient to broadly and reliably deploy IAQ interventions and that this is a serious obstacle. My comments are specific to the US. I've discussed these problems extensively with friends and acquaintances working at a national level and in other parts of the US and believe them to be common to most of the country. The problems are specific to the light commercial and residential workforce, but not to domains that are closely monitored by mechanical engineering teams (e.g. hospitals). Based on some limited experience I suspect these problems are also common to Canada, but I'm less certain about their severity.

3. Technical Horrors: Why is This so Difficult?
Within HVAC, many important jobs are currently either not performed or delegated to people who are largely incapable of performing them. Many people convincingly lie about their capacity to perform a job they're incapable of, report having done things they haven't, or even make statements at odds with physics. Examples include:

Accurate heat load/loss calculations: These are used to size heating and cooling systems, and in most areas are code-mandated for both new and replacement systems. Competent sizing (Manual J for residential) is viewed as highly important by virtually all experts within HVAC. However, despite decades of investment in training and compliance, a lead technical manager of a clean energy program reported to me that >90% of Manual Js reviewed by his program had significant errors made apparent by internal inconsistency (e.g. duct load on a hydronic system) or obvious inconsistencies with public information on zi... |
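For readers unfamiliar with what a heat load/loss calculation involves, here is a deliberately simplified sketch of the conduction part only. This is not Manual J, which also accounts for infiltration, ducts, solar and internal gains; all U-values, areas, and design temperatures below are made-up illustrative numbers:

```python
# Simplified conductive heat-loss estimate. NOT a full Manual J calculation; it only
# illustrates the basic U-value * area * temperature-difference arithmetic.
surfaces = [
    {"name": "walls",   "u_value": 0.06, "area_ft2": 1200},  # U in Btu/(hr*ft2*F), assumed
    {"name": "windows", "u_value": 0.30, "area_ft2": 200},
    {"name": "ceiling", "u_value": 0.03, "area_ft2": 1000},
]
delta_t = 70 - 10  # indoor design temp minus outdoor design temp, in F (assumed)

total_btu_hr = sum(s["u_value"] * s["area_ft2"] * delta_t for s in surfaces)
print(f"Design conductive heat loss: ~{total_btu_hr:,.0f} Btu/hr")  # ~9,700 Btu/hr here
```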
May 29, 2023 |
EA - Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible by Dr. David Mathers
08:11
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Governments Might Prefer Bringing Resources Back to the Solar System Rather than Space Settlement in Order to Maintain Control, Given that Governing Interstellar Settlements Looks Almost Impossible, published by Dr. David Mathers on May 29, 2023 on The Effective Altruism Forum.

Part of my work for Arb Research.

Epistemic Status: I have no scientific background and wrote this after only a couple of days' thought, so it is very possible that there is some argument I am unaware of, but which would be obvious to physicists, for why a 'resource-gathering without settlement' approach to interstellar exploration is not feasible. However, my Arb colleague Vasco Grilo has aerospace engineering expertise and says he can't think of any reason why it wouldn't be feasible in principle. Still, take all this with a large dose of caution.

Some futurists have considered it likely that, at least absent existential catastrophe in the next few centuries, human beings (or our post-human or machine descendants) will eventually attempt to settle our galaxy. After all, there are vastly more resources in the rest of the Milky Way than in the Solar system. So we could support far more lives, and create much more of anything else we care about, if we make use of stuff out there in the wider galaxy. And one very obvious way for us to make use of that stuff is to send out spaceships to establish settlements which make use of the energy of the stars they arrive at. Those settlements could in turn seed further settlements in an iterative process. (This would likely require "digital people" given the distances involved in interstellar travel.)

However, this is not the only way in which we could try to make use of resources outside the Solar system. Another way would be to try to gather resources and bring them back to the Solar system without establishing any permanent settlements of either humans or AIs outside the Solar system itself. I think that a government on Earth (or elsewhere in the Solar system) might actually prefer gathering resources in this way to space settlement, for the following reason:

Impossibility of Interstellar Governance (IIG): Because of the huge distances between stars, it is simply not possible for a government in the Solar system to exercise long-term effective governance over any space colonies further away than (at most) the closest handful of stars.

For a powerful, although not completely conclusive, case for this claim, see this Medium post: /@KevinKohlerFM/cosmic-anarchy-and-its-consequences-b1a557b1a2e3

Given IIG, no government within the Solar system can be the government of a settlement outside it. Therefore, if a government sets up a colony run by agents in another star system, it loses direct control of those resources. Of course, the government can try to exercise more indirect control over what happens by choosing starting colonists with particular values. But it's unclear what degree of control that will allow for long-term. Meanwhile, a government could try to send a mission to other stars which:
A) is not capable of setting up a new self-sufficient settlement, or can be trusted not to do so; BUT
B) is capable of setting up physical infrastructure to extract the system's energy and resources and bringing them back to the Solar system.

This way, a government situated in the Solar system could maintain direct control over how resources are used. In contrast, if they go the space settlement route, the government cannot directly govern the settlement. So it has to rely on the idea that, if the values of the initial settlers are correct, then the settlement will use its resources in the way the government desires even whilst operating outside the government's control. A purely resource-gathering mission without settlement will be particularly att... |
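As a rough illustration of why IIG is plausible, here is a quick communication-delay calculation; the choice of example stars and the framing in terms of round-trip message time are mine, not the author's:

```python
# Round-trip signal delay to nearby stars, in years. Even a single exchange of
# messages with the nearest star system takes the better part of a decade,
# which makes responsive governance from the Solar system hard to imagine.
stars_ly = {"Proxima Centauri": 4.2, "Tau Ceti": 11.9, "A star 100 ly away": 100}

for name, distance_ly in stars_ly.items():
    round_trip_years = 2 * distance_ly  # light-speed signals, ignoring any relay overhead
    print(f"{name}: ~{round_trip_years:.0f} years per question-and-answer exchange")
```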
May 29, 2023 |
EA - Why Africa Needs a Cage Free Model Farm and Producer's Directory by abilibadanielbaba11@gmail.com
02:39
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Africa Needs a Cage Free Model Farm and Producer's Directory, published by abilibadanielbaba11@gmail.com on May 29, 2023 on The Effective Altruism Forum.

Summary of key points:
Africa's egg production is predominantly from caged farming, with only 40% of eggs being cage-free in 2020, according to FAOSTAT. About 80% of commercial hens are kept in cages in Northern and South Africa, with the caged numbers also rising in other African countries. With a rising population and a growing middle class, Africa's egg market is expected to grow annually by 11.26%, 3.5% more than the global egg market growth within 2023-2027.

Africa has about 550 million layer hens, with commercial production supplying two-thirds of egg consumption. The region's exponential growth in the poultry industry is expected to continue due to the rising middle class and rapid urbanisation. Many global and multinational corporations are responding to this by rapidly expanding their operations to and within Africa: "Walmart to open more stores in Africa via Massmart.." and "KFC to expand in Africa but it lacks only one thing: Chickens" by the Financial Times in 2015 and 2016 respectively; "Famous brands continue expansion in Africa" by CNBC Africa, 2015.

Cage-free commitments are relatively low in Africa compared to other regions. In my outreach to global companies present in Africa, one of the most cited reasons given by multinational and local companies is the lack of a sustainable supply of cage-free products. At present, the Open Wing Alliance has a cage-free producers directory for all the regions in the world except Africa.

A model farm is of immense importance as it will provide practical training in best practices in cage-free management, serve as a reference farm for cage-free producers to visit, and serve farmers, auditors, veterinarians and other industry stakeholders across Africa. Consolidating and training egg producers in high-welfare production will alleviate the needless suffering that thousands of millions of chickens will otherwise face in Africa.

The model farm will thus serve as Africa's research and development centre, informing policy formulation and legislation. As part of its goals, the farm will develop and consolidate a cage-free producer's directory for Africa (the lack of which is currently a bottleneck to cage-free commitments in the region) and will work with international and local companies to streamline cage-free policies and improve animal welfare. Corporate cage-free commitments will not be sustainable in Africa without a reliable egg producers directory.
|
May 29, 2023 |
EA - EA Estonia's Impact in 2022 by RichardAnnilo
13:40
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Estonia's Impact in 2022, published by RichardAnnilo on May 29, 2023 on The Effective Altruism Forum.

Background
This report is about January to December 2022 in EA Estonia, corresponding to our last grant period (funding from the EA Infrastructure Fund for 1 FTE and group expenses). Quick facts about Estonia: it has a population of 1.3 million and is placed both geographically and culturally between the Nordics and Eastern Europe. Our language has 14 noun cases, it is the birthplace of 10 unicorns, and we have the best mushroom scientists. Go figure.

In our national EA group, there are 23-30 people whom I would consider to be "highly engaged" (meaning they have spent more than 100 hours engaging with EA content, have developed career plans and have taken significant steps in pursuit of doing good). You could expect around 30 people to attend our largest community event (Figure 1), and our Slack channel has 50-60 active weekly members (Figure 2).

Figure 1: Attendees of our largest community event, the EA Estonia Summer Retreat. August 2022.
Figure 2: EA Estonia Slack statistics from its creation. Weekly active members oscillated between 40 and 65 throughout 2022.

Group strategy
Here are the main metrics we used to evaluate our impact:
Awareness: Number of people aware of the term "effective altruism" and EA Estonia. Activities: introductory talks, direct outreach, social media outreach.
First engagement: Number of people who took action because of our outreach. Activities: introductory course, cause-specific reading groups.
Career planning: Number of people who develop career plans based on EA principles that are well informed and well reasoned. Activities: career course.
Taking action: Number of people taking significant action based on EA-informed career plans (e.g. starting full-time jobs, university degrees). Activities: 1-1 career calls, peer-mentoring, directly sharing opportunities.

Concerns with this model:
The actual impact comes when people take action within high-impact jobs, which we currently don't measure.
We don't measure value drift or other kinds of decreased engagement after taking significant next steps.
This model doesn't prioritize targeting existing Highly Engaged EAs (HEAs) to have a higher impact.
This also doesn't include a more meta-level goal of keeping people engaged and interested while moving towards HEA status. We do organize social events for this reason; however, their impact is not quantified.

Regardless of these concerns, the main theory of change feels relatively straightforward: (1) we find young, altruistically-minded people who are unclear about their future career plans, then (2) we make them aware of the effective altruism movement and various high-impact career paths, and then (3) we prompt them to develop explicit career plans and encourage them to take action upon them.

Below I will go into more detail regarding the goals, activities and results of 2022 in two categories: (i) outreach and (ii) growing HEAs. I will end with a short conclusion and key takeaways for next year.

I. Outreach
Goal: 5,000 new people who know what "effective altruism" means and that there is an active local group in Estonia.
Actual: 20,776 people reached.
Activities:
Liina Salonen started working full-time as the Communications Specialist in EA Estonia.
Reached at least 20,000 people on Facebook with the Introductory Course social media campaign.
Student fair tabling: at least 155 people reached (played the Giving Game).

Goal: 10 lecturers mentioning EA Estonia.
Actual: 1 lecturer reached.
Visited a philosophy lecture (20 students). Talked about effective altruism and longtermism and created a discussion with the lecturer. Suggested people sign up for our career course; nobody responded. Wrote to two other philosophy lecturers, but the... |
May 29, 2023 |
EA - Should the EA community be cause-first or member-first? by EdoArad
07:42
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should the EA community be cause-first or member-first?, published by EdoArad on May 29, 2023 on The Effective Altruism Forum.

It's really hard to do community building well. Opinions on strategy and vision vary a lot, and we don't yet know enough about what actually works and how well. Here, I'll suggest one axis of community-building strategy which helped me clarify and compare some contrasting opinions.

Cause-first
Will MacAskill's proposed Definition of Effective Altruism is composed of:
An overarching effort to figure out what are the best opportunities to do good.
A community of people that work to bring more resources to these opportunities, or work on these directly.

This suggests a "cause-first" community-building strategy, where the main goal for community builders is to get more manpower into the top cause areas. Communities are measured by the total impact produced directly through the people they engage with. Communities try to find the most promising people, persuade them to work on top causes, and empower them to do so well. CEA's definition and strategy seem to be mostly along these lines:

Effective altruism is a project that aims to find the best ways to help others, and put them into practice. It's both a research field, which aims to identify the world's most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.

and

Our mission is to nurture a community of people who are thinking carefully about the world's biggest problems and taking impactful action to solve them.

Member-first
Let's try out a different definition for the EA community, taken from CEA's guiding principles:

What is the effective altruism community? The effective altruism community is a global community of people who care deeply about the world, make helping others a significant part of their lives, and use evidence and reason to figure out how best to do so.

This, to me, suggests a subtly different vision and strategy for the community. One that is, first of all, focused on these people who live by EA principles. Such a "member-first" strategy could have a supporting infrastructure that is focused on helping the individuals involved to live their lives according to these principles, and an outreach/growth ecosystem that works to make the principles of EA more universal.

What's the difference?
I think this dimension has important effects on the value of the community, and that both local and global community-building strategies should be aware of the tradeoffs between the two. I'll list some examples and caricatures of the distinction between the two, to give a more intuitive grasp of how these strategies differ, without any clear order:

Leaning cause-first | Leaning member-first
---|---
Keep EA Small and Weird | Big Tent EA
Current EA Handbook (focus on introducing major causes) | 2015's EA Handbook (focus on core EA principles)
80,000 Hours | Probably Good
Wants more people doing high-quality AI Safety work, regardless of their acceptance of EA principles | Wants more people deeply understanding and accepting EA principles, regardless of what they actually work on or donate to
Targeted outreach to students in high-ranking universities | Broad outreach with diverse messaging
Encourages people to change occupations to focus on the world's most pressing problems | Encourages people to use the tools and principles of EA to do more good in their current trajectory
Risk of people not finding useful ways to contribute to top causes | Risk of not enough people who want to contribute to the world's top causes
The community as a whole leads by example, by taking in-depth prioritization research with the proper seriousness | Each individual is focused more on how to implement EA principles in their own lives, taking their personal worldview and situation into account ... |
May 29, 2023 |
EA - Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare by Meghan Barrett
03:47
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Drawing attention to invasive Lymantria dispar dispar spongy moth outbreaks as an important, neglected issue in wild animal welfare, published by Meghan Barrett on May 28, 2023 on The Effective Altruism Forum.

This post contains only the summary of a longer research post, written by Meghan Barrett and Hannah McKay. The full post can be found at the above link on Rethink Priorities' website.

Summary
One aim of wild animal welfare research is to identify situations where large numbers of wild animals are managed by humans in ways that have significant welfare impacts. If the number of individuals is large and the welfare impacts significant, the issue is important. As humans are managing these animals, it is possible the welfare impacts could be moderated to reduce their suffering. The massive scale of invasive (e.g., non-native) Lymantria dispar dispar (spongy moth) outbreaks represents an unappreciated wild animal welfare issue, and thus deserves further attention from a welfare (not simply an invasive species-control) perspective.

The spongy moth is not endemic to North America. The species experiences localized, three-year-long outbreaks of half a billion or more caterpillars/km2 every 10-15 years in regions where they are well established (including their native range). Spongy moths currently occupy at least 860,000 km2 in North America, only a quarter of their possible range (though most of the occupied area is not experiencing outbreak conditions most of the time). L. dispar continues to spread slowly to new areas each year despite multi-million dollar efforts to stop expansion. Assuming spongy moth caterpillars are sentient, methods for actively controlling them during outbreaks cause substantial suffering. The aerial spray (Btk) ruptures the stomach, causing the insect to die from either starvation or sepsis over two to seven days. However, because outbreaks are so large, most caterpillars are not actively targeted for control, and 'natural forces' are allowed to reduce the outbreak.

The most prominent natural forces that rein in an outbreak are starvation and disease. The accidentally introduced fungus, Entomophaga (meaning "insect eater") maimaiga, digests caterpillars' insides before pushing through the exoskeleton to release spores, usually within a week. The LdNPV virus is also common in the spongy moth population, but only causes high levels of mortality during outbreaks, when larvae are stressed from extreme competition. A symptom of severe LdNPV infection is "larval melting", or the liquefaction of the insect's internal organs.

The scale of spongy moth outbreaks is tremendous, though notably these outbreaks are not necessarily higher-density than those of other insect species (e.g., 740 million to 6.2 billion individual wireworms/km2; Smithsonian, n.d.). However, spongy moths are one of the best-tracked non-native insects (Grayson & Johnson, 2018; e.g., the Stop the Spread program), providing us with better data for analyzing the scale of the welfare issue (both in terms of caterpillar density within outbreaks and the total area affected by outbreaks). In addition, there is potential for significant range expansion by spongy moths that would increase the scope of the welfare concern, and there appears to be extreme suffering induced by both active and natural outbreak control. As a result, spongy moth welfare during outbreaks could be an issue of concern for animal welfare advocates.

Further research could improve spongy moth welfare by: 1) identifying the most promising long-term interventions for preventing/reducing the occurrence of outbreaks behind the invasion front, 2) contributing to halting the spread of spongy moths into new areas, and 3) identifying the highest-welfare outbreak management strategies where outbreaks do occur.

Thanks for listening. To help us ... |
May 28, 2023 |
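A rough back-of-envelope check of the scale claim in the summary above, using only the figures it quotes. The share of the occupied range in outbreak at any one time is not given in the post, so the fractions below are purely illustrative assumptions:

```python
# Back-of-envelope scale estimate using the figures quoted in the summary.
# The outbreak fraction is NOT from the post; 1% and 10% are illustrative guesses.
caterpillars_per_km2 = 0.5e9   # "half a billion or more caterpillars/km2" during outbreaks
occupied_area_km2 = 860_000    # current North American range cited in the summary

for outbreak_fraction in (0.01, 0.10):  # hypothetical share of the range in outbreak at once
    n = caterpillars_per_km2 * occupied_area_km2 * outbreak_fraction
    print(f"outbreak fraction {outbreak_fraction:.0%}: ~{n:.1e} caterpillars affected")
```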
EA - Has Russia’s Invasion of Ukraine Changed Your Mind? by JoelMcGuire
07:00
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Has Russia’s Invasion of Ukraine Changed Your Mind?, published by JoelMcGuire on May 28, 2023 on The Effective Altruism Forum. Published on May 27, 2023 6:35 PM GMT. [Section headings: Likelihood of nuclear war, conditional on great power conflict; Likelihood of nuclear war; Conditional on Russia losing, is the world a safer place?][This post was written in a purely personal capacity, etc.]I recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.Unfortunately, it seems more of the latter. When my mind has changed, it's been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.I set out what I think are some relevant questions Russia's invasion of Ukraine could change your mind about and provide some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.It seems like the Metaculus forecasting community is now more worried about great power conflict than it was before the war. I assume the invasion of Ukraine is a causal factor. But I feel oddly reassured about this, like the world was ruled by drunks who sobered up when the knives came out, reminded that knives are sharp and bodily fluids are precious.After the invasion, the prospect of a Russia-USA War shifted from a 5-15% to a 25% chance before 2050. I hadn’t known about this forecast, but I would have assumed the opposite. Before the war, Russia viewed the US as a waning power, losing in Afghanistan, not-winning in Syria, Libya and Venezuela, riven by internecine strife and paralyzed by self-doubt. Meanwhile, Russia’s confidence in its comeback rose with each cost-effective success in Crimea, Syria, and Kazakhstan.Now Russia knows how hollow its military was. And it knows the USA knows. And it knows that NATO hand-me-downs are emptying its once vast stockpiles of tanks and APCs. I assume it won’t recover the depth of its armour stocks in the near term (it doesn’t have the USSR’s state capacity or industrial base). The USA also doesn’t need to fight Russia. If Ukraine is doing this well, then Ukraine + Poland + Baltics would probably do just fine. I’d put this more around 6.5%.I think a Russian war with a European state has probably increased simply based on Russia’s revealed willingness to go to war, in conjunction with forecasters predicting a good chance (20%-24%) that the US and China will go to war over Taiwan. Russia may find such a conflict an opportunity to attempt to occupy a square mile of uninhabited Lithuanian forest to create a safe zone for ethnic Russian speakers and puncture the myth of NATO’s 5th article.Will there be a 'World War Three' before 2050? | MetaculusThe predicted probability to this question shifted by around 10%, from the 10-15% range to 20-25% after the war began. I assume this is mostly driven by Russia-NATO-initiated conflict. China-India conflict predictions have decreased from 30% pre-war to 17% before 2035 most recently. And China-US war predictions have stayed constant (20% before 2035). So the rise must stem from the increase in the likelihood of a Russia-US war or by other major powers between 2035 and 2050.
I don’t think I agree with the community here, as I explained previously.Will China get involved in the Russo-Ukrainian conflict by 2024?China hasn’t involved itself in the Ukraine war yet. And the prospects for its involvement seem like they should dim over time — surely it would have acted or given more hints that it was considering doing so by now?This makes me more confused about whether China committed to a military confrontation with the West. If it has, and China believed it had more military-industria...
|
May 28, 2023 |
EA - Has Russia’s Invasion of Ukraine Changed Your Mind? by JoelMcGuire
09:30
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Has Russia’s Invasion of Ukraine Changed Your Mind?, published by JoelMcGuire on May 27, 2023 on The Effective Altruism Forum.[This post was written in a purely personal capacity, etc.]I recently had several long conversations with a friend about whether my regular doom-scrolling regarding the Ukraine war had sharpened my understanding of the world or mostly been a waste of time.Unfortunately, it seems more of the latter. When my mind has changed, it's been slight, and it’s unclear what actions my new views justify. Personally, this means I should probably go back to thinking about happiness and RCTs.I set out what I think are some relevant questions Russia's invasion of Ukraine could change your mind about and provide some sloppy commentary, but I'm interested to know what other EAs and rationalists think about this issue.High-level questionsLikelihood of great power conflictIt seems like the Metaculus forecasting community is now more worried about great power conflict than it was before the war. I assume the invasion of Ukraine is a causal factor. But I feel oddly reassured about this, like the world was ruled by drunks who sobered up when the knives came out, reminded that knives are sharp and bodily fluids are precious.After the invasion, the prospect of a Russia-USA War shifted from a 5-15% to a 25% chance before 2050. I hadn’t known about this forecast, but I would have assumed the opposite. Before the war, Russia viewed the US as a waning power, losing in Afghanistan, not-winning in Syria, Libya and Venezuela, riven by internecine strife and paralyzed by self-doubt. Meanwhile, Russia’s confidence in its comeback rose with each cost-effective success in Crimea, Syria, and Kazakhstan.Now Russia knows how hollow its military was. And it knows the USA knows. And it knows that NATO hand-me-downs are emptying its once vast stockpiles of tanks and APCs. I assume it won’t recover the depth of its armour stocks in the near term (it doesn’t have the USSR’s state capacity or industrial base). The USA also doesn’t need to fight Russia. If Ukraine is doing this well, then Ukraine + Poland + Baltics would probably do just fine. I’d put this more around 6.5%.I think a Russian war with a European state has probably increased simply based on Russia’s revealed willingness to go to war, in conjunction with forecasters predicting a good chance (20%-24%) that the US and China will go to war over Taiwan. Russia may find such a conflict an opportunity to attempt to occupy a square mile of uninhabited Lithuanian forest to create a safe zone for ethnic Russian speakers and puncture the myth of NATO’s 5th article.Will there be a 'World War Three' before 2050? | MetaculusThe predicted probability to this question shifted by around 10%, from the 10-15% range to 20-25% after the war began. I assume this is mostly driven by Russia-NATO-initiated conflict. China-India conflict predictions have decreased from 30% pre-war to 17% before 2035 most recently. And China-US war predictions have stayed constant (20% before 2035). So the rise must stem from the increase in the likelihood of a Russia-US war or by other major powers between 2035 and 2050. I don’t think I agree with the community here, as I explained previously.Will China get involved in the Russo-Ukrainian conflict by 2024?China hasn’t involved itself in the Ukraine war yet. 
And the prospects for its involvement seem like they should dim over time — surely it would have acted or given more hints that it was considering doing so by now?This makes me more confused about whether China committed to a military confrontation with the West. If it has, and China believed it had more military-industrial capacity than the West (which is what I’d believe if I was China), then now is the perfect opportunity to drain Western stocks ... |
May 28, 2023 |
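A minimal sketch of how the pairwise forecasts quoted in this post combine, under the strong simplifying assumption that the dyadic conflicts are independent (and ignoring that the horizons differ: 2035 for the China questions, 2050 for Russia-US). It will not reproduce the Metaculus "World War Three" number; it only makes the decomposition step explicit:

```python
# If the dyadic great-power-war forecasts were independent (a strong simplification),
# the chance of at least one such war is 1 - prod(1 - p_i).
def p_any(probs):
    p_none = 1.0
    for p in probs:
        p_none *= (1 - p)
    return 1 - p_none

pre_invasion  = {"Russia-US": 0.10, "China-US": 0.20, "China-India": 0.30}  # 0.10 = midpoint of the quoted 5-15% pre-war range
post_invasion = {"Russia-US": 0.25, "China-US": 0.20, "China-India": 0.17}  # post-invasion figures quoted in the post

print(f"pre-invasion:  P(at least one) ~ {p_any(pre_invasion.values()):.0%}")
print(f"post-invasion: P(at least one) ~ {p_any(post_invasion.values()):.0%}")
```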
EA - Do you think decreasing the consumption of animals is good/bad? Think again? by Vasco Grilo
11:03
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Do you think decreasing the consumption of animals is good/bad? Think again?, published by Vasco Grilo on May 27, 2023 on The Effective Altruism Forum.QuestionDo you think decreasing the consumption of animals is good/bad? For which groups of farmed animals?ContextI stopped eating animals 4 years ago mostly to decrease the suffering of farmed animals. I am glad I did that based on the information I had at the time. However, I am no longer confident that decreasing the consumption of animals is good/bad. It has many effects:Decreasing the number of factory-farmed animals.I believe this would be good for chickens, since I expect them to have negative lives. I estimated the lives of broilers in conventional and reformed scenarios are, per unit time, 2.58 and 0.574 times as bad as human lives are good (see 2nd table). However, these numbers are not resilient:On the one hand, if I consider disabling pain is 10 (instead of 100) times as bad as hurtful pain, the lives of broilers in conventional and reformed scenarios would be, per unit time, 2.73 % and 26.2 % as good as human lives. Nevertheless, disabling pain being only 10 times as bad as hurtful pain seems quite implausible if one thinks being alive is as good as hurtful pain is bad.On the other hand, I may be overestimating broilers’ pleasurable experiences.I guess the same applies to other species, but I honestly do not know. Figuring out whether farmed shrimps and prawns have good/bad lives seems especially important, since they are arguably the driver for the welfare of farmed animals.Decreasing the production of animal feed, and therefore reducing crop area, which tends to:Increase the population of wild animals, which I do not know whether it is good or bad. I think the welfare of terrestrial wild animals is driven by that of terrestrial arthropods, but I am very uncertain about whether they have good or bad lives. I recommend checking this preprint from Heather Browning and Walter Weit for an overview of the welfare status of wild animals.Decrease the resilience against food shocks. As I wrote here:The smaller the population of (farmed) animals, the less animal feed could be directed to humans to mitigate the food shocks caused by the lower temperature, light and humidity during abrupt sunlight reduction scenarios (ASRS), which can be a nuclear winter, volcanic winter, or impact winter.Because producing calories from animals is much less efficient than from plants, decreasing the number of animals results in a smaller area of crops.So the agricultural system would be less oversized (i.e. it would have a smaller safety margin), and scaling up food production to counter the lower yields during an ASRS would be harder.To maximise calorie supply, farmed animals should stop being fed and quickly be culled after the onset of an ASRS. This would decrease the starvation of humans and farmed animals, but these would tend to experience more severe pain for a faster slaughtering rate.As a side note, increasing food waste would also increase resilience against food shocks, as long as it can be promptly cut down. One can even argue humanity should increase (easily reducible) food waste instead of the population of farmed animals. 
However, I suspect the latter is more tractable.Increase biodiversity, which arguably increases existential risk due to ecosystem collapse (see Kareiva 2018).Decreasing greenhouse gas emissions, and therefore decreasing global warming.I have little idea whether this is good or bad.Firstly, it is quite unclear whether climate change is good or bad for wild animals.Secondly, although more global warming makes climate change worse for humans, I believe it mitigates the food shocks caused by ASRSs. Accounting for both of these effects, I estimated the optimal median global warming i... |
May 27, 2023 |
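An illustrative sensitivity check in the spirit of the post: the sign of a broiler-welfare estimate can hinge on how many times worse "disabling" pain is assumed to be than "hurtful" pain. All hours and weights below are made-up placeholders, not the Welfare Footprint figures the author used:

```python
# Toy sensitivity analysis: flips sign depending on the disabling-vs-hurtful pain weight.
# Every number here is a hypothetical placeholder, not data from the post.
hypothetical_hours = {"disabling_pain": 20, "hurtful_pain": 300, "neutral_or_positive": 8_000}

def net_welfare(hours, disabling_vs_hurtful, positive_weight=0.1):
    # Welfare in "hurtful-pain-hour" units: each hurtful hour counts -1, each disabling
    # hour counts -disabling_vs_hurtful, and ordinary hours of being alive count slightly positive.
    return (-hours["disabling_pain"] * disabling_vs_hurtful
            - hours["hurtful_pain"]
            + hours["neutral_or_positive"] * positive_weight)

for ratio in (100, 10):
    w = net_welfare(hypothetical_hours, ratio)
    print(f"disabling pain = {ratio}x hurtful: net welfare {w:+,.0f} (negative = worse than not existing)")
```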
EA - By failing to take serious AI action, the US could be in violation of its international law obligations by Cecil Abungu
16:46
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: By failing to take serious AI action, the US could be in violation of its international law obligations, published by Cecil Abungu on May 27, 2023 on The Effective Altruism Forum.“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”. 2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.IntroductionThe United States is yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this could be in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we have artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.A. Some important premisesThe US signed and ratified the ICCPR on June 8 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can. As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them.This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross-fertilization that I’ve just described. Before understanding my argument, there are a few crucial premises you have to appreciate. I will discuss them next.(i) All major human rights treaties, including the ICCPR, impose on states a duty to protect lifeIn addition to the ICCPR, the African Charter, European Convention and American Convention have all given states a duty to protect life.[2] As you might imagine, the existence of the actual duty is generally undisputed. It is when we get to the specific content of the duty that things become murky.(ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countriesThe Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory.
The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.(iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countriesEven though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subjec... |
May 27, 2023 |
EA - Co-found an incubator for independent AI Safety researchers! by Alexandra Bos
10:08
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Co-found an incubator for independent AI Safety researchers!, published by Alexandra Bos on May 26, 2023 on The Effective Altruism Forum.Full-time, remoteAPPLY HEREDeadline: Thursday, June 8th (in your timezone)If your ideal job would be leading an impact-driven organization, being your own boss and pushing for a safer future with AI, you might be a great fit for co-founding Catalyze Impact!Below, you will find out more about Catalyze’s mission and focus, why co-founding this org would be high-impact, how to tell if you’re a good fit, and how to apply.In short, Catalyze will 1) help people become independent technical AI Safety researchers, and 2) deliver key support to independent AI Safety researchers so they can do their best work.If you think this non-profit’s work could be important, please like/upvote and share this message so that the right people get to see it.You can ask questions, register interest to potentially fund us, work with us, make use of our services in the future and share information here.Why support independent AI Safety researchers?Lots of people want to do AI Safety (AIS) research and are trying to get in a position where they can, yet only around 100-300 people worldwide are actually doing research in this crucial area. Why? Because there are almost no AIS researcher jobs available due to AIS research organizations facing difficult constraints to scaling up. Luckily there is another way to grow the research field: having more people do independent research (where a self-employed individual gets a grant, usually from a fund).There is, however, a key problem: becoming and being a good independent AIS researcher is currently very difficult. It requires a lot of qualities which have nothing to do with being able to do good research: you have to be proactive, pragmatic, social, good enough at fundraising, very good at self-management and willing to take major career risks. Catalyze Impact will take away a large part of the difficulties that come with being an independent researcher, thereby making it a suitable option for more people so they are empowered to do good AIS research.How will we help?This is the current design of the pilot - but you will help shape this further!1. Fundraising support> help promising individuals get funded to do research2. Peer support networks & mentor-matching> get feedback, receive mentorship, find collaborators, brainstorm and stay motivated rather than falling into isolation3. Accountability and coaching> have structure, stay motivated and productive4. Fiscal sponsorship: hiring funded independent researchers as ‘employees’> take away operational tasks which distract from research & help them build better career capital through institutional affiliationIn what ways would this be impactful?Alleviating a bottleneck for scaling the AIS research field by making independent research suitable for more people: it seems that we need a lot more people to be working on solving alignment. However, talented individuals who have invested in upskilling themselves to go do AIS research (e.g. SERI MATS graduates) are largely unable to secure research positions. This is oftentimes not because they are not capable enough of doing the research, but because there are simply too few positions available (see footnote). 
Because of this, many of these talented individuals are left with a few sub-optimal options. 1) try to do research/a PhD in a different academic field in hopes that it will make them a better AIS researcher in the future, 2) take a job working on AI capabilities (!), or 3) try to become an independent AIS researcher.For many people, independent research (i.e. without this incubator) is not a good & viable option because being an independent researcher brings a lot of difficulties with it and arran... |
May 27, 2023 |
EA - [Linkpost] Longtermists Are Pushing a New Cold War With China by Mohammad Ismam Huda
01:32
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Longtermists Are Pushing a New Cold War With China, published by Mohammad Ismam Huda on May 27, 2023 on The Effective Altruism Forum.Jacob Davis, a writer for the socialist political magazine Jacobin, raises an interesting concern about how current longtermist initiatives in AI Safety are, in his assessment, escalating tensions between the US and China. This highlights a conundrum for the Effective Altruism movement, which seeks to advance both AI Safety and avoid a great power conflict between the US and China.This is not the first time this conundrum has been raised; it has been explored on the forum previously by Stephen Clare.The key points Davis asserts are that:Longtermists have been key players in President Biden’s choice last October to place heavy controls on semiconductor exports.Key longtermist figures advancing export controls and hawkish policies against China include former Google CEO Eric Schmidt (through Schmidt Futures and the longtermist political fund Future Forward PAC), former congressional candidate and FHI researcher Carrick Flynn, as well as other longtermists in key positions at Georgetown Center for Security and Emerging Technology and the RAND Corporation.Export controls have failed to limit China's AI research, but have wrought havoc on global supply chains and are seen as protectionist in some circles.I hope this linkpost opens up a debate about the merits and weaknesses of current strategies and views in longtermist circles.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 27, 2023 |
EA - How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate by jackva
24:22
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate, published by jackva on May 27, 2023 on The Effective Altruism Forum. Published on May 26, 2023 5:30 PM GMT. 1/ Introduction 2/ The value proposition and mission of FP Climate 3/ Methodological choices and their underlying rationale 4/ Projects 5/ Grantmaking 6/ Backlog 7/ Conclusion AcknowledgmentsWe recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager, job ad for the latter coming soon, for now reach out to sally@founderspledge.com).Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond.As discussed in the next section, if you are not interested in climate, but in EA grantmaking research in general, we think it still might be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we continually try to build tools that are useful for applications outside the narrow cause area work – for example, some of the methodology work on impact multipliers has also been helpful for work in other areas, such as global catastrophic risks (here, as well as FP's Christian Ruhl's upcoming report on the nuclear risk landscape) and air pollution. Another way to put this is that we think of our climate work as one example of an effective altruist research and grantmaking program in a “high-but-not-maximal-uncertainty” environment, facing and attacking similar epistemic and methodological problems as, say, work on great power war, or risk-neutral current generations work. We will come back to this throughout the piece.In what follows, this update is organized as follows: We first describe the fundamental value proposition and mission of FP Climate (Section 2). We then discuss, at a high level, the methodological principles that flow from this mission (Section 3), before making this much more concrete with the discussion of three of the furthest developed research projects putting this into action (Section 4). This is the bulk of this methodology-focused update. We then briefly discuss grantmaking plans (Section 5) and backlog (Section 6) before concluding (Section 7).As part of Founders Pledge’s research team, the fundamental goal of FP Climate is to provide donors interested in maximizing the impact of their climate giving with a convenient vehicle to do so – the Founders Pledge Climate Fund. Crucially, and this is often misunderstood, our goal is not to serve arbitrary donor preferences but rather to guide donors to the most impactful opportunities available.
Taking caring about climate as given, we seek to answer the effective altruist question of what to prioritize.We are conceiving of FP Climate as a research-based grantmaking program to find and fund the best opportunities to reduce climate damage.We believe that at the heart of this effort has to be a credible comparative methodology to estimate relative expected impact, fit for purpose to the field of climate, where there is a layer of uncertainties about society, economy, techno-economic factors, and the climate system, as well as a century-spanning global decarbonization effort. This is so because we are in a situation where causal effects and theories of change are often indirect and uncertainty is often irreducible on relevant time-frames (we discuss this more in our recent 80K Podcast (throughout links to 80K link to specific sections of the transcript), as well as Volts, and in our Changing Landscape report).While we have been building toward...
|
May 27, 2023 |
EA - How to evaluate relative impact in high-uncertainty contexts? An update on research methodology and grantmaking of FP Climate by jackva
30:34
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to evaluate relative impact in high-uncertainty contexts? An update on research methodology & grantmaking of FP Climate, published by jackva on May 26, 2023 on The Effective Altruism Forum.1/ IntroductionWe recently doubled our full-time climate team (hi Megan!), and we are just going through another doubling (hiring a third researcher, as well as a climate communications manager, job ad for the latter coming soon, for now reach out to sally@founderspledge.com).Apart from getting a bulk rate for wedding cake, we thought this would be a good moment to update on our progress and what we have in the pipeline for the next months, both in terms of research to be released as well as grantmaking with the FP Climate Fund and beyond.As discussed in the next section, if you are not interested in climate, but in EA grantmaking research in general, we think it still might be interesting reading. Being part of Founders Pledge and the effective altruist endeavor at large, we continually try to build tools that are useful for applications outside the narrow cause area work – for example, some of the methodology work on impact multipliers has also been helpful for work in other areas, such as global catastrophic risks (here, as well as FP's Christian Ruhl's upcoming report on the nuclear risk landscape) and air pollution. Another way to put this is that we think of our climate work as one example of an effective altruist research and grantmaking program in a “high-but-not-maximal-uncertainty” environment, facing and attacking similar epistemic and methodological problems as, say, work on great power war, or risk-neutral current generations work. We will come back to this throughout the piece.In what follows, this update is organized as follows: We first describe the fundamental value proposition and mission of FP Climate (Section 2). We then discuss, at a high level, the methodological principles that flow from this mission (Section 3), before making this much more concrete with the discussion of three of the furthest developed research projects putting this into action (Section 4). This is the bulk of this methodology-focused update. We then briefly discuss grantmaking plans (Section 5) and backlog (Section 6) before concluding (Section 7).2/ The value proposition and mission of FP ClimateAs part of Founders Pledge’s research team, the fundamental goal of FP Climate is to provide donors interested in maximizing the impact of their climate giving with a convenient vehicle to do so – the Founders Pledge Climate Fund. Crucially, and this is often misunderstood, our goal is not to serve arbitrary donor preferences but rather to guide donors to the most impactful opportunities available. Taking caring about climate as given, we seek to answer the effective altruist question of what to prioritize.We are conceiving of FP Climate as a research-based grantmaking program to find and fund the best opportunities to reduce climate damage.We believe that at the heart of this effort has to be a credible comparative methodology to estimate relative expected impact, fit for purpose to the field of climate, where there is a layer of uncertainties about society, economy, techno-economic factors, and the climate system, as well as a century-spanning global decarbonization effort.
This is so because we are in a situation where causal effects and theories of change are often indirect and uncertainty is often irreducible on relevant time-frames (we discuss this more in our recent 80K Podcast (throughout links to 80K link to specific sections of the transcript), as well as Volts, and in our Changing Landscape report).While we have been building towards such a methodology since 2021, our recent increase in resourcing is quickly narrowing the gap between aspiration and reality. Before describing some exe... |
May 27, 2023 |
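A minimal Monte Carlo sketch of what "estimating relative expected impact under irreducible uncertainty" can look like mechanically; the lognormal shapes and every parameter are invented for illustration and are not FP Climate's actual model:

```python
# Compare two hypothetical climate grants on expected cost-effectiveness under uncertainty.
import math
import random

random.seed(0)
N = 100_000

def cost_effectiveness_draw(median, sigma):
    # One draw of tonnes CO2e averted per $1,000, with multiplicative uncertainty sigma.
    return random.lognormvariate(math.log(median), sigma)

grant_a = [cost_effectiveness_draw(median=1.0, sigma=1.5) for _ in range(N)]  # hypothetical high-variance policy bet
grant_b = [cost_effectiveness_draw(median=0.8, sigma=0.5) for _ in range(N)]  # hypothetical safer deployment grant

mean_a, mean_b = sum(grant_a) / N, sum(grant_b) / N
p_a_beats_b = sum(a > b for a, b in zip(grant_a, grant_b)) / N
print(f"E[A] = {mean_a:.2f}, E[B] = {mean_b:.2f} (tCO2e per $1,000)")
print(f"P(A beats B on a given draw) = {p_a_beats_b:.0%}")
```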
EA - Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI by titotal
28:35
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI, published by titotal on May 26, 2023 on The Effective Altruism Forum.[In this post I discuss some of my field of expertise in computational physics. Although I do my best to make it layman friendly, I can't guarantee as such. In later parts I speculate about other fields such as brain simulation and bioweapons, note that I am not an expert in these subjects.]In a previous post, I argued that a superintelligence that only saw three frames of a webcam would not be able to deduce all the laws of physics, specifically general relativity and Newtonian gravity. But this specific scenario would only apply to certain forms of boxed AI.Any AI that can read the internet has a very easy way to deduce general relativity and all our other known laws of physics: look it up on Wikipedia. All of the fundamental laws of physics relevant to day to day life are on there. An AGI will probably need additional experiments to deduce a fundamental theory of everything, but you don’t need that to take over the world. The AI in this case will know all the laws of physics that are practically useful.Does this mean that an AGI can figure out anything?There is a world of difference between knowing the laws of physics, and actually using the laws of physics in a practical manner. The problem is one that talk of “Solomonoff induction” sweeps under the rug: Computational time is finite. And not just that. Compared to some of the algorithms we’d like to pull off, computational time is minuscule.Efficiency or deathThe concept of computational efficiency is at the core of computer science. The running of computers costs time and money. If we are faced with a problem, we want an algorithm to find the right answer. But just as important is figuring out how to find the right answer in the least amount of time.If your challenge is “calculate pi”, getting the exact “right answer” is impossible, because there are an infinite number of digits. At this point, we are instead trying to find the most accurate answer we can get for a given amount of computational resources.This is also applicable to NP-hard problems. Finding the exact answer to the travelling salesman problem for large networks is impossible within practical resource limits (assuming P ≠ NP). What is possible is finding a pretty good answer. There’s no efficient algorithm for getting the exact right route, but there is one for guaranteeing you are within 50% of the right answer.When discussing AI capabilities, the computational resources available to the AI are finite and bounded. Balancing accuracy with computational cost will be fundamental to a successful AI system. Imagine an AI that, when asked a simple question, starts calculating an exact solution that would take a decade to finish. We’re gonna toss this AI in favor of one that gives a pretty good answer in practical time.This principle goes double for secret takeover plots. If computer model A spends half its computational resources modelling proteins, while computer model B doesn’t, computer model A is getting deleted. Worse, the engineers might start digging into why model A is so slow, and get tipped off to the plot. All this is just to say: computational cost matters.
A lot.A taste of computational physicsIn this section, I want to give you a taste of what it actually means to do computational physics. I will include some equations for demonstration, but you do not need to know much math to follow along. The subject will be a very highly studied problem in my field called the “band gap problem”.“Band gap” is one of the most important material properties in semiconductor physics. It describes whether there is a slice of possible energy values that are forbidden ... |
May 27, 2023 |
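A toy illustration of the accuracy-versus-compute point, using the post's own "calculate pi" example: two convergent series evaluated under the same term budget, showing that algorithm choice can matter as much as raw compute. This is only a reader's sketch, not an example from the post:

```python
# Two series for pi evaluated with the same number of terms: the "best answer for a
# given compute budget" depends heavily on which algorithm you pick.
import math

def leibniz_pi(n_terms):
    # pi = 4 * (1 - 1/3 + 1/5 - ...), converges very slowly
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

def nilakantha_pi(n_terms):
    # pi = 3 + 4/(2*3*4) - 4/(4*5*6) + ..., converges much faster
    s = 3.0
    for k in range(1, n_terms + 1):
        s += (-1) ** (k + 1) * 4 / ((2 * k) * (2 * k + 1) * (2 * k + 2))
    return s

for budget in (10, 1_000, 100_000):
    print(f"{budget:>7} terms | Leibniz error {abs(leibniz_pi(budget) - math.pi):.1e} | "
          f"Nilakantha error {abs(nilakantha_pi(budget) - math.pi):.1e}")
```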
EA - Ingroup Deference by trammell
33:02
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ingroup Deference, published by trammell on May 26, 2023 on The Effective Altruism Forum.Epistemic status: yes. All about epistemicsIntroductionIn principle, all that motivates the existence of the EA community is collaboration around a common goal. As the shared goal of preserving the environment characterizes the environmentalist community, say, EA is supposed to be characterized by the shared goal of doing the most good. But in practice, the EA community shares more than just this abstract goal (let’s grant that it does at least share the stated goal) and the collaborations that result. It also exhibits an unusual distribution of beliefs about various things, like the probability that AI will kill everyone or the externalities of polyamory.My attitude has long been that, to a first approximation, it doesn’t make sense for EAs to defer to each other’s judgment any more than to anyone else’s on questions lacking consensus.When we do, we land in the kind of echo chamber which convinced environmentalists that nuclear power is more dangerous than most experts think, and which at least to some extent seems to have trapped practically every other social movement, political party, religious community, patriotic country, academic discipline, and school of thought within an academic discipline on record. This attitude suggests the following template for an EA-motivated line of strategy reasoning, e.g. an EA-motivated econ theory paper:Look around at what most people are doing. Assume you and your EA-engaged readers are no more capable or better informed than others are, on the whole; take others’ behavior as a best guess on how to achieve their own goals.Work out [what, say, economic theory says about] how to act if you believe what others believe, but replace the goal of “what people typically want” with some conception of “the good”.And so a lot of my own research has fit this mold, including the core of my work on “patient philanthropy”[1, 2] (if we act like typical funders except that we replace the rate of pure time preference with zero, here’s the formula for how much higher our saving rate should be). The template is hardly my invention, of course. Another example would be Roth Tran’s (2019) paper on “mission hedging” (if a philanthropic investor acts like a typical investor except that they’ll be spending the money on some cause, instead of their own consumption, here’s the formula for how they should tweak how they invest). Or this post on inferring AI timelines from interest rates and setting one’s philanthropic strategy accordingly.But treating EA thought as generic may not be a good first approximation.Seeing the “EA consensus” be arguably ahead of the curve on some big issues—Covid a few years ago, AI progress more recently—raises the question of whether there’s a better heuristic: one which doesn’t treat these cases as coincidences, but which is still principled enough that we don’t have to worry too much about turning the EA community into [more of] an echo chamber all around. This post argues that there is. The gist is simple. If you’ve been putting in the effort to follow the evolution of EA thought, you have some “inside knowledge” of how it came to be what it is on some question.
(I mean this not in the sense that the evolution of EA thinking is secret, just in the sense that it’s somewhat costly to learn.)If this costly knowledge informs you that EA beliefs on some question are unusual because they started out typical and then updated in light of some idiosyncratic learning, e.g. an EA-motivated research effort, then it’s reasonable for you to update toward them to some extent. On the other hand, if it informs you that EA beliefs on some question have been unusual from the get-go, it makes sense to update the other way, toward the distribution o... |
May 26, 2023 |
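One toy way to read the post's heuristic in Bayesian terms (the framing and all numbers are this sketch's, not the author's): how far you move toward an unusual community belief depends on how much independent evidence you think produced that belief:

```python
# Odds-form Bayes update toward a community's belief, with the likelihood ratio standing
# in for "how much evidence does their belief carry?" Both ratios below are invented.
def posterior(prior, likelihood_ratio):
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

my_prior = 0.10  # my own credence in some contested claim, before seeing the community's view

for label, lr in [("belief formed after idiosyncratic research effort", 5.0),
                  ("belief unusual from the get-go (founder effect)", 1.0)]:
    print(f"{label}: posterior = {posterior(my_prior, lr):.0%}")
```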
EA - EA cause areas are likely power-law distributed too by Stian
04:01
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA cause areas are likely power-law distributed too, published by Stian on May 25, 2023 on The Effective Altruism Forum.So there are two pieces of common effective altruist thinking that I think are in tension with each other, but for which a more sophisticated version of a similar view makes sense and dissolves that tension. This means that in my experience, people can see others holding the more sophisticated view, and adopting the simple one without really examining them (and discovering this tension).This proposed tension is between two statements/beliefs. The first is the common (and core!) community belief that the impact of different interventions is power-law distributed. This means that the very best intervention is several times more impactful than the almost-best ones. The second is a statement or belief along the lines of "I am so glad someone has done so much work thinking about which areas/interventions would have the most impact, as that means my task of choosing among them is easier", or the extreme one which continues "as that means I don't have to think hard about choosing among them." I will refer to this as the uniform belief.Now, there is on the face of it many things to take issue with in how I phrased the uniform belief[1], but I want to deal with two things. 1) I think the uniform belief is a ~fairly common thing to "casually" believe - it is a belief that is easy to automatically form after cursorily engaging with EA topics - and 2) it goes strictly against the belief regarding the power-law distribution of impact.On a psychological level, I think people can come to hold the uniform belief when they fail to adequately reflect on and internalise that interventions are power-law distributed. Because once they do, the tension between the power-law belief and the uniform belief becomes clear. If the power-law (or simply a right-skewed distribution) holds, then even among the interventions and cause areas already identified, their true impact might be very different from each other. We just don’t know which ones have the highest impact.The holding of the uniform belief is a trap that I think people who don't reflect too heavily can fall into, and which I know because I was in it myself for a while - making statements like "Can't go wrong with choosing among the EA-recommended topics". Now I think you can go wrong in choosing among them, and in many different ways. To be clear, I don't think too many people stay in this trap for too long - EA has good social mechanisms for correcting others' beliefs [2] and I would think that it is often caught early. But it is the kind of thing that I am afraid new or cursory EAs might come away permanently believing: that someone else has already done all of the work of figuring out which interventions are the best.The more sophisticated view, and which I think is correct, is that because no one knows ex ante the "true" impact of an intervention, or the total positive consequences of work in an area, you personally cannot, before you start doing the difficult work of figuring out what you think, know which of the interventions you will end up thinking is the most important one. So at first blush - at first encounter with the 80k problem profiles, or whatever - it is fine to think that all the areas have equal expected impact [3]. 
You probably won't come in thinking this - because you have some prior knowledge - but it would be fine to think. What is not fine would be to (automatically, unconsciously) go on to directly choose a career path among them without figuring out what you think is important, what the evidence for each problem area is, and which area you would be a good personal fit for.So newcomers see that EA has several problem areas, and are looking at a wide selection of possible interventions, ... |
May 26, 2023 |
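A quick numerical illustration of the tension the post describes: if impact across already-identified interventions is power-law distributed, a uniform random pick among them leaves a lot of expected impact on the table. The Pareto shape parameter and the number of interventions are arbitrary choices, not estimates from the post:

```python
# Simulate many "worlds" of power-law-distributed intervention impacts and compare
# picking uniformly at random with picking the best one.
import random
import statistics

random.seed(1)
alpha, n_interventions, n_worlds = 1.5, 20, 10_000  # arbitrary illustrative parameters

random_pick, best_pick = [], []
for _ in range(n_worlds):
    impacts = [random.paretovariate(alpha) for _ in range(n_interventions)]
    random_pick.append(random.choice(impacts))
    best_pick.append(max(impacts))

ratio = statistics.mean(best_pick) / statistics.mean(random_pick)
print(f"best intervention has ~{ratio:.0f}x the expected impact of a uniform random pick")
```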
EA - It is good for EA funders to have seats on boards of orgs they fund [debate] by Nathan Young
01:55
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It is good for EA funders to have seats on boards of orgs they fund [debate], published by Nathan Young on May 25, 2023 on The Effective Altruism Forum.It has come to my attention that many people (including my past self) think that it's bad for funders to sit on the boards of orgs they fund. Eg someone at OpenPhil being the lead decision maker on a grant and then sitting on the board of that org.Let's debate thisSince I said this, several separate people I always update to, including a non-EA, said this is trivially wrong. It is typical practice with good reason:EA is not doing something weird and galaxy-brained here. Particularly in America this is normal practiceHaving a board seat ensures that your funding is going where you want and might allow you to fund with fewer other strings attachedIt allows funder oversight. They can ask the relevant questions at the time rather than in some funding meetingPerhaps you might think that it causes funders to become too involved, but I dunno. And this is clearly a different argument than the standard "EA is doing something weird and slightly nepotistic"To use the obvious examples, it is therefore good that Claire Zabel sits on whatever boards she sits on of orgs OP funds. And reasonable that OpenPhil considered funding OpenAI as a way to get a board seat (you can disagree with the actual cost benefit but there was nothing bad normsy about doing it)Do you buy my arguments? Please read the comments to this article also, then vote in this anonymous poll.And now you can bet and then make your argument to try and shift future respondents and earn mana for doing so.This market resolves in a month to the final agree % + weakly agree % of the above poll. Hopefully we can see it move in real time if someone makes a convincing argument.I think this is a really cool real time debate format and we should have it at EAG. Relevant docThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 26, 2023 |
EA - Will AI end everything? A guide to guessing | EAG Bay Area 23 by Katja Grace
28:18
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Will AI end everything? A guide to guessing | EAG Bay Area 23, published by Katja Grace on May 25, 2023 on The Effective Altruism Forum.Below is the video and transcript for my talk from EA Global, Bay Area 2023. It's about how likely AI is to cause human extinction or the like, but mostly a guide to how I think about the question and what goes into my probability estimate (though I do get to a number!)The most common feedback I got for the talk was that it helped people feel like they could think about these things themselves rather than deferring. Which may be a modern art type thing, like "seeing this, I feel that my five year old could do it", but either way I hope this empowers more thinking about this topic, which I view as crucially important.You can see the slides for this talk hereIntroductionHello, it's good to be here in Oakland. The first time I came to Oakland was in 2008, which was my first day in America. I met Anna Salamon, who was a stranger and who had kindly agreed to look after me for a couple of days. She thought that I should stop what I was doing and work on AI risk, which she explained to me. I wasn't convinced, and I said I'd think about it; and I've been thinking about it. And I'm not always that good at finishing things quickly, but I wanted to give you an update on my thoughts.Two things to talk aboutBefore we get into it, I want to say two things about what we're talking about. There are two things in this vicinity that people are often talking about. One of them is whether artificial intelligence is going to literally murder all of the humans. And the other one is whether the long-term future – which seems like it could be pretty great in lots of ways – whether humans will get to bring about the great things that they hope for there, or whether artificial intelligence will take control of it and we won't get to do those things.I'm mostly interested in the latter, but if you are interested in the former, I think they're pretty closely related to one another, so hopefully there'll also be useful things.The second thing I want to say is: often people think AI risk is a pretty abstract topic. And I just wanted to note that abstraction is a thing about your mind, not the world. When things happen in the world, they're very concrete and specific, and saying that AI risk is abstract is kind of like saying World War II is abstract because it's 1935 and it hasn't happened yet. Now, if it happens, it will be very concrete and bad. It'll be the worst thing that's ever happened. The rest of the talk's gonna be pretty abstract, but I just wanted to note that.A picture of the landscape of guessingSo this is a picture. You shouldn't worry about reading all the details of it. It's just a picture of the landscape of guessing [about] this, as I see it. There are a bunch of different scenarios that could happen where AI destroys the future. There’s a bunch of evidence for those different things happening. You can come up with your own guess about it, and then there are a bunch of other people who have also come up with guesses.I think it's pretty good to come up with your own guess before, or at some point separate to, mixing it up with everyone else's guesses. I think there are three reasons that's good.First, I think it's just helpful for the whole community if numerous people have thought through these things. 
I think it's easy to end up having an information cascade situation where a lot of people are deferring to other people.Secondly, I think if you want to think about any of these AI risk-type things, it's just much easier to be motivated about a problem if you really understand why it's a problem and therefore really believe in it.Thirdly, I think it's easier to find things to do about a problem if you understand exactly why it's a p... |
May 26, 2023 |
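One common way to structure the kind of guess this talk walks through is to break "AI takes over the future" into conditional steps and multiply them. The step names and numbers below are placeholders for the reader to replace, not Katja Grace's own decomposition or figures:

```python
# Multiply hypothetical conditional probabilities along one path to a takeover scenario.
# Every step and number here is a placeholder to be replaced with your own guesses.
steps = {
    "superhuman AI systems are built this century": 0.8,
    "they have goals in the relevant agentic sense": 0.5,
    "their goals are bad by human lights": 0.4,
    "bad-goal systems end up controlling the future": 0.4,
}

p = 1.0
for step, conditional_p in steps.items():
    p *= conditional_p
    print(f"P(... and {step}) = {p:.2f}")
print(f"overall guess: {p:.0%}")
```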
EA - Introducing Allied Scholars for Animal Protection by Dr Faraz Harsini
07:39
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Allied Scholars for Animal Protection, published by Dr Faraz Harsini on May 24, 2023 on The Effective Altruism Forum.We’re excited to introduce Allied Scholars for Animal Protection, a nonprofit creating a unified infrastructure for effective and sustainable animal advocacy at universities. Our mission is to organize, train, and mentor students who are interested in advocating for animal welfare and pursuing impactful careers.The ProblemUniversities play a critical role in shaping the future of society and effecting systemic change. As future leaders, college students hold immense potential for driving progress and cultural transformation.Unfortunately, animal advocacy in universities tends to be limited, sporadic, and unsustainable. The existing clubs on campuses operate independently with no coordination, and students are often hindered by a lack of time, training, experience, and support. Often, when active students graduate, their animal advocacy clubs become inactive. Much time and effort are wasted due to a lack of continuity and longevity of animal advocacy on campuses because students have to reinvent the proverbial wheel each time they restart a group.One of the worst consequences of this is that much talent goes untapped due to insufficient education and inspiration for vegans to choose effective and impactful careers. The EA Community is working hard to reach this talent through career advising and community building. We believe that on-the-ground support for university animal rights clubs can complement EA recruitment efforts and can encourage vegan college students to engage with critical intellectual work being done by the EA community. We also think that community building work focused specifically on animal advocacy can help reach vegans who might not be as interested in other cause areas or the broader EA project.Some animal advocacy organizations provide opportunities for students to volunteer, but enabling a strong campus movement is not the sole focus of these organizations. Having a single organization dedicated to providing infrastructure for campus activism would therefore fill a highly neglected niche in the current animal advocacy ecosystem.Building a strong campus animal advocacy movement is also highly tractable. There are many vegan students out there who care deeply about these issues but do not feel they have the knowledge or resources to organize a group of their own. By providing the needed support, we can dramatically lower the barrier of entry to vegan advocacy and broaden the pool of talent going towards highly impactful careers.Our ApproachAnimal organizations often focus on training individual students rather than on building a sustainable vegan community. ASAP takes a more holistic approach. Our strategy for constructing a strong campus animal movement involves the following:Building a growing vegan community while investing in and strengthening individuals. This means conducting outreach to vegans who might like to become more active as advocates, and training vegans to conduct effective outreach to nonvegans.Providing on-the-ground support to student groups.Collecting thousands of signatures through petitions.Streamlining the process of starting and running student animal advocacy groups.Facilitating systemic and long-term educational frameworks, rather than just one-time events. 
We will provide educational seminars to empower vegans and educate the general student population, with a special emphasis on plant-based nutrition for future healthcare professionals.Fighting speciesism and humane-washing while promoting plant-based options.By facilitating more effective student advocacy, we believe ASAP can help produce more influential vegans who push for change. We want to inspire the next Eric Adams, Co... |
May 25, 2023 |
EA - How I solved my problems with low energy (or: burnout) by Luise
21:42
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How I solved my problems with low energy (or: burnout), published by Luise on May 24, 2023 on The Effective Altruism Forum.I had really bad problems with low energy and tiredness for about 2 years. This post is about what I’ve learned. This is not a general guide to solving any and all energy problems since I mostly only have one data point. I still hope reading about my personal problems and solutions will help people solve theirs.SummaryI had basically two periods of very low energy: college and last summer.In college, I felt tired literally all day, especially as soon as I tried to study. I was also depressed.In the summer, I was very happy but I had days at a time where I wouldn’t do anything productive. All tasks seemed unbearably hard to me, sometimes even writing a simple email. I also became introverted.I thought I was being lazy and just needed to “get over it”. Starting to notice I had a ‘real’ problem was a big step forward.I learned that I actually had multiple hard-to-disentangle problems:I’m sensitive to disruptions in my sleep leading to feeling tired.Certain types of work that are both hard and demotivating also make me feel physically tired.My biggest realization was that I was burned out much of last summer. This was because I didn’t give myself rest, even though I didn’t see it that way at the time. This led to the unproductive days (not laziness).In college, I lived a weird lifestyle regarding sleep, social life, and other things. Some part of this was probably bad. Having common sense would’ve helped.I can now notice symptoms of overloading myself before it leads to burnout. Learning to distinguish this from “being lazy” phenomenologically was crucial.My problems had nothing to do with physical health or stress.When experimenting to solve my problems, it was useful for me to track when I had unproductive days. This way I could be sure how much the experiments impacted me.What my problems were like (so you know whether they’re similar to yours)A typical low-energy day while I was in college in first year:I wake up at 12 pm. I slept 9 hours but I’m tired. It doesn’t go away even after an hour. I open my math book. But literally as soon as I read the first sentences, I feel so tired that I physically want to lie down and close my eyes. It feels very hard to keep reading. Often I just stare at the wood of the table right next to my book. Not doing anything, just avoiding thinking. Even staring at the wall for 10 minutes sounds great right now. I never really stop feeling tired until it’s night again.A typical low-energy day while I was working on EA community building projects in the summer:I have to do a task I usually love doing, maybe reading applications for an event I’m running. But as soon as I look at the wall of Airtable fields and text, the task feels way too large. I will have to think deeply about these answers people wrote in the application form and make difficult decisions, drawing on information from over 20 fields. That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket. Even sitting upright in an office chair feels hard. I start reading. The text requires slight cognitive effort on the part of the reader to understand.
It sounds too hard. I stare at a sentence, willing myself to think. I give up after 3 sentences.It’s lunchtime. I used to love lunchtime at the office because I get to chat with all these super cool people and because I’m quite extraverted. But now the idea of a group conversation sounds way too much. I don’t even want to chat to a single person. I would have to be ‘switched on’, think of things to say, smile, and I just don’t have... |
May 25, 2023 |
EA - AGI Catastrophe and Takeover: Some Reference Class-Based Priors by zdgroff
12:40
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Catastrophe and Takeover: Some Reference Class-Based Priors, published by zdgroff on May 24, 2023 on The Effective Altruism Forum.This is a linkpost forI am grateful to Holly Elmore, Michael Aird, Bruce Tsai, Tamay Besiroglu, Zach Stein-Perlman, Tyler John, and Kit Harris for pointers or feedback on this document.Executive SummaryOverviewIn this document, I collect and describe reference classes for the risk of catastrophe from superhuman artificial general intelligence (AGI). On some accounts, reference classes are the best starting point for forecasts, even though they often feel unintuitive. To my knowledge, nobody has previously attempted this for risks from superhuman AGI. This is to a large degree because superhuman AGI is in a real sense unprecedented. Yet there are some reference classes or at least analogies people have cited to think about the impacts of superhuman AI, such as the impacts of human intelligence, corporations, or, increasingly, the most advanced current AI systems.My high-level takeaway is that different ways of integrating and interpreting reference classes generate priors on AGI-caused human extinction by 2070 anywhere between 1/10000 and 1/6 (mean of ~0.03%-4%). Reference classes offer a non-speculative case for concern with AGI-related risks. On this account, AGI risk is not a case of Pascal’s mugging, but most reference classes do not support greater-than-even odds of doom. The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%). The latter is a distinctive result of the reference class exercise: the expected degree of AGI control over the world looks to far exceed the odds of human extinction by a sizable margin on these priors. The extent of existential risk, including permanent disempowerment, should fall somewhere between these two ranges.This effort is a rough, non-academic exercise and requires a number of subjective judgment calls. At times I play a bit fast and loose with the exact model I am using; the work lacks the ideal level of theoretical grounding. Nonetheless, I think the appropriate prior is likely to look something like what I offer here. I encourage intuitive updates and do not recommend these priors as the final word.ApproachI collect sets of events that superhuman AGI-caused extinction or takeover would be plausibly representative of, ex ante. Interpreting and aggregating them requires a number of data collection decisions, the most important of which I detail here:For each reference class, I collect benchmarks for the likelihood of one or two things:Human extinctionAI capture of humanity’s available resources.Many risks and reference classes are properly thought of as annualised risks (e.g., the yearly chance of a major AI-related disaster or extinction from asteroid), but some make more sense as risks from a one-time event (e.g., the chance that the creation of a major AI-related disaster or a given asteroid hit causes human extinction). For this reason, I aggregate three types of estimates (see the full document for the latter two types of estimates):50-Year Risk (e.g. risk of a major AI disaster in 50 years)10-Year Risk (e.g. risk of a major AI disaster in 10 years)Risk Per Event (e.g. 
risk of a major AI disaster per invention)Given that there are dozens or hundreds of reference classes, I summarise them in a few ways:Minimum and maximumWeighted arithmetic mean (i.e., weighted average)I “winsorise”, i.e. replace 0 or 1 with the next-most extreme value.I intuitively downweight some reference classes. For details on weights, see the methodology.Weighted geometric meanFindings for Fifty-Year Impacts of Superhuman AISee the full document and spreadsheet for further details on how I arrive at these figures.... |
May 25, 2023 |
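The summary above describes an aggregation pipeline: winsorise extreme reference-class values, then take weighted arithmetic and geometric means. The sketch below only illustrates that arithmetic; the probabilities, weights, and function names are hypothetical placeholders, not zdgroff's actual data, spreadsheet, or code.

```python
# Illustrative sketch of the aggregation described above: winsorise a set of
# reference-class priors, then take weighted arithmetic and geometric means.
# The numbers are hypothetical placeholders, not the post's actual estimates.
import math

def winsorise(probs):
    """Replace 0 or 1 with the next-most extreme value in the list."""
    inner = [p for p in probs if 0 < p < 1]
    lo, hi = min(inner), max(inner)
    return [lo if p == 0 else hi if p == 1 else p for p in probs]

def weighted_arithmetic_mean(probs, weights):
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def weighted_geometric_mean(probs, weights):
    total_w = sum(weights)
    return math.exp(sum(w * math.log(p) for p, w in zip(probs, weights)) / total_w)

# Hypothetical 50-year extinction priors from four reference classes, with
# intuitive downweighting expressed as smaller weights.
priors = winsorise([0.0, 0.0001, 0.01, 0.17])
weights = [0.5, 1.0, 1.0, 0.25]

print(weighted_arithmetic_mean(priors, weights))  # ~0.019 with these placeholder numbers
print(weighted_geometric_mean(priors, weights))   # ~0.001, pulled down by the small entries
```

Because a geometric mean is dominated by the smallest entries, it typically lands well below the arithmetic mean, which is one reason the post reports a range of mean values rather than a single figure.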
EA - New s-risks audiobook available now by Alistair Webster
00:58
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New s-risks audiobook available now, published by Alistair Webster on May 24, 2023 on The Effective Altruism Forum.Tobias Baumann's first-of-its-kind introduction to s-risks, Avoiding the Worst: How to Prevent a Moral Catastrophe is now available to listen to for free.Professionally narrated by Adrian Nelson, the full audiobook is out now on Audible and other audiobook stores. Additionally, a captioned video can be listened to for free on the CRS YouTube channel.Running at just 2 hours and 40 minutes, the audiobook packs in a comprehensive introduction to the topic, explaining what s-risks are, whether we should focus on them, and what we can do now to reduce the likelihood of s-risks occurring.The eBook is also available in various formats, or you can read a PDF version.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 25, 2023 |
EA - KFC Supplier Sued for Cruelty by alene
01:39
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: KFC Supplier Sued for Cruelty, published by alene on May 24, 2023 on The Effective Altruism Forum.Dear EA Forum readers,The EA charity, Legal Impact for Chickens (LIC), just filed our second lawsuit!As many of you know, LIC is a litigation nonprofit dedicated to making factory-farm cruelty a liability. We focus on chickens because of the huge numbers in which they suffer and the extreme severity of that suffering.Today, we sued one of the country’s largest poultry producers and a KFC supplier, Case Farms, for animal cruelty.The complaint comes on the heels of a 2021 undercover investigation by Animal Outlook, revealing abuse at a Morganton, N.C. Case Farms hatchery that processes more than 200,000 chicks daily.Our lawsuit attacks the notion that Big Ag is above the law. We are suing under North Carolina's 19A statute, which lets private parties enjoin animal cruelty.Case Farms was documented knowingly operating faulty equipment, including a machine piston which repeatedly smashes chicks to death and a dangerous metal conveyor belt which traps and kills young birds. Case Farms was also documented crushing chicks’ necks between heavy plastic trays.Case Farms supplies its chicken to KFC, Taco Bell, and Boar’s Head, among other customers.Thank you so much to all the EA Forum readers who helped make this happen, by donating to, and volunteering for, Legal Impact for Chickens!Sincerely,AleneThanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 25, 2023 |
EA - Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project by Shen Javier
10:52
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Top Idea Reports from the EA Philippines Mental Health Charity Ideas Research Project, published by Shen Javier on May 24, 2023 on The Effective Altruism Forum.In October 2021 to May 2022, EA Philippines organized the Mental Health Charity Ideas Research project. The project's goal was to find ideas that can become highly impactful and cost-effective charities in improving the well-being of people living in the Philippines and other low- to middle-income countries. It focused on children and adolescent mental health.This was a follow-up to the participation of Brian Tan and myself in Charity Entrepreneurship’s 2021 Incubation Program, in their region-specific track for training people to research the top charity ideas in a region. The project was awarded $11,000 in funding from the EA Infrastructure Fund in 2021 for 1.2 FTE in salary for the project for 8 months. Brian transitioned to being an advisor of the project early on, and AJ Sunglao was brought on as a part-time project co-lead, while two part-time researchers (Mae Muñoz, and Zam Superadble) were also hired.Links to our reportsWe already held a brown bag session last June 11, 2022 discussing the research process and introducing the top four charity ideas we found last year. Now, we share deep reports on those ideas that detail the evidence supporting their effectiveness and how one might implement the charities in the Philippines. We also share the shallow reports made for the other top mental health interventions.Access the reports here:Deep ReportsSelf-Help Workbooks for Children and Adolescents in the Philippines and Low-to-Middle-Income CountriesSchool-based Psychoeducation in the Philippines and Low-to-Middle-Income CountriesGuided Self-Help Game-based App for Adolescents in the Philippines and Low-to-Middle-Income CountriesShallow ReportsHere’s a quick guide to our top ideas:Idea NameDescriptionCost-Effectiveness ($ per unit, total costs)Self-Help Workbooks for Children and AdolescentsThis intervention will develop and distribute self-help workbooks to improve depression and anxiety symptoms in children and young adolescents, particularly 6 to 18-year-olds. Depending on the severity of mental health disorders, the workbook can be accompanied by weekly guidance by lay counselors through telephone, email, social media, or other available platforms.School-based PsychoeducationThis preventive approach entails training and supervising teachers to deliver psychoeducation on mental health topics in their respective schools. Through weekly participatory learning sessions, students would learn to apply positive coping strategies, build interpersonal skills, and/or develop personal characteristics that would empower them to care for their mental health and navigate important life transitions.Guided Self-Help Game-based App for Adolescents The intervention is a self-help game-based mobile application for help-seeking adolescents aged 12 - 19 years old. As a self-help format, the app aims to teach service users concepts and skills that will aid them in addressing MH concerns. The content of the app will be based on evidence-based therapeutic modalities. The game-based format is used to enhance service user engagement and prevent dropout. Youth-led Mental Health SupportThis intervention is a community-based intervention for adolescents aged 13-18. 
It uses task-sharing principles in delivering basic para-mental health support by training community members like SK officials and student leaders in basic mental health skills such as psychoeducation, peer counseling, and psychological first aid. The content of the training would be based on other community-based interventions like Thinking Healthy Programme, PM+, and Self Help+.$2.67 per WHO-5 improvement$85.93 per GSES improvement$69.47 per SWEMWBS impro... |
May 24, 2023 |
EA - Who does work you are thankful for? by Nathan Young
00:32
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who does work you are thankful for?, published by Nathan Young on May 23, 2023 on The Effective Altruism Forum.I think that the other side of criticism is community support. So who are you grateful is doing what they are doing?Perhaps pick people who you think don't get complimented very much or don't get complimented as much as they get criticised.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 24, 2023 |
EA - Don Efficace is hiring a CEO by Don Efficace
04:03
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don Efficace is hiring a CEO, published by Don Efficace on May 23, 2023 on The Effective Altruism Forum.Don Efficace is a French effective giving association that aims to enable donors to support charitable programs deemed to be highly effective by independent evaluators in the area of global health and poverty, with an aim to expand to include climate change, and animal welfare.Our scope and purpose are similar to those of other effective and successful national giving organizations (eg. Effektiv Spenden in Germany, Ayuda Efectiva in Spain, Doneer Effectief in the Netherlands). The costs of Don Efficace are currently funded by private donors and Giving What We Can, so 100% of common donations fund charitable programs.We are recruiting for the position of Executive Director. In this strategic role for the development of Don Efficace in France, you will have the autonomy to create your own team, and collaborate with the Board of Directors, which is composed of internationally recognized experts with experience in various fields. The main task is to develop a fundraising strategy with French donors, including the media presence. You will also be in charge of overseeing the operational aspects such as the development of the website, communication tools or means, budget, recruitment, etc.Responsibilities:Raising funds for charitable programmes with proven effectivenessEngage with the French community and the media to promote understanding of the importance of impact and the value of evidence in charitable givingInform the general public about the wide variations in effectiveness of different programs, and the ability of donors to increase their charitable impact based on evidenceWorking alongside other stakeholders (e.g., Giving What We Can, and organizations involved in charitable programmes)Developing the community of donors seeking to give effectively in FranceManaging operations in Don Efficace (budget tracking, meetings, reporting to donors, etc.)Recruiting and managing a small team of staff and volunteersThe ideal candidate would have:Strong interpersonal and communication skills, including teamwork but also convincing people to support a project you believe inAbility to work independently and take initiativeHaving a growth mindset, strategic and iterative thinkerStrong taste for fast-paced projects and small structuresStrong interest in projects that aim to make a real impactExcellent written and spoken FrenchSufficient English for written and spoken communication3-5 years of experience, ideally some in fundraising and managementOpen to the values of Effective Giving: transparency, efficiency, and an evidence-based approach to maximize positive impactSalary, benefits and location:We are flexible on the availability of candidates and can accept different formats: full time (CDI), part time, job sharing and contractual arrangements, remote work (full remote acceptable) or on-site in Paris (CET time zone), suitable for family and life commitments. 
The position requires infrequent participation in meetings compatible with different time zones (approximately 2x/month).Compensation: ~45 k€ gross per year (+ a variable part), to be negotiated according to experience and location.Application:To apply, email acristia@givingwhatwecan.org a CV (including at least two references) and cover letter explaining your fit with the job.We will review applications as we receive them. We would prefer to find someone able to start by September 2023 (but can be flexible for the right person).For any questions, contact acristia@givingwhatwecan.org.We are an equal opportunity employer and value diversity within our organization. We do not discriminate on the basis of ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, ... |
May 24, 2023 |
EA - Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot by Jim Buhler
34:40
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some governance research ideas to prevent malevolent control over AGI and why this might matter a hell of a lot, published by Jim Buhler on May 23, 2023 on The Effective Altruism Forum.Epistemic status: I spent only a few weeks reading/thinking about this. I could have asked more people to give me feedback so I could improve this piece but I’d like to move on to other research projects and thought throwing this out there was still a good idea and might be insightful to some.SummaryMany power-seeking actors will want to influence the development/deployment of artificial general intelligence (AGI). Some of them may have malevolent(-ish) preferences which they could satisfy on massively large scales if they succeed at getting some control over (key parts of the development/deployment of) AGI. Given the current rate of AI progress and dissemination, the extent to which those actors are a prominent threat will likely increase.In this post:I differentiate between different types of scenarios and give examples.I argue that 1) governance work aimed at reducing the influence of malevolent actors over AGI does not necessarily converge with usual AGI governance work – which is, as far as I know, mostly focused on reducing risks from “mere” uncautiousness and/or inefficiencies due to suboptimal decision-making processes, and 2) the expected value loss due to malevolence, specifically, might be large enough to constitute an area of priority in its own right for longtermists.I, then, list some research questions that I classify under the following categories:Breaking down the conditions for an AGI-related long-term catastrophe from malevolenceRedefining the set of actors/preferences we should worry aboutSteering clear from information/attention hazardsAssessing the promisingness of various interventionsHow malevolent control over AGI may trigger long-term catastrophes?(This section is heavily inspired by discussions with Stefan Torges and Linh Chi Nguyen. I also build on Das Sarma and Wiblin’s (2022) discussion.)We could divide the risks we should worry about into those two categories: Malevolence as a risk factor for AGI conflict and Direct long-term risks from malevolence.Malevolence as a risk factor for AGI conflictClifton et al. (2022) write:Several recent research agendas related to safe and beneficial AI have been motivated, in part, by reducing the risks of large-scale conflict involving artificial general intelligence (AGI). These include the Center on Long-Term Risk’s research agenda, Open Problems in Cooperative AI, and AI Research Considerations for Human Existential Safety (and this associated assessment of various AI research areas). As proposals for longtermist priorities, these research agendas are premised on a view that AGI conflict could destroy large amounts of value, and that a good way to reduce the risk of AGI conflict is to do work on conflict in particular.In a later post from the same sequence, they explain that one of the potential factors leading to conflict is conflict-seeking preferences (CSPs) such as pure spite or unforgivingness. While AGIs might develop CSPs by themselves in training (e.g., because there are sometimes advantages to doing so; see, e.g., Abreu and Sethi 2003), they might also inherit them from malevolent(-ish) actors. 
Such an actor would also be less likely to want to reduce the chance of CSPs arising by “accident”.This actor can be a legitimate decisive person/group in the development/deployment of AGI (e.g., a researcher at a top AI lab, a politician, or even some influencer whose opinion is highly respected), but also a spy/infiltrator or external hacker (or something in between these last two).Direct long-term risks from malevolenceFor simplicity, say we are concerned about the risk of some AGI ending up with... |
May 24, 2023 |
EA - Review of Animal Liberation Now by Richard Y Chappell
13:41
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of Animal Liberation Now, published by Richard Y Chappell on May 23, 2023 on The Effective Altruism Forum.Animal Liberation Now releases today! I received an advance copy for review, so will share some thoughts and highlights. (It feels a bit presumptuous to “review” such a classic text—obviously you should read it, no-one needs to await my verdict in order to know that—but hopefully there’s still some value in my sharing a few thoughts and highlights that stood out to me.)As Singer notes in his publication announcement, he considers it “a new book, rather than just a revision, because so much of the material in the book is new.” I’m embarrassed to admit that I never actually got around to reading the original Animal Liberation (aside from the classic first chapter, widely anthologized as ‘All Animals are Equal’, and commonly taught in intro ethics classes). So I can’t speak to any differences, except to note that the present book is very much “up to date”, focusing on describing the current state of animal experimentation and agriculture, and (in the final chapter) engaging with recent philosophical defenses of speciesism.Empirical DetailsThis book is not exactly an enjoyable read. It describes, clearly and dispassionately, humanity’s abusive treatment of other animals. It’s harrowing stuff. To give just one example, consider our treatment of broiler chickens: they have been bred to grow so large they cannot support themselves or walk without pain (p. 118):The birds may try to avoid the pain by sitting down, but they have nothing to sit on except the ammonia-laden litter, which, as we saw earlier, is so corrosive that it can burn their bodies. Their situation has been likened to that of someone with arthritic leg joints who is forced to stand up all day. [Prof.] Webster has described modern intensive chicken production as “in both magnitude and severity, the single most severe, systematic example of man’s inhumanity to another sentient animal.”Their parents—breeder birds—are instead starved to keep their weight at a level that allows mating to occur, and for the birds to survive longer—albeit in a state of hunger-induced aggression and desperation. In short, we’ve bred these birds to be physically incapable of living happy, healthy lives. It’s abominable.Our treatment of dairy cows is also heartbreaking:Dairy producers must ensure that their cows become pregnant every year, for otherwise their milk will dry up. Their babies are taken from them at birth, an experience that is as painful for the mother as it is terrifying for the calf. The mother often makes her feelings plain by constant calling and bellowing for her calf—and this may continue for several days after her infant calf is taken away. Some female calves will be reared on milk substitutes to become replacements of dairy cows when they reach the age, at around two years, when they can produce milk. Some others will be sold at between one to two weeks of age to be reared for beef in fattening pens or feedlots. The remainder will be sold to veal producers. (p. 155)A glimmer of hope is offered in the story of niche dairy farms that produce milk “without separating the calves from their mothers or killing a single calf.” (p. 157) The resulting milk is more expensive, since the process is no longer “optimized” purely for production. 
But I’d certainly be willing to pay more to support a less evil (maybe even positively good!) treatment of farm animals. I dearly hope these products become more widespread.The book also relates encouraging legislation, especially in the EU and New Zealand, constraining the mistreatment of animals in various respects. The U.S. is more disheartening for the most part, but here’s one (slightly) positive note (p. 282):In the U.S. the joint impact of the changes in stat... |
May 24, 2023 |
EA - Save the date: EAGxVirtual 2023 by Sasha Berezhnoi
02:23
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the date: EAGxVirtual 2023, published by Sasha Berezhnoi on May 23, 2023 on The Effective Altruism Forum.EAGxVirtual 2023 will take place on November 17-19Imagine interacting with EAs from over 70 countries and learning from their unique perspectives. Imagine walking across a virtual venue and making valuable connections, all from the comfort of your own home. Imagine no visa requirements and no airports. It's about to come true this November.Vision for the conferenceOur main goal is to help attendees identify the next steps to act based on EA principles wherever they are in the world and build stronger bonds within the community.Many people living outside of major EA hubs have uncertainties about how to take action. They don't have a good understanding of the EA landscape or who to ask. There are many types of constraints: language barriers, travel restrictions, or lack of knowledge about relevant opportunities.We want to address that by facilitating valuable connections, highlighting relevant opportunities and resources, and inviting speakers who are working on concrete projects. There will be a range of talks, workshops, live Q&A sessions, office hours with experts, and facilitated networking.What to expectLast year's EAGxVirtual featured 900 participants from 75 countries and facilitated lots of connections and progress. We want to build on this success, experiment, and improve.You can expect:Action-oriented content that will be relevant to people from different contexts and locationsAlways-available virtual venue (Gathertown) for unstructured conversations, socials, and private meetingsSchedule tailored for participants from different time zonesApplication processApplications will be open in September. Sign-up here to get notified when it’s open.Admissions will not be based on prior EA engagement or EA background knowledge. We welcome all who have a genuine interest in learning more or connecting!If you are completely new to EA, we recommend signing up for the Introductory EA Program to familiarise yourself with the core ideas before applying.EAGxVirtual 2023 will be hosted by EA Anywhere with the support of the CEA Events team.We are looking forward to an inspiring conference with you!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 24, 2023 |
EA - Give feedback on the new 80,000 Hours career guide by Benjamin Hilton
02:16
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Give feedback on the new 80,000 Hours career guide, published by Benjamin Hilton on May 23, 2023 on The Effective Altruism Forum.We’ve spent the last few months updating 80,000 Hours’ career guide (which we previously released in 2017 and which you've been able to get as a physical book). Today, we’ve put our new career guide live on our website. Before we formally launch and promote the guide - and republish the book - we’d like to gather feedback from you!How can you help?Take a look at the new career guide, which you can find at 80000hours.org/career-guide/.Please bear in mind that the vast majority of people who read the 80,000 Hours website are not EAs. Rather, our target audience for this career guide is approximately the ~100k young adults most likely to have high-impact careers, in the English speaking world. In particular, many of them are not yet familiar with many of the ideas that are widely discussed in the EA community. Also, this guide is primarily aimed at people aged 18-24.When you’re ready there’s a simple form to fill in:Click here to give feedback.Thank you so much!Extra context: why are we making this change?In 2018, we deprioritised 80,000 Hours’ career guide in favour of our key ideas series.Our key ideas series had a more serious tone, and was more focused on impact. It represented our best and most up-to-date advice. We expected that this switch would reduce engagement time on our site, but that the key ideas series would better appeal to people more likely to change their careers to do good.However, the drop in engagement time which we could attribute to this change was larger than we’d expected. In addition, data from our user survey suggested that people who changed their careers were more, not less, likely to have found and used the older, more informal career guide (which we kept up on our site).As a result, we decided to bring the advice in our career guide in line with our latest views, while attempting to retain its structure, tone and engagingness.We’re retaining the content in our key ideas series: it’s been re-released as our advanced series.Thank you for your help! You can find the new career guide here, and the feedback form here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 23, 2023 |
EA - Announcing a new organization: Epistea by Epistea
04:41
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a new organization: Epistea, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryWe are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).History and contextEpistea was initially founded in 2019 as a rationality and epistemics research and education organization by Jan Kulveit and a small group of collaborators. They ran an experimental workshop on group rationality, the Epistea Summer Experiment in the summer of 2019 and were planning on organizing a series of rationality workshops in 2020. The pandemic paused plans to run workshops and most of the original staff have moved on to other projects.In 2022, Irena Kotíková was looking for an organization to fiscally sponsor her upcoming projects. Together with Jan, they decided to revamp Epistea as an umbrella organization for a wide range of projects related to epistemics and existential security, under Irena’s leadership.What?Epistea is a service organization that creates, runs, and supports projects that help with clear thinking and scale-sensitive caring. We believe that actions in sensitive areas such as existential risk mitigation often follow from good epistemics, and we are particularly interested in supporting efforts in this direction.The core Epistea team is based in Prague, Czech Republic, and works primarily in person there, although we support projects worldwide. As we are based in continental Europe and in the EU, we are a good fit for projects located in the EU.We provide the following services:Fiscal sponsorship (managing payments, accounting, and overall finances)Administrative and operations support (booking travel, accommodation, reimbursements, applications, visas)Events organization and support (conferences, retreats, workshops)Ad hoc operations supportWe currently run the following projects:FIXED POINTFixed Point is a community and coworking space situated in Prague. The space is optimized for intellectual work and interesting conversations but also prioritizes work-life balance. You can read more about FIXED POINT here.Prague Fall SeasonPFS is a new model for global movement building which we piloted in 2022. The goal of the Season is to have a high concentration of people and events, in a limited time, in one space, and working on a specific set of problems. This allows for better coordination and efficiency and creates more opportunities for people to collaborate, co-create and co-work on important projects together, possibly in a new location - different from their usual space. Part of PFS is a residency program. 
You can read more about the Prague Fall Season here.Additionally, we support:ACS - Alignment of Complex Systems Research GroupPIBBSS - Principles of Intelligent Behavior in Biological and Social SystemsHAAISS - Human Aligned AI Summer SchoolWho?Irena Kotíková leads a team of 4 full-time staff and 4 contractors:Jana Meixnerová - Head of Programs, focus on the Prague Fall SeasonViktorie Havlíčková - Head of OperationsMartin Hrádela - Facilities Manager, focus on Fixed PointJan Šrajhans - User Experience SpecialistKarin Neumanová - Interior DesignerLinh Dan Leová - Operations AssociateJiří Nádvorník - Special ProjectsFrantišek Drahota - Special ProjectsThe team has a wide range of experience... |
May 23, 2023 |
EA - [Linkpost] "Governance of superintelligence" by OpenAI by Daniel Eth
02:46
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] "Governance of superintelligence" by OpenAI, published by Daniel Eth on May 22, 2023 on The Effective Altruism Forum.OpenAI has a new blog post out titled "Governance of superintelligence" (subtitle: "Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI"), by Sam Altman, Greg Brockman, and Ilya Sutskever.The piece is short (~800 words), so I recommend most people just read it in full.Here's the introduction/summary (bold added for emphasis):Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.And below are a few more quotes that stood out:"First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.""Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.""It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.""Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.""We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here""By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.""we believe it would be unintuitively risky and difficult to stop the creation of superintelligence"Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 23, 2023 |
EA - X-risk discussion in a college commencement speech by SWK
02:41
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: X-risk discussion in a college commencement speech, published by SWK on May 22, 2023 on The Effective Altruism Forum.Yesterday, Juan Manuel Santos, former president of Colombia (2010-2018) and 2016 Nobel Peace Prize winner gave the commencement address at the University of Notre Dame.The address contained the usual graduation speech stuff about all the problems humanity is facing and how this special class of students is uniquely equipped to change the world or whatever.While the gist of the message was pretty standard, I was pleasantly surprised that Santos spent a significant chunk of time talking about existential risk as the most pressing problem of our time (see clip here). Santos touched on the major x-risk factors well known to the EA community: AI, biosecurity, and nuclear weapons. However, he also emphasized climate change as one of the most pressing existential threats, which of course is a view that many EAs do not share.On one hand, I think this speech should be seen as a sign of hope. Ideas on the importance of mitigating x-risk — and the particular threats of AI, pandemics, and nuclear war — seem to be entering more mainstream circles. This trend is evidenced further in a recent post noting another former world leader (Israeli PM Naftali Bennett) who publicly discussed AI x-risk. And I think a college commencement speech is arguably more mainstream than the Asian Leadership Conference at which Bennett delivered his talk.On the other hand, it was clear that there is still a long way to go in terms of convincing most of the general population that x-risk should be taken seriously. Sitting in the audience, the people around me were smirking and giggling throughout the x-risk portion of Santos' speech. Afterward, I overheard people joking about the AI part and how they thought it was inappropriate to talk about such morbid material in a commencement address.Overall, I certainly appreciated Santos for talking about x-risk, but I'm not convinced that his words had much of an impact on the people in the audience. To be sure, I realize that commencement speeches are largely ceremonial and all but a handful don't have any broader societal impact. Still, it would have been nice to see people be more receptive to Santos' important ideas.I would be interested to hear if anyone has any thoughts on Santos' discussion of x-risk. Was it appropriate to talk about this stuff in the context of a commencement address? Is this an effective forum to spread ideas about x-risk (or other "weird" EA ideas), or will these ideas just fall on deaf ears? Or are commencement addresses mostly irrelevant and not worth even thinking about for the purposes of growing EA and promoting some of the more idiosyncratic concepts?Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 22, 2023 |
EA - If you find EA conferences emotionally difficult, you're not alone by Amber Dawn
05:57
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you find EA conferences emotionally difficult, you're not alone, published by Amber Dawn on May 22, 2023 on The Effective Altruism Forum.I went to EAG London this weekend. I had some interesting chats, wrote some cryptic squiggles in my notebook (“Clockify”, “the Easterlin paradox”, “functionalist eudaimonic theories”), and gave and received some hopefully-useful advice. Overall, the conference was fun and worthwhile for me. But at times, I also found the conference emotionally difficult.I think this is pretty common. After last year’s EAG, Alastair Fraser-Urquhart wrote about how he burnt out at the conference and had to miss a retreat starting the next day. The post was popular, and many said they’d had similar experiences.The standard euphemism for this facet of EA conferences is ‘intense’ or ‘tiring’, but I suspect these adjectives are often a more socially-acceptable way of saying ‘I feel low/anxious/exhausted and want to curl up in a foetal position in a darkened room’.I want to write this post to:balance out the ‘woo EAG lfg!’ hype, and help people who found it a bad or ambivalent experience to feel less alonedig into why EAGs can be difficult: this might help attendees have better experiences themselves, and also create an environment where others are more likely to have good experienceshelp people who mostly enjoy EAGs understand what their more neurotic or introverted friends are going throughHere are some reasons that EAGs might be emotionally difficult. Some of these I’ve experienced personally, others are based on comments I’ve heard, and others are plausible educated guesses.It’s easy to compare oneself (negatively) to othersEA conferences are attended by a bunch of “impressive” people: big-name EAs like Will MacAskill and Toby Ord, entrepreneurs, organisation leaders, politicians, and “inner-circle-y” people who are Forum- or Twitter-famous. You’ve probably scheduled meetings with people because they’re impressive to you; perhaps you’re seeking mentorship and advice from people who are more senior or advanced in your field, or you want to talk to someone because they have cool ideas.This can naturally inflame impostor syndrome, feelings of inadequacy, and negative comparisons. Everyone seems smarter, harder-working, more agentic, better informed. Everyone’s got it all figured out, while you’re still stuck at Stage 2 of 80k’s career planning process. Everyone expects you to have a plan to save the world, and you don’t even have a plan for how to start making a plan.Most EAs, I think, know that these thought patterns are counterproductive. But even if some rational part of you knows this, it can still be hard to fight them - especially if you’re tired, scattered, or over-busy, since this makes it harder to employ therapeutic coping mechanisms.The stakes are highWe’re trying to solve immense, scary problems. We (and CEA) pour so much time and money into these conferences because we hope that they’ll help us make progress on those problems. This can make the conferences anxiety-inducing - you really really hope that the conference pays off. 
This is especially true if you have some specific goal - such as finding a job, collaborators or funders - or if you think the conference has a high opportunity cost for you.You spend a lot of time talking about depressing thingsThis is just part of being an EA, of course, but most of us don’t spend all our time directly confronting the magnitude of these problems. Having multiple back-to-back conversations about ‘how can we solve [massive, seemingly-intractable problem]?’ can be pretty discouraging.Everything is busy and franticYou’re constantly rushing from meeting to meeting, trying not to bump into others who are doing the same. You see acquaintances but only have time to wave hello, because y... |
May 22, 2023 |
EA - Announcing the Prague community space: Fixed Point by Epistea
05:21
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague community space: Fixed Point, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryA coworking and event space in Prague is open for the entirety of 2023, offering up to 50 desk spaces in a coworking office, a large event space for up to 60 people, multiple meeting rooms and other amenities. We are currently in the process of transitioning from an initial subsidized model to a more sustainable paid membership model.In 2022, Fixed Point was the home of the Prague Fall Season which will be returning there in 2023.We are seeking people, projects and hosts of events here in 2023. If you are interested you can apply here.What is Fixed Point?Fixed Point is a unique community and coworking space located in the heart of Prague operated by Epistea. We support organizations and individuals working on existential security, epistemics, rationality, and effective altruism. Across five floors there is a variety of coworking offices offering up to 50 workstations, as well as numerous meeting rooms and private call stations. In addition to functional work areas, there are inviting communal spaces such as a large comfortable common room accommodating up to 60 people, two fully equipped large kitchens, and a spacious dining area. These amenities create a welcoming environment that encourages social interaction and facilitates spontaneous exchange of ideas. Additionally, there are on-site amenities like a small gym, a nap room, two laundry rooms, bathrooms with showers, and a garden with outdoor tables and seating. For those in need of short-term accommodation, our on-site guesthouse has a capacity of up to 10 beds.Fixed Point is a space where brilliant and highly engaged individuals make crucial career decisions, establish significant relationships, and find opportunities for introspection among like-minded peers when they need it most. In 2022, Fixed Point was home to the Prague Fall Season, when 350 people visited the space.The name "Fixed Point" draws inspiration from the prevalence of various Fixed Point theorems in almost all areas people working in the space work on. If you study the areas seriously, you will find fixed points sooner or later.Why Prague?The Czech effective altruism and rationalist community has long been committed to operational excellence and the creation of physical spaces that facilitate collaboration. With numerous successfully incubated organizations and passionate individuals making a difference in high-impact domains, Prague is now a viable option, especially for EU citizens wanting to settle in continental Europe.In addition to the Prague Fall Season, Prague is home to many different projects, such as Alignment of Complex Systems Research Group, ESPR or Czech Priorities. We host the European runs of CFAR workshops and CFAR rEUnions.Whom is it for?We extend a warm welcome to both short and long-term visitors working on meaningful projects in the areas of existential risk mitigation, AI safety, rationality, epistemics, and effective altruism. 
We are particularly excited to accommodate individuals and teams in the following categories:Those interested in hosting events,Teams seeking a workspace for an extended period of time.Here are a few examples of the projects we are equipped to host and are enthusiastic about:Weekend hackathons,Incubators lasting up to several months,Conferences,Workshops and lectures on relevant topics,Providing office spaces for existing projects.Additional supportIn addition to the amenities, we can also offer the following services upon request:Project management for existing initiatives,Catering services for events,Administrative and operations supportAccommodation arrangements,Event logistics and operations assistance,Event design consulting.Feel free to r... |
May 22, 2023 |
EA - Announcing the Prague Fall Season 2023 and the Epistea Residency Program by Epistea
06:05
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Prague Fall Season 2023 and the Epistea Residency Program, published by Epistea on May 22, 2023 on The Effective Altruism Forum.SummaryFollowing a successful pilot in 2022, we are announcing the Prague Fall Season 2023, which is a program run by Epistea, happening from September 1 to December 10 at FIXED POINT in Prague, Czech Republic. In this time, FIXED POINT will host a variety of programs, projects, events and individuals in the areas of existential security, rationality, epistemics, and effective altruism. We will announce specific events and programs as we confirm them but for now, our flagship program is the 10-week Epistea Residency Program for teams working on projects related to epistemics and rationality. We are now seeking expressions of interest from potential Epistea residents and mentors.What is a season?The main benefit of doing a season is having a dedicated limited time to create an increased density of people in one place. This creates more opportunities for people to collaborate, co-create and co-work on important projects - sometimes in a new location. This happens to some extent naturally around major EA conferences in London or San Francisco - many people are there at the same time which creates opportunities for additional events and collaborations. However, the timeframe is quite short and it is not clearly communicated that there are benefits in staying in the area longer and there is not a lot of infrastructure in place to support that.We ran the pilot project Prague Fall Season last autumn: Along with 25 long-term residents, we hosted over 300 international visitors between September and December 2022. We provided comprehensive support through funding, venue operations, technical and personal development programs, social gatherings, and additional events, such as the CFAR workshop series. Based on the feedback we received and our own experience with the program, we decided to produce another edition of Prague Fall Season this year with a couple of changes:We are narrowing the scope of the program primarily to existential security, epistemics, and rationality.We ask that participants of the season help us share the cost of running the FIXED POINT house. We may be able to offer financial aid on a case by case basis but the expectation is that when you visit, you can cover at least some part of the cost.We are seeking event organizers who would like to make their events part of the season.We will be sharing more information about how to get involved soon. For now, our priority is launching the Epistea Residency program.The Epistea Residency 2023The backbone of the Prague Fall Season 2023 will once again be a 10-week residency program. This year, we are looking for 6-10 teams of 3-5 members each working on specific projects related to areas of rationality, epistemics, group rationality, and civilizational sanity, and delivering tangible outcomes. 
A residency project can be:Research on a relevant topic (examples of what we would be excited about are broad in some directions and include abstract foundations like geometric rationality or Modal Fixpoint Cooperation without Löb's Theorem, research and development of applied rationality techniques like Internal communication framework, research on the use of AI to improve human rationality like "automated Double-Crux aid" and more);Distillation, communication, and publishing (writing and publishing a series of explanatory posts, video production, writing a textbook or course materials, etc.);Program development (events, workshops, etc.);Anything else that will provide value to this space.Teams will have the option to apply to work on a specific topic (to be announced soon) or propose their own project. The selected teams will work on their projects in person at FIXED ... |
May 22, 2023 |
EA - AI strategy career pipeline by Zach Stein-Perlman
01:44
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI strategy career pipeline, published by Zach Stein-Perlman on May 22, 2023 on The Effective Altruism Forum.The pipeline for (x-risk-focused) AI strategy/governance/forecasting careers has never been strong, especially for new researchers. But it feels particularly weak recently (e.g. no summer research programs this year from Rethink Priorities, SERI SRF, or AI Impacts, at least as of now, and as few job openings as ever). (Also no governance course from AGI Safety Fundamentals in a while and no governance-focused programs elsewhere.) We're presumably missing out on a lot of talent.I'm not sure what the solution is, or even what the problem is-- I think it's somewhat about funding and somewhat about mentorship and mostly about [orgs not prioritizing boosting early-career folks and not supporting them for various idiosyncratic reasons] + [the community being insufficiently coordinated to realize that it's dropping the ball and it's nobody's job to notice and nobody has great solutions anyway].If you have information or takes, I'd be excited to learn. If you've been looking for early-career support (an educational program, way to test fit, way to gain experience, summer program, first job in AI strategy/governance/forecasting, etc.), I'd be really excited to hear your perspective (feel free to PM).(In AI alignment, I think SERI MATS has improved the early-career pipeline dramatically-- kudos to them. Maybe I should ask them why they haven't expanded to AI strategy or if they have takes on that pipeline. For now, maybe they're evidence that someone prioritizing pipeline-improving is necessary for it to happen...)Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 22, 2023 |
EA - OPTIC [Forecasting Comp] — Pilot Postmortem by OPTIC
10:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OPTIC [Forecasting Comp] — Pilot Postmortem, published by OPTIC on May 19, 2023 on The Effective Altruism Forum.OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity twitter patterns to financial asset prices.We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. See our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!What happened at the competition?Attendance114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but expressed interest to compete in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple unregistered competitors did show up. At the competition, the total attendance was 31 competitors in 8 teams of 3-4, with 2 spectators.Schedule1 hour check-in time/lunch/socialization10 min introduction speech1 hour speech by Seth Blumberg on the future of forecasting (Seth is a behavioral economist and head of Google’s internal prediction market, speaking in his individual capacity) — you can watch the speech hereQuestions released; 3 hours for forecasting (“forecasting period”)10 min conclusion speech, merch distribution20 min retrospective feedback formForecasting (teams, platform, scoring, prizes, etc)Competitors were split up into teams of 3-4. They submitted one forecast per team on each of 30 questions through a private tournament on Metaculus. Teams’ forecasts were not made visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with a resolution timeframe of weeks-months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.We will have awarded $3000 in cash prizes, to be distributed after the scoring is completed:1st place — $15002nd place — $8003rd place — $400Other prizes — $300Note that prizes for 1st-3rd place are given to the team and split between the members of the team.FundingWe received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.OrganizersOur organizing team comprises:Jingyi Wang (Brandeis University EA organizer)Saul Munn (Brandeis University EA organizer)Tom Shlomi (Harvard University EA organizer)Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!The following is a postmortem we wrote based on the recording of a verbal postmortem our team held after the event.SummaryOverall, the pilot went really well. We did especially well with setting ourselves up for future iterations, with flexibility/adaptability, and resource use. We could have improved time management and communication, as well as some other minor issues. 
We’re excited about the future of OPTIC!What went wellStrong pilot/good setupAs a pilot, the April 22 event definitely has set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people set up at a few schools around the country who are going to start forecasting clubs which will compete as teams in forecasting tournaments. We have undergraduate interest (and associated emails)...
|
May 21, 2023 |
EA - OPTIC [Forecasting Comp] — Pilot Postmortem by OPTIC
10:04
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OPTIC [Forecasting Comp] — Pilot Postmortem, published by OPTIC on May 19, 2023 on The Effective Altruism Forum.OPTIC is an in-person, intercollegiate forecasting competition where undergraduate forecasters compete to make accurate predictions about the future. Think olympiad/debate tournament/hackathon, but for forecasting — teams compete for thousands of dollars in cash prizes on question topics ranging from geopolitics to celebrity twitter patterns to financial asset prices.We ran the pilot event on Saturday, April 22 in Boston and are scaling up to an academic league/olympiad. See our website at opticforecasting.com, and contact us at opticforecasting@gmail.com (or by dropping a comment below)!What happened at the competition?Attendance114 competitors from 5 different countries and 13 different US states initially registered interest. A significant proportion indicated that they wouldn’t be able to compete in this iteration (logistical/scheduling concerns), but expressed interest to compete in the next one. 39 competitors RSVP’d “yes,” though a few didn’t end up attending and a couple unregistered competitors did show up. At the competition, the total attendance was 31 competitors in 8 teams of 3-4, with 2 spectators.Schedule1 hour check-in time/lunch/socialization10 min introduction speech1 hour speech by Seth Blumberg on the future of forecasting (Seth is a behavioral economist and head of Google’s internal prediction market, speaking in his individual capacity) — you can watch the speech hereQuestions released; 3 hours for forecasting (“forecasting period”)10 min conclusion speech, merch distribution20 min retrospective feedback formForecasting (teams, platform, scoring, prizes, etc)Competitors were split up into teams of 3-4. They submitted one forecast per team on each of 30 questions through a private tournament on Metaculus. Teams’ forecasts were not made visible to other teams until after the forecasting period closed. Questions were a mix of binary and continuous, all with a resolution timeframe of weeks-months; all will have resolved by August 15. At that point, we’ll score the forecasts using log scoring.We will have awarded $3000 in cash prizes, to be distributed after the scoring is completed:1st place — $15002nd place — $8003rd place — $400Other prizes — $300Note that prizes for 1st-3rd place are given to the team and split between the members of the team.FundingWe received $4000 USD from the ACX Forecasting Mini-Grants on Manifund, and $2000 USD from the Long Term Future Fund.OrganizersOur organizing team comprises:Jingyi Wang (Brandeis University EA organizer)Saul Munn (Brandeis University EA organizer)Tom Shlomi (Harvard University EA organizer)Also, Saul and Jingyi will be attending EAG London — please reach out if you want to be involved with OPTIC, have questions/comments/concerns, or just want to chat!The following is a postmortem we wrote based on the recording of a verbal postmortem our team held after the event.SummaryOverall, the pilot went really well. We did especially well with setting ourselves up for future iterations, with flexibility/adaptability, and resource use. We could have improved time management and communication, as well as some other minor issues. 
We’re excited about the future of OPTIC!What went wellStrong pilot/good setupAs a pilot, the April 22 event definitely has set us up for future iterations of OPTIC. We now have a network of previous team captains and competitors from schools all around the Boston area (and beyond) who have indicated that they’d be excited to compete again. We have people set up at a few schools around the country who are going to start forecasting clubs which will compete as teams in forecasting tournaments. We have undergraduate interest (and associated emails)... |
May 21, 2023 |
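The postmortem above says the resolved forecasts will be scored with log scoring. As a rough illustration only (not OPTIC's or Metaculus's actual scoring pipeline, which also has to handle continuous questions), a binary log score is simply the log of the probability a team assigned to the outcome that actually happened, so a confident correct forecast scores near 0 and a confident wrong forecast is penalised heavily:

```python
import math

def binary_log_score(p: float, resolved_yes: bool) -> float:
    """Log of the probability assigned to the realised outcome (higher is better)."""
    return math.log(p if resolved_yes else 1.0 - p)

def team_average(forecasts, outcomes):
    """Average log score across a team's resolved binary questions."""
    return sum(binary_log_score(p, o) for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical team: forecasts of 0.8, 0.3, 0.6 on questions that resolved Yes, No, Yes.
print(round(team_average([0.8, 0.3, 0.6], [True, False, True]), 3))
```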
EA - EAG talks are underrated IMO by Chi
02:06
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAG talks are underrated IMO, published by Chi on May 20, 2023 on The Effective Altruism Forum.Underrated is relative. My position is something like "most people should consider going to >1 EAG talk" and not "most people should spend most of their EAG in talks." This probably most applies to people who are kind of like me. (Been involved for a while, already have a strong network, don't need to do 1-1s for their job.)There's a meme that 1-1s are clearly the most valuable part of EAG(x) and that you should not really go to talks. (See e.g. this, this, this, they don't say exactly this but I think push in the direction of the meme.)I think EAG talks can be really interesting and are underrated. It's true that most of them are recorded and you could watch them later but I'm guessing most people don't actually do that. It also takes a while for them to be uploaded.I still think 1-1s are pretty great, especially if you're new and don't know many people yet (or otherwise mostly want to increase the number of people you know), have a very specific thing you're trying to get out of EAG and talking to lots of people seems to be the right thing to achieve it.I'm mostly writing this post because I think the meme is really strong in some parts of the EA community. I can imagine that some people in the EA community would feel bad for attending talks because it doesn't feel "optimal." If you feel like you need permission, I want to give you permission to go to talks without feeling bad. Another motivation is that I recently attended my first set of EAG talks in years (I was doing lots of 1-1s for my job before) and was really surprised by how great they were. (That said, it was a bit hit or miss.) I previously accidentally assumed that talks and other prepared sessions would give me ~nothing.See also the rule of equal and opposite advice (1, 2) although I haven't actually read the posts I linked.My best guess is that people in EA are more biased towards taking actions that are part of a collectively "optimal" plan for [generic human with willpower and without any other properties] than taking actions that are good given realistic counterfactuals.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 20, 2023 |
EA - Former Israeli Prime Minister Speaks About AI X-Risk by Yonatan Cale
00:46
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Former Israeli Prime Minister Speaks About AI X-Risk, published by Yonatan Cale on May 20, 2023 on The Effective Altruism Forum.Watch here (it's in English)From the video:just like nuclear tech is an amazing invention for humanity but can also risk the destruction of humanity, AI is the same. The world needs to get together NOW, to form the equivalent of the IAEA [...]Who is this?This is Naftali Bennett (wikipedia), an Israeli politician who was prime minister between June 2021 and June 2022.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 20, 2023 |
EA - “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models by Froolow
37:19
In very broad terms, models can be thought of as a collection of parameters, and instructions for how we organise those parameters to correspond to some real-world phenomenon of interest. These instructions are described as the model’s “structure”, and include decisions like what parameters will be used to analyse the behaviour of interest and how those parameters will interact with each other. Because I am a bit of a modelling nerd I like to think of the structure as being the model’s ontology – the sorts of things that need to exist within the parallel world of the model in order to answer the question we’ve set for ourselves. The impact of structure on outcomes is often under-appreciated; the structure can obviously constrain the sorts of questions you can ask, but the structure can also embody subtle differences in how the question is framed, which can have an important effect on outcomes.Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models, published by Froolow on May 20, 2023 on The Effective Altruism Forum.SummaryThis is an entry into the Open Philanthropy AI Worldview Contest. It investigates the risk of Catastrophe due to an Out-of-Control AI. It makes the case that model structure is a significant blindspot in AI risk analysis, and hence there is more theoretical work that needs to be done on model structure before this question can be answered with a high degree of confidence.The bulk of the essay is a ‘proof by example’ of this claim – I identify a structural assumption which I think would have been challenged in a different field with a more established tradition of structural criticism, and demonstrate that surfacing this assumption reduces the risk of Catastrophe due to Out-of-Control (OOC) AI by around a third. Specifically, in this essay I look at what happens if we are uncertain about the timelines of AI Catastrophe and Alignment, allowing them to occur in any order.There is currently only an inconsistent culture of peer reviewing structural assumptions in the AI Risk community, especially in comparison to the culture of critiquing parameter estimates. Since models can only be as accurate as the least accurate of these elements, I conclude that this disproportionate focus on refining parameter estimates places an avoidable upper limit on how accurate estimates of AI Risk can be. However, it also suggests some high value next steps to address the inconsistency, so there is a straightforward blueprint for addressing the issues raised in this essay.The analysis underpinning this result is available in this spreadsheet. The results themselves are displayed below. They show that introducing time dependency into the model reduces the risk of OOC AI Catastrophe from 9.8% to 6.7%:My general approach is that I found a partially-complete piece of structural criticism on the forums here and then implemented it into a de novo predictive model based on a well-regarded existing model of AI Risk articulated by Carlsmith (2021). If results change dramatically between the two approaches then I will have found a ‘free lunch’ – value that can be added to the frontier of the AI Risk discussion without me actually having to do any intellectual work to push that frontier forward. 
Since the results above demonstrate quite clearly that the results have changed, I conclude that work on refining parameters has outpaced work on refining structure, and that ideally there would be a rebalancing of effort to prevent such ‘free lunches’ from going unnoticed in the future.I perform some sensitivity analysis to show that this effect is plausible given what we know about community beliefs about AI Risk. I conclude that my amended model is probably more suitable than the standard approach taken towards AI...
|
May 20, 2023 |
EA - “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models by Froolow
36:26
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: “The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models, published by Froolow on May 19, 2023 on The Effective Altruism Forum.SummaryThis is an entry into the Open Philanthropy AI Worldview Contest. It investigates the risk of Catastrophe due to an Out-of-Control AI. It makes the case that model structure is a significant blindspot in AI risk analysis, and hence there is more theoretical work that needs to be done on model structure before this question can be answered with a high degree of confidence.The bulk of the essay is a ‘proof by example’ of this claim – I identify a structural assumption which I think would have been challenged in a different field with a more established tradition of structural criticism, and demonstrate that surfacing this assumption reduces the risk of Catastrophe due to Out-of-Control (OOC) AI by around a third. Specifically, in this essay I look at what happens if we are uncertain about the timelines of AI Catastrophe and Alignment, allowing them to occur in any order.There is currently only an inconsistent culture of peer reviewing structural assumptions in the AI Risk community, especially in comparison to the culture of critiquing parameter estimates. Since models can only be as accurate as the least accurate of these elements, I conclude that this disproportionate focus on refining parameter estimates places an avoidable upper limit on how accurate estimates of AI Risk can be. However, it also suggests some high value next steps to address the inconsistency, so there is a straightforward blueprint for addressing the issues raised in this essay.The analysis underpinning this result is available in this spreadsheet. The results themselves are displayed below. They show that introducing time dependency into the model reduces the risk of OOC AI Catastrophe from 9.8% to 6.7%:My general approach is that I found a partially-complete piece of structural criticism on the forums here and then implemented it into a de novo predictive model based on a well-regarded existing model of AI Risk articulated by Carlsmith (2021). If results change dramatically between the two approaches then I will have found a ‘free lunch’ – value that can be added to the frontier of the AI Risk discussion without me actually having to do any intellectual work to push that frontier forward. Since the results above demonstrate quite clearly that the results have changed, I conclude that work on refining parameters has outpaced work on refining structure, and that ideally there would be a rebalancing of effort to prevent such ‘free lunches’ from going unnoticed in the future.I perform some sensitivity analysis to show that this effect is plausible given what we know about community beliefs about AI Risk. I conclude that my amended model is probably more suitable than the standard approach taken towards AI Risk analysis, especially when there are specific time-bound elements of the decision problem that need to be investigated (such as a restriction that AI should be invented before 2070). 
Therefore, I conclude that hunting for other such structural assumptions is likely to be an extremely valuable use of time, since there is probably additional low-hanging fruit in the structural analysis space.I offer some conclusions for how to take this work forwards:There are multiple weaknesses of my model which could be addressed by someone with better knowledge of the issues in AI Alignment. For example, I assume that Alignment is solved in one discrete step which is probably not a good model of how Aligning AIs will actually play out in practice.There are also many other opportunities for analysis in the AI Risk space where more sophisticated structure can likely resolve disagreement. For example, a live discussion in AI Risk at the ... |
May 20, 2023 |
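The essay's headline result above (9.8% falling to 6.7% once AI catastrophe and alignment timelines are allowed to occur in either order) comes from the author's spreadsheet model. The toy Monte Carlo below is only a sketch of the structural point, with made-up placeholder distributions, and is not expected to reproduce those figures:

```python
import random

def ooc_risk(n: int = 200_000, p_static: float = 0.098, seed: int = 0) -> float:
    """Toy contrast between a static risk estimate and a time-dependent one in
    which an alignment solution and an out-of-control catastrophe 'race' each
    other. All distributions here are illustrative placeholders."""
    rng = random.Random(seed)
    catastrophes = 0
    for _ in range(n):
        if rng.random() >= p_static:
            continue  # worlds the static structure already counts as safe
        # Hypothetical arrival years; the essay's actual timelines differ.
        t_catastrophe = rng.uniform(2025, 2100)
        t_alignment = rng.uniform(2025, 2100)
        if t_catastrophe < t_alignment:
            catastrophes += 1  # catastrophe only counts if it arrives first
    return catastrophes / n

print(f"static structure:         {0.098:.3f}")
print(f"time-dependent structure: {ooc_risk():.3f}")
```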
EA - Announcing the Publication of Animal Liberation Now by Peter Singer
04:37
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Publication of Animal Liberation Now, published by Peter Singer on May 19, 2023 on The Effective Altruism Forum.SummaryMy new book, Animal Liberation Now, will be out next Tuesday (May 23).I consider ALN to be a new book, rather than just a revision, because so much of the material in the book is new.Pre-ordering from Amazon or other online booksellers (US only) or ordering/purchasing within the first week of publication will increase the chance of the book getting on NYT best-seller list. (Doing the same in other countries may increase the prospects of the book getting on that country’s bestseller list.)Along with the publication of the book, I will be doing a speaking tour with the same title as the book. You can book tickets here, with a 50% discount if you use the code SINGER50 (Profits will be 100% donated to effective charities opposing intensive animal production).Please spread the word (and links) about the book and the speaking tour to help give the book a strong start.Why a new book?The major motivation of writing the new book is to have a book about animal ethics that is relevant in the 21st Century. Compared with Animal Liberation, there are major updates on the situation of animals used in research and factory farming, and people’s attitudes toward animals, as well as new research on the capacities of animals to suffer, and on the contribution of meat to climate change.What’s different?The animal movement emerged after the 1975 version of AL. In particular, the concern for farmed animals developed rapidly over the last two decades. These developments deserve to be reported and discussed.Some of the issues discussed in AL have seen many changes since then. Some animal experiments are going out of fashion, while some others emerged. On factory farming, there were wins for the farmed animal movement, such as the partially successful “cage-free movement” and various wins in legislative reforms. But the number of animals raised in factory farms increased rapidly during the same time. A significant portion of this increased number came from aquaculture, in other words fish factory farms. New developments were also seen regarding replacing factory farming, in particular the development of plant-based meat alternative and cultivated meats.ALN has a more global perspective than AL, most notably discussing what happened in China. Since the last edition of AL, China has greatly increased the use of animals in research and factory farming.There are also changes in my views about a number of issues. Firstly, since 1990 (The year of publication for the last full revision of the 1975 version of AL), scientists have gained more evidence that suggests the sentience of fish and some invertebrates. Accordingly, I have updated my attitudes toward the probability of sentience of these animals. Secondly, I have changed my views toward the suffering of wild animals, in particular the possibility and tractability of helping them. Thirdly, I have added the discussion about the relation between climate change and meat consumption. 
Last but not least, Effective Altruism, as an idea or as a movement, did not exist when the versions of Animal Liberation were written, so I have added some discussions of the EA movement and EA principles in the new book.Is the book relevant to EA?Animal welfare is, and should be, one of the major cause areas with EA for reasons I do not need to repeat here. I will explain why ALN is relevant to EA.Firstly, ALN contains some of the commonly used arguments by EAs who work on animal welfare on why the issues of animal suffering is important. Reading ALN provides an opportunity for newcomers to the EA community to learn about animal ethics and why some (hopefully most) EAs think that animals matter morally and that they are... |
May 19, 2023 |
EA - Tips for people considering starting new incubators by Joey
13:08
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tips for people considering starting new incubators, published by Joey on May 19, 2023 on The Effective Altruism Forum.Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.) Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is because new incubators often do not address the actual bottlenecks faced by the nonprofit landscape, as we see them.There are lots of factors that prevent great new charities from being launched, and from eventually having a large impact. We have scaled CE to about 10 charities a year, and from our perspective, these are the three major bottlenecks to growing the new charity ecosystem further:Mid-stage fundingFoundersMultiplying effectsMid-stage fundingWe try to look at every step of our charities’ future journeys, to see how we expect them to fare as they progress. In general, there seems to be enough appetite in the philanthropic community to supply seed funding to brand new projects, and we have been successful in helping charities to launch with the funding they need. However, many cause areas appear to have gaps in available funding for charities who are around two to five years old.Charities’ budgets tend to grow each year; for example, a charity might need $150k for its first year, $250k for its 2nd, $400k for its third, and so on. The average charity might require a seed of $150k for its first year, and mid-stage funding (years 2-5) of ~$2 million over 4 years. Currently, it is much more difficult for highly effective charities to fundraise this much at this stage of their journey than it is for them to get the funding they need at the seed-funding stage. Keep in mind that this mid-stage funding is still too early and small for most major institutional funders (e.g., GiveWell does not recommend organizations that can only absorb $1 million a year as top charities), and governments rarely consider projects this young.Mental health case study: An example that demonstrates this issue well is found in the cause area of mental health. We have identified a number of promising intervention ideas in this area over the past few years, and a solid pool of aspiring entrepreneurs interested in founding mental health charities. Although we expected our seed network would be able to support the first round of funding, we did not have confidence in what came next for these charities. We have since worked to improve the situation by helping launch the Mental Health Funders Circle, but even with that network we are concerned about mid-stage funding in the future.Why is mid-stage the problem?: Instead of seed or late stage? I believe it’s the same donors who consider seed or mid-stage funding, but as the volume of funding is smaller at the seed stage, it is covered much more easily. 
While some charities may struggle in the late stage as well, fewer will even get to that stage and the number of options typically expands once a charity is clearly established as a field leader.FoundersAlthough a solid number of people are interested in founding charities, it’s only an ideal fit for a relatively small percentage of people. It is a career path that requires a highly entrepreneurial mindset, plus a very strong ethical compass to succeed in. Due to the low number of people who are a good fit, I don’t believe that it is a career path that can absorb a high number of people (my guess would be less than 5% of people are actually suited to founding a nonprofit). It is my opinion that other career paths, like for-profit found... |
May 19, 2023 |
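As a quick check on the budget figures quoted above ($150k in year one, $250k in year two, $400k in year three, and roughly $2 million of mid-stage funding across years two to five), here is a small sketch; the year-four and year-five figures are assumptions added only to extend the post's example:

```python
# Years 1-3 follow the post's example; years 4-5 are assumed continuations of the growth.
budgets = {1: 150_000, 2: 250_000, 3: 400_000, 4: 550_000, 5: 700_000}

seed = budgets[1]
mid_stage = sum(budgets[year] for year in range(2, 6))

print(f"Seed funding (year 1):       ${seed:,}")
print(f"Mid-stage (years 2-5 total): ${mid_stage:,}")  # ~$1.9M, in line with the post's ~$2M
```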
EA - Relative Value Functions: A Flexible New Format for Value Estimation by Ozzie Gooen
29:31
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Relative Value Functions: A Flexible New Format for Value Estimation, published by Ozzie Gooen on May 18, 2023 on The Effective Altruism Forum.SummaryQuantifying value in a meaningful way is one of the most important yet challenging tasks for improving decision-making. Traditional approaches rely on standardized value units, but these falter when options differ widely or lack an obvious shared metric. We propose an alternative called relative value functions that uses programming functions to value relationships rather than absolute quantities. This method captures detailed information about correlations and uncertainties that standardized value units miss. More specifically, we put forward value ratio formats of univariate and multivariate forms.Relative value functions ultimately shine where single value units struggle: valuing diverse items in situations with high uncertainty. Their flexibility and elegance suit them well to collective estimation and forecasting. This makes them particularly well-suited to ambitious, large-scale valuation, like estimating large utility functions.While promising, relative value functions also pose challenges. They require specialized knowledge to develop and understand, and will require new forms of software infrastructure. Visualization techniques are needed to make their insights accessible, and training resources must be created to build modeling expertise.Writing programmatic relative value functions can be much easier than one might expect, given the right tools. We show some examples using Squiggle, a programming language for estimation.We at QURI are currently building software to make relative value estimation usable, and we expect to share some of this shortly. We of course also very much encourage others to try other setups as well.Ultimately, if we aim to eventually generate estimates of things like:The total value of all effective altruist projects;The value of 100,000 potential personal and organizational interventions; orThe value of each political bill under consideration in the United States;then the use of relative value assessments may be crucial.Presentation & DemoI gave a recent presentation on relative values, as part of a longer presentation in our work at QURI. This features a short walk-through of an experimental app we’re working on to express these values. The Relative Values part of the presentation is from 22:25 to 35:59.This post gives a much more thorough description of this work than the presentation does, but the example in the presentation might make the rest of this make more sense.Challenges with Estimating Value with Standard UnitsThe standard way to measure the value of items is to come up with standardized units and measure the items in terms of these units.Many health measure benefits are estimated in QALYs or DALYsConsumer benefit has been measured in willingness to payLongtermist interventions have occasionally been measured in “Basis Points”, Microdooms and MicrotopiasRisky activities can be measured in MicromortsCOVID activities have been measured in MicroCOVIDsLet’s call these sorts of units “value units” as they are meant as approximations or proxies of value. 
Most of these (QALYs, Basis Points, Micromorts) can more formally be called summary measures, but we’ll stick to the term unit for simplicity.These sorts of units can be very useful, but they’re still infrequently used.QALYs and DALYS don’t have many trusted and aggregated tables. Often there are specific estimates made in specific research papers, but there aren’t many long aggregated tables for public use.There are very few tables of personal intervention value estimates, like the net benefit of life choices.Very few business decisions are made with reference to clear units of value. For example, “Whi... |
May 19, 2023 |
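The post's own examples use Squiggle; the sketch below makes the same basic point in Python instead, with invented items and ratio intervals: relative values are stored as ratio distributions between pairs of items, and ratios can be chained while keeping the uncertainty, rather than forcing everything into one absolute unit.

```python
import math
import random

rng = random.Random(42)

def log_uniform(low: float, high: float) -> float:
    """Sample 'between low and high times as valuable' from a log-uniform distribution."""
    return math.exp(rng.uniform(math.log(low), math.log(high)))

# Hypothetical pairwise judgments: A is 2-10x as valuable as B; B is 0.5-4x as valuable as C.
def value_ratio(pair: str) -> float:
    return {"A/B": log_uniform(2, 10), "B/C": log_uniform(0.5, 4)}[pair]

# Chain the ratios to get an uncertain estimate of A relative to C.
samples = sorted(value_ratio("A/B") * value_ratio("B/C") for _ in range(10_000))
print("A vs C median ratio:", round(samples[len(samples) // 2], 2))
print("A vs C 90% interval:", round(samples[500], 2), "to", round(samples[9_500], 2))
```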
EA - EA Forum: content and moderator positions by Lizka
24:09
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Forum: content and moderator positions, published by Lizka on May 18, 2023 on The Effective Altruism Forum.TL;DR: We’re hiring for Forum moderators — apply by 1 June (it’s the first round of the application, and should take 15-20 minutes). We’re also pre-announcing a full-time Content Specialist position on the Online Team at CEA — you can indicate interest in that.➡️ Apply to be a part-time Forum moderator by 1 June.Round 1 of the application should take around 15-20 minutes, and applying earlier is better.You can see moderator responsibilities below. This is a remote, part-time, paid position.➡️ Indicate interest in a full-time Content Specialist position on the Online Team at CEA.We’ll probably soon be hiring for someone to work with me (Lizka) on content-related tasks on the Forum. If you fill out this form, we will send you an email when the application opens and consider streamlining your application if you seem like a particularly good fit.You can see more about the role’s responsibilities below. This is a full-time position, and can be remote or in-person from Oxford/Boston/London/Berkeley.➡️ You can also indicate interest in working as a copy-editor for CEA or in being a Forum Facilitator (both are part-time remote roles).If you know someone who might be interested, please consider sending this post to them!Please feel free to get in touch with any questions you might have. You can contact forum@centreforeffectivealtruism.org or forum-moderation@effectivealtruism.org, comment here, or reach out to moderators and members of the Online Team.An overview of the rolesI’ve shared a lot more information on the moderator role and the full-time content role in this post — here's a summary in table form. (You can submit the first round of the moderator application or indicate interest in the content role without reading the whole post.)TitleAbout the roleKey responsibilitiesStage the application is atModeratorPart-time, remote (average ~3 hours a week but variable), $40/hourMake the Forum safe, welcoming, and collaborative (e.g. by stopping or preventing aggressive behavior, being clear about moderation decisions), nurture important qualities on the Forum (e.g. by improving the written discussion norms or proactively nudging conversations into better directions), and help the rest of the moderation team.Round 1 is open (and should take 15-20 minutes): apply by 1 JuneContent SpecialistFull-time, remote/ in-person (Oxford/ London/ Boston/ Berkeley)Encourage engagement with important and interesting online content (via outreach, newsletters, curation, Forum events, writing, etc.), improve the epistemics, safety, and trust levels on the Forum (e.g. via moderation), and more.Indication of interest (we'll probably open a full application soon)We’re also excited for indications of interest for the following part-time contractor roles, although we might not end up hiring for these in the very near futureCopy-editor indication of interestPart-time, remote (~4 hours a week average), $30/hour by defaultCopy-editing for style, clarity, grammar — and generally sanity-checking content for CEA. Sometimes also things like reformatting, summarizing other content, finding images, and possibly posting on the website or social media. 
Forum Facilitator indication of interestPart-time, remote (~3 hours a week average), $30/hour... |
May 19, 2023 |
EA - Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures by Nils
07:38
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Healthy Futures Global: Join Us in Tackling Syphilis in Pregnancy and Creating Healthier Futures, published by Nils on May 18, 2023 on The Effective Altruism Forum.TLDRHealthy Futures Global is a new global health charity originating from CE’s incubation program trying to prevent mother-to-child transmission of syphilis, founded by Keyur Doolabh (medical doctor with research experience) and Nils Voelker (M Sc in health economics and former strategy consultant)Healthy Futures’ strategy is to elevate syphilis screening rates in antenatal clinics to the high levels of HIV screening rates by replacing HIV-only tests with a dual HIV/syphilis testKeyur and Nils are currently exploring potential pilot countries, and will be in the Philippines and Tanzania soon - they invite you to subscribe to their newsletter and to reach out to volunteer, especially if you are in the Philippines or Tanzania or could connect us to people thereI. Introduction: Healthy Futures Global and its OriginsKeyur and Nils are excited to announce the launch of Healthy Futures Global, a new organisation originating from Charity Entrepreneurship's latest incubation programme, dedicated to making a positive impact on global health.Healthy Futures’ mission is to improve maternal and newborn health by focusing on the elimination of congenital syphilis, a preventable but devastating disease that affects millions of families worldwide.II. The Problem: Congenital Syphilis' Global ImpactSyphilis in pregnancy is a pressing global health issue. It causes approximately 60,000 newborn deaths and 140,000 (almost 10% of global) stillbirths annually, contributing up to 50% of all stillbirths in some regions (1, 2, 3, 4).Antenatal syphilis also causes lifelong disabilities for many surviving children, often going unaddressed in many countries. This disability can include cognitive impairment, vision and hearing deficits, bone deformity, and liver dysfunction. If a pregnant woman has syphilis, her child has a 12% chance of neonatal death, 16% chance of stillbirth, and 25% chance of disability (5).III. The Solution: Test and Treat StrategyThe theory of change (below) is a hybrid between direct intervention, technical assistance and policy work. It involves lobbying governments for policy support, and supporting governments, local NGOs and antenatal clinics to roll out dual HIV/Syphilis tests.The key components of the approach involve rapid testing (RDTs) during antenatal care and immediate treatment with antibiotics (BPG) for positive cases.The main strengths of Healthy Futures are:Cost effectiveness: The strategy has the potential to cost-effectively save lives, prevent disabilities, and reduce the burden on health systems. Our analysis gives us an expected value of $2,400 per life saved and ~10x the cost-effectiveness of direct cash transfers. The medical evidence for positive effects of treating the pregnant woman and her baby is strong (6).Monitoring and evaluation: The direct nature of this intervention offers quick feedback loops, allowing us to re-evaluate our strategy accordingly.Track record: Keyur is a medical doctor and Nils brings a background of consulting for pharmaceutical companies and global health organisations. 
Their backgrounds are a good fit for this type of intervention.Organisational fuel: Healthy Futures benefits from ongoing mentoring from Charity Entrepreneurship and previously incubated charities.IV. Challenges, Risks and MitigationThe challenges Healthy Futures plans to address are (ranked according to severity x likelihood of occurring - highest on top):Implementational challenges:Sustainability: The long-term success and cost-effectiveness of this intervention depend on governments making the necessary changes in the health care system and establishing mech... |
May 18, 2023 |
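Taking just the per-pregnancy risks quoted above (12% neonatal death, 16% stillbirth, 25% disability when maternal syphilis goes untreated), here is a minimal back-of-envelope sketch of the expected burden per 1,000 untreated syphilis-positive pregnancies; this is illustrative arithmetic only, not Healthy Futures' cost-effectiveness model:

```python
# Risks quoted in the post for an untreated syphilis-positive pregnancy.
risks = {"neonatal deaths": 0.12, "stillbirths": 0.16, "children with disability": 0.25}

pregnancies = 1_000
print(f"Expected outcomes per {pregnancies:,} untreated syphilis-positive pregnancies:")
for outcome, probability in risks.items():
    print(f"  {outcome}: {probability * pregnancies:.0f}")
```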
EA - U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments by DannyBressler
09:52
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: U.S. Regulatory Updates to Benefit-Cost Analysis: Highlights and Encouragement to Submit Public Comments, published by DannyBressler on May 18, 2023 on The Effective Altruism Forum.On April 6, 2023, the U.S. Office of Management and Budget released a draft of the first update to the Federal benefit-cost analysis (BCA) guidelines in 20 years. I saw a nice article in Vox Future Perfect and a nice EA forum post that covered this. These posts cover some of the key points, but I think there are other important updates that might be overlooked. I will highlight some of those below.The key new documents stipulating the new draft BCA guidelines are:Circular A-4The Circular A-4 PreambleCircular A-94Why is the update important? Since 1993, U.S. agencies have been required to conduct a regulatory analysis of all significant regulatory actions (the definition of which was just revised from a rule with an annual impact of more than $100 million to $200 million), which includes an assessment of the costs and benefits of the action. Essentially, all major regulatory actions in the U.S. are subject to BCA, guided by Circular A-4.However, these updated guidance documents are still drafts. They are subject to public comment and peer review and may be changed significantly in light of the feedback received in this process.If you think that some of these highlights or other parts of the new A-4 and A-94 are a good idea (or if you don’t) I’d highly recommend submitting a public comment via the Regulations.gov system (A-4 link and A-94 link). The deadline for public comments is June 6th.Positive comments that support the approach taken in the document are equally and often more useful/impactful than critical comments. If everyone who dislikes something criticizes it, and everyone who supports something doesn’t bother mentioning their support, it looks like everyone who had an opinion opposed it! So, if you like the approach taken (or don’t), please write a comment! Also, note that comments supported with the analytical reasons why the approach is (or is not) justified are generally more useful and taken more seriously.Now on to the highlights:Short-Term Discount RateAs the Vox article mentioned, the new update to Circular A-4 significantly lowers the discount rate to a 1.7% near-term discount rate. This of course is a large change from the previous 3% rate, but this comes from just using a similar method to the previous 2003 A-4 method with more recent Treasury yield data. The preamble has a deeper dive into this calculation and asks the public a number of questions about whether there is a better approach, for those who are interested.The draft Circular A-4 continues to take a “descriptive” approach to discounting in which market data is used to determine the observed tradeoffs people make between money now and money in the future. The discount rate is now lower simply because yields have been steadily declining for the last 20 years.There are good reasons to believe that rates will continue to be low, but it’s also important to emphasize that if rates are not low in the future, then this near-term discount rate will go up again. 
This is why from the perspective of placing more weight on the future, the next bullets may be more important.Long-Term Declining Discount RateA related important change (and more robust to future interest rate fluctuations) is that A-4 and A-94 endorse the general concept of declining discount rates, and the A-4 preamble proposed and asked for comment on a specific declining discount rate schedule, which discounts the future at progressively lower rates to account for future interest rate uncertainty. This is in line with the approach recommended in the literature based on the best available economics, and also ends up placing larger weight on t... |
May 18, 2023 |
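To make the discounting changes above concrete, here is a small sketch comparing the present value of a far-future benefit under the old 3% flat rate, the draft's 1.7% near-term rate, and a declining schedule; the specific declining schedule below is invented for illustration and is not the one proposed in the A-4 preamble:

```python
def present_value(benefit: float, years: int, rate_for_year) -> float:
    """Discount a benefit received `years` from now using a year-by-year annual rate schedule."""
    for t in range(1, years + 1):
        benefit /= 1.0 + rate_for_year(t)
    return benefit

old_flat = lambda t: 0.03    # previous Circular A-4 rate
new_flat = lambda t: 0.017   # draft near-term rate
declining = lambda t: 0.017 if t <= 30 else (0.012 if t <= 100 else 0.008)  # illustrative only

for label, schedule in [("3% flat", old_flat), ("1.7% flat", new_flat), ("declining (illustrative)", declining)]:
    print(f"{label:>25}: PV of $1,000 in 200 years = ${present_value(1000, 200, schedule):,.2f}")
```

Even this toy comparison shows why declining schedules place substantially more weight on far-future benefits than any flat rate does.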
EA - Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship by CE
22:22
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Presenting: 2023 Incubated Charities (Round 1) - Charity Entrepreneurship, published by CE on May 18, 2023 on The Effective Altruism Forum.We are thrilled to announce the launch of four new nonprofit organizations through our February/March 2023 Incubation Program.Executive summaryThe 2023 Round 1 incubated charities are:Animal Policy International - Ensuring animal welfare standards are upheld in international trade policyAnsh - Empowering mothers to save newborn lives by building healthcare capacity for adoption of Kangaroo CareHealthy Futures Global - Preventing mother-to-child transmission of syphilis through testing and treatmentHealthLearn - Providing the world’s best online training to health workers in developing countriesTwo more organizations got started during the program, but are not officially incubated by CE. We believe that the interventions, chosen by the solo founders, are promising (one was recommended by us as a top idea). We have provided support to both organizations through mentorship and benefits similar to those offered to our incubated projects.These organizations are:The Mission Motor - Building a more evidence-driven animal cause area by training and supporting organizations to use monitoring and evaluation to improve the impact of their interventionsUpstream Policies - Driving responsible fishing practices by championing bait fish prohibitionContext: The Charity Entrepreneurship Incubation Program February/March 2023The February/March 2023 program focused on global health and animal advocacy. Our generous donors from the CE Seed Network have enabled us to provide these initiatives with $590,000 in grant funding to kickstart their interventions.In addition to our seed grants, we are dedicated to providing our founders with comprehensive support. This includes continuous mentorship, operational assistance, free co-working spaces at our London office, and access to an ever-growing network of founders, donors, and highly specialized mentors. We have offered several tailored safety nets for those who have decided not to found a charity this program, such as career mentorship, connections to job opportunities, a two-month stipend, or another chance to found a charity through one of our upcoming programs. Our aim is to ensure that all program participants pursue high-impact careers, regardless of whether they found a charity in the given round.We are also pleased to share with you a recently-published video, which showcases program participants sharing their insights on the challenges and benefits of the program. They discuss their motivations for applying, as well as what they found most useful and enjoyable. The footage was filmed during an in-person week held at our London office, and we believe it provides valuable insights into what makes our program unique. We hope you take a moment to watch it.Feel free to learn more about the program here. The next application phase will start on July 10, and close on September 30, 2023. You'll have the opportunity to apply for both the February/March 2024 and July/August 2024 Incubation Programs. For the February/March 2024 program, our focus will be on: mass-media interventions, and preventative animal advocacy. 
To receive notifications when we start accepting applications, sign up here.Our new charities in detailANIMAL POLICY INTERNATIONALCo-founders and Co-Executive Directors: Mandy Carter, Rainer KravetsWebsite: animalpolicyinternational.orgEmail address: info@animalpolicyinternational.orgLinkedInCE Incubation Grant: $110,000Description of the interventionAnimal Policy International is working with policymakers in regions with higher levels of animal welfare legislation to advocate for responsible imports that are in line with domestic laws. By applying equal standards, they aim ... |
May 18, 2023 |
EA - Announcing the Animal Welfare Library 🦉 by arvomm
04:27
Introducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Animal Welfare Library 🦉, published by arvomm on May 18, 2023 on The Effective Altruism Forum.TL;DRIntroducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:People who want to learn about animal welfare.More advanced users seeking references or organisations related to animal ethics and effective animal advocacy projects.The library contains:BooksArticlesOrganisationsVideos and FilmsRepositoriesAdvocatesNewsPodcastsAnd more, like newsletters, coming soon.We want AWL to be a link you can easily share with someone wanting to learn more about animal welfare. At 212 entries right now, the library is still expanding and is by no means exhaustive (suggest additions here or in the comments!).We found having a searchable and interactive overview of what is out there really helpful, and we hope you do too! We really value any feedback you might have!Our StoryToday we just launched the Animal Welfare Library or AWL (pronounced owl /aʊl/ 🦉!). AWL is our answer to the question "what is the go-to place for finding high-quality animal welfare resources?". Here's our story.Arvo: When I began my journey to learn more about animal welfare, I realised that there was an overwhelming amount of information on the subject, varying substantially in quality. Over the months and years that followed, I came across several websites and organisations that seemed to be doing incredible work and sharing valuable insights, and I frequently found myself wishing I had known about these resources from the start. I also started to see the interdisciplinary nature of the endeavour which made me realise that it would be particularly beneficial to develop a tool to catalogue some of the knowledge we possess and build a home for a centralised repository of valuable resources.Eleos: The plight of badly-off humans and other animals has been a priority of mine for many years. There has always been a sense of urgency behind my thinking: to me, practical philosophy isn’t simply a set of interesting puzzles, but a fundamentally important enterprise to make the world a better place. A few years ago, I embarked on my path to study how I could apply my compassion more systematically in a way that helps me be more effective at making this world a better place. In summer 2021, I started gathering resources on animal ethics and helping animals that would help others make a difference, which resulted in this compilation (and this forum post) that would later help ground this project.In 2022, we met and found we had an aligned vision for a project like AWL. We joined forces and decided to make the library happen.This is how this project was born. This website is the resource we wish we had had in our hands when we started learning about animal welfare and humanity’s role in helping end the neglected suffering of millions of animals.Go to the library here: Animal Welfare Library.Structure of the SiteThe website is structured into home, library and act now pages. (We are also drafting a `why care’ page summarising key arguments in the animal welfare space). 
The home page has some highlighted resources and organisations and it prompts visitors to explore the library. The Library has all the resources we compiled and the options to filter and search (desktop only) for specific items, themes or key terms. The act now is split into four parts: career (which redirects to Animal Advocacy Careers), donations (redirecting to Animal Charity Evaluators), expanding our social influence and eating plant-based (each of these last two redirects to a subset of relev...
|
May 18, 2023 |
EA - Announcing the Animal Welfare Library 🦉 by arvomm
04:09
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Animal Welfare Library 🦉, published by arvomm on May 17, 2023 on The Effective Altruism Forum.TL;DRIntroducing the Animal Welfare Library (AWL🦉): a web repository of animal welfare resources. We hope the library is useful for:People who want to learn about animal welfare.More advanced users seeking references or organisations related to animal ethics and effective animal advocacy projects.The library contains:BooksArticlesOrganisationsVideos and FilmsRepositoriesAdvocatesNewsPodcastsAnd more, like newsletters, coming soon.We want AWL to be a link you can easily share with someone wanting to learn more about animal welfare. At 212 entries right now, the library is still expanding and is by no means exhaustive (suggest additions here or in the comments!).We found having a searchable and interactive overview of what is out there really helpful, and we hope you do too! We really value any feedback you might have!Our StoryToday we just launched the Animal Welfare Library or AWL (pronounced owl /aʊl/ 🦉!). AWL is our answer to the question "what is the go-to place for finding high-quality animal welfare resources?". Here's our story.Arvo: When I began my journey to learn more about animal welfare, I realised that there was an overwhelming amount of information on the subject, varying substantially in quality. Over the months and years that followed, I came across several websites and organisations that seemed to be doing incredible work and sharing valuable insights, and I frequently found myself wishing I had known about these resources from the start. I also started to see the interdisciplinary nature of the endeavour which made me realise that it would be particularly beneficial to develop a tool to catalogue some of the knowledge we possess and build a home for a centralised repository of valuable resources.Eleos: The plight of badly-off humans and other animals has been a priority of mine for many years. There has always been a sense of urgency behind my thinking: to me, practical philosophy isn’t simply a set of interesting puzzles, but a fundamentally important enterprise to make the world a better place. A few years ago, I embarked on my path to study how I could apply my compassion more systematically in a way that helps me be more effective at making this world a better place. In summer 2021, I started gathering resources on animal ethics and helping animals that would help others make a difference, which resulted in this compilation (and this forum post) that would later help ground this project.In 2022, we met and found we had an aligned vision for a project like AWL. We joined forces and decided to make the library happen.This is how this project was born. This website is the resource we wish we had had in our hands when we started learning about animal welfare and humanity’s role in helping end the neglected suffering of millions of animals.Go to the library here: Animal Welfare Library.Structure of the SiteThe website is structured into home, library and act now pages. (We are also drafting a `why care’ page summarising key arguments in the animal welfare space). The home page has some highlighted resources and organisations and it prompts visitors to explore the library. The Library has all the resources we compiled and the options to filter and search (desktop only) for specific items, themes or key terms. 
The act now is split into four parts: career (which redirects to Animal Advocacy Careers), donations (redirecting to Animal Charity Evaluators), expanding our social influence and eating plant-based (each of these last two redirects to a subset of relevant resources within the library). When possible, we tried to keep things minimalistic.FeedbackThis is all still work in progress - we wanted to put it out there be... |
May 18, 2023 |
EA - Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans) by titotal
24:08
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans), published by titotal on May 17, 2023 on The Effective Altruism Forum.[Disclaimer: While I have dabbled in machine learning, I do not consider myself an expert.]IntroductionWhen introducing newcomers to the idea of AI existential risk, a typical story of destruction will involve some variation of the “paperclip maximiser” story. The idea is that some company wishes to use an AGI to perform some seemingly simple and innocuous task, such as producing paperclips. So they set the AI with a goal function of maximizing paperclips. But, foolishly, they haven’t realized that taking over the world and killing all the humans would allow it to maximize paperclips, so it deceives them into thinking it’s friendly until it gets a chance to defeat humanity and tile the universe with paperclips (or wiggles that the AI interprets as paperclips under its own logic).What is often not stated in these stories is an underlying assumption about the structure of the AI in question. These AI’s are fixed goal utility function maximisers, hellbent on making an arbitrary number as high as possible, by any means necessary. I’ll refer to this model as “fanatical” AI, although I’ve seen other posts refer to them as “wrapper” AI, referring to their overall structure.Increasingly, the assumption that AGI’s will be fanatical in nature is being challenged. I think this is reflected in the “orthodox” and “reform” AI split. This post was mostly inspired by Nostalgebraist’s excellent “why optimise for fixed goals” post, although I think there is some crossover with the arguments of the “shard theory” folks.Humans are not fanatical AI. They do have goals, but the goals change over time, and can only loosely be described by mathematical functions. Traditional programming does not fit this description, being merely a set of instructions executed sequentially. None of the massively successful recent machine-learning based AI fits this description, as I will explain in the next section. In fact, nobody even knows how to make such a fanatical AI.These days AI is being designed by trial-and-error techniques. Instead of hand designing every action it makes, we’re jumbling its neurons around and letting it try stuff until it finds something that works. The inner working of even a very basic machine learning model is somewhat opaque to us. What is ultimately guiding the AI development is some form of evolution: the strategies that work survive, the strategies that don’t get discarded.This is ultimately why I do not believe that most AI will end up as fanatical maximisers. Because in the world that an AI grows up in, trying to be a fanatical global optimizer is likely to get you killed.This post relies on two assumptions: that there will be a fairly slow takeoff of AI intelligence, and that world takeover is not trivially easy. I believe both to be true, but I won’t cover my reasons here for the sake of brevity.In part 1, I flesh out the argument for why selection pressures will prevent most AI from becoming fanatical. 
In part 2, I will point out some ways that catastrophe could still occur, if AI is trained by fanatical humans.Part 1: Why AI won’t be fanatical maximisersGlobal and local maxima[I’ve tried to keep this to machine learning 101 for easy understanding].Machine learning, as it exists today, can be thought of as an efficient trial and error machine. It contains a bazillion different parameters, such as the “weights” of a neural network, that go into one gigantic linear algebra equation. You throw in an input, compute the output of the equation, and “score” the result based on some goal function G. So if you were training an object recognition program, G might be “number of objects correctly identified”. ... |
May 17, 2023 |
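To make the trial-and-error picture in the excerpt above concrete, here is a minimal, hypothetical Python sketch of random-perturbation search. It is not the author's code, and modern systems actually use gradient-based optimization of the goal function G rather than blind jiggling, but it illustrates the evolutionary framing the post uses: parameters that score better under G are kept, the rest are discarded. The names score and perturb and the toy objective are assumptions made for illustration.

```python
import random

def score(params):
    """Goal function G: higher is better.
    Toy stand-in for something like 'number of objects correctly identified'."""
    target = [0.3, -1.2, 0.8]          # arbitrary hidden objective
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def perturb(params, step=0.1):
    """Randomly jiggle the parameters ('jumbling its neurons around')."""
    return [p + random.uniform(-step, step) for p in params]

params = [0.0, 0.0, 0.0]
best = score(params)
for _ in range(10_000):
    candidate = perturb(params)
    candidate_score = score(candidate)
    if candidate_score > best:          # strategies that work survive...
        params, best = candidate, candidate_score
    # ...strategies that don't are simply discarded.

print(params, best)
```

Running this prints a parameter vector close to the toy target, the analogue of the system "finding something that works" under selection pressure from G.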
EA - Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow by Dr Dan Epstein
11:41
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing The Long Game Project: Tabletop Exercises for a Resilient Tomorrow, published by Dr Dan Epstein on May 17, 2023 on The Effective Altruism Forum. "No pilot would dare to fly a commercial airliner without significant training in a flight simulator ... yet decision-makers are expected to make critical decisions relying on 'theory', 'experience', 'intuition', 'gut feeling', or less" - de Suarez et al., 2012. tl;dr: The Long Game Project is an organisation focused on helping institutions improve decision-making, enhance team collaboration, and build resilience. We aim to do this by a) producing tools and resources to improve access to effective tabletop exercising; b) providing advice, thought leadership and full-suite consultation to organisations interested in tabletop exercises; c) encouraging a culture of running organisational games and building a community around tabletop games. Follow us on Twitter and LinkedIn, play with our beta scenario-generator tool, provide feedback for some cash prizes and spread the word! Set the scene: Imagine a world where organisations are prepared for the unexpected, where decision-makers can more confidently navigate crises and create positive outcomes even in the face of chaos because they have regular, simulated experience. Dynamic decision-making under uncertain conditions is a skill that can, and should, be practised. Experience matters. Play it now, before you live it later. Roll for Initiative: Introducing The Long Game Project. Our mission is to help organisations improve decision-making (IIDM), enhance team collaboration, and build resilience using tabletop exercises that are rules-light but experience-heavy. Why we exist: The world is becoming increasingly complex and unpredictable, with organisations facing various challenges. Institutional decision-making in practice is often ill-equipped to handle such complexity, leading to sub-optimal decision quality and severe failure modes. 80,000 Hours lists IIDM as one of the best ways to improve our long-term future and lists it as a neglected issue. While IIDM is a complex cause area, requiring some disentangling, there are existing levers, interventions and practical suggestions that are under-utilised in practice.
The Long Game Project exists to bridge this gap by applying several levers to a diverse range of sectors, helping organisations adapt and thrive in an ever-changing landscape. Our toolkit of levers includes: tabletop exercises, role-playing, future simulations, facilitation, probabilistic thinking games, goal and value alignment, decision design and other methods that combine game design and ideas from behavioural economics, psychology, and organisational theory. Critical Hit: Our organisational aims: Be the place to point to for tabletop scenario advice and tools - provide expert-level thought leadership and consultation on running serious tabletop exercises. Lower the entry bar to effective tabletop exercising by producing tools, giving guidance and providing expertise. Empower institutions to tackle complex challenges effectively by transforming how they plan, react, and adapt in an increasingly uncertain world. Focus on scenarios of the world's most pressing problems and long-term horizons. Encourage a culture of running organisational games and building a community around tabletop games. Theory of Change: Our theory of change is rooted in the belief that effective decision-making is a learnable skill. By providing organisations with simulated experience, we aim to create a positive feedback loop in which participants develop stronger decision-making abilities, leading to better outcomes and increased resilience. This, in turn, creates more efficient, effective, and adaptable organisations, ultimately c... |
May 17, 2023 |
EA - New CSER Director: Prof Matthew Connelly by HaydnBelfield
01:14
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New CSER Director: Prof Matthew Connelly, published by HaydnBelfield on May 17, 2023 on The Effective Altruism Forum. The Centre for the Study of Existential Risk (CSER) at Cambridge University is getting a new Director – Professor Matthew Connelly will be our Director from July 2023. Seán Ó hÉigeartaigh, our Interim Director, is staying at CSER and will be focussing more on AI governance and safety research. Prof Connelly is currently a Professor of international and global history at Columbia University, and for the last seven years has been Co-Director of its social science research centre, the Institute for Social and Economic Research and Policy. He has significant policy experience as a consultant for the Gates Foundation, the World Bank, and the Department of Homeland Security and Senate committees. He is the author of The Declassification Engine: What History Reveals about America's Top Secrets, and of Fatal Misconception (an Economist and Financial Times book of the year). His next book is on "the history of the end of the world", a subject on which he has taught multiple courses. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 17, 2023 |
EA - Hiatus: EA and LW post summaries by Zoe Williams
01:58
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiatus: EA and LW post summaries, published by Zoe Williams on May 17, 2023 on The Effective Altruism Forum. For the past ~8 months, I've been summarizing the top posts on the EA and LW forums each week (see archive here), a project supported by Rethink Priorities (RP). I've recently taken on a new role as an AI Governance & Strategy Research Manager at RP. As a result of this we're going to be putting the forum summaries on hiatus while we work on what they should look like in the future, and hire someone new to run the project. A big thank you to everyone who completed our recent survey - it's great input for us as we evaluate this project going forward! The hiatus will likely last for ~4-6 months. We'll continue to use the existing email list and podcast channel (EA Forum Podcast (Summaries)) when it's back up and running, so subscribe if you're interested and feel free to continue to share it with others. If you're looking for other ways to stay up to date in the meantime, some resources to consider: Newsletters: The EA Forum Digest - a weekly newsletter recommending new EA Forum posts that have high karma, active discussion, or could use more input; Monthly Overload of Effective Altruism - a monthly newsletter with top research, organizational updates and events in the EA community. Podcasts: EA Forum podcast (curated posts) - human narrations of some of the best posts from the EA Forum; Nonlinear Library - AI narrations of all posts from the EA Forum, Alignment Forum, and LessWrong that meet a karma threshold. There are heaps of cause-area-specific newsletters out there too - if you have suggestions, please share them in the comments. I've really enjoyed my time running this project! Thanks for reading and engaging, to Coleman Snell for narrating, and to all the writers who've shared their ideas and helped people find new opportunities to do good. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 17, 2023 |
EA - Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects! by Dawn Drescher
03:30
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Play Regrantor: Move up to $250,000 to Your Top High-Impact Projects!, published by Dawn Drescher on May 17, 2023 on The Effective Altruism Forum. Summary. We have collected expressions of interest for a total donation budget of about $200,000 from donors who are interested in using our platform. How many of them will end up donating and how much they'll donate depends on you and how many great projects you bring onto the platform. But Greg Colbourn has firmly committed another $50,000 for AGI moratorium projects for a total of $250,000! Funding: Over the past months we've asked many donors whether they would consider using the Impact Markets platform to find giving opportunities and what their 2023 donation budget is. We're now at an aggregate budget of $200,000 and counting. Of course these donors are free to decide that they prefer another project over the ones that have registered on our platform, so that these $200,000 are far from committed. But Greg Colbourn has firmly committed another $50,000 for projects related to an AGI moratorium! He is already in touch with some early-stage efforts, but we definitely need more people to join the cause. You want to become a top donor – a project scout? You want to support a project? You think Charity X is the most impactful one but it's still underfunded? Convince them to register on app.impactmarkets.io. Then register your donations to them. (They have to confirm that your claim is accurate.) Speed the process by repeating it with a few high-impact projects. When the projects reach some milestones, they can submit them for review. The better the review, the higher your donor score. A score > 200 still puts you firmly among the top 10 donors on our platform. That can change as more project scouts register their high-impact donations. At the moment, we're still allowing donors to import all their past donations. (Please contact us if you would like to import a lot.) We will eventually limit this to the past month. What if you need money: If you're running or planning to run some impactful project, you can register it on app.impactmarkets.io and pitch it to our top donors. If they think it's great, their donation (an endorsement in itself) can pull in many more donations from the people who trust them. We're continually onboarding more people with great donation track records, so both the people on the leaderboard and the ranking algorithm are in constant flux. Please check for updates at least in monthly intervals. You want to donate but don't know where: For now, you can add yours to the expressions of interest we're collecting. That'll increase the incentive for awesome projects to join our platform and for awesome donors to vie for the top donor status. Once there are top donors that you trust (say, because they share your values), the top donors' donations will guide you to high-impact projects. Such projects may be outside the purview of charity evaluators like GiveWell or ACE, or they might be too young to have earned top charity status yet. Hence why our impact market doubles as a crowdsourced charity evaluator. Questions: If you have any questions: please see our full FAQ; check out this demo of a bot trained on our FAQ; have a look at our recent Substack posts; join our Discord and the #questions-and-feedback channel; or of course ask your question below! Acknowledgements.
Thanks to Frankie, Dony, Matt, and Greg for feedback, and thanks to Greg and everyone who has filled in the expression of interest form for their pledges! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 17, 2023 |
EA - The Charity Entrepreneurship top ideas new charity prediction market by CE
16:50
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Charity Entrepreneurship top ideas new charity prediction market, published by CE on May 17, 2023 on The Effective Altruism Forum. TL;DR: Charity Entrepreneurship would like your help in our research process. We are running a prediction market on the top 10 ideas across two cause areas. A total of $2000 in prizes is available for prediction accuracy and comment quality. Check it out at: The CE prediction market. For our upcoming (winter 2024) Incubation Program, we are researching two cause areas. Within global development we are looking into mass media interventions – social and behavior change communication campaigns delivered through mass media (e.g., radio advertising, TV shows, text messages, etc.) aiming to improve human well-being. Within animal welfare we are looking into preventive (or long-run) interventions for farmed animals – the new charities will not just positively affect farmed animals in the short term, but will have a long-run effect on preventing animal suffering in farms 35 years from now. We have narrowed down to the most promising top 10 ideas for each of these cause areas. The Charity Entrepreneurship research team will be doing ~80-hour research projects on as many of these ideas as we can between now and July, carefully examining the evidence and crucial considerations that could either make or break the idea. At the end of this we will aim to recommend two to three ideas for each cause area. This is where you come in. We want to get your views and predictions on our top ideas within each cause area. We have put our top idea list onto the Manifold Markets prediction market platform, and you are invited to join a collective exercise to assess these ideas and input into our decision making. You can do this by reading the list of top ideas (below) for one or both of the cause areas, and then going to the Manifold Markets platform and: making a prediction about how likely you think it is that a specific idea will be recommended by us at the end of our research; and leaving comments on each idea with your thoughts or views on why it might or might not be recommended, or why it might or might not be a good idea. As well as having the great benefit of helping our research, we have $2000 in prizes to give away (generously donated by Manifold Markets). $1,000 for comment prizes: we will give $100 to each person who gives one of the top 10 arguments or pieces of information that change our minds the most regarding our selection decisions. $1,000 for forecasting prizes: we will grant prizes to the individuals who do the best at predicting which of the ideas we end up selecting. More details on these prizes are available on the page at Manifold. The market is open until June 5, 2023 for predictions and comments. This gives the CE research team time to read and integrate comments and insights into our research before our early July deadline. To participate, read the list below and go to the market to make predictions and leave comments. Summary of ideas under consideration: Mass Media: By 'mass media' interventions we refer to social and behavior change communication campaigns delivered through mass media, aiming to improve human well-being. 1.
Using mobile technologies (mHealth) to encourage women to attend antenatal clinics and/or give birth at a healthcare facility. Across much of sub-Saharan Africa, only about 55% of women make the recommended four+ antenatal care visits, and only 60% give birth at a healthcare facility. This organization would encourage greater healthcare utilization and achieve lower maternal and neonatal mortality by scaling up evidence-based mHealth interventions, such as one-way text messages or two-way SMS/WhatsApp communications. These messages would aim to address common concerns about professional healthcare, as well as reminding women not to mis... |
May 17, 2023 |
EA - Don't optimise for social status within the EA community by freedomandutility
00:43
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't optimise for social status within the EA community, published by freedomandutility on May 17, 2023 on The Effective Altruism Forum. One downside of engaging with the EA community is that social status in the community probably isn't well aligned with impact, so if you consciously or subconsciously start optimising for status, you may be less impactful than you could be otherwise. For example, roles outside EA organisations which lead to huge social impact probably won't help much with social status inside the EA community. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 17, 2023 |
EA - Some quotes from Tuesday's Senate hearing on AI by Daniel Eth
06:13
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some quotes from Tuesday's Senate hearing on AI, published by Daniel Eth on May 17, 2023 on The Effective Altruism Forum. On Tuesday, the US Senate held a hearing on AI. The hearing involved 3 witnesses: Sam Altman, Gary Marcus, and Christina Montgomery. (If you want to watch the hearing, you can watch it here – it's around 3 hours.) I watched the hearing and wound up live-tweeting quotes that stood out to me, as well as some reactions. I'm copying over quotes to this post that I think might be of interest to others here. Note this was a very impromptu process and I wasn't originally planning on writing a forum post when I was jotting down quotes, so I've presumably missed a bunch of quotes that would be of interest to many here. Without further ado, here are the quotes (organized chronologically): Senator Blumenthal (D-CT): "I think you [Sam Altman] have said, in fact, and I'm gonna quote, 'Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.' You may have had in mind the effect on jobs, which is really my biggest nightmare in the long run..." Sam Altman: [doesn't correct the misunderstanding of the quote and instead proceeds to talk about possible effects of AI on employment] Sam Altman: "My worst fears are that... we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways; it's why we started the company... I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear eyed about what the downside case is and the work that we have to do to mitigate that." Sam Altman: "I think the US should lead [on AI regulation], but to be effective, we do need something global... There is precedent – I know it sounds naive to call for something like this... we've done it before with the IAEA... Given what it takes to make these models, the chip supply chain, the sort of limited number of competitive GPUs, the power the US has over these companies, I think there are paths to the US setting some international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. And I think it would be great for the world." Senator Coons (D-DE): "I understand one way to prevent generative AI models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach that's called 'constitutional AI' that gives the model a set of values or principles to guide its decision making. Would it be more effective to give models these kinds of rules instead of trying to require or compel training the model on all the different potentials for harmful content? ... I'm interested also, what international bodies are best positioned to convene multilateral discussions to promote responsible standards? We've talked about a model being CERN and nuclear energy. I'm concerned about proliferation and nonproliferation." Senator Kennedy (R-LA): "Permit me to share with you three hypotheses that I would like you to assume for the moment to be true... Hypothesis number 3...
there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us and hurt us the entire time that we are dying... Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day..." Gary Marcus: "Number 1: a safety review like we use with the FDA prior to widespread deployment... Number 2: a nimble monitoring agency to follow what's going ... |
May 17, 2023 |
EA - Probably Good launches improved website and 1-on-1 advising by Probably Good
04:21
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launches improved website & 1-on-1 advising, published by Probably Good on May 16, 2023 on The Effective Altruism Forum. TL;DR: Probably Good's career guidance services just got a lot better! We renovated & rebranded our website – check it out and leave feedback. We opened applications for 1-on-1 career guidance services! Apply here. We published a lot more career profiles & completed our career guide. Meet us at EAG! Come by our organization fair booth or community office hours. What's new? Probably Good is a career guidance organization aiming to make impact-centered career advice more accessible to more people (you can read more about our goals, approach, and more in our about-us page). Our renovated site is a big improvement to our overall look and should provide a much better experience for readers. Updates include: a complete redesign to make the site more friendly, engaging, and easy to navigate (it loads faster, it looks better, and it's now easier to spend the hours of research your career deserves); a full end-to-end guide for how to think about and pursue an impactful career (we restructured the guide to be more accessible and engaging, and will continue making updates/adding summaries in the coming weeks); and a LOT of new content: 5 career path profiles (Climate Science, Psychology, For-Profit Entrepreneurship, Journalism, Veterinary Medicine), impactful career path recommendations for Biology, Economics, Psychology, and Business degrees, and updated core concept articles. Apply for 1-on-1 career advising: Along with the new site, we also officially launched our 1-on-1 career advising service. These calls are a chance for us to help you think through your goals and plans, connect you to concrete opportunities and experts in Probably Good's focus areas, and provide ongoing consultation services. Applications are now open! If you're currently planning your career path or looking to make a change, we encourage you to apply. We're also happy to work with people who are motivated to do good but are unfamiliar or less involved with EA, so feel free to share this opportunity more broadly with your network outside of the community. If you have further questions about the application process, don't hesitate to contact us at contact@probablygood.org. Get Involved: Our team will be at EAG, so if you'd like to learn more about Probably Good or chat about career advising, feel free to stop by our table at the organization fair Friday or come by community office hours on Saturday 5-6pm. Apply for advising! Give us feedback on the site: there are still quite a few changes we plan on making and technical quirks we'll continue to update over the coming weeks. That said, we'd greatly appreciate any feedback on the site. To ensure that we'll see your comments, the best way to leave feedback is through our contact form. You can also reach out directly at hello@probablygood.org. Direct people who might be interested to our site: if you're a community organizer (especially at a university or in a region outside the U.S. & U.K.) we'd appreciate it if you'd spread the word about our resources & advising opportunities and let us know what further resources would be useful for your community. Acknowledgements: Many thanks to User-Friendly for making our whole website redesign & rebranding possible!
They did a great job understanding our brand & needs, and provided a new look for PG that we're really excited about. We also want to shout out our newest team members who helped make all these updates possible! Itamar Shatz, our new Head of Growth. Itamar's writing about applied psychology and philosophy is read by over a million people each year, and linked from places like The NY Times and TechCrunch. He has a PhD from Cambridge University, where he is an affiliated researcher... |
May 16, 2023 |
EA - Charity Entrepreneurship’s research into large-scale global health interventions by CE
08:05
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Charity Entrepreneurship's research into large-scale global health interventions, published by CE on May 16, 2023 on The Effective Altruism Forum. TL;DR: Description of our comprehensive research process, used to determine the most effective charity ideas within the large-scale global health cause area – our focus for the most recent research round. Starting with 300+ ideas, we used our iterative process to find four highly scalable, cost-effective interventions to be launched from our Incubation Program. Every year at Charity Entrepreneurship, we try to find the best interventions to launch impact-focused charities through our Incubation Program. As a research-driven organization, we try to continuously improve our research methodology and process to ensure a robust and comprehensive analysis of the interventions under consideration. In our last research round (late 2022 to early 2023), we focused on the area of large-scale global health. In this post we share with you our insights on the objectives, research framework, and selection criteria that have guided us in identifying and recommending the most impactful ideas in this space. Why "large-scale" global health? There is evidence to suggest that the larger a charity scales, the less cost-effective it becomes. This tradeoff likely applies to most cause areas, but it is most evident in the global health and development space. In this diagram from an Open Philanthropy talk, we can clearly see this correlation mapped out (Source: Open Philanthropy's Cause Prioritization Framework Talk, min. 22:12). The diagram shows GiveWell's top-recommended charities from 2020 clustered on the 10x cash line, each having the ability to spend approximately $100 million or more annually. GiveDirectly is located on the 1x cash point, having the capacity to spend approximately $100 billion annually. This has led us to two considerations: Firstly, it suggests that those who prioritize evidence-based, impact-driven philanthropy may identify highly effective, yet challenging-to-scale interventions that surpass the efficacy of GiveWell's top recommendations. However, identifying such interventions may be challenging. Secondly, it means that Charity Entrepreneurship needs to determine how to balance cost-effectiveness and scalability when recommending potential high-impact interventions. During our 2020 and 2021 research rounds in the global health and development space, our primary focus was on maximizing cost-effectiveness. We homed in on policy charities in particular, which are likely to reside in the top left quadrant of the scalability versus cost-effectiveness graph; such organizations may be many times more effective than current top recommendations by GiveWell, but have limited capacity to absorb additional funds.
For instance, HLI estimates here that LEEP, the longest-running policy charity incubated by Charity Entrepreneurship, is approximately 100 times more cost-effective than cash transfers. In 2022, we made the strategic decision to shift our focus from maximizing cost-effectiveness to maximizing scalability. This decision was made given the apparent high level of funding available from organizations such as GiveWell. We challenged ourselves to seek out the most promising new charity ideas that could scale to absorb $5 million or more in funding within five years, while also maintaining the same level of cost-effectiveness as current top GiveWell recommendations (10x cash, ~$100/DALY). Our research process: In late 2022 and early 2023, we conducted a six-month research round with a team of four staff members, as well as several research fellows, to identify the most promising new charity ideas. Our approach prioritized ideas that met the following criteria, in order of importance: Surpassed our benchmark of 10x cash, and could sc... |
May 16, 2023 |
EA - Announcing the Confido app: bringing forecasting to everyone by Blanka
15:04
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Confido app: bringing forecasting to everyone, published by Blanka on May 16, 2023 on The Effective Altruism Forum. Summary: Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions. We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles. In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is also used in government settings. See our demo or request your Confido workspace for free. The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations. Confido is under active development and there is a lot more to come. We'd love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding. Why are we building Confido? We think there is a lot of attention in the EA space toward making better forecasts – investing in big public forecasting platforms, researching better scoring and aggregation methods, skilling up superforecasters and other quantitatively and technically minded people in the EA community, building advanced tools for more complex probabilistic models, etc. This is clearly important and well done, and we do not expect to add much to this effort. However, we believe some other aspects of forecasting / quantified uncertainty are also valuable and currently neglected. For example, little effort has gone into making these concepts and tools accessible to people without a math or technical background. This includes, for example, many people from: organizations working on animal welfare; organizations working on non-technical AI safety, strategy and governance; organizations working on biological risks and pandemic preparedness; and organizations working on improving policymaking and governance in general, think tanks, even government bodies. We think all of these would benefit from clearer ways of communicating uncertain beliefs and estimates, yet existing tools may have a high barrier of entry. What makes Confido unique? Confido is a tool for working collaboratively with probabilistic forecasts, estimates, beliefs and quantified uncertainty together with your team.
Several features distinguish Confido from existing tools in this space: a strong emphasis on being easy to understand and convenient to use; the ability to use it internally and privately, including self-hosting; the ability to use it for more than just forecasting questions (more below); and the fact that Confido is free and open source. Ease of use & user experience: Confido's two main goals thus are to be maximally easy to understand (easy to get started with, requiring minimal background knowledge, guiding the user where needed) and convenient to use (requiring minimum hassle and extra steps to use it and perform tasks, pleasant to work with). The second part is also very important because when a tool is cum... |
May 16, 2023 |
EA - 2023 update on Muslims for Effective Altruism by Kaleem
09:58
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 2023 update on Muslims for Effective Altruism, published by Kaleem on May 15, 2023 on The Effective Altruism Forum. Summary: Muslims for Effective Altruism made progress this year through our three projects: "The Muslim Network for Positive Impact", "The Muslim Impact Lab", and "Afterfund", as well as continued behind-the-scenes work on fundraising, incubating new projects, and developing our short and medium term plans. The number of members of Muslims for Effective Altruism has continued to grow at a rate of ~25 members per year since mid-2021 (now at ~50 members). "Verdict": It was a good (but not perfect) year for us, and we're looking forward to doing even more interesting work, and having more of an impact, in the next one! Introduction: Around a year ago we posted our first post which gave a high-level overview of why we think starting EA projects aimed at Muslims, or focusing on the intersection between EA and Islam, would be a high value endeavor. We were really pleasantly surprised at the amount of interest and support expressed about this endeavor, and continue to be grateful for the encouragement and feedback we receive from the community. Perhaps the most notable change before reading the rest of this post is that we've moved to thinking about Muslims for Effective Altruism's structure as a federation of independent projects, rather than as one large org. We think this is useful because it prevents our lack of managerial capacity from holding back or interfering with our motivated project leads, as well as allowing us to increase our likelihood of expanding our network as other impactful projects can get off the ground without our knowledge or input. Kaleem will be attending EAG in London in May as well as hosting the "Muslims in EA" meetup there - so we thought it would be a good time to update everyone on what we've been up to in case you'd like to meet to discuss any aspects of our work. So, this post aims to provide an update on some of the work on which we've managed to make headway over the past year, as well as things we have fallen short on, and ways in which our plans have changed since the initial post. Projects: The Muslim Impact Lab: The Muslim Impact Lab is a research and advisory body. This dynamic multi-disciplinary team is focused on collating and producing content on the intersections between EA ideas and Islamic intellectual history, as well as providing expert consulting services to EA-aligned organizations doing outreach in Muslim communities. The Lab assisted GiveDirectly by co-authoring their post in which they launched the Yemen Zakat Fund and previously advised them on their plans. Nayaaz Hashim, one of the co-founders of the lab, also published a piece on Unconditional Cash Transfers from an Islamic perspective on the Muslim Impact Lab Substack. To date, GiveDirectly has raised $165,000 for the Yemen program; however, it is difficult to quantify our counterfactual impact on this figure, given that other organizations such as Giving What We Can, and GiveDirectly themselves, have also been involved in raising funds.
In the future we should look into ways of building mechanisms into our processes that may help us establish our counterfactual impact with regards to raising effective Zakat. The lab is currently working on a survey to understand the moral priorities of the Muslim community, and developing a research agenda for, and producing content on, exploring intersections and challenges between understandings of Islam and current theories of effective altruism. The plan for the Muslim Impact Lab is to continue doing this research and, in collaboration with the Muslim Network for Positive Impact, put together a structured fellowship in the near future. Core Team: Maryam Khan, Nayaaz Hashim, Faezah Izadi, Mufti Sayed Haroon, Ahmed Gh... |
May 16, 2023 |
EA - Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast) by 80000 Hours
25:18
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Accidentally teaching AI models to deceive us (Ajeya Cotra on The 80,000 Hours Podcast), published by 80000 Hours on May 15, 2023 on The Effective Altruism Forum. Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Ajeya Cotra on accidentally teaching AI models to deceive us. You can click through for the audio, a full transcript, and related links. Below is the episode summary and some key excerpts. Episode Summary: I don't know yet what suite of tests exactly you could show me, and what arguments you could show me, that would make me actually convinced that this model has a sufficiently deeply rooted motivation to not try to escape human control. I think that's, in some sense, the whole heart of the alignment problem. And I think for a long time, labs have just been racing ahead, and they've had the justification — which I think was reasonable for a while — of like, "Come on, of course these systems we're building aren't going to take over the world." As soon as that starts to change, I want a forcing function that makes it so that the labs now have the incentive to come up with the kinds of tests that should actually be persuasive. – Ajeya Cotra. Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons. Today's guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods. As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it. Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky! Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through.
She describes three such motivational archetypes: Saints — models that care about doing what we really want; Sycophants — models that just want us to say they've done a good job, even if they get that praise by taking actions they know we wouldn't want them to; and Schemers — models that don't care about us or our interests at all, who are just pleasing us so long as that serves their own agenda. In principle, a machine learning training process based on reinforcement learning could spit out any of these three attitudes, because all three would perform roughly equally well on the tests we give them, and 'performs well on tests' is how these models are selected. But while that's true in principle, maybe it's not something that could plausibly happen in the real world. Af... |
May 16, 2023 |
EA - A flaw in a simple version of worldview diversification by NunoSempere
09:57
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A flaw in a simple version of worldview diversification, published by NunoSempere on May 15, 2023 on The Effective Altruism Forum. Summary: I consider a simple version of "worldview diversification": allocating a set amount of money per cause area per year. I explain in probably too much detail how that setup leads to inconsistent relative values from year to year and from cause area to cause area. This implies that there might be Pareto improvements, i.e., moves that you could make that will result in strictly better outcomes. However, identifying those Pareto improvements wouldn't be trivial, and would probably require more investment into estimation and cross-area comparison capabilities.1 More elaborate versions of worldview diversification are probably able to fix this particular flaw, for example by instituting trading between the different worldviews – though that trading does ultimately have to happen. However, I view those solutions as hacks, and I suspect that the problem I outline in this post is indicative of deeper problems with the overall approach of worldview diversification. The main flaw: inconsistent relative values. This section perhaps has too much detail to arrive at a fairly intuitive point. I thought this was worth doing because I find the point that there is a possible Pareto improvement on the table a powerful argument, and I didn't want to hand-wave it. But the reader might want to skip to the next sections after getting the gist. Deducing bounds for relative values from revealed preferences: Suppose that you order the ex-ante values of grants in different cause areas. The areas could be global health and development, animal welfare, speculative long-termism, etc. Their values could be given in QALYs (quality-adjusted life-years), sentience-adjusted QALYs, expected reduction in existential risk, but also in some relative unit.2 For simplicity, let us just pick the case where there are two cause areas: More undiluted shades represent more valuable grants (e.g., larger reductions per dollar: of human suffering, animal suffering or existential risk), and lighter shades represent less valuable grants. Due to diminishing marginal returns, I've drawn the most valuable grants as smaller, though this doesn't particularly matter. Now, we can augment the picture by also considering the marginal grants which didn't get funded. In particular, imagine that the marginal grant which didn't get funded for cause #1 has the same size as the marginal grant that did get funded for cause #2 (this doesn't affect the thrust of the argument, it just makes it more apparent). Now, from this, we can deduce some bounds on relative values. In words rather than in shades of colour, this would be: spending L1 dollars at cost-effectiveness A greens/$ is better than spending L1 dollars at cost-effectiveness B reds/$; spending L2 dollars at cost-effectiveness X reds/$ is better than spending L2 dollars at cost-effectiveness Y greens/$. Or, dividing by L1 and L2: A greens is better than B reds; X reds is better than Y greens. In colors, this would correspond to all four squares having the same size. Giving some values, this could be: 10 greens is better than 2 reds; 3 reds is better than 5 greens. From this we could deduce that 6 reds > 10 greens > 2 reds, or that one green is worth between 0.2 and 0.6 reds. But now there comes a new year: the above was for one year.
Now comes another year, with its own set of grants. But we are keeping the amount we allocate to each area constant. It's been a less promising year for green, and a more promising year for red. So this means that some of the stuff that wasn't funded last year for green is funded now, and some of the stuff that was funded last year for red isn't funded now. Now we can do the same comparisons as last time: And when ... |
May 15, 2023 |
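As a compact restatement of the bound derived in the worldview diversification excerpt above (no new material, just the post's own illustrative numbers written as inequalities, with g and r denoting the value of one green and one red unit of grant value):

```latex
% From the two revealed preferences in the example:
%   10 greens preferred to 2 reds, and 3 reds preferred to 5 greens.
\begin{align*}
10g > 2r \;&\Rightarrow\; g > 0.2\,r\\
3r > 5g \;\Leftrightarrow\; 6r > 10g \;&\Rightarrow\; g < 0.6\,r\\
&\Rightarrow\; 0.2\,r < g < 0.6\,r
\end{align*}
```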
EA - EA Survey 2022: Demographics by David Moss
14:03
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey 2022: Demographics, published by David Moss on May 15, 2023 on The Effective Altruism Forum. Summary: Gender: Since 2020, the percentage of women in our sample has increased (26.5% vs 29.3%) and the percentage of men decreased (70.5% vs 66.2%). More recent cohorts of EAs have lower percentages of men than earlier cohorts. This pattern is compatible with either increased recruitment of women, non-binary or other non-male EAs in more recent years and/or men being more likely to drop out of EA. We examine differences between cohorts across years and find no evidence of significant differences in dropout between men and women. Race/ethnicity: The percentage of white respondents in our sample (76.26%) has remained fairly flat over time. More recent cohorts contain lower percentages of white respondents (compatible with either increased recruitment and/or lower dropout of non-white respondents). We also examine differences between cohorts across years for race/ethnicity, but do not find a consistent pattern. Age: The average age at which people first get involved in EA (26) has continued to increase. Education and employment: The percentage of students in the movement has decreased since 2020 and the percentage in employment has increased. However, just over 40% of those who joined EA in the last year were students. Universities: 11.8% of respondents attended the top 10 (QS) ranked universities globally. Career strategies: The most commonly cited strategy for impact in one's career was 'research' (20.61%) followed by 'still deciding' (19.63%). More than twice as many respondents selected research as selected 'earning to give' (10.24%), organization-building skills (ops, management), government and policy, entrepreneurship or community building (<10% each). Men were significantly more likely to select research and significantly less likely to select organization-building skills. We found no significant differences by race/ethnicity. Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%). Politics: Respondents continue to be strongly left-leaning politically (63.6% vs 2.4% right-leaning). Our 2022 sample was slightly more left-leaning than in 2019. Religion: A large majority of respondents (69.58%) were atheist, agnostic or non-religious (similar to 2019). Introduction: 3,567 respondents completed the 2022 EA Survey. A recurring observation in previous surveys is that the community is relatively lacking in demographic diversity on the dimensions of gender, age, race/ethnicity, and nationality. In this report, we examine the demographic composition of the community, how it has changed over time, and how this is related to different outcomes. In future posts in this series we will examine differences in experiences of and satisfaction with the community (see posts from 2019 and 2020), and explore the geography of the EA community. In a forthcoming follow-up survey, we will also be examining experiences related to gender and community satisfaction in more detail. Basic Demographics: Gender: The percentage of women has slightly increased since 2020 (26.5% to 29.3%), while the percentage of men has slightly decreased (70.5% to 66.2%). Gender across survey years: Looking across different survey years, we can see that there is now a higher percentage of women in our sample than in the earliest years.
In the earliest EA Surveys, we saw just over 75% men, whereas in the most recent survey, we see just over 65%.
Gender across cohorts
Looking across cohorts (EAs who reported first getting involved in a given year), we see that more recent cohorts contain a higher percentage of women than earlier cohorts. This is compatible with either/both increased recruitment of women (or decreased recruitment of men) or disproportionate attrition of... |
May 15, 2023 |
EA - How much do markets value Open AI? by Ben West
07:48
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much do markets value Open AI?, published by Ben West on May 14, 2023 on The Effective Altruism Forum.
Summary: A BOTEC indicates that Open AI might have been valued at 220-430x their annual recurring revenue, which is high but not unheard of. Various factors make this multiple hard to interpret, but it generally does not seem consistent with investors believing that Open AI will capture revenue consistent with creating transformative AI.
Overview
Epistemic status: revenue multiples are intended as a rough estimate of how much investors believe a company is going to grow, and I would be surprised if my estimated revenue multiple was off by more than a factor of 5. But the "strategic considerations" portion of this is a bunch of wild guesses that I feel much less confident about.
There has been some discussion about how much markets are expecting transformative AI, e.g. here. One obvious question is "why isn't Open AI valued at a kajillion dollars?"
I estimate that Microsoft's investment implicitly valued OAI at 220-430x their annual recurring revenue. This is high - average multiples are around 7x, but some pharmaceutical companies have multiples > 1000x. This would seem to support the argument that investors think that OAI is exceptional (but not "equivalent to the Industrial Revolution" exceptional).
However, Microsoft received a set of benefits from the deal which make the EV multiple overstated. Based on adjustments, I can see the actual implied multiple being anything from -2,200x to 3,200x. (Negative multiples imply that Microsoft got more value from access to OAI models than the amount they invested and are therefore willing to treat their investment as a liability rather than an asset.)
One particularly confusing fact is that OAI's valuation appears to have gone from $14 billion in 2021 to $19 billion in 2023. Even ignoring anything about transformative AI, I would have expected that the success of ChatGPT etc. should have resulted in more than a 35% increase.
Qualitatively, my guess is that this was a nice but not exceptional deal for OAI, and I feel confused why they took it. One possible explanation is "the kind of people who can deploy $10B of capital are institutionally incapable of investing at > 200x revenue multiples", which doesn’t seem crazy to me. Another explanation is that this is basically guaranteeing them a massive customer (Microsoft), and they are willing to give up some stock to get that customer.
Squiggle model here
It would be cool if someone did a similar write up about Anthropic, although publicly available information on them is slim. My guess is that they will have an even higher revenue multiple (maybe infinite? I'm not sure if they had revenue when they first raised).
Details
Valuation: $19B
A bunch of news sites (e.g. here) reported that Microsoft invested $10 billion to value OAI at $29 billion. I assume that this valuation is post money, meaning the pre-money valuation is 19 billion. Although this site says that they were valued at $14 billion in 2021, meaning that they only increased in value 35% the past two years. 
This seems weird, but I guess it is consistent with the view that markets aren’t valuing the possibility of TAI.
Revenue: $54M/year
Reuters claims they are projecting $200M revenue in 2023. FastCompany says they made $30 million in 2022. If the deal closed in early 2023, then presumably annual projections of their monthly revenue were higher than $30 million, though it's unclear how much.
Let’s arbitrarily say MRR will increase 10x this year, implying a monthly growth factor of 10^(1/12) ≈ 1.22. Solving the geometric series 200 = x(1 - 1.22^12)/(1 - 1.22), we get that their first month revenue is $4.46M, a run rate of $53.52M/year.
Other factors:
The vast majority of the investment is going to be spent on Micros... |
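To make the arithmetic above easy to check, here is a minimal sketch of the run-rate and multiple calculation, assuming the figures quoted in the post ($200M projected 2023 revenue, an arbitrary 10x MRR increase, and a $29B post-money valuation less the $10B invested). This is not the author's Squiggle model, just the geometric-series step reproduced in Python; small differences from the post's $4.46M and $53.52M come from the post rounding the monthly factor to 1.22.

```python
# Rough reconstruction of the BOTEC described above (illustrative only;
# the post's linked Squiggle model is the authoritative version).

projected_2023_revenue = 200.0   # $M, Reuters projection
growth_multiple = 10.0           # assumed 10x increase in MRR over the year
months = 12

g = growth_multiple ** (1 / months)              # monthly growth factor, ~1.21 (post rounds to 1.22)
first_month = projected_2023_revenue * (g - 1) / (g ** months - 1)
run_rate = first_month * 12                      # annualised recurring revenue at the start of 2023

pre_money_valuation = 29_000 - 10_000            # $M: $29B post-money minus the $10B invested
print(f"first month revenue ~${first_month:.2f}M, run rate ~${run_rate:.0f}M/yr")
print(f"implied revenue multiple ~{pre_money_valuation / run_rate:.0f}x")
```

With the unrounded monthly factor this gives a run rate of roughly $56M/year and an implied multiple in the low 300s, comfortably inside the 220-430x range the post reports once its other adjustments are considered.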
May 15, 2023 |
EA - Consider using the EA Gather Town for your online meetings and events by Siao Si
04:19
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider using the EA Gather Town for your online meetings and events, published by Siao Si on May 14, 2023 on The Effective Altruism Forum.
Tl;dr
The EA Gather town is cost-efficient to share among EA organisations - and gives you access to its own highly engaged EA community.
Ways you can use the EA Gather space:
Have a permanent online ‘office’ for your organisation or group to cowork in
Run meetings or internal events in the space
Run public events, either for the broader EA community or other groups
The space is run by a group of volunteers, so ping one of us on here if you want to talk about setting up a presence there - or just hop onto the space for a while and see how you find it!
EA Gather Updates
Gather has reduced their free plan from 25 concurrent users to 10, which has forced many EA groups using it to seek different services, which may also charge or have their own limitations. However, CEA has generously agreed to fund the EA Gather Town instance for 30-40 concurrent users, a capacity which EA organisations can freely piggyback on.
The rest of this post will discuss the benefits and drawbacks of using the space.
Benefits
‘Free’ virtual space
Given the uneven distribution of usage over a month, almost all of the capacity we’re paying for goes unused. Each org might have a weekly meeting of 1-2 hours, spiking the usage to near capacity, then have 0-5 users online for the rest of the week. So it’s very likely you could run your own weekly meetings at whatever time suits you without any concern about capacity, and virtually certain if you have any flexibility to adjust the times. We have a shared event calendar so that you can track whether your usage spikes might overlap. In that event, we have some excess funding to boost capacity.
Integration with a growing section of the EA community
EA groups that have used the space regularly include EAGx Cambridge, Charity Entrepreneurship, Anima International, Training for Good, Alignment Ecosystem Development, EA France, EA Hong Kong, Metaculus, and more. Last year we were the meetup and hangout space for EAGxVirtual, and hopefully will host many more such events. We also have many independent regular users, who might be future staff of, donors to, beneficiaries of, or otherwise supporters of your group.
It’s entirely up to you how public your office space is to other users. Some EA groups welcome guests, some are entirely private, some are in between. We’re currently exploring intuitive visual norms to clearly signal the preferences of different groups (feel free to suggest some!). Also, your office is not a prison - your members are always welcome and warmly encouraged to join us either for coworking or socialising in the common area :)
Gather Town native functionality/default norms
Gather Town has a number of nice features that led us to originally set up this group there and to stay there since:
Intuitive video call protocol (if you stand near someone, you’re in conversation with them)
Embeddable webpages (so you can eg have native access to pomodoro timers)
Cute aesthetic - your office can look like a virtual office, a virtual park, a virtual pirate ship, or anything else you can imagine! 
You can traverse on foot, by go-kart, or by magic portal :)
Various other bits of functionality and suggested norms
Drawbacks
The main reasons why you might not want to use the space:
Bugginess
Gather is relatively new, and has a few more moving parts than other video calling services. It has a few intermittent glitches (most of which can be resolved by refreshing the page). Twice in the last 13 months or so I’m aware of it having gone down for about an hour. If you need perfect uptime, Zoom is probably better. Note there’s both an app and browser version, so one might work substantially better than the other at any given po... |
May 15, 2023 |
EA - Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts by Luke Eure
03:48
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the African EA Forum Competition - $4500 in prizes for excellent EA Forum Posts, published by Luke Eure on May 14, 2023 on The Effective Altruism Forum.
The Effective Altruism Forum serves as the main platform for thought leadership and experience sharing within the effective altruism movement. It is the virtual gathering spot for EAs and the space where most EAs will go to check for resources, materials and experiences for various areas of interest. However, there are low levels of authorship by EAs from Africa, which gives the impression that not much is happening in the African space with regards to EA - despite there being 10+ local EA groups in Africa. There is a need for increased thought leadership, experience sharing, and engagement from an African perspective that reflects the growth of EA in Africa and brings African perspectives to the wider EA community.
To get more Africans writing on the forum, I'm excited to announce the African EA Forum Competition.
How it works
Prizes will be awarded for winners and runners up in each of three categories:
Cause exploration: Explorations of new cause areas or contribution to thinking on existing cause areas
African perspectives on EA: Posts that challenge or complement existing EA perspectives and/or cause areas from an African worldview
Summaries of existing work: Informing the EA community about ongoing/completed research projects, sharing experiences within community groups, or reflections on being in EA
Top prize in each category will be awarded $1,000. Runner up in each category will get $500. No need to include the category in your submission.
Judging
Posts will be judged based on the following rubric:
Originality of insight
Clarity
Discussion provoked (judged by the post’s forum score and number of comments)
Persuasiveness of argument (replaced by Relevance to forum readers for summaries of existing work)
The members of our judging panel are:
Alimi Salifou - Events and Outreach Coordinator for EA Nigeria
Jordan Pieters - independent community builder
Kaleem Ahmid - Project Manager at Effective Ventures
Dr. Kikiope Oluwarore - co-founder of Healthier Hens
Zainab Taonga Chirwa - Chairperson for Effective Altruism UCT
Support offered to writers
We will support writers in the following two ways:
Virtual training on EA Forum writing: there will be a ~1-2 hour workshop to build the capacity of individuals interested in writing forum posts
Offering mentorship: curating a list of individuals who are happy to offer feedback to new writers to bolster the confidence of those writing about their posts, especially those who may feel intimidated by the seemingly high bar of forum posts
Writers do not have to attend the workshop or receive mentorship to be eligible for the competition. They only have to make a forum post within the competition period, and meet the eligibility criteria below. Please fill out this form if you would like to join the workshop or receive mentorship.
Who is eligible
Anybody who is:
a citizen of an African country
a child of a citizen of an African country
Posts should be made to the EA forum with the tag ‘Africa EA Forum Competition’ to make them easy to find, and then the writer should identify themself by filling out this form. Posts with multiple authors are eligible as long as 50% or more of the authors meet the above criteria.
Timeline
The competition will run for 3 months. 
Any post made between 14 May 2023 - 11:59pm on Friday 18 August 2023 is eligible.
Please reach out to me with any questions or feedback on how this competition could be better! (ljeure@gmail.com)
Thanks to Daniel Yu for funding the prizes, Waithera Mwangi for support in writing this post, and to our judges for offering their time.
Thanks for listening. To help us out with The Nonlinear Library or to learn mo... |
May 14, 2023 |
EA - Blog update: Reflective altruism by David Thorstad
15:00
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Blog update: Reflective altruism, published by David Thorstad on May 14, 2023 on The Effective Altruism Forum.
About me
I’m a research fellow in philosophy at the Global Priorities Institute. Starting in the Fall, I'll be Assistant Professor of Philosophy at Vanderbilt University. (All views are my own, except the worst. Those are to be blamed on my cat.)
There are many things I like about effective altruism. I’ve started a blog to discuss some views and practices in effective altruism that I don’t like, in order to drive positive change both within and outside of the movement.
About this blog
The blog features long-form discussions, structured into thematic series of posts, informed by academic research. Currently, the blog features six thematic series, described below.
One distinctive feature of my approach is that I share a number of philosophical views with many effective altruists. I accept or am sympathetic to all of the following: consequentialism; totalism; fanaticism; expected value maximization; and the importance of using science, reasons and evidence to solve global problems. Nevertheless, I am skeptical of many views held by effective altruists including longtermism and the view that humanity currently faces high levels of existential risk. We also have a number of methodological disagreements.
I've come to understand that this is a somewhat distinctive approach within the academic literature, as well as in the broader landscape. I think that is a shame. I want to say what can be said for this approach, and what can be learned from it. I try to do that on my blog.
About this document
The blog is currently five months old. Several readers have asked me to post an update about my blog on the EA Forum. I think that is a good idea: I try to be transparent about what I am up to, and I value feedback from my readers. Below, I say a bit about existing content on the blog; plans for new content; and some lessons learned during the past five months.
Existing series
Series 1: Academic papers
The purpose of this blog is to use academic research to drive positive change within and outside of the effective altruism movement. This series draws insights from academic papers related to effective altruism.
Sub-series A: Existential risk pessimism and the time of perils
This series is based on my paper "Existential risk pessimism and the time of perils". The paper develops a tension between two claims: Existential Risk Pessimism (levels of existential risk are very high) and the Astronomical Value Thesis (efforts to reduce existential risk have astronomical value). It explores the Time of Perils hypothesis as a way out of the tension.
Status: Completed. Parts 1-6 present the main argument of the paper. Part 7 discusses an application to calculating the cost-effectiveness of biosecurity. Part 8 draws implications. Part 9 responds to objections.
Sub-series B: The good it promises
This series is based on a volume of essays entitled The good it promises, the harm it does: Critical essays on effective altruism. The volume brings together a diverse collection of scholars, activists and practitioners to critically reflect on effective altruism. In this series, I draw lessons from papers contained in the volume.
Status: In progress. Part 1 introduces the series and discusses the foreword to the book by Amia Srinivasan. 
Part 2 looks at Simone de Lima’s discussion of colonialism and animal advocacy in Brazil. Part 3 looks at Carol J Adams' care ethical approach.
Series 2: Academics review WWOTF
Will MacAskill’s book What we owe the future is one of the most influential recent books about effective altruism. A number of prominent academics have written insightful reviews of the book. In this series, I draw lessons from some of my favorite academic reviews of What we owe the future.... |
May 14, 2023 |
EA - The Implications of the US Supreme Court upholding Prop 12 by ishankhire
07:41
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Implications of the US Supreme Court upholding Prop 12, published by ishankhire on May 13, 2023 on The Effective Altruism Forum.
Background: Farming Conditions and Prop 12
Currently in the US, most breeding pigs live on factory farms, where they are confined in gestation crates, metal cages so small that pigs can’t even turn around, while egg-laying hens live in tiny, cramped battery cages that cause a range of psychological and physiological harm. The crowded conditions also have potential health harms by increasing the stress levels of pigs and weakening their immune systems, which can make them more susceptible to zoonotic diseases that may spread to humans.
Starting in the early 2000s, a few animal welfare groups including the Humane Society of the United States aimed to ban the farming system of cages for hens, breeding pigs and veal calves. In 2008, Proposition 2 was passed, which put in place a "production" ban on cages: producers had to ensure pigs, hens, and calves could lie down, turn around, and extend their limbs or wings without hitting the side of an enclosure. However, this specific language allowed some egg farms to circumvent the law by using bigger cages. In 2010, California passed AB 1437, a "sales" ban requiring that all eggs sold in California meet those standards. These laws have brought about results — the share of hens that are cage-free has been rising and is expected to continue doing so.
In 2018, over 62% of California voters passed Proposition 12, the strongest law to improve conditions for farmed animals. Under Prop 12, some of the gaps in these laws are covered — for one, it extends the cage-free ban to cover not just the eggs that are sold in the grocery store (shell eggs) but also liquid eggs, which are sold to restaurants, cafeterias and food manufacturers.
Opposition from Pork Industry
The law is expected to be especially impactful for the pork industry, which has been more resistant to doing away with confinement systems. Progress has been very mixed in terms of companies following through with their commitments to phase out gestation crates. So far, 10 states have banned them, but Prop 12’s space requirements are stricter and close some gaps that allow for loopholes. The law also makes it illegal for eggs and pork to be sold in California if the animals in other states are put in gestation crates (pigs) or battery cages (for chickens). California consumes 14% of the US’s pork and 12% of its eggs and veal, so pork and egg producers would be forced to modify barns or construct new ones (only 1% of existing sow housing meets Prop 12’s standards according to the National Pork Producers Council (NPPC)), which would be costly and time-consuming, causing various meat trade groups to be opposed to it.
Interestingly, some companies, such as Whole Foods, aren’t concerned with the law, as they claim they already meet its animal welfare requirements. I think this is a crucial reason why the phase-out of battery cages did not face as much opposition as the phase-out of pork gestation crates — many companies already have commitments to phase out battery cages. 
In fact, these companies may have the incentive to increase regulations to raise costs on competitors.
For this reason, the law was attacked by various meat industry trade groups, which filed three separate lawsuits to overturn it. The Supreme Court declined to take two of them, and in October 2022, the case National Pork Producers v. Ross began.
Explaining the Supreme Court Ruling
On May 11th, 2023, the Supreme Court upheld Prop 12 in a 5-4 decision in National Pork Producers v. Ross. Interestingly, the verdict was not split along conservative-liberal lines, with 3 conservative judges and 2 liberal judges in the majority.
The pork industry ... |
May 13, 2023 |
EA - I want to read more stories by and about the people of Effective Altruism by Gemma Paterson
06:26
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I want to read more stories by and about the people of Effective Altruism, published by Gemma Paterson on May 12, 2023 on The Effective Altruism Forum.
TL;DR
I want to read more stories by and about the people of the Effective Altruism movement. But like, fun ones, not CVs.
I’ve added a tag for EA origin stories and tagged a bunch of relevant posts from the forum. If I’ve missed some, please tag them. The community experiences tag has a lot of others that don’t quite fit.
I think it is important to emphasise the personal in the effective altruism movement - you never know if your story is enough to connect with someone (especially if you don’t fit the stereotypical EA mold).
I would also be very interested in reading folks’ answers to the "What is your current plan to improve the world?" question from the EA Global application - it’s really helpful to see other people’s thought processes (you can read mine here).
Why?
At least for me, what grabbed and kept my attention when I first heard about EA were the stories of people on the ground trying to do effective altruism.
The audacity of a group of students looking at the enormity of suffering in the world but then pushing past that overwhelm. Recognising that they could use their privileges to make a dent if they really gave it a go.
The folks behind Charity Entrepreneurship who didn’t stop at one highly effective charity but decided to jump straight into making a non-profit incubator to multiply their impact - building out, in my opinion, some of the coolest projects in the movement.
I love that the 80,000 Hours podcast takes the concept behind Big Talk seriously. It’s absurd but amazing!
I love the ethos of practicality within the movement. It isn’t about purity, it isn’t about perfection, it’s about actually changing the world. These are the people I’d back to build a robust Theory of Change that might just move us towards Fully Automated Luxury Gay Space Communism. Maybe that google doc already exists?
I have never been the kind of person who had role models. I have always been a bit too cynical to put people on a pedestal. I had respect for successful people and tried to learn what I could from them but I didn’t have heroes. But my response to finding the EA movement was, "Fuck, these people are cool."
I think there is a problem with myth making and hero worshipping within EA. I do agree that it is healthier to Live Without Idols. However, I don’t think we should live without stories.
The stories I’m more interested in are the personal ones. Of people actually going out and living their values. Examples of trade-offs that real people make that allow them to be ambitiously altruistic in a way that suits them. That show that it is fine to care about lots of things. That it is okay to make changes in your life when you get more or better information.
I think about this post a lot because I agree that if people think that "doing effective altruism" means they have to live like monks and change their whole lives then they’ll just reject it. Making big changes is hard. People aren’t perfect.
I can trace a huge number of positive changes in my life to my decision to take EA seriously but realistically it was my personal IRL and parasocial connections to the people of EA that gave me the space and support to make these big changes in my life. 
In the footnotes and in this post about my EA story, I’ve included a list of podcasts, blog posts and other media by people within EA that were particularly influential and meaningful to me (if you made them then thank you <3).
While I do see EA as the key source of purpose in my life, it is a core value among many (I like Valuism - doing the intrinsic values test was really helpful for me). Like everyone else in the EA movement, I’m not an impact machine, I’m a person. I love throwing the... |
May 13, 2023 |
EA - Prioritising animal welfare over global health and development? by Vasco Grilo
30:39
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Prioritising animal welfare over global health and development?, published by Vasco Grilo on May 13, 2023 on The Effective Altruism Forum.
Summary
Corporate campaigns for chicken welfare increase wellbeing way more cost-effectively than the best global health and development (GHD) interventions. In addition, the effects on farmed animals of such interventions can influence which countries they should target, and those on wild animals might determine whether they are beneficial or harmful.
I encourage Charity Entrepreneurship (CE), Founders Pledge (FP), GiveWell (GW), Open Philanthropy (OP) and Rethink Priorities (RP) to:
Increase their support of animal welfare interventions relative to those of GHD (at the margin).
Account for effects on animals in the cost-effectiveness analyses of GHD interventions.
Corporate campaigns for chicken welfare increase nearterm wellbeing way more cost-effectively than GiveWell’s top charities
Corporate campaigns for chicken welfare are considered one of the most effective animal welfare interventions. A key supporter of these is The Humane League (THL), which is one of the 3 top charities of Animal Charity Evaluators.
I calculated the cost-effectiveness of corporate campaigns for broiler welfare in human-years per dollar from the product of:
Chicken-years affected per dollar, which I set to 15 as estimated here by Saulius Simcikas.
Improvement in welfare as a fraction of the median welfare range when broilers go from a conventional to a reformed scenario, assuming:
The time broilers experience each level of pain defined here (search for "definitions") in a conventional and reformed scenario is given by these data (search for "pain-tracks") from the Welfare Footprint Project (WFP).
The welfare range is symmetric around the neutral point, and excruciating pain corresponds to the worst possible experience.
Excruciating pain is 1 k times as bad as disabling pain.
Disabling pain is 100 times as bad as hurtful pain.
Hurtful pain is 10 times as bad as annoying pain.
The lifespan of broilers is 42 days, in agreement with section "Conventional and Reformed Scenarios" of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso.
Broilers sleep 8 h each day, and have a neutral experience during that time.
Broilers being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.
Median welfare range of chickens, which I set to RP's median estimate of 0.332.
Reciprocal of the intensity of the mean human experience, which I obtained supposing humans:
Sleep 8 h each day, and have a neutral experience during that time.
Being awake is as good as hurtful pain is bad. This means being awake with hurtful pain is neutral, thus accounting for positive experiences.
I computed the cost-effectiveness in the same metric for the lowest cost to save a life among GW's top charities from the ratio between:
Life expectancy at birth in Africa in 2021, which was 61.7 years according to these data from OWID.
Lowest cost to save a life of 3.5 k$ (from Helen Keller International), as stated by GW here.
The results are in the tables below. 
The data and calculations are here (see tab "Cost-effectiveness").
Intensity of the mean experience as a fraction of the median welfare range:
Broiler in a conventional scenario | Broiler in a reformed scenario | Human |
---|---|---|
2.59*10^-5 | 5.77*10^-6 | 3.33*10^-6 |
Broiler in a conventional scenario relative to a human | Broiler in a reformed scenario relative to a human | Broiler in a conventional scenario relative to a reformed scenario |
---|---|---|
7.77 | 1.73 | 4.49 |
Improvement in chicken welfare when broilers go from a conventional to a reformed scenario as a fraction of...
The median welfare range of chickens | The intensity of the mean human experience |
---|---|
2.... |
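As a rough cross-check of the calculation described above, here is a minimal sketch in Python. The intensity figures are my reading of the (partially garbled) table, and the improvement is taken simply as the difference between the conventional and reformed intensities; the post's linked spreadsheet is the authoritative source, so treat the output as illustrative only.

```python
# Hedged back-of-the-envelope version of the comparison described above.

chicken_years_per_dollar = 15        # Saulius Simcikas' estimate
chicken_welfare_range = 0.332        # Rethink Priorities' median estimate

# Intensity of the mean experience as a fraction of the species' median welfare range
# (assumed values, read off the reconstructed table above):
broiler_conventional = 2.59e-5
broiler_reformed = 5.77e-6
human_mean_experience = 3.33e-6      # 16 h awake out of 24, at hurtful-pain-level intensity

welfare_improvement = broiler_conventional - broiler_reformed   # ~2.0e-5

campaign_human_years_per_dollar = (
    chicken_years_per_dollar * welfare_improvement * chicken_welfare_range
    / human_mean_experience
)

# GiveWell comparison: lowest cost to save a life, valued as a full life expectancy
ghd_human_years_per_dollar = 61.7 / 3500

print(campaign_human_years_per_dollar)                                # ~30 human-years per dollar
print(ghd_human_years_per_dollar)                                     # ~0.018 human-years per dollar
print(campaign_human_years_per_dollar / ghd_human_years_per_dollar)   # ratio on the order of 1,000x
```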
May 13, 2023 |
EA - Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool by david reinstein
03:54
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposed - 'How Much Does It Cost to Save a Life?' Quiz, calculator, tool, published by david reinstein on May 12, 2023 on The Effective Altruism Forum.
Epistemic basis/status: I've talked this over with Grace and others at GWWC, and people seem generally interested. I'm posting this to get feedback and gauge interest before potentially pushing it further.
Basic idea
I'd like to get your thoughts on a "How Much Does It Cost to Save a Life?"[1] quiz and calculator. I've been discussing this with Giving What We Can; it's somewhat modeled off their how rich am I calculator, which drives a lot of traffic to their site.
This would mainly target non-EAs, but it would try to strike a good balance between sophistication and simplicity. It could start as a quiz to get people's attention. People would be asked to guess this cost. They could then be asked to reconsider it in light of some follow-up questions. This might be a good opportunity for a chatbot to work its magic.
After this interaction, the 'correct answer' and 'how well did I do' would take you to an interactive page, presenting the basic calculation and reasoning. (Before or after presenting this) it could also allow users to adjust their moral and epistemic parameters and the scope of their inquiry. This might be something to unfold gradually, letting people specify first one thing, and then maybe more, if they like. E.g.:
Target: rich or poor countries, which age groups, etc.
Relative value of a child's or adult's life
How much do you weight life-years for certain states
Which evidence do you find more plausible
Do you want to include or exclude certain types of benefits
Discount rate
We would aim to go viral (or at least bacterial)!
Value/ToC
I believe that people would be highly interested in this: it could be engaging and pique curiosity and competitiveness (a bit click-baity, maybe, but the payoff is not click bait)!
It could potentially make news headlines. It's an "easy story" for media people, asks a question people can engage with, etc. ("How much does it cost to save a life? Find out after the break!"), giving the public a chance to engage with the question: "How much does it cost to save a life?"
It could help challenge misconceptions about the cost of saving lives, contributing to a more reality-based, impact-focused, and evidence-driven donor community. If people do think it's much cheaper than it is, as some studies suggest, it would probably be good to change this misconception. It may also be a stepping stone towards encouraging people to think more critically about measuring impact and considering EA-aligned evaluations.
> Greater acceptance and understanding of EA, better epistemics in the general public, better donation and policy choices
Implementation
While GiveWell does have a page with a lot of technical details, it doesn't quite capture the interactive and compelling aspects I'm envisioning for this tool.
Giving What We Can's response has been positive, but they understandably lack the capacity within their core team to take on such a project. They suggest it could make for an interesting volunteer project if a UX designer and an engineer were interested in participating.
Considering the enthusiasm and the potential for synergy with academic research (which could be supported by funds for Facebook academic ads), I'm contemplating the best approach to bring this idea to life. 
I tentatively propose the following steps:
1. Put out a request for a volunteer to help develop a proof of concept or minimum viable product. Giving What We Can has some interested engineers, and I could help with guidance and encouragement.
2. Apply for direct funding for the project, possibly collaborating with groups focused on quantitative uncertainty and "build your own cost-effectiveness" initiatives, or perhaps with SoGi... |
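For concreteness, here is a purely hypothetical sketch of the kind of adjustable calculation such a tool might expose. The parameter names and default values are invented for illustration and are not GiveWell's or GWWC's figures; a real implementation would pull its baseline from published cost-effectiveness estimates.

```python
# Illustrative only: a toy "adjust your own parameters" cost-per-life calculation,
# loosely mirroring the user-adjustable inputs listed above. All numbers are placeholders.

def cost_to_save_a_life_equivalent(
    base_cost_per_life=3500.0,     # hypothetical headline figure for a top charity, in USD
    child_vs_adult_weight=1.0,     # relative value the user places on a child's vs an adult's life
    evidence_discount=1.0,         # 1.0 = take the evidence at face value, >1.0 = more sceptical
    excluded_benefits_share=0.0,   # fraction of modelled benefits the user chooses to ignore
):
    adjusted = base_cost_per_life * evidence_discount / child_vs_adult_weight
    return adjusted / (1 - excluded_benefits_share)

print(cost_to_save_a_life_equivalent())                                 # 3500.0
print(cost_to_save_a_life_equivalent(evidence_discount=1.5,
                                     excluded_benefits_share=0.2))      # 6562.5
```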
May 13, 2023 |
EA - Why GiveWell funded the rollout of the malaria vaccine by GiveWell
08:05
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why GiveWell funded the rollout of the malaria vaccine, published by GiveWell on May 12, 2023 on The Effective Altruism Forum.
Author: Audrey Cooper, GiveWell Philanthropy Advisor
Since our founding in 2007, GiveWell has directed over $600 million to programs that aim to prevent malaria, a mosquito-borne disease that causes severe illness and death. Malaria is preventable and curable, yet it killed over 600,000 people in 2021—mostly young children in Africa.[1]
Following the World Health Organization’s approval of the RTS,S/AS01 malaria vaccine (RTS,S) in late 2021,[2] GiveWell directed $5 million to PATH to accelerate the rollout of the vaccine in certain areas of Ghana, Kenya, and Malawi. This grant aimed to enable these communities to gain access to the vaccine about a year earlier than they otherwise would, protecting hundreds of thousands of children from malaria.[3]
Although we’re very excited about the potential of the RTS,S malaria vaccine to save lives, it isn’t a panacea. We still plan to support a range of malaria control interventions, including vaccines, nets, and antimalarial medicine.
In this post, we will:
Explain how we found the opportunity to fund the malaria vaccine
Discuss why we funded this grant
Share our plan for malaria funding moving forward
Identifying a gap in vaccine access
In October 2021, we shared our initial thoughts on the approval of the RTS,S malaria vaccine by the World Health Organization (WHO). At that point, we weren’t sure whether the vaccine would be cost-effective and were not aware of any opportunities for private donors to support the expansion of vaccine access.
In the following months, our conversations with PATH, a large global health nonprofit that we’ve previously funded, revealed that there might be an opportunity to help deploy the vaccine more quickly in certain regions. PATH had been supporting the delivery of the vaccine in Ghana, Kenya, and Malawi as part of the WHO-led pilot—the Malaria Vaccine Implementation Program (MVIP)—since the pilot began in 2019.[4] In order to generate evidence about the effectiveness of the vaccine, randomly selected areas in each country received the vaccine during the early years of the pilot, while "comparison areas" would receive the vaccine at a later date, if the vaccine was recommended by the WHO.[5]
Once the vaccine had received approval from the WHO, the WHO and PATH believed there was an opportunity to build on the momentum and groundwork of the pilot to roll out the vaccine to the comparison areas as soon as possible. However, the expectation at the time was that expanding use to the comparison areas would need to wait for the standard process through which low-income countries apply for support to access vaccines from Gavi, the Vaccine Alliance.[6] This process would have made it possible to introduce the vaccine at the end of 2023 at the earliest.[7]
However, there was another path through which these vaccines could be provided more quickly. 
GlaxoSmithKline (GSK), the vaccine manufacturer, had committed to donate up to 10 million vaccine doses as part of its support for the MVIP.[8] This quantity of vaccine was set aside to allow completion of the pilot program, including vaccination in the comparison areas.[9] However, additional support was needed to be able to utilize these vaccines in advance of Gavi financing, including (for example) funding to cover the costs of safe injection supplies and vaccine shipping and handling, as well as the technical assistance required to support vaccine implementation.With funding from GiveWell, PATH believed it could provide the necessary technical assistance to the ministries of health in Ghana, Kenya, and Malawi to support them in using the donated vaccines from GSK and expand vaccine access to the comparison areas at the end of 202... |
May 12, 2023 |
EA - US public opinion of AI policy and risk by Jamie Elsey
26:07
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US public opinion of AI policy and risk, published by Jamie Elsey on May 12, 2023 on The Effective Altruism Forum.
Summary
On April 21st 2023, Rethink Priorities conducted an online poll to assess US public perceptions of, and opinions about, AI risk. The poll was intended to conceptually replicate and extend a recent AI-related poll from YouGov, as well as drawing inspiration from some other recent AI polls from Monmouth University and Harris-MITRE.
The poll covered opinions regarding:
A pause on certain kinds of AI research
Should AI be regulated (akin to the FDA)?
Worry about negative effects of AI
Extinction risk in 10 and 50 years
Likelihood of achieving greater than human level intelligence
Perceived most likely existential threats
Expected harm vs. good from AI
Our population estimates reflect the responses of 2444 US adults, poststratified to be representative of the US population. See the Methodology section of the Appendix for more information on sampling and estimation procedures.
Key findings
For each key finding below, more granular response categories are presented in the main text, along with demographic breakdowns of interest.
Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23% opposition across different framings in YouGov’s polls). Hence, support is robust across different framings and surveys. The slightly lower level of support in our survey may be explained by our somewhat more neutral framing.
Should AI be regulated (akin to the FDA)? Many more people think AI should be regulated than think it should not be. We estimate that 70% believe Yes, 21% believe No, and 9% don’t know.
Worry about negative effects of AI. Worry in everyday life about the negative effects of AI appears to be quite low. We estimate 72% of US adults worry little or not at all about AI, 21% report a fair amount of worry, and less than 10% worry a lot or more.
Extinction risk in 10 and 50 years. Expectation of extinction from AI is relatively low in the next 10 years but increases in the 50 year time horizon. We estimate 9% think AI-caused extinction to be moderately likely or more in the next 10 years, and 22% think this in the next 50 years.
Likelihood of achieving greater than human level intelligence. Most people think AI will ultimately become more intelligent than people. We estimate 67% think this moderately likely or more, 40% highly likely or more, and only 15% think it is not at all likely.
Perceived most likely existential threats. AI ranks low among other perceived existential threats to humanity. AI ranked below all 4 other specific existential threats we asked about, with an estimated 4% thinking it the most likely cause of human extinction. For reference, the most likely cause, nuclear war, is estimated to be selected by 42% of people. The other least likely cause - a pandemic - is expected to be picked by 8% of the population.
Expected harm vs. good from AI. Despite perceived risks, people tend to anticipate more benefits than harms from AI. We estimate that 48% expect more good than harm, 31% more harm than good, 19% expecting an even balance, and 2% reporting no opinion.
The estimates from this poll may inform policy making and advocacy efforts regarding AI risk mitigation. 
The findings suggest an attitude of caution from the public, with substantially greater support than opposition to measures that are intended to curb the evolution of certain types of AI, as well as for regulation of AI. However, concerns over AI do not yet appear to feature especially prominently in public perception of the existential risk landscape: people report worrying about it only a little, and rarely picked i... |
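For readers unfamiliar with the weighting step mentioned above ("poststratified to be representative of the US population"), here is a minimal, self-contained illustration of poststratification using made-up strata and population shares. It is not Rethink Priorities' actual weighting scheme, which uses many more demographic variables.

```python
from collections import Counter

# Toy poststratification example (hypothetical data, not the survey's):
# reweight respondents so each stratum counts in proportion to its population share.
sample = [
    # (age stratum, supports a pause on AI research)
    ("18-29", True), ("18-29", True), ("18-29", False),
    ("30+", True), ("30+", False),
]
population_share = {"18-29": 0.2, "30+": 0.8}  # assumed census shares

stratum_counts = Counter(stratum for stratum, _ in sample)
weights = {
    stratum: population_share[stratum] / (count / len(sample))
    for stratum, count in stratum_counts.items()
}

weighted_support = sum(weights[s] for s, v in sample if v) / sum(weights[s] for s, _ in sample)
print(round(sum(v for _, v in sample) / len(sample), 2))  # 0.6 unweighted
print(round(weighted_support, 2))                         # 0.53 after reweighting
```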
May 12, 2023 |
EA - Our Progress in 2022 and Plans for 2023 by Open Philanthropy
14:16
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Our Progress in 2022 and Plans for 2023, published by Open Philanthropy on May 12, 2023 on The Effective Altruism Forum.
2022 was a big year for Open Philanthropy:
We recommended over $650 million in grants — more, by far, than in any other year of our history. [More]
We hired our first program officers for three new focus areas in our Global Health and Wellbeing portfolio. [More]
Within our Longtermism portfolio, we significantly expanded our grantmaking and used a series of open calls to identify hundreds of promising grants to individuals and small projects. [More]
We ran the Regranting Challenge, a novel experiment which allocated $150 million to outstanding programs at other grantmaking organizations. [More]
We nearly doubled the size of our team. [More]
This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year, including:
A significant update on how we handle allocating our grantmaking across causes. [More]
A potential leadership transition. [More]
Continued growth in grantmaking and staff. [More]
Continued grantmaking
Last year, we wrote: We aim to roughly double the amount [of funding] we recommend [in 2022] relative to [2021], and triple it by 2025.
In 2022, we recommended over $650 million in grants (up from roughly $400 million in 2021). We changed our plans midway through the year, due to a stock market decline (this just reflects a decline in the market; our main donors are still planning to give away virtually all of their wealth within their lifetimes) that reduced our available assets and led us to adjust the cost-effectiveness bar we use for our spending on global health and wellbeing. 
When we wrote last year’s post, we had tentatively planned to allocate $500 million to GiveWell’s recommended charities; the actual allocation wound up being $350 million (up from $300 million in 2021).Currently, we expect to recommend over $700 million in grants in 2023, and no longer have a definite grantmaking goal for 2024 and 2025.Highlights from this year’s grantmakingThis section outlines some of the major grants we made across our program areas.In grants to charities recommended by GiveWell:$10.4 million to the Clinton Health Access Initiative to support their Incubator program, which looks for cost-effective and scalable health interventions.$13.7 million to New Incentives for conditional cash transfers to boost vaccination rates in Nigeria.$4.4 million to Evidence Action to support their in-line chlorination program in Malawi.We also made a $48.8 million grant to the same program with funds from our 2021 allocation.Many other grants we haven’t listed here (see our full list of GiveWell-recommended grants).In potential risks from advanced AI:Redwood Research to support their research on aligning AI systems.Center for a New American Security to support their work on AI policy and governance.A number of projects related to understanding and aligning deep learning systems.In biosecurity and pandemic preparedness:Columbia University to support research on far-UVC light to reduce airborne disease transmission.Bipartisan Commission on Biodefense to support work on biodefense policy in the US.The Johns Hopkins Center for Health Security to support their degree program for students pursuing careers in biosecurity.In effective altruism community growth (with a focus on longtermism):80,000 Hours (marketing and general support) for its work to help people have more impact with their careers.Support for the translation of effective altruism-related content into non-English languages.Bluedot Impact to run courses related to several of our priority cause areas.Asterisk to publish a quarterly journal focused on topics related to effective altruism, among others.A program open to applicat... |
May 12, 2023 |
EA - Simple charitable donation app idea by kokotajlod
03:48
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Simple charitable donation app idea, published by kokotajlod on May 12, 2023 on The Effective Altruism Forum.I'll pay $10xN to the people who build this app, where N is the total karma of this post three months from now, up to a max of $20,000, unless something shady happens like some sort of bot farm. If it turns out this app already exists, I'll pay $1xN instead to the people who find it for me. I'm open to paying significantly more in both cases if I'm convinced of the altruistic case for this app existing; this is just the minimum I personally can commit to and afford.The app consists of a gigantic, full-screen button such that if you press it, the phone will vibrate and play a little satisfying "ching" sound and light up sparkles around where your finger hit, and then $1 will be donated to GiveDirectly.You can keep slamming that button as much as you like to thereby donate as many dollars as you like.In the corner there's a menu button that lets you change from GiveDirectly to Humane League or AMF or whatever (you can go into the settings and input the details for a charity of your choice, adding it to your personal menu of charity options, and then toggle between options as you see fit. You can also set up a "Donate $X per button press instead of $1" option and a "Split each donation between the following N charities" option.)That's it really.Why is this a good idea? Well, I'm not completely confident it is, and part of why I'm posting is to get feedback. But here's my thinking:I often feel guilty for eating out at restaurants. Especially when meat is involved.Currently I donate a substantial amount to charity on a yearly basis (aiming for 10% of income, though I'm not doing a great job of tracking that) but it feels like a chore, I have to remember to do it and then log on and wire the funds. Like paying a bill.If I had this app, I think I'd experiment with the following policy instead: Every time I buy something not-necessary such as a meal at a restaurant, I whip out my phone, pull up the app, and slam that button N times where N is the number of dollars my purchase cost. Thus my personal spending would be matched with my donations. I think I'd feel pretty good while doing so, it would give me a rush of warm fuzzies instead of feeling like a chore. (For this reason I suggest having to press the button N times, instead of building the app to use a text-box-and-number-pad.)Then I'd check in every year or so to see whether my donations were meeting the 10% goal and make a bulk donation to make up the difference if not.If it exceeds the goal, great!I think even if no one saw me use this app, I'd still use it & pay for it. But there's a bonus effect having to do with the social consequences of being seen using it. Kinda like how a big part of why veganism is effective is that you can't hide it from anyone, you are forced to bring it up constantly. Using this app would hopefully have a similar effect -- if you were following a policy similar to the one I described, people would notice you tapping your phone at restaurants and ask you what you were doing & you'd explain and maybe they'd be inspired and do something similar themselves. 
(Come to think of it, it's important that the "ching" sound not be loud and obnoxious, otherwise it might come across as ostentatious.)I can imagine a world where this app becomes really popular, at least among certain demographics, similar to (though probably not as successful as) veganism.Another mild bonus is that this app could double as a tracker for your discretionary spending. You can go into the settings and see e.g. a graph of your donations over time, statistics on what time of day you do them, etc. and learn things like "jesus do I really spend that much on dining out per month?" and "huh, I guess those Amazon purchase... |
May 12, 2023 |
EA - In defence of epistemic modesty [distillation] by Luise
13:57
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: In defence of epistemic modesty [distillation], published by Luise on May 10, 2023 on The Effective Altruism Forum.
This is a distillation of In defence of epistemic modesty, a 2017 essay by Gregory Lewis. I hope to make the essay’s key points accessible in a quick and easy way so more people engage with them. I thank Gregory Lewis for helpful comments on an earlier version of this post. Errors are my own.
Note: I sometimes use the first person ("I claim"/"I think") in this post. This felt most natural but is not meant to imply any of the ideas or arguments are mine. Unless I clearly state otherwise, they are Gregory Lewis’s.
What I Cut
I had to make some judgment calls on what is essential and what isn’t. Among other things, I decided most math and toy models weren’t essential. Moreover, I cut the details on the "self-defeating" objection, which felt quite philosophical and probably not relevant to most readers. Furthermore, it will be most useful to treat all the arguments brought up in this distillation as mere introductions, while detailed/conclusive arguments may be found in the original post and the literature.
Claims
I claim two things:
You should practice strong epistemic modesty: On a given issue, adopt the view experts generally hold, instead of the view you personally like.
EAs/rationalists in particular are too epistemically immodest.
Let’s first dive deeper into claim 1.
Claim 1: Strong Epistemic Modesty
To distinguish the view you personally like from the view strong epistemic modesty favors, call the former "view by your own lights" and the latter "view all things considered".
In detail, strong epistemic modesty says you should do the following to form your view on an issue:
Determine the ‘epistemic virtue’ of people who hold a view on the issue. By ‘epistemic virtue’ I mean someone’s ability to form accurate beliefs, including how much the person knows about the issue, their intelligence, how truth-seeking they are, etc.
Determine what everyone's credences by their own lights are.
Take an average of everyone’s credences by their own lights (including yourself), weighting them by their epistemic virtue.
The product is your view all things considered. Importantly, this process weighs your credences by your own lights no more heavily than those of people with similar epistemic virtue. These people are your ‘epistemic peers’.
In practice, you can round this process to "use the existing consensus of experts on the issue or, if there is none, be uncertain".
Why?
Intuition Pump
Say your mom is convinced she’s figured out the one weird trick to make money on the stock market. You are concerned about the validity of this one weird trick, because of two worries:
Does she have a better chance at making money than all the other people with similar (low) amounts of knowledge on the stock market who’re all also convinced they know the one weird trick? (These are her epistemic peers.)
How do her odds of making money stack up against people working full-time at a hedge fund with lots of relevant background and access to heavy analysis? (These are the experts.)
The point is that we are all sometimes like the mom in this example. We’re overconfident, forgetting that we are no better than our epistemic peers, be the question investing, sports bets, musical taste, or politics. 
Everyone always thinks they are an exception and have figured [investing/sports/politics] out. It’s our epistemic peers that are wrong! But from their perspective, we look just as foolish and misguided as they look to us.
Not only do we treat our epistemic peers incorrectly, but also our epistemic superiors. The mom in this example didn’t seek out the expert consensus on making money on the stock market (maybe something like "use algorithms" and "you don’t stand a chance"). Instead, she may have li... |
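The weighting step described above can be made concrete with a small sketch. The essay does not prescribe a formula, so treat this as one illustrative way to take a virtue-weighted average of credences, with made-up numbers.

```python
def all_things_considered_credence(credences, virtues):
    """Virtue-weighted average of everyone's by-their-own-lights credences.

    `credences` and `virtues` are parallel lists; your own credence is just one
    entry, weighted no more heavily than that of an epistemic peer.
    """
    total_weight = sum(virtues)
    return sum(c * w for c, w in zip(credences, virtues)) / total_weight

# Example: you (credence 0.9, weight 1) vs. three experts (credences ~0.3, weight 3 each).
# Your all-things-considered credence lands near the expert consensus, not your own view.
print(all_things_considered_credence([0.9, 0.3, 0.25, 0.35], [1, 3, 3, 3]))  # 0.36
```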
May 11, 2023 |
EA - How much funging is there with donations to different EA animal charities? by Brian Tomasik
10:25
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How much funging is there with donations to different EA animal charities?, published by Brian Tomasik on May 11, 2023 on The Effective Altruism Forum.My main questionThe EA Funds Animal Welfare Fund makes grants to many different animal charities. Suppose I want to support one particular charity that they grant to because I think it's better, relative to my values, than most of the other ones. For example, maybe I want to specifically give to Legal Impact for Chickens (LIC), so I donate $1000 to them.Because this donation reduces LIC's room for more funding, it may decrease the amount that the Animal Welfare Fund itself (or Open Philanthropy, Animal Charity Evaluators, or individual EA donors) will give to LIC in the future. How large should I expect this effect to be in general? Will my $1000 donation tend to "funge" against these other EA donors almost fully, so that LIC can be expected to get about $1000 less from them? Is the funging amount more like $500? Is it roughly $0 of funging? Or maybe donating to LIC helps them grow faster, so that they can hire more people and do more things, thereby increasing their room for funding and how much other EA donors give to them?The answer to this question probably varies substantially from one case to the next, and maybe the best way to figure it out would be to learn a lot about the funding situation for a particular charity and the funding inclinations of big EA donors toward that charity. But that takes a lot of work, so I wonder if EA funders have some intuition for what tends to happen on average in situations like this, to inform small donors who aren't going to get that far into the weeds with a particular charity. Does the funging amount tend to be closer to 0% or closer to 100% of what an individual donor gives?I notice that the Animal Welfare Fund sometimes funds ~10% to ~50% of an organization's operating budget, which I imagine may be partly intentional to avoid crowding out small donors. (It may also be motivated by wanting charities to diversify their funding sources and due to limited funds to disburse.) Is it true in general that the Animal Welfare Fund doesn't fully fill room for funding, or are there charities for which the Fund does top up the charity completely? (Note that it would actually be better impact-wise to ensure that the very best charities are roughly fully funded, so I'm not encouraging a strategy of deliberately underfunding them.)In the rest of this post, I'll give more details on why I'm asking about this topic, but this further elaboration is optional reading and is more specific to my situation.My donation preferencesI think a lot of EA donations to animal charities are really exciting. About 1/3 of the grants in the Animal Welfare Fund's Grants Database seem to me roughly as cost-effective as possible for reducing near-term animal suffering. However, for some other grants, I'm pretty ambivalent about the sign of the net impact (whether it's net good or bad).This is mainly for two reasons:I'm unsure if meat reduction, on the whole, reduces animal suffering, mainly because certain kinds of animal farming, especially cattle grazing on non-irrigated pasture, may reduce an enormous amount of wild-animal suffering (though there are huge error bars on this analysis).I'm unsure if antispeciesism in general reduces net suffering. 
In the short run, I worry that it may encourage more habitat preservation, thereby increasing wild-animal suffering. In the long run, moral-circle expansion could encourage people to create lots of additional small-brained sentience, and in (hopefully unlikely) scenarios where human values become inverted, antispeciesist values could multiply total suffering manyfold.If I could press a button to reduce overall meat consumption or to increase concern for an... |
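One way to make the funging question precise (this framing is an editorial addition, not something from the post itself): treat the funging rate as a single parameter and track where a marginal donation ends up. Here D is the individual's donation and f is the assumed funging fraction.

```latex
% Illustrative formalization of the funging question (editorial addition, not from the post).
% D = the individual's donation, f \in [0,1] = the funging fraction.
\[
\Delta_{\text{charity}} = D\,(1-f)
\qquad\qquad
\Delta_{\text{redirected}} = D\,f
\]
% f \approx 0: no funging -- the charity ends up with the full $1000 extra.
% f \approx 1: full funging -- the charity's total is roughly unchanged, and the donor's
% counterfactual impact is instead the value of whatever marginal grant the large funder
% makes with the $1000 it no longer needs to give.
```

Under full funging the small donor is effectively delegating the allocation decision to the large funder, which is why the post cares whether the answer sits closer to 0% or to 100%.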
May 11, 2023 |
EA - US Supreme Court Upholds Prop 12! by Rockwell
01:53
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: US Supreme Court Upholds Prop 12!, published by Rockwell on May 11, 2023 on The Effective Altruism Forum.The United States Supreme Court just released its decision on the country's most pivotal farmed animal welfare case—NATIONAL PORK PRODUCERS COUNCIL ET AL. v. ROSS, SECRETARY OF THE CALIFORNIA DEPARTMENT OF FOOD AND AGRICULTURE, ET AL. —upholding California's Prop 12, the strongest piece of farmed animal legislation in the US.In 2018, California residents voted by ballot measure to ban the sale of pig products that come from producers that use gestation crates, individual crates the size of an adult pig's body that mother pigs are confined to 24/7 for the full gestation of their pregnancies, unable to turn around. In response, the pork industry sued and the case made its way to the nation's highest court.If the Supreme Court had not upheld Prop 12, years of advocacy efforts would have been nullified and advocates would no longer be able to pursue state-level legislative interventions that improve welfare by banning the sale of particularly cruelly produced animal products.It would have been a tremendous setback for the US animal welfare movement. Instead, today is a huge victory.Groups like HSUS spearheaded efforts to uphold Prop 12, even in the face of massive opposition. The case exemplified the extent to which even left-leaning politicians side with animal industry over animal welfare, as even the Biden administration sided with the pork industry.Today is a monumental moment for farmed animal advocacy. Congratulations to everyone who worked to make this happen!Read more about it:Summary and analysis from Lewis Bollard (Senior Program Officer for Farm Animal Welfare at Open Phil) here on Twitter.Victory announcement by the Humane Society of the United States here.New York Times coverage here.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 11, 2023 |
EA - Fatebook for Slack: Track your forecasts, right where your team works by Adam Binks
01:22
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fatebook for Slack: Track your forecasts, right where your team works, published by Adam Binks on May 11, 2023 on The Effective Altruism Forum.Announcing Fatebook for Slack - a Slack bot designed to help high-impact orgs build a culture of forecasting.With Fatebook, you can ask a forecasting question in your team's Slack:Then, everyone in the channel can forecast:When it's time to resolve the question as Yes, No or Ambiguous, the author gets a reminder. Then everyone gets a Brier score, based on their accuracy.It's like a tiny, private, fast Metaculus inside your team's Slack.Why build a culture of forecasting?Make better decisionsCommunicate more clearlyBuild your track recordTrust your most reliable forecastersWe built Fatebook for Slack aiming to help high-impact orgs become more effective.See the FAQs on the website for more info. We'd really value your feedback in the comments, in our Discord, or at adam@sage-future.org.You can add Fatebook to your workspace here.Thanks to all our alpha testers for their valuable feedback, especially the teams at 80,000 Hours, Lightcone, EA Cambridge, and Samotsvety.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
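For readers unfamiliar with the scoring rule mentioned above: a Brier score is just the mean squared error between stated probabilities and binary outcomes. A minimal sketch of that standard definition follows; the function below is illustrative only and is not Fatebook's actual implementation.

```python
# Minimal sketch of Brier scoring for binary (Yes/No) questions.
# Illustrates the standard definition; not Fatebook's actual code.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilities (0..1) and outcomes (0 or 1).
    0.0 is perfect, 0.25 matches always guessing 50%, 1.0 is maximally wrong."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: a forecaster said 80%, 30%, 60%; the questions resolved Yes, No, Yes.
print(brier_score([0.8, 0.3, 0.6], [1, 0, 1]))  # ~0.097 (lower is better)
```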
May 11, 2023 |
EA - Community Health and Special Projects: Updates and Contacting Us by evemccormick
11:18
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Community Health & Special Projects: Updates and Contacting Us, published by evemccormick on May 10, 2023 on The Effective Altruism Forum.SummaryWe’ve renamed our team to Community Health and Special Projects, in part to reflect our scope extending beyond what’s often considered to be “community health.”Since our last forum update, we’ve started working closely with Fynn Heide as an affiliate, along with Anu Oak and Łukasz Grabowski as contractors. Chana Messinger has been acting as interim team lead, while Nicole Ross has been focused on EV US board duties.In response to reports of sexual misconduct by Owen Cotton-Barratt, an external investigation into our team’s response is underway, as well as an internal review.Other key proactive projects we’ve been working on include the Gender Experiences project and the EA Organization Reform project.We are in the early stages of considering some significant strategic changes for our team. We’ve highlighted two examples of possible changes below, one being a potential spin-out of CEA and/or EV and another being a pivot to focus more on the AI safety space.As a reminder, if you’ve experienced anything you’re uncomfortable with in the community or if you would like to report a concern, you can reach our team’s contact people (currently Julia Wise and Catherine Low) via this form (anonymously if you choose).We can also be contacted individually (our individual forms are linked here), or you can contact the whole team at community.health.special.projects@centreforeffectivealtruism.org.We can provide anonymous, real-time conversations in place of calls when requested, e.g. through Google Chat with your anonymous email address.The Community Health team is now Community Health and Special ProjectsWe decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.We see these other areas of work as contributing to our goal: to strengthen the ability of EA and related communities to fulfil their potential for impact, and to address problems that could prevent that. However, those areas of work can be quite disparate, and so “Special Projects” seemed an appropriate name to gesture towards “other miscellaneous things that seem important and may not have a home somewhere else.”We hope that this might go some way to encouraging people to report a wider range of concerns to our team.Our scope of work is guided by pragmatism: we aim to go wherever there are important community-related gaps not covered by others and try to make sure the highest priority gaps are filled. Where it seems better than the counterfactual, we sometimes try to fill those gaps ourselves. That means that our scope is both very broad and not always clear, and also that there will be plenty of things we don’t have the capacity or the right expertise to have fully covered. 
If you’re thinking of working on something you think we might have some knowledge about, the meme we want to spread is “loop us in, but don’t assume it’s totally covered or uncovered.” If we can be helpful, we’ll give advice, recommend resources or connect you with others interested in similar work.Team changesHere’s our current team:Nicole Ross (Head of Community Health and Special Projects)Julia Wise (Community Liaison)Catherine Low (Community Health Associate)Chana Messinger (Interim Head and Community Health Analyst)Eve McCormick (Community Health Pr... |
May 10, 2023 |
EA - Continuous doesn’t mean slow by Tom Davidson
04:24
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Continuous doesn’t mean slow, published by Tom Davidson on May 10, 2023 on The Effective Altruism Forum.Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year.There’s a lot of disagreement about how likely AI is to end up overthrowing humanity. Thoughtful pundits vary from <5% to >90%. What’s driving this disagreement?One factor that often comes up in discussions is takeoff speeds, which Ajeya mentioned in the previous post. How quickly and suddenly do we move from today’s AI, to “expert-human level” AI[1], to AI that is way beyond human experts and could easily overpower humanity?The final stretch — the transition from expert-human level AI to AI systems that can easily overpower all of us — is especially crucial. If this final transition happens slowly, we could potentially have a long time to get used to the obsolescence regime and use very competent AI to help us solve AI alignment (among other things). But if it happens very quickly, we won’t have much time to ensure superhuman systems are aligned, or to prepare for human obsolescence in any other way.Scott Alexander is optimistic that things might move gradually. In a recent ACX post titled ‘Why I Am Not (As Much Of) A Doomer (As Some People)’, he says:So far we’ve had brisk but still gradual progress in AI; GPT-3 is better than GPT-2, and GPT-4 will probably be better still. Every few years we get a new model which is better than previous models by some predictable amount.Some people (eg Nate Soares) worry there’s a point where this changes. Maybe some jump. could take an AI from IQ 90 to IQ 1000 with no (or very short) period of IQ 200 in between.I’m optimistic because the past few years have provided some evidence for gradual progress.I agree with Scott that recent AI progress has been continuous and fairly predictable, and don’t particularly expect a break in that trend. But I expect the transition to superhuman AI to be very fast, even if it’s continuous.The amount of “compute” (i.e. the number of AI chips) needed to train a powerful AI is much bigger than the amount of compute needed to run it. I estimate that OpenAI has enough compute to run GPT-4 on hundreds of thousands of tasks at once.[2]This ratio will only become more extreme as models get bigger. Once OpenAI trains GPT-5 it’ll have enough compute for GPT-5 to perform millions of tasks in parallel, and once they train GPT-6 it’ll be able to perform tens of millions of tasks in parallel.[3]Now imagine that GPT-6 is as good at AI research as the average OpenAI researcher.[4] OpenAI could expand their AI researcher workforce from hundreds of experts to tens of millions. That’s a mind-bogglingly large increase, a factor of 100,000. It’s like going from 1000 people to the entire US workforce. What’s more, these AIs could work tirelessly through the night and could potentially “think” much more quickly than human workers.[5] (This change won’t happen all-at-once. I expect speed-ups from less capable AI before this point, as Ajeya wrote in the previous post.)How much faster would AI progress be in this scenario? 
It’s hard to know. But my best guess, from my recent report on takeoff speeds, is that progress would be much much faster. I think that less than a year after AI is expert-human level at AI research, AI could improve to the point of being able to easily overthrow humanity.This is much faster than the timeline mentioned in the ACX post:if you’re imagining specific years, imagine human-genius-level AI in the 2030s and world... |
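The train-versus-run compute gap that drives this argument can be illustrated with a standard back-of-envelope: training a dense transformer costs roughly 6·N·D FLOPs (N parameters, D training tokens), while generating one token costs roughly 2·N FLOPs. The sketch below uses those standard approximations with placeholder numbers; none of the figures are Davidson's own estimates.

```python
# Back-of-envelope illustration of the train/inference compute ratio the post relies on.
# Uses the common approximations ~6*N*D FLOPs to train a dense transformer
# (N parameters, D training tokens) and ~2*N FLOPs per generated token at inference.
# All numbers below are illustrative placeholders, not estimates from the post.

N = 1e12        # parameters (hypothetical)
D = 2e13        # training tokens (hypothetical)
train_flops = 6 * N * D                      # ~1.2e26 FLOPs for the whole training run

tokens_per_task_day = 1e5                    # tokens one "worker" instance emits per day (assumed)
inference_flops_per_task_day = 2 * N * tokens_per_task_day   # ~2e17 FLOPs

# If the cluster that finished training in ~100 days is reused for inference,
# its daily FLOP budget supports this many parallel task-instances:
parallel_tasks = (train_flops / 100) / inference_flops_per_task_day
print(f"{parallel_tasks:.0f} parallel task-instances")       # ~6,000,000
```

The point of the sketch is only the order of magnitude: whatever hardware was enough to train the model is, by construction, enough to run an enormous number of copies of it.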
May 10, 2023 |
EA - Psychological safety as the yardstick of good EA movement building by Severin
06:47
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Psychological safety as the yardstick of good EA movement building, published by Severin on May 10, 2023 on The Effective Altruism Forum.I recently learned about the distinction between "movement building" and "community building": Community building is for the people involved in a community, and movement building is in service of the cause itself.A story I've heard from a bunch of EA groups is that they start out with community building. They attract a couple people, develop a wonderful vibe, and those people notoriously slack on their reading group preparations. Then, the group organizers get dissatisfied with the lack of visible progress on the EA path, doubt their own impact, and pivot all the way from community building to movement building. No funny pub meetups anymore. Career fellowships and 1-on-1s all the way.I think this throws the baby out with the bathwater, and that more often than not, community building is indeed tremendously valuable movement building, even if it doesn't look like that at first glance.The piece of evidence I can cite on this (and indeed cite over and over again) is Google's "Project Aristotle"-study.In Project Aristotle, Google studied what makes their highest-performing teams highest-performing. And alas: It is not the fanciness of degrees or individual intelligence or agentyness or any other property of the individual team members, but five factors:"The researchers found that what really mattered was less about who is on the team, and more about how the team worked together. In order of importance:Psychological safety: Psychological safety refers to an individual’s perception of the consequences of taking an interpersonal risk or a belief that a team is safe for risk taking in the face of being seen as ignorant, incompetent, negative, or disruptive. In a team with high psychological safety, teammates feel safe to take risks around their team members. They feel confident that no one on the team will embarrass or punish anyone else for admitting a mistake, asking a question, or offering a new idea.Dependability: On dependable teams, members reliably complete quality work on time (vs the opposite - shirking responsibilities).Structure and clarity: An individual’s understanding of job expectations, the process for fulfilling these expectations, and the consequences of one’s performance are important for team effectiveness. Goals can be set at the individual or group level, and must be specific, challenging, and attainable. Google often uses Objectives and Key Results (OKRs) to help set and communicate short and long term goals.Meaning: Finding a sense of purpose in either the work itself or the output is important for team effectiveness. The meaning of work is personal and can vary: financial security, supporting family, helping the team succeed, or self-expression for each individual, for example.Impact: The results of one’s work, the subjective judgement that your work is making a difference, is important for teams. Seeing that one’s work is contributing to the organization’s goals can help reveal impact."What I find remarkable is that "psychological safety" leads the list. While some factors in EA actively work against the psychological safety of its members. To name just a few:EA tends to attract pretty smart people. 
If you throw a bunch of people together who have been used all their lives to being the smart kid in the room, they suddenly lose the default role they had in just about any context. Because now, surrounded by even smarter kids, they are merely the kid. I think this is where a bunch of EAs' impostor syndrome comes from.EAs like to work at EA-aligned organizations. That means that some of us feel like any little chat at a conference (or any little comment on the EA Forum or our social media accounts) also i... |
May 10, 2023 |
EA - Why Not EA? [paper draft] by Richard Y Chappell
03:31
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Not EA? [paper draft], published by Richard Y Chappell on May 9, 2023 on The Effective Altruism Forum.Hi all, I'm currently working on a contribution to a special issue of Public Affairs Quarterly on the topic of "philosophical issues in effective altruism". I'm hoping that my contribution can provide a helpful survey of common philosophical objections to EA (and why I think those objections fail)—the sort of thing that might be useful to assign in an undergraduate philosophy class discussing EA.The abstract:Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but every decent person should share the basic goals or values underlying effective altruism.I cover:Five objections to moral prioritization (including the systems critique)Earning to giveBillionaire philanthropyLongtermism; andPolitical critique.Given the broad (survey-style) scope of the paper, each argument is addressed pretty briefly. But I hope it nonetheless contains some useful insights. For example, I suggest the following "simple dilemma for those who claim that EA is incapable of recognizing the need for 'systemic change'":Either their total evidence supports the idea that attempting to promote systemic change would be a better bet (in expectation) than safer alternatives, or it does not. If it does, then EA principles straightforwardly endorse attempting to promote systemic change. If it does not, then by their own lights they have no basis for thinking it a better option. In neither case does it constitute a coherent objection to EA principles.On earning to give:Rare exceptions aside, most careers are presumably permissible. The basic idea of earning to give is just that we have good moral reasons to prefer better-paying careers, from among our permissible options, if we would donate the excess earnings. There can thus be excellent altruistic reasons to pursue higher pay. This claim is both true and widely neglected. The same may be said of the comparative claim that one could easily have more moral reason to pursue "earning to give" than to pursue a conventionally "altruistic" career that more directly helps people. This comparative claim, too, is both true and widely neglected. Neither of these important truths is threatened by the deontologist's claim that one should not pursue an impermissible career. The relevant moral claim is just that the directness of our moral aid is not intrinsically morally significant, so a wider range of possible actions are potentially worth considering, for altruistic reasons, than people commonly recognize.On billionaire philanthropy:EA explicitly acknowledges the fact that billionaire philanthropists are capable of doing immense good, not just immense harm. Some find this an inconvenient truth, and may dislike EA for highlighting it. But I do not think it is objectionable to acknowledge relevant facts, even when politically inconvenient... 
Unless critics seriously want billionaires to deliberately try to do less good rather than more, it's hard to make sense of their opposing EA principles on the basis of how they apply to billionaires.I still have time to make revisions -- and space to expand the paper if needed -- so if anyone has time to read the whole draft and offer any feedback (either in comments below, or privately via DM/email/whatever), that would be most welcome!Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 10, 2023 |
EA - On missing moods and tradeoffs by Lizka
05:28
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On missing moods and tradeoffs, published by Lizka on May 9, 2023 on The Effective Altruism Forum.My favorite jargony phrase of the ~week is "missing mood."How I've been using it:If you're not feeling sad about some tradeoffs/facts about the world (or if you notice that someone else doesn't seem to be), then you might not be tracking something important (you might be biased, etc.). The “missing mood” is a signal.Note: I’m sharing this short post with some thoughts to hear disagreements, get other examples, and add nuance to my understanding of what’s going on. I might not be able to respond to all comments.Examples1. Immigration restrictionsAn example from the linked essay: immigration restrictions are sometimes justified. But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.2. Long contentThe example from Ben — a simplified sketch of our conversation:Me: How seriously do you hold your belief that “more people should have short attention spans?” And that long content is bad?Ben: I think I mostly just mean that there’s a missing mood: it’s ok to create long content, but you should be sad that you’re failing to communicate those ideas more concisely. I don’t think people are. (And content consumers should signal that they’d prefer shorter content.)(Related: Distillation and research debt, apparently Ben had written a shortform about this a year ago, and Using the “executive summary” style: writing that respects your reader’s time)3-6. Selective spaces, transparency, cause prioritization, and slowing AII had been trying to (re)invent the phrase for situations like the following, where I want to see people acknowledging tradeoffs:Some spaces and events have restricted access. I think this is the right decision in many cases. But we should notice that it's sad to reject people from things, and there are negative effects from the fact that some people/groups can make those decisions.I want some groups of people to be more transparent and more widely accountable (and I frequently want to prioritize transparency-motivated projects on my team, and am sad when we drop them). In some cases, it's just true that I think transparency (or accountability) is more valuable than the other person does. But as I learn more about or start getting involved in any given situation, I usually notice that there are real tradeoffs; transparency has costs like time, risks, etc. There are two ways missing moods pop up in this case:When I'm just ~rallying for transparency, I'm missing a mood of "yes, it's costly in many ways, and it's awful that prioritizing transparency might mean that some good things don’t happen, but I still want more of it." If I don't have this mood, I might be biased by a vibe of "transparency good." When I start thinking more about the tradeoffs, I sometimes entirely change my opinion to agree with the prioritization of whoever it is I’m disagreeing with. Alternatively, my position becomes closer to: "Ok, I don't really know what tradeoffs you're making, and you might be making the right ones. I'm sad that you don't seem to be valuing transparency that much. 
Or I just wish that you were transparent — I don't actually know how much you're valuing transparency."The people I’m disagreeing with might also be missing a mood. They might just not care about transparency or acknowledge its benefits. There’s a big difference (to me) between someone deciding not to prioritize transparency because the costs are too high and someone not valuing it at all, and if I’m not sensing the mood, it might be the latter. (This is especially true if I don’t h... |
May 10, 2023 |
EA - A note of caution on believing things on a gut level by Nathan Barnard
03:05
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A note of caution on believing things on a gut level, published by Nathan Barnard on May 9, 2023 on The Effective Altruism Forum.Joe Carlsmith's latest post discusses the difference between the probabilities that one puts on events on a gut level and on a cognitive level, and advocates updating your gut beliefs towards your cognitive beliefs insofar as the latter better tracks the truth.The post briefly notes that there can be some negative mental health consequences of this. I would like to provide a personal anecdote of some of the costs (and benefits) of changing your gut beliefs to be in line with your cognitive ones.Around 6 months ago my gut realised that one day I was going to die, in all likelihood well before I would wish to. During this period, my gut also adopted the same cognitive beliefs I have about TAI and AI x-risk. All things considered, I expect this to have both decreased my impact from an impartial welfarist perspective and my personal life satisfaction by a substantial amount.Some of the costs for me of this have been:A substantial decrease in my altruistic motivation in favour of self-preservationA dramatic drop in my motivation to workSubstantially worse ability to carry out causes prioritisationDepressionGenerically being a less clear thinkerDeferring my examsI expect to receive a somewhat lower mark in my degree than I otherwise would haveFailing to run my university EA group wellThere have also been some benefits to this:I much more closely examined my beliefs about AI and AI X-riskEngaging quite deeply with some philosophy questionsNote that this is just the experience of one individual and there are some good reasons to think that the net negative effects I've experienced won’t generalise:I’ve always been very good at acting on beliefs that I held at a cognitive level but not at a gut level. The upside therefore to me believing things at a gut level was always going to be small.I have a history of ruminative OCD (also known as pure O) - I almost without caveat recommend that others with ruminative OCD do not engage with potentially unpleasant beliefs one has at a cognitive level on a gut level.I’ve been experiencing some other difficulties in my life that probably made me more vulnerable to depression.In some EA and Rationalist circles, there’s a norm of being quite in touch with one’s emotions. I’m sure that this is very good for some people but I expect that it is quite harmful to others, including myself. For such individuals, there is an advantage to a certain level of detachment from one’s emotions. I say this because I think it’s somewhat lower status to reject engaging with one’s emotions and I think that this is probably harmful.As a final point, note that you are probably bad at affective forecasting. I’ve spent quite a lot of time reading about how people felt close to death and there are a wide variety of experiences. Some people do find that they are afraid of their own deaths when close to them, and others find that they have no fear. I’m particularly struck by De Gaulle’s recollections of his experiences during the first world war where he found he had no fear of death, after being shot leading his men during his early years in the war as a junior officer.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 09, 2023 |
EA - [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models by Center for AI Safety
07:06
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #5]: Geoffrey Hinton speaks out on AI risk, the White House meets with AI labs, and Trojan attacks on language models, published by Center for AI Safety on May 9, 2023 on The Effective Altruism Forum.Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.Subscribe here to receive future versions.Geoffrey Hinton is concerned about existential risks from AIGeoffrey Hinton won the Turing Award for his work on AI. Now he says that part of him regrets his life’s work, as he believes that AI poses an existential threat to humanity. As Hinton puts it, “it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”AI is developing more rapidly than Hinton expected. In 2015, Andrew Ng argued that worrying about AI risk is like worrying about overpopulation on Mars. Geoffrey Hinton also used to believe that advanced AI was decades away, but recent progress has changed his views. Now he says that AI will become “smarter than a human” in “5 to 20 years, but without much confidence. We live in very uncertain times.”The AI race is heating up, but Hinton sees a way out. In an interview with MIT Technology Review, Hinton argues that building AI is “inevitable” given competition between companies and countries. But he argues that “we’re all in the same boat with respect to existential risk,” so potentially “we could get the US and China to agree like we could with nuclear weapons.”Similar to climate change, AI risk will require coordination to solve. Hinton compared the two risks by saying, "I wouldn't like to devalue climate change. I wouldn't like to say, 'You shouldn't worry about climate change.' That's a huge risk too. But I think this might end up being more urgent."When AIs create their own subgoals, they will seek power. Hinton argues that AI agents like AutoGPT and BabyAGI demonstrate that people will build AIs that choose their own goals and pursue them. Hinton and others have argued that this is dangerous because “getting more control is a very good subgoal because it helps you achieve other goals.”Other experts are speaking up on AI risk. Demis Hassabis, CEO of DeepMind, recently said that he believes some form of AGI is “a few years, maybe within a decade away” and recommended “developing these types of AGI technologies in a cautious manner.” Shane Legg, co-founder of DeepMind, thinks AGI is likely to arrive around 2026. Warren Buffet compared AI to the nuclear bomb, and many others are concerned about advanced AI.White House meets with AI labsVice President Kamala Harris met at the White House on Thursday with leaders of Microsoft, Google, Anthropic, and OpenAI to discuss risks from artificial intelligence. This is an important step towards AI governance, though it’s a bit like inviting oil companies to a discussion on climate change—they have the power to solve the problem, but incentives to ignore it.New executive action on AI. After the meeting, the White House outlined three steps they plan to take to continue responding to the challenges posed by AI:To evaluate the risks of generative AI models, the White House will facilitate a public red-teaming competition. 
The event will take place at the DEF CON 31 conference and will feature cutting-edge models provided by leading AI labs.The White House continues to support investments in AI research, such as committing $140M over 5 years to National AI Research Institutes. Unfortunately, it’s plausible that most of this investment will be used to accelerate AI development without being directed at making these systems more safe.The Office of Management and Budget will release guidelines for federal use of AI.Federal agencies promise enforcement action on AI. Four federal agencies iss... |
May 09, 2023 |
EA - Chilean AIS Hackathon Retrospective by Agustín Covarrubias
09:03
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Chilean AIS Hackathon Retrospective, published by Agustín Covarrubias on May 9, 2023 on The Effective Altruism Forum.TL;DRWe hosted an AI Safety “Thinkathon” in Chile. We had participation from 40 students with differing skill levels and backgrounds, with groups totalling 13 submissions.We see potential in:Similar introductory events aiming for a broad audienceCollaborating more often with student organizationsLeveraging remote help from external mentorsWe experimented with an alternative naming, having remote mentors, different problem sources, and student organization partnerships, with varying results.We could have improved planning and communicating the difficulty of challenges.IntroductionIn February, we ran the first AI Safety Hackathon in Chile (and possibly in all of South America). This post provides some details about the event, a teaser of some resulting projects and our learnings throughout.Goals and overview of the eventThe hackathon was meant to kick-start our nascent AI Safety Group at UC Chile, generating interest in AI Safety and encouraging people to register for our AGI Safety Fundamentals course group.It ran between the 25th and the 28th of February, the first two days being in-person events and the other two serving as additional time for participants to work on their proposals, with some remote assistance on our part. Participants formed teams of up to four people, and could choose to assist either virtually (through Discord) or in-person (on the first two days).We had help from Apart Research and partial funding from AI Alignment Awards.Things we experimented withAiming for a broad audience, we named the event “Thinkathon” (instead of hackathon) and provided plenty of introductory material alongside the proposed problems.We think this was the right choice, as the desired effect was reflected in the participant demographics (see below).We could have been better at preparing participants. Some participants suggested we could have done an introductory workshop.We incorporated the two problems from the AI Alignment Awards (Goal Misgeneralization and the Shutdown problem), alongside easier, self-contained problems aimed at students with different backgrounds (like policy or psychology).We think most teams weren't prepared to tackle the AI Alignment Awards challenges. Most teams (77%) chose them initially regardless of their experience, getting stuck quickly.This might have worked better by communicating difficulty more clearly, as well as emphasizing that aiming for incremental progress rather than a complete solution is a better strategy for a beginner's hackathon.As we don't know many people with previous experience in AIS in Chile, we got help from external mentors, who connected remotely to help participants.We think this was a good decision, as participants rated mentor support highly (see below).We collaborated actively with two student governments from our university (the Administration and Economics Student Council and the Engineering Student Council). 
They helped with funding, logistics and outreach.We think this was an excellent choice, as they provided a much broader platform for outreach and crucial logistics help.We had a great time working with them, and they were eager to work with us again!Things that went well40 people attended in-person and 10 people remotely (through Discord), we were surprised by both the high number of attendants and the preference for in-person participation.We had a total of 13 submitted proposals, much higher than expected.While all proposals were incremental contributions, most were of high quality.Skill level and majors varied significantly, going from relatively advanced CS students to freshmen from other fields (like economics). We were aiming for diversity, so this is a w... |
May 09, 2023 |
EA - Predictable updating about AI risk by Joe Carlsmith
01:00:56
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictable updating about AI risk, published by Joe Carlsmith on May 8, 2023 on The Effective Altruism Forum.(Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.)"This present moment used to be the unimaginable future."Stewart Brand1. IntroductionHere’s a pattern you may have noticed. A new frontier AI, like GPT-4, gets released. People play with it. It’s better than the previous AIs, and many people are impressed. And as a result, many people who weren’t worried about existential risk from misaligned AI (hereafter: “AI risk”) get much more worried.Now, if these people didn’t expect AI to get so much better so soon, such a pattern can make sense. And so, too, if they got other unexpected evidence for AI risk – for example, concerned experts signing letters and quitting their jobs.But if you’re a good Bayesian, and you currently put low probability on existential catastrophe from misaligned AI (hereafter: “AI doom”), you probably shouldn’t be able to predict that this pattern will happen to you in the future. When GPT-5 comes out, for example, it probably shouldn’t be the case that your probability on doom goes up a bunch. Similarly, it probably shouldn’t be the case that if you could see, now, the sorts of AI systems we’ll have in 2030, or 2050, that you’d get a lot more worried about doom than you are now.But I worry that we’re going to see this pattern anyway. Indeed, I’ve seen it myself. I’m working on fixing the problem. And I think we, as a collective discourse, should try to fix it, too. In particular: I think we’re in a position to predict, now, that AI is going to get a lot better in the coming years. I think we should worry, now, accordingly, without having to see these much-better AIs up close. If we do this right, then in expectation, when we confront GPT-5 (or GPT-6, or Agent-GPT-8, or Chaos-GPT-10) in the flesh, in all the concreteness and detail and not-a-game-ness of the real world, we’ll be just as scared as we are now.This essay is about what “doing this right” looks like. In particular: part of what happens, when you meet something in the flesh, is that it “seems more real” at a gut level. So the essay is partly a reflection on the epistemology of guts: of visceral vs. abstract; “up close” vs. “far away.” My views on this have changed over the years: and in particular, I now put less weight on my gut’s (comparatively skeptical) views about doom.But the essay is also about grokking some basic Bayesianism about future evidence, dispelling a common misconception about it (namely: that directional updates shouldn’t be predictable in general), and pointing at some of the constraints it places on our beliefs over time, especially with respect to stuff we’re currently skeptical or dismissive about. For example, at least in theory: you should never think it >50% that your credence on something will later double; never >10% that it will later 10x, and so forth. So if you’re currently e.g. 1% or less on AI doom, you should think it’s less than 50% likely that you’ll ever be at 2%; less than 10% likely that you’ll ever be at 10%, and so on. And if your credence is very small, or if you’re acting dismissive, you should be very confident you’ll never end up worried. Are you?I also discuss when, exactly, it’s problematic to update in predictable directions. 
My sense is that generally, you should expect to update in the direction of the truth as the evidence comes in; and thus, that people who think AI doom unlikely should expect to feel less worried as time goes on (such that consistently getting more worried is a red flag). But in the case of AI risk, I think at least some non-crazy views should actually expect to get more worried over time, even while being fairly non-worried now. In particular, i... |
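The ">50% to double, >10% to 10x" constraint quoted above follows from two standard facts; the derivation below is an editorial sketch of that standard Bayesian reasoning, not a quotation from the essay. The symbols p_0 and P_t are introduced here for illustration.

```latex
% Sketch of the bound behind "never >50% that your credence doubles, never >10% that it 10x's".
% p_0 = current credence, P_t = credence after future evidence (a random variable).
% Conservation of expected evidence (the martingale property of Bayesian credences):
\[
\mathbb{E}[P_t] = p_0 .
\]
% Markov's inequality for the non-negative variable P_t, for any k > 1:
\[
\Pr\!\big(P_t \ge k\,p_0\big) \;\le\; \frac{\mathbb{E}[P_t]}{k\,p_0} \;=\; \frac{1}{k}.
\]
% k = 2 gives at most 50%; k = 10 gives at most 10%. The "ever, at any future time"
% version uses the analogous maximal inequality for non-negative martingales.
```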
May 09, 2023 |
EA - How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast) by 80000 Hours
23:27
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast), published by 80000 Hours on May 8, 2023 on The Effective Altruism Forum.Over at The 80,000 Hours Podcast we just published an interview that is likely to be of particular interest to people who identify as involved in the effective altruism community: Tom Davidson on how quickly AI could transform the world.You can click through for the audio, a full transcript and related links. Below is the episode summary and some key excerpts.Episode SummaryBy the time that the AIs can do 20% of cognitive tasks in the broader economy, maybe they can already do 40% or 50% of tasks specifically in AI R&D. So they could have already really started accelerating the pace of progress by the time we get to that 20% economic impact threshold.At that point you could easily imagine that really it’s just one year, you give them a 10x bigger brain. That’s like going from chimps to humans — and then doing that jump again. That could easily be enough to go from [AIs being able to do] 20% [of cognitive tasks] to 100%, just intuitively. I think that’s kind of the default, really.Tom DavidsonIt’s easy to dismiss alarming AI-related predictions when you don’t know where the numbers came from.For example: what if we told you that within 15 years, it’s likely that we’ll see a 1,000x improvement in AI capabilities in a single year? And what if we then told you that those improvements would lead to explosive economic growth unlike anything humanity has seen before?You might think, “Congratulations, you said a big number — but this kind of stuff seems crazy, so I’m going to keep scrolling through Twitter.”But this 1,000x yearly improvement is a prediction based on real economic models created by today’s guest Tom Davidson, Senior Research Analyst at Open Philanthropy. By the end of the episode, you’ll either be able to point out specific flaws in his step-by-step reasoning, or have to at least consider the idea that the world is about to get — at a minimum — incredibly weird.As a teaser, consider the following:Developing artificial general intelligence (AGI) — AI that can do 100% of cognitive tasks at least as well as the best humans can — could very easily lead us to an unrecognisable world.You might think having to train AI systems individually to do every conceivable cognitive task — one for diagnosing diseases, one for doing your taxes, one for teaching your kids, etc. 
— sounds implausible, or at least like it’ll take decades.But Tom thinks we might not need to train AI to do every single job — we might just need to train it to do one: AI research.And building AI capable of doing research and development might be a much easier task — especially given that the researchers training the AI are AI researchers themselves.And once an AI system is as good at accelerating future AI progress as the best humans are today — and we can run billions of copies of it round the clock — it’s hard to make the case that we won’t achieve AGI very quickly.To give you some perspective: 17 years ago we saw the launch of Twitter, the release of Al Gore’s An Inconvenient Truth, and your first chance to play the Nintendo Wii.Tom thinks that if we have AI that significantly accelerates AI R&D, then it’s hard to imagine not having AGI 17 years from now.Wild.Host Luisa Rodriguez gets Tom to walk us through his careful reports on the topic, and how he came up with these numbers, across a terrifying but fascinating three hours.Luisa and Tom also discuss:How we might go from GPT-4 to AI disasterTom’s journey from finding AI risk to be kind of scary to really scaryWhether international cooperation or an anti-AI social movement can slow AI progress downWhy it might take just a few years to go from pretty good AI to superhum... |
May 08, 2023 |
EA - EA Anywhere Slack: consolidating professional and affinity groups by Sasha Berezhnoi
04:59
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Anywhere Slack: consolidating professional and affinity groups, published by Sasha Berezhnoi on May 8, 2023 on The Effective Altruism Forum.SummaryIn an effort to improve communication infrastructure, the EA Anywhere Slack workspace has recently undergone major changes to accommodate several professional and affiliation Slack workspaces like EA Entrepreneurs, EA Global Discussion, EA Creatives & Communicators, and more. The project was initiated by Pineapple Operations because the old structure was inefficient and overwhelming - others have made similar observations. We believe making it easier to track discussions and reducing the number of workspaces will increase activity, avoid information loss and prevent duplicated communications.If you have a Slack workspace you’d like to merge with EA Anywhere’s, reach out to us!Why consolidate?With too many workspaces, the infrastructure of the EA movement becomes increasingly overwhelming and confusing, and it’s difficult to keep up with new workspaces. Having a central Slack gives people access to a broader range of communities at once.People don’t have the time or energy to check multiple Slacks, which results in low activity. Some discussions just don’t reach the critical mass, and valuable connections are not happening.Most of the workspaces were on a free plan that hid messages and files older than 90 days. Consolidation around a few paid workspaces prevents groups from losing historical information.There is overlapping membership between Slacks (we estimate between 10-50%), so consolidation makes it easier to track communications.These reasons were true for the dozens of Slack workspaces we identified with low activity and limited facilitation.Why EA Anywhere?EA Anywhere is an online discussion space for the global EA community and a touchpoint for people without local groups nearby. It plays an important role in supporting other virtual ecosystems in EA: we host EAGxVirtual conferences, provide support and share knowledge with other online groups.EA Anywhere Slack is on a paid Pro plan and has active facilitation from a full-time community organizer, which makes it a good choice for this project. We can provide support for groups joining the space, including events promotion, Zoom accounts, Slack integrations, and knowledge-sharing calls.There is a demand for informal conversation spaces and networking that the EA Forum doesn’t currently provide. We are inspired by Slack-based communities that thrive and create value for thousands of members without becoming too overwhelming.Progress to dateWe reached out to workspace owners with our proposal and received positive feedback. In most cases, we merged the users and message history.List of Slack workspaces we have already merged or consolidated (thanks to the admins of these groups!):EA Global DiscussionsEA EntrepreneursEA Creatives & CommunicatorsEA GeneralistsEA HousingEA Tech NetworkPublic Interest TechEA Project ManagementEA Supply Chain LogisticsEA Math and PhysicsProduct in EAEffective EnvironmentalismWe already see the benefits of increased coordination:Members are engaging with other groups and projects that have been locked into small workspaces.Members are more willing to ask questions and ask for advice. 
Activity in the #all-questions-welcome channel increased four-fold compared to the previous three-month average, with both new and former members engaging in discussions.Organizers have an easier way to promote opportunities and events.As of April 30th, a month after the merger, the initial hype subsided but the activity is still twice as high:90 members who posted weekly (44 before)360 weekly active members (210 before the merger)We will continue using Slack Analytics to track the activity and send a follow-up user surve... |
May 08, 2023 |
EA - The Legend of Dr. Oguntola Sapara by jai
09:23
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Legend of Dr. Oguntola Sapara, published by jai on May 8, 2023 on The Effective Altruism Forum.This is a true story.At the dawn of the long war's final century, the tide was turning. The Abomination was in retreat, driven back by warriors wielding the Lance of Jenner. But, as in all wars, some battles could not be fought so directly. Battles waged in cunning and lies. Battles where the enemy lurked in shadow. In that darkness an unholy alliance was forged between the Pox Abomination and a conspiracy of traitors to the Yoruba people.They named it "Sopona", gave it a face, and proclaimed themselves the Abomination’s priests. They claimed that they alone could intercede with the Abomination on behalf of innocent victims. To cross them, they said, was to incur Sopona's wrath. They would unleash the Abomination upon any who dared oppose them - and sometimes they would simply inflict Sopona's torture indiscriminately. Amid death and devastation their victims would beg the Priests for help, further cementing their grip on power. And all the while they kept Jenner's Lance at bay, for their power rested on fear of the Abomination, and should it be slain their power, too, would come to an end.They operated in secrecy: the better to obscure their lies, and the better to hide from those who would challenge it. Through blackmail and terror, they maintained their iron grip for generations. None dared utter "Sopona" lest they invoke its wrath - and so even the true name was hidden.Every measure by every authority failed to contain the Abomination. They could never understand why, for they were blind to the enemy's allies. The Deathly Priests and their twisted methods were beyond the grasp of governments, warriors, and weapons. Here the global campaign could not reach. Here, harbored by its murderous allies, the Abomination reigned, and the Yoruba people resigned themselves to an abominable god against which there seemed no hope.It is inadvisable to try to hide from humans. They are curious, relentless, ruthless creatures, fearless when determined and cunning as well. And none were more human than Dr. Oguntola Sapara.Oguntola was a proud child of the Yoruba people. His father, born in chains, together with his mother, raised a family of prodigies: not only Oguntula, but his brother Alexander and his sister Clementina. But Clementina's story was all too short, for when she was to bring life into the world, she was instead taken by death.There are no records of how Oguntola felt that day; All we know is that this was the moment that Oguntula dedicated his life to defending the innocent from the inhuman evils of the world, to master the protective arts and wield them against any who would dare threaten his people.For years he studied, and toiled, and healed, growing ever stronger in the art through talent and sheer force of will. Ten years on his quest took him across the seas to study in a far away land, and here yet more obstacles greeted him. For among the practitioners of the art were counted a great many fools. They would forfeit the privilege of working alongside one of humanity's best for the most vapid and meaningless of reasons, and worse still actively stymied his efforts in all things lest their foolishness be revealed for the lie it was.But Oguntola persisted, surmounting every obstacle lesser humans would set before him. 
In time he not only prevailed, but he proved himself among the greatest of the practitioners. He was recognized as a master of the art, and elected to the Royal Institute of healers.(In the midst of everything, he even assisted the legendary truth-seeker Ida Wells in her crusades against evil and ignorance - but that is another story.)His training complete and his mastery assured, Dr. Oguntola Sapara returned to Lagos to confront his true e... |
May 08, 2023 |
EA - The Rethink Priorities Existential Security Team's Strategy for 2023 by Ben Snodin
25:06
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Rethink Priorities Existential Security Team's Strategy for 2023, published by Ben Snodin on May 8, 2023 on The Effective Altruism Forum.The Rethink Priorities Existential Security Team's Strategy for 2023SummaryThis post contains a moderately rough, high-level description of the Rethink Priorities Existential Security team's (XST's) strategy for the period April-October 2023.XST is a team of researchers focused on improving the world according to a longtermist outlook through research and other projects, and is part of Rethink Priorities (RP).Note that until very recently we were called the General Longtermism team (GLT). We have now renamed ourselves the Existential Security team (XST), which is slightly more descriptive and more closely reflects our focus on reducing existential risk.XST's three focus areas for 2023 will be:Longtermist entrepreneurship (65%): Making highly impactful longtermist projects happen by finding and developing ideas for highly promising longtermist projects, identifying potential founders, and supporting them as they get these projects started. Our main activities will be:Identifying and detailing the most promising ideas for longtermist projects, with a goal of having ~5 detailed project ideas by the end of June, that we can bring to a potential meeting of talented entrepreneurs in July/August, organized by Mike McCormick.A relatively brief founder-first-style founder search (looking for highly promising founders and finding projects that they are an especially good fit for).Exploring founder-in-residence MVPs (hiring potential founders and giving them space to develop their own ideas for promising projects).Supporting founders once they're identified.Strategic clarity research (25%): Research that helps shed light on high-level strategic questions relevant for the EA community and for people working on reducing existential risk. This year, we plan to focus on high-level EA movement-building strategy questions (such as "What kind of EA movement do we want?" or "What's the optimal portfolio among priority cause areas we should aim at building?"), and possibly on high-level questions that seem important for assessing whether and how to help launch entrepreneurial projects. Most of our work on this will happen in the second half of the year.Flexible time for high-impact opportunities (10%): Time allocated for i) team members working on projects that they are very keen on and ii) highly impactful and time-sensitive projects that arise due to changes in external circumstances.Concrete outputs we'll aim for:5 project idea memos by the end of June that are of a standard equal to or better than the 2023 Q1 megaproject speedruns that we posted on the EA Forum in February.1 strategic clarity research output by the end of October.1 new promising project launched by the end of October.11 publicly shared research or project idea outputs by the end of the year.From mid-May onwards, we're planning to have 4 FTE executing this strategy: me (Ben), Marie, Jam, and Renan. 
Linch is pursuing a separate research agenda related to longtermist strategic clarity.The high-level timeline is:[completed] March: The team winds down current projects and begins work on executing the team strategy from the start of April.[in progress] April-July: The team focuses on the entrepreneurship program, and works on founder-first activities, founder support, and project research. The project research is focused on generating a new prioritization model and shallow project ranking by the end of April, and 5 project ideas memos by the end of June for a potential meeting of promising entrepreneurs in July/August.August-October: Jam and Ben continue working on the entrepreneurship program, while Marie and Renan switch to strategic clarity research.Start o... |
May 08, 2023 |
EA - Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals? by Jamie Bernardi
11:57
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Implications of the Whitehouse meeting with AI CEOs for AI superintelligence risk - a first-step towards evals?, published by Jamie Bernardi on May 7, 2023 on The Effective Altruism Forum.IntroductionOn Wednesday 4th May, Sam Altman (Open AI) and Dario Amodei (Anthropic) - amongst others - met with US Vice President Kamala Harris (with a drop-in from President Joe Biden), to discuss the dangers of AI.Announcement | Fact sheet | EA Forum linkpostI spent about 2 hours trying to understand what happened, who was involved, and what its possible implications for superintelligence risk are.I decided to make this post for two reasons:I am practising writing and developing my opinions on AI strategy (so feedback is very welcome, and you should treat my epistemic status as 'new to this'!)I think demystifying the facts of the announcement and offering some tentative conclusions will positively contribute to the community's understanding of AI-related political developments.My main conclusionsThree announcements were made, but the announcement on public model evaluations involving major AI labs seemed most relevant and actionable to me.My two actionable conclusions are:I think folks with technical alignment expertise should consider attending DEF CON 31 if it's convenient, to help shape the conclusions from the event.My main speculative concern is that this evaluation event could positively associate advanced AI and the open source community. For those who feel the downside of model proliferation outweighs the benefits of open sourcing, spreading this message in a more focused way now may be valuable.Summary of the model evaluations announcementThis is mostly factual, and I've flagged where I'm offering my interpretation. Primary source: AI village announcement.There's going to be an evaluation platform made available during a conference called DEF CON 31. DEF CON 31 is the 31st iteration of DEF CON, "the world's largest security conference", taking place in Los Angeles on 10th August 2023. The platform is being organised by a subcommunity at that conference called the AI village.The evaluation platform will be provided by Scale AI. The platform will provide "timed access to LLMs" via laptops available at the conference, and attendees will red-team various models by injecting prompts. I expect that the humans will then rate the output of the model as good or bad, much like on the ChatGPT platform. There's a points-based system to encourage participation, and the winner will win a "high-end Nvidia GPU".The intent of this whole event appears to be to collect adversarial data that the AI organisations in question can use and 'learn from' (and presumably do more RLHF on). The orgs that signed up include: Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI.It seems that there won't be any direct implications for the AI organisations. They will, by default, be allowed to carry on as normal no matter what is learned at the event.I'll provide more details on what has happened after the takeaways section.Takeaways from the Whitehouse announcement on model evaluationsI prioritised communicating my takeaways in this section. 
If you want more factual context to understand exactly what happened and who's involved, see the section below this one.For the avoidance of doubt, the Whitehouse announcement on the model evaluation event doesn't come with any regulatory teeth.I don't mean that as a criticism necessarily; I'm not sure anyone has a concrete proposal for what the evaluation criteria should even be, or how they should be enforced, etc., so it'd be too soon to see an announcement like that.That does mean I'm left with the slightly odd conclusion that all that's happened is the Whitehouse has endorsed a community red-teaming event at a con... |
May 07, 2023 |
EA - On Child Wasting, Mega-Charities, and Measurability Bias by Jesper
03:12
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Child Wasting, Mega-Charities, and Measurability Bias, published by Jesper on May 7, 2023 on The Effective Altruism Forum.Recently I ran into a volunteer for UNICEF who was gathering donations for helping malnourished children. He gave me some explanation on why child wasting is a serious problem and how there are cheap ways to help children who are suffering from it (the UNICEF website has some information on child wasting and specifically on the treatment of wasting using simplified approaches, in case you are interested).Since I happen to have taken the Giving What We Can pledge and have read quite a bit on comparing charities, I asked what evidence there is that compares this action to - say - protecting people from malaria with bednets or directly giving cash to very poor people. The response I got was quite specific: the volunteer claimed that UNICEF can save a life with just 1€ a day for an average period of 7 months. If these claims are true then that means they can save a life for 210€, a lot less than the >$3,000 that GiveWell estimates is needed for AMF to save one life. Probably these numbers should not be compared directly, but I am still curious to know why there can be over an order of magnitude difference between the two. So to practice my critical thinking on these kinds of questions, I made a list of possible explanations for the difference:The UNICEF campaign has little room for additional funding.The program would be funded anyway from other sources (e.g. governments).The 1€/day figure might not include all the costs.Some of the children who receive the food supplements might die of malnutrition anyway.Only some of the children who receive the food supplements would have died without them.Children who are saved from malnutrition could still die of other causes.Obviously I do not have the time nor resources of GiveWell so it is hard to determine how much all of these explanations count in the overall picture, or if there are others that I missed. Unfortunately, there does not seem to be much information on this question from GiveWell (or other EA organizations) either. Looking on the GiveWell website, the most I could find is this blog post on mega-charities from 2011, which makes the argument that mega-charities like UNICEF have too many different campaigns running simultaneously, and that they do not have the required transparency for a proper evaluation. The first argument sounds fake to me: if there are different campaigns, then can you not just evaluate these individual campaigns, or at least the most promising ones? The second point about transparency is a real problem, but there is also the risk of measurability bias if we never even consider less transparent charities.I would very much like to have a more convincing argument for why these kinds of charities are not rated. If for nothing else then at least it would be useful for discussing with people who currently donate to them, or who try to convince me to donate to them. Perhaps the reason is just a lack of resources at GiveWell, or perhaps there is research on this but I just couldn't find it. 
But either way I believe the current state of affairs does not provide a convincing case of why the biggest EA evaluator barely even mentions one of the largest and most respected charity organizations.[Comment: I'm not new here but I'm mostly a lurker on this forum. I'm open to criticism on my writing style and epistemics as long as you're kind!]Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 07, 2023 |
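The order-of-magnitude gap discussed in the child-wasting episode above can be made concrete with a few lines of arithmetic. The sketch below is a minimal reproduction of the post's own numbers (1€ per day for roughly 7 months versus the >$3,000 per life GiveWell estimate cited for AMF); the 30-day month and the EUR-to-USD rate are illustrative assumptions of mine, not figures from the post.

```python
# Back-of-envelope check of the cost figures discussed in the episode above.
# The 1 EUR/day and 7-month figures are the volunteer's claims as quoted in the
# post; the EUR->USD rate below is an illustrative assumption, not a quoted value.
import math

cost_per_day_eur = 1.0          # claimed cost of treatment per child per day
duration_days = 7 * 30          # "average period of 7 months", approximated
unicef_cost_eur = cost_per_day_eur * duration_days   # ~210 EUR, as in the post

eur_to_usd = 1.10               # assumed exchange rate, for comparison only
unicef_cost_usd = unicef_cost_eur * eur_to_usd

givewell_amf_cost_usd = 3000.0  # the ">$3,000 per life saved" figure cited for AMF

ratio = givewell_amf_cost_usd / unicef_cost_usd
print(f"Claimed UNICEF cost per life: ~{unicef_cost_usd:.0f} USD")
print(f"GiveWell AMF estimate:        >{givewell_amf_cost_usd:.0f} USD")
print(f"Gap: ~{ratio:.1f}x, i.e. about {math.log10(ratio):.1f} orders of magnitude")
```

Running this gives a gap of roughly 13x, which matches the post's "over an order of magnitude" framing and is the quantity the listed explanations would need to account for.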
EA - The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have? by Jim Buhler
07:43
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?, published by Jim Buhler on May 6, 2023 on The Effective Altruism Forum.Summary: The Grabby Values Selection Thesis (or GST, for short) is the thesis that some values are more expansion-conducive (and therefore more adapted to space colonization races) than others such that we should – all else equal – expect such values to be more represented among the grabbiest civilizations/AGIs. In this post, I present and argue for GST, and raise some considerations regarding how strong and decisive we should expect this selection effect to be. The stronger it is, the more we should expect our successors – in worlds where the future of humanity is big – to have values more grabbing-prone than ours. The same holds for grabby aliens relative to us present humans. While these claims are trivially true, they seem to support conclusions that most longtermists have not paid attention to, such as "the most powerful civilizations don't care about what the moral truth might be" (see my previous post), and "they don't care (much) about suffering" (see my forthcoming next post).The thesisSpreading to new territories can be motivated by very different values and seems to be a convergent instrumental goal. Whatever a given agent wants, they likely have some incentive to accumulate resources and spread to new territories in order to better achieve their goal(s).However, not all moral preferences are equally conducive to expansion. Some of them value (intrinsically or instrumentally) colonization more than others. For instance, agents who value spreading intrinsically will likely colonize more and/or more efficiently than those who disvalue being the direct cause of something like "space pollution", in the interstellar context.Therefore, there is a selection effect where the most powerful civilizations/AGIs are those who have the values that are the most prone to "grabbing". This is the Grabby Values Selection Thesis (GST), which is the formalization and generalization of an idea that has been expressed by Robin Hanson (1998).We can differentiate between two sub-selection effects, here:The intra-civ (grabby values) selection: Within a civilization, those who colonize space and influence the future of the civilization are those with the most grabby-prone values. Here is a specific plausible instance of that selection effect, given by Robin Hanson (1998): "Far enough away from the origin of an expanding wave of interstellar colonization, and in the absence of property rights in virgin oases, a selection effect should make leading edge colonists primarily value whatever it takes to stay at the leading edge."The inter-civ (grabby values) selection: The civilizations that end up with the most grabby-prone values will get more territory than the others.Do these two different sub-selection effects matter equally? My current impression is that this mainly depends on the likelihood of an early value lock-in – or of design escaping selection early and longlastingly, in Robin Hanson's (2022) terminology – where "early" means "before grabby values get the time to be selected for within the civilization". If such an early value lock-in occurs, the inter-civ selection effect is the only one left. 
If it doesn’t occur, however, the importance of the intra-civ selection effect seems vastly superior to that of the inter-civ one. This is mainly explained by the fact that there is very likely much more room for selection effects within a (not-locked-in) civilization than in between different civilizations.GST seems trivially true. It is pretty obvious that all values are not equal in terms of how much they value (intrinsically or instrumentally) space colonization, and that those who value space expansion more ... |
May 07, 2023 |
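The selection argument in the Grabby Values episode above is easy to illustrate with a toy Monte Carlo model. The sketch below is my own illustrative construction and not anything from the post: the uniform "grabbiness" scores and the cubic territory rule are arbitrary assumptions, chosen only to show how sampling by territory shifts the apparent distribution of values.

```python
# Toy illustration of the inter-civ selection effect described in the episode
# above: values that are more expansion-conducive end up controlling more
# territory, so the territory-weighted "average value" drifts upward even though
# no individual civilization changes its values. All numbers are made up.
import random

random.seed(0)
n_civs = 10_000
# Each civilization gets a "grabbiness" score in [0, 1]: how strongly its values
# favour space expansion (an abstract stand-in, not a quantity from the post).
grabbiness = [random.random() for _ in range(n_civs)]

# Assume territory captured grows steeply with grabbiness (here, cubically).
territory = [g ** 3 for g in grabbiness]

pop_avg = sum(grabbiness) / n_civs
total_territory = sum(territory)
territory_weighted_avg = sum(g * t for g, t in zip(grabbiness, territory)) / total_territory

print(f"Average grabbiness across civilizations: {pop_avg:.2f}")
print(f"Territory-weighted average grabbiness:   {territory_weighted_avg:.2f}")
# The second number is much higher: an observer weighting by territory (or by
# which civilizations they are likely to meet) sees unusually grabby values.
```

Under these made-up assumptions the population average is about 0.5 while the territory-weighted average is about 0.8, which is the qualitative point of the thesis rather than any quantitative claim.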
EA - Don't Interpret Prediction Market Prices as Probabilities by bob
07:11
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Interpret Prediction Market Prices as Probabilities, published by bob on May 5, 2023 on The Effective Altruism Forum.Epistemic status: most of it is right, probablyPrediction markets sell shares for future events. Here's an example for the 2024 US presidential election:This market allows any US person to bet on the gender of the 2024 president. Male shares and female shares are issued in equal amounts. If the demand for shares of one gender is higher than shares of the other, the price is adjusted.At the time of writing, female shares cost 17 cents, and male shares cost 83 cents. If a female president is elected in 2024, owners of female shares will be able to cash them out for $1 each. If not, male shares can be cashed out for $1. The prices of male and female shares sum up to $1, which makes sense given that only one of them will be worth $1 in the future.Because bettors think a female president is relatively unlikely, the price for male shares is higher. The bettors may be wrong here, but the beauty of prediction markets is that anyone can put their money where their mouth is. If you believe that a female president is more likely than a male president, you can buy female shares for 17 cents a piece. If you're right, each of these shares will likely appreciate to $1 by 2024, almost sextupling your investment. If enough people predict a female president to be more likely, the demand for female shares will grow until they are more expensive than male shares. As such, the price of the shares reflects the predictions of everyone involved in the market.Even if you believe a female president is, say, 25% likely, you'd still be inclined to buy a female share for 17 cents. (That is, if you'd take a 1 in 4 chance of a 500% return on investment.) The interesting thing is that whenever you buy shares, the price will move closer to the probability you perceive to be true. Only when the price matches your perceived probability, the market is no longer interesting for you. Because of this, the price of a share reflects the crowd's perceived probability of the corresponding outcome. If the market believes the probability to be 17%, the price will be 17 cents.Or so the story goes.In reality, it's more complicated.You're betting in a currency and, as such, you're betting on a currency.Let's say you believe a male president is about 90% likely, so you're considering buying male shares at 83 cents. Every 83 cents you put in can only become $1, so your maximum return on investment (ROI) is about 20%. Your expected ROI is closer to about 8% because you believe there's only a 90% chance the president will be male. Still, that's a positive ROI, so why not make the bet?This bet is denominated in US dollars, and it will be only resolved in 20 months or so. The problem is that US dollars are subject to inflation.Instead of locking up our investing money in a long-term bet for nearly two years, we could instead put it in an index fund, like the S&P 500, or invest in a large number of random stocks. Both methods have historically had a 10% annualized return. That's much better than an 8% two-year return!Because everyone thinks this way, there will be an artificially low demand for boring long-term positions, like predicting that the next US president will be male. 
This will drive the price of these shares down, while driving the price of shares for low-probability events up. A share that pays out USD will never have a price that reflects the market's perceived probability, because most people believe there are better things to invest in than USD.There's a solution for this, although regulators might not like it: allow people to bet bonds or shares. The famous 1 million USD bet between Warren Buffett and Protege Partners was actually not denominated in USD, but in bonds and sh... |
May 06, 2023 |
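The arithmetic in the prediction-market episode above can be spelled out explicitly. The sketch below uses the post's own numbers (an 83-cent share, a 90% subjective probability, a roughly 20-month horizon, and the 10% historical annualized index return it cites); the break-even probability at the end is my own addition, included only to make the opportunity-cost point concrete.

```python
# Worked example of the argument in the prediction-market episode above: the
# expected return on a "boring" high-probability share can be worse than the
# market's ordinary opportunity cost, so prices need not equal probabilities.

share_price = 0.83          # cost of a "male president" share, in USD
payout = 1.00               # value of the share if the event happens
p_event = 0.90              # your subjective probability of the event
months_to_resolution = 20   # rough time until the market resolves

expected_value = p_event * payout
expected_roi = expected_value / share_price - 1
print(f"Expected ROI of the bet: {expected_roi:.1%} over {months_to_resolution} months")

annual_index_return = 0.10  # historical annualized return mentioned in the post
index_roi = (1 + annual_index_return) ** (months_to_resolution / 12) - 1
print(f"Index-fund ROI over the same period: {index_roi:.1%}")

# Break-even probability at which the bet merely matches the index fund:
breakeven_p = share_price * (1 + index_roi) / payout
print(f"Probability needed just to match the index fund: {breakeven_p:.1%}")
```

With these inputs the bet returns about 8% while the index fund returns about 17% over the same period, and the bet only matches the fund if the event is roughly 97% likely, which is the post's reason for expecting depressed demand for such positions.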
EA - Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones by Hank B
04:14
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Maybe Family Planning Charities Are Better For Farmed Animals Than Animal Welfare Ones, published by Hank B on May 6, 2023 on The Effective Altruism Forum.This piece estimates that a donation to the Humane League, an animal welfare organization considered highly cost-effective, and which mainly engages in corporate lobbying for higher welfare standards, saved around 4 animals per dollar donated, mostly chickens. "Saving a farmed animal" here means "preventing a farmed animal from existing" or "improving the welfare of enough farmed animals by enough to count as preventing one farmed animal from existing." That second definition is a little weird, sorry.If you're trying to help as many farmed animals as possible this seems like a pretty good deal. Can we do better? Maybe.Enter MSI Reproductive Choices, an international family planning organization, which mainly distributes contraception and performs abortions. They reported in 2021 that they prevented around 14 million unintended pregnancies on a total income of 290 million pounds, or 360 million dollars at time of writing. This is roughly 25 dollars per unintended pregnancy prevented. Let's pretend that for every unintended pregnancy prevented, a child who would have been born otherwise is not born. This is plausibly true for some of these unintended pregnancies. But not all. On the other hand, MSI also provided abortions which plausibly prevent child lives as well. Maybe that means MSI prevented 14 million child lives from starting in 2021 (if we think the undercounting from not including abortions counters perfectly the overcounting of unintended pregnancy). I have no reason to think that's particularly plausible, but let's just keep pretending that's right.Let's further pretend that all of MSI's work happened in Zambia. MSI does work in Zambia, but they also do work in lots of other countries. I choose Zambia mostly because trying to do this math with all the countries that MSI works with would be hard. Zambia had a life expectancy at birth of 62 years in 2020 according to this. According to this, Zambians consumed an average of 28kg of meat per person per year. The important subfigures here are the 2.6kg of poultry and 13kg of seafood per person per year, since chickens and fish are much lighter than other animals killed for meat. One chicken provides say 1kg of meat (I'm sort of making this number up, but similar numbers come up on google). One fish provides say 0.5kg. This means that the average Zambian would eat 2.6 chickens and 26 fish per person per year. Over a lifetime, that'd be 62 years of consumption.If a human who would have otherwise existed no longer exists because of your efforts, they also no longer eat the meat they would have eaten otherwise. Thus, if MSI prevents one human lifetime for every $25 you donate, then you'd be saving 62(2.6+26) farmed animals which is around 1,750. That's 70 animals saved per dollar donated.This analysis is so bad in so many ways. I took the number for animals saved per dollar donated to The Humane League on total faith. I also just assumed that MSI is correct in saying that they prevented 14 million unintended pregnancies and I made clearly bad assumptions to get from that number to the number of human lifetimes prevented. 
At least we can have some confidence in the total weight of meat consumed on average by a Zambian per year and the life expectancy at birth in Zambia. However, my way of getting from total weight to animals slaughtered is pretty hokey and doesn’t even include cows, sheep, pigs, etc. There are many other problems too. For example, I took the average cost per unintended pregnancy prevented by MSI. However, the average is not the relevant figure here. We’d like the marginal cost of preventing an additional unintended pregnancy. This is a figure I don... |
May 06, 2023 |
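For readers who want to check the arithmetic in the family-planning episode above, the sketch below restates the calculation using only the figures the post itself supplies, which the author flags as rough and partly made up rather than vetted data.

```python
# Reproduction of the rough calculation in the family-planning episode above.
# All inputs are the post's own stated (and admittedly shaky) assumptions.

cost_per_pregnancy_prevented_usd = 25   # MSI total income / pregnancies prevented
life_expectancy_years = 62              # Zambia, life expectancy at birth (2020)

poultry_kg_per_year = 2.6               # per-capita consumption figures cited
seafood_kg_per_year = 13.0
kg_per_chicken = 1.0                    # the post's made-up carcass weights
kg_per_fish = 0.5

chickens_per_year = poultry_kg_per_year / kg_per_chicken   # 2.6
fish_per_year = seafood_kg_per_year / kg_per_fish          # 26

animals_per_lifetime = life_expectancy_years * (chickens_per_year + fish_per_year)
animals_per_dollar = animals_per_lifetime / cost_per_pregnancy_prevented_usd

print(f"Animals not eaten per prevented lifetime: ~{animals_per_lifetime:.0f}")
print(f"Animals 'saved' per dollar:               ~{animals_per_dollar:.0f}")
print("Compare with the ~4 animals per dollar figure cited for The Humane League.")
```

This reproduces the post's ~1,750 animals per prevented lifetime and ~70 animals per dollar, and makes it easy to see how sensitive the conclusion is to the carcass-weight and cost-per-pregnancy assumptions.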
EA - What is effective altruism? How could it be improved? by MichaelPlant
26:30
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is effective altruism? How could it be improved?, published by MichaelPlant on May 5, 2023 on The Effective Altruism Forum.The EA community has been convulsing since FTX. There's been lots of discontent, but almost no public discussion between community leaders, and little in the way of constructive suggestions for what could change. In this post, I offer a reconceptualisation of what the EA community is and then use that to sketch some ideas for how to do good better together.I'm writing this purely in my personal capacity as a long-term member of the effective altruism community. I drafted this at the start of 2023, in large part to help me process my own thoughts. The ideas here are still, by my lights, dissatisfyingly underdeveloped. But I'm posting it now, in its current state and with minimal changes, because it's suddenly relevant to topical discussions about how to run the Effective Ventures Foundation and the Centre for Effective Altruism and I don't know if I would ever make time to polish it.[I'm grateful to Ben West, Chana Messinger, Luke Freeman, Jack Lewars, Nathan Young, Peter Brietbart, Sam Bernecker, and Will Troy for their comments on this. All errors are mine]SummaryWe can think of effective altruists as participants in a market for maximum impact activities. It's much like a local farmers' market, except people are buying and selling goods and services for how best to help others.Just like people in a market, EAs don't all share the same goal - a marketplace isn't an army. Rather, people have different goals, based on their different accounts of what matters. The participants can agree, however, that they all want there to be a marketplace to allow them to meet and trade; this market is useful because people want different things.Presumably, the EA market should function as a free, competitive market. This means lots of choice and debate among the participants. It requires the market administrators to operate a level playing field.Currently, the EA community doesn't quite operate like this. The market administrators - CEA, its staff and trustees - are also major market participants, i.e. promoting particular ideas and running key organisations. And the market is dominated by one big buyer (i.e. it's a 'monopsony').I suggest some possible reforms: CEA to have its trustees elected by the community; it should strive to be impartial rather than take a stand on the priorities. I don't claim this will solve all the issues, but it should help. I'm sure there are other implications of the market model I've not thought of.These reforms seem sensible even without any of EA's recent scandals. I do, however, explain how they would likely have helped lessen these scandals too.I've tried to resist getting into the minutiae of "how would EA be run if modelled on a free market?" and I would encourage readers also to resist this. I want people to focus on the basic idea and the most obvious implications, not get stuck on the details.I'm not very confident in the below. It's an odd mix of ideas from philosophy, politics, and economics. I wrote it up in the hope others can develop the ideas and I can stop ruminating on the "what should FTX mean for EA?" question.What is EA? A market for maximum-impact altruistic activitiesWhat is effective altruism? 
It's described by the website effectivealtruism.org as a "research field and practical community that aims to find the best ways to help others, and put them into practice". That's all well and good, but it's not very informative if we want to understand the behaviour of individuals in the community and the functioning of the community as a whole.An alternative approach is to think of effective altruists, the people themselves, in economic terms. In this case, we might characterise the effe... |
May 05, 2023 |
EA - RIP Bear Braumoeller by Stephen Clare
02:17
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: RIP Bear Braumoeller, published by Stephen Clare on May 5, 2023 on The Effective Altruism Forum.Professor Bear Braumoeller passed away earlier this week. Bear was a political scientist who studied the likelihood and causes of catastrophic wars. You may have read his book Only the Dead or heard his appearance on the 80,000 Hours podcast. For a short, recent example of his work I recommend this piece of his about the Russia-Ukraine war.Bear's work on conflict likelihood, escalation, and catastrophic wars is certainly among the best research on major conflict risks. Only the Dead was an important counter to strong claims about the long-term declines in interstate violence. Bear found, in brief, that the data on war severity offer few reasons to think that the risk of huge wars (including much-larger-than-WWII-wars) has declined much. And this risk accumulates catastrophically over time.One of my favourite sentences from Bear is his darkly-humorous conclusion to a chapter on war severity (p. 130):When I sat down to write this conclusion, I briefly considered typing, "We're all going to die," and leaving it at that. I chose to write more, not because that conclusion is too alarmist, but because it's not specific enough.Bear combined expertise in both statistical analysis and the theory of what causes war to great effect. He pushed forward our understanding of not just how the likelihood of major conflict has changed over time, but why. His work was interesting not just to political scientists but to anyone seeking to understand and reduce global risks.I'd corresponded with Bear frequently over the last two years while researching catastrophic conflict risks. He was generous and cared deeply about the social impact of his work. Despite my utter lack of credentials and experience, Bear gave me a lot of his time, advice, and connections to other researchers. In my experience academics rarely engage so meaningfully with outsiders. I was grateful.Bear's interest in EA had been piqued and as far as I know he was planning to do more work on catastrophic risks. Last year his lab received a grant from the Future Fund for follow-up research on the themes he wrote about in Only the Dead.He is gone far too soon and will be missed.Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 05, 2023 |
EA - Please don’t vote brigade by Lizka
04:24
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don't vote brigade, published by Lizka on May 5, 2023 on The Effective Altruism Forum.Once in a while, the moderators will find out that something like the following happened:Someone posted an update from their organization, and shared it on Slack or social media, asking coworkers and friends to go upvote it for increased visibility.Someone saw something they didn't like on the Forum — maybe comments criticizing a friend, or a point of view they disagree with — and encouraged everyone in some discussion space to go downvote it.This is a form of vote brigading. It messes with karma's ability to provide people with a signal of what to engage with and is against Forum norms.Please don't do it. We might ban you for it.If you're worried that someone else (or some other group) is engaging in vote brigading, bring it up to the moderators instead of trying to correct for it.Why is it bad?Karma is meant to provide a signal of what Forum users will find useful to engage with. Vote brigading turns karma into a popularity contest.Voting should be based on readers' opinions of the content they're voting on. If someone convinces you that a post is terrible — or great — it's fine to downvote or upvote it as a result of that, but you should actually believe that.We should resolve disagreements by discussing them, not by comparing the sizes of the groups who agree with each position.If people try to hide criticism by downvoting it just because they feel an affinity to the group(s) criticized, the Forum will become predictably biased. We won't have important conversations, we won't learn from each other's mistakes, etc.What actions should we avoid? (What counts as vote brigading?)If you're sharing content:Don't encourage people to all go upvote or downvote something ("everyone go upvote this!") — especially when you have power over the people you're talking to.It's more ok to say "go upvote this if you think it's good," but it's still borderline, and you should be careful to make sure that it doesn't feel like pressure on people.Be careful with bias: if the content is criticizing your work, or your friend's work, or something you feel an affinity towards — be suspicious of your ability to objectively engage with it.Consider letting other Forum users sort it out or leaving a comment explaining your point of view.If you're voting:Please make sure you're really voting because you think this content is good.If your friends or coworker shared their content and that's the only thing you really engage with and vote on, interrogate your heart or mind about whether you might be biased.Please report attempts at vote brigading to us.ExamplesThere are many borderline cases. Here are some examples, sorted by how fine/bad the action of the person sharing the content is:The actionIs it ok to do?You share a post (and maybe what you like or dislike about it), without explicitly asking people to upvote or downvote.It's fine (I'm very happy for people to straightforwardly share posts with people who might find them interesting)You share a post and what you like about it, and say something like "upvote the post if you like it"You share a post that criticizes your work, and write something like "downvote the post if you think it should have less visibility" Not ok — even though there's an "if." Don't do this, especially if you're in a leadership role. 
You share a post and say something like "Everyone: go upvote the post!" Not ok. Once again, it's even worse if you're in a leadership role with respect to the people you're sharing the post with.On a call with other people, you say, "there's this post I don't like / a post that's criticizing me/us. Could you all upvote / downvote it?" Extremely not ok.This has the added harm of making it easy for the asker to see if the other p... |
May 05, 2023 |
EA - Orgs and Individuals Should Spend ~1 Hour/Month Making More Introductions by Rockwell
04:21
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Orgs & Individuals Should Spend ~1 Hour/Month Making More Introductions, published by Rockwell on May 4, 2023 on The Effective Altruism Forum.Note: This is a post I've talked about writing for >6 months, so I'm giving myself 30 minutes to write and publish it. For context, I'm the full-time director of EA NYC, an organization dedicated to building and supporting the effective altruism community in and around New York City.Claim: More organizations and individuals should allot a small amount of time to a particularly high-value activity: 1-1 or 1-org introductions.Outside the scope of this post: I'm not going to make the case here for the value of connections. Many in the community already believe they are extremely valuable, e.g. they're the primary metric CEA uses for its events.Context: I frequently meet people who are deeply engaged in EA, have ended up at an EAG(x), work for an EA or EA-adjacent organization, or are otherwise exciting and active community members, but have no idea there are existing EA groups located in their city or university, focused on their profession, or coordinating across their cause area. When they do learn about these groups, they are often thrilled and eager to plug in. Many times, they've been engaging heavily with other community members who did know, and perhaps even once mentioned such in passing, but didn't think to make a direct introduction. For many, a direct introduction dramatically increases the likelihood of their actually engaging with another individual or organization. As a result, opportunities for valuable connections and community growth are missed.Introductions can be burdensome, but they don't have to be.80,000 Hours80,000 Hours' staff frequently directly connects me to individuals over email who are based in or near NYC, whether or not they've already advised them. In 2022, they sent over 30 emails that followed a format like this:Subject: Rocky [Name]Hi both,Rocky, meet [Name]. [Name] works in [Professional Field] and lives in [Location]. They're interested in [Career Change, Learning about ___ EA Topic, Connecting with Local EAs, Something Else]. Because of this, I thought it might be useful for [Name] to speak to you and others in the EA NYC community.[Name], meet Rocky. Rocky is Director of Effective Altruism NYC. Before that she did [Career Summary] and studied [My Degree]. Effective Altruism NYC works on helping connect and grow the community of New Yorkers who are looking to do the most good through: advising, socials, reading groups, and other activities. I thought she would be a good person for you to speak with about some next steps to get more involved with Effective Altruism.Hope you get to speak soon. Thanks!Best, [80K Staff Member]They typically link to our respected LinkedIn profiles.I then set up one-on-one calls with the individuals they connect me to and many subsequently become involved in EA NYC in various capacities.EA Virtual ProgramsEA Virtual Programs does something similar:Subject: [EA NYC] Your group has a new prospective memberHi,We are the EA Virtual Programs (EA VP) team. 
A recent EA Virtual Programs participant has expressed an interest in joining your Effective Altruism New York City group.Name: ____Email: ____Background Info: [Involvement in EA] [Profession] [Location] [LinkedIn]Note these connections come from the participants themselves, as they nominated they would like to get in touch with your group specifically in our exit survey.It would be wonderful for them to get a warm welcome to your group. Please do reach out to them in 1-2 weeks preferably. However, no worries if this is not a priority for you now.I hope these connections are valuable!Sincerely,EA Virtual ProgramsIn both cases, the connector receives permission from both parties, something eas... |
May 05, 2023 |
EA - AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results. by Otto
17:02
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results., published by Otto on May 4, 2023 on The Effective Altruism Forum.This is a summary of a follow-up study conducted by the Existential Risk Observatory, which delves into a greater number media items. To access our previous study, please follow this link. The data collected will be presented in two separate posts. The first post, which is the current one, has two parts. The first part examines the key indicators used in the previous research, such as "Human Extinction Events" and "Human Extinction Percentage," along with a new key indicator called "Concern Level." The Concern Level indicator assesses participants' level of concern about AI existential risk on a scale of 0 to 10 before and after the intervention. The second part analyzes the changes in public awareness about AI existential risk over time. It also explores the connection between the effectiveness of different media formats, namely articles and videos, and their length in raising awareness.In addition, it investigates how trust levels are related to the effectiveness of media sources in increasing public awareness of AI existential risk. In the second post, the research covers a new aspect of this study: participants' opinions on an AI moratorium and their likelihood of voting for it.PART 1: Effectiveness per media itemThis research aimed to evaluate the effectiveness of AI existential risk communication in increasing awareness of the potential risks posed by AI to human extinction.Research Objectives: The objective of the study was to determine the effectiveness of AI existential risk communication in raising public awareness. This was done by examining the changes in participants' views on the likelihood and ranking of AI as a potential cause of extinction before and after the intervention. Furthermore, the study evaluated the difference in the level of concern of participants before and after the intervention.Measurements and Operationalization: Three primary measurements - "Human Extinction Events," "Human Extinction Percentage," and "Concern Level" - were utilized to examine alterations in participants' perceptions. The coding scheme that was previously used in our research was employed to assess participants' increased awareness of AI. The data was gathered through Prolific, a platform that locates survey respondents based on predefined criteria. The study involved 350 participants, with 50 participants in each survey, who were required to be at least 18 years old, residents of the United States, and fluent in English.Data Collection and Analysis: Data was collected through surveys in April 2023. The data analysis comprised three main sections: (1) comparing changes in the key indicators before and after the intervention, (2) exploring participants' views on the possibility of an AI moratorium and their likelihood of voting for it, and (3) assessing the number of participants who were familiar with or had confidence in the media channel used in the intervention.Media Items Examined:CNN: Stuart Russell on why A.I. experiments must be pausedCNBC: Here's why A.I. 
needs a six-month pause: NYU Professor Gary MarcusThe Economist: How to stop AI going rogueTime 1: Why Uncontrollable AI Looks More Likely Than Ever | TimeTime 2: The Only Way to Deal With the Threat From AI? Shut It Down | TimeFoxNews Article: Artificial intelligence 'godfather' on AI possibly wiping out humanity: ‘It's not inconceivable’ | ArticleFoxNews Video: White House responds to concerns about AI development | VideoResults:Human Extinction EventsThe graph below displays the percentage of increased awareness across various media sources. The Economist survey showed the highest increase in awareness at 52 ... |
May 04, 2023 |
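The survey episode above measures awareness as a before/after comparison on a handful of indicators. As a purely illustrative sketch of that comparison, the snippet below tabulates the share of respondents naming AI among possible extinction events and the mean 0-10 concern level, pre- and post-intervention; the responses shown are invented placeholders, not data from the Existential Risk Observatory study.

```python
# Illustrative sketch of the before/after comparison described in the survey
# episode above. The responses below are invented placeholders, not study data.
from statistics import mean

def awareness_share(responses):
    """Fraction of respondents whose listed extinction events include AI."""
    return sum("AI" in events for events in responses) / len(responses)

pre_events = [["nuclear war"], ["pandemic", "asteroid"], ["climate"], ["AI", "nuclear war"]]
post_events = [["AI", "nuclear war"], ["AI"], ["climate", "AI"], ["AI", "pandemic"]]

pre_concern = [3, 5, 2, 6]    # 0-10 concern-level answers before the media item
post_concern = [6, 7, 4, 8]   # answers from the same respondents afterwards

pre_share, post_share = awareness_share(pre_events), awareness_share(post_events)
print(f"Named AI as an extinction risk: {pre_share:.0%} before vs {post_share:.0%} after")
print(f"Mean concern level: {mean(pre_concern):.1f} before vs {mean(post_concern):.1f} after")
```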
EA - Getting Cats Vegan is Possible and Imperative by Karthik Sekar
13:24
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Cats Vegan is Possible and Imperative, published by Karthik Sekar on May 4, 2023 on The Effective Altruism Forum.SummaryCarnivore is a classification, not a diet requirement.The amount of meat that cats eat is significant. Transitioning domestic cats to eating vegan would do much good for the environment and animal welfare.Having vegan cats now is not convenient, but we (humanity) should make that so.We do not need to wait around for cultivated meat. There are tractable opportunities now.We also need randomized control trials with measured health outcomes; funding is the main limitation here.Making domestic cats vegan meets all of the Effective Altruism criteria: significant, tractable, and neglected.MainImagine you are a surveyor traveling to remote parts of the world. Within a thick rainforest, you come across an indigenous group long separated from the modern world. They fashion spears to hunt fish and thicket baskets to collect foraged berries. Notably, they wear distinctive yellow loincloths dyed with local fruit. You are not one with words, so you call them the Yellowclothea.This is not a farfetched story. Most species worldwide are classified similarly–someone observes them and then contrives a classification named on what they see. Carnivora was coined in 1821 to describe an Order of animals by the observation that they consumed the meat of other animals–carnem vorāre is Latin for "to eat flesh".Let us go back to the Yellowclothea. You can already intuit that these natives do not have to wear the yellow loincloths–it is simply what you initially observed. If the natives swapped the dye with purple or green, that would work out fine. However, the rainforest lacks those colors, so Yellowclothea is resigned to their monotone. In other words, wearing yellow cloth is not a requirement for them to live, just what works for them and is available.1821, the year of Carnivora's naming, is ages ago in the scientific world. It was before the Theory of Evolution, first described in The Origin of Species in 1859. It was before the molecular biology revolution. It was before we understood the basis of metabolism and nutrition. So it is easy to confuse classification/observation with the requirement. It is the same fallacy as assuming that the Yellowclothea people can only wear yellow clothing.Since 1821, we learned more about nutrition, molecular biology, and metabolism to demystify meat. Meat is mostly muscle fibers with some marbled fat and critical nutrients. Carnivora animals generally have more acidic stomachs and shorter gastrointestinal (GI) tracts than nominal herbivores. The extra acid helps chop proteins into the alphabet amino acid molecules, which are readily taken up, so a long GI tract is unnecessary.So Carnivora animals cannot have salads or raw vegetables, which are rich in fiber and would not break down in their GI tracts in time. Nevertheless, we can make protein-rich and highly digestible foods for Carnivora starting from plant and microbial ingredients. Just as a cow will chemically process the plants into their muscle–flesh, we can similarly turn the plants into food that a carnivore would thrive off without an animal intermediary. 
In other words, we can source all the required nutrients from elsewhere, without meat. There is – at least in theory – no reason why diets comprised entirely of plants, minerals, and synthetically-based ingredients (i.e., vegan diets) cannot meet the necessary palatability, bioavailability, and nutritional requirements of cats. – Andrew Knight, Director, Centre for Animal Welfare, University of Winchester. I have written about how succeeding meat, dairy, and eggs with plant and microbial-based alternatives will be one of the best things we ever do – I argue that it is better than curing cancer or transitioning ful... |
May 04, 2023 |
EA - [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I. by Rockwell
04:01
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I., published by Rockwell on May 4, 2023 on The Effective Altruism Forum. This is a linkpost for the New York Times article. The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards "the American people's rights and safety," adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference. The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their job. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups. But the A.I. boom has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that many A.I. systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law on their own. President Biden recently said that it "remains to be seen" whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way. Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting. A spokesman for Anthropic confirmed the company would be attending. A spokeswoman for OpenAI did not respond to a request for comment. The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a "Blueprint for an A.I. Bill of Rights," which said that automated systems should protect users' data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years. The introduction of chatbots like ChatGPT and Google's Bard has put huge pressure on governments to act.
The European Union, which had already been negotiating regulations on A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk. In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington. A group of government agencies pledged in April to "monitor the development and use of automated systems and promote responsible... |
May 04, 2023 |
EA - Introducing Animal Policy International by Rainer Kravets
05:15
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Animal Policy International, published by Rainer Kravets on May 4, 2023 on The Effective Altruism Forum. Animal Policy International is a new organisation launched through the Charity Entrepreneurship Incubation Program focused on ensuring that animal welfare standards are upheld in international trade policy. Problem: There are significant differences between farmed animal welfare standards across the globe, with billions of animals still confined in factory farms. Even those regions with higher standards like the EU, the UK, Switzerland and New Zealand tend to import a significant portion of their animal products from countries where animals experience significant suffering due to lack of protective measures. Solution: The higher welfare countries can apply their standards to imported animal products by restricting the access of low-welfare animal products that would have been illegal to produce domestically. This can incentivise farmers elsewhere to increase their standards to keep existing supply chains. A law restricting the importation of low-welfare products provides a unique win-win opportunity for both animal advocates and farmers in higher welfare countries, especially in our likely first country of operation: New Zealand. Some farmers are facing tough competition from low-priced low-welfare imports and demand more equal standards between imports and local produce after New Zealand's decision to phase out farrowing crates on local pig farms by December 2025. Potential Impact: A law passed in New Zealand restricting the importation of animal products that do not adhere to local standards could save approximately 8 million fish per year from suffering poor living conditions, transportation, and slaughter practices; spare 330,000 pigs from cruel farrowing crates and 380,000 chickens from inhumane living conditions. Differences in animal welfare standards: New Zealand: Below is an outline of differences between animal welfare standards in New Zealand and its main importers of particular animals. Fish: In China, Vietnam and Thailand (total 79% of imports in 2020) there is no legislation for fish, meaning they may endure slow, painful deaths by asphyxiation, crushing, or even being gutted alive. New Zealand outlines some protections for fish at the time of killing and during transport. Hens: 80% of eggs imported into New Zealand come from China, where hens are allowed to be kept in battery cages. Battery cages are illegal in New Zealand from 2023. (Colony (enriched) cages are still used). Pigs: The US, an importer of pork to New Zealand, has no federal ban on the use of sow stalls or farrowing crates, leading to sows being cruelly confined to narrow cages where they cannot perform basic behaviours, turn around, or properly mother their piglets.
New Zealand has banned sow stalls, and farrowing crates are being phased out by 2025. Sheep: Australia, which imports wool products to New Zealand, allows several practices that are prohibited in New Zealand, including the extremely cruel practice of mulesing, which involves removing parts of the skin from live sheep without anaesthetic. Next steps: Establishing connections with potential partner NGOs and industry; producing a policy brief; conducting public polling; addressing the question of legality of import restrictions; meeting policymakers. Open questions: Will farmers in low-welfare countries be motivated and capable of increasing their animal welfare standards? What enforcement mechanisms should be used? How would a restriction on importation affect the country's relationships with its trade partners? What externalities (e.g. changes in animal product prices) would such a trade law have? How you can help: Expertise: if you have experience/knowledge in international trade, policy work, WTO laws and can help answer th... |
May 04, 2023 |
EA - 500 Million, But Not A Single One More by jai
04:27
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 500 Million, But Not A Single One More, published by jai on May 4, 2023 on The Effective Altruism Forum. We will never know their names. The first victim could not have been recorded, for there was no written language to record it. They were someone's daughter, or son, and someone's friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it – victims of a mad, inhuman god. There was nothing to be done – humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen. It was in Ancient Egypt, where it attacked slave and pharaoh alike. In Rome, it effortlessly decimated armies. It killed in Syria. It killed in Moscow. In India, five million dead. It killed a thousand Europeans every day in the 18th century. It killed more than fifty million Native Americans. From the Peloponnesian War to the Civil War, it slew more soldiers and civilians than any weapon, any soldier, any army. (Not that this stopped the most foolish and empty souls from attempting to harness the demon as a weapon against their enemies.) Cultures grew and faltered, and it remained. Empires rose and fell, and it thrived. Ideologies waxed and waned, but it did not care. Kill. Maim. Spread. An ancient, mad god, hidden from view, that could not be fought, could not be confronted, could not even be comprehended. Not the only one of its kind, but the most devastating. For a long time, there was no hope – only the bitter, hollow endurance of survivors. In China, in the 10th century, humanity began to fight back. It was observed that survivors of the mad god's curse would never be touched again: They had taken a portion of that power into themselves, and were so protected from it. Not only that, but this power could be shared by consuming a remnant of the wounds. There was a price, for you could not take the god's power without first defeating it – but a smaller battle, on humanity's terms. By the 16th century, the technique spread to India, then across Asia, the Ottoman Empire and, in the 18th century, Europe. In 1796, a more powerful technique was discovered by Edward Jenner. An idea began to take hold: Perhaps the ancient god could be killed. A whisper became a voice; a voice became a call; a call became a battle cry, sweeping across villages, cities, nations. Humanity began to cooperate, spreading the protective power across the globe, dispatching masters of the craft to protect whole populations. People who had once been sworn enemies joined in a common cause for this one battle. Governments mandated that all citizens protect themselves, for giving the ancient enemy a single life would put millions in danger. And, inch by inch, humanity drove its enemy back. Fewer friends wept; fewer neighbors were crippled; fewer parents had to bury their children. At the dawn of the 20th century, for the first time, humanity banished the enemy from entire regions of the world. Humanity faltered many times in its efforts, but there were individuals who never gave up, who fought for the dream of a world where no child or loved one would ever fear the demon ever again.
Viktor Zhdanov, who called for humanity to unite in a final push against the demon; the great tactician Karel Raška, who conceived of a strategy to annihilate the enemy; Donald Henderson, who led the efforts in those final days. The enemy grew weaker. Millions became thousands, thousands became dozens. And then, when the enemy did strike, scores of humans came forth to defy it, protecting all those whom it might endanger. The enemy's last attack in the wild was on Ali Maow Maalin, in 1977. For months afterwards, dedicated humans swept the surrounding area, seeking out an... |
May 04, 2023 |
EA - Upcoming EA conferences in 2023 by OllieBase
03:10
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Upcoming EA conferences in 2023, published by OllieBase on May 4, 2023 on The Effective Altruism Forum. The Centre for Effective Altruism will be organizing and supporting conferences for the EA community all over the world for the remainder of 2023, including the first-ever EA conferences in Poland, NYC and the Philippines. We currently have the following events scheduled: EA Global: EA Global: London | (May 19–21) | Tobacco Dock - applications close 11:59 pm UTC Friday 5 May; EA Global: Boston | (October 27–29) | Hynes Convention Center. EAGx: EAGxWarsaw | (June 9–11) | POLIN; EAGxNYC | (August 18–20) | Convene, 225 Liberty St.; EAGxBerlin | (September 8–10) | Urania; EAGxAustralia | (September 22–24, provisional) | Melbourne; EAGxPhilippines | (October 20–22, provisional); EAGxVirtual | (November 17–19, provisional). Applications for EAG London, EAG Boston, EAGxWarsaw and EAGxNYC are open, and we expect applications for the other conferences to open approximately 3 months before the event. Please go to the event page links above to apply. Please note again that applications to EAG London close 11:59 pm UTC Friday 5 May. If you'd like to add EA events like these directly to your Google Calendar, use this link. Some notes on these conferences: EA Globals are run in-house by the CEA events team, whereas EAGx conferences are organized independently by local community builders with financial support and mentoring from CEA. EA Global conferences have a high bar for admission and are for people who are very familiar with EA and are taking significant actions (e.g. full-time work or study) based on EA ideas. Admissions for EAGx conferences are processed independently by the EAGx conference organizers. These events are primarily for those who are newer to EA and interested in getting more involved and who are based in the region the conference is taking place in (e.g. EAGxWarsaw is primarily for people who are interested in EA and are based in Eastern Europe). Please apply to all conferences you wish to attend once applications open – we would rather get too many applications for some conferences and recommend that applicants attend a different one than miss out on potential applicants to a conference. Travel support funds for events this year are limited (though will vary by event), and we can only accommodate a small number of requests. If you do not end up receiving travel support, this is likely the result of limited funds, rather than an evaluation of your potential for impact. When planning around an event, we'd recommend you act under the assumption that we will not be able to grant your travel funding request (unless it has already been approved). Find more info on our website. Feel free to email hello@eaglobal.org with any questions, or comment below. You can also contact EAGx organisers using the format [location]@eaglobalx.org (e.g. warsaw@eaglobalx.org, nyc@eaglobalx.org). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 04, 2023 |
EA - Air Safety to Combat Global Catastrophic Biorisks [REVISED] by Gavriel Kleinwaks
11:09
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Air Safety to Combat Global Catastrophic Biorisks [REVISED], published by Gavriel Kleinwaks on May 3, 2023 on The Effective Altruism Forum. This report is a collaboration between researchers from 1Day Sooner and Rethink Priorities. Overview: This post is a revision of a report previously published on how improvements in indoor air quality can address global catastrophic risk from pandemics. After feedback from expert reviewers, we revised the report in accordance with comments. The comments greatly improved the report and we consider the earlier version to be misphrased, misleading, or mathematically underspecified in several places, but we are leaving the post available to illustrate the revision process. Unlike in the previous post, we are not including the full report, given its length. Instead, this post contains a summary of the reviews and of the report, with a link to the full report. Many thanks to the expert reviewers (listed below) for their detailed feedback. Additional thanks to Rachel Shu for research and writing assistance. We also received help and feedback from many other people over the course of this process – a full list is in the "Acknowledgements" section of the report. Summary of Expert Review: We asked biosecurity and indoor air quality experts to review this report: Dr. Richard Bruns of the Johns Hopkins Center for Health Security, Dr. Jacob Bueno de Mesquita and Dr. Alexandra Johnson of Lawrence Berkeley National Lab, Dr. David Manheim of ALTER, and Professor Shelly Miller of the University of Colorado. These experts suggested a variety of both minor and substantive changes to the document, though these changes do not alter the overall conclusion of the report that indoor air safety is an important lever for reducing GCBRs and that there are several high-leverage funding opportunities around promoting indoor air quality and specific air cleaning interventions. The main changes suggested were: Providing confidence intervals on key estimates, such as our estimate of the overall impact of IAQ interventions, and reframing certain estimates to improve clarity. Modifying the phrasing around the section concerning 'modelling', to better clarify our position around the specific limitations of existing models (specifically that there aren't models that move from the room and building-level transmission to population-level transmission). Clarifying the distinction between mechanical interventions, specific in-duct vs upper-room systems (254nm) and HVAC-filtration vs portable air cleaners, and adding additional information about some interactions between different intervention types. Adding general public advocacy for indoor air quality as a funding opportunity and related research that could be done to support advocacy efforts. Adding additional relevant literature and more minor details regarding indoor air quality across different sections. Improving the overall readability of the report, by removing repetitive elements. Report Executive Summary (full report available here). Top-line summary: Most efforts to address indoor air quality (IAQ) do not address airborne pathogen levels, and creating indoor air quality standards that include airborne pathogen levels could meaningfully reduce global catastrophic biorisk from pandemics. We estimate that an ideal adoption of indoor air quality interventions, like ventilation, filtration, and ultraviolet
germicidal irradiation (GUV) in all public buildings in the US, would reduce overall population transmission of respiratory illnesses by 30-75%, with a median estimate of 52.5%. Bottlenecks inhibiting the mass deployment of these technologies include a lack of clear standards, cost of implementation, and difficulty changing regulation/public attitudes. The following actions can accelerate deployment and improve IAQ to red... |
May 04, 2023 |
EA - Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms by Omnizoid
44:18
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Here's a comprehensive fact sheet of almost all the ways animals are mistreated in factory farms, published by Omnizoid on May 4, 2023 on The Effective Altruism Forum. Crosspost of this on my blog. 1 Introduction: See, there's the difference between us. I don't care about animals at all. I'm well aware of the cramped, squalid, and altogether unpleasant conditions suffered by livestock animals on many factory farms, and I simply could not care less. I see animals as a natural resource to be exploited. I don't care about them any more than I care about the trees that were cut down to make my house. – Random person on the bizarre anti-vegan subreddit. I've previously argued against factory farming at some length, arguing that it is the worst thing ever. Here I will just lay out the facts about factory farming. I will describe what happens to the 80 or so billion beings we factory farm every year, who scream in agony and terror in the great juggernauts of despair, whose cries we ignore. They scream because of us – because of our apathy, because of our demand for their flesh – and it's about time that people learned exactly what is going on. Here I describe the horrors of factory farms, though if one is convinced that factory farms are evil, they should stop paying for their products – an act which demonstrably causes more animals to be tormented in concentration camp-esque conditions. If factory farms are as cruel as I suggest, then the obligation not to pay for them is a point of elementary morality. Anyone who is not a moral imbecile recognizes that it's wrong to contribute to senseless cruelty for the sake of comparatively minor benefits. We all recognize it's wrong to torture animals for pleasure – paying others to torture animals for our pleasure is similarly wrong. If factory farms are half as cruel as I make them out to be, then factory farming is inarguably the worst thing in human history. Around 99% of meat comes from factory farms – if you purchase meat without careful vetting, it almost definitely comes from a factory farm. Here, I'll just describe the facts about what goes on in factory farms. Of course, this understates the case, because much of what goes on is secret – the meat industry has fought hard to make it impossible to film them. As Scully notes: It would be reasonable for the justices to ask themselves this question, too: If the use of gestation crates is proper and defensible animal husbandry, why has the NPPC lobbied to make it a crime to photograph that very practice? Here, I will show that factory farming is literally torture. This is not hyperbolic, but instead the obvious conclusion of a sober look at the facts. If we treated child molesters the way we treat billions of animals, we'd be condemned by the international community. The treatment of animals is unimaginably horrifying – evocative of the worst crimes in human history. Some may say that animals just cannot be tortured. But this is clearly a crazy view. If a person used pliers to cut off the toes of their pets, we'd regard that as torture. Unfortunately, what we do to billions of animals is far worse. 2 Pigs: Just like those who defended slavery, the eaters of meat often have farcical notions about how the beings whose mistreatment they defend are treated.
But unfortunately, the facts are quite different from those suggested by meat industry propaganda, and are worth reviewing. Excess pigs were roasted to death. Specifically, these pigs were killed by having hot steam enter the barn, at around 150 degrees, leading to them choking, suffocating, and roasting to death. It's hard to see how an industry that chokes and burns beings to death can be said to be anything other than nightmarish, especially given that pigs are smarter than dogs. Factory-farmed pigs, while pregnant, are stuffed in tiny gestation cr... |
May 04, 2023 |
EA - How Engineers can Contribute to Civilisation Resilience by Jessica Wen
15:14
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Engineers can Contribute to Civilisation Resilience, published by Jessica Wen on May 3, 2023 on The Effective Altruism Forum. Cross-posted from the High Impact Engineers Resource Portal. You can view the most up-to-date version on the Portal. Summary: Civilisation resilience is concerned with both reducing the risk of civilisation collapse and increasing the capability for humanity to recover from such a collapse. A collapse of civilisation would likely cause a great deal of suffering and may jeopardise the future of the human race. We can defend against such risks by reducing the chances that a localised catastrophe starts, that it scales up to a global catastrophe, or that it triggers irreversible civilisation collapse. Many facets of our defence layers are physical, meaning there are many opportunities for engineers to contribute to improving humanity's resilience. Uncertainty: The content of this article is largely based on research by 80,000 Hours, the Future of Humanity Institute, the Centre for the Study of Existential Risk, and ALLFED. We feel somewhat confident in the recommendations in this article. What is civilisation resilience? The industrial revolution gave humanity access to unprecedented amounts of valuable and lifesaving technologies and immensely improved the lives we are able to live. However, a global catastrophe could put unprecedented strain on the infrastructure – global agriculture, energy, industry, intercontinental shipping, communications, etc. – that enables civilisation as we know it today. If these systems were to collapse, would we be able to recover and return to the state of civilisation we have today? Could we re-industrialise? Would this be possible without easy access to fossil fuels, minerals, and chemicals? Could we rebuild flourishing global societies and infrastructure if there was a breakdown of international relations? Questions such as these fall under the purview of civilisation resilience. Civilisation resilience focuses on how we can buttress civilisation against collapse and increase our ability to recover from a collapse if it did occur. A framework for thinking about civilisation resilience: Having a framework with which to analyse the risks and prioritise the strengthening of our defences is useful to sharpen our focus and direct our efforts to bolstering civilisation resilience. The paper Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter (Cotton-Barratt, Daniel and Sandberg) introduces a framework that breaks down protection against extinction risk into three layers of defence (figure 1). This framework is equally applicable to civilisation collapse given civilisation collapse is a precursor to human extinction. In evaluating extinction risk, the defence layers protect against an event becoming a catastrophe, scaling to a global catastrophe, and then wiping out the human race.
When considering a given catastrophe, the following three defence layers are proposed: Prevention – how can we stop a catastrophe from starting? Response – how do we stop it from scaling up to a global catastrophe? Resilience – how do we stop a global catastrophe from getting everyone? Figure 1: The three layers of defence against extinction risk (Cotton-Barratt, Daniel, Sandberg). One advantage of this characterisation framework is that it can be used to evaluate where the weaknesses are in humanity's defence against a given catastrophe. If we consider a given catastrophic risk, we can define an extinction probability P = p1 × p2 × p3, where: p1 is the probability that the risk is not prevented; p2 is the probability that the risk gets past the response stage, given that it was not prevented; and p3 is the probability that the risk causes human extinction, given that it got past the response stage and became a global catastrophe. (A worked numerical example follows this entry.) Within th... |
May 03, 2023 |
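The layered-defence formula in the entry above can be made concrete with a small worked example. The numbers below are assumed, purely illustrative values for this sketch; they are not estimates from the post or from the Cotton-Barratt, Daniel and Sandberg paper.

```latex
% Worked example of P = p_1 \cdot p_2 \cdot p_3 with assumed, illustrative values:
% p_1 = 0.10  (the catastrophe is not prevented)
% p_2 = 0.20  (given that, it scales up to a global catastrophe)
% p_3 = 0.30  (given that, it causes human extinction)
P = p_1 \cdot p_2 \cdot p_3 = 0.10 \times 0.20 \times 0.30 = 0.006
% Halving any single layer halves the overall risk, e.g. p_2 = 0.10 gives:
P' = 0.10 \times 0.10 \times 0.30 = 0.003
```

Because the layers multiply, a proportional improvement in any one layer produces the same proportional reduction in the overall extinction probability, which is one way to read the paper's claim that prevention, response, and resilience all matter.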
EA - Test fit for roles / job types / work types, not cause areas by freedomandutility
01:46
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Test fit for roles / job types / work types, not cause areas, published by freedomandutility on May 3, 2023 on The Effective Altruism Forum. I often see university EAs aiming to do research projects to test their fit for specific cause areas. I don't think this is a good idea. I think if you felt you were a good or bad fit for a research project, either you were a good or bad fit for research generally or for a specific style of research (qualitative, quantitative, philosophical, primary, secondary, wet-lab, programming, focus groups, interviews, questionnaires, clinical trials). For example, it seems very unlikely to me that someone who disliked wet-lab research in biosecurity will enjoy wet-lab research in alternative proteins, but it seems less unlikely that someone who disliked wet-lab research in biosecurity will enjoy dry-lab research in biosecurity. Similarly, if you enjoyed literature review based research in one cause area, I think you are likely to enjoy the same type of research across a range of different cause areas (provided you consider the cause areas to be highly impactful). I think decisions on cause areas should be made primarily on your views on what is most impactful (whilst avoiding single player thinking and considering any comparative advantages your background may give you), but decisions on roles / job types / work types should heavily consider what you have enjoyed and have done well. I think rather than testing fit for particular cause areas, students should test fit for different roles / job types / work types, such as entrepreneurship / operations, policy / advocacy and a range of different types of research. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 03, 2023 |
EA - [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks by Center for AI Safety
08:27
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [AISN #4]: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks, published by Center for AI Safety on May 2, 2023 on The Effective Altruism Forum. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions. Cybersecurity Challenges in AI Safety: Meta accidentally leaks a language model to the public. Meta's newest language model, LLaMa, was publicly leaked online against the intentions of its developers. Gradual rollout is a popular goal with new AI models, opening access to academic researchers and government officials before sharing models with anonymous internet users. Meta intended to use this strategy, but within a week of sharing the model with an approved list of researchers, an unknown person who had been given access to the model publicly posted it online. How can AI developers selectively share their models? One inspiration could be the film industry, which places watermarks and tracking technology on "screener" copies of movies sent out to critics before the movie's official release. AI equivalents could involve encrypting model weights or inserting undetectable Trojans to identify individual copies of a model. Yet efforts to cooperate with other AI companies could face legal opposition under antitrust law. As the LLaMa leak demonstrates, we don't yet have good ways to share AI models securely. LLaMa leak. March 2023, colorized. OpenAI faces their own cybersecurity problems. ChatGPT recently leaked user data including conversation histories, email addresses, and payment information. Businesses including JPMorgan, Amazon, and Verizon prohibit employees from using ChatGPT because of data privacy concerns, though OpenAI is trying to assuage those concerns with a business subscription plan where OpenAI promises not to train models on the data of business users. OpenAI also started a bug bounty program that pays people to find security vulnerabilities. AI can help hackers create novel cyberattacks. Code writing tools open up the possibility of new kinds of cyberattacks. CyberArk, an information security firm, recently showed that OpenAI's code generation tool can be used to create adaptive malware that writes new lines of code while hacking into a system in order to bypass cyberdefenses. GPT-4 has also been shown capable of hacking into password management systems, convincing humans to help it bypass CAPTCHA verification, and performing coding challenges in offensive cybersecurity. The threat of automated cyberattacks is no surprise given previous research on the topic. One possibility for mitigating the threat involves using AI for cyberdefense. Microsoft is beginning an initiative to use AI for cyberdefense, but the tools are not yet publicly available. Artificial Influence: An Analysis Of AI-Driven Persuasion: Former CAIS affiliate Thomas Woodside and his colleague Matthew Bartell released a paper titled Artificial influence: An analysis of AI-driven persuasion. The abstract for the paper is as follows: Persuasion is a key aspect of what it means to be human, and is central to business, politics, and other endeavors.
Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading humans to buy products, watch videos, click on search results, and more. Even systems that are not explicitly designed to persuade may do so in practice. In the future, increasingly anthropomorphic AI systems may form ongoing relationships with users, increasing their persuasive power. This paper investigates the uncertain future of persuasive AI systems. We examine ways that AI could qualitatively alter our relationship to and views regarding persuasion by shifting the balance of persuasi... |
May 03, 2023 |
EA - Summaries of top forum posts (24th - 30th April 2023) by Zoe Williams
16:40
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summaries of top forum posts (24th - 30th April 2023), published by Zoe Williams on May 2, 2023 on The Effective Altruism Forum. We've just passed the half year mark for this project! If you're reading this, please consider taking this 5 minute survey - all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already! Back to our regularly scheduled intro... This is part of a weekly series summarizing the top posts on the EA and LW forums - you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed. If you'd like to receive these summaries via email, you can subscribe here. Podcast version: Subscribe on your favorite podcast app by searching for 'EA Forum Podcast (Summaries)'. A big thanks to Coleman Snell for producing these! Object Level Interventions / Reviews. AI. By Guillem Bas, Jaime Sevilla, Mónica Ulloa. Author's summary: "The European Union is designing a regulatory framework for artificial intelligence that could be approved by the end of 2023. This regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These obligations consist of a risk management system, a quality management system, and post-market monitoring. The legislation enforcement will be tested for the first time in Spain, in a regulatory sandbox of approximately three years. This will be a great opportunity to prepare the national ecosystem and influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red teaming exercises, and creating an incident database." By Jaime Sevilla. Paper by Epoch. World record progressions in video game speedrunning fit very well to a power law pattern. Due to lack of longitudinal data, the authors can't provide definitive evidence of power-law decay in Machine learning benchmark improvements (though it is a better model than assuming no improvement over time). However, if they assume this model, it would suggest that a) machine learning benchmarks aren't close to saturation and b) sudden large improvements are infrequent but aren't ruled out. No, the EMH does not imply that markets have long AGI timelines, by Jakob. Argues that interest rates are not a reliable instrument for assessing market beliefs about transformative AI (TAI) timelines, because of two reasons: Savvy investors have no incentive to bet on short timelines, because it will tie up their capital until it loses value (i.e. they are dead, or they're so rich it doesn't matter). They do have incentive to increase personal consumption, as savings are less useful in a TAI future. However, they aren't a large enough group to influence interest rates this way. This makes interest rates more of a poll of upper middle class consumers than investors, and reflects whether they believe that a) timelines are short and b) savings won't be useful post-TAI (vs. e.g. believing they are more useful, due to worries of losing their job to AI). By Lao Mein. On April 11th, the Cybersecurity Administration of China released a draft of "Management Measures for Generative Artificial Intelligence Services" for public comment.
Some in the AI safety community think this is a positive sign that China is considering AI risk and may participate in a disarmament treaty. However, the author argues that it is just a PR statement, no-one in China is talking about it, and the focus, if any, is on near-term stability. They also note that the EA/Rationalist/AI Safety forums in China are mostly populated by expats or people physically outside of China, most posts are in English... |
May 03, 2023 |
EA - Review of The Good It Promises, the Harm It Does by Richard Y Chappell
17:29
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of The Good It Promises, the Harm It Does, published by Richard Y Chappell on May 2, 2023 on The Effective Altruism Forum. [TL;DR: I didn't find much of value in the book. The quality of argumentation is worse than on most blogs I read. Maybe others will have better luck discerning any hidden gems in the mix?] The Good It Promises, the Harm It Does: Critical Essays on Effective Altruism (eds. Adams, Crary, & Gruen) puts me in mind of Bastiat's Candlestick Makers' Petition. For any proposed change – be it the invention of electricity, or even the sun rising – there will be some in a position to complain. This is a book of such complaints. There is much recounting of various "harms" caused by EA (primarily to social justice activists who are no longer as competitive for grant funding). But nowhere in the volume is there any serious attempt to compare these costs against the gains to others – especially the populations supposedly served by charitable work, as opposed to the workers themselves – to determine which is greater. (One gets the impression that cost-benefit analysis is too capitalistic for these authors to even consider.) The word "trade-off" does not appear in this volume. A second respect in which the book's title may be misleading is that it is exclusively about the animal welfare wing of EA. (There is a 'Coda' that mentions longtermism, but merely to sneer at it. There was no substantive engagement with the ideas.) I personally didn't find much of value in the volume, but I'll start out by flagging what good I can. I'll then briefly explain why I wasn't much impressed with the rest – mainly by way of sharing representative quotes, so readers can judge it for themselves. The Good: The more empirically-oriented chapters raise interesting challenges about animal advocacy strategy. We learn that EA funders have focused on two main strategies to reform or eventually upend animal agriculture: (i) corporate cage-free (and similar) campaigns, and (ii) investment in meat alternatives. Neither involves the sort of "grassroots" activism that the contributors to this volume prefer. So some of the authors discuss potential shortcomings of the above two strategies, and potential benefits of alternatives like (iii) vegan outreach in Black communities, and (iv) animal sanctuaries. I expect EAs will welcome discussion of the effectiveness of different strategies. That's what the movement is all about, after all. By far the most constructive article in the volume (chapter 4, 'Animal Advocacy's Stockholm Syndrome') noted that "cage-free campaigns ... can be particularly tragic in a global context" where factory-farms are not yet ubiquitous: The conscientious urban, middle-class Indian consumer cannot see that there is a minor difference between the cage-free egg and the standard factory-farmed egg, and a massive gulf separating both of these from the traditionally produced egg [where birds freely roam their whole lives] for a simple reason: the animal protection groups the consumer is relying upon are pointing to the (factory-farmed) cage-free egg instead of alternatives to industrial farming. (p. 45) Such evidence of localized ineffectiveness (or counterproductivity) is certainly important to identify & take into account! There's a larger issue (not really addressed in this volume) of when it makes sense for a funder to go "all in" on their best bets vs.
when they'd do better to "diversify" their philanthropic portfolio. This is something EAs have discussed a bit before (often by making purely theoretical arguments that the "best bet" maximizes expected value), but I'd be excited to see more work on this problem using different methodologies, including taking into account the risk of "model error" or systemic bias in our initial EV estimates. (Maybe such work is already out ther... |
May 02, 2023 |
EA - Apply Now: First-Ever EAGxNYC This August by Arthur Malone
03:36
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply Now: First-Ever EAGxNYC This August, published by Arthur Malone on May 2, 2023 on The Effective Altruism Forum. TL;DR: Applications are now open for EAGxNYC 2023, taking place in Manhattan this August 18-20! We're thrilled to announce that this summer, EAGx comes to New York City for the first time! Application: Reviewed on a rolling basis, apply here before the deadline of July 31, 2023. Applying early means you'll have more time to prep and help us plan for your needs! When: August 18-20, 2023. Where: Convene, 225 Liberty Street, New York, NY, in Lower Manhattan near the World Trade Center complex. Who: EAGxNYC is intended for both individuals new to the movement and those already professionally engaged with EA, and will cover a diverse range of high-impact cause areas. We believe the conference will be of particular value to those currently exploring new ways they can have an impact, such as students, young professionals, and mid-career professionals looking to shift into EA-aligned work. We also invite established organizations looking to share their work and grow their pool of potential collaborators or hirees. Due to venue and funding capacity, the conference will be capped at 500 attendees. Geographic scope: As a locally-organized supplement to the Centre for Effective Altruism-organized EAG conferences, EAGxNYC aims to primarily serve, and foster connections between, those living in the NYC area. While we are also excited to welcome individuals from around the globe, due to limited capacity we will prioritize applicants who have a connection to our New York metropolitan area or are seriously considering relocating here, followed by applicants from throughout the East Coast. However, if you are uncertain about your eligibility, don't hesitate to apply! Travel Grants: Limited travel grants of up to $500 are available to individuals from outside of NYC who would not be able to attend EAGxNYC without financial assistance. Applications for financial assistance have no bearing on admissions to the conference. Programming: EAGxNYC will take place from Friday, August 18th through Sunday, August 20th with registration opening in the early afternoon Friday, followed by dinner and opening talks that evening. Content will be scheduled and the venue will be open for networking until 10PM Friday, 8AM-10PM Saturday, and 8AM-7PM Sunday. Along with dinner on Friday, the venue will be providing breakfast, lunch, and snacks and drinks on Saturday and Sunday. Dinner will not be served on the premises Saturday or Sunday, but the EAGxNYC team will help coordinate group dinners nearby and encourage all attendees to make use of the venue throughout the evening. We aim to program content covering all effective altruism cause areas with a special emphasis on the intersection between EA and New York City. If you are interested in presenting at the conference, please reach out to the organizing team. Satellite Programming: If you're already in the New York City area and want to get involved leading up to or following the conference, check out the local EA NYC group for public events, cause-related and professional subgroup events, opportunities for online engagement, and more! More info: Detailed information on the agenda, speakers, and content will be available closer to the conference via Swapcard and updates to this website page.
Periodically checking in on our website will help you stay up to date in the meantime, and if you have any questions or concerns, drop us an email at nyc@eaglobalx.org. We can't wait to see you in NYC this Summer! The organizing team :) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org. |
May 02, 2023 |
EA - Legal Priorities Project – Annual Report 2022 by Legal Priorities Project
58:27
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Priorities Project – Annual Report 2022, published by Legal Priorities Project on May 2, 2023 on The Effective Altruism Forum. Summary. Note: A private version of this report with additional confidential updates has been shared with a few major donors and close collaborators. If you fall into this category and would like to access the extended version, please get in touch with us at hello@legalpriorities.org. This report presents a review of the Legal Priorities Project's work in 2022, including key successes, statistics, bottlenecks, and issues. We also offer a short overview of our priorities for 2023 and briefly describe our methodology for updating our priorities. You can learn more about how to support our work at the end of the report. A PDF version of this report is available here. In 2022: Our research output per FTE was high to very high: With only 3.6 FTE researchers, we had 7 peer-reviewed papers (2 journal articles and 5 book chapters) accepted for publication, 3 papers currently under review, and one book under contract with Oxford University Press. We also added 7 papers to our Working Paper Series (for a total of 18), published a new chapter of our research agenda, and published 6 shorter pieces in online forums. We also spent significant time on reports aimed at informing our prioritization on artificial intelligence and biosecurity in particular (which we plan to publish in Q2 of 2023) and ran a writing competition on "Improving Cost-Benefit Analysis to Account for Existential and Catastrophic Risks" with a judging panel composed of eminent figures in law. Based on our experience, our research output was much higher than typical legal academic research groups of similar size. Beyond academic research, we analyzed ongoing policy efforts, and our research received positive feedback from policymakers. Relevant feedback and discussions provided valuable insight into what research would support decision-making and favorable outcomes, which we believe improved our prioritization. We experimented with running several events targeting different audiences and received hundreds of applications from students and academics at top institutions worldwide. Some participants have already reported significant updates to their plans as a result. Feedback on our events was overwhelmingly positive, and we gained valuable information about the different types of programs and their effectiveness, which will inform future events. Team morale remained high, including during stressful developments, and our operations ran smoothly. In 2023: Our research will increasingly focus on reducing specific types of existential risk based on concrete risk scenarios, shifting more focus toward AI risk. While this shift started in 2022, AI risk will become more central to our research in 2023. We will publish an update to our research agenda and theory of change accordingly. We will continue to publish research of various types. However, we will significantly increase our focus on non-academic publications, such as policy/technical reports and blog posts, in order to make our work more accessible to policymakers and a wider audience.
As part of this strategy, we will also launch a blog featuring shorter pieces by LPP staff and invited researchers. We would like to run at least one, but ideally two, flagship field-building programs: The Legal Priorities Summer Institute and the Summer Research Fellowship. We will seek to raise at least $1.1m to maintain our current level of operations for another year. More optimistically, we aim to increase our team size by 1–3 additional FTE, ideally hiring a senior researcher with a background in US law to work on risks from advanced artificial intelligence. Introduction: The Legal Priorities Project is an independent, global research and field-bui... |
May 02, 2023 |
EA - Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4) by MichaelA
27:20
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Intermediate goals for reducing risks from nuclear weapons: A shallow review (part 1/4), published by MichaelA on May 1, 2023 on The Effective Altruism Forum. This is a blog post, not a research report, meaning it was produced relatively quickly and is not to Rethink Priorities' typical standards of substantiveness and careful checking for accuracy. Summary. What is this post? This post is the first part of what was intended to be a shallow review of potential "intermediate goals" one could pursue in order to reduce nuclear risk (focusing especially on the contribution of nuclear weapons to existential risk). The full review would've broken intermediate goals down into: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations; goals aimed at changing how nuclear conflict plays out if it does occur (in a way that reduces its harms); goals aimed at improving resilience to or ability to recover from the harms of nuclear conflict; and goals that are cross-cutting, focused on field-building, or otherwise have indirect effects. This first part of the shallow review focuses just on the first of those categories: goals aimed at reducing the odds of nuclear conflict or other non-test nuclear detonations. We tentatively think that, on the margin, this is the least promising of those four categories of goals, but that there are still some promising interventions in this category. Within this category, we review multiple potential goals. For most of those goals, we briefly discuss: what we mean by the goal; why progress on this goal might reduce or increase nuclear risk; examples of specific interventions or organizations that could advance the goal; and our very tentative bottom-line beliefs about: how much progress on the goal would reduce or increase nuclear risk; what resources are most needed to make progress on the goal; how easy making progress on the goal would be; and what key effects making progress on the goal might have on things other than nuclear risk. Note that, due to time constraints, this post is much less comprehensive and thoroughly researched and reviewed than we'd like. The intermediate goals we considered, and our tentative bottom line beliefs on them: This post and table break down high-level goals into increasingly granular goals, and share our current best guesses on the relatively granular goals. Many goals could be pursued for multiple reasons and could hence appear in multiple places in this table, but we generally just showed each goal in the first relevant place anyway.
This means in some cases a lot of the benefits of a given goal may be for higher-level goals we haven't shown it as nested under.
We unfortunately did even less research on the goals listed from 1.1.2.6 onwards than the earlier ones, and the bottom-line views for that later set are mostly Will's especially tentative personal views.
Potential intermediate goal | What effect would progress on this goal have on nuclear risk? | How easy would it be to make progress on this goal? | What resources are most needed for progress on this goal? | Key effects this goal might have on things other than nuclear risk? |
---|---|---|---|---|
1.1.1.1 Reduce the odds of armed conflict in general | Moderate reduction in risk | Hard | Unsure. | |
1.1.1.2 Reduce the odds of (initially non-nuclear) armed conflict, with a focus on those involving at least one nuclear-armed state | Major reduction in risk | Hard | Similar to "1.1.1.1: Reduce the odds of armed conflict in general" | |
1.1.1.3 Reduce proliferation | Moderate reduction in risk | Hard | | More non-nuclear conflict? Or maybe less? |
1.1.1.4 Promote complete nuclear disarmament | Major reduction in risk | Almost impossible to fully achieve this unless the world changes radically (e.g., a world government or transformative artificial intelligence is created, or a great power war occurs) | | |
1.1.2.1 Promote no first use (N... |
May 02, 2023 |
EA - AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now by Greg Colbourn
31:35
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now, published by Greg Colbourn on May 2, 2023 on The Effective Altruism Forum.
Content note: discussion of a near-term, potentially hopeless life-and-death situation that affects everyone.
Tldr: AGI is basically here. Alignment is nowhere near ready. We may only have a matter of months to get a lid on this (strictly enforced global limits to compute and data) in order to stand a strong chance of survival. This post is unapologetically alarmist because the situation is highly alarming. Please help. Fill out this form to get involved. Here is a list of practical steps you can take.
We are in a new era of acute risk from AGI
Artificial General Intelligence (AGI) is now in its ascendancy. GPT-4 is already ~human-level at language and showing sparks of AGI. Large multimodal models – text-, image-, audio-, video-, VR/games-, robotics-manipulation by a single AI – will arrive very soon (from Google DeepMind) and will be ~human-level at many things: physical as well as mental tasks; blue-collar jobs in addition to white-collar jobs. It's looking highly likely that the current paradigm of AI architecture (foundation models) basically just scales all the way to AGI. These things are "General Cognition Engines".
All that is stopping them from being even more powerful is spending on compute. Google & Microsoft are worth $1-2T each, and $10B can buy ~100x the compute used for GPT-4. Think about this: it means we are already well into hardware overhang territory.
Here is a warning written two months ago by people working at the applied AI Alignment lab Conjecture: "we are now in the end-game for AGI, and we (humans) are losing". Things are now worse. It's looking like GPT-4 will be used to meaningfully speed up AI research, finding more efficient architectures and therefore reducing the cost of training more sophisticated models.
And then there is the reckless fervour of plugin development to make proto-AGI systems more capable and agent-like to contend with. In very short succession from GPT-4, OpenAI announced the ChatGPT plugin store, and there has been great enthusiasm for AutoGPT. Adding Planners to LLMs (known as LLM+P) seems like a good recipe for turning them into agents. One way of looking at this is that the planners and plugins act as the System 2 to the underlying System 1 of the general cognitive engine (LLM). And here we have agentic AGI. There may not be any secret sauce left.
Given the scaling of capabilities observed so far for the progression of GPT-2 to GPT-3 to GPT-3.5 to GPT-4, the next generation of AI could well end up superhuman. I think most people here are aware of the dangers: we have no idea how to reliably control superhuman AI or make it value-aligned (enough to prevent catastrophic outcomes from its existence). The expected outcome from the advent of AGI is doom.
This is in large part because AI Alignment research has been completely outpaced by AI capabilities research and is now years behind where it needs to be. To allow Alignment time to catch up, we need a global moratorium on AGI, now.
A short argument for uncontrollable superintelligent AI happening soon (without urgent regulation of big AI): this is a recipe for humans driving themselves extinct that appears to be playing out along the mainline of future timelines.
Either of:
GPT-4 + curious (but ultimately reckless) academics -> more efficient AI -> next generation foundation model AI (which I'll call NextAI for short); or
Google DeepMind just builds NextAI (they are probably training it already).
Then NextAI + planners + AutoGPT + plugins + further algorithmic advancements + gung-ho humans (e/acc etc.) = NextAI2 in short order. Weeks even. Access to compute for training is not a bottleneck because that cyborg syste... |
May 02, 2023 |
EA - Exploring Metaculus's AI Track Record by Peter Scoblic
13:00
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exploring Metaculus's AI Track Record, published by Peter Scoblic on May 1, 2023 on The Effective Altruism Forum.
By Peter Mühlbacher, Research Scientist at Metaculus, and Peter Scoblic, Director of Nuclear Risk at Metaculus
Metaculus is a forecasting platform where an active community of thousands of forecasters regularly make probabilistic predictions on topics of interest ranging from scientific progress to geopolitics. Forecasts are aggregated into a time-weighted median, the "Community Prediction", as well as the more sophisticated "Metaculus Prediction", which weights forecasts based on past performance and extremises in order to compensate for systematic human cognitive biases. Although we feature questions on a wide range of topics, Metaculus focuses on issues of artificial intelligence, biosecurity, climate change and nuclear risk.
In this post, we report the results of a recent analysis we conducted exploring the performance of all AI-related forecasts on the Metaculus platform, including an investigation of the factors that enhance or degrade accuracy.
Most significantly, in this analysis we found that both the Community and Metaculus Predictions robustly outperform naïve baselines. The recent claim that performance on binary questions is "near chance" rests on sampling only a small subset of the forecasting questions we have posed, or on the questionable proposition that a Brier score of 0.207 is akin to a coin flip. What's more, forecasters performed better on continuous questions, as measured by the continuous ranked probability score (CRPS). In sum, both the Community Prediction and the Metaculus Prediction – on both binary and continuous questions – provide clear and useful insight into the future of artificial intelligence, despite not being "perfect".
Summary Findings
We reviewed Metaculus's resolved binary questions ("What is the probability that X will happen?") and resolved continuous questions ("What will be the value of X?") that were related to the future of artificial intelligence. For the purpose of this analysis, we defined AI-related questions as those which belonged to one or more of the following categories: "Computer Science: AI and Machine Learning"; "Computing: Artificial Intelligence"; "Computing: AI"; and "Series: Forecasting AI Progress." This gave us 64 resolved binary questions (with 10,497 forecasts by 2,052 users) and 88 resolved continuous questions (with 13,683 predictions by 1,114 users). Our review of these forecasts found:
Both the Community and Metaculus Predictions robustly outperform naïve baselines.
Analysis showing that the Community Prediction's Brier score on binary questions is 0.237 relies on sampling only a small subset of our AI-related questions.
Our analysis of all binary AI-related questions finds that the score is actually 0.207 (a point a recent analysis agrees with), which is significantly better than "chance".
Forecasters performed better on continuous questions than binary ones.
Top-Line Results
This chart details the performance of both the Community and Metaculus Predictions on binary and continuous questions.
Please note that, for all scores, lower is better and that Brier scores, which range from 0 to 1 (where 0 represents oracular omniscience and 1 represents complete anticipatory failure) are roughly comparable to continuous ranked probability scores (CRPS) given the way we conducted our analysis. (For more on scoring methodology, see below.)
| Brier (binary questions) | CRPS (continuous questions) |
---|---|---|
Community Prediction | 0.207 | 0.096 |
Metaculus Prediction | 0.182 | 0.103 |
Baseline prediction | 0.25 | 0.172 |
Results for Binary Questions
We can use Brier scores to measure the quality of a forecast on binary questions. Given that a Brier score is the mean squared error of a forecast, the following things are true: If you alread... |
May 02, 2023 |
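A short aside on the scoring used in the Metaculus entry above: the Brier score is just the mean squared error between probabilistic forecasts and the realised 0/1 outcomes, which is why an uninformative forecaster who always answers 50% scores exactly 0.25 (the baseline row in the table). Below is a minimal sketch in Python using made-up forecasts rather than Metaculus data, purely to illustrate the arithmetic:

```python
import numpy as np

def brier_score(forecasts, outcomes):
    # Mean squared error between probabilities and binary outcomes:
    # 0 is a perfect forecast, 1 is maximally wrong.
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((forecasts - outcomes) ** 2))

# Hypothetical resolved questions: 1 = event happened, 0 = it did not.
outcomes = [1, 0, 0, 1, 0]

# Always predicting 50% gives exactly 0.25, the "chance" baseline.
print(brier_score([0.5] * 5, outcomes))                   # 0.25

# Leaning the right way on each question scores lower (better).
print(brier_score([0.8, 0.2, 0.1, 0.7, 0.3], outcomes))   # 0.054
```

The same "lower is better" logic carries over to the CRPS column for continuous questions, which is why the table compares both prediction types against a baseline.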
EA - Retrospective on recent activity of Riesgos Catastróficos Globales by Jaime Sevilla
07:42
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Retrospective on recent activity of Riesgos Catastróficos Globales, published by Jaime Sevilla on May 1, 2023 on The Effective Altruism Forum.
The new team of Riesgos Catastróficos Globales started their job two months ago. During this time, they have been working on two reports on what we have identified as top priorities for the management of Global Catastrophic Risks from Spanish-speaking countries: food security during Abrupt Sunlight-Reduction Scenarios (e.g. nuclear winter) and AI regulation.
In this article, I will cover their output in more depth and future plans, with some reflections on how the project is going. The short version is that I am reasonably pleased, and the board of directors has decided to continue the project for two more months. The team's productivity has exceeded my expectations, though I see opportunities for improvement in our quality assurance, training and outreach. We remain short of funding; if you want to support our work you can donate through our donation portal.
Intellectual output
In the last two months, the team has been working on two major reports and several minor outputs.
1) Report on food security in Argentina during abrupt sun-reducing scenarios (ASRS), in collaboration with ALLFED. In this report, we explain the important role Argentina could have during ASRS to mitigate global famine. We sketch several policies that would be useful inclusions in an emergency plan, such as resilient food deployment, together with suggestions on which public bodies could implement them.
2) Report on AI regulation for the EU AI Act Spanish sandbox (forthcoming). We are interviewing and eliciting opinions from several experts, to compile an overview of AI risk for Spanish policymakers and proposals to make the most out of the upcoming EU AI sandbox.
3) An article about AI regulation in Spain. In this short article, we explain the relevance of Spain for AI regulation in the context of the EU AI Act. We propose four policies that could be tested in the upcoming sandbox. It serves as a preview of the report I mentioned above.
4) An article about the new GCR mitigation law in the USA, reporting on its meaning and proposing similar initiatives for Spanish-speaking countries.
5) Two statements about Our Common Agenda Policy Briefs, in collaboration with the Simon Institute.
Overall, I think we have done a good job of contextualizing the research done in the international GCR community. However, I feel we rely a lot on the involvement of the board of directors for quality assurance, and our limited time means that some mistakes and misconceptions will likely have made it to publication. Having said that, I am pleased with the results. The team has been amazingly productive, publishing a 60-page report in two months and several minor publications alongside it.
In the future, we will be involving more experts for a more thorough review process. This also means that we will be erring towards producing shorter reports, which can be more thoroughly checked and are better for engaging policy-makers.
Training
Early in the project, we identified the education of our staff as a key challenge to overcome.
Our staff has work experience and credentials, but their exposure to the GCR literature was limited. We undertook several activities to address this lack of training:
Knowledge transfer talks with Spanish-speaking experts from our board of directors and advisory network (Juan García from ALLFED, Jaime Sevilla from Epoch, Clarissa Rios Rojas from CSER).
A GCR reading group with curated reading recommendations.
An online course taught by Sandra Malagón from Carreras con Impacto.
A dedicated course on the basics of Machine Learning.
I am satisfied with the results, and I see a clear progression in the team. In hindsight, I think we erred on the side of too much form... |
May 02, 2023 |
EA - [Linkpost] 'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead by Darius1
04:10
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] 'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead, published by Darius1 on May 1, 2023 on The Effective Altruism Forum.
Geoffrey Hinton, a pioneer in artificial neural networks, just left Google, as reported by the New York Times: 'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead (archive version).
Some highlights from the article [emphasis added]:
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Dr. Hinton said.
"It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton said.
Dr. Hinton, often called "the Godfather of A.I.," did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job.
In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network... At the time, few researchers believed in the idea. But it became his life's work.
In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.
Google spent $44 million to acquire a company started by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often called "the Nobel Prize of computing," for their work on neural networks.
Last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. "Maybe what is going on in these systems," he said, "is actually a lot better than what is going on in the brain."
As companies improve their A.I. systems, he believes, they become increasingly dangerous. "Look at how it was five years ago and how it is now," he said of A.I. technology. "Take the difference and propagate it forwards. That's scary."
Until last year, he said, Google acted as a "proper steward" for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot – challenging Google's core business – Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own.
"The idea that this stuff could actually get smarter than people – a few people believed that," he said. "But most people thought it was way off.
And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."
Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technol... |
May 02, 2023 |
EA - Overview: Reflection Projects on Community Reform by Joris P
05:41
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Overview: Reflection Projects on Community Reform, published by Joris P on May 1, 2023 on The Effective Altruism Forum.
TL;DR: We're sharing this short collection of active projects that we're aware of, that focus on community reform and reflections on recent events. We don't know much about the projects and are probably missing some projects (please let us know in the comments!), but we've found it hard to keep up with what is and is not happening, and noticed that we wished that a list like this existed.
We're posting this in our personal capacity. Any mistakes are our own. We did ask colleagues at CEA for input and feedback; thank you to Lizka & Chana (CEA), and Robert (not CEA) in particular!
A lot has happened in the EA community over the past half year. In light of all the developments, people have discussed all kinds of changes EA could make. As community members, we've been finding it quite hard to follow what work is being done to reflect on the various ways in which we could reform. That's why we tried to put together an (incomplete!) list of (mostly 'official') projects that are being done to reflect on all sorts of questions that came up over the past six months.
Some Caveats
The topics listed below are not in any way a complete overview of the things that could or should be discussed - we don't think we're covering all the things community members are thinking about, and aren't trying to!
A topic isn't "covered" or "taken" if there's already a project that's focused on it. Many of these projects are trying to cover a lot of ground, and the people working on them would probably appreciate other groups trying to do something on the topic, too.
We chose to include mostly reflection projects that are being done in an official capacity, or by more than one person. That means we're not including a lot of interesting Forum posts or news articles that have been written about possible problems and reforms! See the section 'Assorted written criticisms and reflections' below for some examples.
This list is almost certainly incomplete. For instance, some projects we've heard about aren't public, and we think it's likely that there are other projects we haven't heard about. We encourage people to share what they're working on in the comments!
We're not really sharing our views on how excited we are about the projects.
The projects we're aware of
The EA survey (run by Rethink Priorities in collaboration with CEA) was updated and re-sent to people to ask about FTX community response – you can see the results here
Community health concerns
Investigations into the CEA Community Health Team's processes and past actions:
There's an external investigation into how the CEA Community Health Team responded to concerns about Owen Cotton-Barratt (source)
An internal review of the CEA Community Health Team's processes is also ongoing (same source as above)
Members of the Community Health team shared they hope that both investigations will be concluded sometime in the next month.
They noted that the team does not have control over the timeline of the external investigation.
The CEA Community Health Team is (separately) conducting "a project to get a better understanding of the experiences of women and gender minorities in the EA community" (source)
Governance in EA institutions
Julia Wise and Ozzie Gooen are setting up a taskforce that will recommend potential governance reforms for various EA organizations. They are currently looking to get in touch with people with relevant expertise - see here
Note that they write: "This project doesn't aim to be a retrospective on what happened with FTX, and won't address all problems in EA, but we hope to make progress on some of them."
EA leadership & accountability
An investigation by the law firm Mintz, commissioned by EVF UK and EVF US, "to... |
May 01, 2023 |
EA - The costs of caution by Kelsey Piper
05:04
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The costs of caution, published by Kelsey Piper on May 1, 2023 on The Effective Altruism Forum.
Note: This post was crossposted from Planned Obsolescence by the Forum team, with the author's permission. The author may not see or respond to comments on this post.
If you thought we might be able to cure cancer in 2200, then I think you ought to expect there's a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.
Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development (April 2, 2023): "Or raise your hand if you or someone you love has a terminal illness, believes Ai has a chance at accelerating medical work exponentially, and doesn't have til Christmas, to wait on your make believe moratorium. Have a heart man ❤️"
I've said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh's objection is important and deserves a full airing.
Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms.
Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment and luxury goods accessible to people who can't afford it today. Progress in meat alternatives could allow us to shut down factory farms.
There are tens of thousands of scientists, engineers, and policymakers working on fixing these kinds of problems – working on developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, developing better crops and homes and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren't enough people to do the work.
If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help.
As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI which is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies.
This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1]
That's more than a thousand times as much effort going into tackling humanity's biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there's a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2]
All this may be a massive underestimate.
This envisions a world that's pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of 'torchlight will no longer be scarce'. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter – entirely new things, not just cheaper access to thi... |
May 01, 2023 |
EA - First clean water, now clean air by finm
32:37
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: First clean water, now clean air, published by finm on April 30, 2023 on The Effective Altruism Forum.
The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It's worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.
Clean water
In the mid-19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of brick and wood, and hundreds of thousands of cesspits. The Thames – Londoners' main source of drinking water – was near-opaque with waste. Here is Michael Faraday in an 1855 letter to The Times:
"Near the bridges the feculence rolled up in clouds so dense that they were visible at the surface even in water of this kind [.] The smell was very bad, and common to the whole of the water. It was the same as that which now comes up from the gully holes in the streets. The whole river was for the time a real sewer [.] If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a season give us sad proof of the folly of our carelessness."
That "sad proof" arrived more than once. London saw around three outbreaks of cholera, killing upwards of 50,000 people in each outbreak.
But early efforts to address the public health crisis were guided by the wrong theory about how diseases spread. On the prevailing view, epidemics were caused by 'miasma' (bad air) – a kind of poisonous mist from decomposing matter. Parliament commissioned a report on the 'Sanitary Condition of the Labouring Population', which showed a clear link between poverty and disease, and recommended a bunch of excellent and historically significant reforms. But one recommendation backfired because of this scientific misunderstanding: according to the miasma theory, it made sense to remove human waste through wastewater – but that water flowed into the Thames and contaminated it further.
But in one of these outbreaks, the physician John Snow had spotted how the incidence of cholera clustered around a single water pump in Soho, suggesting that unclean water was the major source of the outbreak. A few years later, the experiments of Louis Pasteur helped foster the germ theory of disease, sharpening the understanding of how and why to treat drinking water for public health. These were well-timed discoveries.
Because soon things got even worse. Heat exacerbated the smell; and the summer of 1858 was unusually hot. 1858 was the year of London's 'Great Stink', and the Thames "a Stygian pool, reeking with ineffable and intolerable horrors" in Prime Minister Disraeli's words. The problem had become totally unignorable.
Parliament turned to Joseph Bazalgette, chief engineer of London's Metropolitan Board of Works. Spurred by the Great Stink, he was given licence to oversee the construction of an ambitious plan to rebuild London's sewage system, to his own design. 1,800km of street sewers would feed into 132km of main interconnecting sewers. A network of pumping stations was built, to lift sewage from streets below the high water mark.
18 years later, the result was the kind of modern sewage system we mostly take for granted: a system to collect wastewater and dump it far from where it could contaminate food and drinking water; in this case a dozen miles eastwards to the Thames estuary. "The great sewer that runs beneath Londoners", wrote Bazalgette's obituarist, "has added some 20 years to their chance of life".
Remarkably, most of the system remains in use. London's sewage system has obviously been expanded, and wastewater treatment is much better. Bazalgette's plan was built to last, and succeeded.
As London built ways of expelling wastewater, it also built ways of channelling c... |
May 01, 2023 |
EA - Bridging EA's Gender Gap: Input From 60 People by Alexandra Bos
11:11
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bridging EA's Gender Gap: Input From 60 People, published by Alexandra Bos on April 30, 2023 on The Effective Altruism Forum.
TLDR
We hosted a session at EAGxRotterdam during which 60 participants discussed potential reasons why there are fewer women in EA and how this could be improved. The main categories of solutions people came up with were (1) adjusting outreach strategies, (2) putting women in more visible positions, (3) making EA's atmosphere more female-friendly, (4) pursuing strategies to empower women in EA, and (5) adjusting certain attributes of EA thought. The goal of this post is to facilitate a solution-oriented discussion within the EA community so we can make tangible progress on its currently skewed gender ratio and underlying problems.
Some notes before we start:
Whether gender diversity is something to strive for is beyond this discussion. We will simply assume that it is and go from there. You could for example check out these posts (1, 2, 3) for a discussion on (gender) diversity if you want to read about this or discuss it.
To keep the scope of this post and the session we hosted manageable, we focused on women specifically. However, we do not claim gender is binary and acknowledge that to increase diversity there are many more groups to focus on than just women (such as POC or other minorities).
The views we describe in this post don't necessarily correspond with our (Veerle Bakker's & Alexandra Bos') own but rather we are describing others' input.
Please view this post as crowdsourcing hypotheses from community members as a starting point for further discussion rather than as presenting The Hard Truth. You can also view posts such as these (1, 2, 3) for additional views on EA's gender gap.
EA & Its Gender Gap
It is no secret that more men than women are involved with the EA community currently. In the last EA survey (2020), only 26.9% of respondents identified as female. This is similar to the 2019 survey.
Graph source: EA Survey 2020.
The goal of this post is to get a solution-oriented discussion started within the wider EA community to take steps towards tangible change. We aim to do this by sharing the insights from a discussion session at EAGxRotterdam in November 2022 titled "Discussion: how to engage more women with EA". In this post, we will be going through the different problems the EAGx'ers suspected may underlie the gender gap. Each problem will be paired with the potential solutions they proposed.
Methodology
This post summarises and categorises the insights from group discussions from a workshop at EAGxRotterdam. Around 60 people attended this session, approximately 40 of whom were women. In groups of 5, participants brainstormed on both 1) what may be the reasons for the relatively low number of women in EA (focusing on the causes, 15 mins), and 2) in what ways the EA community could attract more women to balance this out (focusing on solutions, 15 mins). The discussions were based on these prompts. We asked the groups to take notes on paper during their discussions so that this could be turned into this forum post. We informed them of this in advance. If you want to take a deep dive and look at the source materials, you are welcome to take a look at the participants' written discussion notes.
Limitations
This project has some considerable limitations.
First of all, the groups' ideas are based on short brainstorming sessions, so they are not substantiated with research or confirmed in other ways. It is also worth mentioning that not all attendees had a lot of experience with the EA community - some had only known about EA for a couple of weeks or months. Furthermore, a considerable amount of information may have gotten lost in translation because it was transferred through hasty discussion notes. Additionally, the information was the... |
May 01, 2023 |
EA - More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios by Vasco Grilo
25:38
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios, published by Vasco Grilo on April 29, 2023 on The Effective Altruism Forum.
Disclaimer: this is not a project from the Alliance to Feed the Earth in Disasters (ALLFED).
Summary
Global warming increases the risk from climate change. This "has the potential to result in – and to some extent is already resulting in – increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss".
However, I think global warming also decreases the risk from food shocks caused by abrupt sunlight reduction scenarios (ASRSs), which can be a nuclear winter, volcanic winter, or impact winter. In essence, because low temperature is a major driver for the decrease in crop yields that can lead to widespread starvation (see Xia 2022, and this post from Luisa Rodriguez).
Factoring in both of the above, my best guess is that additional emissions of greenhouse gases (GHGs) are beneficial up to an optimal median global warming in 2100 relative to 1880 of 3.3 ºC, after which the increase in the risk from climate change outweighs the reduction in that from ASRSs. This suggests delaying decarbonisation is good at the margin if one trusts (on top of my assumptions!):
Metaculus' community median prediction of 2.41 ºC.
Climate Action Tracker's projections of 2.6 to 2.9 ºC for current policies and action.
Nevertheless, I am not confident the above conclusion is resilient. My sensitivity analysis indicates the optimal median global warming can range from 0.1 to 4.3 ºC. So the takeaway for me is that we do not really know whether additional GHG emissions are good/bad.
In any case, it looks like the effect of global warming on the risk from ASRSs is a crucial consideration, and therefore it must be investigated, especially because it is very neglected. Another potentially crucial consideration is that an energy system which relies more on renewables and less on fossil fuels is less resilient to ASRSs.
Robustly good actions would be:
Improving civilisation resilience.
Prioritising the risk from nuclear war over that from climate change (at the margin).
Keeping options open by:
Not massively decreasing/increasing GHG emissions.
Researching cost-effective ways to decrease/increase GHG emissions.
Learning more about the risks posed by ASRSs and climate change.
Introduction
In the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane largely from fossil fuel consumption. Climate change has the potential to result in – and to some extent is already resulting in – increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss.
In What We Owe to the Future (WWOF), William MacAskill argues "decarbonisation [decreasing GHG emissions] is a proof of concept for longtermism", describing it as a "win-win-win-win-win". In addition to (supposedly) improving the longterm future:
"Moving to clean energy has enormous benefits in terms of present-day human health.
Burning fossil fuels pollutes the air with small particles that cause lung cancer, heart disease, and respiratory infections".
"By making energy cheaper [in the long run], clean energy innovation improves living standards in poorer countries".
"By helping keep fossil fuels in the ground, it guards against the risk of unrecovered collapse".
"By furthering technological progress, it reduces the risk of longterm stagnation".
I agree decarbonisation will eventually be beneficial, but I am not sure decreasing GHG emissions is good at the margin now. As I said in my hot takes on counterproductive altruism:
Mitigating global warming dec... |
May 01, 2023 |
EA - Discussion about AI Safety funding (FB transcript) by Akash
10:01
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion about AI Safety funding (FB transcript), published by Akash on April 30, 2023 on The Effective Altruism Forum.
Kat Woods recently wrote a Facebook post about Nonlinear's new funding program. This led to a discussion (in the comments section) about funding norms, the current funding bar, concerns about lowering the bar, and concerns about the current (relatively centralized) funding situation.
I'm posting a few of the comments below. I'm hoping this might promote more discussion about the funding landscape. Such discussion could be especially valuable right now, given that:
Many people are starting to get interested in AI safety (including people who are not from the EA/rationalist communities)
AGI timeline estimates have generally shortened
Investment in overall AI development is increasing quickly
There may be opportunities to spend large amounts of money in the upcoming year (e.g., scalable career transition grant programs, regranting programs, 2024 US elections, AI governance/policy infrastructure, public campaigns for AI safety).
Many ideas with high potential upside also have noteworthy downside risks (phrased less vaguely, I think that among governance/policy/comms projects that have high potential upside, >50% also have non-trivial downside risks).
We might see pretty big changes in the funding landscape over the next 6-24 months
New funders appear to be getting interested in AI safety
Governments are getting interested in AI safety
Major tech companies may decide to invest more resources into AI safety
Selected comments from FB thread
Note: I've made some editorial decisions to keep this post relatively short. Bolding is added by me. See the full thread here. Also, as usual, statements from individuals don't necessarily reflect the views of their employers.
Kat Woods (Nonlinear)
I often talk to dejected people who say they tried to get EA funding and were rejected. And what I want to do is to give them a rousing speech about how being rejected by one funder doesn't mean that their idea is bad or that their personal qualities are bad. The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application. That to succeed, you'll have to ask a ton of people, and get a ton of rejections, but that's OK, because you only need a handful of yeses.
(Kat then describes the new funding program from Nonlinear. TLDR: People submit an application that can then be reviewed by a network of 50+ funders.)
Claire Zabel (Program Officer at Open Philanthropy)
Claire's comment:
(Claire quoting Kat:) The evaluation process is noisy. Even the best funders make mistakes. They might just have a different world model or value system than you. They might have been hangry while reading your application.
(Claire's response): That's true. It's also possible the project they are applying for is harmful, but if they apply to enough funders, eventually someone will fund the harmful project (unilateralist's curse). In my experience as a grantmaker, a substantial fraction (though certainly very far from all) of rejected applications in the longtermist space seem harmful in expectation, not just "not cost-effective enough".
Selected portions of Kat's response to Claire:
1. We're probably going to be setting up channels where funders can discuss applicants. This way if there are concerns about net negativity, other funders considering it can see that. This might even lead to less unilateralist curse because if lots of funders think that the idea is net negative, others will be able to see that, instead of the status quo, where it's hard to know what other funders think of an application.
2. All these donors were giving anyways, with all the possibilities of the u... |
Apr 30, 2023 |
EA - Introducing Stanford's new Humane & Sustainable Food Lab by MMathur
10:54
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing Stanford's new Humane & Sustainable Food Lab, published by MMathur on April 30, 2023 on The Effective Altruism Forum.
We are excited to announce the new Humane & Sustainable Food Lab at Stanford University's School of Medicine (California, USA). Our mission is to end factory farming through cutting-edge scientific research that we are uniquely positioned to conduct. I am the principal investigator of the lab, an Assistant Professor at the Stanford School of Medicine with dual appointments in the Quantitative Sciences Unit and Department of Pediatrics. Because arguments for reducing factory farming as a cause area have been detailed elsewhere, here I focus on describing:
Our approach
Our research and publications to date
Our upcoming research priorities
Why we are funding-constrained
1. Our approach
1.1. Breadth, then depth
Empirical research on how to reduce factory farming is still nascent, with many low-hanging fruit and unexplored possibilities. As such, it is critical to explore broadly to see what general directions are most promising and in what real-world contexts (e.g., educational interventions that appeal to animal welfare [1, 2, 3], choice-architecture "nudges" that subtly shift food-service environments, etc.). We are conducting studies on a range of individual- and society-level interventions (see below), ultimately aiming to find and refine the most tractable, cost-effective, and scalable interventions. As we home in on candidate interventions, we expect our research to become more deeply focused on a smaller number of interventions.
1.2. Collaborating with food service to conduct and disseminate research in real-world contexts
We have a unique collaboration with the Director and Executive Chefs at the Stanford dining halls, allowing us to conduct controlled trials in real-world settings to assess interventions to reduce consumption of meat and animal products. Some of our interventions have been as simple and scalable as reducing the size of spoons used to serve these foods. Also, Stanford Residential & Dining Enterprises is a founding member of the Menus of Change University Research Collaborative (MCURC), a nationwide research consortium of 74 colleges and universities that conduct groundbreaking, collaborative studies on healthy and sustainable food choices in food service. MCURC provides evidence-based recommendations for promoting healthier and more sustainable food choices in food service operations, providing a natural route to dissemination.
Our established research model involves conducting initial pilot studies at Stanford's |