Eight Short Studies On Excuses by Scott Alexander (15:53)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Eight Short Studies On Excuses, published by Scott Alexander on LessWrong.
The Clumsy Game-Player
You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going happily along, racking up the bonuses of cooperation, when your partner unexpectedly presses the "defect" button.
"Uh, sorry," says your partner. "My finger slipped."
"I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it."
"Well," said your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation."
"True", you respond "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse."
"How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn."
You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.
After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."
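The trade-off in this study can be made concrete with a small calculation. The sketch below is illustrative only: the 3/5/1/0 payoff values and the ten-round horizon are assumptions of mine, not numbers from the post. It compares forgiving the slip outright against punishing it with one retaliatory defection, to which the partner (as described above) defects back once before cooperation resumes.

```python
# Illustrative only: standard Prisoner's Dilemma payoffs (3 for mutual cooperation,
# 1 for mutual defection, 5/0 for unilateral defection) over a 10-round game.
# These numbers are assumptions; the post does not specify a payoff matrix.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def totals(you, partner):
    """Sum each player's payoff over a sequence of simultaneous moves."""
    yours = sum(PAYOFF[(a, b)][0] for a, b in zip(you, partner))
    theirs = sum(PAYOFF[(a, b)][1] for a, b in zip(you, partner))
    return yours, theirs

ROUNDS, SLIP = 10, 4          # partner's finger "slips" on round 5

partner = ["C"] * ROUNDS
partner[SLIP] = "D"

# Option 1: accept the deal -- forgive this slip and keep cooperating.
forgive = ["C"] * ROUNDS
print(totals(forgive, partner))            # (27, 32)

# Option 2: punish -- you defect the next round, and the partner, knowing this,
# defects back; both then return to cooperation.
punish = ["C"] * ROUNDS
punish[SLIP + 1] = "D"
partner_punished = list(partner)
partner_punished[SLIP + 1] = "D"
print(totals(punish, partner_punished))    # (25, 30): each loses one turn's cooperation bonus
```

Under these assumed payoffs, punishing costs each player the difference between one round of mutual cooperation and one round of mutual defection, which is exactly the cost the partner points out in the dialogue.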
The Lazy Student
You are a perfectly utilitarian school teacher, who attaches exactly the same weight to others' welfare as to your own. You have to have the reports of all fifty students in your class ready by the time midterm grades go out on January 1st. You don't want to have to work during Christmas vacation, so you set a deadline that all reports must be in by December 15th or you won't grade them and the students will fail the class. Oh, and your class is Economics 101, and as part of a class project all your students have to behave as selfish utility-maximizing agents for the year.
It costs your students 0 utility to turn in the report on time, but they gain +1 utility by turning it in late (they enjoy procrastinating). It costs you 0 utility to grade a report turned in before December 15th, but -30 utility to grade one after December 15th. And students get 0 utility from having their reports graded on time, but get -100 utility from having a report marked incomplete and failing the class.
If you say "There's no penalty for turning in your report after deadline," then the students will procrastinate and turn in their reports late, for a total of +50 utility (1 per student times fifty students). You will have to grade all fifty reports during Christmas break, for a total of - 1500 utility (-30 per report times fifty reports). Total utility is -1450.
So instead you say "If you don't turn in your report on time, I won't grade it." All students calculate the cost of being late, which is +1 utility from procrastinating and -100 from failing the class, and turn in their reports on time. You get all reports graded before Christmas, no students fail the class, and total utility loss is zero. Yay!
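As a quick check of the arithmetic, here is the same calculation in a few lines of Python, using only the utility numbers stated above (a sketch of the comparison, not anything from the original post):

```python
# The teacher's calculation, using the utilities stated in the post.
N = 50                 # students in the class
PROCRASTINATE = +1     # student utility from handing the report in late
GRADE_LATE = -30       # teacher utility per report graded over Christmas break
FAIL = -100            # student utility from failing the class

# "No penalty" policy: every selfish student procrastinates, every report is graded late.
no_penalty_total = N * PROCRASTINATE + N * GRADE_LATE
print(no_penalty_total)                        # -1450

# "Late reports won't be graded" policy: each student compares +1 - 100 with 0,
# turns the report in on time, and nothing is graded over the break.
per_student_if_late = PROCRASTINATE + FAIL     # -99, so being late never pays
deadline_total = 0
print(per_student_if_late, deadline_total)     # -99 0
```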
Or else - one student comes to you the day after deadline and says "Sorry, I was really tired yesterday, so I really didn't want to come all the way here to hand in my report. I expect you'll grade my report anyway, because I know you to be a perfect utilitarian, an...
Dec 12, 2021
Making Vaccine by johnswentworth (09:46)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Making Vaccine, published by johnswentworth on LessWrong.
Back in December, I asked how hard it would be to make a vaccine for oneself. Several people pointed to radvac. It was a best-case scenario: an open-source vaccine design, made for self-experimenters, dead simple to make with readily-available materials, well-explained reasoning about the design, and with the name of one of the world’s more competent biologists (who I already knew of beforehand) stamped on the whitepaper. My girlfriend and I made a batch a week ago and took our first booster yesterday.
This post talks a bit about the process, a bit about our plan, and a bit about motivations. Bear in mind that we may have made mistakes - if something seems off, leave a comment.
The Process
All of the materials and equipment to make the vaccine cost us about $1000. We did not need any special licenses or anything like that. I do have a little wetlab experience from my undergrad days, but the skills required were pretty minimal.
One vial of custom peptide - that little pile of white powder at the bottom.
The large majority of the cost (about $850) was the peptides. These are the main active ingredients of the vaccine: short segments of proteins from the COVID virus. They’re all <25 amino acids, so far too small to have any likely function as proteins (for comparison, COVID’s spike protein has 1273 amino acids). They’re just meant to be recognized by the immune system: the immune system learns to recognize these sequences, and that’s what provides immunity.
Each of six peptides came in two vials of 4.5 mg each. These are the half we haven't dissolved; we keep them in the freezer as backups.
The peptides were custom synthesized. There are companies which synthesize any (short) peptide sequence you want - you can find dozens of them online. The cheapest options suffice for the vaccine - the peptides don’t need to be “purified” (this just means removing partial sequences), they don’t need any special modifications, and very small amounts suffice. The minimum order size from the company we used would have been sufficient for around 250 doses. We bought twice that much (9 mg of each peptide), because it only cost ~$50 extra to get 2x the peptides and extras are nice in case of mistakes.
The only unusual hiccup was an email about customs restrictions on COVID-related peptides. Apparently the company was not allowed to send us 9 mg in one vial, but could send us two vials of 4.5 mg each for each peptide. This didn’t require any effort on my part, other than saying “yes, two vials is fine, thank you”. Kudos to their customer service for handling it.
Equipment - stir plate, beakers, microcentrifuge tubes, 10 and 50 mL vials, pipette (0.1-1 mL range), and pipette tips. It's all available on Amazon.
Other materials - these are sold as supplements. We also need such rare and costly ingredients as vinegar and deionized water. Also all available on Amazon.
Besides the peptides, all the other materials and equipment were on amazon, food grade, in quantities far larger than we are ever likely to use. Peptide synthesis and delivery was the slowest; everything else showed up within ~3 days of ordering (it’s amazon, after all).
The actual preparation process involves three main high-level steps:
1. Prepare solutions of each component - basically dissolve everything separately, then stick it in the freezer until it’s needed.
2. Circularize two of the peptides. Concretely, this means adding a few grains of activated charcoal to the tube and gently shaking it for three hours. Then, back in the freezer.
3. When it’s time for a batch, take everything out of the freezer and mix it together.
Prepping a batch mostly just involves pipetting things into a beaker on a stir plate, sometimes drop-by-drop.
Finally, a dose goes into a microcentrifuge tube....
Dec 12, 2021
The Best Textbooks on Every Subject by lukeprog (15:01)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Best Textbooks on Every Subject, published by lukeprog on LessWrong.
For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!
I've since discovered that textbooks are usually the quickest and best way to learn new material. That's what they are designed to be, after all. Less Wrong has often recommended the "read textbooks!" method. Make progress by accumulation, not random walks.
But textbooks vary widely in quality. I was forced to read some awful textbooks in college. The ones on American history and sociology were memorably bad, in my case. Other textbooks are exciting, accurate, fair, well-paced, and immediately useful.
What if we could compile a list of the best textbooks on every subject? That would be extremely useful.
Let's do it.
There have been other pages of recommended reading on Less Wrong before (and elsewhere), but this post is unique. Here are the rules:
1. Post the title of your favorite textbook on a given subject.
2. You must have read at least two other textbooks on that same subject.
3. You must briefly name the other books you've read on the subject and explain why you think your chosen textbook is superior to them.
Rules #2 and #3 are to protect against recommending a bad book that only seems impressive because it's the only book you've read on the subject. Once, a popular author on Less Wrong recommended Bertrand Russell's A History of Western Philosophy to me, but when I noted that it was more polemical and inaccurate than the other major histories of philosophy, he admitted he hadn't really done much other reading in the field, and only liked the book because it was exciting.
I'll start the list with three of my own recommendations...
Subject: History of Western Philosophy
Recommendation: The Great Conversation, 6th edition, by Norman Melchert
Reason: The most popular history of western philosophy is Bertrand Russell's A History of Western Philosophy, which is exciting but also polemical and inaccurate. More accurate but dry and dull is Frederick Copleston's 11-volume A History of Philosophy. Anthony Kenny's recent 4-volume history, collected into one book as A New History of Western Philosophy, is both exciting and accurate, but perhaps too long (1000 pages) and technical for a first read on the history of philosophy. Melchert's textbook, The Great Conversation, is accurate but also the easiest to read, and has the clearest explanations of the important positions and debates, though of course it has its weaknesses (it spends too many pages on ancient Greek mythology but barely mentions Gottlob Frege, the father of analytic philosophy and of the philosophy of language). Melchert's history is also the only one to seriously cover the dominant mode of Anglophone philosophy done today: naturalism (what Melchert calls "physical realism"). Be sure to get the 6th edition, which has major improvements over the 5th edition.
Subject: Cognitive Science
Recommendation: Cognitive Science, by Jose Luis Bermudez
Reason: Jose Luis Bermudez's Cognitive Science: An Introduction to the Science of Mind does an excellent job setting the historical and conceptual context for cognitive science, and draws fairly from all the fields involved in this heavily interdisciplinary science. Bermudez does a good job of making himself invisible, and the explanations here are some of the clearest available. In contrast, Paul Thagard's Mind: Introduction to Cognitive Science skips the context and jumps right into a systematic comparison (by explanatory merit) of the leading theories of mental representation: logic, rules, concepts, analogies, images, and neural networks. The book is o...
Dec 12, 2021
Preface by Eliezer Yudkowsky (05:34)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Preface, published by Eliezer Yudkowsky on LessWrong.
You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I’m fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven’t learned anything or changed your mind since then.
It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.
In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.”
Yes, sometimes those big issues really are big and really are important; but that doesn’t change the basic truth that to master skills you need to practice them and it’s harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)
A third huge mistake I made was to focus too much on rational belief, too little on rational action.
The fourth-largest mistake I made was that I should have better organized the content I was presenting in the sequences. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence.
That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized them as much as he can without trying to rewrite all the actual material (though he’s rewritten a bit of it).
My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.
Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.
Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)
To be able to look backwards and say that you’ve “failed” implies that you had goals. So what was it that I was trying to do?
Th...
Dec 12, 2021
Rationalism before the Sequences by Eric Raymond (18:03)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Rationalism before the Sequences, published by Eric Raymond on LessWrong.
I'm here to tell you a story about what it was like to be a rationalist decades before the Sequences and the formation of the modern rationalist community. It is not the only story that could be told, but it is one that runs parallel to and has important connections to Eliezer Yudkowsky's and how his ideas developed.
My goal in writing this essay is to give the LW community a sense of the prehistory of their movement. It is not intended to be "where Eliezer got his ideas"; that would be stupidly reductive. I aim more to exhibit where the drive and spirit of the Yudkowskian reform came from, and the interesting ways in which Eliezer's formative experiences were not unique.
My standing to write this essay begins with the fact that I am roughly 20 years older than Eliezer and read many of his sources before he was old enough to read. I was acquainted with him over an email list before he wrote the Sequences, though I somehow managed to forget those interactions afterwards and only rediscovered them while researching for this essay. In 2005 he had even sent me a book manuscript to review that covered some of the Sequences topics.
My reaction on reading "The Twelve Virtues of Rationality" a few years later was dual. It was a different kind of writing than the book manuscript - stronger, more individual, taking some serious risks. On the one hand, I was deeply impressed by its clarity and courage. On the other hand, much of it seemed very familiar, full of hints and callbacks and allusions to books I knew very well.
Today it is probably more difficult to back-read Eliezer's sources than it was in 2006, because the body of more recent work within his reformation of rationalism tends to get in the way. I'm going to attempt to draw aside that veil by talking about four specific topics: General Semantics, analytic philosophy, science fiction, and Zen Buddhism.
Before I get to those specifics, I want to try to convey that sense of what it was like. I was a bright geeky kid in the 1960s and 1970s, immersed in a lot of obscure topics often with an implicit common theme: intelligence can save us! Learning how to think more clearly can make us better! But at the beginning I was groping as if in a dense fog, unclear about how to turn that belief into actionable advice.
Sometimes I would get a flash of light through the fog, or at least a sense that there were other people on the same lonely quest. A bit of that sense sometimes drifted over USENET, an early precursor of today's Internet fora. More often than not, though, the clue would be fictional; somebody's imagination about what it would be like to increase intelligence, to burn away error and think more clearly.
When I found non-fiction sources on rationality and intelligence increase I devoured them. Alas, most were useless junk. But in a few places I found gold. Not by coincidence, the places I found real value were sources Eliezer would later draw on. I'm not guessing about this, I was able to confirm it first from Eliezer's explicit reports of what influenced him and then via an email conversation.
Eliezer and I were not unique. We know directly of a few others with experiences like ours. There were likely dozens of others we didn't know - possibly hundreds - on parallel paths, all hungrily seeking clarity of thought, all finding largely overlapping subsets of clues and techniques because there simply wasn't that much out there to be mined.
One piece of evidence for this parallelism besides Eliezer's reports is that I bounced a draft of this essay off Nancy Lebovitz, a former LW moderator who I've known personally since the 1970s. Her instant reaction? "Full of stuff I knew already."
Around the time Nancy and I first met, some years before Eliezer Yudk...
Dec 12, 2021
Schelling fences on slippery slopes by Scott Alexander (09:20)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Schelling fences on slippery slopes, published by Scott Alexander on LessWrong.
Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien:
"Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on the road to things like banning unpopular political positions or religions, and we right-thinking people oppose that, so we won't ban Holocaust denial."
And the alien might well respond: "But you could just ban Holocaust denial, but not ban unpopular political positions or religions. Then you right-thinking people get the thing you want, but not the thing you don't want."
This post is about some of the replies you might give the alien.
Abandoning the Power of Choice
This is the boring one without any philosophical insight that gets mentioned only for completeness' sake. In this reply, giving up a certain point risks losing the ability to decide whether or not to give up other points.
For example, if people gave up the right to privacy and allowed the government to monitor all phone calls, online communications, and public places, then if someone launched a military coup, it would be very difficult to resist them because there would be no way to secretly organize a rebellion. This is also brought up in arguments about gun control a lot.
I'm not sure this is properly thought of as a slippery slope argument at all. It seems to be a more straightforward "Don't give up useful tools for fighting tyranny" argument.
The Legend of Murder-Gandhi
Previously on Less Wrong's The Adventures of Murder-Gandhi: Gandhi is offered a pill that will turn him into an unstoppable murderer. He refuses to take it, because in his current incarnation as a pacifist, he doesn't want others to die, and he knows that would be a consequence of taking the pill. Even if we offered him $1 million to take the pill, his abhorrence of violence would lead him to refuse.
But suppose we offered Gandhi $1 million to take a different pill: one which would decrease his reluctance to murder by 1%. This sounds like a pretty good deal. Even a person with 1% less reluctance to murder than Gandhi is still pretty pacifist and not likely to go killing anybody. And he could donate the money to his favorite charity and perhaps save some lives. Gandhi accepts the offer.
Now we iterate the process: every time Gandhi takes the 1%-more-likely-to-murder-pill, we offer him another $1 million to take the same pill again.
Maybe original Gandhi, upon sober contemplation, would decide to accept $5 million to become 5% less reluctant to murder. Maybe 95% of his original pacifism is the only level at which he can be absolutely sure that he will still pursue his pacifist ideals.
Unfortunately, original Gandhi isn't the one making the choice of whether or not to take the 6th pill. 95%-Gandhi is. And 95% Gandhi doesn't care quite as much about pacifism as original Gandhi did. He still doesn't want to become a murderer, but it wouldn't be a disaster if he were just 90% as reluctant as original Gandhi, that stuck-up goody-goody.
What if there were a general principle that each Gandhi was comfortable with Gandhis 5% more murderous than himself, but no more? Original Gandhi would start taking the pills, hoping to get down to 95%, but 95%-Gandhi would start taking five more, hoping to get down to 90%, and so on until he's rampaging through the streets of Delhi, killing everything in sight.
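The slide described above is easy to simulate. In the sketch below, the 1%-per-pill step, the $1 million per pill, and the 5-point comfort margin come from the post; the 100-point pacifism scale and the loop itself are illustrative assumptions of mine.

```python
# Toy simulation of the slide: each Gandhi is willing to drift 5 points below
# wherever he currently stands, one 1%-pill ($1 million each) at a time.
pacifism = 100          # original Gandhi
money = 0
while pacifism > 0:
    target = pacifism - 5            # how far the *current* Gandhi is comfortable going
    while pacifism > target:
        pacifism -= 1                # one pill: 1% less reluctance to murder
        money += 1_000_000
print(pacifism, money)  # 0 100000000 -- murder-Gandhi, $100 million richer
```

Each intermediate Gandhi makes a locally reasonable trade, and the process only stops at zero; the fix the post goes on to discuss is committing in advance to a fence that no later Gandhi is allowed to move.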
Now we're tempted to say Gandhi shouldn't even take the first pill. But this also seems odd. Are we really saying Gandhi shouldn't take what's basically a free million dollars to turn himself into 99%-Gandhi, who might well be nearly indistinguishable in his actions from the original?
Maybe Gandhi's best...
Dec 12, 2021
Diseased thinking: dissolving questions about disease by Scott Alexander (15:44)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Diseased thinking: dissolving questions about disease, published by Scott Alexander on LessWrong.
Related to: Disguised Queries, Words as Hidden Inferences, Dissolving the Question, Eight Short Studies on Excuses
Today's therapeutic ethos, which celebrates curing and disparages judging, expresses the liberal disposition to assume that crime and other problematic behaviors reflect social or biological causation. While this absolves the individual of responsibility, it also strips the individual of personhood and moral dignity.
-- George Will, townhall.com
Sandy is a morbidly obese woman looking for advice.
Her husband has no sympathy for her, and tells her she obviously needs to stop eating like a pig, and would it kill her to go to the gym once in a while?
Her doctor tells her that obesity is primarily genetic, and recommends the diet pill orlistat and a consultation with a surgeon about gastric bypass.
Her sister tells her that obesity is a perfectly valid lifestyle choice, and that fat-ism, equivalent to racism, is society's way of keeping her down.
When she tells each of her friends about the opinions of the others, things really start to heat up.
Her husband accuses her doctor and sister of absolving her of personal responsibility with feel-good platitudes that in the end will only prevent her from getting the willpower she needs to start a real diet.
Her doctor accuses her husband of ignorance of the real causes of obesity and of the most effective treatments, and accuses her sister of legitimizing a dangerous health risk that could end with Sandy in hospital or even dead.
Her sister accuses her husband of being a jerk, and her doctor of trying to medicalize her behavior in order to turn it into a "condition" that will keep her on pills for life and make lots of money for Big Pharma.
Sandy is fictional, but similar conversations happen every day, not only about obesity but about a host of other marginal conditions that some consider character flaws, others diseases, and still others normal variation in the human condition. Attention deficit disorder, internet addiction, social anxiety disorder (as one skeptic said, didn't we used to call this "shyness"?), alcoholism, chronic fatigue, oppositional defiant disorder ("didn't we used to call this being a teenager?"), compulsive gambling, homosexuality, Asperger's syndrome, antisocial personality, even depression have all been placed in two or more of these categories by different people.
Sandy's sister may have a point, but this post will concentrate on the debate between her husband and her doctor, with the understanding that the same techniques will apply to evaluating her sister's opinion. The disagreement between Sandy's husband and doctor centers around the idea of "disease". If obesity, depression, alcoholism, and the like are diseases, most people default to the doctor's point of view; if they are not diseases, they tend to agree with the husband.
The debate over such marginal conditions is in many ways a debate over whether or not they are "real" diseases. The usual surface level arguments trotted out in favor of or against the proposition are generally inconclusive, but this post will apply a host of techniques previously discussed on Less Wrong to illuminate the issue.
What is Disease?
In Disguised Queries , Eliezer demonstrates how a word refers to a cluster of objects related upon multiple axes. For example, in a company that sorts red smooth translucent cubes full of vanadium from blue furry opaque eggs full of palladium, you might invent the word "rube" to designate the red cubes, and another "blegg", to designate the blue eggs. Both words are useful because they "carve reality at the joints" - they refer to two completely separate classes of things which it's practically useful to keep in separate cat...
Dec 12, 2021
Generalizing From One Example by Scott Alexander (08:30)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Generalizing From One Example, published by Scott Alexander on LessWrong.
Related to: The Psychological Unity of Humankind, Instrumental vs. Epistemic: A Bardic Perspective
"Everyone generalizes from one example. At least, I do."
-- Vlad Taltos (Issola, Steven Brust)
My old professor, David Berman, liked to talk about what he called the "typical mind fallacy", which he illustrated through the following example:
There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, or do they simply say "I saw it in my mind" as a metaphor for considering what it looked like?
Upon hearing this, my response was "How the stars was this actually a real debate? Of course we have mental imagery. Anyone who doesn't think we have mental imagery is either such a fanatical Behaviorist that she doubts the evidence of her own senses, or simply insane." Unfortunately, the professor was able to parade a long list of famous people who denied mental imagery, including some leading scientists of the era. And this was all before Behaviorism even existed.
The debate was resolved by Francis Galton, a fascinating man who among other achievements invented eugenics, the "wisdom of crowds", and standard deviation. Galton gave people some very detailed surveys, and found that some people did have mental imagery and others didn't. The ones who did had simply assumed everyone did, and the ones who didn't had simply assumed everyone didn't, to the point of coming up with absurd justifications for why they were lying or misunderstanding the question. There was a wide spectrum of imaging ability, from about five percent of people with perfect eidetic imagery[1] to three percent of people completely unable to form mental images[2].
Dr. Berman dubbed this the Typical Mind Fallacy: the human tendency to believe that one's own mental structure can be generalized to apply to everyone else's.
He kind of took this idea and ran with it. He interpreted certain passages in George Berkeley's biography to mean that Berkeley was an eidetic imager, and that this was why the idea of the universe as sense-perception held such interest to him. He also suggested that experience of consciousness and qualia were as variable as imaging, and that philosophers who deny their existence (Ryle? Dennett? Behaviorists?) were simply people whose mind lacked the ability to easily experience qualia. In general, he believed philosophy of mind was littered with examples of philosophers taking their own mental experiences and building theories on them, and other philosophers with different mental experiences critiquing them and wondering why they disagreed.
The formal typical mind fallacy is about serious matters of mental structure. But I've also run into something similar with something more like the psyche than the mind: a tendency to generalize from our personalities and behaviors.
For example, I'm about as introverted a person as you're ever likely to meet - anyone more introverted than I am doesn't communicate with anyone. All through elementary and middle school, I suspected that the other children were out to get me. They kept on grabbing me when I was busy with something and trying to drag me off to do some rough activity with them and their friends. When I protested, they counter-protested and told me I really needed to stop whatever I was doing and come join them. I figured they were bullies who were trying to annoy me, and found ways to hide from them and scare them off.
Eventually I realized that it was a double misunderstanding. They figured I must be like them, and the only thing keeping me from playing their fun games was that I was too shy. I figured they must be like me, and that the only re...
Dec 12, 2021
Reason as memetic immune disorder by PhilGoetz (07:33)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Reason as memetic immune disorder, published by PhilGoetz on LessWrong.
A prophet is without dishonor in his hometown
I'm reading the book "The Year of Living Biblically," by A.J. Jacobs. He tried to follow all of the commandments in the Bible (Old and New Testaments) for one year. He quickly found that
a lot of the rules in the Bible are impossible, illegal, or embarrassing to follow nowadays, like wearing tassels, tying your money to yourself, stoning adulterers, not eating fruit from a tree less than 5 years old, and not touching anything that a menstruating woman has touched; and
this didn't seem to bother more than a handful of the one-third to one-half of Americans who claim the Bible is the word of God.
You may have noticed that people who convert to religion after the age of 20 or so are generally more zealous than people who grew up with the same religion. People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do.
I remember many times growing up when missionaries described the crazy things their new converts in remote areas did on reading the Bible for the first time - they refused to be taught by female missionaries; they insisted on following Old Testament commandments; they decided that everyone in the village had to confess all of their sins against everyone else in the village; they prayed to God and assumed He would do what they asked; they believed the Christian God would cure their diseases. We would always laugh a little at the naivete of these new converts; I could barely hear the tiny voice in my head saying but they're just believing that the Bible means what it says...
How do we explain the blindness of people to a religion they grew up with?
Cultural immunity
Europe has lived with Christianity for nearly 2000 years. European culture has co-evolved with Christianity. Culturally, memetically, it's developed a tolerance for Christianity. These new Christian converts, in Uganda, Papua New Guinea, and other remote parts of the world, were being exposed to Christian memes for the first time, and had no immunity to them.
The history of religions sometimes resembles the history of viruses. Judaism and Islam were both highly virulent when they first broke out, driving the first generations of their people to conquer (Islam) or just slaughter (Judaism) everyone around them for the sin of not being them. They both grew more sedate over time. (Christianity was pacifist at the start, as it arose in a conquered people. When the Romans adopted it, it didn't make them any more militaristic than they already were.)
The mechanism isn't the same as for diseases, which can't be too virulent or they kill their hosts. Religions don't generally kill their hosts. I suspect that, over time, individual selection favors those who are less zealous. The point is that a culture develops antibodies for the particular religions it co-exists with - attitudes and practices that make them less virulent.
I have a theory that "radical Islam" is not native Islam, but Westernized Islam. Over half of 75 Muslim terrorists studied by Bergen & Pandey 2005 in the New York Times had gone to a Western college. (Only 9% had attended madrassas.) A very small percentage of all Muslims have received a Western college education. When someone lives all their life in a Muslim country, they're not likely to be hit with the urge to travel abroad and blow something up. But when someone from an Islamic nation goes to Europe for college, and co...
Dec 12, 2021
Pain is not the unit of Effort by alkjash (07:44)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Pain is not the unit of Effort, published by alkjash on LessWrong.
This is a linkpost for/
(Content warning: self-harm, parts of this post may be actively counterproductive for readers with certain mental illnesses or idiosyncrasies.)
What doesn't kill you makes you stronger. ~ Kelly Clarkson.
No pain, no gain. ~ Exercise motto.
The more bitterness you swallow, the higher you'll go. ~ Chinese proverb.
I noticed recently that, at least in my social bubble, pain is the unit of effort. In other words, how hard you are trying is explicitly measured by how much suffering you put yourself through. In this post, I will share some anecdotes of how damaging and pervasive this belief is, and propose some counterbalancing ideas that might help rectify this problem.
I. Anecdotes
1. As a child, I spent most of my evenings studying mathematics under some amount of supervision from my mother. While studying, if I expressed discomfort or fatigue, my mother would bring me a snack or drink and tell me to stretch or take a break. I think she took it as a sign that I was trying my best. If on the other hand I was smiling or joyful for extended periods of time, she took that as a sign that I had effort to spare and increased the hours I was supposed to study each day. To this day there's a gremlin on my shoulder that whispers, "If you're happy, you're not trying your best."
2. A close friend who played sports in school reports that training can be harrowing. He told me that players who fell behind the pack during daily jogs would be singled out and publicly humiliated. One time the coach screamed at my friend for falling behind the asthmatic boy who was alternating between running and using his inhaler. Another time, my friend internalized "no pain, no gain" to the point of losing his toenails.
3. In high school and college, I was surrounded by overachievers constantly making (what seemed to me) incomprehensibly bad life choices. My classmates would sign up for eight classes per semester when the recommended number is five, jigsaw extracurricular activities into their calendar like a dynamic programming knapsack-solver, and then proceed to have loud public complaining contests about which libraries are most comfortable to study at past 2am and how many pages they have left to write for the essay due in three hours. Only later did I learn to ask: what incentives were they responding to?
4. A while ago I became a connoisseur of Chinese webnovels. Among those written for a male audience, there is a surprisingly diverse set of character traits represented among the main characters. Doubtless many are womanizing murderhobos with no redeeming qualities, but others are classical heroes with big hearts, or sarcastic antiheroes who actually grow up a little, or ambitious empire-builders with grand plans to pave the universe with Confucian order, or down-on-their-luck starving artists who just want to bring happiness to the world through song.
If there is a single common virtue shared by all these protagonists, it is their superhuman pain tolerance. Protagonists routinely and often voluntarily dunk themselves in vats of lava, have all their bones broken, shattered, and reforged, get trapped inside alternate dimensions of freezing cold for millennia (which conveniently only takes a day in the outside world), and overdose on level-up pills right up to the brink of death, all in the name of becoming stronger. Oftentimes the defining difference between the protagonist and the antagonist is that the antagonist did not have enough pain tolerance and allowed the (unbearable physical) suffering in his life to drive him mad.
5. I have a close friend who often asks for my perspective on personal problems. A pattern arose in a couple of our conversations:
alkjash: I feel like you're not ac...
Dec 12, 2021
Bets, Bonds, and Kindergarteners by jefftk (02:40)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Bets, Bonds, and Kindergarteners, published by jefftk on LessWrong.
Bets and bonds are tools for handling different epistemic states and levels of trust. Which makes them a great fit for negotiating with small children!
A few weeks ago Anna (4y) wanted to play with some packing material. It looked very messy to me, I didn't expect she would clean it up, and I didn't want to fight with her about cleaning it up. I considered saying no, but after thinking about how things like this are handled in the real world I had an idea. If you want to do a hazardous activity, and we think you might go bankrupt and not clean up, we make you post a bond. This money is held in escrow to fund the cleanup if you disappear. I explained how this worked, and she went and got a dollar.
When she was done playing, she cleaned it up without complaint and got her dollar back. If she hadn't cleaned it up, I would have, and kept the dollar.
Some situations are more complicated, and call for bets. I wanted to go to a park, but Lily (6y) didn't want to go to that park because the last time we had been there, there'd been lots of bees. I remembered that that had been a summer with unusually many bees, and since it was no longer that summer, or in fact summer at all, I was not worried. Since I was so confident, I offered my $1 against her $0.10 that we would not run into bees at the park. This seemed fair to her, and when there were no bees she was happy to pay up.
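One way to see why such offers carry information: if the person offering only accepts bets with non-negative expected value for himself, the stakes bound the probability he assigns to losing. The break-even framing below is my own illustration, not jefftk's; only the dollar amounts come from the post.

```python
# Sketch: the largest probability of losing at which risking `my_stake` to win
# `their_stake` still has non-negative expected value for the person offering.
def max_losing_probability(my_stake, their_stake):
    """Break-even P(I lose): their_stake * (1 - p) - my_stake * p >= 0."""
    return their_stake / (my_stake + their_stake)

# The bee bet: $1.00 risked against Lily's $0.10.
print(round(max_losing_probability(1.00, 0.10), 3))   # 0.091 -> "bees" judged <~9% likely

# Offering 10:1 odds, as in the later anecdote, implies the same rough threshold.
print(round(max_losing_probability(10, 1), 3))         # 0.091
```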
Over time, they've learned that my being willing to bet, especially at large odds, is pretty informative, and often all I need to do is offer. Lily was having a rough morning, crying by herself about a project not working out. I suggested some things that might be fun to do together, and she rejected them angrily. I told her that often when people are feeling that way, going outside can help a lot, and when she didn't seem to believe me I offered to bet. Once she heard the 10:1 odds I was offering her I think she just started expecting that I was right, and she decided we should go ride bikes. (She didn't actually cheer up when we got outside: she cheered up as soon as she made this decision.)
I do think there is some risk with this approach that the child will have a bad time just to get the money, or say they are having a bad time and they are actually not, but this isn't something we've run into. Another risk, if we were to wager large amounts, would be that the child would end up less happy than if I hadn't interacted with them at all. I handle this by making sure not to offer a bet I think they would regret losing, and while this is not a courtesy I expect people to make later in life, I think it's appropriate at their ages.
Comment via: facebook
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dec 12, 2021
Thoughts on the Singularity Institute (SI) by HoldenKarnofsky (44:55)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Thoughts on the Singularity Institute (SI), published by HoldenKarnofsky on LessWrong.
This post presents thoughts on the Singularity Institute from Holden Karnofsky, Co-Executive Director of GiveWell. Note: Luke Muehlhauser, the Executive Director of the Singularity Institute, reviewed a draft of this post, and commented: "I do generally agree that your complaints are either correct (especially re: past organizational competence) or incorrect but not addressed by SI in clear argumentative writing (this includes the part on 'tool' AI). I am working to address both categories of issues." I take Luke's comment to be a significant mark in SI's favor, because it indicates an explicit recognition of the problems I raise, and thus increases my estimate of the likelihood that SI will work to address them.
September 2012 update: responses have been posted by Luke and Eliezer (and I have responded in the comments of their posts). I have also added acknowledgements.
The Singularity Institute (SI) is a charity that GiveWell has been repeatedly asked to evaluate. In the past, SI has been outside our scope (as we were focused on specific areas such as international aid). With GiveWell Labs we are open to any giving opportunity, no matter what form and what sector, but we still do not currently plan to recommend SI; given the amount of interest some of our audience has expressed, I feel it is important to explain why. Our views, of course, remain open to change. (Note: I am posting this only to Less Wrong, not to the GiveWell Blog, because I believe that everyone who would be interested in this post will see it here.)
I am currently the GiveWell staff member who has put the most time and effort into engaging with and evaluating SI. Other GiveWell staff currently agree with my bottom-line view that we should not recommend SI, but this does not mean they have engaged with each of my specific arguments. Therefore, while the lack of recommendation of SI is something that GiveWell stands behind, the specific arguments in this post should be attributed only to me, not to GiveWell.
Summary of my views
The argument advanced by SI for why the work it's doing is beneficial and important seems both wrong and poorly argued to me. My sense at the moment is that the arguments SI is making would, if accepted, increase rather than decrease the risk of an AI-related catastrophe. More
SI has, or has had, multiple properties that I associate with ineffective organizations, and I do not see any specific evidence that its personnel/organization are well-suited to the tasks it has set for itself. More
A common argument for giving to SI is that "even an infinitesimal chance that it is right" would be sufficient given the stakes. I have written previously about why I reject this reasoning; in addition, prominent SI representatives seem to reject this particular argument as well (i.e., they believe that one should support SI only if one believes it is a strong organization making strong arguments). More
My sense is that at this point, given SI's current financial state, withholding funds from SI is likely better for its mission than donating to it. (I would not take this view to the furthest extreme; the argument that SI should have some funding seems stronger to me than the argument that it should have as much as it currently has.)
I find existential risk reduction to be a fairly promising area for philanthropy, and plan to investigate it further. More
There are many things that could happen that would cause me to revise my view on SI. However, I do not plan to respond to all comment responses to this post. (Given the volume of responses we may receive, I may not be able to even read all the comments on this post.) I do not believe these two statements are inconsistent, and I lay out paths for getting me...
Dec 12, 2021
Humans are not automatically strategic by AnnaSalamon (06:10)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Humans are not automatically strategic, published by AnnaSalamon on LessWrong.
Reply to: A "Failure to Evaluate Return-on-Time" Fallacy
Lionhearted writes:
[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....
I’m curious as to why.
Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a "fear of success"; most ways of writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)
Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.
To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.
But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
(a) Ask ourselves what we’re trying to achieve;
(b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
(c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
(d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
(e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
(f) Focus most of the energy that isn’t going into systematic exploration, on the methods that work best;
(g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
(h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
.... or carry out any number of other useful techniques. Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.
Why? Most basically, because humans are only just on the cusp o...
Dec 12, 2021
Anti-Aging: State of the Art by JackH (20:59)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Anti-Aging: State of the Art, published by JackH on LessWrong.
Aging is a problem that ought to be solved, and most Less Wrongers recognize this. However, few members of the community seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work.
Today, there are over 130 longevity biotechnology companies and over 50 anti-aging drugs in clinical trials in humans. The evidence is promising that in the next 5-10 years, we will start seeing robust evidence that aging can be therapeutically slowed or reversed in humans. Whether we live to see anti-aging therapies to keep us alive indefinitely (i.e. whether we make it to longevity escape velocity) depends on how much traction and funding the field gets in coming decades.
In this post, I summarise the state of the art of the anti-aging field (also known as longevity biotechnology, rejuvenation biotechnology, translational biogerontology or geroscience). If you feel you already possess the necessary background on aging, feel free to skip to Part V.
Part I: Why is Aging a problem?
Aging is the biggest killer worldwide, and also the largest source of morbidity. Aging kills 100,000 people per day; more than twice the sum of all other causes of death. This equates to 37 million people - a population the size of Canada - dying per year of aging. In developed countries, 9 out of 10 deaths are due to aging.
Aging also accounts for more than 30% of all disability-adjusted life years lost (DALYs); more than any other single cause. Deaths due to aging are not usually quick and painless, but preceded by 10-15 years of chronic illnesses such as cancer, type 2 diabetes and Alzheimer’s disease. Quality of life typically deteriorates in older age, and the highest rates of depression worldwide are among the elderly.
To give a relevant example of the effects of aging, consider that aging is primarily responsible for almost all COVID-19 deaths. This is observable in the strong association of COVID-19 mortality with age (below, middle panel):
The death rate from COVID-19 increases exponentially with age (above, middle). This is not a coincidence - it is because biological aging weakens the immune system and results in a much higher chance of death from COVID-19. On a side note, waning immunity with age also increases cancer risk, as another example of how aging is associated with chronic illness.
The mortality rate doubling time for COVID-19 is close to the all-cause mortality rate doubling time, suggesting that people who die of COVID-19 are really dying of aging. Without aging, COVID-19 would not be a global pandemic, since the death rate in individuals below 30 years old is extremely low.
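To illustrate what a fixed mortality-rate doubling time means, the sketch below assumes the commonly cited all-cause (Gompertz) figure of roughly eight years; that number is an assumption for illustration only, since the post does not give exact values.

```python
# Illustration only: with a fixed mortality-rate doubling time, risk grows
# exponentially with age. The ~8-year doubling time is an assumed, commonly
# cited all-cause figure, not a number from the post.
def relative_risk(age, reference_age=30, doubling_time=8.0):
    """Mortality risk at `age` relative to `reference_age`."""
    return 2 ** ((age - reference_age) / doubling_time)

for age in (30, 50, 70, 90):
    print(age, round(relative_risk(age), 1))
# 30 1.0 / 50 5.7 / 70 32.0 / 90 181.0
```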
Part II: What does a world without aging look like?
For those who have broken free of the pro-aging trance and recognise aging as a problem, there is the further challenge of imagining a world without aging. The prominent ‘black mirror’ portrayals of immortality as a curse or hubristic may distort our model of what a world with anti-aging actually looks like.
The 'white mirror' of aging is a world in which biological age is halted at 20-30 years, and people maintain optimal health for a much longer or indefinite period of time. Although people will still age chronologically (exist over time) they will not undergo physical and cognitive decline associated with biological aging. At chronological ages of 70s, 80s, even 200s, t...
Dec 12, 2021
The noncentral fallacy - the worst argument in the world? by Scott Alexander (11:28)
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The noncentral fallacy - the worst argument in the world?, published by Scott Alexander on LessWrong.
Related to: Leaky Generalizations, Replace the Symbol With The Substance, Sneaking In Connotations
David Stove once ran a contest to find the Worst Argument In The World, but he awarded the prize to his own entry, and one that shored up his politics to boot. It hardly seems like an objective process.
If he can unilaterally declare a Worst Argument, then so can I. I declare the Worst Argument In The World to be this: "X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member."
Call it the Noncentral Fallacy. It sounds dumb when you put it like that. Who even does that, anyway?
It sounds dumb only because we are talking soberly of categories and features. As soon as the argument gets framed in terms of words, it becomes so powerful that somewhere between many and most of the bad arguments in politics, philosophy and culture take some form of the noncentral fallacy. Before we get to those, let's look at a simpler example.
Suppose someone wants to build a statue honoring Martin Luther King Jr. for his nonviolent resistance to racism. An opponent of the statue objects: "But Martin Luther King was a criminal!"
Any historian can confirm this is correct. A criminal is technically someone who breaks the law, and King knowingly broke a law against peaceful anti-segregation protest - hence his famous Letter from Birmingham Jail.
But in this case calling Martin Luther King a criminal is the noncentral fallacy. The archetypal criminal is a mugger or bank robber. He is driven only by greed, preys on the innocent, and weakens the fabric of society. Since we don't like these things, calling someone a "criminal" naturally lowers our opinion of them.
The opponent is saying "Because you don't like criminals, and Martin Luther King is a criminal, you should stop liking Martin Luther King." But King doesn't share the important criminal features of being driven by greed, preying on the innocent, or weakening the fabric of society that made us dislike criminals in the first place. Therefore, even though he is a criminal, there is no reason to dislike King.
This all seems so nice and logical when it's presented in this format. Unfortunately, it's also one hundred percent contrary to instinct: the urge is to respond "Martin Luther King? A criminal? No he wasn't! You take that back!" This is why the noncentral fallacy is so successful. As soon as you do that, you've fallen into their trap. Your argument is no longer about whether you should build a statue, it's about whether King was a criminal. Since he was, you have now lost the argument.
Ideally, you should just be able to say "Well, King was the good kind of criminal." But that seems pretty tough as a debating maneuver, and it may be even harder in some of the cases where the noncentral fallacy is commonly used.
Now I want to list some of these cases. Many will be political[1], for which I apologize, but it's hard to separate out a bad argument from its specific instantiations. None of these examples are meant to imply that the position they support is wrong (and in fact I myself hold some of them). They only show that certain particular arguments for the position are flawed, such as:
"Abortion is murder!" The archetypal murder is Charles Manson breaking into your house and shooting you. This sort of murder is bad for a number of reasons: you prefer not to die, you have various thoughts and hopes and dreams that would be snuffed out, your family and friends would be heartbroken, and the rest of society has to live in fear until Manson gets caught. If you define murder as "killing another human being", then abortion is technically ...
Dec 12, 2021
Dying Outside by HalFinney
04:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Dying Outside, published by HalFinney on LessWrong.
A man goes in to see his doctor, and after some tests, the doctor says, "I'm sorry, but you have a fatal disease."
Man: "That's terrible! How long have I got?"
Doctor: "Ten."
Man: "Ten? What kind of answer is that? Ten months? Ten years? Ten what?"
The doctor looks at his watch. "Nine."
Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years.
There are however two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside.
The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option to either go onto invasive mechanical respiration, which involves a tracheotomy and breathing machine, or they can die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation when they are still in relatively good health, even though this provides the best odds for a successful transition.
With mechanical respiration, survival with ALS can be indefinitely extended. And the great majority of people living on respirators say that their quality of life is good and they are happy with their decision. (There may be a selection effect here.) It seems, then, that calling ALS a fatal disease is an oversimplification. ALS takes away your body, but it does not take away your mind, and if you are determined and fortunate, it does not have to take away your life.
There are a number of practical and financial obstacles to successfully surviving on a ventilator, foremost among them the great load on caregivers. No doubt this contributes to the high rates of choosing death. But it seems that much of the objection is philosophical. People are not happy about being kept alive by machines. And they assume that their quality of life would be poor, without the ability to move and participate in their usual activities. This is despite the fact that most people on respirators describe their quality of life as acceptable to good. As we have seen in other contexts, people are surprisingly poor predictors of how they will react to changed circumstances. This seems to be such a case, contributing to the high death rates for ALS patients.
I hope that when the time comes, I will choose life. ALS kills only motor neurons, which carry signals to the muscles. The senses are intact. And most patients retain at least some vestige of control over a few muscles, which with modern technology can offer a surprisingly effective mode of communication. Stephen Hawking, the world's longest surviving ALS patient at over 40 years since diagnosis, is said to be able to type at ten words per minute by twitching a cheek muscle. I hope to be able to read, browse ...
Dec 12, 2021
There"s no such thing as a tree (phylogenetically) by eukaryote
12:41
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: There"s no such thing as a tree (phylogenetically), published by eukaryote on LessWrong.
This is a linkpost for/
[Crossposted from Eukaryote Writes Blog.]
So you’ve heard about how fish aren’t a monophyletic group? You’ve heard about carcinization, the process by which ocean arthropods convergently evolve into crabs? You say you get it now? Sit down. Sit down. Shut up. Listen. You don’t know nothing yet.
“Trees” are not a coherent phylogenetic category. On the evolutionary tree of plants, trees are regularly interspersed with things that are absolutely, 100% not trees. This means that, for instance, either:
The common ancestor of a maple and a mulberry tree was not a tree.
The common ancestor of a stinging nettle and a strawberry plant was a tree.
And this is true for most trees or non-trees that you can think of.
I thought I had a pretty good guess at this, but the situation is far worse than I could have imagined.
[Figure: partial phylogenetic tree of various plants. Tan is definitely, 100% trees. Yellow is tree-like. Green is 100% not a tree. Sourced mostly from Wikipedia.]
I learned after making this chart that tree ferns exist (h/t seebs), which I think just emphasizes my point further. Also, h/t kithpendragon for suggestions on improving accessibility of the graph.
Why do trees keep happening?
First, what is a tree? It’s a big long-lived self-supporting plant with leaves and wood.
Also of interest to us are the non-tree “woody plants”, like lianas (thick woody vines) and shrubs. They’re not trees, but at least to me, it’s relatively apparent how a tree could evolve into a shrub, or vice-versa. The confusing part is a tree evolving into a dandelion. (Or vice-versa.)
Wood, as you may have guessed by now, is also not a clear phyletic category. But it’s a reasonable category – a lignin-dense structure, usually one that grows from the exterior and that forms a pretty readily identifiable material when separated from the tree. (Okay, not the most explainable, but you know wood? You know when you hold something in your hand, and it’s made of wood, and you can tell that? Yeah, that thing.)
All plants have lignin and cellulose as structural elements – wood is plant matter that is dense with both of these.
Botanists don’t seem to think it only could have gone one way – for instance, the common ancestor of flowering plants is theorized to have been woody. But we also have pretty clear evidence of recent evolution of woodiness – say, a new plant arrives on a relatively barren island, and some of the offspring of that plant becomes treelike. Of plants native to the Canary Islands, wood independently evolved at least 38 times!
One relevant factor is that all woody plants do, in a sense, begin life as herbaceous plants – by and large, a tree sprout shares a lot of properties with any herbaceous plant. Indeed, botanists call this kind of fleshy, soft growth from the center that elongates a plant “primary growth”, and the later growth towards the outside, which causes a plant to thicken, “secondary growth.” In a woody plant, secondary growth also means growing wood and bark – but other plants sometimes do secondary growth as well, like potatoes (in roots).
This paper addresses the question. I don’t understand a lot of the closely genetic details, but my impression of its thesis is that: Analysis of convergently-evolved woody plants shows that the genes for secondary woody growth are similar to primary growth in plants that don’t do any secondary growth – even in unrelated plants. And woody growth is an adaptation of secondary growth. To abstract a little more, there is a common and useful structure in herbaceous plants that, when slightly tweaked, “dendronizes” them into woody plants.
Dendronization – Evolving into a tree-like morphology. (In the style of “carciniz...
Dec 12, 2021
Intellectual Hipsters and Meta-Contrarianism by Scott Alexander
11:48
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Intellectual Hipsters and Meta-Contrarianism, published by Scott Alexander on LessWrong.
Related to: Why Real Men Wear Pink, That Other Kind of Status, Pretending to be Wise, The "Outside The Box" Box
WARNING: Beware of things that are fun to argue -- Eliezer Yudkowsky
Science has inexplicably failed to come up with a precise definition of "hipster", but from my limited understanding a hipster is a person who deliberately uses unpopular, obsolete, or obscure styles and preferences in an attempt to be "cooler" than the mainstream. But why would being deliberately uncool be cooler than being cool?
As previously discussed, in certain situations refusing to signal can be a sign of high status. Thorstein Veblen invented the term "conspicuous consumption" to refer to the showy spending habits of the nouveau riche, who, unlike the established money of his day, took great pains to signal their wealth by buying fast cars, expensive clothes, and shiny jewelry. Why was such flashiness common among new money but not old? Because the old money was so secure in their position that it never even occurred to them that they might be confused with poor people, whereas new money, with their lack of aristocratic breeding, worried they might be mistaken for poor people if they didn't make it blatantly obvious that they had expensive things.
The old money might have started off not buying flashy things for pragmatic reasons - they didn't need to, so why waste the money? But if F. Scott Fitzgerald is to be believed, the old money actively cultivated an air of superiority to the nouveau riche and their conspicuous consumption; not buying flashy objects becomes a matter of principle. This makes sense: the nouveau riche need to differentiate themselves from the poor, but the old money need to differentiate themselves from the nouveau riche.
This process is called countersignaling, and one can find its telltale patterns in many walks of life. Those who study human romantic attraction warn men not to "come on too strong", and this has similarities to the nouveau riche example. A total loser might come up to a woman without a hint of romance, promise her nothing, and demand sex. A more sophisticated man might buy roses for a woman, write her love poetry, hover on her every wish, et cetera; this signifies that he is not a total loser. But the most desirable men may deliberately avoid doing nice things for women in an attempt to signal they are so high status that they don't need to. The average man tries to differentiate himself from the total loser by being nice; the extremely attractive man tries to differentiate himself from the average man by not being especially nice.
In all three examples, people at the top of the pyramid end up displaying characteristics similar to those at the bottom. Hipsters deliberately wear the same clothes uncool people wear. Families with old money don't wear much more jewelry than the middle class. And very attractive men approach women with the same lack of subtlety a total loser would use.[1]
If politics, philosophy, and religion are really about signaling, we should expect to find countersignaling there as well.
Pretending To Be Wise
Let's go back to Less Wrong's long-running discussion on death. Ask any five year old child, and ey can tell you that death is bad. Death is bad because it kills you. There is nothing subtle about it, and there does not need to be. Death universally seems bad to pretty much everyone on first analysis, and what it seems, it is.
But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly. Precisely because these benefits are so much smaller than th...
Dec 12, 2021
100 Tips for a Better Life by Ideopunk
16:46
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: 100 Tips for a Better Life, published by Ideopunk on LessWrong.
(Cross-posted from my blog)
The other day I made an advice thread based on Jacobian’s from last year! If you know a source for one of these, shout and I’ll edit it in.
Possessions
1. If you want to find out about people’s opinions on a product, google
Dec 12, 2021
Taboo "Outside View" by Daniel Kokotajlo
11:57
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Taboo "Outside View", published by Daniel Kokotajlo on LessWrong.
No one has ever seen an AGI takeoff, so any attempt to understand it must use these outside view considerations.
[Redacted for privacy]
What? That’s exactly backwards. If we had lots of experience with past AGI takeoffs, using the outside view to predict the next one would be a lot more effective.
My reaction
Two years ago I wrote a deep-dive summary of Superforecasting and the associated scientific literature. I learned about the “Outside view” / “Inside view” distinction, and the evidence supporting it. At the time I was excited about the concept and wrote: “...I think we should do our best to imitate these best-practices, and that means using the outside view far more than we would naturally be inclined.”
Now that I have more experience, I think the concept is doing more harm than good in our community. The term is easily abused and its meaning has expanded too much. I recommend we permanently taboo “Outside view,” i.e. stop using the word and use more precise, less confused concepts instead. This post explains why.
What does “Outside view” mean now?
Over the past two years I’ve noticed people (including myself!) do lots of different things in the name of the Outside View. I’ve compiled the following lists based on fuzzy memory of hundreds of conversations with dozens of people:
Big List O’ Things People Describe As Outside View:
Reference class forecasting, the practice of computing a probability of an event by looking at the frequency with which similar events occurred in similar situations. Also called comparison class forecasting. [EDIT: Eliezer rightly points out that sometimes reasoning by analogy is undeservedly called reference class forecasting; reference classes are supposed to be held to a much higher standard, in which your sample size is larger and the analogy is especially tight.] (A minimal code sketch of reference class forecasting appears just after this list.)
Trend extrapolation, e.g. “AGI implies insane GWP growth; let’s forecast AGI timelines by extrapolating GWP trends.”
Foxy aggregation, the practice of using multiple methods to compute an answer and then making your final forecast be some intuition-weighted average of those methods.
Bias correction, in others or in oneself, e.g. “There’s a selection effect in our community for people who think AI is a big deal, and one reason to think AI is a big deal is if you have short timelines, so I’m going to bump my timelines estimate longer to correct for this.”
Deference to wisdom of the many, e.g. expert surveys, or appeals to the efficient market hypothesis, or to conventional wisdom in some fairly large group of people such as the EA community or Western academia.
Anti-weirdness heuristic, e.g. “How sure are we about all this AI stuff? It’s pretty wild, it sounds like science fiction or doomsday cult material.”
Priors, e.g. “This sort of thing seems like a really rare, surprising sort of event; I guess I’m saying the prior is low / the outside view says it’s unlikely.” Note that I’ve heard this said even in cases where the prior is not generated by a reference class, but rather from raw intuition.
Ajeya’s timelines model (transcript of interview, link to model)
...and probably many more I don't remember.
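As referenced above, here is a minimal sketch of the first item in the list, reference class forecasting; the event, the counts, and the numbers are invented placeholders, not anything from the post.

    # Reference class forecasting, reduced to its core: the forecast is the
    # base rate of similar events in similar situations. Counts are made up.
    def reference_class_forecast(occurred, total):
        return occurred / total

    # e.g. "of 40 comparable infrastructure projects, 28 ran over budget"
    p_over_budget = reference_class_forecast(occurred=28, total=40)
    print(f"Outside-view estimate: {p_over_budget:.0%}")  # Outside-view estimate: 70%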
Big List O’ Things People Describe As Inside View:
Having a gears-level model, e.g. “Language data contains enough structure to learn human-level general intelligence with the right architecture and training setup; GPT-3 + recent theory papers indicate that this should be possible with X more data and compute.”
Having any model at all, e.g. “I model AI progress as a function of compute and clock time, with the probability distribution over how much compute is needed shifting 2 OOMs lower each decade.”
Deference to wisdom of the few, e.g. “the people I trust most on this matter seem to think.”
Intuition-based-on-deta...
Dec 12, 2021
The Neglected Virtue of Scholarship by lukeprog
05:37
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Neglected Virtue of Scholarship, published by lukeprog on LessWrong.
Eliezer Yudkowsky identifies scholarship as one of the Twelve Virtues of Rationality:
Study many sciences and absorb their power as your own. Each field that you consume makes you larger... It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory. But these cannot be the only fields you study...
I think he's right, and I think scholarship doesn't get enough praise - even on Less Wrong, where it is regularly encouraged.
First, consider the evangelical atheist community to which I belong. There is a tendency for lay atheists to write "refutations" of theism without first doing a modicum of research on the current state of the arguments. This can get atheists into trouble when they go toe-to-toe with a theist who did do his homework. I'll share two examples:
In a debate with theist Bill Craig, agnostic Bart Ehrman paraphrased David Hume's argument that we can't demonstrate the occurrence of a miracle in the past. Craig responded with a PowerPoint slide showing Bayes' Theorem, and explained that Ehrman was only considering prior probabilities, when of course he needed to consider the relevant conditional probabilities as well. Ehrman failed to respond to this, and looked as though he had never seen Bayes' Theorem before. Had Ehrman practiced the virtue of scholarship on this issue, he might have noticed that much of the scholarly work on Hume's argument in the past two decades has involved Bayes' Theorem. He might also have discovered that the correct response to Craig's use of Bayes' Theorem can be found in pages 298-341 of J.H. Sobel’s Logic and Theism.
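For readers who want the mechanics, here is a toy Bayes' theorem calculation making the general point at issue, that the posterior depends on the likelihoods as well as the prior; the numbers are invented for illustration and are not from the debate or from Sobel.

    # Toy illustration only: posterior probability of a hypothesis H given
    # evidence E, from the prior and the two conditional probabilities.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1 - prior))

    # A tiny prior, but evidence far more likely if H is true than if false:
    print(posterior(prior=1e-6, p_e_given_h=0.5, p_e_given_not_h=1e-4))
    # ~0.005: still small, but ~5,000x the prior, so an argument that looks
    # only at the prior leaves out most of the calculation.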
In another debate with Bill Craig, atheist Christopher Hitchens gave this objection: "Who designed the Designer? Don’t you run the risk... of asking 'Well, where does that come from? And where does that come from?' and running into an infinite regress?" But this is an elementary misunderstanding in philosophy of science. Why? Because every successful scientific explanation faces the exact same problem. It’s called the “why regress” because no matter what explanation is given of something, you can always still ask “Why?” Craig pointed this out and handily won that part of the debate. Had Hitchens had a passing understanding of science or explanation, he could have avoided looking foolish, and also spent more time on substantive objections to theism. (One can give a "Who made God?" objection to theism that has some meat, but that's not the one Hitchens gave. Hitchens' objection concerned an infinite regress of explanations, which is just as much a feature of science as it is of theism.)
The lesson I take from these and a hundred other examples is to employ the rationality virtue of scholarship. Stand on the shoulders of giants. We don't each need to cut our own path into a subject right from the point of near-total ignorance. That's silly. Just catch the bus on the road of knowledge paved by hundreds of diligent workers before you, and get off somewhere near where the road finally fades into fresh jungle. Study enough to have a view of the current state of the debate so you don't waste your time on paths that have already dead-ended, or on arguments that have already been refuted. Catch up before you speak up.
This is why, in more than 1000 posts on my own blog, I've said almost nothing that is original. Most of my posts instead summarize what other experts have said, in an effort to bring myself and my readers up to the level of the current debate on a subject before we try to make new contributions to it.
The Less Wrong community is a particularly smart and well-read bunch, but of course it doesn't always embrace the virtu...
Dec 12, 2021
Covid 12/24: We’re Fed, It’s Over by Zvi
01:02:24
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Covid 12/24: We’re Fed, It’s Over, published by Zvi on LessWrong.
UPDATE 7/21/2021: As you doubtless know at this point, it was not over. Given the visibility of this post, I'm going to note here at the top that the prediction of a potential large wave of infections between March and May did not happen, no matter what ultimately happens with Delta (and the prediction was not made with Delta in mind anyway, only Alpha). Some more reflections on that at the bottom of this post here.
A year ago, there were reports coming out of China about a new coronavirus. Various people were saying things about exponential growth and the inevitability of a new pandemic, and urging action be taken. The media told us it was nothing to worry about, right up until hospitals got overwhelmed and enough people started dying.
This past week, it likely happened again.
A new strain of Covid-19 has emerged from southern England, along with a similar one in South Africa. The new strain has rapidly taken over the region, and all signs point to it being about 65% more infectious than the old one, albeit with large uncertainty and error bars around that.
I give it a 70% chance that these reports are largely correct.
There is no plausible way that a Western country can sustain restrictions strong enough to overcome that, short of widespread immunity. Overcoming it would require restrictions at the level that previously cut new infections in half every week. And all that would do is stabilize the rate of new infections.
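A rough back-of-the-envelope version of that "halve infections every week" figure, with an assumed ~5-day generation time (my own arithmetic; the post states only the conclusion):

    # If the new strain is ~65% more infectious, holding it at R = 1 requires
    # restrictions that would have pushed the old strain to R ~= 1/1.65.
    infectiousness_multiplier = 1.65   # per the ~65% estimate above
    generation_time_days = 5.0         # assumption

    r_old_required = 1 / infectiousness_multiplier             # ~0.61
    weekly_factor = r_old_required ** (7 / generation_time_days)
    print(round(r_old_required, 2), round(weekly_factor, 2))    # 0.61 0.5
    # i.e. restrictions that would previously have cut new infections roughly
    # in half every week -- and even that only stabilizes the new strain.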
Like last time, the media is mostly assuring us that there is nothing to worry about, and not extrapolating exponential growth into the future.
Like last time, there are attempts to slow down travel, that are both not tight enough to plausibly work even if they were implemented soon enough, and also clearly not implemented soon enough.
Like last time, no one is responding with a rush to get us prepared for what is about to happen. There are no additional pushes to improve our ability to test, or our supplies of equipment, or to speed our vaccine efforts or distribute the vaccine more efficiently (in any sense), or to lift restrictions on useful private action.
Like last time, the actions urged upon us to contain spread clearly have little or no chance of actually doing that.
The first time, I made the mistake of not thinking hard enough early enough, or taking enough action. I also didn’t think through the implications, and didn’t do things like buying put options, even though it was obvious. This time, I want to not make those same mistakes. Let’s figure out what actually happens, then act upon it.
We can’t be sure yet. I only give the new strain a 70% chance of being sufficiently more infectious than the old one that the scenario fully plays out here in America before we have a chance to vaccinate enough people. I am very willing to revise that probability as new data comes in, or based on changes in methods of projection, including projections of what people will decide to do in various scenarios.
What I do know is we can’t hide our heads in the sand again. Never again. When we have strong Bayesian evidence that something is happening, we need to work through that and act accordingly. Not say “there’s no proof” or “we don’t know anything yet.” This isn’t about proof via experiment, or ruling out all possible alternative explanations. This is about likelihood ratios and probabilities. And on that front, as far as I can tell, it doesn’t look good. Change my mind.
The short term outlook in America has clearly stabilized, with R0 close to 1, as the control system once again sets in. Cases and deaths (and test counts) aren’t moving much. We have a double whammy of holidays about to hit us in Christmas and New Year’s, but after that I expect the tide to turn until such time as we get whamm...
Dec 12, 2021
The Blue-Minimizing Robot by Scott Alexander
05:34
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Blue-Minimizing Robot, published by Scott Alexander on LessWrong.
Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way.
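For concreteness, here is roughly what that program looks like written out; the helper functions and the threshold are hypothetical stand-ins, since the post describes the behavior rather than giving code.

    # A sketch of the robot's whole control loop as described above.
    # move_forward, sweep_camera, average_rgb, blue_region and fire_laser_at
    # are hypothetical helpers; only the control flow matters.
    BLUE_THRESHOLD = 0.6  # assumed value

    def run_robot():
        while True:
            move_forward(distance=1.0)
            image = sweep_camera()
            r, g, b = average_rgb(image)           # average over the camera image
            if b > BLUE_THRESHOLD:
                fire_laser_at(blue_region(image))  # zap whatever registered as blue
            # Note what is absent: no model of goals, holograms, or lenses.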
Watching the robot's behavior, we would conclude that this is a robot that destroys blue objects. Maybe it is a surgical robot that destroys cancer cells marked by a blue dye; maybe it was built by the Department of Homeland Security to fight a group of terrorists who wear blue uniforms. Whatever. The point is that we would analyze this robot in terms of its goals, and in those terms we would be tempted to call this robot a blue-minimizer: a machine that exists solely to reduce the amount of blue objects in the world.
Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.
But now stick the robot in a room with a hologram projector. The hologram projector (which is itself gray) projects a hologram of a blue object five meters in front of it. The robot's camera detects the projector, but its RGB value is harmless and the robot does not fire. Then the robot's camera detects the blue hologram and zaps it. We arrange for the robot to enter this room several times, and each time it ignores the projector and zaps the hologram, without effect.
Here the robot is failing at its goal of being a blue-minimizer. The right way to reduce the amount of blue in the universe is to destroy the projector; instead its beams flit harmlessly through the hologram.
Again, give the robot human level intelligence. Teach it exactly what a hologram projector is and how it works. Now what happens? Exactly the same thing - the robot executes its code, which says to scan the room until its camera registers blue, then shoot its laser.
In fact, there are many ways to subvert this robot. What if we put a lens over its camera which inverts the image, so that white appears as black, red as green, blue as yellow, and so on? The robot will not shoot us with its laser to prevent such a violation (unless we happen to be wearing blue clothes when we approach) - its entire program was detailed in the first paragraph, and there's nothing about resisting lens alterations. Nor will the robot correct itself and shoot only at objects that appear yellow - its entire program was detailed in the first paragraph, and there's nothing about correcting its program for new lenses. The robot will continue to zap objects that register a blue RGB value; but now it'll be shooting at anything that is yellow.
The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization. And maybe if it had previously decided it was on a holy quest to rid the world of blue, it will be deeply horrified and ashamed of its actions. It will wonder why it has suddenly started to deviate from this quest, and why it just can't work up the will to destroy blue objects anymore.
The robot goes to Quirinus Quirrell, who explains that robots don't really care about minimizing the color blue. They only care about status and power, and pretend to care about minimizing blue in order to impress potential allies.
The robot goes to Robin ...
Dec 12, 2021
To listen well, get curious by benkuhn
07:13
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: To listen well, get curious, published by benkuhn on LessWrong.
A common piece of interacting-with-people advice goes: “often when people complain, they don’t want help, they just want you to listen!”
For instance, Nonviolent Communication (ch. 7):
It is often frustrating for someone needing empathy to have us assume that they want reassurance or “fix-it” advice.
Active Listening (p. 2):
Similarly, advice and information are almost always seen as efforts to change a person and thus serve as barriers to his self-expression and the development of a creative relationship.
You can find similar advice in most books on relationships, people management, etc.
This always used to seem silly to me. If I complain at my partner and she “just listens,” I’ve accomplished nothing except maybe made her empathetically sad. When I complain at people, I want results, not to grouse into the void! (Empirically, I did notice that I usually got better results from listening than from giving advice. So I inferred that this advice was true for other people, but not me, because other people didn’t actually want to fix their problems.)
Frequently the “just listen” advice comes with tactical tips, like “reflect what people said back to you to prove that you’re listening.” For instance, consider these example dialogues from Nonviolent Communication (Chapter 7, Exercises 5.5 and 5.6, and solutions):
Person A: How could you say a thing like that to me?
Person B: Are you feeling hurt because you would have liked me to agree to do what you requested?
Or:
Person A: I’m furious with my husband. He’s never around when I need him.
Person B: So you’re feeling furious because you would like him to be around more than he is?
I say this with great respect for Nonviolent Communication, but these sound like a 1970s-era chatbot. If I were Person A in either of these dialogues my next line would be “yes, you dingbat—can you turn the nonviolence down a couple notches?” I’d feel alienated knowing that someone is going through their NVC checklist on me.
Recently, I realized why people keep giving this weird-seeming advice. Good listeners do often reflect words back—but not because they read it in a book somewhere. Rather, it’s cargo cult advice: it teaches you to imitate the surface appearance of good listening, but misses what’s actually important, the thing that’s generating that surface appearance.
The generator is curiosity.
When I’ve listened the most effectively to people, it’s because I was intensely curious—I was trying to build a detailed, precise understanding of what was going on in their head. When a friend says, “I’m furious with my husband. He’s never around when I need him,” that one sentence has a huge amount underneath. How often does she need him? What does she need him for? Why isn’t he around? Have they talked about it? If so, what did he say? If not, why not?
It turns out that reality has a surprising amount of detail, and those details can matter a lot to figuring out what the root problem or best solution is. So if I want to help, I can’t treat those details as a black box: I need to open it up and see the gears inside. Otherwise, anything I suggest will be wrong—or even if it’s right, I won’t have enough “shared language” with my friend for it to land correctly.
Some stories from recent memory:
When we started doing a pair programming rotation at Wave, I suggested that, to make scheduling easier, we designate a default time when pairing sessions would happen. A coworker objected that this seemed authoritarian. I was extremely puzzled, but they’d previously mentioned being an anarchist, so I was tempted to just chalk it up to a political disagreement and move on. But instead I tried to get curious and ex...
Dec 12, 2021
How To Write Quickly While Maintaining Epistemic Rigor by johnswentworth
06:09
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How To Write Quickly While Maintaining Epistemic Rigor, published by johnswentworth on LessWrong.
There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece.
This post is about how to avoid that, without sacrificing good epistemics.
There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.
I claim that this promotes better epistemics overall than always researching everything in depth.
Why?
It’s About The Process, Not The Conclusion
Suppose I have a box, and I want to guess whether there’s a cat in it. I do some tests - maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”.
Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That’s not really what I’m aiming for; luck isn’t reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don’t just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly.
Correctly Conveying Uncertainty
Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty.
An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and if anything remarkable had happened then which might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn’t even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints.
Now, let’s zoom out. How much confidence should I put in this theory? Obviously not very much - we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more.
On the other hand... what if I had started with the hypothesis that computational constraints limited premodern empires? What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It’s a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data.
Point is: the amount of uncertainty I should assign depends on the details of my ...
Dec 12, 2021
Working hurts less than procrastinating, we fear the twinge of starting by Eliezer Yudkowsky
04:51
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Working hurts less than procrastinating, we fear the twinge of starting, published by Eliezer Yudkowsky on LessWrong.
When you procrastinate, you're probably not procrastinating because of the pain of working.
How do I know this? Because on a moment-to-moment basis, being in the middle of doing the work is usually less painful than being in the middle of procrastinating.
(Bolded because it's true, important, and nearly impossible to get your brain to remember - even though a few moments of reflection should convince you that it's true.)
So what is our brain flinching away from, if not the pain of doing the work?
I think it's flinching away from the pain of the decision to do the work - the momentary, immediate pain of (1) disengaging yourself from the (probably very small) flow of reinforcement that you're getting from reading a random unimportant Internet article, and (2) paying the energy cost for a prefrontal override to exert control of your own behavior and begin working.
Thanks to hyperbolic discounting (i.e., weighting values in inverse proportion to their temporal distance), the instant pain of disengaging from an Internet article and paying a prefrontal override cost can outweigh the slightly more distant (minutes in the future, rather than seconds) pain of continuing to procrastinate, which is, once again, usually more painful than being in the middle of doing the work.
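A minimal numerical sketch of that tradeoff, using the common hyperbolic weight of 1 / (1 + k * delay); the values, delays, and the k below are all invented for illustration, not taken from the post.

    # Hyperbolic discounting: value is weighted by 1 / (1 + k * delay).
    def discounted(value, delay_seconds, k=0.05):
        return value / (1 + k * delay_seconds)

    pain_of_deciding_now = discounted(-1.0, delay_seconds=0)        # -1.0
    pain_of_procrastinating = discounted(-5.0, delay_seconds=300)   # ~ -0.31
    print(pain_of_deciding_now, round(pain_of_procrastinating, 2))
    # The small but immediate twinge (-1.0) looms larger than the bigger pain
    # of continued procrastination a few minutes away (~ -0.31).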
I think that hyperbolic discounting is far more ubiquitous as a failure mode than I once realized, because it's not just for commensurate-seeming tradeoffs like smoking a cigarette in a minute versus dying of lung cancer later.
When it comes to procrastinating, the obvious, salient, commensurate-seeming tradeoff, is between the (assumed) pleasure of reading a random Internet article now, versus the (assumed) pain of doing the work now. But this, as I said above, is not where I think the real tradeoff is; events that are five minutes away are too distant to dominate the thought process of a hyperbolic discounter like a human. Instead our thought processes are dominated by the prospective immediate pain of a thought, a cost that isn't even salient as something to be traded off. "Working" is an obvious, salient event, and "reading random articles" seems like an event. But "paying a small twinge of pain to make the decision to stop procrastinating now, exerting a bit of frontal override, and not getting to read the next paragraph of this random article" is so map-level that we don't even focus on it as a manipulable territory, a cost to be traded off; it is a transparent thought.
The real damage done by hyperbolic discounting is for thoughts that are only very slightly painful, and yet, these slight pains being immediate, they manage to dominate everything else in our calculation. And being transparent, we aren't even aware that's what's happening. "Beware of immediately trivially painful transparent thoughts", one might say.
Similarly, you may read a mediocre book for an hour, instead of a good book, because if you first spent a few minutes to search your library to obtain a better book, that would be an immediate cost - not that searching your library is all that unpleasant, but you'd have to pay an immediate activation cost to do that instead of taking the path of least resistance and grabbing the first thing in front of you. It's a hyperbolically discounted tradeoff that you make without realizing it, because the cost you're refusing to pay isn't commensurate enough with the payoff you're forgoing to be salient as an explicit tradeoff.
A related note that I might as well dump into this post: I'm starting to think that procrastination by reading random articles does not cause you to rest, that is, you do not regain mental energy from it. Success and happiness cause you to regain willpower; wh...
Dec 12, 2021
Feature Selection by Zack_M_Davis
43:53
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Feature Selection, published by Zack_M_Davis on LessWrong.
Dec 12, 2021
Ugh fields by Roko
04:37
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Ugh fields, published by Roko on LessWrong.
Tl;Dr version: Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have; we call this an "Ugh Field"[1]. The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.
A problem with the human mind — your human mind — is that it's a horrific kludge that will fail when you most need it not to. The Ugh Field failure mode is one of those really annoying failures. The idea is simple: if a person receives constant negative conditioning via unhappy thoughts whenever their mind goes into a certain zone of thought, they will begin to develop a psychological flinch mechanism around the thought. The "Unhappy Thing" — the source of negative thoughts — is typically some part of your model of the world that relates to bad things being likely to happen to you.
A key part of the Ugh Field phenomenon is that, to start with, there is no flinch, only negative real consequences resulting from real physical actions in the problem area. Then, gradually, you begin to feel the emotional hit when you are planning to take physical actions in the problem area. Then eventually, the emotional hit comes when you even begin to think about the problem. The reason for this may be that your brain operates a temporal difference learning (TDL) algorithm. Your brain propagates the psychological pain "back to the earliest reliable stimulus for the punishment". If you fail or are punished sufficiently many times in some problem area, and acting in that area is always preceded by thinking about it, your brain will propagate the psychological pain right back to the moment you first begin to entertain a thought about the problem, and hence cut your conscious optimizing ability right out of the loop. Related to this is engaging in a displacement activity: this is some activity that usually involves comfort, done instead of confronting the problem. Perhaps (though this is speculative) the comforting displacement activity is there to counterbalance the psychological pain that you experienced just because you thought about the problem.
For example, suppose that you started off in life with a wandering mind and were punished a few times for failing to respond to official letters. Your TDL algorithm began to propagate the pain back to the moment you looked at an official letter or bill. As a result, you would be less effective than average at responding, so you got punished a few more times. Henceforth, when you received a bill, you got the pain before you even opened it, and it lay unpaid on the mantelpiece until a Big Bad Red late payment notice with a $25 fine arrived. More negative conditioning. Now even thinking about a bill, form or letter invokes the flinch response, and your lizard brain has fully cut you out. You find yourself spending time on internet time-wasters, comfort food, TV, computer games, etc. Your life may not obviously be a disaster, but this is only because you can't see the alternative paths that it could have taken if you had been able to take advantage of the opportunities that came as letters and forms with deadlines.
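A minimal temporal-difference sketch of the bill example above, showing the negative value propagating back to the earliest step; the states, the reward, the learning rate, and the number of repetitions are all invented for illustration.

    # TD(0)-style updates over a chain of states ending in punishment.
    states = ["think about bill", "open bill", "respond late", "punished"]
    values = {s: 0.0 for s in states}
    alpha = 0.5  # learning rate

    for _ in range(50):  # repeated bad experiences
        for i, s in enumerate(states[:-1]):
            nxt = states[i + 1]
            reward = -1.0 if nxt == "punished" else 0.0
            values[s] += alpha * (reward + values[nxt] - values[s])

    print(values)
    # After enough repetitions even "think about bill" carries a strongly
    # negative value: the flinch fires at the thought, before any action.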
The subtlety with the Ugh Field is that the flinch occurs before you start to consciously think about how to deal with the Unhappy Thing, meaning that you never deal with it, and you don't even have the option of dealing with it in the normal run of things. I find it frightening that my lizard brain could implicitly be making life decisions for me, without even asking my permission!
Possible antidotes to Ugh Field problem:
Actively look out for the flinch, preferably when you are in a motivationally "high" state. Better still, do this when you are both motivationally high, not under time pressure, an...
Dec 12, 2021
An Unexpected Victory: Container Stacking at the Port of Long Beach by Zvi
14:15
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: An Unexpected Victory: Container Stacking at the Port of Long Beach, published by Zvi on LessWrong.
A miracle occurred this week. Everyone I have talked to about it, myself included, is shocked that it happened. It’s important to
Understand what happened.
Make sure everyone knows it happened.
Understand how and why it happened.
Understand how we might cause it to happen again.
Update our models and actions.
Ideally make this a turning point to save civilization.
That last one is a bit of a stretch goal, but I am being fully serious. If you’re not terrified that the United States is a dead player, you haven’t been paying attention – the whole reason this is a miracle, and that it shocked so many people, is that we didn’t think the system was capable of noticing a stupid, massively destructive rule with no non-trivial benefits and no defenders and scrapping it, certainly not within a day. If your model did expect it, I’m very curious to know how that is possible, and how you explain the years 2020 and 2021.
Here’s my understanding of what happened. First, the setup.
The Ports of Los Angeles and Long Beach together are responsible for a huge percentage of shipping into the Western United States.
There was a rule in the Port saying you could only stack shipping containers two containers high.
This is despite the whole point of shipping containers being to stack them on top of each other so you can have a container ship.
This rule was created, and I am not making this up, because it was decided that higher stacks were not sufficiently aesthetically pleasing.
If you violated this rule, you lost your right to operate at the port.
In normal times, this was annoying but not a huge deal.
Thanks to Covid-19, there was increased demand to ship containers, creating more empty containers, and less throughput to remove those containers.
Normally one would settle this by changing prices, but for various reasons we won’t get into, price mechanisms aren’t working properly to fix supply shortages.
Trucking companies started accumulating empty containers.
The companies ran out of room to store the containers, because in many places they could only stack them in stacks of two, and there was no practical way to move the containers off-site.
Trucks were forced to sit there with empty containers rather than hauling freight.
This made all the problems worse, in a downward spiral, resulting in a standstill throughout the port.
This was big enough to threaten the entire supply chain, and with it the economy, at least of the Western United States and potentially of the whole world via cascading problems. And similar problems are likely happening elsewhere.
Everyone in the port, or at least a lot of them, knew this was happening.
None of those people managed to do anything about the rule, or even get word out about the rule. No reporters wrote up news reports. No one was calling for a fix. The supply chain problems kept getting worse and mostly everyone agreed not to talk about it much and hope it would go away.
A bureaucrat insisting that stacked containers are an eyesore, causing freight to pile up because trucks are stuck sitting on empty containers, thus causing a cascading failure that destroys supply lines and brings down the economy. That certainly sounds like something that was in an early draft of Atlas Shrugged but got crossed out as too preposterous for anyone to take seriously.
Then our hero enters, and decides to coordinate and plan a persuasion campaign to get the rule changed. Here’s how I think this went down.
He arranges in advance for various sources to give him a signal boost when the time comes, in various ways.
He designs the message for a format that will have maximum reach and be maximally persuasive.
This takes the form of an easy to tell physical story, that he pretends t...
Dec 12, 2021
Discussion with Eliezer Yudkowsky on AGI interventions by Rob Bensinger, Eliezer Yudkowsky
55:03
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Discussion with Eliezer Yudkowsky on AGI interventions, published by Rob Bensinger, Eliezer Yudkowsky on LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
The following is a partially redacted and lightly edited transcript of a chat conversation about AGI between Eliezer Yudkowsky and a set of invitees in early September 2021. By default, all other participants are anonymized as "Anonymous".
I think this Nate Soares quote (excerpted from Nate's ) is a useful context-setting preface regarding timelines, which weren't discussed as much in the transcript:
[...] My odds [of AGI by the year 2070] are around 85%[...]
I can list a handful of things that drive my probability of AGI-in-the-next-49-years above 80%:
1. 50 years ago was 1970. The gap between AI systems then and AI systems now seems pretty plausibly greater than the remaining gap, even before accounting for the recent dramatic increase in the rate of progress, and potential future increases in rate-of-progress as it starts to feel within-grasp.
2. I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn't do -- basic image recognition, go, starcraft, winograd schemas, programmer assistance. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles. (Computer Programming That Is Actually Good? Theorem proving? Sure, but on my model, "good" versions of those are a hair's breadth away from full AGI already. And the fact that I need to clarify that "bad" versions don't count, speaks to my point that the only barriers people can name right now are intangibles.) That's a very uncomfortable place to be!
3. When I look at the history of invention, and the various anecdotes about the Wright brothers and Enrico Fermi, I get an impression that, when a technology is pretty close, the world looks a lot like how our world looks.
Of course, the trick is that when a technology is a little far, the world might also look pretty similar!
Though when a technology is very far, the world does look different -- it looks like experts pointing to specific technical hurdles. We exited that regime a few years ago.
4. Summarizing the above two points, I suspect that I'm in more-or-less the "penultimate epistemic state" on AGI timelines: I don't know of a project that seems like they're right on the brink; that would put me in the "final epistemic state" of thinking AGI is imminent. But I'm in the second-to-last epistemic state, where I wouldn't feel all that shocked to learn that some group has reached the brink. Maybe I won't get that call for 10 years! Or 20! But it could also be 2, and I wouldn't get to be indignant with reality. I wouldn't get to say "but all the following things should have happened first, before I made that observation". I have made those observations.
5. It seems to me that the Cotra-style compute-based model provides pretty conservative estimates. For one thing, I don't expect to need human-level compute to get human-level intelligence, and for another I think there's a decent chance that insight and innovation have a big role to play, especially on 50 year timescales.
6. There has been a lot of AI progress recently. When I tried to adjust my beliefs so that I was positively surprised by AI progress just about as often as I was negatively surprised by AI progress, I ended up expecting a bunch of rapid progress. [...]
Further preface by Eliezer:
In some sections here, I sound gloomy about the probability that coordination between AGI groups succeeds in saving the world. Andrew Critch reminds me to point out that gloominess like this can be a self-fulfilling prophecy - if people think successful coordination is impossible, they won’t try to coordinate. I therefore remark in retrospective advance that it seem...
Dec 12, 2021
Leaky Delegation: You are not a Commodity by Darmani
23:27
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Leaky Delegation: You are not a Commodity, published by Darmani on LessWrong.
Epistemic status: The accumulation of several insights over the years. Reasonably confident that everything mentioned here is an informative factor in decision-making.
Carl is furiously slicing and skinning peaches. His hands move like lightning as slice after slice fills his tray. His freezer has been freshly cleared. Within a day, he will have a new bag of frozen fruit, and can enjoy smoothies for another month.
Stan stands in the kitchen of his college dorm. His hands are carefully placing ingredients on pizza dough: homemade tomato sauce, spiced pork, and mozzarella cheese from a nearby farmer's market. "I don't know why people will pay a restaurant for this," he muses. "So much cheaper to do it yourself."
Michelle is on her way to her job as a software engineer. She tosses a pile of clothes into a bag, and presses a few buttons on her phone. Later that day, someone will come by to pick them up, wash and fold them at a nearby laundromat, and return them the next morning. Less time doing laundry means more time writing code. Her roommate calls her lazy.
An alert flashes on Bruce's screen: "us-east-prod-1 not responding to ping." Almost like a reflex, he pulls up diagnostics on his terminal. The software itself is still running fine, but it looks like his datacenter had a network change. A few more minutes, and everything is functioning again. Hopefully only a few customers noticed the downtime. His mentor keeps asking why he doesn't just run his website on AWS instead of owning his own servers, but Bruce insists it's worth it. His 4-person company has been profitable for 3 years, and keeping server costs low has meant the difference between staying independent and being forced to take outside investment.
The four characters above each take a minority position on outsourcing a task. In the past, I saw the decision as simple: if your time is valuable, then be like Michelle and delegate and outsource as much as you can. Not to do so would be an irrational loss. I silently judged the people I met who inspired Carl and Stan.
Years later, I've found myself cooking daily during a pandemic and appreciating the savings, and just finished arguing online in favor of running one's own servers.
My goal in this post is to share the perspective shift that led me to wholly or partially reverse my position on paying a person or company for a good or service (collectively, "delegating" or "outsourcing") in a number of domains, even as I continue to pay for many things most people do themselves. I've noticed hidden factors which mean that, sometimes, the quality will be better if you do it yourself, even if the alternative is offered by an expert or company with specialized tools. And sometimes, it can be cheaper, even if you value your time very highly and the other person is much faster.
The Internet is full of articles on the generic "buy vs. build" and "DIY vs. build" decisions. Though some are written from the corporate boardroom and others from the home kitchen or workshop, the underlying analysis is eerily similar: that it's a choice between spending time (or "in-house resources") and money for a similar value. More sophisticated articles will also consider transaction costs, such as walking to a restaurant or finding your kid a tutor, and costs from principal-agent problems, such as vetting the tutor. In fact, as I've come to realize, the do-or-delegate decision is often not about two alternative ways of getting the same thing, but rather about two options sufficiently different that they're best considered not as replacements for each other, but as entirely separate objects with overlapping benefits.
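As a concrete illustration of that standard one-dimensional analysis, here is a minimal sketch of the naive do-or-delegate calculus; the function names, numbers, and cost terms are my own illustrative assumptions, not from the post.

```python
# A minimal sketch of the naive "time vs. money" comparison the generic articles
# describe, including the transaction and vetting costs mentioned above.
# All names and numbers here are hypothetical, for illustration only.

def diy_cost(hours_needed: float, hourly_value_of_time: float) -> float:
    """Cost of doing the task yourself, valued at your own time."""
    return hours_needed * hourly_value_of_time

def delegate_cost(price: float, transaction_hours: float,
                  vetting_hours: float, hourly_value_of_time: float) -> float:
    """Cost of outsourcing: the fee plus the time spent arranging and vetting."""
    return price + (transaction_hours + vetting_hours) * hourly_value_of_time

if __name__ == "__main__":
    hourly = 40.0  # assumed value of an hour of my time, in dollars
    print("DIY:     ", diy_cost(hours_needed=3.0, hourly_value_of_time=hourly))        # 120.0
    print("Delegate:", delegate_cost(price=30.0, transaction_hours=0.25,
                                     vetting_hours=0.0, hourly_value_of_time=hourly))  # 40.0
```

The point of the rest of the post is that this comparison is incomplete: the two options often differ enough in quality and kind that they are not really substitutes at all.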
These differences can be obvious for specific examples, as every home baker can give you an earful abou...
Dec 12, 2021
Self-Integrity and the Drowning Child by Eliezer Yudkowsky
07:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Self-Integrity and the Drowning Child, published by Eliezer Yudkowsky on LessWrong.
(Excerpted from "mad investor chaos and the woman of asmodeus", about an unusually selfish dath ilani, "Keltham", who dies in a plane accident and ends up in Cheliax, a country governed by D&D!Hell. Keltham is here remembering an incident from his childhood.)
And the Watcher told the class a parable, about an adult, coming across a child who'd somehow bypassed the various safeguards around a wilderness area, and fallen into a muddy pond, and seemed to be showing signs of drowning (for they'd already been told, then, what drowning looked like). The water, in this parable, didn't look like it would be over their own adult heads. But - in the parable - they'd just bought some incredibly-expensive clothing, costing dozens of their own labor-hours, and less resilient than usual, that would be ruined by the muddy water.
And the Watcher asked the class if they thought it was right to save the child, at the cost of ruining their clothing.
Everyone in there moved their hand to the 'yes' position, of course. Except Keltham, who by this point had already decided quite clearly who he was, and who simply closed his hand into a fist, otherwise saying neither 'yes' nor 'no' to the question, defying it entirely.
The Watcher asked him to explain, and Keltham said that it seemed to him that it was okay for an adult to take an extra fifteen seconds to strip off all their super-expensive clothing and then jump in to save the child.
The Watcher invited the other children to argue with Keltham about that, which they did, though Keltham's first defense, that his utility function was what it was, had not been a friendly one, or inviting of further argument. But they did eventually convince Keltham that, especially if you weren't sure you could call in other help or get attention or successfully drag the child's body towards help, if that child actually did drown - meaning the child's true life was at stake - then it would make sense to jump in right away, not take the extra risk of waiting another quarter-minute to strip off your clothes, and bill the child's parents' insurance for the cost. Or at least, that was where Keltham shifted his position, in the face of that argumentative pressure.
Some kids, at that point, questioned the Watcher about this actually being a pretty good point, and why wouldn't anyone just bill the child's parents' insurance.
To which the Watcher asked them to consider hypothetically the case where insurance refused to pay out in cases like that, because it would be too easy for people to set up 'accidents' letting them bill insurances - not that this precaution had proven to be necessary in real life, of course. But the Watcher asked them to consider the Least Convenient Possible World where insurance companies, and even parents, did need to reason like that; because there'd proven to be too many master criminals setting up 'children at risk of true death from drowning' accidents that they could apparently avert and claim bounties on.
Well, said Keltham, in that case, he was going right back to taking another fifteen seconds to strip off his super-expensive clothes, if the child didn't look like it was literally right about to drown. And if society didn't like that, it was society's job to solve that thing with the master criminals. Though he'd maybe modify that if they were in a possible-true-death situation, because a true life is worth a huge number of labor-hours, and that part did feel like some bit of decision theory would say that everyone would be wealthier if everyone would sacrifice small amounts of wealth to save huge amounts of somebody else's wealth, if that happened unpredictably to people, and if society was also that incompetent at setting up proper reimbursements. T...
Dec 12, 2021
Attention control is critical for changing/increasing/altering motivation by kalla724
13:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Attention control is critical for changing/increasing/altering motivation, published by kalla724 on LessWrong.
I’ve just been reading Luke’s “Crash Course in the Neuroscience of Human Motivation.” It is a useful text, although there are a few technical errors and a few bits of outdated information (see [1], updated information about one particular quibble in [2] and [3]).
There is one significant missing piece, however, which is of critical importance for our subject matter here on LW: the effect of attention on plasticity, including the plasticity of motivation. Since I don’t see any other texts addressing it directly (certainly not from a neuroscientific perspective), let’s cover the main idea here.
Summary for impatient readers: focus of attention physically determines which synapses in your brain get stronger, and which areas of your cortex physically grow in size. The implications of this provide direct guidance for altering behaviors and motivational patterns, and the mechanism is already exploited extensively: many benefits of the Cognitive-Behavioral Therapy approach, for instance, rely on it.
I – Attention and plasticity
To illustrate this properly, we need to define two terms. I’m guessing these are very familiar to most readers here, but let’s cover them briefly just in case.
First thing to keep in mind is the plasticity of cortical maps. In essence, particular functional areas of our brain can expand or shrink based on how often (and how intensely) they are used. A small amount of this growth is physical, as new axons grow, expanding the white matter; most of it happens by repurposing any less-used circuitry in the vicinity of the active area. For example, our sense of sight is processed by our visual cortex, which turns signals from our eyes into lines, shapes, colors and movement. In blind people, however, this part of the brain becomes invaded by other senses, and begins to process sensations like touch and hearing, such that they become significantly more sensitive than in sighted people. Similarly, in deaf people, auditory cortex (part of the brain that processes sounds) becomes adapted to process visual information and gather language clues by sight.
Second concept we’ll need is somatosensory cortex (SSC for short). This is an area of the (vertebrate) brain where most of the incoming touch and positional (proprioceptive) sensations from the body converge. There is a map-like quality to this part of our brain, as every body part links to a particular bit of the SSC surface (which can be illustrated with silly-looking things, such as the sensory homunculus). More touch-sensitive areas of the body have larger corresponding areas within the SSC.
With these two in mind, let’s consider one actual experiment [4]. Scientists measured and mapped the area of an owl monkey’s SSC which became activated when one of his fingertips was touched. The monkey was then trained to hold that finger on a tactile stimulator – a moving wheel that stimulates touch receptors. The monkey had to pay attention to the stimulus, and was rewarded for letting go upon detecting certain changes in spinning frequency. After a few weeks of training, the area was measured again.
As you probably expected, the area had grown larger. The touch-processing neurons grew out, co-opting surrounding circuitry in order to achieve better and faster processing of the stimulus that produced the reward. Which is, so far, just another way of showing plasticity of cortical maps.
But then, there is something else. The SSC area expanded only when the monkey had to pay attention to the sensation of touch in order to receive the reward. If a monkey was trained to keep a hand on the wheel that moved just the same, but he did not have to pay attention to it, the cortical map remained the same size. Thi...
Dec 12, 2021
Great minds might not think alike by UnexpectedValues
17:08
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Great minds might not think alike, published by UnexpectedValues on LessWrong.
This is a linkpost.
[Previously known as "Alike minds think great"]
I.
It is famously the case that almost everyone thinks they’re above average. Derek Sivers writes:
Ninety-four percent of professors say they are better-than-average teachers.
Ninety percent of students think they are more intelligent than the average student.
Ninety-three percent of drivers say they are safer-than-average drivers.
Interesting. Intuitively this seems to suggest that people are prone to vastly overestimate their competence. But is that true? As Bill Kuszmaul points out, these people aren’t necessarily wrong!
There’s no fundamental reason why you can’t have 90% of people be better than average. For example, more than 99.9% of people have an above-average number of legs. And more than 90% of people commit fewer felonies than average. These examples are obvious, but they’re not so different than some of the examples [in Sivers’ post].
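To see the arithmetic behind the legs example, here is a minimal sketch with made-up counts (the exact numbers are my assumption, not from the post): with a distribution skewed by a few low values, the mean sits below the typical value and almost everyone ends up "above average."

```python
# Almost everyone has 2 legs and a few people have fewer, so the mean is just
# under 2, and nearly everyone is "above average". Hypothetical counts.
population = [2] * 9990 + [1] * 8 + [0] * 2
mean_legs = sum(population) / len(population)                    # 1.9988
above_average = sum(1 for legs in population if legs > mean_legs)
print(f"mean = {mean_legs:.4f}")
print(f"above average: {above_average / len(population):.1%}")   # 99.9%
```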
This has something to it! On the other hand, I don’t think this explains everything. Is the quality of a professor’s teaching really so skewed that 94% are above average? But more importantly, do you really think that way fewer people would answer “yes” if you just replaced the word “average” with “median” when asking the question?
That said, I don’t think these numbers necessarily point to a bias! That’s because the interpretation of “above average” is left entirely up to the person being asked. Maybe you think a good driver is one who drives safely (and so you drive safely and slowly) whereas I think a good driver is one who gets from point A to point B efficiently (and so I drive quickly but not safely). We are both, from our own perspectives, above average drivers!
Put otherwise, for any skill where “goodness at that skill” doesn’t have an objective, agreed-upon measure, we should expect more than 50% of people to think they’re better than the median, because people optimize for things they care about.
To give a personal example, I suppose I would call myself an above average blogger. This isn’t true in some objective sense; it’s just that I judge bloggers by how interesting their thoughts are to me, and obviously I write about things that are interesting to me! There’s no bias I’m falling for here; it’s just that “Are you an above average blogger?” leaves “above average” open to my interpretation.
II.
There is, however, a closely related bias that I and lots of other people have. This bias occurs when we take a situation like those above, but now create a more objective test of that skill. To illustrate with an example, suppose you asked all the students at a university whether they have an above-median GPA. If 90% of students said yes, that would demonstrate a widespread bias — because unlike “Are you a better than the median student”, here there’s no room for interpretation.
The way this bias manifests in me (and many others I imagine) is: I tend to underestimate the competence of people who think very differently from me. I started thinking about this the other day when I listened to Julia Galef’s podcast episode with David Shor (which I highly recommend[1]). Shor is a young Democratic political strategist, originally hired by Barack Obama’s 2012 reelection campaign to run their data operation and figure out how the campaign should spend its money. Shor says:
When I first started in 2012, I was 20 and I was like, “Oh, I’m going to do all of this math and we’re going to win elections.” And I was with all these other nerds, we were in this cave. We really hated these old school consultants who had been in politics for like 20 years. [...] We had all these disagreements because the old school consultants were like, “You need to go up on TV, you need to focus o...
Dec 12, 2021
LessWrong is providing feedback and proofreading on drafts as a service by Ruby
05:20
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: LessWrong is providing feedback and proofreading on drafts as a service, published by Ruby on LessWrong.
This announcement follows the Amazon PR/FAQ format. This is an actual feature announcement.
TL;DR
Before one publishes a post, it can be hard to know if you caught all the typos, explained things clearly, made a critical error, or wrote something that anybody is interested in to begin with. To reduce the guesswork, LessWrong is now providing free feedback on drafts (and post ideas) to any user with 100+ karma. We’ll provide the feedback ourselves, send your draft to a professional copy editor, or get the opinion of a relevant peer or expert in your domain. Or something else, whatever is needed to be helpful!
The Problem
Many people are reluctant to share posts before they’re confident that (i) they’re correct, (ii) they’ll get a good reception. It sucks to put out a post and then notice a dumb typo a day later, or to publish and then have a critical flaw immediately revealed to everyone, or to share a post and hear only crickets. The fear of these outcomes is enough to prevent a lot of great ideas from ever escaping their creators’ heads. And although many people feel better after getting some feedback, soliciting it can be effortful–you’ve got to find someone else and then tap into your social capital and ask a favor.
Solution
To help get more excellent posts into the world, LessWrong is now providing feedback on tap. Any author with 100+ karma can ask for the kind of feedback they need, and the LessWrong team will make it happen. Quick, easy, free. Within a couple of days (or hours), we’ll have feedback on your post that will let you post with greater confidence that your post is good.
Getting Started
On the post edit page (create a new post or edit an existing draft), if you have 100+ karma, you will see a new button: Request Feedback. Clicking it will start an Intercom chat with a LessWrong team member; in that chat, describe what kind of feedback you’re looking for (proofreading, style, coherence, expert feedback, etc.) and the LessWrong team will make it happen.
You needn’t have even written anything to use the feature. Feel free to chat to us about post ideas you have.
The new button (left) appears when you create a new post or edit an existing one.
Press "Request Feedback" to have the Intercom Messenger popup.
Quotes (fictional)
After getting a round of feedback through the new LessWrong system, I’m much less afraid that people will ignore or downvote my post. I’ve got evidence that it’s something good that people will want to read. - Oliver Habryka
A great benefit from the LessWrong feedback system, now that I’ve used it several times, is that the detailed feedback has helped me improve as a writer. - John McPostALot
FAQ
Who will provide the feedback?
It depends on the kind of feedback being sought. For a quick sanity check or proofread, a LessWrong team member or volunteer might do it. If more thorough copy-editing is requested, we’ll send your draft to a professional copy-editor. And if you’re looking for comments from a domain expert (biology, AI, etc), we’ll find someone willing to provide such feedback.
These types of reviewers are our current guess at what we will provide, but that might evolve over time as we figure out what kinds of feedback people need.
How quickly will I get the feedback?
Depends on the kind of feedback being sought. The LessWrong team can get things back to you within a day or two; copy-editor turnaround will probably be variable, but sometimes quick; for external domain experts, it could be a bit longer.
How much does this cost?
Free to eligible users.
How many times can I use it?
We’re not setting any explicit limits on how many times you can request feedback; however, requests will be prioritized at our discretion (hopefully we have the capacit...
Dec 12, 2021
Industrial literacy by jasoncrawford
04:59
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Industrial literacy, published by jasoncrawford on LessWrong.
This is a linkpost.
I’ve said before that understanding where our modern standard of living comes from, at a basic level, is a responsibility of every citizen in an industrial civilization. Let’s call it “industrial literacy.”
Industrial literacy is understanding:
That the food you eat is grown using synthetic fertilizers, and that this is needed for agricultural productivity, because all soil loses its fertility naturally over time if it is not deliberately replenished. That before we had modern agriculture, more than half the workforce had to labor on farms, just to feed the other half. That if synthetic fertilizer was suddenly lost, a mass famine would ensue and billions would starve.
That those same crops would not be able to feed us if they were not also protected from pests, who will ravage entire fields if given a chance. That whole regions used to see seasons where they would lose large swaths of their produce to swarms of insects, such as boll weevils attacking cotton plants in the American South, or the phylloxera devouring grapes in the vineyards of France. That before synthetic pesticides, farmers were forced to rely on much more toxic substances, such as compounds of arsenic.
That before we had electricity and clean natural gas, people burned unrefined solid fuels in their homes—wood, coal, even dung (!)—to cook their food and to keep from freezing in winter. That these primitive fuels, dirty with contaminants, created toxic smoke: indoor air pollution. That indoor air pollution remains a problem today for 40% of the world population, who still rely on pre-industrial fuels.
That before twentieth-century appliances, housework was a full-time job, which invariably fell on women. That each household would spend almost 60 hours a week on manual labor: hauling water from the well for drinking and cooking, and then carrying the dirty water outside again; sewing clothes by hand, since store-bought ones were too expensive for most families; laundering clothes in a basin, scrubbing laboriously by hand, then hanging them up to dry; cooking every meal from scratch. That the washing machine, clothes dryer, dishwasher, vacuum cleaner, and microwave are the equivalent of a full-time mechanical servant for every household.
That plastics are produced in enormous quantities because, for so many purposes—from food containers to electrical wire coatings to children’s toys—we need a material that is cheap, light, flexible, waterproof, and insulating, and that can easily be made in any shape and color (including transparent!). That before plastic, many of these applications used animal parts, such as ivory tusks, tortoise shells, or whale bone. That in such a world, those products were a luxury for a wealthy elite, instead of a commodity for the masses, and the animals that provided them were hunted to near extinction.
That automobiles are a lifeline to people who live in rural areas (almost 20% in the US alone), and who were deeply isolated in the era before the car and the telephone. That in a world without automobiles, we relied on millions of horses, which in New York City around 1900 dumped a hundred thousand gallons of urine and millions of pounds of manure on the streets daily.
That half of everyone you know over the age of five is alive today only because of antibiotics, vaccines, and sanitizing chemicals in our water supply. That before these innovations, infant mortality (in the first year of life) was as high as 20%.
When you know these facts of history—which many schools do not teach—you understand what “industrial civilization” is and why it is the benefactor of everyone who is lucky enough to live in it. You understand that the electric generator, the automobile, the chemical plant, ...
Dec 12, 2021
The Parable of Predict-O-Matic by abramdemski
23:58
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Parable of Predict-O-Matic, published by abramdemski on LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
I've been thinking more about partial agency. I want to expand on some issues brought up in the comments to my previous post, and on other complications which I've been thinking about. But for now, a more informal parable. (Mainly because this is easier to write than my more technical thoughts.)
This relates to oracle AI and to inner optimizers, but my focus is a little different.
1
Suppose you are designing a new invention, a predict-o-matic. It is a wondrous machine which will predict everything for us: weather, politics, the newest advances in quantum physics, you name it. The machine isn't infallible, but it will integrate data across a wide range of domains, automatically keeping itself up-to-date with all areas of science and current events. You fully expect that once your product goes live, it will become a household utility, replacing services like Google. (Google only lets you search the known!)
Things are going well. You've got investors. You have an office and a staff. These days, it hardly even feels like a start-up any more; progress is going well.
One day, an intern raises a concern.
"If everyone is going to be using Predict-O-Matic, we can't think of it as a passive observer. Its answers will shape events. If it says stocks will rise, they'll rise. If it says stocks will fall, then fall they will. Many people will vote based on its predictions."
"Yes," you say, "but Predict-O-Matic is an impartial observer nonetheless. It will answer people's questions as best it can, and they react however they will."
"But --" the intern objects -- "Predict-O-Matic will see those possible reactions. It knows it could give several different valid predictions, and different predictions result in different futures. It has to decide which one to give somehow."
You tap on your desk in thought for a few seconds. "That's true. But we can still keep it objective. It could pick randomly."
"Randomly? But some of these will be huge issues! Companies -- no, nations -- will one day rise or fall based on the word of Predict-O-Matic. When Predict-O-Matic is making a prediction, it is choosing a future for us. We can't leave that to a coin flip! We have to select the prediction which results in the best overall future. Forget being an impassive observer! We need to teach Predict-O-Matic human values!"
You think about this. The thought of Predict-O-Matic deliberately steering the future sends a shudder down your spine. But what alternative do you have? The intern isn't suggesting Predict-O-Matic should lie, or bend the truth in any way -- it answers 100% honestly to the best of its ability. But (you realize with a sinking feeling) honesty still leaves a lot of wiggle room, and the consequences of wiggles could be huge.
After a long silence, you meet the intern's eyes. "Look. People have to trust Predict-O-Matic. And I don't just mean they have to believe Predict-O-Matic. They're bringing this thing into their homes. They have to trust that Predict-O-Matic is something they should be listening to. We can't build value judgements into this thing! If it ever came out that we had coded a value function into Predict-O-Matic, a value function which selected the very future itself by selecting which predictions to make -- we'd be done for! No matter how honest Predict-O-Matic remained, it would be seen as a manipulator. No matter how beneficent its guiding hand, there are always compromises, downsides, questionable calls. No matter how careful we were to set up its values -- to make them moral, to make them humanitarian, to make them politically correct and broadly appealing -- who are we to choose? No. We'd be done for. They'd hang us....
Dec 12, 2021
Being the (Pareto) Best in the World by johnswentworth
04:55
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Being the (Pareto) Best in the World, published by johnswentworth on LessWrong.
The generalized efficient markets (GEM) principle says, roughly, that things which would give you a big windfall of money and/or status, will not be easy. If such an opportunity were available, someone else would have already taken it. You will never find a $100 bill on the floor of Grand Central Station at rush hour, because someone would have picked it up already.
One way to circumvent GEM is to be the best in the world at some relevant skill. A superhuman with hawk-like eyesight and the speed of the Flash might very well be able to snag $100 bills off the floor of Grand Central. More realistically, even though financial markets are the ur-example of efficiency, a handful of firms do make impressive amounts of money by being faster than anyone else in their market. I’m unlikely to ever find a proof of the Riemann Hypothesis, but Terry Tao might. Etc.
But being the best in the world, in a sense sufficient to circumvent GEM, is not as hard as it might seem at first glance (though that doesn’t exactly make it easy). The trick is to exploit dimensionality.
Consider: becoming one of the world’s top experts in proteomics is hard. Becoming one of the world’s top experts in macroeconomic modelling is hard. But how hard is it to become sufficiently expert in proteomics and macroeconomic modelling that nobody is better than you at both simultaneously? In other words, how hard is it to reach the Pareto frontier?
Having reached that Pareto frontier, you will have circumvented the GEM: you will be the single best-qualified person in the world for (some) problems which apply macroeconomic modelling to proteomic data. You will have a realistic shot at a big money/status windfall, with relatively little effort.
(Obviously we’re oversimplifying a lot by putting things like “macroeconomic modelling skill” on a single axis, and breaking it out onto multiple axes would strengthen the main point of this post. On the other hand, it would complicate the explanation; I’m keeping it simple for now.)
Let’s dig into a few details of this approach.
Elbow Room
There are many table tennis players, but only one best player in the world. This is a side effect of ranking people on one dimension: there’s only going to be one point furthest to the right (absent a tie).
Pareto optimality pushes us into more dimensions. There’s only one best table tennis player, and only one best 100-meter sprinter, but there can be an unlimited number of Pareto-optimal table tennis/sprinters.
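To make "Pareto-optimal" concrete, here is a minimal sketch with invented players and skill numbers (none of this is from the post): a player sits on the frontier if no one else matches or beats them on both axes.

```python
# A player is Pareto-optimal if no one else is at least as good on both axes
# and strictly better on at least one. Several players can qualify at once,
# unlike a single one-dimensional ranking. The data below is made up.
players = {
    "A": (95, 40),  # (table tennis skill, sprinting skill)
    "B": (80, 70),
    "C": (60, 90),
    "D": (55, 55),  # dominated by B
    "E": (30, 99),
}

def dominates(p, q):
    """True if p is at least as good as q everywhere and strictly better somewhere."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

frontier = [name for name, p in players.items()
            if not any(dominates(q, p) for other, q in players.items() if other != name)]
print(frontier)  # ['A', 'B', 'C', 'E'] (four Pareto-optimal players, one dominated)
```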
Problem is, for GEM purposes, elbow room matters. Maybe I'm on the Pareto frontier of Bayesian statistics and gerontology, but if there's one person just a little bit better at statistics and worse at gerontology than me, and another person just a little bit better at gerontology and worse at statistics, then GEM only gives me the advantage over a tiny little chunk of the skill-space.
This brings up another aspect.
Problem Density
Claiming a spot on a Pareto frontier gives you some chunk of the skill-space to call your own. But that’s only useful to the extent that your territory contains useful problems.
Two pieces factor in here. First, how large a territory can you claim? This is about elbow room, as in the diagram above. Second, what’s the density of useful problems within this region of skill-space? The table tennis/sprinting space doesn’t have a whole lot going on. Statistics and gerontology sounds more promising. Cryptography and monetary economics is probably a particularly rich Pareto frontier these days. (And of course, we don’t need to stop at two dimensions - but we’re going to stop there in this post in order to keep things simple.)
Dimensionality
One problem with this whole GEM-vs-Pareto concept: if chasing a Pareto frontier makes it ...
Dec 12, 2021
Welcome to LessWrong! by Ruby, habryka, Ben Pace, Raemon
03:12
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Welcome to LessWrong!, published by Ruby, habryka, Ben Pace, Raemon on LessWrong.
The road to wisdom?
-- Well, it's plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
- Piet Hein
Hence the name LessWrong. We might never attain perfect understanding of the world, but we can at least strive to become less and less wrong each day.
We are a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we work to develop and practice the art of human rationality.[1]
To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.
LessWrong serves these purposes with its library of rationality writings, community discussion forum, open questions research platform, and community page for in-person events.
To get a feel for what LessWrong is about, check out our Concepts page, or view this selection of LessWrong posts which might appeal to you:
What is rationality and why care about it? Try Your intuitions are not magic and The Cognitive Science of Rationality.
Curious about the mind? You might enjoy How An Algorithm Feels From The Inside and The Apologist and the Revolutionary.
Keen on self-improvement? Remember that Humans are not automatically strategic.
Care about argument and evidence? Consider Policy Debates Should Not Appear One-Sided and How To Convince Me that 2 + 2 = 3.
Interested in how to use language well? Be aware of 37 Ways That Words Can Be Wrong.
Want to teach yourself something? We compiled a list of The Best Textbooks on Every Subject.
Like probability and statistics? Around here we're fans of Bayesianism; you might like this interactive guide to Bayes' theorem (hosted on Arbital.com).
Of an altruistic mindset? We recommend On Caring.
Check out this footnote[2] below the fold for samples of posts about AI, science, philosophy, history, communication, culture, self-care, and more.
If LessWrong seems like a place for you, we encourage you to become familiar with LessWrong’s philosophical foundations. Our core readings can be found on the Library page.
We especially recommend:
Rationality: From AI to Zombies by Eliezer Yudkowsky (or Harry Potter and the Methods of Rationality by the same author, which covers similar ground in narrative form)
The Codex by Scott Alexander
Find more details about these texts in this footnote[3]
For further getting started info, we direct you to LessWrong’s FAQ. Lastly, we suggest you create an account so you can vote, comment, save your reading progress, get tailored recommendations, and subscribe to our latest and best posts. Once you've done so, please say hello on our latest welcome thread!
Related Pages
LessWrong FAQ
A Brief History of LessWrong
Team
LessWrong Concepts
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dec 12, 2021
PR is corrosive; “reputation” is not by AnnaSalamon
02:53
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: PR is corrosive; “reputation” is not, published by AnnaSalamon on LessWrong.
This is in some sense a small detail, but one important enough to be worth write-up and critique: AFAICT, “PR” is a corrupt concept, in the sense that if you try to “navigate PR concerns” about yourself / your organization / your cause area / etc., the concept will guide you toward harmful and confused actions. In contrast, if you try to safeguard your “reputation”, your “brand”, or your “honor,” I predict this will basically go fine, and will not lead you to leave a weird confused residue in yourself or others.
To explain the difference:
If I am safeguarding my “honor” (or my “reputation”, “brand”, or “good name”), there are some fixed standards that I try to be known as adhering to. For example, in Game of Thrones, the Lannisters are safeguarding their “honor” by adhering to the principle “A Lannister always pays his debts.” They take pains to adhere to a certain standard, and to be known to adhere to that standard. Many examples are more complicated than this; a gentleman of 1800 who took up a duel to defend his “honor” was usually not defending his known adherence to a single simple principle a la the Lannisters. But it was still about his visible adherence to a fixed (though not explicit) societal standard.
In contrast, if I am “managing PR concerns,” there are no fixed standards of good conduct, or of my-brand-like conduct, that I am trying to adhere to. Instead, I am trying to do a more complicated operation:
Model which words or actions may cause “people” (especially media, or self-reinforcing miasma) to get upset with me;
Try to speak in such a way as to not set that off.
It’s a weirder or loopier process. One that’s more prone to self-reinforcing fears of shadows, and one that somehow (I think?) tends to pull a person away from communicating anything at all. Reminiscent of “Politics and the English Language.” Not reminiscent of Strunk and White.
One way you can see the difference, is that when people think about “PR” they imagine a weird outside expertise, such that you need to have a “PR consultant” or a “media consultant” who you should nervously heed advice from. When people think about their “honor," it's more a thing they can know or choose directly, and so it is more a thing that leaves them free to communicate something.
So: simple suggestion. If, at any point, you find yourself trying to “navigate PR”, or to help some person or organization or cause area or club or whatever to “navigate PR,” see if you can instead think and speak in terms of defending your/their “honor”, “reputation”, or “good name”. And see if that doesn’t make everybody feel a bit clearer, freer, and more as though their feet are on the ground.
Related: The Inner Ring, by CS Lewis; The New York Times, by Robert Rhinehart.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Dec 12, 2021
Making Beliefs Pay Rent (in Anticipated Experiences) by Eliezer Yudkowsky
06:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Making Beliefs Pay Rent (in Anticipated Experiences), published by Eliezer Yudkowsky on LessWrong.
Thus begins the ancient parable:
If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”
If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.
Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?
Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.
It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.
You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?
To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock’s second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
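For readers who want the arithmetic spelled out, taking the stated height and gravity at face value (and ignoring air resistance and the sound's travel time), the fall time is

$$ t = \sqrt{\frac{2h}{g}} = \sqrt{\frac{2 \times 120\,\mathrm{m}}{9.8\,\mathrm{m/s^2}}} \approx 4.9\,\mathrm{s}, $$

about five ticks of the second hand, which is where the "five seconds later" comes from.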
It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.
The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configur...
Dec 12, 2021
That Alien Message by Eliezer Yudkowsky
17:10
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: That Alien Message, published by Eliezer Yudkowsky on LessWrong.
Imagine a world much like this one, in which, thanks to gene-selection technologies, the average IQ is 140 (on our scale). Potential Einsteins are one-in-a-thousand, not one-in-a-million; and they grow up in a school system suited, if not to them personally, then at least to bright kids. Calculus is routinely taught in sixth grade. Albert Einstein, himself, still lived and still made approximately the same discoveries, but his work no longer seems exceptional. Several modern top-flight physicists have made equivalent breakthroughs, and are still around to talk.
(No, this is not the world Brennan lives in.)
One day, the stars in the night sky begin to change.
Some grow brighter. Some grow dimmer. Most remain the same. Astronomical telescopes capture it all, moment by moment. The stars that change, change their luminosity one at a time, distinctly so; the luminosity change occurs over the course of a microsecond, but a whole second separates each change.
It is clear, from the first instant anyone realizes that more than one star is changing, that the process seems to center around Earth particularly. The arrival of the light from the events, at many stars scattered around the galaxy, has been precisely timed to Earth in its orbit. Soon, confirmation comes in from high-orbiting telescopes (they have those) that the astronomical miracles do not seem as synchronized from outside Earth. Only Earth's telescopes see one star changing every second (1005 milliseconds, actually).
Almost the entire combined brainpower of Earth turns to analysis.
It quickly becomes clear that the stars that jump in luminosity, all jump by a factor of exactly 256; those that diminish in luminosity, diminish by a factor of exactly 256. There is no apparent pattern in the stellar coordinates. This leaves, simply, a pattern of BRIGHT-dim-BRIGHT-BRIGHT...
"A binary message!" is everyone's first thought.
But in this world there are careful thinkers, of great prestige as well, and they are not so sure. "There are easier ways to send a message," they post to their blogs, "if you can make stars flicker, and if you want to communicate. Something is happening. It appears, prima facie, to focus on Earth in particular. To call it a 'message' presumes a great deal more about the cause behind it. There might be some kind of evolutionary process among, um, things that can make stars flicker, that ends up sensitive to intelligence somehow... Yeah, there's probably something like 'intelligence' behind it, but try to appreciate how wide a range of possibilities that really implies. We don't know this is a message, or that it was sent from the same kind of motivations that might move us. I mean, we would just signal using a big flashlight, we wouldn't mess up a whole galaxy."
By this time, someone has started to collate the astronomical data and post it to the Internet. Early suggestions that the data might be harmful, have been... not ignored, but not obeyed, either. If anything this powerful wants to hurt you, you're pretty much dead (people reason).
Multiple research groups are looking for patterns in the stellar coordinates—or fractional arrival times of the changes, relative to the center of the Earth—or exact durations of the luminosity shift—or any tiny variance in the magnitude shift—or any other fact that might be known about the stars before they changed. But most people are turning their attention to the pattern of BRIGHTS and dims.
It becomes clear almost instantly that the pattern sent is highly redundant. Of the first 16 bits, 12 are BRIGHTS and 4 are dims. The first 32 bits received align with the second 32 bits received, with only 7 out of 32 bits different, and then the next 32 bits received have only 9 out of 32 bits different from the s...
Dec 12, 2021
Why the tails come apart by Thrasymachus
12:02
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Why the tails come apart, published by Thrasymachus on LessWrong.
[I'm unsure how much this rehashes things 'everyone knows already' - if old hat, feel free to downvote into oblivion. My other motivation for the cross-post is the hope it might catch the interest of someone with a stronger mathematical background who could make this line of argument more robust]
[Edit 2014/11/14: mainly adjustments and rewording in light of the many helpful comments below (thanks!). I've also added a geometric explanation.]
Many outcomes of interest have pretty good predictors. It seems that height correlates to performance in basketball (the average height in the NBA is around 6'7"). Faster serves in tennis improve one's likelihood of winning. IQ scores are known to predict a slew of factors, from income, to chance of being imprisoned, to lifespan.
What's interesting is what happens to these relationships 'out on the tail': extreme outliers of a given predictor are seldom similarly extreme outliers on the outcome it predicts, and vice versa. Although 6'7" is very tall, it lies within a couple of standard deviations of the median US adult male height - there are many thousands of US men taller than the average NBA player who are nevertheless not in the NBA. Although elite tennis players have very fast serves, if you look at the players serving the fastest serves ever recorded, they aren't the very best players of their time. It is harder to look at the IQ case due to test ceilings, but again there seems to be some divergence near the top: the very highest earners tend to be very smart, but their intelligence is not in step with their income (their cognitive ability is around +3 to +4 SD above the mean, yet their wealth is much higher than this) (1).
The trend seems to be that even when two factors are correlated, their tails diverge: the fastest servers are good tennis players, but not the very best (and the very best players serve fast, but not the very fastest); the very richest tend to be smart, but not the very smartest (and vice versa). Why?
Too much of a good thing?
One candidate explanation would be that more isn't always better, and the correlations one gets looking at the whole population don't capture a reversal at the right tail. Maybe being taller at basketball is good up to a point, but being really tall leads to greater costs in terms of things like agility. Maybe although having a faster serve is better all things being equal, focusing too heavily on one's serve counterproductively neglects other areas of one's game. Maybe a high IQ is good for earning money, but a stratospherically high IQ has an increased risk of productivity-reducing mental illness. Or something along those lines.
I would guess that these sorts of 'hidden trade-offs' are common. But, the 'divergence of tails' seems pretty ubiquitous (the tallest aren't the heaviest, the smartest parents don't have the smartest children, the fastest runners aren't the best footballers, etc. etc.), and it would be weird if there was always a 'too much of a good thing' story to be told for all of these associations. I think there is a more general explanation.
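Before the graphical explanation, a quick simulation (my own illustration, not from the post) shows the phenomenon directly: even with a correlation of about 0.8, the single most extreme individual on one variable is rarely the most extreme on the other.

```python
# Simulate two standard-normal variables with correlation ~0.8 and check how
# extreme the top scorer on x turns out to be on y. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

best_x = int(np.argmax(x))
rank_on_y = int((y > y[best_x]).sum()) + 1
print("rank on y of the single best-x individual:", rank_on_y)  # usually far from rank 1

top_x = x > np.quantile(x, 0.999)
print("mean y among the top 0.1% on x:", float(y[top_x].mean()))
print("mean x among the top 0.1% on x:", float(x[top_x].mean()))  # noticeably larger
```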
The simple graphical explanation
[Inspired by this essay from Grady Towers]
Suppose you make a scatter plot of two correlated variables. Here's one I grabbed off Google, comparing the speed of a ball out of a baseball pitcher's hand to its speed crossing the plate:
It is unsurprising to see these are correlated (I'd guess the R-square is > 0.8). But if one looks at the extreme end of the graph, the very fastest balls out of the hand aren't the very fastest balls crossing the plate, and vice versa. This feature is general. Look at this data (again convenience sampled from googling 'scatter plot'):
[The original post shows two further scatter plots with the same feature.]
Given a correlation, the envelo...
Dec 12, 2021
What Do We Mean By "Rationality"? by Eliezer Yudkowsky
10:00
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: What Do We Mean By "Rationality"?, published by Eliezer Yudkowsky on LessWrong.
I mean two things:
1. Epistemic rationality: systematically improving the accuracy of your beliefs.
2. Instrumental rationality: systematically achieving your values.
The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.
This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.1
Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”
So rationality is about forming true beliefs and making decisions that help you win.
(Where truth doesn't mean “certainty,” since we can do plenty to increase the probability that our beliefs are accurate even though we're uncertain; and winning doesn't mean “winning at others' expense,” since our values include everything we care about, including other people.)
When people say “X is rational!” it’s usually just a more strident way of saying “I think X is true” or “I think X is good.” So why have an additional word for “rational” as well as “true” and “good”?
An analogous argument can be given against using “true.” There is no need to say “it is true that snow is white” when you could just say “snow is white.” What makes the idea of truth useful is that it allows us to talk about the general features of map-territory correspondence. “True models usually produce better experimental predictions than false models” is a useful generalization, and it’s not one you can make without using a concept like “true” or “accurate.”
Similarly, “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” is the kind of thought that depends on a concept of (instrumental) rationality, whereas “It’s rational to eat vegetables” can probably be replaced with “It’s useful to eat vegetables” or “It’s in your interest to eat vegetables.” We need a concept like “rational” in order to note general facts about those ways of thinking that systematically produce truth or value—and the systematic ways in which we fall short of those standards.
As we’ve observed in the previous essays, experimental psychologists sometimes uncover human reasoning that seems very strange. For example, someone rates the probability “Bill plays jazz” as less than the probability “Bill is an accountant who plays jazz.” This seems like an odd judgment, since any particular jazz-playing accountant is obviously a jazz player. But to what higher vantage point do we appeal in saying that the judgment is wrong?
Experimental psychologists use two gold standards: probability theory, and decision theory.
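Applied to the Bill example, probability theory's verdict is just the conjunction rule (the notation is mine, not from the essay):

$$ P(\text{jazz} \wedge \text{accountant}) = P(\text{jazz}) \cdot P(\text{accountant} \mid \text{jazz}) \le P(\text{jazz}), $$

so no coherent probability assignment can rate "Bill is an accountant who plays jazz" above "Bill plays jazz."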
Probability theory is the set of laws underlying rational belief. The mathematics of probability applies equally to “figuring out where your bookcase is” and “estimating how many hairs were on Julius Caesar’s head,” even though our evidence for the claim “Julius Caesar was bald” is likely to be more complicated and indirect than our evidence for the claim “there’s a bookcase in my room.” It’s all the same problem of how to process the evidence and observations to update one’s beliefs. Similarly, decision theory is the set of laws underlying rational ...
Dec 12, 2021
What 2026 looks like by Daniel Kokotajlo
27:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: What 2026 looks like, published by Daniel Kokotajlo on LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This was written for the Vignettes Workshop.[1] The goal is to write out a detailed future history (“trajectory”) that is as realistic (to me) as I can currently manage, i.e. I’m not aware of any alternative trajectory that is similarly detailed and clearly more plausible to me. The methodology is roughly: Write a future history of 2022. Condition on it, and write a future history of 2023. Repeat for 2024, 2025, etc. (I'm posting 2022-2026 now so I can get feedback that will help me write 2027+. I intend to keep writing until the story reaches singularity/extinction/utopia/etc.)
What’s the point of doing this? Well, there are a couple of reasons:
Sometimes attempting to write down a concrete example causes you to learn things, e.g. that a possibility is more or less plausible than you thought.
Most serious conversation about the future takes place at a high level of abstraction, talking about e.g. GDP acceleration, timelines until TAI is affordable, multipolar vs. unipolar takeoff. Vignettes are a neglected complementary approach worth exploring.
Most stories are written backwards. The author begins with some idea of how it will end, and arranges the story to achieve that ending. Reality, by contrast, proceeds from past to future. It isn’t trying to entertain anyone or prove a point in an argument.
Anecdotally, various people seem to have found Paul Christiano’s “tales of doom” stories helpful, and relative to typical discussions those stories are quite close to what we want. (I still think a bit more detail would be good — e.g. Paul’s stories don’t give dates, or durations, or any numbers at all really.)[2]
“I want someone to ... write a trajectory for how AI goes down, that is really specific about what the world GDP is in every one of the years from now until insane intelligence explosion. And just write down what the world is like in each of those years because I don't know how to write an internally consistent, plausible trajectory. I don't know how to write even one of those for anything except a ridiculously fast takeoff.” --Buck Shlegeris
This vignette was hard to write. To achieve the desired level of detail I had to make a bunch of stuff up, but in order to be realistic I had to constantly ask “but actually though, what would really happen in this situation?” which made it painfully obvious how little I know about the future. There are numerous points where I had to conclude “Well, this does seem implausible, but I can’t think of anything more plausible at the moment and I need to move on.” I fully expect the actual world to diverge quickly from the trajectory laid out here. Let anyone who (with the benefit of hindsight) claims this divergence as evidence against my judgment prove it by exhibiting a vignette/trajectory they themselves wrote in 2021. If it maintains a similar level of detail (and thus sticks its neck out just as much) while being more accurate, I bow deeply in respect!
I hope this inspires other people to write more vignettes soon. We at the Center on Long-Term Risk would like to have a collection to use for strategy discussions. Let me know if you’d like to do this, and I can give you advice & encouragement! I’d be happy to run another workshop.
2022
GPT-3 is finally obsolete. OpenAI, Google, Facebook, and DeepMind all have gigantic multimodal transformers, similar in size to GPT-3 but trained on images, video, maybe audio too, and generally higher-quality data.
Not only that, but they are now typically fine-tuned in various ways--for example, to answer questions correctly, or produce engaging conversation as a chatbot.
The chatbots are fun to talk to but erratic and ultimately considered s...
|
Dec 12, 2021 |
Reality-Revealing and Reality-Masking Puzzles by AnnaSalamon
20:45
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Reality-Revealing and Reality-Masking Puzzles, published by AnnaSalamon on LessWrong.
Tl;dr: I’ll try here to show how CFAR’s “art of rationality” has evolved over time, and what has driven that evolution.
In the course of this, I’ll introduce the distinction between what I’ll call “reality-revealing puzzles” and “reality-masking puzzles”—a distinction that I think is almost necessary for anyone attempting to develop a psychological art in ways that will help rather than harm. (And one I wish I’d had explicitly back when the Center for Applied Rationality was founded.)
I’ll also be trying to elaborate, here, on the notion we at CFAR have recently been tossing around about CFAR being an attempt to bridge between common sense and Singularity scenarios—an attempt to figure out how people can stay grounded in common sense and ordinary decency and humane values and so on, while also taking in (and planning actions within) the kind of universe we may actually be living in.
Arts grow from puzzles. I like to look at mathematics, or music, or ungodly things like marketing, and ask: What puzzles were its creators tinkering with that led them to leave behind these structures? (Structures now being used by other people, for other reasons.)
I picture arts like coral reefs. Coral polyps build shell-bits for their own reasons, but over time there accumulates a reef usable by others. Math built up like this—and math is now a powerful structure for building from. [Sales and Freud and modern marketing/self-help/sales etc. built up some patterns too—and our basic way of seeing each other and ourselves is now built partly in and from all these structures, for better and for worse.]
So let’s ask: What sort of reef is CFAR living within, and adding to? From what puzzles (what patterns of tinkering) has our “rationality” accumulated?
Two kinds of puzzles: “reality-revealing” and “reality-masking”
First, some background. Some puzzles invite a kind of tinkering that lets the world in and leaves you smarter. A kid whittling with a pocket knife is entangling her mind with bits of reality. So is a driver who notices something small about how pedestrians dart into streets, and adjusts accordingly. So also is the mathematician at her daily work. And so on.
Other puzzles (or other contexts) invite a kind of tinkering that has the opposite effect. They invite a tinkering that gradually figures out how to mask parts of the world from your vision. For example, some months into my work as a math tutor I realized I’d been unconsciously learning how to cue my students into acting like my words made sense (even when they didn’t). I’d learned to mask from my own senses the clues about what my students were and were not learning.
We’ll be referring to these puzzle-types a lot, so it’ll help to have a term for them. I’ll call these puzzles “good” or “reality-revealing” puzzles, and “bad” or “reality-masking” puzzles, respectively. Both puzzle-types appear abundantly in most folks’ lives, often mixed together. The same kid with the pocket knife who is busy entangling her mind with data about bark and woodchips and fine motor patterns (from the “good” puzzle of “how can I whittle this stick”), may simultaneously be busy tinkering with the “bad” puzzle of “how can I not-notice when my creations fall short of my hopes.”
(Even “good” puzzles can cause skill loss: a person who studies Dvorak may lose some of their QWERTY skill, and someone who adapts to the unselfconscious arguing of the math department may do worse for a while in contexts requiring tact. The distinction is that “good” puzzles do this only incidentally. Good puzzles do not invite a search for configurations that mask bits of reality. Whereas with me and my math tutees, say, there was a direct reward/conditioning response that happe...
|
Dec 12, 2021 |
Is Success the Enemy of Freedom? (Full) by alkjash
13:38
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Is Success the Enemy of Freedom? (Full), published by alkjash on LessWrong.
This is a linkpost for/
I. Parables
A. Anna is a graduate student studying p-adic quasicoherent topology. It’s a niche subfield of mathematics where Anna feels comfortable working on neat little problems with the small handful of researchers interested in this topic. Last year, Anna stumbled upon a connection between her pet problem and algebraic matroid theory, solving a big open conjecture in the matroid Langlands program. Initially, she was over the moon about the awards and the Quanta articles, but now that things have returned to normal, her advisor is pressuring her to continue working with the matroid theorists with their massive NSF grants and real-world applications. Anna hasn’t had time to think about p-adic quasicoherent topology in months.
B. Ben is one of the top Tetris players in the world, infamous for his signature move: the reverse double T-spin. Ben spent years perfecting this move, which requires lightning fast reflexes and nerves of steel, and has won dozens of tournaments on its back. Recently, Ben felt like his other Tetris skills needed work and tried to play online without using his signature move, but was greeted by a long string of losses: the Tetris servers kept matching him with the other top players in the world, who absolutely stomped him. Discouraged, Ben gave up on the endeavor and went back to practicing the reverse double T-spin.
C. Clara was just promoted to be the youngest Engineering Director at a mid-sized software startup. She quickly climbed the ranks, thanks to her amazing knowledge of all things object-oriented and her excellent communication skills. These days, she finds her schedule packed with what the company needs: back-to-back high-level strategy meetings preparing for the optics of the next product launch, instead of what she loves: rewriting whole codebases in Haskell++.
D. Deborah started her writing career as a small-time crime novelist, who split her time between a colorful cast of sleuthy protagonists. One day, her spunky children’s character Detective Dolly blew up in popularity due to a Fruit Loops advertising campaign. At the beginning of every month, Deborah tells herself she’s going to finally kill off Dolly and get to work on that grand historical romance she’s been dreaming about. At the end of every month, Deborah’s husband comes home with the mortgage bills for their expensive bayside mansion, paid for with “Dolly money,” and Deborah starts yet another Elementary School Enigma.
E. While checking his email in the wee hours of the morning, Professor Evan Evanson notices an appealing seminar announcement: “A Gentle Introduction to P-adic Quasicoherent Topology (Part the First).” Ever since being exposed to the topic in his undergraduate matroid theory class, Evan has always wanted to learn more. He arrives bright and early on the day of the seminar and finds a prime seat, but as others file into the lecture hall, he’s greeted by a mortifying realization: it’s a graduate student learning seminar, and he’s the only faculty member present. Squeezing in his embarrassment, Evan sits through the talk and learns quite a bit of fascinating new mathematics. For some reason, even though he enjoyed the experience, Evan never comes back for Part the Second.
F. Whenever Frank looks back to his college years, he remembers most fondly the day he was kicked out of the conservative school newspaper for penning a provocative piece about jailing all billionaires. Although he was a mediocre student with a medium-sized drinking problem, on that day Frank felt like a man with principles. A real American patriot in the ranks of Patrick Henry or Thomas Jefferson. After college, Frank met a girl who helped him sort himself out and get sober, a...
|
Dec 12, 2021 |
Lessons I've Learned from Self-Teaching
15:09
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Lessons I've Learned from Self-Teaching, published by TurnTrout on LessWrong.
In 2018, I was a bright-eyed grad student who was freaking out about AI alignment. I guess I'm still a bright-eyed grad student freaking out about AI alignment, but that's beside the point.
I wanted to help, and so I started levelling up. While I'd read Nate Soares's self-teaching posts, there were a few key lessons I'd either failed to internalize or failed to consider at all. I think that implementing these might have doubled the benefit I drew from my studies.
I can't usefully write a letter to my past self, so let me write a letter to you instead, keeping in mind that good advice for past-me may not be good advice for you.
Make Sure You Remember The Content
TL;DR: use a spaced repetition system like Anki. Put in cards for key concepts and practice using the concepts. Review the cards every day without fail. This is the most important piece of advice.
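As a toy illustration of what "spaced" means here (my own sketch, not TurnTrout's setup and not Anki's actual scheduling algorithm), review intervals typically grow geometrically after each successful recall, so mature cards cost almost nothing per day:

```python
# Hypothetical, simplified interval schedule; real SRS apps track per-card
# ease factors, lapses, and more.

def next_interval(current_days: float, remembered: bool, ease: float = 2.5) -> float:
    if not remembered:
        return 1.0                 # lapse: relearn the card tomorrow
    return current_days * ease     # success: wait roughly 2.5x longer next time

interval = 1.0
for review in range(1, 6):
    print(f"review {review}: next review in ~{interval:.1f} days")
    interval = next_interval(interval, remembered=True)
# Intervals grow roughly 1 -> 2.5 -> 6 -> 16 -> 39 days.
```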
The first few months of 2018 were a dream: I was learning math, having fun, and remaking myself. I read and reviewed about one textbook a month. I was learning how to math, how to write proofs and read equations fluently and think rigorously.
I had so much fun that I hurt my wrists typing up my thoughts on impact measures. This turned a lot of my life upside-down. My wrists wouldn't fully heal for two years, and a lot happened during that time. After I hurt my wrists, I became somewhat depressed, posted less frequently, and read fewer books.
When I looked back in 2019/2020 and asked "when and why did my love for textbooks sputter out?", the obvious answer was "when I hurt my hands and lost my sense of autonomy and became depressed, perchance? And maybe I just became averse to reading that way?"
The obvious answer was wrong, but its obvious-ness stopped me from finding the truth until late last year. It felt right, but my introspection had failed me.
The real answer is: when I started learning math, I gained a lot of implicit knowledge, like how to write proofs and read math (relatively) quickly. However, I'm no Hermione Granger: left unaided, I'm bad at remembering explicit facts / theorem statements / etc.
I gained implicit knowledge but I didn't remember the actual definitions, unless I actually used them regularly (e.g. as I did for real analysis, which I remained quite fluent in and which I regularly use in my research). Furthermore, I think I coincidentally hit steeply diminishing returns on the implicit knowledge around when I injured myself.
So basically I'm reading these math textbooks, doing the problems, getting a bit better at writing proofs but not really durably remembering 95% of the content. Maybe part of my subconscious noticed that I seem to be wasting time, that when I come back four months after reading a third of a graph theory textbook, I barely remember the new content I had "learned." I thought I was doing things right. I was doing dozens of exercises and thinking deeply about why each definition was the way it was, thinking about how I could apply these theorems to better reason about my own life and my own research, etc.
I explicitly noticed this problem in late 2020 and thought,
is there any way I know of to better retain content?
... gee, what about that thing I did in college that let me learn how to read 2,136 standard-use Japanese characters in 90 days? you know, Anki spaced repetition, that thing I never tried for math because once I tried and failed to memorize dozens of lines of MergeSort pseudocode with it?
hm...
This was the moment I started feeling extremely silly (the exact thought was "there's no possible way that my hand is big enough for how facepalm this moment is", IIRC), but also extremely excited. I could fix my problem!
And a problem this was. In early 2020, I had an interview where I was asked t...
|
Dec 12, 2021 |
Expecting Short Inferential Distances by Eliezer Yudkowsky
04:43
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Expecting Short Inferential Distances, published by Eliezer Yudkowsky on LessWrong.
Homo sapiens’s environment of evolutionary adaptedness (a.k.a. EEA or “ancestral environment”) consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.
In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period.
In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.
In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.
In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot. You’re not likely to think, “Hey, maybe this person has well-supported background knowledge that no one in my band has even heard of,” because it was a reliable invariant of the ancestral environment that this didn’t happen.
Conversely, if you say something blatantly obvious and the other person doesn’t see it, they’re the idiot, or they’re being deliberately obstinate to annoy you.
And to top it off, if someone says something with no obvious support and expects you to believe it—acting all indignant when you don’t—then they must be crazy.
Combined with the illusion of transparency and self-anchoring (the tendency to model other minds as though they were slightly modified versions of oneself), I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
A biologist, speaking to a physicist, can justify evolution by saying it is the simplest explanation. But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase “simplest explanation” with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones. To someone else, “But it’s the simplest explanation!” may sound like an interesting but hardly knockdown argument; it doesn’t feel like all that powerful a tool for comprehending office politics or fixing a broken car. Obviously the biologist is infatuated with their own ideas, too arrogant to be open to alternative explanations which sound just as plausible. (If it sounds plausible to me, it should sound plausible to any sane member of my band.)
And from the biologist’s perspective, they can understand how evolution might sound a little odd at first—but when someone rejects evolution even after the biologist explains that it’s the simplest explanation, well, it’s clear that nonscientists are just idiots and there’s no point in talking ...
|
Dec 12, 2021 |
Are we in an AI overhang? by Andy Jones
07:58
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Are we in an AI overhang?, published by Andy Jones on LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
Over on Developmental Stages of GPTs, orthonormal mentions
it at least reduces the chance of a hardware overhang.
An overhang is when you have had the ability to build transformative AI for quite some time, but you haven't because no-one's realised it's possible. Then someone does and surprise! It's a lot more capable than everyone expected.
I am worried we're in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months.
Investment Bounds
GPT-3 is the first AI system that has obvious, immediate, transformative economic value. While much hay has been made about how much more expensive it is than a typical AI research project, in the wider context of megacorp investment, its costs are insignificant.
GPT-3 has been estimated to cost $5m in compute to train, and - looking at the author list and OpenAI's overall size - maybe another $10m in labour.
Google, Amazon and Microsoft each spend about $20bn/year on R&D and another $20bn each on capital expenditure. Very roughly, it totals to $100bn/year. Against this budget, dropping $1bn or more on scaling GPT up by another factor of 100x is entirely plausible right now. All that's necessary is that tech executives stop thinking of natural language processing as cutesy blue-sky research and start thinking in terms of quarters-till-profitability.
A concrete example is Waymo, which is raising $2bn investment rounds - and that's for a technology with a much longer road to market.
Compute Cost
The other side of the equation is compute cost. The $5m GPT-3 training cost estimate comes from using V100s at $10k/unit and 30 TFLOPS, which is the performance without tensor cores being considered. Amortized over a year, this gives you about $1000/PFLOPS-day.
However, this cost is driven up an order of magnitude by NVIDIA's monopolistic cloud contracts, while performance will be higher when taking tensor cores into account. The current hardware floor is nearer to the RTX 2080 TI's $1k/unit for 125 tensor-core TFLOPS, and that gives you $25/PFLOPS-day. This roughly aligns with AI Impacts’ current estimates, and offers another >10x speedup to our model.
I strongly suspect other bottlenecks stop you from hitting that kind of efficiency or GPT-3 would've happened much sooner, but I still think $25/PFLOPS-day is a lower useful bound.
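A quick arithmetic check of those figures, sketched in Python (my own back-of-the-envelope reconstruction, not a calculation from the post; the one-year amortization follows the convention used in the estimate above):

```python
# Converts a hardware unit price and sustained throughput into $/PFLOPS-day.

def dollars_per_pflops_day(unit_cost_usd: float, tflops: float, amortization_days: float = 365) -> float:
    pflops = tflops / 1000.0                  # sustained PFLOPS per unit
    pflops_days = pflops * amortization_days  # PFLOPS-days delivered over the amortization period
    return unit_cost_usd / pflops_days

# V100: $10k/unit at 30 TFLOPS (tensor cores not counted)
print(dollars_per_pflops_day(10_000, 30))   # ~913, i.e. roughly $1000/PFLOPS-day
# RTX 2080 Ti: $1k/unit at 125 tensor-core TFLOPS
print(dollars_per_pflops_day(1_000, 125))   # ~22, i.e. roughly $25/PFLOPS-day

# Even at the conservative $1000 figure, a $1bn project buys on the order of
# a million PFLOPS-days of training compute.
print(1e9 / 1000)
```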
Other Constraints
I've focused on money so far because most of the current 3.5-month doubling times come from increasing investment. But money aside, there are a couple of other things that could prove to be the binding constraint.
Scaling law breakdown. The GPT series' scaling is expected to break down around 10k PFLOPS-days (§6.3), which is a long way short of the amount of cash on the table.
This could be because the scaling analysis was done on 1024-token sequences. Maybe longer sequences can go further. More likely I'm misunderstanding something.
Sequence length. GPT-3 uses 2048 tokens at a time, and that's with an efficient encoding that cripples it on many tasks. With the naive architecture, increasing the sequence length is quadratically expensive, and getting up to novel-length sequences is not very likely.
But there are a lot of plausible ways to fix that, and complexity is no bar to AI. This constraint might plausibly not be resolved on a timescale of months, however.
Data availability. From the same paper as the previous point, dataset size rises with the square-root of compute; a 1000x larger GPT-3 would want 10 trillion tokens of train...
|
Dec 12, 2021 |
RadVac Commercial Antibody Test Results by johnswentworth
05:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: RadVac Commercial Antibody Test Results, published by johnswentworth on LessWrong.
Background: Making Vaccine
Results are in from the commercial antibody tests. Both my girlfriend and I came back negative - the test did not detect any Spike antibody response in the blood. This post will talk about how I'm updating based on these results, and the next steps.
Here's our timeline so far; more info on the vaccine is in the original post and the radvac whitepaper:
We've taken five doses, spaced apart weekly (on Tuesdays).
The first three doses only included six of the nine peptides, due to delays from the manufacturer. (Spike 660, Spike 1145, and Orf1 5471T were the three missing.)
The blood draw for this test took place the day after the fifth dose. I expect this is too soon to notice significant impact from the last two doses; vaccines in general seem to typically take 2-3 weeks to kick in, and that is my expectation for this one as well. (Also, it was an "IgG antibody test", and WebMD says these antibodies typically take about 2 weeks to show up after covid symptoms show from an actual infection.) This is intended to mainly be a test of the first three doses.
The test apparently used the "DiaSorin Liaison(R) SARS-CoV-2 S1/S2 IgG assay" (I didn't know this until the results came in). According to the FDA, it has about 92% sensitivity and 99% specificity. The "S1/S2" part indicates that it's testing for response to the S1 and S2 subunits of the spike protein - together, these are essentially the whole spike protein.
Important thing to notice: the test was looking for Spike antibodies, and two of our three missing peptides were Spike peptides. Indeed, there were only 3 Spike peptides among the full 9, so with two missing, we only had one Spike peptide in our first three doses. (The rest target other parts of the virus.) So that makes the test significantly less useful than it would otherwise be, and makes me more inclined to get another test in 2-3 weeks when the doses with the other three peptides have had time to kick in.
How I'm Updating
In the original post, I called this test "searching under the streetlamp". It wasn't super likely to come back positive even assuming the vaccine worked as intended, but it was relatively cheap and easy to run the test, so it was our first check. Given the missing Spike peptides and the test only checking against Spike, it was even more likely to come back negative than I originally estimated.
In Jacob's prediction questions, I gave roughly a 25% chance that a commercial antibody test would pass for most people, given three doses and all 9 peptides. I gave the vaccine about 75% chance of working overall, distributed over several different possible worlds. In this specific scenario, it's clear that the prior on test passing should be even lower.
(Reminder on the possible worlds: the vaccine could induce antibody response in the blood and mucus, only mucus, or not at all. It could induce T-cell response separate from antibody response. It could work sometimes, much like how the first dose of commercial mRNA vaccines tend to work in 75% or 85% of people, and in that case I expect more doses/more time to make it work more often.)
After updating on the results, I'm down to about 60-70% chance of working overall. Unfortunately this test just didn't give us very much information - at least about the vaccine working.
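Here is one way to reconstruct that update as a minimal sketch in Python; the numbers and variable names are my reading of the rough figures quoted above, not the author's actual calculation:

```python
# Bayesian update on a negative commercial antibody test.

p_works = 0.75                 # prior that the vaccine works at all
p_pass_given_works = 0.25      # P(test positive | works), for 3 doses with all 9 peptides;
                               # the post notes the real figure here is even lower
                               # because of the missing Spike peptides
p_pass_given_not_works = 0.01  # roughly 1 - specificity of the assay

p_neg_given_works = 1 - p_pass_given_works
p_neg_given_not_works = 1 - p_pass_given_not_works

posterior = (p_neg_given_works * p_works) / (
    p_neg_given_works * p_works + p_neg_given_not_works * (1 - p_works)
)
print(round(posterior, 2))     # ~0.69, in line with the "60-70%" above; a lower
                               # pass probability would move the prior even less
```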
Aside from the test result, we do have one more small piece of information to update on: I was quite congested for 1-2 days after the most recent three doses (and I was generally not congested the rest of the week). That's exactly what we'd expect to see if the vaccine is working as intended, and it's pretty strong evidence that it's doing something. Updating on both that and the test results, I'm at ~70% that it works overall...
|
Dec 12, 2021 |
Politics is way too meta by Rob Bensinger
17:28
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Politics is way too meta, published by Rob Bensinger on LessWrong.
... i.e., it doesn't spend enough time arguing about object-level things.
The way I'm using it in this post, "object-level" might include these kinds of things:
While serving as Secretary of State, did Hillary Clinton send classified information to an insecure email server? How large was the risk that attackers might get the information, and how much harm might that cause?
What are the costs and benefits of various communications protocols (e.g., the legal one during Clinton's tenure, the de facto one Clinton followed, or other possibilities), and how should we weight those costs and benefits?
How can we best forecast people's reliability on security issues? Are once-off mistakes like this predictive of future sloppiness? Are there better ways of predicting this?
"Meta" might include things like:
How much do voters care about Clinton's use of her email server?
How much will reporters cover this story, and how much is their coverage likely to influence voters?
What do various people (with no special information or expertise) believe about Clinton's email server, and how might these beliefs change their behavior?
I'll also consider discussions of abstract authority, principle, or symbolism more "meta" than concrete policy proposals and questions of fact.
This is too meta:
Meta stuff is real. Elections are real, and matter. Popularity, status, controversy, and Overton windows have real physical effects.
But it's possible to focus too much on one part of reality and neglect another. If you're driving a car while talking on the phone, the phone and your eyes are both perfectly good information channels; but if you allocate too little attention to the road, you still die.
When speaking the demon's name creates the demon
I claim:
There are many good ideas that start out discussed by blogs and journal articles for a long time, then get adopted by policymakers.
In many of these cases, you could delay adoption by many years by adding more sentences to the blog posts noting the political infeasibility or controversialness of the proposal. Or you could hasten adoption by making the posts just focus on analyzing the effects of the policy, without taking a moment to nervously look over their shoulder, without ritually bowing to the Overton window as though it were an authority on immigration law.
I also claim that this obeys the same basic causal dynamics as:
My friend Azzie posts something that I find cringe. So I decide to loudly and publicly (!) warn Azzie "hey, the thing you're doing is cringe!". Because, y'know, I want to help.
Regardless of how "cringe" the average reader would have considered the post, saying it out loud can only help strengthen the perceived level of cringe.
Or, suppose Bel overhears me and it doesn't cause her to see the post as more cringe. Still, it might make Bel worry that Cathy and other third parties think that the post is cringe. Which is a sufficient worry on its own to greatly change how Bel interacts with the post. Wouldn't want the cringe monster to come after you next!
This can result in:
Non-well-founded gaffes: statements that are controversial/offensive/impolitic largely or solely because some people think they sound like the kind of thing that would offend, alienate, or be disputed by a hypothetical third party.
Or, worse:
Even-less-than-non-well-founded gaffes: statements that are controversial/offensive/impolitic largely or solely because some people are worried that a hypothetical third party might think that a hypothetical fourth party might be offended, alienated, or unconvinced by the statement.
See: Common Knowledge and Miasma.
See: mimesis, herding, bystander effect, conformity instincts, coalitional instincts, and The World Forager Elite.
Regardless of how poli...
|
Dec 12, 2021 |
DeepMind: Generally capable agents emerge from open-ended play by Daniel Kokotajlo
03:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: DeepMind: Generally capable agents emerge from open-ended play, published by Daniel Kokotajlo on LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a linkpost for
EDIT: Also see paper and results compilation video!
Today, we published "Open-Ended Learning Leads to Generally Capable Agents," a preprint detailing our first steps to train an agent capable of playing many different games without needing human interaction data. ... The result is an agent with the ability to succeed at a wide spectrum of tasks — from simple object-finding problems to complex games like hide and seek and capture the flag, which were not encountered during training. We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task.
The neural network architecture we use provides an attention mechanism over the agent’s internal recurrent state — helping guide the agent’s attention with estimates of subgoals unique to the game the agent is playing. We’ve found this goal-attentive agent (GOAT) learns more generally capable policies.
Playing roughly 700,000 unique games in 4,000 unique worlds within XLand, each agent in the final generation experienced 200 billion training steps as a result of 3.4 million unique tasks. At this time, our agents have been able to participate in every procedurally generated evaluation task except for a handful that were impossible even for a human. And the results we’re seeing clearly exhibit general, zero-shot behaviour across the task space — with the frontier of normalised score percentiles continually improving.
Looking qualitatively at our agents, we often see general, heuristic behaviours emerge — rather than highly optimised, specific behaviours for individual tasks. Instead of agents knowing exactly the “best thing” to do in a new situation, we see evidence of agents experimenting and changing the state of the world until they’ve achieved a rewarding state. We also see agents rely on the use of other tools, including objects to occlude visibility, to create ramps, and to retrieve other objects. Because the environment is multiplayer, we can examine the progression of agent behaviours while training on held-out social dilemmas, such as in a game of “chicken”. As training progresses, our agents appear to exhibit more cooperative behaviour when playing with a copy of themselves. Given the nature of the environment, it is difficult to pinpoint intentionality — the behaviours we see often appear to be accidental, but still we see them occur consistently.
My hot take: This seems like a somewhat big deal to me. It's what I would have predicted, but that's scary, given my timelines. I haven't read the paper itself yet but I look forward to seeing more numbers and scaling trends and attempting to extrapolate... When I do I'll leave a comment with my thoughts.
EDIT: My warm take: The details in the paper back up the claims it makes in the title and abstract. This is the GPT-1 of agent/goal-directed AGI; it is the proof of concept. Two more papers down the line (and a few OOMs more compute), and we'll have the agent/goal-directed AGI equivalent of GPT-3. Scary stuff.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
The Least Convenient Possible World by Scott Alexander
07:08
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Least Convenient Possible World, published by Scott Alexander on LessWrong.
Related to: Is That Your True Rejection?
"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."
-- Black Belt Bayesian, via Rationality Quotes 13
Yesterday John Maxwell's post wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of the Trolley Problems:
You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he's there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?
I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:
It wouldn't be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.
On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and lost a valuable chance to examine the nature of morality.
So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"
He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:
1: Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things most atheists think of is this:
Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain. Or perhaps, as the Discordians claim, "Hell is reserved for people who believe in it, and the hottest levels of Hell are reserved for people who believe in it on the principle that they'll go there if they don't."
This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.
Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the ...
|
Dec 12, 2021 |
The LessWrong Team is now Lightcone Infrastructure, come work with us! by habryka
05:58
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The LessWrong Team is now Lightcone Infrastructure, come work with us!, published by habryka on LessWrong.
tl;dr: The LessWrong team is re-organizing as Lightcone Infrastructure. LessWrong is one of several projects we are working on to ensure the future of humanity goes well. We are looking to hire software engineers as well as generalist entrepreneurs in Berkeley who are excited to build infrastructure to ensure a good future.
I founded the LessWrong 2.0 team in 2017, with the goal of reviving LessWrong.com and reinvigorating the intellectual culture of the rationality community. I believed the community had great potential for affecting the long term future, but that the failing website was a key bottleneck to community health and growth.
Four years later, the website still seems very important. But when I step back and ask “what are the key bottlenecks for improving the longterm future?”, just ensuring the website is going well no longer seems sufficient.
For the past year, I’ve been re-organizing the LessWrong team into something with a larger scope. As I’ve learned from talking to over a thousand of you over the last 4 years, for most of you the rationality community is much larger than just this website, and your contributions to the future of humanity more frequently than not route through many disparate parts of our sprawling diaspora. Many more of those parts deserve attention and optimization than just LessWrong, and we seem to be the best positioned organization to make sure that happens.
I want to make sure that that whole ecosystem is successfully steering humanity towards safer and better futures, and more and more this has meant working on projects that weren't directly related to LessWrong.com:
A bit over a year ago we started building grant-making software for Jaan Tallinn and the Survival and Flourishing Fund, helping distribute over 30 million dollars to projects that I think have the potential to have a substantial effect on ensuring a flourishing future for humanity.
We helped run dozens of online meetups and events during the pandemic, and hundreds of in-person events for both this year’s and 2019’s ACX Meetups Everywhere
We helped build and run the EA Forum and the AI Alignment Forum,
We recently ran a 5-day retreat for 60-70 people whose work we think is highly impactful in reducing the likelihood of humanity's extinction,
We opened an in-person office space in the Bay Area for organizations that are working towards improving the long-term future of humanity.
As our projects outside of the LessWrong.com website multiplied, our name became more and more confusing when trying to explain to people what we were about.
This confusion reached a new peak when we started having a team that we were internally calling the "LessWrong team", which was responsible for running the website, distinct from all of our other projects, and which soon after caused me to utter the following sentence at one of our team meetings:
LessWrong really needs to figure out what the LessWrong team should set as a top priority for LessWrong
As one can imagine, the reaction from the rest of the team was confusion and laughter and at that point I knew we had to change our name and clarify our organizational mission.
So, after doing many rounds of coming up with names, asking many of our colleagues and friends (including GPT-3) for suggestions, we finally decided on:
I like the light cone as a symbol, because it represents the massive scale of opportunity that humanity is presented with. If things go right, we can shape almost the full light cone of humanity to be full of flourishing life. Billions of galaxies, billions of light years across, for some 10^36 (or so) years until the heat death of the universe.
Separately, I am excited about where Lightcone Infrastructure is head...
|
Dec 12, 2021 |
Your Dog is Even Smarter Than You Think by StyleOfDog
30:17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Your Dog is Even Smarter Than You Think, published by StyleOfDog on LessWrong.
Epistemic status: highly suggestive.
[EDIT: Added more info on research methods. Addressed some common criticism. Added titles for video links and a few new vids. Prevented revolution with a military coup d'état]
A combination of surveys and bayesian estimates[1] leads me to believe this community is interested in autism, cats, cognition, philosophy, and moral valence of animals. What I’m going to show you checks every box, so it boggles my mind that I don't see anyone talk about it. It has been bothering me so much that I decided to create an account and write this article.
I have two theories.
The community will ignore fascinating insight just because it's normie coded. Cute tiktok-famous poodle doesn't pattern match to "this contains real insight into animal cognition".
Nobody tried to sell this well enough.
I personally believe in the second one[2] and I'll try to sell it to you.
Stella
There’s an intervention to help non-verbal autistic kids communicate using “communication boards” (not to be confused with facilitated communication which has a bad reputation). It can be a paper board with pictures or it can be a board with buttons that say a word when pressed. In 2018 Christina Hunger (hungerforwords.com) - a speech pathologist working with autistic children using such boards - started to wonder if her dog was in fact autistic. Just kidding, she saw similarities in patterns of behavior between young kids she was working with ("learner" seems to be the term of art) and her dog. So she gave it a button that says “Outside” and expanded from there.
Now teaching a dog to press a button that says “outside” is not impressive or interesting to me. But then she kept adding buttons and her dog started to display capabilities for rudimentary syntax.
Stella the talking dog compilation - Stella answers whether she wants to play or eat, asks for help when one of her buttons breaks, alerts owner to possible "danger" outside.
Stella tells us she is all done with her bed!
Stella the Talking Dog says "Help Good Want Eat"
(most of the good videos are on her Instagram @hunger4words, not much is on YouTube)
Reaction from serious animal language researchers and animal cognition hobbyists was muted to non-existent, but dog moms ate this stuff up. One of them was Alexis.
Bunny
Most useful research is impractical to do within academia
The Importance of Methodology and Practical Matters
Ethology has some really interesting lessons about how important various practical matters and methodology can be when it comes to what your field can (and can't) produce. For example, it turns out that a surprising amount of useful data about animal cognition comes from experiments with dogs. [...] The main reason is because they will sit still for an fMRI to be the goodest boy (and to get hot dogs). [...] On the other side of that coin, elephants are clearly very smart, but we've done surprisingly little controlled experiments or close observation with them. Why? [...] They're damn inconvenient to keep in the basement of the biology building, they mess up the trees on alumni drive, and undergrads kept complaining about elephant-patty injuries while playing ultimate on the quad.
A lot of useful research isn't done because it's too inconvenient, too expensive or otherwise impractical to execute within confines of academia. This is a massive shaping force. Existence of ImageNet and its quirks is a stronger shaping force on AI research than all AI ethics committees combined.
Nobody had done this before because it takes months of everyday training to get interesting results. Once your dog gets the hang of it, you’re able to add more buttons faster, but it’s never quick. Dogs take a while to come u...
|
Dec 12, 2021 |
Embedded Interactive Predictions on LessWrong by Amandango
05:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Embedded Interactive Predictions on LessWrong, published by Amandango on LessWrong.
Ought and LessWrong are excited to launch an embedded interactive prediction feature. You can now embed binary questions into LessWrong posts and comments. Hover over the widget to see other people’s predictions, and click to add your own.
Try it out
[Embedded prediction widget: "Will there be more than 50 prediction questions embedded in LessWrong posts and comments this month?"]
How to use this
Create a question
Go to elicit.org/binary and create your question by typing it into the field at the top
Click on the question title, and click the copy button next to the title – it looks like this:
Paste the URL into your LW post or comment. It'll look like this in the editor:
Make a prediction
Click on the widget to add your own prediction
Click on your prediction line again to delete it
Link your accounts
Linking your LessWrong and Elicit accounts allows you to:
Filter for and browse all your LessWrong predictions on Elicit
Add notes to your LessWrong predictions on Elicit
See your calibration for your LessWrong predictions on Elicit
Predict on LessWrong questions in the Elicit app
To link your accounts:
Make an Elicit account
Send me (amanda@ought.org) an email with your LessWrong username and your Elicit account email
Motivation
We hope embedded predictions can prompt readers and authors to:
Actively engage with posts. By making predictions as they read, people have to stop and think periodically about how much they agree with the author.
Distill claims. For writers, integrating predictions challenges them to think more concretely about their claims and how readers might disagree.
Communicate uncertainty. Rather than just stating claims, writers can also communicate a confidence level.
Collect predictions. As a reader, you can build up a personal database of predictions as you browse LessWrong.
Get granular feedback. Writers can get feedback on their content at a more granular level than comments or upvotes.
By working with LessWrong on this, Ought hopes to make forecasting easier and more prevalent. As we learn more about how people think about the future, we can use Elicit to automate larger parts of the workflow and thought process until we end up with end-to-end automated reasoning that people endorse. Check out our blog post to see demos and more context.
Some examples of how to use this
To make specific predictions, like in Zvi’s post on COVID predictions
To express credences on claims like those in Daniel Kokotajlo’s soft takeoff post
Beyond LessWrong – if you want to integrate this into your blog or have other ideas for places you’d want to use this, let us know!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
A Fable of Science and Politics by Eliezer Yudkowsky
08:32
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A Fable of Science and Politics, published by Eliezer Yudkowsky on LessWrong.
In the time of the Roman Empire, civic life was divided between the Blue and Green factions. The Blues and the Greens murdered each other in single combats, in ambushes, in group battles, in riots. Procopius said of the warring factions: “So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place neither to the ties of marriage nor of relationship nor of friendship, and the case is the same even though those who differ with respect to these colors be brothers or any other kin.”[1] Edward Gibbon wrote: “The support of a faction became necessary to every candidate for civil or ecclesiastical honors.”[2]
Who were the Blues and the Greens? They were sports fans—the partisans of the blue and green chariot-racing teams.
Imagine a future society that flees into a vast underground network of caverns and seals the entrances. We shall not specify whether they flee disease, war, or radiation; we shall suppose the first Undergrounders manage to grow food, find water, recycle air, make light, and survive, and that their descendants thrive and eventually form cities. Of the world above, there are only legends written on scraps of paper; and one of these scraps of paper describes the sky, a vast open space of air above a great unbounded floor. The sky is cerulean in color, and contains strange floating objects like enormous tufts of white cotton. But the meaning of the word “cerulean” is controversial; some say that it refers to the color known as “blue,” and others that it refers to the color known as “green.”
In the early days of the underground society, the Blues and Greens contested with open violence; but today, truce prevails—a peace born of a growing sense of pointlessness. Cultural mores have changed; there is a large and prosperous middle class that has grown up with effective law enforcement and become unaccustomed to violence. The schools provide some sense of historical perspective; how long the battle between Blues and Greens continued, how many died, how little changed as a result. Minds have been laid open to the strange new philosophy that people are people, whether they be Blue or Green.
The conflict has not vanished. Society is still divided along Blue and Green lines, and there is a “Blue” and a “Green” position on almost every contemporary issue of political or cultural importance. The Blues advocate taxes on individual incomes, the Greens advocate taxes on merchant sales; the Blues advocate stricter marriage laws, while the Greens wish to make it easier to obtain divorces; the Blues take their support from the heart of city areas, while the more distant farmers and watersellers tend to be Green; the Blues believe that the Earth is a huge spherical rock at the center of the universe, the Greens that it is a huge flat rock circling some other object called a Sun. Not every Blue or every Green citizen takes the “Blue” or “Green” position on every issue, but it would be rare to find a city merchant who believed the sky was blue, and yet advocated an individual tax and freer marriage laws.
The Underground is still polarized; an uneasy peace. A few folk genuinely think that Blues and Greens should be friends, and it is now common for a Green to patronize a Blue shop, or for a Blue to visit a Green tavern. Yet from a truce originally born of exhaustion, there is a quietly growing spirit of tolerance, even friendship.
One day, the Underground is shaken by a minor earthquake. A sightseeing party of six is caught in the tremblor while looking at the ruins of ancient dwellings in the upper caverns. They feel the brief movement of the rock under their feet, and one of the tourists trips and scrapes her knee. The...
|
Dec 12, 2021 |
Seven Years of Spaced Repetition Software in the Classroom
53:23
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Seven Years of Spaced Repetition Software in the Classroom, published by tanagrabeast on LessWrong.
Description
This is a reflective essay and report on my experiences using Spaced Repetition Software (SRS) in an American high school classroom. It follows my 2015 and 2016 posts on the same topic.
Because I value concise summaries in non-fiction, I provide one immediately below. However, I also believe in the power of narrative, in carefully unfolding a story so as to maximize reader engagement and impact. As I have applied such narrative considerations in writing this post, I consider the following summary to be a spoiler.
I’ll let you decide what to do with that information.
Summary (spoilers)
My earlier push for classroom SRS solutions was driven by a belief I came to see as fallacious: that forgetting is the undoing of learning. This epistemic shift drove me to abandon designs for a custom app that would have integrated whole-class and individual SRS functions.
While I still see value in classroom use of Spaced Repetition Software, especially in basic language acquisition, I have greatly reduced its use in my own classes.
In my third year of experiments (2016-17), I used a windfall of classroom computers to give students supervised time to independently study using an SRS app with individual profiles. I found longer-term average performance to be slightly worse than under the whole-class group study model, though students of high intelligence and motivation saw slight improvements.
Intro and response to Piotr Woźniak
I have recently received a number of requests to revisit the topic of classroom SRS after years of silence on the subject. Understandably, the term “postmortem” has come up more than once. Did I hit a dead end? Do I still use it?
Also, I was informed that SRS founding father Piotr Woźniak recently added a page to his SuperMemo wiki in which he quoted me at length and claimed that SRS doesn’t belong in the classroom.
Well, I don’t have much in the way of rebuttal, because Woźniak’s main goal with the page seems to be to use my experience as ammunition against the perpetuation of school-as-we-know-it, which seems like a worthy crusade. He introduces my earlier classroom SRS posts by saying, “This teacher could write the same articles with the same conclusions. Only the terminology would differ.” I’ll take that as high praise.
If I were to quibble, it would be with the part shortly after this, where he says:
The entire analysis is made with an important assumption: "school is good, school is inevitable, and school is here to stay, so we better learn to live with it".
Inevitable? Maybe. Here to stay? Realistically, yes. But good? At best, I might describe our educational system as an “inadequate equilibrium”. At worst? A pit so deep we still don’t know what’s at the bottom, except that it eats souls.
Other than that, let me reiterate my long-running agreement with Woźniak that SRS is best when used by a self-motivated individual, and that my classroom antics are an ugly hack around the fact that self-motivation is a rare element this deep in the mines.
Anyone who can show us a way out will have my attention. In the meantime, I’ll do my best to keep a light on.
Prologue
At the end of my 2016 post, I teased a peek at a classroom SRS+ app I was preparing to build. It would have married whole-class and individual study functions with some other clever features to reduce teacher workload.
I had a 10k word document in hand: a mix of rationale, feature descriptions, and hypothetical “user stories”. I wasn’t looking for funding or a co-founder, just some technical suggestions and moral support. I would have been my own first user, and I had to keep my day job for that anyway.
But each time I read my draft, I had this growing, sickening sense that I was lying ...
|
Dec 12, 2021 |
Lies, Damn Lies, and Fabricated Options by Duncan_Sabien
22:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Lies, Damn Lies, and Fabricated Options, published by Duncan_Sabien on the LessWrong.
This is an essay about one of those "once you see it, you will see it everywhere" phenomena. It is a psychological and interpersonal dynamic roughly as common, and almost as destructive, as motte-and-bailey, and at least in my own personal experience it's been quite valuable to have it reified, so that I can quickly recognize the commonality between what I had previously thought of as completely unrelated situations.
The original quote referenced in the title is "There are three kinds of lies: lies, damned lies, and statistics."
Background 1: Gyroscopes
Gyroscopes are weird.
Except they're not. They're quite normal and mundane and straightforward. The weirdness of gyroscopes is a map-territory confusion—gyroscopes seem weird because my map is poorly made, and predicts that they will do something other than their normal, mundane, straightforward thing.
In large part, this is because I don't have the consequences of physical law engraved deeply enough into my soul that they make intuitive sense.
I can imagine a world that looks exactly like the world around me, in every way, except that in this imagined world, gyroscopes don't have any of their strange black-magic properties. It feels coherent to me. It feels like a world that could possibly exist.
"Everything's the same, except gyroscopes do nothing special." Sure, why not.
But in fact, this world is deeply, deeply incoherent. It is Not Possible with capital letters. And a physicist with sufficiently sharp intuitions would know this—would be able to see the implications of a world where gyroscopes "don't do anything weird," and tell me all of the ways in which reality falls apart.
The seeming coherence of the imaginary world where gyroscopes don't balance and don't precess and don't resist certain kinds of motion is a product of my own ignorance, and of the looseness with which I am tracking how different facts fit together, and what the consequences of those facts are. It's like a toddler thinking that they can eat their slice of cake, and still have that very same slice of cake available to eat again the next morning.
Background 2: H2O and XYZ
In the book Labyrinths of Reason, author William Poundstone delves into various thought experiments (like Searle's Chinese Room) to see whether they're actually coherent or not.
In one such exploration, he discusses the idea of a Twin Earth, on the opposite side of the sun, exactly like Earth in every way except that it doesn't have water. Instead, it has a chemical, labeled XYZ, which behaves like water and occupies water's place in biology and chemistry, but is unambiguously distinct.
Once again, this is the sort of thing humans are capable of imagining. I can nod along and say "sure, a liquid that behaves just like water, but isn't."
But a chemist, intimately familiar with the structure and behavior of molecules and with the properties of the elements and their isotopes, would be throwing up red flags.
"Just like water," they might say, and I would nod.
"Liquid, and transparent, with a density of 997 kilograms per meter cubed."
"Sure," I would reply.
“Which freezes and melts at exactly 0° Celsius, and which boils and condenses at exactly 100° Celsius.”
"Yyyyeahhhh," I would say, uneasiness settling in.
"Which makes up roughly 70% of the mass of the bodies of the humans of Twin Earth, and which is a solvent for hydrophilic substances, but not hydrophobic ones, and which can hold ions and polar substances in solution."
"Um."
The more we drill down into what we mean by behaves exactly like water, the more it starts to become clear that there just isn't a possible substance which behaves exactly like water, but isn't. There are only so many configurations of electrons and protons and neutrons ...
|
Dec 12, 2021 |
Book summary: Unlocking the Emotional Brain by Kaj_Sotala
35:06
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Book summary: Unlocking the Emotional Brain, published by Kaj_Sotala on the LessWrong.
If the thesis in Unlocking the Emotional Brain (UtEB) is even half-right, it may be one of the most important books that I have read. Written by the psychotherapists Bruce Ecker, Robin Ticic and Laurel Hulley, it claims to offer a neuroscience-grounded, comprehensive model of how effective therapy works. In so doing, it also happens to formulate its theory in terms of belief updating, helping explain how the brain models the world and what kinds of techniques allow us to actually change our minds. Furthermore, if UtEB is correct, it also explains why rationalist techniques such as Internal Double Crux [1 2 3] work.
UtEB’s premise is that much if not most of our behavior is driven by emotional learning. Intense emotions generate unconscious predictive models of how the world functions and what caused those emotions to occur. The brain then uses those models to guide our future behavior. Emotional issues and seemingly irrational behaviors are generated from implicit world-models (schemas) which have been formed in response to various external challenges. Each schema contains memories relating to times when the challenge has been encountered and mental structures describing both the problem and a solution to it.
According to the authors, the key for updating such schemas involves a process of memory reconsolidation, originally identified in neuroscience. The emotional brain’s learnings are usually locked and not modifiable. However, once an emotional schema is activated, it is possible to simultaneously bring into awareness knowledge contradicting the active schema. When this happens, the information contained in the schema can be overwritten by the new knowledge.
While I am not convinced that the authors are entirely right, many of the book’s claims definitely feel like they are pointing in the right direction. I will discuss some of my caveats and reservations after summarizing some of the book’s claims in general. I also consider its model in the light of an issue of a psychology/cognitive science journal devoted to discussing a very similar hypothesis.
Emotional learning
In UtEB’s model, emotional learning forms the foundation of much of our behavior. It sets our basic understanding about what situations are safe or unsafe, desirable or undesirable. The authors do not quite say it explicitly, but the general feeling I get is that the subcortical emotional processes set many of the priorities for what we want to achieve, with higher cognitive functions then trying to figure out how to achieve it - often remaining unaware of what exactly they are doing.
UtEB’s first detailed example of an emotional schema comes from the case study of a man in his thirties they call Richard. He had been consistently successful and admired at work, but still suffered from serious self-doubt and low confidence at his job. On occasions such as daily technical meetings, when he considered saying something, he experienced thoughts including “Who am I to think I know what’s right?”, “This could be wrong” and “Watch out - don’t go out on a limb”. These prevented him from expressing any opinions.
From the point of view of the authors, these thoughts have a definite cause - Richard has “emotional learnings according to which it is adaptively necessary to go into negative thoughts and feelings towards [himself].” The self-doubts are a strategy which his emotional brain has generated for solving some particular problem.
Richard’s therapist guided Richard to imagine what it would feel like if he was at one of his work meetings, made useful comments, and felt confident in his knowledge while doing so. This was intended to elicit information about what Richard’s emotional brain predicted would happen if it failed ...
|
Dec 12, 2021 |
Roles are Martial Arts for Agency by Eneasz
04:54
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Roles are Martial Arts for Agency, published by Eneasz on the LessWrong.
A long time ago I thought that Martial Arts simply taught you how to fight – the right way to throw a punch, the best technique for blocking and countering an attack, etc. I thought training consisted of recognizing these attacks and choosing the correct responses more quickly, as well as simply faster/stronger physical execution of same. It was later that I learned that the entire purpose of martial arts is to train your body to react with minimal conscious deliberation, to remove “you” from the equation as much as possible.
The reason is of course that conscious thought is too slow. If you have to think about what you’re doing, you’ve already lost. It’s been said that if you had to think about walking to do it, you’d never make it across the room. Fighting is no different. (It isn’t just fighting either – anything that requires quick reaction suffers when exposed to conscious thought. I used to love Rock Band. One day when playing a particularly difficult guitar solo on expert I nailed 100%. Except “I” didn’t do it at all. My eyes saw the notes, my hands executed them, and nowhere was I involved in the process. It was both exhilarating and creepy, and I basically dropped the game soon after.)
You’ve seen how long it takes a human to learn to walk effortlessly. That's a situation with a single constant force, an unmoving surface, no agents working against you, and minimal emotional agitation. No wonder it takes hundreds of hours, repeating the same basic movements over and over again, to attain even a basic level of martial mastery. To make your body react correctly without any thinking involved. When Neo says “I Know Kung Fu” he isn’t surprised that he now has knowledge he didn’t have before. He’s amazed that his body now reacts in the optimal manner when attacked without his involvement.
All of this is simply focusing on pure reaction time – it doesn’t even take into account the emotional terror of another human seeking to do violence to you. It doesn’t capture the indecision of how to respond, the paralysis of having to choose between outcomes which are all awful and you don’t know which will be worse, and the surge of hormones. The training of your body to respond without your involvement bypasses all of those obstacles as well.
This is the true strength of Martial Arts – eliminating your slow, conscious deliberation and acting while there is still time to do so.
Roles are the Martial Arts of Agency.
When one is well-trained in a certain Role, one defaults to certain prescribed actions immediately and confidently. I’ve acted as a guy standing around watching people faint in an overcrowded room, and I’ve acted as the guy telling people to clear the area. The difference was in one I had the role of Corporate Pleb, and the other I had the role of Guy Responsible For This Shit. You know the difference between the guy at the bar who breaks up a fight, and the guy who stands back and watches it happen? The former thinks of himself as the guy who stops fights. They could even be the same guy, on different nights. The role itself creates the actions, and it creates them as an immediate reflex. By the time corporate-me is done thinking “Huh, what’s this? Oh, this looks bad. Someone fainted? Wow, never seen that before. Damn, hope they’re OK. I should call 911.” enforcer-me has already yelled for the room to clear and whipped out a phone.
Roles are the difference between Hufflepuffs gawking when Neville tumbles off his broom (Protected), and Harry screaming “Wingardium Leviosa” (Protector). Draco insulted them afterwards, but it wasn’t a fair insult – they never had the slightest chance to react in time, given the role they were in. Roles are the difference between Minerva ordering Hagrid to stay wi...
|
Dec 12, 2021 |
Dark Arts of Rationality by So8res
28:37
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Dark Arts of Rationality, published by So8res on the LessWrong.
Today, we're going to talk about Dark rationalist techniques: productivity tools which seem incoherent, mad, and downright irrational. These techniques include:
Willful Inconsistency
Intentional Compartmentalization
Modifying Terminal Goals
I expect many of you are already up in arms. It seems obvious that consistency is a virtue, that compartmentalization is a flaw, and that one should never modify their terminal goals.
I claim that these 'obvious' objections are incorrect, and that all three of these techniques can be instrumentally rational.
In this article, I'll promote the strategic cultivation of false beliefs and condone mindhacking on the values you hold most dear. Truly, these are Dark Arts. I aim to convince you that sometimes, the benefits are worth the price.
Changing your Terminal Goals
In many games there is no "absolutely optimal" strategy. Consider the Prisoner's Dilemma. The optimal strategy depends entirely upon the strategies of the other players. Entirely.
Intuitively, you may believe that there are some fixed "rational" strategies. Perhaps you think that even though complex behavior is dependent upon other players, there are still some constants, like "Never cooperate with DefectBot". DefectBot always defects against you, so you should never cooperate with it. Cooperating with DefectBot would be insane. Right?
Wrong. If you find yourself on a playing field where everyone else is a TrollBot (players who cooperate with you if and only if you cooperate with DefectBot) then you should cooperate with DefectBots and defect against TrollBots.
Consider that. There are playing fields where you should cooperate with DefectBot, even though that looks completely insane from a naïve viewpoint. Optimality is not a feature of the strategy, it is a relationship between the strategy and the playing field.
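As a concrete illustration of that claim, here is a minimal sketch of the TrollBot playing field described above. The payoff numbers are standard Prisoner's Dilemma values I have assumed for illustration; the post itself does not give any.

```python
# TrollBots cooperate with you if and only if you cooperate with DefectBot.
# Compare "cooperate with DefectBot" against "never cooperate with DefectBot",
# defecting against the TrollBots in both cases.

PAYOFF = {  # (my_move, their_move) -> my score; assumed PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def total_score(cooperate_with_defectbot, n_trollbots=10):
    # One round against DefectBot, which always defects.
    my_move = "C" if cooperate_with_defectbot else "D"
    score = PAYOFF[(my_move, "D")]
    # One round against each TrollBot; each cooperates only if it saw me
    # cooperate with DefectBot. I defect against every TrollBot.
    troll_move = "C" if cooperate_with_defectbot else "D"
    score += n_trollbots * PAYOFF[("D", troll_move)]
    return score

print(total_score(True))   # cooperate with DefectBot: 0 + 10*5 = 50
print(total_score(False))  # refuse on principle:      1 + 10*1 = 11
```

On this field, the "insane" move of cooperating with DefectBot comes out well ahead, which is the point: optimality belongs to the strategy-plus-field pair, not to the strategy alone.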
Take this lesson to heart: in certain games, there are strange playing fields where the optimal move looks completely irrational.
I'm here to convince you that life is one of those games, and that you occupy a strange playing field right now.
Here's a toy example of a strange playing field, which illustrates the fact that even your terminal goals are not sacred:
Imagine that you are completely self-consistent and have a utility function. For the sake of the thought experiment, pretend that your terminal goals are distinct, exclusive, orthogonal, and clearly labeled. You value your goals being achieved, but you have no preferences about how they are achieved or what happens afterwards (unless the goal explicitly mentions the past/future, in which case achieving the goal puts limits on the past/future). You possess at least two terminal goals, one of which we will call A.
Omega descends from on high and makes you an offer. Omega will cause your terminal goal A to become achieved over a certain span of time, without any expenditure of resources. As a price of taking the offer, you must switch out terminal goal A for terminal goal B. Omega guarantees that B is orthogonal to A and all your other terminal goals. Omega further guarantees that you will achieve B using less time and resources than you would have spent on A. Any other concerns you have are addressed via similar guarantees.
Clearly, you should take the offer. One of your terminal goals will be achieved, and while you'll be pursuing a new terminal goal that you (before the offer) don't care about, you'll come out ahead in terms of time and resources which can be spent achieving your other goals.
So the optimal move, in this scenario, is to change your terminal goals.
There are times when the optimal move of a rational agent is to hack its own terminal goals.
You may find this counter-intuitive. It helps to remember that "optimality" depen...
|
Dec 12, 2021 |
Anti-social Punishment by Martin Sustrik
09:55
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Anti-social Punishment, published by Martin Sustrik on the LessWrong.
This is a cross post from 250bpm.com.
Introduction
There's a trope among the Slovak intellectual elite depicting an average Slovak as living in a village, sitting in a local pub, drinking Borovička, criticizing everyone and everything but not willing to lift a finger to improve things. Moreover, it is assumed that if you actually tried to make things better, said individual would throw dirt at you and place obstacles in your way.
I always assumed that this caricature was silly. It was partly because I have a soft spot for Slovak rural life but mainly because such behavior makes absolutely no sense from a game-theoretic point of view. If a do-gooder is stupid enough to try to altruistically improve your life, why go to the trouble of actively opposing them? Why not just sit safely hidden in the pub, drink some more Borovička and wait until they are done?
Well, it turns out that things are far more complex than I thought.
Public goods game
Benedikt Herrmann, Christian Thöni and Simon Gächter did a study of how people from different societies deal with cooperation and punishment. You can find the paper here and supporting material here.
The study is based on the "public goods" game. The game works as follows:
There are four players. Each player gets 20 tokens to start with. Every participant either keeps them or passes some of them into a common pool. After all the players are done with their moves, each of them, irrespective of how much they contributed, gets tokens equal to 40% of all the tokens in the common pool. The participants cannot communicate with each other and are unaware of each other's identities. The game is repeated, with the same players, 10 times in a row.
The earnings, obviously, depend not only on the subject's own move but also on the willingness of the other players to cooperate and put tokens into the common pool. But free riders get an advantage: they keep their original tokens and also get their share from the pool.
To get a feeling of the payoffs, let's have a look at the single-round earnings in the extreme case where each participant either puts all their tokens into the pool ("cooperator") or keeps all the tokens for themselves ("free-rider"):
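(The payoff figures that followed here are not included in this excerpt; the short sketch below simply recomputes single-round earnings from the rules stated above, with example cases of my own choosing.)

```python
# Single-round earnings in the public goods game described above:
# 4 players, 20 tokens each, and every player receives 40% of the
# common pool regardless of their own contribution.

def earnings(contributions):
    pool = sum(contributions)
    return [20 - c + 0.4 * pool for c in contributions]

print(earnings([20, 20, 20, 20]))  # all cooperate: each earns 32
print(earnings([0, 0, 0, 0]))      # all free-ride: each keeps 20
print(earnings([0, 20, 20, 20]))   # one free-rider earns 44; each cooperator earns 24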
Public goods game with punishment
There's a variant of the "public goods game" where players are able to punish each other after each round of the game. The mechanism is simple. When the round ends the participants are informed about how much each of them has put into the common pool. Then they decide whether to spend some of their tokens to administer punishment. For each token spent on punishment you can subtract 3 tokens from the earnings of an opponent. The players know that they've been punished but they are not informed about who exactly has punished them.
Participant pools
The researchers were interested in comparing the results of the game among different societies:
Our research strategy was to conduct the experiments with comparable social groups from complex developed societies with the widest possible range of cultural and economic backgrounds to maximize chances of observing cross-societal differences in punishment and cooperation. The societies represented in our participant pools diverge strongly according to several widely used criteria developed by social scientists in order to characterize societies. This variation, covering a large range of the worldwide available values of the respective criteria, provides us with a novel test for seeing whether societal differences between complex societies have any impact on experimentally observable disparities in cooperation and punishment behavior. ... To minimize sociodemographic variability, we conducted all experiments with university undergraduates who were similar in age, shared an (...
|
Dec 12, 2021 |
How to Beat Procrastination by lukeprog
33:02
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How to Beat Procrastination, published by lukeprog on the LessWrong.
Part of the sequence: The Science of Winning at Life
My own behavior baffles me. I find myself doing what I hate, and not doing what I really want to do!
- Saint Paul (Romans 7:15)
Once you're trained in BayesCraft, it may be tempting to tackle classic problems "from scratch" with your new Rationality Powers. But often, it's more effective to do a bit of scholarship first and at least start from the state of our scientific knowledge on the subject.
Today, I want to tackle procrastination by summarizing what we know about it, and how to overcome it.
Let me begin with three character vignettes...
Eddie attended the sales seminar, read all the books, and repeated the self-affirmations in the mirror this morning. But he has yet to make his first sale. Rejection after rejection has demoralized him. He organizes his desk, surfs the internet, and puts off his cold calls until potential clients are leaving for the day.
Three blocks away, Valerie stares at a blank document in Microsoft Word. Her essay assignment on municipal politics, due tomorrow, is mind-numbingly dull. She decides she needs a break, texts some friends, watches a show, and finds herself even less motivated to write the paper than before. At 10pm she dives in, but the result reflects the time she put into it: it's terrible.
In the next apartment down, Tom is ahead of the game. He got his visa, bought his plane tickets, and booked time off for his vacation to the Dominican Republic. He still needs to reserve a hotel room, but that can be done anytime. Tom keeps pushing the task forward a week as he has more urgent things to do, and then forgets about it altogether. As he's packing, he remembers to book the room, but by now there are none left by the beach. When he arrives, he finds his room is 10 blocks from the beach and decorated with dead mosquitos.
Eddie, Valerie, and Tom are all procrastinators, but in different ways.1
Eddie's problem is low expectancy. By now, he expects only failure. Eddie has low expectancy of success from making his next round of cold calls. Results from 39 procrastination studies show that low expectancy is a major cause of procrastination.2 You doubt your ability to follow through with the diet. You don't expect to get the job. You really should be going out and meeting girls and learning to flirt better, but you expect only rejection now, so you procrastinate. You have learned to be helpless.
Valerie's problem is that her task has low value for her. We all put off what we dislike.3 It's easy to meet up with your friends for drinks or start playing a videogame; not so easy to start doing your taxes. This point may be obvious, but it's nice to see it confirmed in over a dozen scientific studies. We put off things we don't like to do.
But the strongest predictor of procrastination is Tom's problem: impulsiveness. It would have been easy for Tom to book the hotel in advance, but he kept getting distracted by more urgent or interesting things, and didn't remember to book the hotel until the last minute, which left him with a poor selection of rooms. Dozens of studies have shown that procrastination is closely tied to impulsiveness.4
Impulsiveness fits into a broader component of procrastination: time. An event's impact on our decisions decreases as its temporal distance from us increases.5 We are less motivated by delayed rewards than by immediate rewards, and the more impulsive you are, the more your motivation is affected by such delays.
Expectancy, value, delay, and impulsiveness are the four major components of procrastination. Piers Steel, a leading researcher on procrastination, explains:
Decrease the certainty or the size of a task's reward - its expectancy or its value - and you are unlikely to pursue its compl...
|
Dec 12, 2021 |
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists by Zack_M_Davis
11:18
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists, published by Zack_M_Davis on the LessWrong.
Followup to: What Evidence Filtered Evidence?
In "What Evidence Filtered Evidence?", we are asked to consider a scenario involving a coin that is either biased to land Heads 2/3rds of the time, or Tails 2/3rds of the time. Observing Heads is 1 bit of evidence for the coin being Heads-biased (because the Heads-biased coin lands Heads with probability 2/3, the Tails-biased coin does so with probability 1/3, the likelihood ratio of these is
2
3
1
3
2
, and
log
2
2
1
), and analogously and respectively for Tails.
If such a coin is flipped ten times by someone who doesn't make literally false statements, who then reports that the 4th, 6th, and 9th flips came up Heads, then the update to our beliefs about the coin depends on what algorithm the not-lying[1] reporter used to decide to report those flips in particular. If they always report the 4th, 6th, and 9th flips independently of the flip outcomes—if there's no evidential entanglement between the flip outcomes and the choice of which flips get reported—then reported flip-outcomes can be treated the same as flips you observed yourself: three Headses is 3 × 1 = 3 bits of evidence in favor of the hypothesis that the coin is Heads-biased. (So if we were initially 50:50 on the question of which way the coin is biased, our posterior odds after collecting 3 bits of evidence for a Heads-biased coin would be 2^3 : 1 = 8:1, or a probability of 8/(1 + 8) ≈ 0.89 that the coin is Heads-biased.)
On the other hand, if the reporter mentions only and exactly the flips that came out Heads, then we can infer that the other 7 flips came out Tails (if they didn't, the reporter would have mentioned them), giving us posterior odds of 2^3 : 2^7 = 1:16, or a probability of around 0.06 that the coin is Heads-biased.
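For readers who want to check the arithmetic, here is a minimal sketch of both updates, assuming a 50:50 prior over the two bias hypotheses; the numbers are the ones worked out above, but the code itself is my own illustration rather than something from the post.

```python
# Compare the two reporting algorithms described above.
from fractions import Fraction

def p_heads_biased(heads, tails):
    # Likelihood of the observed/inferred flips under each bias hypothesis,
    # combined with a 50:50 prior.
    like_h = Fraction(2, 3) ** heads * Fraction(1, 3) ** tails
    like_t = Fraction(1, 3) ** heads * Fraction(2, 3) ** tails
    return like_h / (like_h + like_t)

# Reporter picks the 4th, 6th, and 9th flips regardless of outcome:
# the three Headses count as ordinary observations.
print(float(p_heads_biased(3, 0)))  # 8/9, about 0.89

# Reporter mentions only and exactly the Heads flips: the other seven
# flips must have been Tails.
print(float(p_heads_biased(3, 7)))  # 1/17, about 0.06
```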
So far, so standard. (You did read the Sequences, right??) What I'd like to emphasize about this scenario today, however, is that while a Bayesian reasoner who knows the non-lying reporter's algorithm of what flips to report will never be misled by the selective reporting of flips, a Bayesian with mistaken beliefs about the reporter's decision algorithm can be misled quite badly: compare the 0.89 and 0.06 probabilities we just derived given the same reported outcomes, but different assumptions about the reporting algorithm.
If the coin gets flipped a sufficiently large number of times, a reporter whom you trust to be impartial (but isn't), can make you believe anything she wants without ever telling a single lie, just with appropriate selective reporting. Imagine a very biased coin that comes up Heads 99% of the time. If it gets flipped ten thousand times, 100 of those flips will be Tails (in expectation), giving a selective reporter plenty of examples to point to if she wants to convince you that the coin is extremely Tails-biased.
Toy models about biased coins are instructive for constructing examples with explicitly calculable probabilities, but the same structure applies to any real-world situation where you're receiving evidence from other agents, and you have uncertainty about what algorithm is being used to determine what reports get to you. Reality is like the coin's bias; evidence and arguments are like the outcome of a particular flip. Wrong theories will still have some valid arguments and evidence supporting them (as even a very Heads-biased coin will come up Tails sometimes), but theories that are less wrong will have more.
If selective reporting is mostly due to the idiosyncratic bad intent of rare malicious actors, then you might hope for safety in (the law of large) numbers: if Helga in particular is systematically more likely to report Headses than Tailses that she sees, then...
|
Dec 12, 2021 |
CFAR Participant Handbook now available to all by Duncan_Sabien
01:04
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: CFAR Participant Handbook now available to all, published by Duncan_Sabien on the LessWrong.
Google Drive PDF
Hey, guys—I wrote this, and CFAR has recently decided to make it publicly available. Much of it involved rewriting the original work of others, such as Anna Salamon, Kenzie Ashkie, Val Smith, Dan Keys, and other influential CFAR founders and staff, but the actual content was filtered through me as single author as part of getting everything into a consistent and coherent shape.
I have mild intentions to update it in the future with a handful of other new chapters that were on the list, but which didn't get written before CFAR let me go. Note that such updates will likely not be current-CFAR-approved, but will still derive directly from my understanding of the curriculum as former Curriculum Director.
Thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
How I Ended Up Non-Ambitious by Swimmer963
15:24
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How I Ended Up Non-Ambitious, published by Swimmer963 on the LessWrong.
I have a confession to make. My life hasn’t changed all that much since I started reading Less Wrong. Hindsight bias makes it hard to tell, I guess, but I feel like pretty much the same person, or at least the person I would have evolved towards anyway, whether or not I spent those years reading about the Art of rationality.
But I can’t claim to be upset about it either. I can’t say that rationality has undershot my expectations. I didn’t come to Less Wrong expecting, or even wanting, to become the next Bill Gates; I came because I enjoyed reading it, just like I’ve enjoyed reading hundreds of books and websites.
In fact, I can’t claim that I would want my life to be any different. I have goals and I’m meeting them: my grades are good, my social skills are slowly but steadily improving, I get along well with my family, my friends, and my boyfriend. I’m in good shape financially despite making $12 an hour as a lifeguard, and in a year and a half I’ll be making over $50,000 a year as a registered nurse. I write stories, I sing in church, I teach kids how to swim. Compared to many people my age, I'm pretty successful. In general, I’m pretty happy.
Yvain suggested akrasia as a major limiting factor for why rationalists fail to have extraordinarily successful lives. Maybe that’s true for some people; maybe they are some readers and posters on LW who have big, exciting, challenging goals that they consistently fail to reach because they lack motivation and procrastinate. But that isn’t true for me. Though I can’t claim to be totally free of akrasia, it hasn’t gotten much in the way of my goals.
However, there are some assumptions that go too deep to be accessed by introspection, or even by LW meetup discussions. Sometimes you don't even realize they’re assumptions until you meet someone who assumes the opposite, and try to figure out why they make you so defensive. At the community meetup I described in my last post, a number of people asked me why I wasn’t studying physics, since I was obviously passionate about it. Trust me, I had plenty of good justifications for them–it’s a question I’ve been asked many times–but the question itself shouldn’t have made me feel attacked, and it did.
Aside from people in my life, there are some posts on Less Wrong that cause the same reaction of defensiveness. Eliezer’s Mandatory Secret Identities is a good example; my automatic reaction was “well, why do you assume everyone here wants to have a super cool, interesting life? In fact, why do you assume everyone wants to be a rationality instructor? I don’t. I want to be a nurse.”
After a bit of thought, I’ve concluded that there’s a simple reason why I’ve achieved all my life goals so far (and why learning about rationality failed to affect my achievements): they’re not hard goals. I’m not ambitious. As far as I can tell, not being ambitious is such a deep part of my identity that I never even noticed it, though I’ve used the underlying assumptions as arguments for why my goals and life decisions were the right ones.
But if there’s one thing Less Wrong has taught me, it’s that assumptions are to be questioned. There are plenty of good reasons to choose reasonable goals instead of impossible ones, but doing things on reflex is rarely better than thinking through them, especially for long-term goal making, where I do have time to think it through, Type 2 style.
What do I mean by ‘ambition’?
Here is the definition from my desktop dictionary:
(1) A strong desire to do or to achieve something, typically requiring determination and hard work: her ambition was to become a model | he achieved his ambition of making a fortune.
(2) Desire and determination to achieve success: life offered few opportunities for young people with...
|
Dec 12, 2021 |
A tale from Communist China by Wei_Dai
03:27
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A tale from Communist China, published by Wei_Dai on the LessWrong.
Write a Review
Judging from the upvotes, it seems like people are quite interested in my grandparents' failure to emigrate from Communist China before it was too late, so I thought I'd elaborate here with more details and for greater visibility. They were all actually supporters of the Communist Party at the beginning, and saw it as potential saviors/liberators of the Chinese people and nation. They were also treated well at the beginning - one of them (being one of few people in China with a higher education) was given a high official post, and another retained a manager position at the factory that he used to own and operate.
The latter side of the family considered moving away from China prior to the Communist victory since they would be classified as "capitalists" under the new regime, but were reassured by high level party members that they would be treated well if they stayed and helped to build the "new China". They actually did relatively ok, aside from most of their property being confiscated/nationalized early on and their living standards deteriorating steadily until they were forced to share the house that they used to own with something like 4 other families, and them being left with a single room in which to live.
The other side were more straightforward "true believers" who supported Communism at an early stage, as they were part of the educated class who generally saw it as the right side of history, something that would help China leapfrog the West in terms of both social and economic progress. My grandmother on that side even tried to run away from her family to join the revolution after graduating from the equivalent of high school. Just before the Communists took power, my grandmother changed her mind, and wanted to move away from China and even got the required visas. (I asked my father why, and he said "women's intuition" which I'm not sure is really accurate but he couldn't provide further details.) But my grandfather still believed in the cause so they stayed. After the Communist victory, there was still about a year before the borders were fully shut, but it seemed like things were moving in a good direction and disproved my grandmother's worries. My grandfather was given an important post and went around making important speeches and so on.
Unfortunately he was not very good at playing politics, as his background was in physics (although plenty of natural politicians also fared quite badly during the various "movements"). His position started attracting envy from those who thought he didn't contribute enough to the revolution to earn it. He was demoted and moved from city to city as the Party assigned him to various jobs. Finally, some kind of political dispute near the start of the Cultural Revolution led to his opponents digging up an incident in his distant past, which was then used as an excuse to persecute him in various ways, including confining him in a makeshift prison for ten years. He died shortly after the Cultural Revolution ended and he was released, just before I was born. According to my father, it was from over-eating due to finally being released from the extreme deprivation of his confinement.
BTW, I wasn't told any of this when I was still a kid living in China. My parents had of course grown quite disillusioned by Communism and the Communist Party by then, but probably didn't think it would be to my advantage to show any signs of resistance to the indoctrination and propaganda that I was being fed in school and in the mass media. So I can also testify from personal experience that if those in charge of schools and the media want to, and there's enough social pressure to not resist, it's not very hard to brainwash a child.
Thanks for listening. to help us ...
|
Dec 12, 2021 |
What Money Cannot Buy by johnswentworth
05:57
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: What Money Cannot Buy, published by johnswentworth on the LessWrong.
Paul Graham:
The problem is, if you're not a hacker, you can't tell who the good hackers are. A similar problem explains why American cars are so ugly. I call it the design paradox. You might think that you could make your products beautiful just by hiring a great designer to design them. But if you yourself don't have good taste, how are you going to recognize a good designer? By definition you can't tell from his portfolio. And you can't go by the awards he's won or the jobs he's had, because in design, as in most fields, those tend to be driven by fashion and schmoozing, with actual ability a distant third. There's no way around it: you can't manage a process intended to produce beautiful things without knowing what beautiful is. American cars are ugly because American car companies are run by people with bad taste.
I don’t know how much I believe this claim about cars, but I certainly believe it about software. A startup without a technical cofounder will usually produce bad software, because someone without software engineering skills does not know how to recognize such skills in someone else. The world is full of bad-to-mediocre “software engineers” who do not produce good software. If you don’t already know a fair bit about software engineering, you will not be able to distinguish them from the people who really know what they’re doing.
Same with user interface design. I’ve worked with a CEO who was good at UI; both the process and the results were visibly superior to others I’ve worked with. But if you don’t already know what good UI design looks like, you’d have no idea - good design is largely invisible.
Yudkowsky makes the case that the same applies to security: you can’t build a secure product with novel requirements without having a security expert as a founder. The world is full of “security experts” who do not, in fact, produce secure systems - I’ve met such people. (I believe they mostly make money by helping companies visibly pretend to have made a real effort at security, which is useful in the event of a lawsuit.) If you don’t already know a fair bit about security, you will not be able to distinguish such people from the people who really know what they’re doing.
But to really drive home the point, we need to go back to 1774.
As the American Revolution was heating up, a wave of smallpox was raging on the other side of the Atlantic. An English dairy farmer named Benjamin Jesty was concerned for his wife and children. He was not concerned for himself, though - he had previously contracted cowpox. Cowpox was contracted by milking infected cows, and was well known among dairy farmers to convey immunity against smallpox.
Unfortunately, neither Jesty’s wife nor his two children had any such advantage. When smallpox began to pop up in Dorset, Jesty decided to take drastic action. He took his family to a nearby farm with a cowpox-infected cow, scratched their arms, and wiped pus from the infected cow on the scratches. Over the next few days, their arms grew somewhat inflamed and they suffered the mild symptoms of cowpox - but it quickly passed. As the wave of smallpox passed through the town, none of the three were infected. Throughout the rest of their lives, through multiple waves of smallpox, they were immune.
The same technique would be popularized twenty years later by Edward Jenner, marking the first vaccine and the beginning of modern medicine.
The same wave of smallpox which ran across England in 1774 also made its way across Europe. In May, it reached Louis XV, King of France. Despite the wealth of a major government and the talents of Europe’s most respected doctors, Louis XV died of smallpox on May 10, 1774.
The point: there is knowledge for which money cannot substitute. Even...
|
Dec 12, 2021 |
Why Our Kind Can't Cooperate by Eliezer Yudkowsky
15:14
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Why Our Kind Can't Cooperate, published by Eliezer Yudkowsky on the LessWrong.
From when I was still forced to attend, I remember our synagogue's annual fundraising appeal. It was a simple enough format, if I recall correctly. The rabbi and the treasurer talked about the shul's expenses and how vital this annual fundraise was, and then the synagogue's members called out their pledges from their seats.
Straightforward, yes?
Let me tell you about a different annual fundraising appeal. One that I ran, in fact; during the early years of a nonprofit organization that may not be named. One difference was that the appeal was conducted over the Internet. And another difference was that the audience was largely drawn from the atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc crowd. (To point in the rough direction of an empirical cluster in personspace. If you understood the phrase "empirical cluster in personspace" then you know who I'm talking about.)
I crafted the fundraising appeal with care. By my nature I'm too proud to ask other people for help; but I've gotten over around 60% of that reluctance over the years. The nonprofit needed money and was growing too slowly, so I put some force and poetry into that year's annual appeal. I sent it out to several mailing lists that covered most of our potential support base.
And almost immediately, people started posting to the mailing lists about why they weren't going to donate. Some of them raised basic questions about the nonprofit's philosophy and mission. Others talked about their brilliant ideas for all the other sources that the nonprofit could get funding from, instead of them. (They didn't volunteer to contact any of those sources themselves, they just had ideas for how we could do it.)
Now you might say, "Well, maybe your mission and philosophy did have basic problems—you wouldn't want to censor that discussion, would you?"
Hold on to that thought.
Because people were donating. We started getting donations right away, via Paypal. We even got congratulatory notes saying how the appeal had finally gotten them to start moving. A donation of $111.11 was accompanied by a message saying, "I decided to give a little bit more. One more hundred, one more ten, one more single, one more dime, and one more penny. All may not be for one, but this one is trying to be for all."
But none of those donors posted their agreement to the mailing list. Not one.
So far as any of those donors knew, they were alone. And when they tuned in the next day, they discovered not thanks, but arguments for why they shouldn't have donated. The criticisms, the justifications for not donating—only those were displayed proudly in the open.
As though the treasurer had finished his annual appeal, and everyone not making a pledge had proudly stood up to call out justifications for refusing; while those making pledges whispered them quietly, so that no one could hear.
I know someone with a rationalist cause who goes around plaintively asking, "How come the Raelian flying-saucer cult can get tens of thousands of members [probably around 40,000] interested in complete nonsense, but we can't even get a thousand people working on this?"
The obvious wrong way to finish this thought is to say, "Let's do what the Raelians do! Let's add some nonsense to this meme!" For the benefit of those not immediately stopped by their ethical inhibitions, I will observe that there may be a hundred failed flying-saucer cults for every one that becomes famous. And the Dark Side may require non-obvious skills, which you, yes you, do not have: Not everyone can be a Sith Lord. In particular, if you talk about your planned lies on the public Internet, you fail. I'm no master criminal, but even I can tell certain people are not cut out to be crooks.
So it's probably not a...
|
Dec 12, 2021 |
9/26 is Petrov Day by Eliezer Yudkowsky
03:20
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: 9/26 is Petrov Day, published by Eliezer Yudkowsky on the LessWrong.
Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.
The story begins on September 1st, 1983, when Soviet jet interceptors shot down a Korean Air Lines civilian airliner after the aircraft crossed into Soviet airspace and then, for reasons still unknown, failed to respond to radio hails. 269 passengers and crew died, including US Congressman Lawrence McDonald. Ronald Reagan called it "barbarism", "inhuman brutality", "a crime against humanity that must never be forgotten". Note that this was already a very, very poor time for US/USSR relations. Andropov, the ailing Soviet leader, was half-convinced the US was planning a first strike. The KGB sent a flash message to its operatives warning them to prepare for possible nuclear war.
On September 26th, 1983, Lieutenant Colonel Stanislav Yevgrafovich Petrov was the officer on duty when the warning system reported a US missile launch. Petrov kept calm, suspecting a computer error.
Then the system reported another US missile launch.
And another, and another, and another.
What had actually happened, investigators later determined, was sunlight on high-altitude clouds aligning with the satellite view on a US missile base.
In the command post there were beeping signals, flashing lights, and officers screaming at people to remain calm. According to several accounts I've read, there was a large flashing screen from the automated computer system saying simply "START" (presumably in Russian). Afterward, when investigators asked Petrov why he hadn't written everything down in the logbook, Petrov replied, "Because I had a phone in one hand and the intercom in the other, and I don't have a third hand."
The policy of the Soviet Union called for launch on warning. The Soviet Union's land radar could not detect missiles over the horizon, and waiting for positive identification would limit the response time to minutes. Petrov's report would be relayed to his military superiors, who would decide whether to start a nuclear war.
Petrov decided that, all else being equal, he would prefer not to destroy the world. He sent messages declaring the launch detection a false alarm, based solely on his personal belief that the US did not seem likely to start an attack using only five missiles.
Petrov was first congratulated, then extensively interrogated, then reprimanded for failing to follow procedure. He resigned in poor health from the military several months later. According to Wikipedia, he is spending his retirement in relative poverty in the town of Fryazino, on a pension of $200/month. In 2004, the Association of World Citizens gave Petrov a trophy and $1000. There is also a movie scheduled for release in 2008, entitled The Red Button and the Man Who Saved the World.
Maybe someday, the names of people who decide not to start nuclear wars will be as well known as the name of Britney Spears. Looking forward to such a time, when humankind has grown a little wiser, let us celebrate, in this moment, Petrov Day.
Thanks for listening. to help us out with the nonlinear library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
How to Ignore Your Emotions (while also thinking you're awesome at emotions) by Hazard
06:39
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How to Ignore Your Emotions (while also thinking you're awesome at emotions), published by Hazard on the LessWrong.
(cross posted from my personal blog)
Since middle school I've generally thought that I'm pretty good at dealing with my emotions, and a handful of close friends and family have made similar comments. Now I can see that though I was particularly good at never flipping out, I was decidedly not good at "healthy emotional processing". I'll explain later what I think "healthy emotional processing" is; right now I'm using quotes to indicate "the thing that's good to do with emotions". Here it goes...
Relevant context
When I was a kid I adopted a strong, "Fix it or stop complaining about it" mentality. This applied to stress and worry as well. "Either address the problem you're worried about or quit worrying about it!" Also being a kid, I had a limited capacity to actually fix anything, and as such I was often exercising the "stop worrying about it" option.
Another thing about me, I was a massive book worm and loved to collect "obvious mistakes" that heroes and villains would make. My theory was, "Know all the traps, and then just don't fall for them". That plus the sort of books I read meant that I "knew" it was a big no-no to ignore or repress your emotions. Luckily, since I knew you shouldn't repress your emotions, I "just didn't" and have lived happily ever after
yeah nopes.
Wiggling ears
It can be really hard to teach someone to move in a way that is completely new to them. I teach parkour, and sometimes I want to say,
Me: "Do the shock absorbing thing with your legs!" Student: "What's the shock absorbing thing?" Me: "... uh, you know... the thing were your legs... absorb shock?"
It's hard to know how to give cues that will lead to someone making the right mental/muscle connection. Learning new motor movements is somewhat of a process of flailing around in the dark, until some feedback mechanism tells you you did it right (a coach, it's visually obvious, the jump doesn't hurt anymore, etc). Wiggling your ears is a nice concrete version of a) movement most people's bodies are capable of and b) one that most people feel like is impossible.
Claim: learning mental and emotional skills has a similar "flailing around in the dark" aspect. There are the mental and emotional controls you've practiced, and those just feel like moving your arm. Natural, effortless, atomic. But there are other moves, which you are totally capable of which seem impossible because you don't know how your "control panel" connects to that output. This feels like trying to wiggle your ears.
Why "ignore" and "deal with" looked the same
So young me is upset that the grub master for our camping trip forgot half the food on the menu, and all we have for breakfast is milk. I couldn't "fix it" given that we were in the woods, so my next option was "stop feeling upset about it." So I reached around in the dark of my mind, and Oops, the "healthily process feelings" lever is right next to the "stop listening to my emotions" lever.
The end result? "Wow, I decided to stop feeling upset, and then I stopped feeling upset. I'm so fucking good at emotional regulation!!!!!"
My model now is that I substituted "is there a monologue of upsetness in my conscious mental loop?" for "am I feeling upset?". So from my perspective, it just felt like I was very in control of my feelings. Whenever I wanted to stop feeling something, I could. When I thought of ignoring/repressing emotions, I imagined trying to cover up something that was there, maybe with a story. Or I thought if you poked around ignored emotions there would be a response of anger or annoyance. I at least expected that if I was ignoring my emotions, that if I got very calm and then asked myself, "Is there anything that you're feeling?" I would get an an...
|
Dec 12, 2021 |
When Money Is Abundant, Knowledge Is The Real Wealth by johnswentworth
08:15
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: When Money Is Abundant, Knowledge Is The Real Wealth, published by johnswentworth on the LessWrong.
First Puzzle Piece
By and large, the President of the United States can order people to do things, and they will do those things. POTUS is often considered the most powerful person in the world. And yet, the president cannot order a virus to stop replicating. The president cannot order GDP to increase. The president cannot order world peace.
Are there orders the president could give which would result in world peace, or increasing GDP, or the end of a virus? Probably, yes. Any of these could likely even be done with relatively little opportunity cost. Yet no president in history has known which orders will efficiently achieve these objectives. There are probably some people in the world who know which orders would efficiently increase GDP, but the president cannot distinguish them from the millions of people who claim to know (and may even believe it themselves) but are wrong.
Last I heard, Jeff Bezos was the official richest man in the world. He can buy basically anything money can buy. But he can’t buy a cure for cancer. Is there some way he could spend a billion dollars to cure cancer in five years? Probably, yes. But Jeff Bezos does not know how to do that. Even if someone somewhere in the world does know how to turn a billion dollars into a cancer cure in five years, Jeff Bezos cannot distinguish that person from the thousands of other people who claim to know (and may even believe it themselves) but are wrong.
When non-experts cannot distinguish true expertise from noise, money cannot buy expertise. Knowledge cannot be outsourced; we must understand things ourselves.
Second Puzzle Piece
The Haber process combines one molecule of nitrogen with three molecules of hydrogen to produce two molecules of ammonia - useful for fertilizer, explosives, etc. If I feed a few grams of hydrogen and several tons of nitrogen into the Haber process, I’ll get out a few grams of ammonia. No matter how much more nitrogen I pile in - a thousand tons, a million tons, whatever - I will not get more than a few grams of ammonia. If the reaction is limited by the amount of hydrogen, then throwing more nitrogen at it will not make much difference.
In the language of constraints and slackness: ammonia production is constrained by both hydrogen and nitrogen. When nitrogen is abundant, the nitrogen constraint is slack; adding more nitrogen won’t make much difference. Conversely, since hydrogen is scarce, the hydrogen constraint is taut; adding more hydrogen will make a difference. Hydrogen is the bottleneck.
Likewise in economic production: if a medieval book-maker requires 12 sheep skins and 30 days’ work from a transcriptionist to produce a book, and the book-maker has thousands of transcriptionist-hours available but only 12 sheep, then he can only make one book. Throwing more transcriptionists at the book-maker will not increase the number of books produced; sheep are the bottleneck.
When some inputs become more or less abundant, bottlenecks change. If our book-maker suddenly acquires tens of thousands of sheep skins, then transcriptionists may become the bottleneck to book-production. In general, when one resource becomes abundant, other resources become bottlenecks.
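As a rough illustration (my own sketch, not the author's), the bottleneck logic above can be written as a one-liner: output is limited by whichever input gives the smallest ratio of "amount on hand" to "amount needed per unit." The resource names and quantities below just restate the book-maker example.

```python
# Minimal sketch: production limited by the scarcest input, illustrating
# taut vs. slack constraints.

def max_output(inputs, requirements):
    """Units of output producible given available inputs.

    inputs: dict of resource -> amount on hand
    requirements: dict of resource -> amount needed per unit of output
    The bottleneck is whichever resource gives the smallest ratio.
    """
    return min(inputs[r] / req for r, req in requirements.items())

# Book-maker example: 12 sheep skins and 30 transcriptionist-days per book.
book = {"sheep_skins": 12, "transcriptionist_days": 30}
print(max_output({"sheep_skins": 12, "transcriptionist_days": 10_000}, book))     # 1.0   (skins are taut)
print(max_output({"sheep_skins": 50_000, "transcriptionist_days": 10_000}, book)) # ~333  (transcriptionist time now taut)
```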
Putting The Pieces Together
If I don’t know how to efficiently turn power into a GDP increase, or money into a cure for cancer, then throwing more power/money at the problem will not make much difference.
King Louis XV of France was one of the richest and most powerful people in the world. He died of smallpox in 1774, the same year that a dairy farmer successfully immunized his wife and children with cowpox. All that money and power could not buy the knowledge of a dairy farmer - the knowledge that cowpox could safely immuniz...
|
Dec 12, 2021 |
Credibility of the CDC on SARS-CoV-2 by Elizabeth, jimrandomh
10:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Credibility of the CDC on SARS-CoV-2, published by Elizabeth, jimrandomh on the LessWrong.
Introduction
One of the main places Americans look for information on coronavirus is the Centers for Disease Control and Prevention (abbreviated CDC, from the days before “and Prevention” was in the title). That’s natural; “handling contagious epidemics” is not their only job, but it is one of their primary ones, and they position themselves as the authority. At a time when so many things are uncertain, it saves a lot of anxiety (and time, and money) to have an expert source you can turn to and get solid advice.
Unfortunately, the CDC has repeatedly given advice with lots of evidence against it. Below is a list of actions from the CDC that we believe are misleading or otherwise indicative of an underlying problem. If you know of more examples or have information on any of these (for or against), please comment below and we will incorporate into this post.
Examples
Dismissed Risk of Infection Via Packages
On the CDC’s coronavirus FAQs pages on 2020-03-04, they say, under “Am I at risk for COVID-19 from a package or products shipping from China?”:
“In general, because of poor survivability of these coronaviruses on surfaces, there is likely very low risk of spread from products or packaging that are shipped over a period of days or weeks at ambient temperatures.”
However, this metareview found that various coronaviruses remained infectious for days at room temperature on certain surfaces (cardboard was not tested, alas) and potentially weeks at lower temperatures. The CDC’s answer is probably correct for packages from China, and it’s possible it’s even right for domestic packages with 2-day shipping, but it is incorrect to say that coronaviruses in general have low survivability, and to the best of my ability to determine, we don’t have the experiments that would prove deliveries are safe.
Blinded Itself to Community Spread
As late as 2020-02-29, the CDC was reporting that there had been no “community spread” of SARS-CoV-2. (Community spread means the person was infected without having traveled to an affected area or associated with someone who had.) At this time, the CDC would only test a person for SARS-CoV-2 if they had been in China or in close contact with a confirmed COVID-19 case.
Testing Criteria as of 2020-02-11
This not only left them incapable of detecting community spread, it ignored potential cases who had travelled to other countries with known COVID-19 outbreaks.
By 2020-02-13, this had been amended to include
The criteria are intended to serve as guidance for evaluation. Patients should be evaluated and discussed with public health departments on a case-by-case basis. For severely ill individuals, testing can be considered when exposure history is equivocal (e.g., uncertain travel or exposure, or no known exposure) and another etiology has not been identified.
(The CDC describes this change as happening on 2020-02-12; however, the Wayback Machine did not capture the page that day.)
Based on this announcement on 2020-02-14, testing that could detect community exposure, to the extent it was happening at all, was happening in one of 5 major cities. However, as of 2020-03-01 only 472 tests had been done, so no such testing could have been happening very often.
Between 2020-02-27 and 2020-02-28, the primary guidelines on this page were amended to
However, guidance went out on the same day (the 28th) that listed only China as a risk (and even then, only medium risk unless the person had been exposed to a confirmed case or had travelled to Hubei specifically).
Testing Kits the CDC Sent to Local Labs were Unreliable
They generated too many false positives to be useful.
Hamstrung Detection by Banning 3rd Party Testing (HHS/FDA, not CDC)
One reason the CDC used such stringent criteria for determining who to test was that they had a v...
|
Dec 12, 2021 |
Core Pathways of Aging by johnswentworth
38:24
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Core Pathways of Aging, published by johnswentworth on the LessWrong.
Most overviews of aging suffer from multiple problems:
They dump a bunch of findings with no high-level picture.
Many of the claims they make are outdated, purely theoretical, and sometimes even outright disproven by existing work.
They are usually written by working academics, who are shy about telling us when their peers’ work is completely wrong.
They are shy about making strong claims, since this would also implicitly mean denying some claims of the authors’ peers.
This post is a high-level brain-dump of my current best models of the core pathways of aging, as I currently understand them. I have no particular reason to avoid calling out claims I think are wrong/irrelevant, and I’m going to present high-level models without pages and pages of disclaimers and discussions about results which maybe disagree with them (but are probably just wrong/irrelevant).
Epistemic status: I would be surprised if none of it turned out to be wrong, but there are multiple lines of evidence supporting most claims. It is not highly polished, and references are included only when I have them readily on hand. My ideal version of this piece would have more detailed references, more double-checking behind the claims, and more direct presentation of the data which backs up each claim. Unfortunately, that would take enough time and effort that I’m unlikely to actually get to it soon. So, here’s what I could produce in a reasonable amount of time. Hopefully it will be wrong/unhelpful in ways orthogonal to how most overviews are wrong/unhelpful.
Foundations
First, let’s recap a couple foundational principles. I’ll go through these pretty quickly; see the linked posts for more info.
Homeostasis and “Root Causes” in Aging: the vast majority of proteins, cells, etc, in the human body turn over on a timescale from days to months. At any given time, their level (e.g. protein concentration, cell count, etc) is in equilibrium on the turnover timescale - i.e. the rate of creation approximately equals the rate of removal. For any X with turnover much faster than aging (i.e. decades), if we see the level of X increase/decrease on the timescale of a human lifetime, then that is not due to permanent “accumulation of X” or “depletion of X”; it is due to increase/decrease in the rate of creation/removal of X. For instance:
DNA damage is typically repaired on a timescale of hours or faster, depending on the type. If DNA damage levels increase with age, that is due to an increase in rate of damage or decrease in rate of repair, not permanent accumulation.
Typical senescent cells turn over on a timescale of days to weeks. If the number of senescent cells increases with age, that is due to an increase in rate of senescent cell production or decrease in rate of removal, not permanent accumulation.
Elastin is believed to not turn over at all in humans. So if we see elastin deposits increasing with age (e.g. in wrinkles), then that could be permanent accumulation.
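A minimal sketch of the turnover point (my own illustration, with made-up numbers): for anything that turns over quickly, the level settles at production divided by the removal rate, so a slow age-related change in the level has to come from a change in one of those rates.

```python
# Minimal sketch: a quantity X with fast turnover settles at
# level = production / removal_rate, so any slow age-related drift in the
# level must come from a drift in the rates.

def equilibrium_level(production_per_day, removal_rate_per_day):
    # dX/dt = production - removal_rate * X  =>  steady state X* = production / removal_rate
    return production_per_day / removal_rate_per_day

# e.g. senescent cells cleared on a ~1 week timescale (removal rate ~ 1/7 per day)
print(equilibrium_level(production_per_day=100, removal_rate_per_day=1/7))   # ~700 cells
# double production (or halve removal) and the equilibrium count doubles,
# but the system re-equilibrates within weeks, not decades
print(equilibrium_level(production_per_day=200, removal_rate_per_day=1/7))   # ~1400 cells
```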
Furthermore: suppose we have a positive feedback cycle. Increasing A decreases the rate of production of B, so B decreases. But decreasing B decreases the rate of removal of A, so A increases. If both A and B individually turn over on a timescale of hours or faster then this feedback loop as a whole will also typically operate on a timescale of hours or faster - i.e. count/concentration of A will explode upward on roughly that timescale. More generally, a feedback loop will usually operate on the timescale of its slowest component, exactly like the rate-limiting step of a chemical reaction.
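Here is a toy numerical version of that feedback argument (my own model, not from the post): two quantities that each turn over in hours, coupled so that A suppresses production of B and B drives removal of A. Nudged off equilibrium, the loop runs away within hours to days, i.e. on the timescale of its components.

```python
# Toy model: A and B both turn over on an ~hours timescale; A suppresses
# production of B, and B drives removal of A. Starting near the equilibrium
# (A, B) = (1, 1), a small nudge runs away within hours-to-days.

dt = 0.01                                  # hours
A, B = 1.01, 1.0                           # nudge A slightly above equilibrium
for step in range(int(72 / dt)):           # simulate 72 hours
    dA = 1.0 - B * A                       # constant production, removal rate proportional to B
    dB = 1.0 / A**2 - B                    # production suppressed by A, constant removal rate
    A += dt * dA
    B += dt * dB
print(round(A, 2), round(B, 4))            # A has blown up well past 1; B has collapsed toward 0
```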
Main upshot of all this: since aging involves changes on a timescale of decades, there must be some component which is out-of-equilibrium on a timescale of decades or longer (i.e. does not turn ove...
|
Dec 12, 2021 |
Religion's Claim to be Non-Disprovable by Eliezer Yudkowsky
07:31
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Religion's Claim to be Non-Disprovable, published by Eliezer Yudkowsky on the LessWrong.
The earliest account I know of a scientific experiment is, ironically, the story of Elijah and the priests of Baal.
The people of Israel are wavering between Jehovah and Baal, so Elijah announces that he will conduct an experiment to settle it—quite a novel concept in those days! The priests of Baal will place their bull on an altar, and Elijah will place Jehovah’s bull on an altar, but neither will be allowed to start the fire; whichever God is real will call down fire on His sacrifice. The priests of Baal serve as control group for Elijah—the same wooden fuel, the same bull, and the same priests making invocations, but to a false god. Then Elijah pours water on his altar—ruining the experimental symmetry, but this was back in the early days—to signify deliberate acceptance of the burden of proof, like needing a 0.05 significance level. The fire comes down on Elijah’s altar, which is the experimental observation. The watching people of Israel shout “The Lord is God!”—peer review.
And then the people haul the 450 priests of Baal down to the river Kishon and slit their throats. This is stern, but necessary. You must firmly discard the falsified hypothesis, and do so swiftly, before it can generate excuses to protect itself. If the priests of Baal are allowed to survive, they will start babbling about how religion is a separate magisterium which can be neither proven nor disproven.
Back in the old days, people actually believed their religions instead of just believing in them. The biblical archaeologists who went in search of Noah’s Ark did not think they were wasting their time; they anticipated they might become famous. Only after failing to find confirming evidence—and finding disconfirming evidence in its place—did religionists execute what William Bartley called the retreat to commitment, “I believe because I believe.”
Back in the old days, there was no concept of religion’s being a separate magisterium. The Old Testament is a stream-of-consciousness culture dump: history, law, moral parables, and yes, models of how the universe works—like the universe being created in six days (which is a metaphor for the Big Bang), or rabbits chewing their cud. (Which is a metaphor for . . .)
Back in the old days, saying the local religion “could not be proven” would have gotten you burned at the stake. One of the core beliefs of Orthodox Judaism is that God appeared at Mount Sinai and said in a thundering voice, “Yeah, it’s all true.” From a Bayesian perspective that’s some darned unambiguous evidence of a superhumanly powerful entity. (Although it doesn’t prove that the entity is God per se, or that the entity is benevolent—it could be alien teenagers.) The vast majority of religions in human history—excepting only those invented extremely recently—tell stories of events that would constitute completely unmistakable evidence if they’d actually happened. The orthogonality of religion and factual questions is a recent and strictly Western concept. The people who wrote the original scriptures didn’t even know the difference.
The Roman Empire inherited philosophy from the ancient Greeks; imposed law and order within its provinces; kept bureaucratic records; and enforced religious tolerance. The New Testament, created during the time of the Roman Empire, bears some traces of modernity as a result. You couldn’t invent a story about God completely obliterating the city of Rome (a la Sodom and Gomorrah), because the Roman historians would call you on it, and you couldn’t just stone them.
In contrast, the people who invented the Old Testament stories could make up pretty much anything they liked. Early Egyptologists were genuinely shocked to find no trace whatsoever of Hebrew tribes having ever be...
|
Dec 12, 2021 |
Coronavirus: Justified Practical Advice Thread by Ben Pace, Elizabeth
02:03
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Coronavirus: Justified Practical Advice Thread, published by Ben Pace, Elizabeth on the LessWrong.
(Added: To see the best advice in this thread, read this summary.)
This is a thread for practical advice for preparing for the coronavirus in places where it might substantially grow.
We'd like this thread to be a source of advice that attempts to explain itself. This is not a thread to drop links to recommendations that don’t explain why the advice is accurate or useful. That’s not to say that explanation-less advice isn’t useful, but this isn't the place for it.
Please include in your answers some advice and an explanation of the advice, an explicit model under which it makes sense. We will move answers to the comments if they don't explain their recommendations clearly. (Added: We have moved at least 4 comments so far.)
The more concrete the explanation the better. Speculation is fine, uncertain models are fine; sources, explicit models and numbers for variables that other people can play with based on their own beliefs are excellent.
Here are some examples of things that we'd like to see:
It is safe to mostly but not entirely rely on food that requires heating or other prep, because a pandemic is unlikely to take out utilities, although if they are taken out for other reasons they will be slower to come back on.
CDC estimates of prevalence are likely to be significant underestimates due to their narrow testing criteria.
A guesstimate model of the risks of accepting packages and delivery food
One piece of information that has been lacking in most advice we’ve seen is when to take a particular action. Sure, I can stock up on food ahead of time, but not going to work may be costly– what’s your model for the costs of going so I can decide when the costs outweigh the benefits for me? This is especially true for advice that has inherent trade-offs– total quarantine means eating your food stockpiles that you hopefully have, which means not having them later.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Intentionally Making Close Friends by Neel Nanda
27:52
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Intentionally Making Close Friends, published by Neel Nanda on the LessWrong.
This is a linkpost for
Introduction
One of the greatest sources of joy in my life is my close friends. People who bring excitement and novelty into my life. Who expose me to new experiences, and ways of seeing the world. Who help me learn, point out my blind spots, and correct me when I am wrong. Who I can lean on when I need support, and who lean on me in turn. Friends who help me grow more into the kind of person I want to be.
I am especially grateful for this, because up until about 4 years ago, I didn’t have any close friends in my life. I had friends, but struggled to form real emotional connections. Moreover, it didn’t even occur to me that I could try to do this. It wasn’t that I knew how to form close friends but was too anxious to try, rather, ‘try to form close friendships’ was a non-standard action, something that never even crossed my mind. And one of my most life-changing experiments was realising that this was something I wanted, and actually trying to intentionally form close friends.
It’s easy to slip into a passive mindset here, to think of emotional connections as ‘something that takes time’ or ‘needs to happen naturally’. That to be intentional about things is ‘inauthentic’. I think this mindset is absolutely crazy. My close friendships are one of the most important components of my life happiness. Leaving it up to chance feels like passing up an incredible opportunity. As with all important things in life, this can be optimised - further, if done right, this adds a massive amount to my life and to the lives of my future close friends.
The first half of this post is the story of how I approached intentionally forming close friends, and the second half is an attempt to distill the lessons I learned from this. As such, this post is more autobiographical than most. Feel free to skip to the advice section if you don’t want that. Further, what you value in close friendships is highly personal - this post will focus on what I want in friendships and how I try to get it, but you should adapt this to your own situation, values, and what feels missing in your life!
Exercise: Think about your closest friends, and how these friendships happened. What needs are you fulfilling in each other’s lives? Are you happy with this state of affairs, or is something missing? What could be better?
My story
The Problem
Back when I was in school, I never had close friends. I had friends, people I liked, people I spent time with, whose company I genuinely enjoyed. But I was pretty terrible at being vulnerable and forming emotional connections. These friendships rarely went beyond the surface level. In hindsight, I expect these could have been far richer (and I’ve formed much stronger friendships with some of these friends since!), but I never really tried.
I find it hard to introspect on exactly what the internal experience of past Neel was like, but I think the core was that trying wasn’t available as a possible action. That I spent much of my life doing what felt socially conventional, normal and expected, for the role I saw myself in. And ‘go out of your way to form emotional connections’ wasn’t part of that. It wasn’t an action I considered, weighed up the costs and benefits, and decided against - it never even occurred to me to try. It didn’t feel like a void missing from my life - things just felt normal. It was like playing a video game, and having a list of actions to choose from, like ‘ask about their day’, ‘complain about a shared experience’ or ‘discuss something cool I learned recently’; but this list contained nothing about ‘intentionally form an emotional connection’. It wasn’t in my reference class of things I could do.
One of the core parts of my life philosophy now is the skill of agency...
|
Dec 12, 2021 |
Tsuyoku Naritai! (I Want To Become Stronger) by Eliezer Yudkowsky
04:48
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Tsuyoku Naritai! (I Want To Become Stronger), published by Eliezer Yudkowsky on the LessWrong.
In Orthodox Judaism there is a saying: “The previous generation is to the next one as angels are to men; the next generation is to the previous one as donkeys are to men.” This follows from the Orthodox Jewish belief that all Judaic law was given to Moses by God at Mount Sinai. After all, it’s not as if you could do an experiment to gain new halachic knowledge; the only way you can know is if someone tells you (who heard it from someone else, who heard it from God). Since there is no new source of information, it can only be degraded in transmission from generation to generation.
Thus, modern rabbis are not allowed to overrule ancient rabbis. Crawly things are ordinarily unkosher, but it is permissible to eat a worm found in an apple—the ancient rabbis believed the worm was spontaneously generated inside the apple, and therefore was part of the apple. A modern rabbi cannot say, “Yeah, well, the ancient rabbis knew diddly-squat about biology. Overruled!” A modern rabbi cannot possibly know a halachic principle the ancient rabbis did not, because how could the ancient rabbis have passed down the answer from Mount Sinai to him? Knowledge derives from authority, and therefore is only ever lost, not gained, as time passes.
When I was first exposed to the angels-and-donkeys proverb in (religious) elementary school, I was not old enough to be a full-blown atheist, but I still thought to myself: “Torah loses knowledge in every generation. Science gains knowledge with every generation. No matter where they started out, sooner or later science must surpass Torah.”
The most important thing is that there should be progress. So long as you keep moving forward you will reach your destination; but if you stop moving you will never reach it.
Tsuyoku naritai is Japanese. Tsuyoku is “strong”; naru is “becoming,” and the form naritai is “want to become.” Together it means, “I want to become stronger,” and it expresses a sentiment embodied more intensely in Japanese works than in any Western literature I’ve read. You might say it when expressing your determination to become a professional Go player—or after you lose an important match, but you haven’t given up—or after you win an important match, but you’re not a ninth-dan player yet—or after you’ve become the greatest Go player of all time, but you still think you can do better. That is tsuyoku naritai, the will to transcendence.
Each year on Yom Kippur, an Orthodox Jew recites a litany which begins Ashamnu, bagadnu, gazalnu, dibarnu dofi, and goes on through the entire Hebrew alphabet: We have acted shamefully, we have betrayed, we have stolen, we have slandered . . .
As you pronounce each word, you strike yourself over the heart in penitence. There’s no exemption whereby, if you manage to go without stealing all year long, you can skip the word gazalnu and strike yourself one less time. That would violate the community spirit of Yom Kippur, which is about confessing sins—not avoiding sins so that you have less to confess.
By the same token, the Ashamnu does not end, “But that was this year, and next year I will do better.”
The Ashamnu bears a remarkable resemblance to the notion that the way of rationality is to beat your fist against your heart and say, “We are all biased, we are all irrational, we are not fully informed, we are overconfident, we are poorly calibrated . . .”
Fine. Now tell me how you plan to become less biased, less irrational, more informed, less overconfident, better calibrated.
There is an old Jewish joke: During Yom Kippur, the rabbi is seized by a sudden wave of guilt, and prostrates himself and cries, “God, I am nothing before you!” The cantor is likewise seized by guilt, and cries, “God, I am nothing before you!” Se...
|
Dec 12, 2021 |
Lifestyle interventions to increase longevity by RomeoStevens
21:50
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Lifestyle interventions to increase longevity, published by RomeoStevens on the LessWrong.
There is a lot of bad science and controversy in the realm of how to have a healthy lifestyle. Every week we are bombarded with new studies conflicting with older studies, telling us X is good or Y is bad. Eventually we reach our psychological limit, throw up our hands, and give up. I used to do this a lot. I knew exercise was good, I knew flossing was good, and I wanted to eat better. But I never acted on any of that knowledge. I would feel guilty when I thought about this stuff and go back to what I was doing. Unsurprisingly, this didn't really cause me to make any positive lifestyle changes.
Instead of vaguely guilt-tripping you with potentially unreliable science news, this post aims to provide an overview of lifestyle interventions that have very strong evidence behind them and concrete ways to implement them.
A quick FAQ before we get started
Why should I care about longevity-promoting habits at a young age?
First, many longevity-promoting lifestyle changes will increase your quality of life in the short term. In doing this research, I found a few interventions that had shockingly large impacts on my subjective day-to-day wellness. Second, the choices you make have larger downstream effects the earlier you get started. Trying to undo years of damage and ingrained habits at an advanced age really isn’t a position you want to find yourself in. Third, extending your life matters more the more you believe in the proximity of transformative tech. If the pace of technological improvement is increasing, then adding a decade to your life may in fact be the decade that counts. Missing out on life extension tech by a few years would really suck.
Isn’t longevity mostly just genetics?
That's what I believed for a long time, but a quick trip to wikipedia tells us that only 20-30% of the variance in longevity is heritable.
What sort of benefits can I expect?
The life satisfaction of people who remain independent and active actually increases significantly with age. Mental and physical performance are strongly correlated, meaning maintaining your body will help maintain your mind. The qualitative benefits for life satisfaction of many of these interventions can be so dramatic that it is hard to estimate them. The gulf in quality of life between people maintaining good habits and those who do not widens with age.
How were these recommendations generated?/Why should I believe you?
This post summarizes studies at the intersection of having large effects, large sample sizes, and being well-designed in terms of methodology. The cutoff for an intervention being “worth it” is somewhat subjective given that there is often only a rough estimate of the overall effect sizes of various interventions in comparison to one another. CDC mortality statistics were used to determine the most likely causes of death in various age brackets. The list of things that kill people balloons significantly as you get towards the less common causes of death and I have limited research time. Individuals who face unusual health circumstances should of course be doing their own research and consulting health professionals.
This brings me to my disclaimer:
This post is not intended to diagnose, treat, cure, or prevent any disease. No claim or opinion on these pages is intended to be, nor should be construed as medical advice. Please consult with a healthcare professional before starting any diet or exercise program. None of these claims have been evaluated by the Food and Drug Administration. Suggestions herein are intended for normal healthy adults and should not be used if you are under the age of 18 or have any known medical condition.
Alright, let’s dive in.
Things that will eventually kill you
CVD
At the top of our list ...
|
Dec 12, 2021 |
Is Rationalist Self-Improvement Real? by Jacob Falkovich
17:59
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Is Rationalist Self-Improvement Real?, published by Jacob Falkovich on the LessWrong.
Cross-posted from Putanumonit.
Basketballism
Imagine that tomorrow everyone on the planet forgets the concept of training basketball skill.
The next day everyone is as good at basketball as they were the previous day, but this talent is assumed to be fixed. No one expects their performance to change over time. No one teaches basketball, although many people continue to play the game for fun.
Geneticists explain that some people are born with better hand-eye coordination and are able to shoot a basketball accurately. Economists explain that highly-paid NBA players have a stronger incentive to hit shots, which explains their improved performance. Psychologists note that people who take more jump shots each day hit a higher percentage and theorize a principal factor of basketball affinity that influences both desire and skill at basketball. Critical race theorists claim that white men’s under-representation in the NBA is due to systemic oppression.
Papers are published, tenure is awarded.
New scientific disciplines emerge and begin studying basketball more systematically. Evolutionary physiologists point out that our ancestors threw stones in a sidearm motion, which explains our lack of adaptation to the different motion of jump shots. Behavioral kinesiologists describe systematic biases in human basketball, such as the tendency to shoot balls with a flatter trajectory and a lower release point than is optimal.
When asked by aspiring basketball players if jump shots can be improved, they all shake their heads and rue that it is human nature to miss shots. A Nobel laureate behavioral kinesiologist tells audiences that even after writing books on biases in basketball his shot did not improve much. Someone publishes a study showing that basketball performance improves after a one-hour training session with schoolchildren, but Shott Ballexander writes a critical takedown pointing out that the effect wore off after a month and could simply be random noise. The field switches to studying “nudges”: ways to design systems so that players hit more shots at the same level of skill. They recommend that the NBA adopt larger hoops.
Papers are published, tenure is awarded.
Then, one day, someone merely looking to get good at basketball, as opposed to getting tenure, comes across these papers. She realizes that the lessons of behavioral kinesiology can be used to improve her jump shot. She practices releasing the ball at the top of her jump from above the forehead with a steep arc. As her shots start swooshing in more people gather at the gym to practice with her. They call themselves Basketballists.
Most people who walk past the gym sneer at the Basketballists. “You call yourselves Basketballists and yet none of you shoot 100%”, they taunt. “You should go to grad school if you want to learn about jump shots.” Some of Basketballists themselves begin to doubt the project, especially since switching to the new shooting techniques lowers their performance at first. “Did you hear what the Center for Applied Basketball is charging for a training camp?”, they mutter, “I bet their results are all due to selection bias.”
The Basketballists insist that the training does help, that they really get better by the day. Their shots hit at a slightly higher rate than before, although this is swamped by the inter-individual variance. How could they know if it works?
AsWrongAsEver
A core axiom of Rationality (capitalized to refer to the LessWrong version) is that it is a skill that can be improved with time and practice. The names Overcoming Bias and LessWrong reflect this: rationality is a direction, not a fixed point.
What would it mean to "improve at Rationality"? On the epistemic side, to draw a map that more accurat...
|
Dec 12, 2021 |
A whirlwind tour of Ethereum finance by cata
17:52
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A whirlwind tour of Ethereum finance, published by cata on the LessWrong.
As a hacker and cryptocurrency liker, I have been hearing for a while about "DeFi" stuff going on in Ethereum without really knowing what it was. I own a bunch of ETH, so I finally decided that enough was enough and spent a few evenings figuring out what was going on. To my pleasant surprise, a lot of it was fascinating, and I thought I would share it with LW in the hopes that other people will be interested too and share their thoughts.
Throughout this post I will assume that the reader has a basic mental model of how Ethereum works. If you don't, you might find this intro & reference useful.
Why should I care about this?
For one thing, it's the coolest, most cypherpunk thing going. Remember how back in 2012, everyone knew that Bitcoin existed, but it was a pain in the ass to use and it kind of felt weird and risky? It feels exactly like that using all this stuff. It's loads of fun.
For another thing, the economic mechanism design stuff is really fun to think about, and in many cases nobody knows the right answer yet. It's a chance for random bystanders to hang out with problems on the edge of human understanding, because nobody cared about these problems before there was so much money floating around in them.
For a third thing, you can maybe make some money. Specifically, if you have spare time, a fair bit of cash, appetite for risk, conscientiousness, some programming and finance knowledge, and you are capable of and interested in understanding how these systems work, I think it's safe to say that you have a huge edge, and you should be able to find places to extract value.
General overview
In broad strokes, people are trying to reinvent all of the stuff from typical regulated finance in trustless, decentralized ways (thus "DeFi".) That includes:
Making anything that has value into a transferable asset, typically on Ethereum, and typically an ERC-20 token. A token is an interoperable currency that keeps track of people's balances and lets people transfer it.
Making liquid exchanges where you can swap all of those tokens at market prices.
Making schemes for moving those tokens over time, like borrowing, futures, etc.
Making elaborate scams and arbitrages to obtain other people's tokens.
It's not completely clear to me what the main value proposition of all of this is. It's easy to generate things about it that seem somewhat valuable, but hard to say how each stacks up. Some possible value includes:
Evading regulation, like securities laws, money laundering laws, sanctions, capital controls, laws against online gambling, etc. etc.
Allocation of capital among projects that can raise money using cryptocurrency tokens (because somehow they have a scheme to tie the success of their project to the value of the token, making it a kind of virtual equity.)
Having fewer middlemen than existing financial systems, making it more trustworthy and cheaper. (It is not currently more trustworthy or cheaper than mainstream American institutions, but it plausibly could be in a few years.)
Tokenization
The first step is to make everything into an ERC-20 token. This will let all the other products work with everything, because they will interoperate with ERC-20 tokens.
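To make the idea concrete, here is a conceptual sketch in Python rather than Solidity (not any real ERC-20 contract, and the addresses are made up): the essential state a token tracks is a table of balances plus a transfer rule that debits one account and credits another.

```python
# Conceptual sketch only: the core state of a token contract is a balance
# table, plus a transfer rule. Real ERC-20 contracts also expose approvals,
# total supply, events, etc.

class Token:
    def __init__(self, name, initial_holders):
        self.name = name
        self.balances = dict(initial_holders)   # address -> amount

    def balance_of(self, addr):
        return self.balances.get(addr, 0)

    def transfer(self, sender, recipient, amount):
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balance_of(recipient) + amount

# hypothetical token and addresses, purely for illustration
usd_ish = Token("WrappedDollar", {"0xAlice": 100})
usd_ish.transfer("0xAlice", "0xBob", 40)
print(usd_ish.balance_of("0xBob"))   # 40
```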
Stablecoins and pegs
It's common for someone to want to own an Ethereum version of some other asset that is not Ethereum, so that they can use it on Ethereum. The most typical example of this is US dollars. A token whose price is designed to be pegged to an external thing like this is called a stablecoin.
There are a few techniques people use to accomplish this. The most popular one is to have a giant pile of US dollars somewhere under someone's control, and have that person act as a counterparty for anyone who wants to buy or sell 1 US dollar for 1 ...
|
Dec 12, 2021 |
The Apologist and the Revolutionary by Scott Alexander
07:41
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Apologist and the Revolutionary, published by Scott Alexander on the LessWrong.
Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.
Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness1. Things that should be pretty hard to miss.
Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory.
One immediately plausible hypothesis: the patient is unable to cope psychologically with the possibility of being paralyzed, so he responds with denial. Plausible, but according to Dr. Ramachandran, wrong. He notes that patients with left-side strokes almost never suffer anosognosia, even though the left side controls the right half of the body in about the same way the right side controls the left half. There must be something special about the right hemisphere.
Another plausible hypothesis: the part of the brain responsible for thinking about the affected area was damaged in the stroke. Therefore, the patient has lost access to the area, so to speak. Dr. Ramachandran doesn't like this idea either. The lack of right-sided anosognosia in left-hemisphere stroke victims argues against it as well. But how can we disconfirm it?
Dr. Ramachandran performed an experiment2 where he "paralyzed" an anosognosiac's good right arm. He placed it in a clever system of mirrors that caused a research assistant's arm to look as if it was attached to the patient's shoulder. Ramachandran told the patient to move his own right arm, and the false arm didn't move. What happened? The patient claimed he could see the arm moving - a classic anosognosiac response. This suggests that the anosognosia is not specifically a deficit of the brain's left-arm monitoring system, but rather some sort of failure of rationality.
Says Dr. Ramachandran:
The reason anosognosia is so puzzling is that we have come to regard the 'intellect' as primarily propositional in character and one ordinarily expects propositional logic to be internally consistent. To listen to a patient deny ownership of her arm and yet, in the same breath, admit that it is attached to her shoulder is one of the most perplexing phenomena that one can encounter as a neurologist.
So what's Dr. Ramachandran's solution? He posits two different reasoning modules located in the two different hemispheres. The left brain tries to fit the data to the theory to preserve a coherent internal narrative and prevent a person from jumping back and forth between conclusions upon each new data point. It is primarily an apologist, there to explain why any experience is exactly what its own theory would have predicted. The right b...
|
Dec 12, 2021 |
Dark Matters by Diffractor
23:18
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Dark Matters, published by Diffractor on the LessWrong.
This post will be about the main points of evidence for the existence of dark matter. To evaluate whether a competing theory to dark matter is plausible, it's important to know what the actual arguments in favor of dark matter are in more detail than just "dark matter is the stuff you have to add to get galactic rotation curves to work out". A competitor has to address the strongest arguments in favor of the existence of dark matter, not just the weaker fare like galactic rotation curves.
So, when reading some hot new arxiv paper about dark matter or the lack thereof, it is fairly useful to know the top five lines of evidential support for dark matter (in my own personal estimation, others may differ). This lets you at least check whether the result is directly addressing the major cruxes that the case for dark matter rests upon, or just picking off one particular piece of evidence and sweeping the rest under the rug, even if you lack the full technical ability to evaluate the claimed result.
This post will be saving the best for last, so if you're not going to read the whole thing, skip down to sections 4 and 5.
Also, what exactly is meant when the term "dark matter" is used in this post? Anything with mass (so it's affected by gravity and gravitationally influences other things) which does not interact via the electromagnetic force. Electrons, protons, nuclei, and atoms emphatically do not count. Black holes, neutrinos, WIMPs (weakly interacting massive particles), and axions would count under this definition. The last two are theoretical, the first two are very much established. Of course, it would be a massive cop-out to go "neutrinos exist, therefore dark matter does", so "dark matter" will be used with a follow-up connotation of "and whatever the heck it is (we don't know yet), there must be 5x more of it in the universe than matter made of atoms or atom parts, no way around that whatsoever".
Point 1: Galactic Rotation Curves
The story begins with galaxy rotation curves, which were the original motivation for postulating dark matter in the first place. Given a point gravitational mass, it's pretty simple to calculate the velocity of something orbiting around it, depending only on how far away the object is orbiting and how much mass is in the central point. Stuff orbiting further out from a point mass will be orbiting at a lower velocity.
With a bit more work, given a disc of mass, you can calculate the velocity of something orbiting around or within it. For this, the graph of orbital velocity vs distance from the center of the disc first rises, then falls. Orbital velocities are low in the center because stuff orbiting near the center of the disc isn't orbiting around very much mass, and orbital velocities are low at the outside of the disc, because you get closer to being able to approximate things by the situation "your distant object is orbiting around a central point mass", which, as previously discussed, already exhibits the "stars on further-out orbits move more slowly" behavior.
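A simplified sketch of those curves (my own illustration; a real disc calculation is more involved): the circular speed is v(r) = sqrt(G * M_enclosed(r) / r), so a point mass gives a falling curve, while an enclosed mass that keeps growing with radius, as a dark matter halo would provide, gives a flat one. The galaxy mass and radii below are only rough, round numbers.

```python
# Simplified sketch using the enclosed-mass approximation:
# v(r) = sqrt(G * M_enclosed(r) / r). A point mass falls as 1/sqrt(r);
# an enclosed mass growing linearly with r gives a flat curve.

import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_galaxy = 1e41                    # kg, rough stellar mass of a large galaxy (illustrative)

def v_point_mass(r):
    return math.sqrt(G * M_galaxy / r)

def v_linear_halo(r, r_scale=5e20):
    # toy halo: enclosed mass proportional to radius beyond the visible disc
    return math.sqrt(G * M_galaxy * (r / r_scale) / r)   # constant in r

for r in (5e20, 1e21, 2e21):       # roughly 16, 32, 65 kiloparsecs
    print(f"{r:.0e} m: point-mass {v_point_mass(r)/1e3:6.0f} km/s, "
          f"halo-like {v_linear_halo(r)/1e3:6.0f} km/s")
```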
Computing this in practice requires knowledge of two things, however. First, you need to know how fast the stars in the galaxy are orbiting around the center. Second, you need to know the radial distribution of mass in the disc or ellipse.
It's pretty easy to tell how fast stars in a galaxy are orbiting around the center, for suitably chosen galaxies. Stars have emission and absorption lines at very specific frequencies measured to very high accuracy, which only depend on details of atomic physics that don't change in different galaxies. So, as an example, you could pick an edge-on spiral galaxy, and look at the position of the absorption lines in the center of the galaxy. Then, you can look at the two edges of the galaxy, and i...
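The line-shift measurement just described reduces to simple arithmetic for speeds much smaller than light (a sketch with illustrative numbers, not data from the post): the line-of-sight velocity is the fractional wavelength shift times c.

```python
# Small sketch: for v << c, the line-of-sight velocity follows from the
# fractional shift of a known spectral line,
# v ~= c * (lambda_observed - lambda_rest) / lambda_rest.

C = 299_792_458.0                   # speed of light, m/s

def line_of_sight_velocity(lambda_observed_nm, lambda_rest_nm):
    return C * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# H-alpha rest wavelength is 656.28 nm; observing it at 656.72 nm at one edge
# of an edge-on galaxy corresponds to roughly 200 km/s along the line of sight.
print(line_of_sight_velocity(656.72, 656.28) / 1e3, "km/s")   # ~201 km/s
```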
|
Dec 12, 2021 |
Where do your eyes go? by alkjash
21:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Where do your eyes go?, published by alkjash on the LessWrong.
This is a linkpost for
[Shoutout to the LW team for very helpful (and free!) feedback on this post.]
I. Prelude
When my wife first started playing the wonderful action roguelike Hades, she got stuck in Asphodel. Most Hades levels involve dodging arrows, lobbed bombs, melee monsters, and spike traps all whilst hacking and slashing as quickly as possible, but Asphodel adds an extra twist: in this particular charming suburb of the Greek underworld, you need to handle all of the above whilst also trying not to step in lava. Most of the islands in Asphodel are narrower than your dash is far, so it’s hard not to dash straight off solid ground into piping-hot doom.
I gave my wife some pointers about upgrade choices (cough Athena dash cough) and enemy attack patterns, but most of my advice was marginally helpful at best. She probably died in lava another half-dozen times. One quick trick, however, had an instant and visible effect.
"Stare at yourself."
Watch your step.
By watching my wife play, I came to realize that she was making one fundamental mistake: her eyes were in the wrong place. Instead of watching her own character Zagreus, she spent most of her time staring at the enemies and trying to react to their movements and attacks.
Hades is almost a bullet hell game: avoiding damage is the name of the game. Eighty percent of the time your eyes need to be trained on Zagreus's toned protagonist butt to make sure he dodges precisely away from, out of, or straight through enemy attacks. In the meantime, most of Zagreus's own attacks hit large areas, so tracking enemies with peripheral vision is enough to aim your attacks in the right general direction. Once my wife learned to fix her eyes on Zagreus, she made it through Asphodel in only a few attempts.
This is a post about the general skill of focusing your eyes, and your attention, to the right place. Instead of the standard questions "How do you make good decisions based on what you see?" and "How do you get better at executing those decisions?", this post focuses on a question further upstream: "Where should your eyes be placed to receive the right information in the first place?"
In Part II, I describe five archetypal video games, which are distinguished in my memory by the different answers to "Where do your eyes go?" I learned from each of them. I derive five general lessons about attention-paying. Part II can be safely skipped by those allergic to video games.
In Part III, I apply these lessons to three specific minigames that folks struggle with in graduate school: research meetings, seminar talks, and paper-reading. In all three cases, there can be an overwhelming amount of information to attend to, and the name of the game is to focus your eyes properly to perceive the most valuable subset.
II. Lessons from Video Games
Me or You?
Hades and Dark Souls are similar games in many respects. Both live in the same general genre of action RPGs, both share the core gameplay loop "kill, die, learn, repeat," and both are widely acknowledged to be among the best games of all time. Their visible differences are mostly aesthetic: for example, Hades' storytelling is more lighthearted, Dark Souls' more nonexistent.
But there is one striking difference between my experiences of these two games: in Hades I stared at myself, and in Dark Souls I stared at the enemy. Why?
One answer is obvious: in Dark Souls, the camera follows you around over your shoulder, so you're forced to stare at the enemies, while in Hades the isometric camera is centered on your own character. This is good game design because the camera itself gently suggests the right place for your eyes to focus, but it doesn't really explain why that place is right.
The more interesting answer is that your eyes g...
|
Dec 12, 2021 |
Confidence levels inside and outside an argument by Scott Alexander
10:35
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Confidence levels inside and outside an argument , published by Scott Alexander on the LessWrong.
Related to: Infinite Certainty
Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?
Mine would be significantly less than 999,999,999 in a billion.
When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in "But that still leaves a one in a billion chance, right?". The majority of the probability is in "That argument is flawed". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.
More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.
So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
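One simple way to formalize that adjustment (my sketch, not a formula from the post): treat external confidence as a mixture of the model's internal probability and the prior, weighted by how much you trust the model.

```python
# Sketch: if the model is sound, use its internal probability; if it is
# flawed, fall back to the 50% prior. External confidence is the mixture.

def external_confidence(p_internal, p_model_sound, prior=0.5):
    return p_model_sound * p_internal + (1 - p_model_sound) * prior

# A 999,999,999-in-a-billion internal figure, with even a 1-in-1000 chance
# that the model is broken, caps external confidence near 99.95%.
print(external_confidence(0.999999999, p_model_sound=0.999))   # ~0.9995
```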
Is That Really True?
One might be tempted to respond "But there's an equal chance that the false model is too high, versus that it is too low." Maybe there was a bug in the computer program, but it prevented it from giving the incumbent's real chances of 999,999,999,999 out of a trillion.
The prior probability of a candidate winning an election is 50%1. We need information to push us away from this probability in either direction. To push significantly away from this probability, we need strong information. Any weakness in the information weakens its ability to push away from the prior. If there's a flaw in FiveThirtyEight's model, that takes us away from their probability of 999,999,999 in a billion, and back closer to the prior probability of 50%.
We can confirm this with a quick sanity check. Suppose we know nothing about the election (ie we still think it's 50-50) until an insane person reports a hallucination that an angel has declared the incumbent to have a 999,999,999/billion chance. We would not be tempted to accept this figure on the grounds that it is equally likely to be too high as too low.
A second objection covers situations such as a lottery. I would like to say the chance that Bob wins a lottery with one billion players is 1/1 billion. Do I have to adjust this upward to cover the possibility that my model for how lotteries work is somehow flawed? No. Even if I am misunderstanding the lottery, I have not departed from my prior. Here, new information really does have an equal chance of going against Bob as of going in his favor. For example, the lottery may be fixed (meaning my original model of how to determine lottery winners is fatally flawed), but there is no greater reason to believe it is fixed in favor of Bob than anyone else.2
Spotted in the Wild
The recent Pascal's Mugging thread spawned a discussion of the Large Hadron Colli...
|
Dec 12, 2021 |
Fact Posts: How and Why by sarahconstantin
05:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Fact Posts: How and Why, published by sarahconstantin on the LessWrong.
The most useful thinking skill I've taught myself, which I think should be more widely practiced, is writing what I call "fact posts." I write a bunch of these on my blog. (I write fact posts about pregnancy and childbirth here.)
To write a fact post, you start with an empirical question, or a general topic. Something like "How common are hate crimes?" or "Are epidurals really dangerous?" or "What causes manufacturing job loss?"
It's okay if this is a topic you know very little about. This is an exercise in original seeing and showing your reasoning, not finding the official last word on a topic or doing the best analysis in the world.
Then you open up a Google doc and start taking notes.
You look for quantitative data from conventionally reliable sources. CDC data for incidences of diseases and other health risks in the US; WHO data for global health issues; Bureau of Labor Statistics data for US employment; and so on. Published scientific journal articles, especially from reputable journals and large randomized studies.
You explicitly do not look for opinion, even expert opinion. You avoid news, and you're wary of think-tank white papers. You're looking for raw information. You are taking a sola scriptura approach, for better and for worse.
And then you start letting the data show you things.
You see things that are surprising or odd, and you note that.
You see facts that seem to be inconsistent with each other, and you look into the data sources and methodology until you clear up the mystery.
You orient towards the random, the unfamiliar, the things that are totally unfamiliar to your experience. One of the major exports of Germany is valves? When was the last time I even thought about valves? Why valves, what do you use valves in? OK, show me a list of all the different kinds of machine parts, by percent of total exports.
And so, you dig in a little bit, to this part of the world that you hadn't looked at before. You cultivate the ability to spin up a lightweight sort of fannish obsessive curiosity when something seems like it might be a big deal.
And you take casual notes and impressions (though keeping track of all the numbers and their sources in your notes).
You do a little bit of arithmetic to compare things to familiar reference points. How does this source of risk compare to the risk of smoking or going horseback riding? How does the effect size of this drug compare to the effect size of psychotherapy?
You don't really want to do statistics. You might take percents, means, standard deviations, maybe a Cohen's d here and there, but nothing fancy. You're just trying to figure out what's going on.
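For what that "nothing fancy" arithmetic might look like in practice, here is a sketch with purely illustrative numbers (not real data): a percent of a population, and a quick Cohen's d for comparing two effect sizes.

```python
# Sketch of the light arithmetic described above, with made-up numbers.

def cohens_d(mean_a, mean_b, sd_a, sd_b):
    # standardized difference between two group means
    pooled_sd = ((sd_a ** 2 + sd_b ** 2) / 2) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# a raw count turned into a percent of a reference population
deaths_from_x, population = 34_000, 330_000_000
print(f"{100 * deaths_from_x / population:.4f}% of the population per year")

# effect of a hypothetical drug vs. a hypothetical therapy, in standardized units
print(cohens_d(mean_a=12.0, mean_b=9.5, sd_a=6.0, sd_b=6.5))   # ~0.4
```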
It's often a good idea to rank things by raw scale. What is responsible for the bulk of deaths, the bulk of money moved, etc? What is big? Then pay attention more to things, and ask more questions about things, that are big. (Or disproportionately high-impact.)
You may find that this process gives you contrarian beliefs, but often you won't, you'll just have a strongly fact-based assessment of why you believe the usual thing.
There's a quality of ordinariness about fact-based beliefs. It's not that they're never surprising -- they often are. But if you do fact-checking frequently enough, you begin to have a sense of the world overall that stays in place, even as you discover new facts, instead of swinging wildly around at every new stimulus. For example, after doing lots and lots of reading of the biomedical literature, I have sort of a "sense of the world" of biomedical science -- what sorts of things I expect to see, and what sorts of things I don't. My "sense of the world" isn't that the world itself is boring -- I actually believe in a world rich in discoveries and low-hanging fruit -- but th...
Dec 12, 2021
Alignment Research Field Guide by abramdemski
26:17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Alignment Research Field Guide, published by abramdemski on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This field guide was written by the MIRI team with MIRIx groups in mind, though the advice may be relevant to others working on AI alignment research.
Preamble I: Decision Theory
Hello! You may notice that you are reading a document.
This fact comes with certain implications. For instance, why are you reading this? Will you finish it? What decisions will you come to as a result? What will you do next?
Notice that, whatever you end up doing, it’s likely that there are dozens or even hundreds of other people, quite similar to you and in quite similar positions, who will follow reasoning which strongly resembles yours, and make choices which correspondingly match.
Given that, it’s our recommendation that you make your next few decisions by asking the question “What policy, if followed by all agents similar to me, would result in the most good, and what does that policy suggest in my particular case?” It’s less of a question of trying to decide for all agents sufficiently-similar-to-you (which might cause you to make the wrong choice out of guilt or pressure) and more something like “if I were in charge of all agents in my reference class, how would I treat instances of that class with my specific characteristics?”
If that kind of thinking leads you to read further, great. If it leads you to set up a MIRIx chapter, even better. In the meantime, we will proceed as if the only people reading this document are those who justifiably expect to find it reasonably useful.
Preamble II: Surface Area
Imagine that you have been tasked with moving a cube of solid iron that is one meter on a side. Given that such a cube weighs ~16000 pounds, and that an average human can lift ~100 pounds, a naïve estimation tells you that you can solve this problem with ~160 willing friends.
But of course, a meter cube can fit at most something like 10 people around it. It doesn’t matter if you have the theoretical power to move the cube if you can’t bring that power to bear in an effective manner. The problem is constrained by its surface area.
MIRIx chapters are one of the best ways to increase the surface area of people thinking about and working on the technical problem of AI alignment. And just as it would be a bad idea to decree "the 10 people who happen to currently be closest to the metal cube are the only ones allowed to think about how to think about this problem", we don’t want MIRI to become the bottleneck or authority on what kinds of thinking can and should be done in the realm of embedded agency and other relevant fields of research.
The hope is that you and others like you will help actually solve the problem, not just follow directions or read what’s already been written. This document is designed to support people who are interested in doing real groundbreaking research themselves.
Contents
You and your research
Logistics of getting started
Models of social dynamics
Other useful thoughts and questions
1. You and your research
We sometimes hear questions of the form “Even a summer internship feels too short to make meaningful progress on real problems. How can anyone expect to meet and do real research in a single afternoon?”
There’s a Zeno-esque sense in which you can’t make research progress in a million years if you can’t also do it in five minutes. It’s easy to fall into a trap of (either implicitly or explicitly) conceptualizing “research” as “first studying and learning what’s already been figured out, and then attempting to push the boundaries and contribute new content.”
The problem with this frame (according to us) is that it leads people to optimize for absorbing information, rather than seeking it instrume...
Dec 12, 2021
The Pavlov Strategy by sarahconstantin
08:20
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Pavlov Strategy, published by sarahconstantin on the LessWrong.
Epistemic Status: Common knowledge, just not to me
The Evolution of Trust is a deceptively friendly little interactive game. Near the end, there’s a “sandbox” evolutionary game theory simulator. It’s pretty flexible. You can do quick experiments in it without writing code. I highly recommend playing around.
One of the things that surprised me was a strategy the game calls Simpleton, also known in the literature as Pavlov. In certain conditions, it works pretty well — even better than tit-for-tat or tit-for-tat with forgiveness.
Let’s set the framework first. You have a Prisoner’s dilemma type game.
If both parties cooperate, they each get +2 points.
If one cooperates and the other defects, the defector gets +3 points and the cooperator gets -1 point
If both defect, both get 0 points.
This game is iterated — you’re randomly assigned to a partner and you play many rounds. Longer rounds reward more cooperative strategies; shorter rounds reward more defection.
It’s also evolutionary — you have a proportion of bots each playing their strategies, and after each round, the bots with the most points replicate and the bots with the least points die out. Successful strategies will tend to reproduce while unsuccessful ones die out. In other words, this is the Darwin Game.
Finally, it’s stochastic — there’s a small probability that any bot will make a mistake and cooperate or defect at random.
Now, how does Pavlov work?
Pavlov starts off cooperating. If the other player cooperates with Pavlov, Pavlov keeps doing whatever it’s doing, even if it was a mistake; if the other player defects, Pavlov switches its behavior, even if it was a mistake.
In other words, Pavlov:
cooperates when you cooperate with it, except by mistake
“pushes boundaries” and keeps defecting when you cooperate, until you retaliate
“concedes when punished” and cooperates after a defect/defect result
“retaliates against unprovoked aggression”, defecting if you defect on it while it cooperates.
If there’s any randomness, Pavlov is better at cooperating with itself than Tit-For-Tat. One accidental defection and two Tit-For-Tats are stuck in an eternal defect cycle, while Pavlov’s forgive each other and wind up back in a cooperate/cooperate pattern.
Moreover, Pavlov can exploit CooperateBot (if it defects by accident, it will keep greedily defecting against the hapless CooperateBot, while Tit-For-Tat will not) but still exerts some pressure against DefectBot (defecting against it half the time, compared to Tit-For-Tat’s consistent defection.)
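(An illustrative aside, not part of the original post: the framework and the Pavlov rule described above can be sketched directly in Python. The payoff table matches the numbers given earlier; the round count and 5% noise rate are arbitrary choices for illustration.)

```python
import random

# Payoffs from the framework above: (my move, their move) -> my points.
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}

def pavlov(my_last, their_last):
    # Win-stay, lose-shift: keep the previous move if the other player
    # cooperated last round; switch it if they defected.
    if my_last is None:
        return "C"                      # start off cooperating
    return my_last if their_last == "C" else ("D" if my_last == "C" else "C")

def tit_for_tat(my_last, their_last):
    # Cooperate first, then copy the other player's previous move.
    return "C" if their_last is None else their_last

def play(strategy_a, strategy_b, rounds=200, noise=0.05, seed=0):
    rng = random.Random(seed)
    a_prev = b_prev = None
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(a_prev, b_prev)
        b = strategy_b(b_prev, a_prev)
        # Stochastic "slipped finger": each bot sometimes plays the wrong move.
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        a_prev, b_prev = a, b
    return score_a, score_b

print("Pavlov vs Pavlov:", play(pavlov, pavlov))
print("TFT    vs TFT:   ", play(tit_for_tat, tit_for_tat))
print("Pavlov vs TFT:   ", play(pavlov, tit_for_tat))
```

With noise switched on, a pair of Pavlovs typically recovers mutual cooperation within a couple of rounds of a slip, while a pair of Tit-For-Tats tends to get stuck trading defections, which is the comparison the post turns to next.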
The interesting thing is that Pavlov can beat Tit-For-Tat or Tit-for-Tat-with-Forgiveness in a wide variety of scenarios.
If there are only Pavlov and Tit-For-Tat bots, Tit-For-Tat has to start out outnumbering Pavlov quite significantly in order to win. The same is true for a population of Pavlov and Tit-For-Tat-With-Forgiveness. It doesn’t change if we add in some Cooperators or Defectors either.
Why?
Compared to Tit-For-Tat, Pavlov cooperates better with itself. If two Tit-For-Tat bots are paired, and one of them accidentally defects, they’ll be stuck in a mutual defection equilibrium. However, if one Pavlov bot accidentally defects against its clone, we’ll see
C/D -> D/D -> C/C
which recovers a mutual-cooperation equilibrium and picks up more points.
Compared to Tit-For-Tat-With-Forgiveness, Pavlov cooperates worse with itself (it takes longer to recover from mistakes) but it “exploits” TFTWF’s patience better. If Pavlov accidentally defects against TFTWF, the result is
D/C -> D/C -> D/D -> C/D -> D/D -> C/C,
which leaves Pavlov with a net gain of 1 point per turn over the first five turns (before a cooperative equilibrium is reached), compared to TFTWF's 1/5 point per turn.
If TFTWF accidentally defects against Pavl...
Dec 12, 2021
Tell Culture by LoganStrohl
04:23
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Tell Culture, published by LoganStrohl on the LessWrong.
Followup to: Ask and Guess
Ask culture: "I'll be in town this weekend for a business trip. Is it cool if I crash at your place?" Response: “Yes” or “no”.
Guess culture: "Hey, great news! I'll be in town this weekend for a business trip!" Response: Infer that they might be telling you this because they want something from you, conclude that they might want a place to stay, and offer your hospitality only if you want to. Otherwise, pretend you didn’t infer that.
The two basic rules of Ask Culture: 1) Ask when you want something. 2) Interpret things as requests and feel free to say "no".
The two basic rules of Guess Culture: 1) Ask for things if, and only if, you're confident the person will say "yes". 2) Interpret requests as expectations of "yes", and, when possible, avoid saying "no".
Both approaches come with costs and benefits. In the end, I feel pretty strongly that Ask is superior.
But these are not the only two possibilities!
"I'll be in town this weekend for a business trip. I would like to stay at your place, since it would save me the cost of a hotel, plus I would enjoy seeing you and expect we’d have some fun. I'm looking for other options, though, and would rather stay elsewhere than inconvenience you." Response: “I think I need some space this weekend. But I’d love to get a beer or something while you’re in town!” or “You should totally stay with me. I’m looking forward to it.”
There is a third alternative, and I think it's probably what rationalist communities ought to strive for. I call it "Tell Culture".
The two basic rules of Tell Culture: 1) Tell the other person what's going on in your own mind whenever you suspect you'd both benefit from them knowing. (Do NOT assume others will accurately model your mind without your help, or that it will even occur to them to ask you questions to eliminate their ignorance.) 2) Interpret things people tell you as attempts to create common knowledge for shared benefit, rather than as requests or as presumptions of compliance.
Suppose you’re in a conversation that you’re finding aversive, and you can’t figure out why. Your goal is to procure a rain check.
Guess: You see this annoyed body language? Huh? Look at it! If you don’t stop talking soon I swear I’ll start tapping my foot. (Or, possibly, tell a little lie to excuse yourself. “Oh, look at the time.”)
Ask: “Can we talk about this another time?”
Tell: "I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."
Here are more examples from my own life:
"I didn't sleep well last night and am feeling frazzled and irritable today. I apologize if I snap at you during this meeting. It isn’t personal."
"I just realized this interaction will be far more productive if my brain has food. I think we should head toward the kitchen."
"It would be awfully convenient networking for me to stick around for a bit after our meeting to talk with you and [the next person you're meeting with]. But on a scale of one to ten, it's only about 3 useful to me. If you'd rate the loss of utility for you as two or higher, then I have a strong preference for not sticking around."
The burden of honesty is even greater in Tell culture than in Ask culture. To a Guess culture person, I imagine much of the above sounds passive aggressive or manipulative, much worse than the rude bluntness of mere Ask. It’s because Guess people aren’t expecting relentless truth-telling, which is exactly what’s necessary here.
If you’re occasionally dishonest and tell people you want things you don't actually care about--like their comfort or convenience--they’ll learn not to trust you, and the inherent freedom of the system will be lost. They’ll learn that you only pretend to care about them to take...
Dec 12, 2021
Other people are wrong vs I am right by Buck
13:12
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Other people are wrong vs I am right, published by Buck on the LessWrong.
I’ve recently been spending some time thinking about the rationality mistakes I’ve made in the past. Here’s an interesting one: I think I have historically been too hasty to go from “other people seem very wrong on this topic” to “I am right on this topic”.
Throughout my life, I’ve often thought that other people had beliefs that were really repugnant and stupid. Now that I am older and wiser, I still think I was correct to think that these ideas were repugnant and stupid. Overall I was probably slightly insufficiently dismissive of things like the opinions of apparent domain experts and the opinions of people who seemed smart whose arguments I couldn’t really follow. I also overrated conventional wisdom about factual claims about how the world worked, though I underrated conventional wisdom about how to behave.
Examples of ideas where I thought the conventional wisdom was really dumb:
I thought that animal farming was a massive moral catastrophe, and I thought it was a sign of terrible moral failure that almost everyone around me didn’t care about this and wasn’t interested when I brought it up.
I thought that AI safety was a big deal, and I thought the arguments against it were all pretty stupid. (Nowadays the conventional wisdom has a much higher opinion of AI safety; I’m talking about 2010-2014.)
I thought that people have terrible taste in economic policy, and that they mostly vote for good-sounding stuff that stops sounding good if you think about it properly for even a minute
I was horrified by people proudly buying products that said “Made in Australia” on them; I didn’t understand how that wasn’t obviously racist, and I thought that we should make it much easier to allow anyone who wants to, to come live in Australia. (This one has become much less controversial since Trump inadvertently convinced liberals that they should be in favor of immigration liberalization.)
I thought and still think that a lot of people’s arguments about why it’s good to call the police on bike thieves were dumb. See eg many of the arguments people made in response to a post of mine about this (that in fairness was a really dumb post, IMO)
I think I was right about other people being wrong. However, I think that my actual opinions on these topics were pretty confused and wrong, much more than I thought at the time. Here’s how I updated my opinion for all the things above:
I have updated against the simple view of hedonic utilitarianism under which it’s plausible that simple control systems can suffer. A few years ago, I was seriously worried that the future would contain much more factory farming and therefore end up net negative; I now think that I overrated this fear, because (among other arguments) almost no-one actually endorses torturing animals, we just do it out of expediency, and in the limit of better technology our weak preferences will override our expediency.
My understanding of AI safety was “eventually someone will build a recursively self improving singleton sovereign AGI, and we need to figure out how to build it such that it can have an off switch and it implements some good value function instead of something bad.” I think this picture was massively oversimplified. On the strategic side, I didn’t think about the possibilities of slower takeoffs or powerful technologies without recursive self improvement; on the technical safety side, I didn’t understand that it’s hard to even build a paperclip maximizer, and a lot of our effort might go into figuring out how to do that.
Other people have terrible taste in economic policy, but I think that I was at the time overconfident in various libertarianish ideas that I’m now less enthusiastic about. Also, I no longer think it’s a slam dunk that society is b...
Dec 12, 2021
Your Cheerful Price by Eliezer Yudkowsky
23:48
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Your Cheerful Price, published by Eliezer Yudkowsky on the LessWrong.
There's a concept I draw on often in social interactions. I've been calling it the "happy price", but that is originally terminology by Michael Ellsberg with subtly different implications. So I now fork off the term "cheerful price", and specialize it anew. Earlier Facebook discussion here.
Tl;dr:
When I ask you for your Cheerful Price for doing something, I'm asking you for the price that:
Gives you a cheerful feeling about the transaction;
Makes you feel energized about doing the thing;
Doesn't generate an ouch feeling to be associated with me;
Means I'm not expending social capital or friendship capital to make the thing happen;
Doesn't require the executive part of you, that knows you need money in the long-term, to shout down and override other parts of you.
The Cheerful Price is not:
A "fair" price;
The price you would pay somebody else to do similar work;
The lowest price such that you'd feel sad about learning the transaction was canceled;
The price that you'd charge a non-friend, though this is a good thing to check (see below);
A price you're willing to repeat for future transactions, though this is a good thing to check (see below);
The bare minimum amount of money such that you feel cheerful. It should include some safety margin to account for fluctuating feelings.
The point of a Cheerful Price, from my perspective as somebody who's usually the one trying to emit money, is that:
It lets me avoid the nightmare of accidentally inflicting small ouches on people;
It lets me avoid the nightmare of spending social capital while having no idea how much I'm spending;
It lets me feel good instead of bad about asking other people to do things.
Warnings:
Not everybody was raised with an attitude of "money is the unit of caring and the medium of cooperation" towards exchanges with an overt financial element. Some people may just not have a monetary price for some things, such that the exchange would boost rather than hurt their friendship, and their feelings too are valid. Not as valid as mine, of course, but still valid.
"I don't have a cheerful price for that, would you like a non-cheerful price" is a valid response.
Any time you ask somebody for a Cheerful Price, you are implicitly promising not to hold any price they name against them, even if it's a billion dollars.
If Tell Culture doesn't work for someone, Cheerful Prices may not work for them either.
If a friend didn't already ask for your cheerful price, it may be good to explicitly tell them when you're naming your cheerful price rather than your minimum price.
Life does not promise us that we will always get our Cheerful Prices, even from our friends.
Q: Why is my Cheerful Price not the same as the minimum price that would make me prefer doing the transaction to not doing it? If, on net, I'd rather do something than not do it, and I get to do it, shouldn't I feel cheerful about that?
As an oversimplified model, imagine that your mind consists of a bunch of oft-conflicting desires, plus a magic executive whose job it is to decide what to do in the end. This magic executive also better understands concepts like "hyperbolic discounting" that the wordless voices don't understand as well.
Now suppose that I don't want to hurt you even a little; and that I live in terror of accidentally overdrawing on other people's senses of friendship or obligation towards me; and that I worry about generating small ouches that your mind will thereafter associate with me.
In this case I do not want to offer you the minimum price such that your executive part, which knows you need money in the long-term, would forcibly override your inner voices that don't understand hyperbolic discounting, and force them to accept the offer. Those parts of you may then feel b...
Dec 12, 2021
The Importance of Sidekicks by Swimmer963
07:39
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Importance of Sidekicks, published by Swimmer963 on the LessWrong.
[Reposted from my personal blog.]
Mindspace is wide and deep. “People are different” is a truism, but even knowing this, it’s still easy to underestimate.
I spent much of my initial engagement with the rationality community feeling weird and different. I appreciated the principle and project of rationality as things that were deeply important to me; I was pretty pro-self improvement, and kept tsuyoku naritai as my motto for several years. But the rationality community, the people who shared this interest of mine, often seemed baffled by my values and desires. I wasn’t ambitious, and had a hard time wanting to be. I had a hard time wanting to be anything other than a nurse.
It wasn’t until this August that I convinced myself that this wasn’t a failure in my rationality, but rather a difference in my basic drives. It’s around then, in the aftermath of the 2014 CFAR alumni reunion, that I wrote the following post.
I don’t believe in life-changing insights (that happen to me), but I think I’ve had one–it’s been two weeks and I’m still thinking about it, thus it seems fairly safe to say I did.
At a CFAR Monday test session, Anna was talking about the idea of having an “aura of destiny”–it’s hard to fully convey what she meant and I’m not sure I get it fully, but something like seeing yourself as you’ll be in 25 years once you’ve saved the world and accomplished a ton of awesome things. She added that your aura of destiny had to be in line with your sense of personal aesthetic, to feel “you.”
I mentioned to Kenzi that I felt stuck on this because I was pretty sure that the combination of ambition and being the locus of control that “aura of destiny” conveyed to me was against my sense of personal aesthetic.
Kenzi said, approximately [I don't remember her exact words]: “What if your aura of destiny didn’t have to be those things? What if you could be like... Samwise, from Lord of the Rings? You’re competent, but most importantly, you’re loyal to Frodo. You’re the reason that the hero succeeds.”
I guess this isn’t true for most people–Kenzi said she didn’t want to keep thinking of other characters who were like this because she would get so insulted if someone kept comparing her to people’s sidekicks–but it feels like now I know what I am.
So. I’m Samwise. If you earn my loyalty, by convincing me that what you’re working on is valuable and that you’re the person who should be doing it, I’ll stick by you whatever it takes, and I’ll make sure you succeed. I don’t have a Frodo right now. But I’m looking for one.
It then turned out that quite a lot of other people recognized this, so I shifted from “this is a weird thing about me” to “this is one basic personality type, out of many.” Notably, Brienne wrote the following comment:
Sidekick” doesn’t quite fit my aesthetic, but it’s extremely close, and I feel it in certain moods. Most of the time, I think of myself more as what TV tropes would call a “dragon”. Like the Witch-king of Angmar, if we’re sticking of LOTR. Or Bellatrix Black. Or Darth Vader. (It’s not my fault people aren’t willing to give the good guys dragons in literature.)
For me, finding someone who shared my values, who was smart and rational enough for me to trust him, and who was in a much better position to actually accomplish what I most cared about than I imagined myself ever being, was the best thing that could have happened to me.
She also gave me what’s maybe one of the best and most moving compliments I’ve ever received.
In Australia, something about the way you interacted with people suggested to me that you help people in a completely free way, joyfully, because it fulfills you to serve those you care about, and not because you want something from them. I was able to relax around you, an...
Dec 12, 2021
Extreme Rationality: It's Not That Great by Scott Alexander
12:36
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Extreme Rationality: It's Not That Great, published by Scott Alexander on the LessWrong.
Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities
Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.
For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.
And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.
So, what are these "benefits" of "x-rationality"?
A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:
I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.
There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.
I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.
Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact as far as I know the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.
There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making ...
Dec 12, 2021
Learned Blankness by AnnaSalamon
08:00
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Learned Blankness, published by AnnaSalamon on the LessWrong.
Related to: Semantic stopsigns, Truly part of you.
One day, the dishwasher broke. I asked Steve Rayhawk to look at it because he’s “good with mechanical things”.
“The drain is clogged,” he said.
“How do you know?” I asked.
He pointed at a pool of backed up water. “Because the water is backed up.”
We cleared the clog and the dishwasher started working.
I felt silly, because I, too, could have reasoned that out. The water wasn’t draining -- therefore, perhaps the drain was clogged. Basic rationality in action.[1]
But before giving it even ten seconds’ thought, I’d classified the problem as a “mechanical thing”. And I’d remembered I “didn’t know how mechanical things worked” (a cached thought). And then -- prompted by my cached belief that there was a magical “way mechanical things work” that some knew and I didn’t -- I stopped trying to think at all.
“Mechanical things” was for me a mental stopsign -- a blank domain that stayed blank, because I never asked the obvious next questions (questions like “does the dishwasher look unusual in any way? Why is there water at the bottom?”).
When I tutored math, new students acted as though the laws of exponents (or whatever we were learning) had fallen from the sky on stone tablets. They clung rigidly to the handed-down procedures. It didn’t occur to them to try to understand, or to improvise. The students treated math the way I treated broken dishwashers.
Martin Seligman coined the term "learned helplessness" to describe a condition in which someone has learned to behave as though they were helpless. I think we need a term for learned helplessness about thinking (in a particular domain). I’ll call this “learned blankness”[2]. Folks who fall prey to learned blankness may still take actions -- sometimes my students practiced the procedures again and again, hired a tutor, etc. But they do so as though carrying out rituals to an unknown god -- parts of them may be trying, but their “understand X” center has given up.
To avoid misunderstanding: calling a plumber, and realizing he knows more than you do, can be good. The thing to avoid is mentally walling off your own impressions; keeping parts of your map blank, because you imagine either that the domain itself is chaotic, or that one needs some special skillset to reason about it.
Notice your learned blankness
Learned blankness is common. My guess is that most of us treat most of our environment as blank givens inaccessible to reason[3]. To spot it in yourself, try comparing yourself to the following examples:
1. Sandra runs helpless to her roommate when her computer breaks -- she isn’t “good with computers”. Her roommate, by contrast, clicks on one thing and then another, doing Google searches and puzzling it out.[4]
2. Most scientists know the scientific method is good (and that e.g. p-values of 0.05 are good). But many not only don’t understand why the scientific method (or these p-values) are good -- they don’t understand that it’s the sort of thing one could understand.
3. Many respond to questions about consciousness, morality, or God by expecting that some other, special kind of reasoning is needed, and, thus, walling off and distrusting their own impressions.
4. Fred finds he has an intuition about how serious nano risks are. His intuition is a blank for him; something he can act on or ignore, but not examine. It doesn’t occur to him that he could examine the causes of his intuition[5], or could examine the accuracy rate of similar intuitions.
5. I find it hard to fully try to write fiction -- though a drink of alcohol helps. The trouble is that since I’m unskilled at fiction-writing, and since I find it painful to notice my un-skill, most of my mind prefers to either not write at all, or to write half-heartedly...
Dec 12, 2021
What cognitive biases feel like from the inside by chaosmage
07:06
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: What cognitive biases feel like from the inside, published by chaosmage on the LessWrong.
Building on the recent SSC post Why Doctors Think They’re The Best...
| What it feels like for me | How I see others who feel the same |
| --- | --- |
| There is controversy on the subject but there shouldn't be because the side I am on is obviously right. | They have taken one side in a debate that is unresolved for good reason that they are struggling to understand. |
| I have been studying this carefully. | They preferentially seek out conforming evidence. |
| The arguments for my side make obvious sense, they're almost boring. | They're very ready to accept any and all arguments for their side. |
| The arguments for the opposing side are contradictory, superficial, illogical or debunked. | They dismiss arguments for the opposing side at the earliest opportunity. |
| The people on the opposing side believe these arguments mostly because they are uninformed, have not thought about it enough or are being actively misled by people with bad motives. | The flawed way they perceive the opposing side makes them confused about how anyone could be on that side. They resolve that confusion by making strong assumptions that can approach conspiracy theories. |
The scientific term for this mismatch is: confirmation bias
| What it feels like for me | How I see others who feel the same |
| --- | --- |
| My customers/friends/relationships love me, so I am good for them, so I am probably just generally good. | They neglect the customers / friends / relationships that did not love them and have left, so they overestimate how good they are. |
| When customers / friends / relationships switch to me, they tell horror stories of who I'm replacing for them, so I'm better than those. | They don't see the people who are happy with who they have and therefore never become their customers / friends / relationships. |
The scientific term for this mismatch is: selection bias
| What it feels like for me | How I see others who feel the same |
| --- | --- |
| Although I am smart and friendly, people don't listen to me. | Although they are smart and friendly, they are hard to understand. |
| I have a deep understanding of the issue that people are too stupid or too disinterested to come to share. | They are failing to communicate their understanding, or to give unambiguous evidence they even have it. |
| This lack of being listened to affects several areas of my life but it is particularly jarring on topics that are very important to me. | This bad communication affects all areas of their life, but on the unimportant ones they don't even understand that others don't understand them. |
The scientific term for this mismatch is: illusion of transparency
| What it feels like for me | How I see others who feel the same |
| --- | --- |
| I knew at the time this would not go as planned. | They did not predict what was going to happen. |
| The plan was bad and we should have known it was bad. | They fail to appreciate how hard prediction is, so the mistake seems more obvious to them than it was. |
| I knew it was bad, I just didn't say it, for good reasons (e.g. out of politeness or too much trust in those who made the bad plan) or because it is not my responsibility or because nobody listens to me anyway. | In order to avoid blame for the seemingly obvious mistake, they are making up excuses. |
The scientific term for this mismatch is: hindsight bias
| What it feels like for me | How I see others who feel the same |
| --- | --- |
| I have a good intuition; even decisions I make based on insufficient information tend to turn out to be right. | They tend to recall their own successes and forget their own failures, leading to an inflated sense of past success. |
| I know early on how well certain projects are going to go or how well I will get along with certain people. | They make self-fulfilling prophecies that directly influence how much effort they put into a project or relationship. |
Compared to other...
Dec 12, 2021
Hiring engineers and researchers to help align GPT-3 by paulfchristiano
04:52
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Hiring engineers and researchers to help align GPT-3, published by paulfchristiano on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
My team at OpenAI, which works on aligning GPT-3, is hiring ML engineers and researchers. Apply here for the ML engineer role and here for the ML researcher role.
GPT-3 is similar enough to "prosaic" AGI that we can work on key alignment problems without relying on conjecture or speculative analogies. And because GPT-3 is already being deployed in the OpenAI API, its misalignment matters to OpenAI’s bottom line — it would be much better if we had an API that was trying to help the user instead of trying to predict the next word of text from the internet.
I think this puts our team in a great place to have an impact:
If our research succeeds I think it will directly reduce existential risk from AI. This is not meant to be a warm-up problem, I think it’s the real thing.
We are working with state of the art systems that could pose an existential risk if scaled up, and our team’s success actually matters to the people deploying those systems.
We are working on the whole pipeline from “interesting idea” to “production-ready system,” building critical skills and getting empirical feedback on whether our ideas actually work.
We have the real-world problems to motivate alignment research, the financial support to hire more people, and a research vision to execute on. We are bottlenecked by excellent researchers and engineers who are excited to work on alignment.
What the team does
In the past, Reflection focused on fine-tuning GPT-3 using a reward function learned from human feedback. Our most recent results are here, and had the unusual virtue of simultaneously being exciting enough to ML researchers to be accepted at NeurIPS while being described by Eliezer as “directly, straight-up relevant to real alignment problems.”
We’re currently working on three things:
[20%] Applying basic alignment approaches to the API, aiming to close the gap between theory and practice.
[60%] Extending existing approaches to tasks that are too hard for humans to evaluate; in particular, we are training models that summarize more text than human trainers have time to read. Our approach is to use weaker ML systems operating over shorter contexts to help oversee stronger ones over longer contexts. This is conceptually straightforward but still poses significant engineering and ML challenges.
[20%] Conceptual research on domains that no one knows how to oversee and empirical work on debates between humans (see our 2019 writeup). I think the biggest open problem is figuring out how and if human overseers can leverage “knowledge” the model acquired during training (see an example here).
If successful, ideas will eventually move up this list, from the conceptual stage to ML prototypes to real deployments. We’re viewing this as practice for integrating alignment into transformative AI deployed by OpenAI or another organization.
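(An illustrative aside, not from the original post: one generic way to read the "weaker systems over shorter contexts help oversee stronger ones over longer contexts" idea in the second bullet above is recursive summarization, sketched below. The function summarize_short is a hypothetical stand-in for any short-context summarizer; this is not OpenAI's actual pipeline.)

```python
# Generic sketch: handle text that is too long to read directly by
# splitting it into short chunks, summarizing each chunk, and then
# summarizing the concatenated summaries. `summarize_short` is a
# hypothetical placeholder, NOT a real OpenAI API call.

def summarize_short(text):
    # Stand-in for a short-context model (or a human trainer) that can
    # only handle small pieces. Truncation keeps the sketch runnable.
    return text[:100]

def chunks(text, size=2000):
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize_recursively(text, max_len=2000):
    if len(text) <= max_len:
        return summarize_short(text)
    partial = " ".join(summarize_short(c) for c in chunks(text, max_len))
    return summarize_recursively(partial, max_len)

print(summarize_recursively("a very long document " * 2000))
```

The point of the structure is that each individual summarization step stays short enough for a weaker overseer (or a human) to check, even though no single overseer ever reads the whole input.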
What you’d do
Most people on the team do a subset of these core tasks:
Design+build+maintain code for experimenting with novel training strategies for large language models. This infrastructure needs to support a diversity of experimental changes that are hard to anticipate in advance, work as a solid base to build on for 6-12 months, and handle the complexity of working with large language models. Most of our code is maintained by 1-3 people and consumed by 2-4 people (all on the team).
Oversee ML training. Evaluate how well models are learning, figure out why they are learning badly, and identify+prioritize+implement changes to make them learn better. Tune hyperparameters and manage computing resources. Process datasets for machine consumption; understand datasets and how they affect the model...
Dec 12, 2021
Arbital postmortem by alexei
30:22
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Arbital postmortem , published by alexei on the LessWrong.
Disclaimer 1: These views are my own and don’t necessarily reflect the views of anyone else (Eric, Steph, or Eliezer).
Disclaimer 2: Most of the events happened at least a year ago. My memory is not particularly great, so the dates are fuzzy and a few things might be slightly out of order. But this post has been reviewed by Eric, Steph, and Eliezer, so it should mostly be okay.
I’m going to list events chronologically. At times I’ll insert a “Reflection” paragraph, where I’m going to outline my thoughts as of now. I’ll talk about what I could have done differently and how I would approach a similar problem today.
Chapter 0: Eliezer pitches Arbital and I say ‘no’
Around the summer of 2014 Eliezer approached me with the idea for what later would become Arbital. At first, I vaguely understood the idea as some kind of software to map out knowledge. Maybe something like a giant mind map, but not graphical. I took some time to research existing and previous projects in that area and found a huge graveyard of projects that have been tried. Yes, basically all of them were dead. Most were hobby projects, but some seemed pretty serious. None were successful, as far as I could tell. I didn’t see how Eliezer’s project was different, so I passed on it.
Reflection: Today, I’d probably try to sit down with Eliezer for longer and really try to understand what he is seeing that I’m not. It’s likely back then I didn’t have the right skills to extract that information, but I think I’m much better at it today.
Reflection: Also, after working with Eliezer for a few years, I’ve got a better feeling for how things he says often seem confusing / out of alignment / tilted, until you finally wrap your mind around it, and then it’s crystal clear and easy.
Chapter 1: Eliezer and I start Arbital
Early January 2015 I was sitting in my room, tired from looking in vain for a decent startup idea, when Arbital popped back into my mind. There were still a lot of red flags around the idea, but I rationalized to myself that given Eliezer’s track record, there was probably something good here. And, in the worst case, I’d just create a tool that would be useful to Eliezer alone. That didn’t seem like a bad outcome, so I decided to do it. I contacted Eliezer, he was still interested, and so we started the project.
Reflection: The decision process sounds a bit silly, but I don’t think it’s a bad one. I really prefer to do something decently useful, rather than sit around waiting for something perfect. I also still approve of the heuristic of accepting quests / projects from people you think are good at coming up with quests / projects. But if I did it again, I’d definitely put a lot more effort upfront to understand the entire vision before committing to it.
Reflection: Paul Graham wrote in one of his essays that it’s okay (though not ideal) to initially build a product for just one user. There are, of course, several caveats. The user needs to use the product extensively, otherwise you don’t get the necessary feedback on all the features you’re building. And the user needs to be somewhat typical of other users you hope to attract to the platform.
Reflection: Unfortunately, both of these turned out to be false. I’ll elaborate on the feature usage below. But the “typical” part probably could have been foreseen. There are only a few people in the world who write explanations at the scale and complexity that Eliezer does. The closest cluster is probably people writing college textbooks. So, in the beginning, I didn’t have any sense for who the first 10-100 users were going to be. That would have been fine if I was just building a tool for Eliezer, but since my goal was explicitly to create a for-profit consumer startup, this was a big mistake.
Eliezer ...
Dec 12, 2021
Personalized Medicine For Real by sarahconstantin
09:14
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Personalized Medicine For Real, published by sarahconstantin on the LessWrong.
I was part of the founding team at MetaMed, a personalized medicine startup. We went out of business back in 2015. We made a lot of mistakes due to inexperience, some of which I deeply regret.
I’m reflecting on that now, because Perlara just went out of business, and they got a lot farther on our original dream than we ever did. Q-State Biosciences, which is still around, is using a similar model.
The phenomenon that inspired MetaMed is that we knew of stories of heroic, scientifically literate patients and families of patients with incurable diseases, who came up with cures for their own conditions. Physicist Leo Szilard, the “father of the atom bomb”, designed a course of radiation therapy to cure his own bladder cancer. Computer scientist Matt Might analyzed his son’s genome to find a cure for his rare disorder. Cognitive scientist Joshua Tenenbaum found a personalized treatment for his father’s cancer.
So, we thought, could we try to scale up this process to help more people?
In Lois McMaster Bujold’s science fiction novels, the hero suffers an accident that leaves him with a seizure disorder. He goes to a medical research center and clinic, the Durona Group, and they design a neural prosthetic for him that prevents the seizures.
This sounds like it ought to be a thing that exists. Patient-led, bench-to-bedside drug discovery or medical device engineering. You get an incurable disease, you fund scientists/doctors/engineers to discover a cure, and now others with the disease can also be cured.
There’s actually a growing community of organizations trying to do things sort of in this vein. Recursion Pharmaceuticals, where I used to work, does drug discovery for rare diseases. Sv.ai organizes hackathons for analyzing genetic data to help patients with rare diseases find the root cause. Perlara and Q-state use animal models and in-vitro models respectively to simulate patients’ disorders, and then look for drugs or gene therapies that reverse those disease phenotypes in the animals or cells.
Back at MetaMed, I think we were groping towards something like this, but never really found our way there.
One reason is that we didn’t narrow our focus enough. We were trying to solve too many problems at once, all called “personalized medicine.”
Personalized Lifestyle Optimization
Some “personalized medicine” is about health optimization for basically healthy people. A lot of it amounts to superficial personalization on top of generic lifestyle advice. Harmless, but more of a marketing thing than a science thing, and not very interesting from a humanitarian perspective. Sometimes, we tried to get clients from this market. I pretty much always thought this was a bad idea.
Personalized Medicine For All
Some “personalized medicine” is about the claim that the best way to treat even common diseases often depends on individual factors, such as genes.
This was part of our pitch, but as I learned more, I came to believe that this kind of “personalization” has very little applicability. In most cases, we don’t know enough about how genes affect response to treatment to be able to improve outcomes by stratifying treatments based on genes. In the few cases where we know people with different genes need different treatments, it’s often already standard medical practice to run those tests. I now think there’s not a clear opportunity for a startup to improve the baseline through this kind of personalized medicine.
Preventing Medical Error
Some of our founding inspirations were the work of Gerd Gigerenzer and Atul Gawande, who showed that medical errors were the cause of many deaths, that doctors tend to be statistically illiterate, and that systematizing tools like checklists and statistical prediction rules save...
Dec 12, 2021
A LessWrong Crypto Autopsy by Scott Alexander
06:22
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A LessWrong Crypto Autopsy, published by Scott Alexander on the LessWrong.
Wei Dai, one of the first people Satoshi Nakamoto contacted about Bitcoin, was a frequent Less Wrong contributor. So was Hal Finney, the first person besides Satoshi to make a Bitcoin transaction.
The first mention of Bitcoin on Less Wrong, a post called Making Money With Bitcoin, was in early 2011 - when it was worth 91 cents. Gwern predicted that it could someday be worth "upwards of $10,000 a bitcoin". He also quoted Moldbug, who advised that:
If Bitcoin becomes the new global monetary system, one bitcoin purchased today (for 90 cents, last time I checked) will make you a very wealthy individual...Even if the probability of Bitcoin succeeding is epsilon, a million to one, it's still worthwhile for anyone to buy at least a few bitcoins now...I would not put it at a million to one, though, so I recommend that you go out and buy a few bitcoins if you have the technical chops. My financial advice is to not buy more than ten, which should be F-U money if Bitcoin wins.
A few people brought up some other points, like that if it ever became popular people might create a bunch of other cryptocurrencies, or that if there was too much controversy the Bitcoin economy might have to fork. The thread got a hundred or so comments before dying down.
But Bitcoin kept getting mentioned on Less Wrong over the next few years. It's hard to select highlights, but one of them is surely Ander's Why You Should Consider Buying Bitcoin Right Now If You Have High Risk Tolerance from January 2015. Again, people made basically the correct points and the correct predictions, and the thread got about a hundred comments before dying down.
I mention all this because of an idea, with a long history in this movement, that "rationalists should win". They should be able to use their training in critical thinking to recognize more opportunities, make better choices, and end up with more of whatever they want. So far it's been controversial to what degree we've lived up to that hope, or to what degree it's even realistic.
Well, suppose God had decided, out of some sympathy for our project, to make winning as easy as possible for rationalists. He might have created the biggest investment opportunity of the century, and made it visible only to libertarian programmers willing to dabble in crazy ideas. And then He might have made sure that all of the earliest adopters were Less Wrong regulars, just to make things extra obvious.
This was the easiest test case of our "make good choices" ability that we could possibly have gotten, the one where a multiply-your-money-by-a-thousand-times opportunity basically fell out of the sky and hit our community on its collective head. So how did we do?
I would say we did mediocre.
According to the recent SSC survey, 9% of SSC readers made $1000+ from crypto as of 12/2017. Among people who were referred to SSC from Less Wrong - my stand-in for long-time LW regulars - 15% made over $1000 on crypto, nearly twice as many. A full 3% of LWers made over $100K. That's pretty good.
On the other hand, 97% of us - including me - didn't make over $100K. All we would have needed to do was invest $10 (or a few CPU cycles) back when people on LW started recommending it. But we didn't. How bad should we feel, and what should we learn?
Here are the lessons I'm taking from this.
1: Our epistemic rationality has probably gotten way ahead of our instrumental rationality
When I first saw the posts saying that cryptocurrency investments were a good idea, I agreed with them. I even Googled "how to get Bitcoin" and got a bunch of technical stuff that seemed like a lot of work. So I didn't do it.
Back in 2016, my father asked me what this whole "cryptocurrency" thing was, and I told him he should invest in Ethereum. He did, ...
Dec 12, 2021
The Zettelkasten Method by abramdemski
01:05:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Zettelkasten Method, published by abramdemski on the LessWrong.
[Epistemic Status: Scroll to the bottom for my follow-up thoughts on this from months/years later.]
Early this year, Conor White-Sullivan introduced me to the Zettelkasten method of note-taking. I would say that this significantly increased my research productivity. I’ve been saying “at least 2x”. Naturally, this sort of thing is difficult to quantify. The truth is, I think it may be more like 3x, especially along the dimension of “producing ideas” and also “early-stage development of ideas”. (What I mean by this will become clearer as I describe how I think about research productivity more generally.) However, it is also very possible that the method produces serious biases in the types of ideas produced/developed, which should be considered. (This would be difficult to quantify at the best of times, but also, it should be noted that other factors have dramatically decreased my overall research productivity. So, unfortunately, someone looking in from outside would not see an overall boost. Still, my impression is that it's been very useful.)
I think there are some specific reasons why Zettelkasten has worked so well for me. I’ll try to make those clear, to help readers decide whether it would work for them. However, I honestly didn’t think Zettelkasten sounded like a good idea before I tried it. It only took me about 30 minutes of working with the cards to decide that it was really good. So, if you’re like me, this is a cheap experiment. I think a lot of people should actually try it to see how they like it, even if it sounds terrible.
My plan for this document is to first give a short summary and then an overview of Zettelkasten, so that readers know roughly what I’m talking about, and can possibly experiment with it without reading any further. I’ll then launch into a longer discussion of why it worked well for me, explaining the specific habits which I think contributed, including some descriptions of my previous approaches to keeping research notes. I expect some of this may be useful even if you don’t use Zettelkasten -- if Zettelkasten isn’t for you, maybe these ideas will nonetheless help you to think about optimizing your notes. However, I put it here primarily because I think it will boost the chances of Zettelkasten working for you. It will give you a more concrete picture of how I use Zettelkasten as a thinking tool.
Very Short Summary
Materials
Staples index-cards-on-a-ring or equivalent, possibly with:
plastic rings rather than metal
different 3x5 index cards (I recommend blank, but, other patterns may be good for you) as desired
some kind of divider
I use yellow index cards as dividers, but slightly larger cards, tabbed cards, plastic dividers, etc. might be better
quality hole punch (if you’re using different cards than the pre-punched ones)
I like this one.
Blank stickers or some other way to label card-binders with the address range stored within.
quality writing instrument -- must suit you, but,
multi-color click pen recommended
hi-tec-c coleto especially recommended
Technique
Number pages with alphanumeric strings, so that pages can be sorted hierarchically rather than linearly -- 11a goes between 11 and 12, 11a1 goes between 11a and 11b, et cetera. This allows pages to be easily inserted between other pages without messing up the existing ordering, which makes it much easier to continue topics.
Use the alphanumeric page identifiers to “hyperlink” pages. This allows sub-topics and tangents to be easily split off into new pages, and also allows for related ideas to be interlinked.
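To make the sorting rule concrete, here is a minimal Python sketch (my own illustration, not part of the original post) of a comparison key under which the alphanumeric addresses sort hierarchically, so that 11a falls between 11 and 12, and 11a1 between 11a and 11b:

import re

def address_key(addr: str):
    # Split "11a1" into (11, "a", 1); tuple comparison then gives the
    # hierarchical order 11 < 11a < 11a1 < 11b < 12.
    parts = re.findall(r"\d+|[a-z]+", addr.lower())
    return tuple(int(p) if p.isdigit() else p for p in parts)

cards = ["12", "11", "11b", "11a1", "11a"]
print(sorted(cards, key=address_key))  # ['11', '11a', '11a1', '11b', '12']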
Before I launch into the proper description of Zettelkasten, here are some other resources on note-taking which I looked at before diving into using Zettelkasten myself. (Feel free to skip this part on a ...
|
Dec 12, 2021 |
The Fallacy of Gray by Eliezer Yudkowsky
07:26
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Fallacy of Gray, published by Eliezer Yudkowsky on the LessWrong.
The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”
The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view . . .”
Marc Stiegler, David’s Sling
I don’t know if the Sophisticate’s mistake has an official name, but I call it the Fallacy of Gray. We saw it manifested in the previous essay—the one who believed that odds of two to the power of seven hundred and fifty million to one, against, meant “there was still a chance.” All probabilities, to him, were simply “uncertain” and that meant he was licensed to ignore them if he pleased.
“The Moon is made of green cheese” and “the Sun is made of mostly hydrogen and helium” are both uncertainties, but they are not the same uncertainty.
Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black. Or even if not, we can still compare shades, and say “it is darker” or “it is lighter.”
Years ago, one of the strange little formative moments in my career as a rationalist was reading this paragraph from Player of Games by Iain M. Banks, especially the sentence in bold:
A guilty system recognizes no innocents. As with any power apparatus which thinks everybody’s either for it or against it, we’re against it. You would be too, if you thought about it. The very way you think places you amongst its enemies. This might not be your fault, because every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it. You come from one of the latter and you’re being asked to explain yourself to one of the former. Prevarication will be more difficult than you might imagine; neutrality is probably impossible. You cannot choose not to have the politics you do; they are not some separate set of entities somehow detachable from the rest of your being; they are a function of your existence. I know that and they know that; you had better accept it.
Now, don’t write angry comments saying that, if societies impose fewer of their values, then each succeeding generation has more work to start over from scratch. That’s not what I got out of the paragraph.
What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me.
It was the whole notion of the Quantitative Way applied to life-problems like moral judgments and the quest for personal self-improvement. That, even if you couldn’t switch something from on to off, you could still tend to increase it or decrease it.
Is this too obvious to be worth mentioning? I say it is not too obvious, for many bloggers have said of Overcoming Bias: “It is impossible, no one can completely eliminate bias.” I don’t care if the one is a professional economist, it is clear that they have not yet grokked the Quantitative Way as it applies to everyday life and matters like personal self-improvement. That which I cannot eliminate may be well worth reducing.
Or consider an exchange between Robin Hanson and Tyler Cowen.1 Robin Hanson said that he preferred to put at least 75% weight on the prescriptions of economic theory versus his intuitions: “I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment.” Tyler Cowen replied:
In my view there is no such thing as “straightforwardly applying economic theory” . . . theories are always applied through our personal and cultural filters and the...
|
Dec 12, 2021 |
Long Covid Is Not Necessarily Your Biggest Problem by Elizabeth
17:56
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Long Covid Is Not Necessarily Your Biggest Problem, published by Elizabeth on the LessWrong.
At this point, people I know are not that worried about dying from covid. We’re all vaccinated, we’re mostly young and healthy(ish), and it turns out the odds were always low for us. We’re also not that worried about hospitalization: it’s much more likely than death, but maintaining covid precautions indefinitely is very costly so by and large we’re willing to risk it.
The big unknown here has been long covid. Losing a few weeks to being extremely sick might be worth the risk, but a lifetime of fatigue and reduced cognition is a very big deal. With that in mind, I set out to do some math on what risks we were running. Unfortunately, baseline covid has barely been around long enough to have data on long covid, most of which is still terrible, and the vaccine and Delta variant have not been widespread long enough to have much data at all.
In the end, the conclusion I came to was that for vaccinated people under 40 with <=1 comorbidity, the cognitive risks of long covid are lost in the noise of other risks they commonly take. Coming to this conclusion involved reading a number of papers, but also a lot of emotional processing around risk and health. I’ve included that processing under a “personal stuff” section, which you can skip if you just want the info, but I encourage you to read it if you feel yourself starting to yell that I’m not taking small risks of great suffering seriously. I do encourage you to read the caveats section before deciding how much weight to put on my conclusions.
Personal Stuff
This post took a long time to write, much longer than I wanted, because this is not an abstract topic to me. I have chronic pain from nerve damage in my jaw caused by medical incompetence, and my attempts to seek treatment for this continually run into the brick wall of a medical system that doesn’t consider my pain important (tangent: if you have a pain specialist you trust, anywhere in the US, please e-mail me (elizabeth@acesounderglass.com)). I empathize very much with the long covid sufferers who are being told their suffering doesn’t exist because it’s too hard to measure and we can’t prove what caused it.
Additionally, I’m still suffering from side effects from my covid vaccine in April. It’s very minor, chest congestion that doesn’t seem to affect my lung capacity (but I don’t have a clear before picture, so hard to say for sure). But it’s getting worse and while my medical practitioners are taking it seriously, this + the experience with dental pain make me very sensitive to the possibility they might stop if it becomes too much work for them. As I type this, I am taking a supplement stack from a high end internet crackpot because first line treatment failed and there aren’t a lot of other options. And that’s just from the vaccine; I imagine if I actually had covid I would not be one of the people who shakes it off the way I describe later in this post.
All this is to say that when I describe the long term cognitive impact of covid as being too small to measure with our current tools against our current noise levels, that is very much not the same as saying it’s zero. It’s much worse than that. What I’m saying is that you are taking risks of similar levels of suffering and impairment constantly, which our health system is very bad at measuring, and against that background long covid does not make much of a difference for people within certain age and health parameters.
A common complaint when people say “X isn’t dangerous to the young and healthy” is that it implies the death and suffering of those who aren’t young and healthy don’t matter. I’m not saying that. It matters a lot, and it’s impossible for me to forget that because I’m very unlikely to be one of the people who gets to...
|
Dec 12, 2021 |
Cached Selves by AnnaSalamon
11:47
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Cached Selves, published by AnnaSalamon on the LessWrong.
by Anna Salamon and Steve Rayhawk (joint authorship)
Related to: Beware identity
Update, 2021: I believe a large majority of the priming studies failed replication, though I haven't looked into it in depth. I still personally do a great many of the "possible strategies" listed at the bottom, and they subjectively seem useful to me; but if you end up believing that, it should not be on the basis of the claimed studies.
A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."
Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.
To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing. So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.
All familiar phenomena, right? You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)
Consider the following research.
In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting. Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting. The theory is that the test subjects remembered calling the experiment interesting, and either:
Honestly figured they must have found the experiment interesting -- why else would they have said so for only $1? (This interpretation is called self-perception theory.), or
Didn’t want to think they were the type to lie for just $1, and so deceived themselves into thinking their lie had been true. (This interpretation is one strand within cognitive dissonance theory.)
In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was “wrong”. Some boys were given big threats, or were kept carefully su...
|
Dec 12, 2021 |
Simulate and Defer To More Rational Selves by LoganStrohl
08:03
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Simulate and Defer To More Rational Selves, published by LoganStrohl on the LessWrong.
I sometimes let imaginary versions of myself make decisions for me.
I first started doing this after a friend told me (something along the lines of) this story. When they first became executive director of their organization, they suddenly had many more decisions to deal with per day than ever before. "Should we hire this person?" "Should I go buy more coffee for the coffee machine, or wait for someone else to deal with it?" "How many participants should attend our first event?" "When can I schedule time to plan the fund drive?"
I'm making up these examples myself, but I'm sure you, too, can imagine how leading a brand new organization might involve a constant assault on the parts of your brain responsible for making decisions. They found it exhausting, and by the time they got home at the end of the day, a question like, "Would you rather we have peas or green beans with dinner?" often felt like the last straw. "I don't care about the stupid vegetables, just give me food and don't make me decide any more things!"
They were rescued by the following technique. When faced with a decision, they'd imagine "the Executive Director of the organization", and ask themselves, "What would 'the Executive Director of the organization' do?" Instead of making a decision, they'd make a prediction about the actions of that other person. Then, they'd just do whatever that person would do!
In my friend's case, they were trying to reduce decision fatigue. When I started trying it out myself, I was after a cure for something slightly different.
Imagine you're about to go bungee jumping off a high cliff. You know it's perfectly safe, and all you have to do is take a step forward, just like you've done every single time you've ever walked. But something is stopping you. The decision to step off the ledge is entirely yours, and you know you want to do it because this is why you're here. Yet here you are, still standing on the ledge.
You're scared. There's a battle happening in your brain. Part of you is going, "Just jump, it's easy, just do it!", while another part--the part in charge of your legs, apparently--is going, "NOPE. Nope nope nope nope NOPE." And you have this strange thought: "I wish someone would just push me so I don't have to decide."
Maybe you've been bungee jumping, and this is not at all how you responded to it. But I hope (for the sake of communication) that you've experienced this sensation in other contexts. Maybe when you wanted to tell someone that you loved them, but the phrase hovered just behind your lips, and you couldn't get it out. You almost wished it would tumble out of your mouth accidentally. "Just say it," you thought to yourself, and remained silent. For some reason, you were terrified of the decision, and inaction felt more like not deciding.
When I heard this story from my friend, I had social anxiety. I didn't have way more decisions than I knew how to handle, but I did find certain decisions terrifying, and was often paralyzed by them. For example, this always happened if someone I liked, respected, and wanted to interact with more asked to meet with me. It was pretty obvious to me that it was a good idea to say yes, but I'd agonize over the email endlessly instead of simply typing "yes" and hitting "send".
So here's what it looked like when I applied the technique. I'd be invited to a party. I'd feel paralyzing fear, and a sense of impending doom as I noticed that I likely believed going to the party was the right decision. Then, as soon as I felt that doom, I'd take a mental step backward and not try to force myself to decide. Instead, I'd imagine a version of myself who wasn't scared, and I'd predict what she'd do. If the party really wasn't a great idea, either be...
|
Dec 12, 2021 |
Gravity Turn by alkjash
09:00
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Gravity Turn, published by alkjash on the LessWrong.
This is a linkpost for/
[The first in a sequence of retrospective essays on my five years in math graduate school.]
My favorite analogy for graduate school is the gravity turn: the maneuver a rocket performs to get from the launch pad to orbit. I like to imagine a first-year graduate student as a Falcon X rocket, newly-constructed and tasked with delivering a six-ton payload into low Earth orbit.
Picture this: you begin graduate school, fresh as a rocket arriving at Cape Canaveral and bubbling with excitement for your maiden voyage. Your PhD adviser, on the other hand, is the Hubble Space Telescope. Let's call her Dr. Hubble (not to be confused with the astronomer of the same name). Dr. Hubble is ostensibly the ideal guide for your first orbit insertion. After all, she is famously good at staying in orbit - she’s been up there since 1990.
But problems quickly arise as you probe Dr. Hubble for advice on how to approach the launch. Namely:
She left Earth more than thirty years ago, and space technology has since been completely revolutionized.
She states all advice at an extremely high level with bird's-eye-view detachment, observing, as she is, from a vantage point a thousand miles overhead.
Most fatally, the Hubble Space Telescope vessel does not include the lower-stage rockets that brought her into space. In fact, she doesn’t include large engines of any kind. Her thirty years of experience free-falling in orbit will do you very little good until you break out of the stratosphere.
The problem is even worse than this, however. It is not that Dr. Hubble, despite her best intentions, gives outdated advice. It is not even that Dr. Hubble cannot consciously articulate all the illegible skills she’s reflexively performing to stay in orbit. The problem is that even if you could perfectly imitate what Dr. Hubble is doing right now, you would likely still crash and burn.
What I didn’t understand going into graduate school is that academic mathematicians are often working in a state akin to the free-fall of orbit. The Hubble Space Telescope remains in orbit around Earth because it travels horizontally so quickly that, even as it’s continuously accelerating towards the Earth, it continually misses. The laws of physics have arranged it so that it is not possible - barring deliberate sabotage - for her to fall back into a sub-orbital trajectory.
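As a quick back-of-envelope check on the free-fall picture (my own aside, assuming an altitude of roughly 540 km for Hubble, which is not a figure from the essay), the orbital speed works out to about 7.6 km/s:

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # mean radius of Earth, m
altitude = 540e3     # Hubble's approximate altitude in meters (assumption)

r = R_EARTH + altitude
v = math.sqrt(G * M_EARTH / r)     # circular orbital speed
period = 2 * math.pi * r / v / 60  # orbital period in minutes

print(f"speed ≈ {v/1000:.1f} km/s, period ≈ {period:.0f} min")  # ≈ 7.6 km/s, ≈ 95 min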
Similarly, a successful research professor is embedded in an intricate system that, as surely as Newton’s laws, keeps her in a state of steadily producing new research. Many of her ground-breaking papers are not one-off productions - they produce sequels, variants, and interdisciplinary applications year after year. She has cultivated dozens of long-time collaborators of the highest level who freely share ideas and research directions, and has the reputation to find more at will. She attends conferences every other month that keep her updated on the leading edge of the field. Every year her research group grows, as if by clockwork, adding a couple graduate students and postdocs to whom she can delegate projects with only the gentlest supervision. As a result, the careers of many other people depend on Dr. Hubble to continue producing research at a steady rate. Every incentive is aligned for objects in motion to stay in motion, and it would take deliberate sabotage to bring Dr. Hubble out of her successful research trajectory.
This is not to say that academic researchers all start cruising in free-fall after they leave graduate school or make tenure. It is perfectly normal for a spaceship that reaches orbit to proceed onto its next adventure after some rest, continuing on to visit another planet or leave the solar system altogether. The best researchers I know are similarly courageous, ta...
|
Dec 12, 2021 |
Feeling Rational by Eliezer Yudkowsky
05:21
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Feeling Rational, published by Eliezer Yudkowsky on the LessWrong.
Since curiosity is an emotion, I suspect that some people will object to treating curiosity as a part of rationality. A popular belief about “rationality” is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.
When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. System 2’s deliberative judgments aren’t always true, and System 1’s perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality.” Both systems can serve the goal of truth, or defeat it, depending on how they are used.
For my part, I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on mistake-producing epistemic conduct. “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.” Conversely, an emotion that is evoked by correct beliefs or truth-conducive thinking is a “rational emotion”; and this has the advantage of letting us regard calm as an emotional state, rather than a privileged default.
So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: “That which can be destroyed by the truth should be.” My dreaming self’s happiness was opposed by truth. My sadness on waking is rational; there is no truth which destroys it.
Rationality begins by asking how-the-world-is, but spreads virally to any other thought which depends on how we think the world is. Your beliefs about “how-the-world-is” can concern anything you think is out there in reality, anything that either does or does not exist, any member of the class “things that can make other things happen.” If you believe that there is a goblin in your closet that ties your shoes’ laces together, then this is a belief about how-the-world-is. Your shoes are real—you can pick them up. If there’s something out there that can reach out and tie your shoelaces together, it must be real too, part of the vast web of causes and effects we call the “universe.”
Feeling angry at the goblin who tied your shoelaces involves a state of mind that is not just about how-the-world-is. Suppose that, as a Buddhist or a lobotomy patient or just a very phlegmatic person, finding your shoelaces tied together didn’t make you angry. This wouldn’t affect what you expected to see in the world—you’d still expect to open up your closet and find your shoelaces tied together. Your anger or calm shouldn’t affect your best guess here, because what happens in your closet does not depend on your emotional state of mind; though it may take some effort to think that clearly.
But the angry feeling is tangled up with a state of mind that is about how-the-world-is; you become angry because you think the goblin tied your shoelaces. The criterion of rationality spreads virally, from the initial question of whether or not a goblin tied your shoelaces, to the resulting anger.
Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them. Sometimes we run away from strong feelings by denying the facts, by flinching away from the view of the world that gave rise to the powerful emotion. If so, then as you study the skills of rational...
|
Dec 12, 2021 |
Building up to an Internal Family Systems model by Kaj_Sotala
43:42
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Building up to an Internal Family Systems model, published by Kaj_Sotala on the LessWrong.
Introduction
Internal Family Systems (IFS) is a psychotherapy school/technique/model which lends itself particularly well for being used alone or with a peer. For years, I had noticed that many of the kinds of people who put in a lot of work into developing their emotional and communication skills, some within the rationalist community and some outside it, kept mentioning IFS.
So I looked at the Wikipedia page about the IFS model, and bounced off, since it sounded like nonsense to me. Then someone brought it up again, and I thought that maybe I should reconsider. So I looked at the WP page again, thought “nah, still nonsense”, and continued to ignore it.
This continued until I participated in CFAR mentorship training last September, and we had a class on CFAR’s Internal Double Crux (IDC) technique. IDC clicked really well for me, so I started using it a lot and also facilitating it to some friends. However, once we started using it on more emotional issues (as opposed to just things with empirical facts pointing in different directions), we started running into some weird things which it felt like IDC couldn’t quite handle -- things which reminded me of how people had been describing IFS. So I finally read up on it, and have been successfully applying it ever since.
In this post, I’ll try to describe and motivate IFS in terms which are less likely to give people in this audience the same kind of a “no, that’s nonsense” reaction as I initially had.
Epistemic status
This post is intended to give an argument for why something like the IFS model could be true and a thing that works. It’s not really an argument that IFS is correct. My reason for thinking in terms of IFS is simply that I was initially super-skeptical of it (more on the reasons of my skepticism later), but then started encountering things which it turned out IFS predicted - and I only found out about IFS predicting those things after I familiarized myself with it.
Additionally, I now feel that IFS gives me significantly more gears for understanding the behavior of both other people and myself, and it has been significantly transformative in addressing my own emotional issues. Several other people who I know report it having been similarly powerful for them. On the other hand, aside from a few isolated papers with titles like “proof-of-concept” or “pilot study”, there seems to be conspicuously little peer-reviewed evidence in favor of IFS, meaning that we should probably exercise some caution.
I think that, even if not completely correct, IFS is currently the best model that I have for explaining the observations that it’s pointing at. I encourage you to read this post in the style of learning soft skills - trying on this perspective, and seeing if there’s anything in the description which feels like it resonates with your experiences.
But before we talk about IFS, let’s first talk about building robots. It turns out that if we put together some existing ideas from machine learning and neuroscience, we can end up with a robot design that pretty closely resembles IFS’s model of the human mind.
What follows is an intentionally simplified story, which is simpler than either the full IFS model or a full account that would incorporate everything that I know about human brains. Its intent is to demonstrate that an agent architecture with IFS-style subagents might easily emerge from basic machine learning principles, without claiming that all the details of that toy model would exactly match human brains. A discussion of what exactly IFS does claim in the context of human brains follows after the robot story.
Wanted: a robot which avoids catastrophes
Suppose that we’re building a robot that we want to be generally intelligent. ...
|
Dec 12, 2021 |
How An Algorithm Feels From Inside by Eliezer Yudkowsky
07:03
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How An Algorithm Feels From Inside, published by Eliezer Yudkowsky on the LessWrong.
"If a tree falls in the forest, and no one hears it, does it make a sound?" I remember seeing an actual argument get started on this subject—a fully naive argument that went nowhere near Berkeleyan subjectivism. Just:
"It makes a sound, just like any other falling tree!"
"But how can there be a sound that no one hears?"
The standard rationalist view would be that the first person is speaking as if "sound" means acoustic vibrations in the air; the second person is speaking as if "sound" means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word "sound".
I think the standard analysis is essentially correct. So let's accept that as a premise, and ask: Why do people get into such an argument? What's the underlying psychology?
A key idea of the heuristics and biases program is that mistakes are often more revealing of cognition than correct answers. Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake.
So what kind of mind design corresponds to that error?
In Disguised Queries I introduced the blegg/rube classification task, in which Susan the Senior Sorter explains that your job is to sort objects coming off a conveyor belt, putting the blue eggs or "bleggs" into one bin, and the red cubes or "rubes" into the rube bin. This, it turns out, is because bleggs contain small nuggets of vanadium ore, and rubes contain small shreds of palladium, both of which are useful industrially.
Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a "rube" instead? You're going to put it in the rube bin—why not call it a "rube"?
But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.
So if you find a blue egg-shaped object that contains palladium, and you ask "Is it a blegg?", the answer depends on what you have to do with the answer: If you ask "Which bin does the object go in?", then you choose as if the object is a rube. But if you ask "If I turn off the light, will it glow?", you predict as if the object is a blegg. In one case, the question "Is it a blegg?" stands in for the disguised query, "Which bin does it go in?". In the other case, the question "Is it a blegg?" stands in for the disguised query, "Will it glow in the dark?"
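Here is a minimal sketch (my own illustration, not from the essay) of the two disguised queries disagreeing about the same object:

def which_bin(obj):
    # Sorting is really about the metal inside.
    return "rube bin" if obj["metal"] == "palladium" else "blegg bin"

def will_it_glow(obj):
    # Glowing tracks the blue/egg-shaped cluster, not the metal.
    return obj["color"] == "blue" and obj["shape"] == "egg"

odd_object = {"color": "blue", "shape": "egg", "metal": "palladium"}

print(which_bin(odd_object))     # rube bin -> treat it as a rube
print(will_it_glow(odd_object))  # True     -> treat it as a blegg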
Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark.
This answers every query, observes every observable introduced. There's nothing left for a disguised query to stand for.
So why might someone feel an impulse to go on arguing whether the object is really a blegg?
This diagram from Neural Categories shows two different neural networks that might be used to answer questions about bleggs and rubes. Network 1 has a number of disadvantages—such as potentially oscillating/chaotic behavior, or requiring O(N²) connections—but Network 1's structure does have one major advantage over Network 2: Every unit in the network corresponds to a testable query. If you observe every observable, clamping every value, there are no units in the network left over.
Network 2, however, is a far better candidate for being something vaguely like how the human brain works: It's fast, cheap, scalable—and has an extra dangling unit in the center, whose activation can still vary, even a...
|
Dec 12, 2021 |
Notes from "Don't Shoot the Dog" by juliawise
19:05
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Notes from "Don't Shoot the Dog", published by juliawise on the LessWrong.
I just finished Karen Pryor’s “Don’t Shoot the Dog: the New Art of Teaching and Training.” Partly because a friend points out that it’s not on Audible and therefore she can’t possibly read it, here are the notes I took and some thoughts. It’s a quick, easy read.
The book applies behavioral psychology to training animals and people. The author started off as a dolphin trainer at an aquarium park in the 1960s and moved on to horses, dogs, and her own children. There are a lot of anecdotes about how to train animals (apparently polar bears like raisins). At the time, training animals without violence was considered novel and maybe impossible. I read it as a parenting book since I don’t plan to train dogs, horses, or polar bears.
It’s probably not the best guide to training dogs since a lot of it is about people, and not the best guide to training people since a lot is about animals. She’s written a bunch of other books about training dogs and cats. But this book is an entertaining overview of all of it.
The specter of behaviorism
I can understand not wanting to use behavioral methods on children; the idea can sound overly harsh or reductive. The thing is, we already reinforce behavior all the time, including bad behavior, often without meaning to. So you might as well notice what you’re doing.
To people schooled in the humanistic tradition, the manipulation of human behavior by some sort of conscious technique seems incorrigibly wicked, in spite of the obvious fact that we all go around trying to manipulate one another’s behavior all the time, by whatever means come to hand.
There are still people who shudder at the very name of Skinner, which conjures in their minds some amalgam of Brave New World, mind control, and electric shock.
(B. F. Skinner in fact believed that punishment was not an effective learning tool, and that positive reinforcement was much better for teaching.)
Pryor argues that behavioral training allows you to get good results more pleasantly than with other methods. She describes her daughter’s experience directing a play in high school:
At the closing performance the drama coach told me that she’d been amazed to see that throughout rehearsals Gale never yelled at her cast. Student directors always yell, but Gale never yelled. ‘Of course not,’ I said without thinking, ‘she’s an animal trainer.’ From the look on the teacher’s face, I realized I’d said the wrong thing—her students were not animals! But of course all I meant was that Gale would know how to establish stimulus control without unnecessary escalation.
Of course there are bad applications of behavioral training: “The psychological literature abounds with shaping programs that are so unimaginative, not to say ham-handed, that they constitute in my opinion cruel and unusual punishment.”
I don’t know a lot about ABA (applied behavior analysis), which is one application of behaviorism. My understanding is that its bad applications are certainly cruel and ham-handed, although there also seem to be good applications. I think that even people opposed to ABA should be able to find a lot of useful material in this book.
You’re already doing reinforcement training
One point I think is underappreciated is that we all reinforce each other, and children train parents as well as the other way around.
A child is tantruming in the store for candy. The parent gives in and lets the child have a candy bar. The tantruming is positively reinforced by the candy, but the more powerful event is that the parent is negatively reinforced for giving in, since the public tantrum, so aversive and embarrassing for the parent, actually stopped.
It’s also easy to accidentally reinforce bad behavior.
I recently read Beverly Cleary’s Beezus and Ramona w...
|
Dec 12, 2021 |
Precognition by jasoncrawford
03:47
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Precognition, published by jasoncrawford on the LessWrong.
This is a linkpost for
It’s almost impossible to predict the future. But it’s also unnecessary, because most people are living in the past. All you have to do is see the present before everyone else does.
To be less pithy, but more clear: Most people are slow to notice and accept change. If you can just be faster than most people at seeing what’s going on, updating your model of the world, and reacting accordingly, it’s almost as good as seeing the future.
We see this in the US with covid: The same people who didn’t realize that we all should be wearing masks, when they were life-saving, are now slow to realize/admit that we can stop wearing them.
For a dramatic historical example (from The Making of the Atomic Bomb), take Leo Szilard’s observations of 1930s Germany:
Adolf Hitler was appointed Chancellor of Germany on January 30, 1933. ... In late March, Jewish judges and lawyers in Prussia and Bavaria were dismissed from practice. On the weekend of April 1, Julius Streicher directed a national boycott of Jewish businesses and Jews were beaten in the streets. “I took a train from Berlin to Vienna on a certain date, close to the first of April, 1933,” Szilard writes. “The train was empty. The same train the next day was overcrowded, was stopped at the frontier, the people had to get out, and everybody was interrogated by the Nazis. This just goes to show that if you want to succeed in this world you don’t have to be much cleverer than other people, you just have to be one day earlier.”
How to be earlier
1. Independent thinking. If you only believe things that are accepted by the majority of people, then by definition you’ll always be behind the curve in a changing world.
2. Listen to other independent thinkers. You can’t pay attention to everything at once or evaluate every area. You can only be the first to realize something in a narrow domain in which you are an expert. But if you tune your intellectual radar to other independent thinkers, you can be in the first ~1% of people to realize a new fact. Seek them out, find them, and follow them.
I was taking covid precautions in late February 2020, about three weeks ahead of official “lockdown” measures—but only because I was tuned in to the people who were six weeks ahead.
But:
3. Distinguish independent thinkers from crackpots. Both are “contrarian”; only one has any hope of being right. This is an art, honed over decades. Pay attention to both the source’s evidence and their logic. Credentials are relevant, but they are neither necessary nor sufficient.
4. Read broadly; seek out and adopt concepts and frameworks that help you understand the world (e.g.: exponential growth, network effects, efficient frontiers).
Finally:
5. Learn how to make decisions in the face of uncertainty. Even when you see the present earlier, you won’t see it with full clarity, nor will you be able to predict the future. You’ll just have a set of probabilities that are closer to reality than most people’s.
To return to the covid example: in January/February 2020, even the people farthest ahead of the curve weren’t certain whether there would be a pandemic or how bad it would be. They just knew that the chances were double-digit percent, before it was even on most people’s radar.
Find low-cost ways to avoid extreme downside, and low-investment opportunities for extreme upside. For example, when a pandemic might be starting, it makes sense to stock up on supplies, move meetings to phone calls, etc.—these are cheap insurance.
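As a toy illustration of that last point (the numbers here are made up, not from the post): with a double-digit probability of disaster, even modest precautions can dominate in expectation.

p_disaster = 0.15            # assumed "double-digit percent" chance
cost_of_prep = 200           # hypothetical cost of stocking up, in dollars
harm_if_unprepared = 5_000   # hypothetical extra harm of being caught flat-footed

ev_prepare = cost_of_prep                        # paid whether or not disaster comes
ev_do_nothing = p_disaster * harm_if_unprepared  # expected harm of doing nothing

print(ev_prepare, ev_do_nothing)  # 200 vs. 750: the cheap insurance wins in expectation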
In some fantasy worlds, there are superheroes with “pre-cognition”, able to see the immediate future. They’re always one step ahead. But since most people are a few steps behind reality, you don’t need pre-cognition—just independent thinking.
Thanks for listening. to h...
|
Dec 12, 2021 |
Scientific Self-Help: The State of Our Knowledge by lukeprog
24:46
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Scientific Self-Help: The State of Our Knowledge, published by lukeprog on the LessWrong.
Part of the sequence: The Science of Winning at Life
Some have suggested that the Less Wrong community could improve readers' instrumental rationality more effectively if it first caught up with the scientific literature on productivity and self-help, and then enabled readers to deliberately practice self-help skills and apply what they've learned in real life.
I think that's a good idea. My contribution today is a quick overview of scientific self-help: what professionals call "the psychology of adjustment." First I'll review the state of the industry and the scientific literature, and then I'll briefly summarize the scientific data available on three topics in self-help: study methods, productivity, and happiness.
The industry and the literature
As you probably know, much of the self-help industry is a sham, ripe for parody. Most self-help books are written to sell, not to help. Pop psychology may be more myth than fact. As Christopher Buckley (1999) writes, "The more people read [self-help books], the more they think they need them... [it's] more like an addiction than an alliance."
Where can you turn for reliable, empirically-based self-help advice? A few leading therapeutic psychologists (e.g., Albert Ellis, Arnold Lazarus, Martin Seligman) have written self-help books based on decades of research, but even these works tend to give recommendations that are still debated, because they aren't yet part of settled science.
Lifelong self-help researcher Clayton Tucker-Ladd wrote and updated Psychological Self-Help (pdf) over several decades. It's a summary of what scientists do and don't know about self-help methods (as of about 2003), but it's also more than 2,000 pages long, and much of it surveys scientific opinion rather than experimental results, because on many subjects there aren't any experimental results yet. The book is associated with an internet community of people sharing what does and doesn't work for them.
More immediately useful is Richard Wiseman's 59 Seconds. Wiseman is an experimental psychologist and paranormal investigator who gathered together what little self-help research is part of settled science, and put it into a short, fun, and useful Malcolm Gladwell-ish book. The next best popular-level general self-help book is perhaps Martin Seligman's What You Can Change and What You Can't.
Two large books rate hundreds of popular self-help books according to what professional psychologists think of them, and offer advice on how to choose self-help books. Unfortunately, this may not mean much because even professional psychologists very often have opinions that depart from the empirical data, as documented extensively by Scott Lilienfeld and others in Science and Pseudoscience in Clinical Psychology and Navigating the Mindfield. These two books are helpful in assessing what is and isn't known according to empirical research (rather than according to expert opinion). Lilienfeld also edits the useful journal Scientific Review of Mental Health Practice, and has compiled a list of harmful psychological treatments. Also see Nathan and Gorman's A Guide to Treatments That Work, Roth & Fonagy's What Works for Whom?, and, more generally, Stanovich's How to Think Straight about Psychology.
Many self-help books are written as "one size fits all," but of course this is rarely appropriate in psychology, and this leads to reader disappointment (Norem & Chang, 2000). But psychologists have tested the effectiveness of reading particular problem-focused self-help books ("bibliotherapy").1 For example, it appears that reading David Burns' Feeling Good can be as effective for treating depression as individual or group therapy. Results vary from book to book.
There are at least fou...
|
Dec 12, 2021 |
A voting theory primer for rationalists by Jameson Quinn
29:49
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A voting theory primer for rationalists, published by Jameson Quinn on the LessWrong.
What is voting theory?
Voting theory, also called social choice theory, is the study of the design and evaluation of democratic voting methods (that's the activists' word; game theorists call them "voting mechanisms", engineers call them "electoral algorithms", and political scientists say "electoral formulas"). In other words, for a given list of candidates and voters, a voting method specifies a set of valid ways to fill out a ballot, and, given a valid ballot from each voter, produces an outcome.
(An "electoral system" includes a voting method, but also other implementation details, such as how the candidates and voters are validated, how often elections happen and for what offices, etc. "Voting system" is an ambiguous term that can refer to a full electoral system, just to the voting method, or even to the machinery for counting votes.)
Most voting theory limits itself to studying "democratic" voting methods. That typically has both empirical and normative implications. Empirically, "democratic" means:
There are many voters
There can be more than two candidates
In order to be considered "democratic", voting methods generally should meet various normative criteria as well. There are many possible such criteria, and on many of them theorists do not agree; but in general they do agree on this minimal set:
Anonymity: permuting the ballots does not change the probability of any election outcome.
Neutrality: permuting the candidates on all ballots does not change the probability of any election outcome.
Unanimity: If voters universally vote a preference for a given outcome over all others, that outcome is selected. (This is a weak criterion, and is implied by many other stronger ones; but those stronger ones are often disputed, while this one rarely is.)
Methods typically do not directly involve money changing hands or other enduring state-changes for individual voters. (There can be exceptions to this, but there are good reasons to want to understand "moneyless" elections.)
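To make the definition above concrete, here is a minimal Python sketch (my own illustration, not from the post) of a voting method -- plurality -- as a function from one ballot per voter to an outcome, with a spot-check of the anonymity criterion on a small example:

from collections import Counter
from itertools import permutations

def plurality(ballots):
    # Each ballot names a single candidate; the most-named candidate wins.
    # Ties are broken alphabetically here only to keep the sketch deterministic.
    tally = Counter(ballots)
    top = max(tally.values())
    return min(c for c, n in tally.items() if n == top)

ballots = ["A", "B", "A", "C", "A"]
print(plurality(ballots))  # 'A'

# Anonymity: permuting the ballots (i.e., relabeling the voters) never changes the winner.
assert all(plurality(p) == plurality(ballots) for p in permutations(ballots))

FPTP, criticized later in the post, is exactly this method applied to single-winner elections.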
Why is voting theory important for rationalists?
First off, because democratic processes in the real world are important loci of power. That means that it's useful to understand the dynamics of the voting methods used in such real-world elections.
Second, because these real-world democratic processes have all been created and/or evolved in the past, and so there are likely to be opportunities to replace, reform, or add to them in the future. If you want to make political change of any kind over a medium-to-long time horizon, these systemic reforms should probably be part of your agenda. The fact is that FPTP, the voting method we use in most of the English-speaking world, is absolutely horrible, and there is reason to believe that reforming it would substantially (though not of course completely) alleviate much political dysfunction and suffering.
Third, because understanding social choice theory helps clarify ideas about how it's possible and/or desirable to resolve value disputes between multiple agents. For instance, if you believe that superintelligences should perform a "values handshake" when meeting, replacing each of their individual value functions by some common one so as to avoid the dead weight loss of a conflict, then social choice theory suggests both questions and answers about what that might look like. (Note that the ethical and practical importance of such considerations is not at all limited to "post-singularity" examples like that one.)
In fact, on that third point: my own ideas of ethics and of fun theory are deeply informed by my decades of interest in voting theory. To simplify into a few words my complex thoughts on this, I believe that voting theory elucidates "ethical incompleteness" (tha...
|
Dec 12, 2021 |
“Flinching away from truth” is often about protecting the epistemology by AnnaSalamon
10:30
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: “Flinching away from truth” is often about protecting the epistemology, published by AnnaSalamon on the LessWrong.
Related to: Leave a line of retreat; Categorizing has consequences.
There’s a story I like, about this little kid who wants to be a writer. So she writes a story and shows it to her teacher.
“You misspelt the word ‘ocean’”, says the teacher.
“No I didn’t!”, says the kid.
The teacher looks a bit apologetic, but persists: “‘Ocean’ is spelt with a ‘c’ rather than an ‘sh’; this makes sense, because the ‘e’ after the ‘c’ changes its sound.”
“No I didn’t!” interrupts the kid.
“Look,” says the teacher, “I get it that it hurts to notice mistakes. But that which can be destroyed by the truth should be! You did, in fact, misspell the word ‘ocean’.”
“I did not!” says the kid, whereupon she bursts into tears, and runs away and hides in the closet, repeating again and again: “I did not misspell the word! I can too be a writer!”.
I like to imagine the inside of the kid’s head as containing a single bucket that houses three different variables that are initially all stuck together:
Original state of the kid's head:
The goal, if one is seeking actual true beliefs, is to separate out each of these variables into its own separate bucket, so that the “is ‘oshun’ spelt correctly?” variable can update to the accurate state of "no", without simultaneously forcing the "Am I allowed to pursue my writing ambition?" variable to update to the inaccurate state of "no".
Desirable state (requires somehow acquiring more buckets):
The trouble is, the kid won’t necessarily acquire enough buckets by trying to “grit her teeth and look at the painful thing”. A naive attempt to "just refrain from flinching away, and form true beliefs, however painful" risks introducing a more important error than her current spelling error: mistakenly believing she must stop working toward being a writer, since the bitter truth is that she spelled 'oshun' incorrectly.
State the kid might accidentally land in, if she naively tries to "face the truth":
(You might take a moment, right now, to name the cognitive ritual the kid in the story should do (if only she knew the ritual). Or to name what you think you'd do if you found yourself in the kid's situation -- and how you would notice that you were at risk of a "buckets error".)
More examples:
It seems to me that bucket errors are actually pretty common, and that many (most?) mental flinches are in some sense attempts to avoid bucket errors. The following examples are slightly-fictionalized composites of things I suspect happen a lot (except the "me" ones; those are just literally real):
Diet: Adam is on a diet with the intent to lose weight. Betty starts to tell him about some studies suggesting that the diet he is on may cause health problems. Adam complains: “Don’t tell me this! I need to stay motivated!”
One interpretation, as diagramed above: Adam is at risk of accidentally equating the two variables, and accidentally assuming that the studies imply that the diet must stop being viscerally motivating. He semi-consciously perceives that this risks error, and so objects to having the information come in and potentially force the error.
Pizza purchase: I was trying to save money. But I also wanted pizza. So I found myself tempted to buy the pizza really quickly so that I wouldn't be able to notice that it would cost money (and, thus, so I would be able to buy the pizza):
On this narration: It wasn't necessarily a mistake to buy pizza today. Part of me correctly perceived this "not necessarily a mistake to buy pizza" state. Part of me also expected that the rest of me wouldn't perceive this, and that, if I started thinking it through, I might get locked into the no-pizza state even if pizza was better. So it tried to 'help' by buying the pizza really quickly, ...
|
Dec 12, 2021 |
Raising the Sanity Waterline by Eliezer Yudkowsky
05:07
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Raising the Sanity Waterline, published by Eliezer Yudkowsky on the LessWrong.
To paraphrase the Black Belt Bayesian: Behind every exciting, dramatic failure, there is a more important story about a larger and less dramatic failure that made the first failure possible.
If every trace of religion was magically eliminated from the world tomorrow, then—however much improved the lives of many people would be—we would not even have come close to solving the larger failures of sanity that made religion possible in the first place.
We have good cause to spend some of our efforts on trying to eliminate religion directly, because it is a direct problem. But religion also serves the function of an asphyxiated canary in a coal mine—religion is a sign, a symptom, of larger problems that don't go away just because someone loses their religion.
Consider this thought experiment—what could you teach people that is not directly about religion, which is true and useful as a general method of rationality, which would cause them to lose their religions? In fact—imagine that we're going to go and survey all your students five years later, and see how many of them have lost their religions compared to a control group; if you make the slightest move at fighting religion directly, you will invalidate the experiment. You may not make a single mention of religion or any religious belief in your classroom, you may not even hint at it in any obvious way. All your examples must center about real-world cases that have nothing to do with religion.
If you can't fight religion directly, what do you teach that raises the general waterline of sanity to the point that religion goes underwater?
Here are some such topics I've already covered—not avoiding all mention of religion, but it could be done:
Affective Death Spirals—plenty of non-supernaturalist examples.
How to avoid cached thoughts and fake wisdom; the pressure of conformity.
Evidence and Occam's Razor—the rules of probability.
The Bottom Line / Engines of Cognition—the causal reasons why Reason works.
Mysterious Answers to Mysterious Questions—and the whole associated sequence, like making beliefs pay rent and curiosity-stoppers—have excellent historical examples in vitalism and phlogiston.
Non-existence of ontologically fundamental mental things—apply the Mind Projection Fallacy to probability, move on to reductionism versus holism, then brains and cognitive science.
The many sub-arts of Crisis of Faith—though you'd better find something else to call this ultimate high master-level technique of actually updating on evidence.
Dark Side Epistemology—teaching this with no mention of religion would be hard, but perhaps you could videotape the interrogation of some snake-oil sales agent as your real-world example.
Fun Theory—teach as a literary theory of utopian fiction, without the direct application to theodicy.
Joy in the Merely Real, naturalistic metaethics, etcetera etcetera etcetera and so on.
But to look at it another way
Suppose we have a scientist who's still religious, either full-blown scriptural-religion, or in the sense of tossing around vague casual endorsements of "spirituality".
We now know this person is not applying any technical, explicit understanding of...
...what constitutes evidence and why;
...Occam's Razor;
...how the above two rules derive from the lawful and causal operation of minds as mapping engines, and do not switch off when you talk about tooth fairies;
...how to tell the difference between a real answer and a curiosity-stopper;
...how to rethink matters for themselves instead of just repeating things they heard;
...certain general trends of science over the last three thousand years;
...the difficult arts of actually updating on new evidence and relinquishing old beliefs;
...epistemology 101;
...self-honesty 201;
.....
|
Dec 12, 2021 |
Efficient Charity: Do Unto Others... by Scott Alexander
09:45
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Efficient Charity: Do Unto Others..., published by Scott Alexander on the LessWrong.
This was originally posted as part of the efficient charity contest back in November. Thanks to Roko, multifoliaterose, Louie, jmmcd, jsalvatier, and others I forget for help, corrections, encouragement, and bothering me until I finally remembered to post this here.
Imagine you are setting out on a dangerous expedition through the Arctic on a limited budget. The grizzled old prospector at the general store shakes his head sadly: you can't afford everything you need; you'll just have to purchase the bare essentials and hope you get lucky. But what is essential? Should you buy the warmest parka, if it means you can't afford a sleeping bag? Should you bring an extra week's food, just in case, even if it means going without a rifle? Or can you buy the rifle, leave the food, and hunt for your dinner?
And how about the field guide to Arctic flowers? You like flowers, and you'd hate to feel like you're failing to appreciate the harsh yet delicate environment around you. And a digital camera, of course - if you make it back alive, you'll have to put the Arctic expedition pics up on Facebook. And a hand-crafted scarf with authentic Inuit tribal patterns woven from organic fibres! Wicked!
...but of course buying any of those items would be insane. The problem is what economists call opportunity costs: buying one thing costs money that could be used to buy others. A hand-crafted designer scarf might have some value in the Arctic, but it would cost so much it would prevent you from buying much more important things. And when your life is on the line, things like impressing your friends and buying organic pale in comparison. You have one goal - staying alive - and your only problem is how to distribute your resources to keep your chances as high as possible. These sorts of economics concepts are natural enough when faced with a journey through the freezing tundra.
But they are decidedly not natural when facing a decision about charitable giving. Most donors say they want to "help people". If that's true, they should try to distribute their resources to help people as much as possible. Most people don't. In the "Buy A Brushstroke" campaign, eleven thousand British donors gave a total of £550,000 to keep the famous painting "Blue Rigi" in a UK museum. If they had given that £550,000 to buy better sanitation systems in African villages instead, the latest statistics suggest it would have saved the lives of about one thousand two hundred people from disease. Each individual $50 donation could have given a year of normal life back to a Third Worlder afflicted with a disabling condition like blindness or limb deformity.
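As a rough illustration of the arithmetic implied by those figures, here is a minimal sketch (the per-donor and per-life numbers are simple derived quantities, not claims made in the post, and currency is handled loosely):

```python
# Minimal sketch of the arithmetic implied above; derived numbers are
# illustrative only, and currency units are handled loosely.
total_raised = 550_000            # raised by the "Buy A Brushstroke" campaign
donors = 11_000
lives_saved_alternative = 1_200   # estimated lives saved via sanitation instead

per_donor = total_raised / donors                        # ~50 per donor
cost_per_life = total_raised / lives_saved_alternative   # ~458 per life saved

print(f"average donation: ~{per_donor:.0f}")
print(f"implied cost per life saved: ~{cost_per_life:.0f}")
```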
Most of those 11,000 donors genuinely wanted to help people by preserving access to the original canvas of a beautiful painting. And most of those 11,000 donors, if you asked, would say that a thousand people's lives are more important than a beautiful painting, original or no. But these people didn't have the proper mental habits to realize that was the choice before them, and so a beautiful painting remains in a British museum and somewhere in the Third World a thousand people are dead.
If you are to "love your neighbor as yourself", then you should be as careful in maximizing the benefit to others when donating to charity as you would be in maximizing the benefit to yourself when choosing purchases for a polar trek. And if you wouldn't buy a pretty picture to hang on your sled in preference to a parka, you should consider not helping save a famous painting in preference to helping save a thousand lives.
Not all charitable choices are as simple as that one, but many charitable choices do have right answers. GiveWell.org, a site which collects and interprets data on the effectiveness ...
|
Dec 12, 2021 |
2018 AI Alignment Literature Review and Charity Comparison by Larks
02:01:50
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: 2018 AI Alignment Literature Review and Charity Comparison, published by Larks on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
Cross-posted to the EA forum.
Introduction
Like last year and the year before, I’ve attempted to review the research that has been produced by various organisations working on AI safety, to help potential donors gain a better understanding of the landscape. This is a similar role to the one GiveWell performs for global health charities, and somewhat similar to a securities analyst's role with regard to possible investments. To my knowledge, no one else has attempted to do this, so I've once again undertaken the task.
This year I have included several groups not covered in previous years, and read more widely in the literature.
My aim is basically to judge the output of each organisation in 2018 and compare it to their budget. This should give a sense for the organisations' average cost-effectiveness. We can also compare their financial reserves to their 2019 budgets to get a sense of urgency.
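As a rough sketch of those two comparisons, the ratios might look like this (the organisation and every number below are hypothetical placeholders, not figures from this review):

```python
# Minimal sketch of the two ratios described above; all names and numbers are
# hypothetical placeholders, not figures from the review.
def average_cost_effectiveness(output_score: float, budget_2018: float) -> float:
    """Judged 2018 output per dollar of 2018 budget (higher is better)."""
    return output_score / budget_2018

def runway_years(reserves: float, budget_2019: float) -> float:
    """Years of the planned 2019 budget that current reserves would cover."""
    return reserves / budget_2019

example_org = {"output_score": 12.0, "budget_2018": 1_500_000,
               "reserves": 3_000_000, "budget_2019": 2_000_000}

print(average_cost_effectiveness(example_org["output_score"], example_org["budget_2018"]))
print(runway_years(example_org["reserves"], example_org["budget_2019"]))  # 1.5 years
```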
Note that this document is quite long, so I encourage you to just read the sections that seem most relevant to your interests, probably the sections about the individual organisations. I do not recommend you skip to the conclusions!
I’d like to apologize in advance to everyone doing useful AI Safety work whose contributions I may have overlooked or misconstrued.
Methodological Considerations
Track Records
Judging organisations on their historical output is naturally going to favour more mature organisations. A new startup, whose value all lies in the future, will be disadvantaged. However, I think that this is correct. The newer the organisation, the more funding should come from people with close knowledge. As organisations mature, and have more easily verifiable signals of quality, their funding sources can transition to larger pools of less expert money. This is how it works for startups turning into public companies and I think the same model applies here.
This judgement involves analysing a large number of papers relating to Xrisk that were produced during 2018. Hopefully the year-to-year volatility of output is sufficiently low that this is a reasonable metric. I also attempted to include papers from December 2017, to take into account the fact that I'm missing the last month's worth of output from 2017, but I can't be sure I did this successfully.
This article focuses on AI risk work. If you think other causes are important too, your priorities might differ. This particularly affects GCRI, FHI and CSER, who all do a lot of work on other issues.
We focus on papers, rather than outreach or other activities. This is partly because papers are much easier to measure (while there has been a large increase in interest in AI safety over the last year, it’s hard to work out who to credit for this), and partly because I think progress has to come from persuading AI researchers, which I think happens through technical outreach and publishing good work, not popular/political work.
Politics
My impression is that policy on technical subjects (as opposed to issues that attract strong views from the general population) is generally made by the government and civil servants in consultation with, and being lobbied by, outside experts and interests. Without expert (e.g. top ML researchers at Google, CMU & Baidu) consensus, no useful policy will be enacted. Pushing directly for policy seems if anything likely to hinder expert consensus. Attempts to directly influence the government to regulate AI research seem very adversarial, and risk being pattern-matched to ignorant opposition to GM foods or nuclear power. We don't want the 'us-vs-them' situation, that has occurred with climate change, to h...
|
Dec 12, 2021 |
Scholarship: How to Do It Efficiently by lukeprog
08:31
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Scholarship: How to Do It Efficiently, published by lukeprog on the LessWrong.
Scholarship is an important virtue of rationality, but it can be costly. Its major costs are time and effort. Thus, if you can reduce the time and effort required for scholarship - if you can learn to do scholarship more efficiently - then scholarship will be worth your effort more often than it previously was.
As an autodidact who now consumes whole fields of knowledge in mere weeks, I've developed efficient habits that allow me to research topics quickly. I'll share my research habits with you now.
Review articles and textbooks are king
My first task is to find scholarly review (or 'survey') articles on my chosen topic from the past five years (the more recent, the better). A good review article provides:
An overview of the subject matter of the field and the terms being used (for scholarly googling later).
An overview of the open and solved problems in the field, and which researchers are working on them.
Pointers to the key studies that give researchers their current understanding of the topic.
If you can find a recent scholarly edited volume of review articles on the topic, then you've hit the jackpot. (Edited volumes are better than single-author volumes, because when starting out you want to avoid reading only one particular researcher's perspective.) Examples from my own research of just this year include:
Affective neuroscience: Pleasures of the Brain (2009)
Neuroeconomics: Decision Making and the Brain (2008)
Dual process theories of psychology: In Two Minds (2009)
Intuition and unconscious learning: Intuition in Judgment and Decision Making (2007)
Goals: The Psychology of Goals (2009)
Catastrophic risks: Global Catastrophic Risks (2008)
If the field is large enough, there may exist an edited 'Handbook' on the subject, which is basically just a very large scholarly edited volume of review articles. Examples: Oxford Handbook of Evolutionary Psychology (2007), Oxford Handbook of Positive Psychology (2009), Oxford Handbook of Philosophy and Neuroscience (2009), Handbook of Developmental Cognitive Neuroscience (2008), Oxford Handbook of Neuroethics (2011), Handbook of Relationship Initiation (2008), and Handbook of Implicit Social Cognition (2010). For the humanities, see the Blackwell Companions and Cambridge Companions.
If your questions are basic enough, a recent entry-level textbook on the subject may be just as good. Textbooks are basically book-length review articles written for undergrads. Textbooks I purchased this year include:
Evolutionary Psychology: The New Science of Mind, 4th edition (2011)
Artificial Intelligence: A Modern Approach, 3rd edition (2009)
Psychology Applied to Modern Life, 10th edition (2011)
Psychology, 9th edition (2009)
Use Google Books and Amazon's 'Look Inside' feature to see if the books appear to be of high quality, and likely to answer the questions you have. Also check the textbook recommendations here. You can save money by checking Library Genesis and library.nu for a PDF copy first, or by buying used books, or by buying ebook versions from Amazon, B&N, or Google.
Keep in mind that if you take the virtue of scholarship seriously, you may need to change how you think about the cost of obtaining knowledge. Purchasing the right book can save you dozens of hours of research. Because a huge part of my life these days is devoted to scholarship, a significant portion of my monthly budget is set aside for purchasing knowledge. So far this year I've averaged over $150/mo spent on textbooks and scholarly edited volumes.
Recent scholarly review articles can also be found on Google scholar. Search for key terms, and review articles will often be listed near the top of the results because review articles are cited widely. For example, result #9 on Google sch...
|
Dec 12, 2021 |
How to Be Happy by lukeprog
12:49
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How to Be Happy, published by lukeprog on the LessWrong.
Part of the sequence: The Science of Winning at Life
One day a coworker said to me, "Luke! You're, like, the happiest person I know! How come you're so happy all the time?"
It was probably a rhetorical question, but I had a very long answer to give. See, I was unhappy for most of my life,1 and even considered suicide a few times. Then I spent two years studying the science of happiness. Now, happiness is my natural state. I can't remember the last time I felt unhappy for longer than 20 minutes.
That kind of change won't happen for everyone, or even most people (beware of other-optimizing), but it's worth a shot!
We all want to be happy, and happiness is useful for other things, too.2 For example, happiness improves physical health,3 improves creativity,4 and even enables you to make better decisions.5 (It's harder to be rational when you're unhappy.6) So, as part of a series on how to win at life with science and rationality, let's review the science of happiness.
The correlates of happiness
Earlier, I noted that there is an abundance of research on factors that correlate with subjective well-being (individuals' own assessments of their happiness and life satisfaction).
Factors that don't correlate much with happiness include: age,7 gender,8 parenthood,9 intelligence,10 physical attractiveness,11 and money12 (as long as you're above the poverty line). Factors that correlate moderately with happiness include: health,13 social activity,14 and religiosity.15 Factors that correlate strongly with happiness include: genetics,16 love and relationship satisfaction,17 and work satisfaction.18
But correlation is not enough. We want to know what causes happiness. And that is a trickier thing to measure. But we do know a few things.
Happiness, personality, and skills
Genes account for about 50% of the variance in happiness.19 Even lottery winners and newly-made quadriplegics do not see as much of a change in happiness as you would expect.20 Presumably, genes shape your happiness by shaping your personality, which is known to be quite heritable.21
So which personality traits tend to correlate most with happiness? Extroversion is among the best predictors of happiness,22 as are conscientiousness, agreeableness, self-esteem, and optimism.23
What if you don't have those traits? The first thing to say is that you might be capable of them without knowing it. Introversion, for example, can be exacerbated by a lack of social skills. If you decide to learn and practice social skills, you might find that you are more extroverted than you thought! (That's what happened to me.) The same goes for conscientiousness, agreeableness, self-esteem, and optimism - these are only partly linked to personality. They are to some extent learnable skills, and learning these skills (or even "acting as if") can increase happiness.24
The second thing to say is that lacking some of these traits does not, of course, doom you to unhappiness.
Happiness is subjective and relative
Happiness is not determined by objective factors, but by how you feel about them.25
Happiness is also relative26: you'll probably be happier making $25,000/yr in Costa Rica (where your neighbors are making $13,000/yr) than you will be making $80,000/yr in Beverly Hills (where your neighbors are making $130,000/yr).
Happiness is relative in another sense, too: it is relative to your expectations.27 We are quite poor at predicting the strength of our emotional reactions to future events. We overestimate the misery we will experience after a romantic breakup, failure to get a promotion, or even contracting an illness. We also overestimate the pleasure we will get from buying a nice car, getting a promotion, or moving to a lovely coastal city. So: lower your expectations about the ple...
|
Dec 12, 2021 |
The curse of identity by Kaj_Sotala
09:17
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The curse of identity, published by Kaj_Sotala on the LessWrong.
So what you probably mean is, "I intend to do school to improve my chances on the market". But this statement is still false, unless it is also true that "I intend to improve my chances on the market". Do you, in actual fact, intend to improve your chances on the market?
I expect not. Rather, I expect that your motivation is to appear to be the sort of person who you think you would be if you were ambitiously attempting to improve your chances on the market... which is not really motivating enough to actually DO the work. However, by persistently trying to do so, and presenting yourself with enough suffering at your failure to do it, you get to feel as if you are that sort of person without having to actually do the work. This is actually a pretty optimal solution to the problem, if you think about it. (Or rather, if you DON'T think about it!) -- PJ Eby
I have become convinced that problems of this kind are the number one problem humanity has. I'm also pretty sure that most people here, no matter how much they've been reading about signaling, still fail to appreciate the magnitude of the problem.
Here are two major screw-ups and one narrowly averted screw-up that I've been guilty of. See if you can find the pattern.
When I began my university studies back in 2006, I felt strongly motivated to do something about Singularity matters. I genuinely believed that this was the most important thing facing humanity, and that it needed to be urgently taken care of. So in order to become able to contribute, I tried to study as much as possible. I had had troubles with procrastination, and so, in what has to be one of the most idiotic and ill-thought-out acts of self-sabotage possible, I taught myself to feel guilty whenever I was relaxing and not working. Combine an inability to properly relax with an attempted course load that was twice the university's recommended pace, and you can guess the results: after a year or two, I had an extended burnout that I still haven't fully recovered from. I ended up completing my Bachelor's degree in five years, which is the official target time for doing both your Bachelor's and your Master's.
A few years later, I became one of the founding members of the Finnish Pirate Party, and on the basis of some writings the others thought were pretty good, got myself elected as the spokesman. Unfortunately – and as I should have known before taking up the post – I was a pretty bad choice for this job. I'm good at expressing myself in writing, and when I have the time to think. I hate talking with strangers on the phone, find it distracting to look people in the eyes when I'm talking with them, and have a tendency to start a sentence over two or three times before hitting on a formulation I like. I'm also bad at thinking quickly on my feet and coming up with snappy answers in live conversation. The spokesman task involved things like giving quick statements to reporters ten seconds after I'd been woken up by their phone call, and live interviews where I had to reply to criticisms so foreign to my thinking that they would never have occurred to me naturally. I was pretty terrible at the job, and finally delegated most of it to other people until my term ran out – though not before I'd already done noticeable damage to our cause.
Last year, I was a Visiting Fellow at the Singularity Institute. At one point, I ended up helping Eliezer in writing his book. Mostly this involved me just sitting next to him and making sure he did get writing done while I surfed the Internet or played a computer game. Occasionally I would offer some suggestion if asked. Although I did not actually do much, the multitasking required still made me unable to spend this time productively myself, and for some reason i...
|
Dec 12, 2021 |
Embedded Agents by abramdemski, Scott Garrabrant
00:35
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Embedded Agents, published by abramdemski, Scott Garrabrant on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
(A longer text-based version of this post is also available on MIRI's blog here, and the bibliography for the whole sequence can be found here)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Applause Lights by Eliezer Yudkowsky
04:22
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Applause Lights, published by Eliezer Yudkowsky on the LessWrong.
At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:
Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?
I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:
The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.
Confused, I asked:
Then what kind of democratic process did you have in mind?
The speaker replied:
Something like the Human Genome Project—that was an internationally sponsored research project.
I asked:
How would different interest groups resolve their conflicts in a structure like the Human Genome Project?
And the speaker said:
I don’t know.
This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:
We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.
The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?
I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.
This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:
We need to balance the risks and opportunities of AI.
If you reverse this statement, you get:
We shouldn’t balance the risks and opportunities of AI.
Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.
There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:
I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize t...
|
Dec 12, 2021 |
Takeaways from one year of lockdown by mingyuan
08:15
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Takeaways from one year of lockdown, published by mingyuan on the LessWrong.
As of today, I've been in full-on, hardcore lockdown for an entire year. I have a lot of feelings – both about the personal social impacts of lockdown and about society being broken – that I won't go into in this public space. What I want to figure out in this post is what rationality-relevant lessons I can draw from what happened in my life this past year.
(Meta: This post is not well-written and is mostly bullet points, because the first few versions I wrote were unusable but I still wanted to publish it today.)
Observations
Some facts about my lockdown:
I have spent 99.9% of the year within 1 mile of my house
Up until last month I had spent the entire year within 10 miles of my house
Between February 29th and June 15th of 2020, I did not set foot outside of my front gate
I have not gotten COVID, nor has anyone in my bubble
I have incurred an average of 0 microCOVIDs per week
The absolute riskiest thing I've done this whole time cost ~20 microCOVIDs
I can only remember talking to a friend not-in-my-bubble, in person, twice
Some observations about other people with similar levels of caution:
Almost no one I know has caught COVID, even though Zvi estimates that ~25% of Americans have had it (the official confirmed rate is 10%). I know of only one person who caught it while taking serious precautions, and I know a few hundred people about as well as I know this person. (see also)
I was recently tracking down a reference in the Sequences and found that the author was so afraid of COVID that he failed to seek medical care for appendicitis and died of sepsis.
On negotiations:
A blanket heuristic of "absolutely no interactions outside of the household" makes decisions simple but is very costly in other ways
microCOVID spreadsheets are useful but fairly high-effort
I went on a date once. The COVID negotiations with my house were so stressful that I had a migraine for a week afterwards.
On hopelessness:
I spent a fair amount of time trying to get vaccinated early, and failed. I now appear to have a belief that I will never succeed at getting vaccinated; and further that other people can succeed but I never can.
Related: My system 1 believes that lockdown will last forever. Also that vaccines aren't real – not that they don't work, but that they're a lovely dream, like unicorns or God, that ultimately turns out to be a lie. A vaccine cannot cause me to leave lockdown because lockdown is an eternal, all-consuming metaphysical state.
I would have liked to be dating this year, but the first date and the surrounding ~week of house discussion was so stressful that I gave up on dates entirely after that.
I notice that I feel the lack of friendships, but wasn't motivated enough about any particular friendship to put in the effort to make it work despite the situation. By contrast, some people I know did do this and have benefited a lot.
My house had ~3-hour meetings ~3 times a week at the very beginning of the pandemic, where people did math on the board and talked about their feelings and we tried to figure out what to do. In retrospect, this burned me out so much that I gave up on trying to figure anything out and defaulted to an absolutely-zero-risk strategy, because at least that was simple.
The fact that SlateStarCodex went down at the same time everything else in life went to shit destroyed my soul.
Oops I have started talking about feelings and will now stop.
Taking all these observations together, it's clear to me that my social group has been insanely overcautious, to our great detriment. I think this has been obvious for quite a while, but I didn't and still don't know how to act on that information.
It seems like extreme caution made sense at first, when we didn't know much. And by the time we k...
|
Dec 12, 2021 |
Another (outer) alignment failure story by paulfchristiano
18:00
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Another (outer) alignment failure story, published by paulfchristiano on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
Meta
This is a story where the alignment problem is somewhat harder than I expect, society handles AI more competently than I expect, and the outcome is worse than I expect. It also involves inner alignment turning out to be a surprisingly small problem. Maybe the story is 10-20th percentile on each of those axes. At the end I’m going to go through some salient ways you could vary the story.
This isn’t intended to be a particularly great story (and it’s pretty informal). I’m still trying to think through what I expect to happen if alignment turns out to be hard, and this is more like the most recent entry in a long journey of gradually-improving stories.
I wrote this up a few months ago and was reminded to post it by Critch’s recent post (which is similar in many ways). This story has definitely been shaped by a broader community of people gradually refining failure stories rather than being written in a vacuum.
I’d like to continue spending time poking at aspects of this story that don’t make sense, digging into parts that seem worth digging into, and eventually developing clearer and more plausible stories. I still think it’s very plausible that my views about alignment will change in the course of thinking concretely about stories, and even if my basic views about alignment stay the same it’s pretty likely that the story will change.
Story
ML starts running factories, warehouses, shipping, and construction. ML assistants help write code and integrate ML into new domains. ML designers help build factories and the robots that go in them. ML finance systems invest in companies on the basis of complicated forecasts and (ML-generated) audits. Tons of new factories, warehouses, power plants, trucks and roads are being built. Things are happening quickly, investors have super strong FOMO, no one really knows whether it’s a bubble but they can tell that e.g. huge solar farms are getting built and something is happening that they want a piece of. Defense contractors are using ML systems to design new drones, and ML is helping the DoD decide what to buy and how to deploy it. The expectation is that automated systems will manage drones during high-speed ML-on-ML conflicts because humans won’t be able to understand what’s going on. ML systems are designing new ML systems, testing variations, commissioning giant clusters. The financing is coming from automated systems, the clusters are built by robots. A new generation of fabs is being built with unprecedented speed using new automation.
At this point everything kind of makes sense to humans. It feels like we are living at the most exciting time in history. People are making tons of money. The US defense establishment is scared because it has no idea what a war is going to look like right now, but in terms of policy their top priority is making sure the boom proceeds as quickly in the US as it does in China because it now seems plausible that being even a few years behind would result in national irrelevance.
Things are moving very quickly and getting increasingly hard for humans to evaluate. We can no longer train systems to make factory designs that look good to humans, because we don’t actually understand exactly what robots are doing in those factories or why; we can’t evaluate the tradeoffs between quality and robustness and cost that are being made; we can't really understand the constraints on a proposed robot design or why one design is better than another. We can’t evaluate arguments about investments very well because they come down to claims about where the overall economy is going over the next 6 months that seem kind of alien (even the more reco...
|
Dec 12, 2021 |
Working With Monsters by johnswentworth
08:20
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Working With Monsters, published by johnswentworth on the LessWrong.
This is a fictional piece based on Sort By Controversial. You do not need to read that first, though it may make Scissor Statements feel more real. Content Warning: semipolitical. Views expressed by characters in this piece are not necessarily the views of the author.
I stared out at a parking lot, the pavement cracked and growing grass. A few cars could still be seen, every one with a shattered windshield or no tires or bashed-in roof, one even laying on its side. Of the buildings in sight, two had clearly burned, only blackened reinforced concrete skeletons left behind. To the left, an overpass had collapsed. To the right, the road was cut by a hole four meters across. Everywhere, trees and vines climbed the remains of the small city. The collapsed ceilings and shattered windows and nests of small animals in the once-hospital behind me seemed remarkably minor damage, relatively speaking.
Eighty years of cryonic freeze, and I woke to a post-apocalyptic dystopia.
“It’s all like that,” said a voice behind me. One of my... rescuers? Awakeners. He went by Red. “Whole world’s like that.”
“What happened?” I asked. “Bioweapon?”
“Scissor,” replied a woman, walking through the empty doorway behind Red. Judge, he’d called her earlier.
I raised an eyebrow, and waited for elaboration. Apparently they expected a long conversation - both took a few seconds to get comfortable, Red leaning up against the wall in a patch of shade, Judge righting an overturned bench to sit on. It was Red who took up the conversation thread.
“Let’s start with an ethical question,” he began, then laid out a simple scenario. “So,” he asked once finished, “blue or green?”
“Blue,” I replied. “Obviously. Is this one of those things where you try to draw an analogy from this nice obvious case to a more complicated one where it isn’t so obvious?”
“No,” Judge cut in, “It’s just that question. But you need some more background.”
“There was a writer in your time who coined the term ‘scissor statement’,” Red explained, “It’s a statement optimized to be as controversial as possible, to generate maximum conflict. To get a really powerful scissor, you need AI, but the media environment of your time was already selecting for controversy in order to draw clicks.”
“Oh no,” I said, “I read about that... and the question you asked, green or blue, it seems completely obvious, like anyone who’d say green would have to be trolling or delusional or a threat to society or something... but that’s exactly how scissor statements work.”
“Exactly,” replied Judge. “The answer seems completely obvious to everyone, yet people disagree about which answer is obviously-correct. And someone with the opposite answer seems like a monster, a threat to the world, like a serial killer or a child torturer or a war criminal. They need to be put down for the good of society.”
I hesitated. I knew I shouldn’t ask, but... “So, you two...”
Judge casually shifted position, placing a hand on some kind of weapon on her belt. I glanced at Red, and only then noticed that his body was slightly tensed, as if ready to run. Or fight.
“I’m a blue, same as you,” said Judge. Then she pointed to Red. “He’s a green.”
I felt a wave of disbelief, then disgust, then fury. It was so wrong, how could anyone even consider green... I took a step toward him, intent on punching his empty face even if I got shot in the process.
“Stop,” said Judge, “unless you want to get tazed.” She was holding her weapon aimed at me, now. Red hadn’t moved. If he had, I’d probably have charged him. But Judge wasn’t the monster here... wait.
I turned to Judge, and felt a different sort of anger.
“How can you just stand there?”, I asked. “You know that he’s in the wrong, that he’s a monster, that he deserves to be put down, preferabl...
|
Dec 12, 2021 |
Policy Debates Should Not Appear One-Sided by Eliezer Yudkowsky
06:10
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Policy Debates Should Not Appear One-Sided, published by Eliezer Yudkowsky on the LessWrong.
Robin Hanson proposed stores where banned products could be sold.1 There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of five children is going to go into these stores and buy a “Dr. Snakeoil’s Sulfuric Acid Drink” for her arthritis and die, leaving her orphans to weep on national television.
I was just making a factual observation. Why did some people think it was an argument in favor of regulation?
On questions of simple fact (for example, whether Earthly life arose by natural selection) there’s a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called “balance of evidence” should reflect this. Indeed, under the Bayesian definition of evidence, “strong evidence” is just that sort of evidence which we only expect to find on one side of an argument.
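As a concrete illustration of that Bayesian point, here is a minimal sketch with made-up numbers: evidence is strong exactly when it is far more likely under one hypothesis than under the other, so Bayes' rule pushes the posterior hard to one side.

```python
# Minimal sketch of Bayes' rule with illustrative numbers (not from the post).
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of H after observing evidence E."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Evidence we expect to find almost only if H is true (likelihood ratio 99:1):
print(posterior(0.5, 0.99, 0.01))   # ~0.99
# Evidence about equally likely either way barely moves the estimate:
print(posterior(0.5, 0.50, 0.45))   # ~0.53
```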
But there is no reason for complex actions with many consequences to exhibit this one-sidedness property. Why do people seem to want their policy debates to be one-sided?
Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.
One should also be aware of a related failure pattern: thinking that the course of Deep Wisdom is to compromise with perfect evenness between whichever two policy positions receive the most airtime. A policy may legitimately have lopsided costs or benefits. If policy questions were not tilted one way or the other, we would be unable to make decisions about them. But there is also a human tendency to deny all costs of a favored policy, or deny all benefits of a disfavored policy; and people will therefore tend to think policy tradeoffs are tilted much further than they actually are.
If you allow shops that sell otherwise banned products, some poor, honest, poorly educated mother of five kids is going to buy something that kills her. This is a prediction about a factual consequence, and as a factual question it appears rather straightforward—a sane person should readily confess this to be true regardless of which stance they take on the policy issue. You may also think that making things illegal just makes them more expensive, that regulators will abuse their power, or that her individual freedom trumps your desire to meddle with her life. But, as a matter of simple fact, she’s still going to die.
We live in an unfair universe. Like all primates, humans have strong negative reactions to perceived unfairness; thus we find this fact stressful. There are two popular methods of dealing with the resulting cognitive dissonance. First, one may change one’s view of the facts—deny that the unfair events took place, or edit the history to make it appear fair.2 Second, one may change one’s morality—deny that the events are unfair.
Some libertarians might say that if you go into a “banned products shop,” passing clear warning labels that say THINGS IN THIS STORE MAY KILL YOU, and buy something that kills you, then it’s your own fault and you deserve it. If that were a moral truth, there would be no downside to having shops that sell banned products. It wouldn’t just be a net benefit, it would be a one-sided tradeoff with no draw...
|
Dec 12, 2021 |
Norms of Membership for Voluntary Groups by sarahconstantin
11:23
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Norms of Membership for Voluntary Groups, published by sarahconstantin on the LessWrong.
Epistemic Status: Idea Generation
One feature of the internet that we haven’t fully adapted to yet is that it’s trivial to create voluntary groups for discussion. It’s as easy as making a mailing list, group chat, Facebook group, Discord server, Slack channel, etc.
What we don’t seem to have is a good practical language for talking about norms on these mini-groups — what kind of moderation do we use, how do we admit and expel members, what kinds of governance structures do we create.
Maybe this is a minor thing to talk about, but I suspect it has broader impact. In past decades voluntary membership in organizations has declined in the US — we’re less likely to be members of the Elks or of churches or bowling leagues — so lots of people who don’t have any experience in founding or participating in traditional types of voluntary organizations are now finding themselves engaged in governance without even knowing that’s what they’re doing.
When we do this badly, we get “internet drama.” When we do it really badly, we get harassment campaigns and calls for regulation/moderation at the corporate or even governmental level. And that makes the news. It’s not inconceivable that Twitter moderation norms affect international relations, for instance.
It’s a traditional observation about 19th century America that Americans were eager joiners of voluntary groups, and that these groups were practice for democratic participation. Political wonks today lament the lack of civic participation and loss of trust in our national and democratic institutions. Now, maybe you’ve moved on; maybe you’re a creature of the 21st century and you’re not hoping to restore trust in the institutions of the 20th. But what will be the institutions of the future? That may well be affected by what formats and frames for group membership people are used to at the small scale.
It’s also relevant for the future of freedom. It’s starting to be a common claim that “give people absolute ‘free speech’ and the results are awful; therefore we need regulation/governance at the corporate or national level.” If you’re not satisfied with that solution (as I’m not), you have work to do — there are a lot of questions to unpack like “what kind of ‘freedom’, with what implementational details, is the valuable kind?”, “if small-scale voluntary organizations can handle some of the functions of the state, how exactly will they work?”, “how does one prevent the outcomes that people consider so awful that they want large institutions to step in to govern smaller groups?”
Thinking about, and working on, governance for voluntary organizations (and micro-organizations like online discussion groups) is a laboratory for figuring this stuff out in real time, with fairly low resource investment and risk. That’s why I find this stuff fascinating and wish more people did.
The other place to start, of course, is history, which I’m not very knowledgeable about, but intend to learn a bit. David Friedman is the historian I’m familiar with who’s studied historical governance and legal systems with an eye to potential applicability to building voluntary governance systems today; I’m interested in hearing about others. (Commenters?)
In the meantime, I want to start generating a (non-exhaustive list) of types of norms for group membership, to illustrate the diversity of how groups work and what forms “expectations for members” can take.
We found organizations based on formats and norms that we’ve seen before. It’s useful to have an idea of the range of formats that we might encounter, so we don’t get anchored on the first format that comes to mind. It’s also good to have a vocabulary so we can have higher-quality disagreements about the purpose & nature of ...
|
Dec 12, 2021 |
Eliezer Yudkowsky Facts by steven0461
02:35
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Eliezer Yudkowsky Facts, published by steven0461 on the LessWrong.
Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
Eliezer Yudkowsky is a 1400 year old avatar of the Aztec god Aixitl.
The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
Eliezer Yudkowsky has a swiss army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic tac toe, an identical swiss army knife, and Douglas Hofstadter.
If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
There is no spacetime symmetry. Eliezer Yudkowsky just sometimes holds the territory upside down, and he doesn't care.
Eliezer Yudkowsky has no need for doctors. He has implemented a Universal Curing Machine in a system made out of five marbles, three pieces of plastic, and some of MacGyver's fingernail clippings.
Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.
If you know more Eliezer Yudkowsky facts, post them in the comments.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Guessing the Teacher's Password by Eliezer Yudkowsky
05:45
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Guessing the Teacher's Password, published by Eliezer Yudkowsky on the LessWrong.
When I was young, I read popular physics books such as Richard Feynman’s QED: The Strange Theory of Light and Matter. I knew that light was waves, sound was waves, matter was waves. I took pride in my scientific literacy, when I was nine years old.
When I was older, and I began to read the Feynman Lectures on Physics, I ran across a gem called “the wave equation.” I could follow the equation’s derivation, but, looking back, I couldn’t see its truth at a glance. So I thought about the wave equation for three days, on and off, until I saw that it was embarrassingly obvious. And when I finally understood, I realized that the whole time I had accepted the honest assurance of physicists that light was waves, sound was waves, matter was waves, I had not had the vaguest idea of what the word “wave” meant to a physicist.
There is an instinctive tendency to think that if a physicist says “light is made of waves,” and the teacher says “What is light made of?” and the student says “Waves!”, then the student has made a true statement. That’s only fair, right? We accept “waves” as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student? Surely, the answer “Waves!” is either true or false, right?
Which is one more bad habit to unlearn from school. Words do not have intrinsic definitions. If I hear the syllables “bea-ver” and think of a large rodent, that is a fact about my own state of mind, not a fact about the syllables “bea-ver.” The sequence of syllables “made of waves” (or “because of heat conduction”) is not a hypothesis; it is a pattern of vibrations traveling through the air, or ink on paper. It can associate to a hypothesis in someone’s mind, but it is not, of itself, right or wrong. But in school, the teacher hands you a gold star for saying “made of waves,” which must be the correct answer because the teacher heard a physicist emit the same sound-vibrations. Since verbal behavior (spoken or written) is what gets the gold star, students begin to think that verbal behavior has a truth-value. After all, either light is made of waves, or it isn’t, right?
And this leads into an even worse habit. Suppose the teacher asks you why the far side of a metal plate feels warmer than the side next to the radiator. If you say “I don’t know,” you have no chance of getting a gold star—it won’t even count as class participation. But, during the current semester, this teacher has used the phrases “because of heat convection,” “because of heat conduction,” and “because of radiant heat.” One of these is probably what the teacher wants. You say, “Eh, maybe because of heat conduction?”
This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher’s password.
Even visualizing the symbols of the diffusion equation (the math governing heat conduction) doesn’t mean you’ve formed a hypothesis about the metal plate. This is not school; we are not testing your memory to see if you can write down the diffusion equation. This is Bayescraft; we are scoring your anticipations of experience. If you use the diffusion equation, by measuring a few points with a thermometer and then trying to predict what the thermometer will say on the next measurement, then it is definitely connected to experience. Even if the student just visualizes something flowing, and therefore holds a match near the cooler side of the plate to try to measure where the heat goes, then this mental image of flowing-ness connects to experience; it controls anticipation.
If you aren’t using the diffusion equation—putting in numbers and getting out results that control your anticipation of particular experiences—then the connection between map and territory is severed a...
|
Dec 12, 2021 |
Asymmetric Justice by Zvi
08:54
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Asymmetric Justice, published by Zvi on the LessWrong.
Related and required reading in life (ANOIEAEIB): The Copenhagen Interpretation of Ethics
Epistemic Status: Trying to be minimally judgmental
Spoiler Alert: Contains minor mostly harmless spoiler for The Good Place, which is the best show currently on television.
The Copenhagen Interpretation of Ethics (in parallel with the similarly named one in physics) is as follows:
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.
I don’t say this often, but seriously, read the whole thing.
I do not subscribe to this interpretation.
I believe that the majority of people effectively endorse this interpretation. I do not think they endorse it consciously or explicitly. But they act as if it is true.
Another aspect of this same phenomenon is how most people view justice.
Almost everyone agrees justice is a sacred value. That it is good and super important. Justice is one of the few universally agreed upon goals of government. Justice is one of the eight virtues of the avatar. Justice is up there with truth and the American way. No justice, no peace.
But what is justice? Or rather, to avoid going too deeply into an infinitely complex philosophical debate millennia or eons old, how do most people instinctively model justice in broad terms?
In a conversation last night, this was offered to me (I am probably paraphrasing due to bad memory, but it’s functionally what was said), and seems common: Justice is giving appropriate punishment to those who have taken bad action.
I asked whether, in this person’s model, the actions needed to be bad in order to be relevant to justice. This prompted pondering, after which the reply was that yes, that was how their model worked.
I then asked whether rewarding a good action counted as justice, or failing to do so counted as injustice, using the example of saving someone’s life going unrewarded.
We can consider three point-based justice systems.
In the asymmetric system, when bad action is taken, bad action points are accumulated. Justice punishes in proportion to those points to the extent possible. Each action is assigned a non-negative point total.
In the symmetric system, when any action is taken, good or bad, points are accumulated. This can be, and often is, zero; it is negative for bad action and positive for good action. Justice consists of punishing negative point totals and rewarding positive point totals.
In what we will call the Good Place system (Spoiler Alert for Season 1), when any action is taken, good or bad, points are accumulated as in the symmetric system. But there’s a catch (which is where the spoiler comes in). If you take actions with good consequences, you only get those points if your motive was to do good. When a character attempts to score points by holding open doors for people, they fail to score any points because they are gaming the system. Gaming the system isn’t allowed.
Thus, if one takes action even under the best of motives, one fails to capture much of the gains from such action. Second or higher order benefits, or surprising benefits, that are real but unintended, will mostly not get captured.
The opposite is not true of actions with bad consequences. You lose points for bad actions whether or not you intended to be bad. It is your responsibility to check yourself before you wreck yourself.
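As a concrete illustration of the difference between the three systems, here is a minimal sketch in Python. Representing each action as a (consequence value, good motive) pair, and mapping negative consequence value to punishment points, are assumptions made for illustration; the essay itself stays informal.

```python
# A minimal sketch of the three point-based justice systems described above.
# The (value, good_motive) action representation is an illustrative assumption.

def asymmetric(actions):
    # Only bad actions accumulate (non-negative) punishment points.
    return sum(max(0.0, -value) for value, _good_motive in actions)

def symmetric(actions):
    # Every action, good or bad, adds its value to one running total.
    return sum(value for value, _good_motive in actions)

def good_place(actions):
    # Good consequences only count when the motive was good; bad ones always count.
    return sum(value if (value < 0 or good_motive) else 0.0
               for value, good_motive in actions)

history = [(+5.0, True),   # helped someone, genuinely meant to
           (+3.0, False),  # held doors open purely to score points
           (-4.0, True)]   # caused harm despite good intentions

print(asymmetric(history))  # 4.0 points of punishable badness
print(symmetric(history))   # +4.0 net
print(good_place(history))  # +1.0 net: the door-holding scores nothing
```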
When (Spoiler Alert fo...
|
Dec 12, 2021 |
We've built Connected Papers - a visual tool for researchers to find and explore academic papers by discordy
04:09
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: We've built Connected Papers - a visual tool for researchers to find and explore academic papers, published by discordy on the LessWrong.
Hi LessWrong. I'm a long time lurker and finally have something that I'm really proud to share with you.
After a long beta, we are releasing Connected Papers to the public!
Connected papers is a unique, visual tool to help researchers and applied scientists find and explore papers relevant to their field of work.
First - let's look at a couple of example graphs for work that is representative of this community:
Nick Bostrom:
Eliezer Yudkowsky, Nate Soares:
Did you find new and interesting papers to read? Would this be helpful as an introduction to the literature of a new field of study?
The problem
Almost every research project in academia or industry involves phases of literature review. Many times we find an interesting paper, and we’d like to:
Find different methods and approaches to the same subject
Track down the state of the art research in the field
Identify seminal works and background reading
Explore and immerse ourselves in the topic and become aware of the trends and dynamics in the literature
Previously, the best ways to do this were to browse reference lists, or hope to find good keywords in textual search engines and databases.
Enter Connected Papers
It started as a side project between friends. We’ve felt the pains of academic literature review and exploration for years and kept thinking about how to solve it.
For the past year we’ve been meeting on weekends and prototyping a tool that would allow a very different type of search process for academic papers. When we saw how much it improved our own research and development workflows — and got increasingly more requests from friends and colleagues to use it — we committed to release it to the public.
You know, for science.
So how does it work?
Connected Papers is not a citation tree. Those have been done before.
In our graph, papers are arranged according to their similarity. That means that even papers that do not directly cite each other can be strongly connected and positioned close to each other in the graph.
To get a bit technical, our similarity is based primarily on the concepts of co-citation and bibliographic coupling (aka co-reference). According to this measure, two papers that have highly overlapping citations and references are presumed to have a higher chance of treating a related subject matter.
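A minimal sketch of this kind of similarity measure, assuming a simple Jaccard overlap and an even weighting between the two signals (the actual Connected Papers formula is not specified here):

```python
# Papers that share many references (bibliographic coupling) or are cited together
# by the same later papers (co-citation) score as related. Weights and the Jaccard
# combination below are illustrative assumptions.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def similarity(paper_a, paper_b, cited_by, references, w_cocite=0.5, w_coupling=0.5):
    # cited_by[p]  : set of papers that cite p   (for co-citation)
    # references[p]: set of papers that p cites  (for bibliographic coupling)
    co_citation = jaccard(cited_by[paper_a], cited_by[paper_b])
    coupling = jaccard(references[paper_a], references[paper_b])
    return w_cocite * co_citation + w_coupling * coupling

references = {"A": {"x", "y", "z"}, "B": {"y", "z", "w"}}
cited_by   = {"A": {"p", "q"},      "B": {"q", "r"}}
print(similarity("A", "B", cited_by, references))  # 0.5*(1/3) + 0.5*(1/2) ≈ 0.42
```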
Reading the graph
Our graph is designed to make the important and relevant papers pop out immediately
With our layout algorithm, similar papers cluster together in space and are connected by stronger lines (edges). Popular papers (that are frequently cited) are represented by bigger circles (nodes) and more recent papers are represented by a darker color.
So for example, finding an important new paper in your field is as easy as identifying the dark large node at the center of a big cluster.
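For illustration, here is a rough sketch of that visual encoding using networkx and matplotlib as stand-ins; the sample data and the layout engine are assumptions, not the site's actual implementation.

```python
# Node size ~ citation count, node shade ~ recency, edge width ~ similarity.
import networkx as nx
import matplotlib.pyplot as plt

papers = {"A": (350, 2015), "B": (120, 2019), "C": (40, 2020)}  # (citations, year)
edges = [("A", "B", 0.8), ("A", "C", 0.3), ("B", "C", 0.6)]     # similarity weights

G = nx.Graph()
for u, v, w in edges:
    G.add_edge(u, v, weight=w)

pos = nx.spring_layout(G, weight="weight", seed=0)    # similar papers pulled together
sizes = [papers[n][0] * 3 for n in G.nodes]           # bigger node = more citations
colors = [papers[n][1] for n in G.nodes]              # darker shade = more recent
widths = [G[u][v]["weight"] * 4 for u, v in G.edges]  # stronger line = more similar

nx.draw(G, pos, node_size=sizes, node_color=colors, cmap="Blues",
        width=widths, with_labels=True)
plt.show()
```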
List view
In some cases it is convenient to work with just a list of connected papers. For these occasions, we’ve built the List view which you can access by clicking “Expand” at the top of the left panel. Here you can view additional paper details as well as sort and filter them according to various properties.
Prior and derivative works
The Prior works feature lists the top common ancestral papers for the connected papers in the graph. It usually includes seminal works in the field that heavily influenced the next generation.
Meanwhile, the Derivative works feature is the opposite: it shows a list of common descendants of the papers in the graph. It usually includes relevant state of the art papers or systematic reviews and meta-analyses in the field.
We have found these features to be especially useful when we have a paper from one era of research and we would like to be ...
|
Dec 12, 2021 |
Two non-obvious lessons from microcovid.org by catherio
02:35
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Two non-obvious lessons from microcovid.org, published by catherio on the LessWrong.
At the 2021 Summer Solstice, Elizabeth Van Nostrand made a brief speech thanking the organizers of microcovid.org, which I found very heartwarming and meaningful.
I wish I had thought in advance about taking the opportunity to make a few public remarks about microcovid.org: things I wish the community knew, that weren't obvious.
Here's what I would've said, in terms of my lessons from this project:
Build your own oxygen mask. Next, share it with others.
Connect and collaborate with non-rationalists.
1) Build your own oxygen mask. Next, share it with others.
We didn't start out trying to create a resource for our whole community, let alone a website with many thousands of users. All we wanted was to save our own asses. We looked at the precipitous "autonomy crunch" we were facing, and said "oh shit, our house is going to explode if we don't fix this."
So, we built a spreadsheet — for ourselves first. Other group houses asked about it, and the momentum snowballed inexorably from there. Each broadening of project scope was compelled by a commensurate rise in demand, and a corresponding deeply felt motivation.
I think many people who have altruistic or worldsaving ambitions could stand to focus more on first making their own lives not suck. Fixing huge problems in your own life — and then later making an extra effort to share and export those fixes — is one important path to altruistic impact.
2) Connect and collaborate with non-rationalists
To my knowledge, I'm the only one of the top dozen or so contributors who self-identifies as a rationalist.
The "core idea" is an extremely, extremely rationalist idea. But the implementation took writers, copyeditors, web developers, backend developers, UX designers, a medical doctor whose patients were among our first users, and many more. These folks had to understand the core idea and know how to use it, but did not have to be skilled enough at quantitative risk thinking to have designed it in the first place.
The final product had a vastly more scalable reach because many people, who had very little identity-level commitment to epistemics, looked at it and said things like "I need to access this on my phone, I won't ever use a spreadsheet" or "This has too much jargon, move all these details to the appendix."
Thank you again everyone for the gratitude and recognition; and for using the system to make your own lives suck a little less!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
SIAI - An Examination by BrandonReinhart
24:54
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: SIAI - An Examination, published by BrandonReinhart on the LessWrong.
12/13/2011 - A 2011 update with data from the 2010 fiscal year is in progress. Should be done by the end of the week or sooner.
Disclaimer
I am not affiliated with the Singularity Institute for Artificial Intelligence.
I have not donated to the SIAI prior to writing this.
I made this pledge prior to writing this document.
Notes
Images are now hosted on LessWrong.com.
The 2010 Form 990 data will be available later this month.
It is not my intent to propagate misinformation. Errors will be corrected as soon as they are identified.
Introduction
Acting on gwern's suggestion in his Girl Scout Cookie analysis, I decided to look at SIAI funding. After reading about the Visiting Fellows Program and more recently the Rationality Boot Camp, I decided that the SIAI might be something I would want to support. I am concerned with existential risk and grapple with the utility implications. I feel that I should do more.
I wrote on the mini-boot camp page a pledge that I would donate enough to send someone to rationality mini-boot camp. This seemed to me a small cost for the potential benefit. The SIAI might get better at building rationalists. It might build a rationalist who goes on to solve a problem. Should I donate more? I wasn’t sure. I read gwern’s article and realized that I could easily get more information to clarify my thinking.
So I downloaded the SIAI’s Form 990 annual IRS filings and started to write down notes in a spreadsheet. As I gathered data and compared it to my expectations and my goals, my beliefs changed. I now believe that donating to the SIAI is valuable. I cannot hide this belief in my writing. I simply have it.
My goal is not to convince you to donate to the SIAI. My goal is to provide you with information necessary for you to determine for yourself whether or not you should donate to the SIAI. Or, if not that, to provide you with some direction so that you can continue your investigation.
The SIAI's Form 990's are available at GuideStar and Foundation Center. You must register in order to access the files at GuideStar.
2002 (Form 990-EZ)
2003 (Form 990-EZ)
2004 (Form 990-EZ)
2005 (Form 990)
2006 (Form 990)
2007 (Form 990)
2008 (Form 990-EZ)
2009 (Form 990)
SIAI Financial Overview
The Singularity Institute for Artificial Intelligence (SIAI) is a public organization working to reduce existential risk from future technologies, in particular artificial intelligence. "The Singularity Institute brings rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence." The SIAI are also the founders of Less Wrong.
The graphs above offer an accurate summary of the SIAI's financial state since 2002. Sometimes the end-of-year balances listed in the Form 990 don't match what you'd get if you did the math by hand. These are noted as discrepancies between the filed year-end balance and the expected year-end balance, or between the filed year-start balance and the expected year-start balance.
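The check itself is simple arithmetic. A minimal sketch, with made-up figures rather than actual Form 990 line items:

```python
# Does the filed year-end balance match start balance + revenue - expenses?
# Field names and sample figures are illustrative assumptions.

def check_balance(filed_start, revenue, expenses, filed_end, tol=0.01):
    expected_end = filed_start + revenue - expenses
    diff = filed_end - expected_end
    if abs(diff) <= tol:
        return "consistent"
    return f"discrepancy: filed end balance is off by {diff:+,.2f}"

# Hypothetical year: starts with $10,000, takes in $50,000, spends $40,000,
# but files a year-end balance of $19,900.
print(check_balance(10_000, 50_000, 40_000, 19_900))
# discrepancy: filed end balance is off by -100.00
```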
Filing Error 1 - There appears to be a minor typo, to the effect of $4.65, in the end-of-year balance for the 2004 document. It appears that Part I, Line 18 has been summed incorrectly: $32,445.76 is listed, but the expected result is $32,450.41. The Part II balance sheet calculations agree with the error, so its source is unclear. The start-of-year balance in 2005 reflects the expected value, so this was probably just a typo in 2004; the following year's reported start-of-year balance does not contain the error.
Filing Error 2 - The 2006 document reports a year start balance of $95,105.00 when the expected year start balance is $165,284.00, a discrepancy of $70,179.00. This amount is close to ...
|
Dec 12, 2021 |
Covid-19: My Current Model by Zvi
32:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Covid-19: My Current Model, published by Zvi on LessWrong.
The post will be a summary of my current key views on various aspects of what is going on, especially in places where I see many or most responsible-looking people getting it importantly wrong.
This post is not making strong evidence-based arguments for these views. This is not that post. This is me getting all this out there, on the record, in a place one can reference.
Risks Follow Power Laws
It is impossible to actually understand Covid-19 if you think of some things as ‘risky’ and other things as ‘safe’ and group together all the things in each category. And yet, that’s exactly how most of our thinking is directed.
Instead, think of risks as following power laws.
The riskiest activities are indoors, involve close physical proximity with others for extended periods of time while they cough, sing, puff or otherwise powerfully exhale, or talk directly at us, or involve actual physical contact that then reaches one's eyes, nose or mouth.
Activities missing any of those components are much, much safer than activities that share all those components.
Then other actions, such as masks and hand washing and not-face-touching, can reduce that risk by further large percentages.
Slight reductions in the frequency and severity of your very risky actions are much more important than reducing the frequency of nominally risky actions.
The few times you end up talking directly with someone in the course of business, the one social gathering you attend, the one overly crowded store you had to walk through, will dominate your risk profile. Be paranoid about that, and think how to make it less risky, or ideally avoid it. Don’t sweat the small stuff.
And think about the physical world and what’s actually happening around you!
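A minimal sketch of why this holds: when per-activity risks span orders of magnitude, summing them shows the top one or two activities carrying nearly all of the total. The activities and numbers below are made-up illustrative assumptions, not real risk estimates.

```python
# Weekly activity risks (illustrative, not real estimates).
week = {
    "masked grocery run":           0.0001,
    "outdoor walk past strangers":  0.00001,
    "unmasked indoor conversation": 0.01,
    "crowded indoor gathering":     0.05,
}

total = sum(week.values())
for activity, risk in sorted(week.items(), key=lambda kv: -kv[1]):
    print(f"{activity:32s} {risk:.5f}  ({risk / total:5.1%} of total)")
# The two indoor, close-contact items account for ~99.8% of the week's risk,
# so marginal improvements there matter far more than fussing over the rest.
```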
Sacrifices To The Gods Are Demanded Everywhere
A sacrifice to the Gods (post of this topic to be linked in when finally written) is an action with physical costs but with no interest in any meaningful physical benefits, taken in the hope that it will make one less blameworthy. Things are bad because we have sinned. The Gods demand sacrifice. If we do not act appropriately repentant and concerned, things will surely get worse.
Once we act appropriately, we are virtuous and will doubtless be saved. We can stop. There is no need to proceed in a way that would actually work, once the Gods have been placated. Everything will work out.
If you don’t make the proper sacrifices, then anything that goes wrong means it’s your fault. Or at least, you’ll always worry it is your fault. As will others. If you do make the proper sacrifices, nothing is your fault. Much better.
If the action were efficient and actually solved the problem in a meaningful way, that would invalidate the whole operation. You can either show you are righteous and trust in the Gods, or you can go about actually solving the problem. For obvious reasons, you can't do both.
A steelman of this is that Complexity is Bad and nuance impossible. If we start doing things based on whether they make sense, that sets a terrible example, and most people will be hopelessly lost.
Thus, we sanitize packages. We stay exactly six feet apart. We wait exactly two weeks. We close all ‘non-essential’ businesses, but not ‘essential’ ones. We issue stay at home orders and give huge checks to the unemployed. Then we turn around and ‘reopen’ at which point that unemployment is voluntary, the state doesn’t have to pay, and so people are forced to go back to work. We lie to ban masks, then we try to mandate them, and wonder why people don’t trust the authorities. We hail our health care workers as heroes but don’t let them run experiments or gather much data. And of course, we enforce regulations enforce regulations enforce regulations, while shouting about how g...
|
Dec 12, 2021 |
Eliezer's Sequences and Mainstream Academia by lukeprog
06:47
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Eliezer's Sequences and Mainstream Academia, published by lukeprog on the LessWrong.
Due in part to Eliezer's writing style (e.g. not many citations), and in part to Eliezer's scholarship preferences (e.g. his preference to figure out much of philosophy on his own), Eliezer's Sequences don't accurately reflect the close agreement between the content of The Sequences and work previously done in mainstream academia.
I predict several effects from this:
Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
Some readers will mistakenly think Eliezer's Sequences are more original than they really are.
If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer's article.
I'd like to counteract these effects by connecting the Sequences to the professional literature. (Note: I sort of doubt it would have been a good idea for Eliezer to spend his time tracking down more references and so on, but I realized a few weeks ago that it wouldn't take me much effort to list some of those references.)
I don't mean to minimize the awesomeness of the Sequences. There is much original content in them (edit: probably most of their content is original), they are engagingly written, and they often have a more transformative effect on readers than the corresponding academic literature.
I'll break my list of references into sections based on how likely I think it is that a reader will have missed the agreement between Eliezer's articles and mainstream academic work.
(This is only a preliminary list of connections.)
Obviously connected to mainstream academic work
Eliezer's posts on evolution mostly cover material you can find in any good evolutionary biology textbook, e.g. Freeman & Herron (2007).
Likewise, much of the Quantum Physics sequence can be found in quantum physics textbooks, e.g. Sakurai & Napolitano (2010).
An Intuitive Explanation of Bayes' Theorem, How Much Evidence Does it Take, Probability is in the Mind, Absence of Evidence Is Evidence of Absence, Conservation of Expected Evidence, Trust in Bayes: see any textbook on Bayesian probability theory, e.g. Jaynes (2003) or Friedman & Koller (2009).
What's a Bias, again?, Hindsight Bias, Correspondence Bias; Positive Bias: Look into the Dark, Doublethink: Choosing to be Biased, Rationalization, Motivated Stopping and Motivated Continuation, We Change Our Minds Less Often Than We Think, Knowing About Biases Can Hurt People, Asch's Conformity Experiment, The Affect Heuristic, The Halo Effect, Anchoring and Adjustment, Priming and Contamination, Do We Believe Everything We're Told, Scope Insensitivity: see standard works in the heuristics & biases tradition, e.g. Kahneman et al. (1982), Gilovich et al. (2002), and Kahneman (2011).
According to Eliezer, The Simple Truth is Tarskian and Making Beliefs Pay Rent is Peircian.
The notion of Belief in Belief comes from Dennett (2007).
Fake Causality and Timeless Causality report on work summarized in Pearl (2000).
Fake Selfishness argues that humans aren't purely selfish, a point argued more forcefully in Batson (2011).
Less obviously connected to mainstream academic work
Eliezer's metaethics sequence includes dozens of lemmas previously discussed by philosophers (see Miller 2003 for an overview), and the resulting metaethical theory shares much in common with the metaethical theories of Jackson (1998) and Railton (2003), and must face some of the same critiques as those theories do (e.g. Sobel 1994).
Eliezer's free will mini-sequence includes coverage of topics not usually mentioned when philosophers discuss free will (e.g. Judea Pearl's work on causality), but the conclusion is standard compatibilism.
How an Algorithm Feels F...
|
Dec 12, 2021 |
Less Wrong NYC: Case Study of a Successful Rationalist Chapter by Cosmos
17:48
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Less Wrong NYC: Case Study of a Successful Rationalist Chapter, published by Cosmos on the LessWrong.
It is perhaps the best-kept secret on Less Wrong that the New York City community has been meeting regularly for almost two years. For nearly a year we've been meeting weekly or more. The rest of this post is going to be a practical guide to the benefits of group rationality, but first I will do something that is still too rare on this blog: make it clear how strongly I feel about this. Before this community took off, I did not believe that life could be this much fun or that I could possibly achieve such a sustained level of happiness.
Being rational in an irrational world is incredibly lonely. Every interaction reveals that our thought processes differ widely from those around us, and I had accepted that such a divide would always exist. For the first time in my life I have dozens of people with whom I can act freely and revel in the joy of rationality without any social concern - hell, it's actively rewarded! Until the NYC Less Wrong community formed, I didn't realize that I was a forager lost without a tribe...
Rationalists are still human, and we still have basic human needs. lukeprog summarizes the literature on subjective well-being, and the only factors which correlate to any degree are genetics, health, work satisfaction and social life - which actually gets listed three separate times as social activity, relationship satisfaction and religiosity. Rationalists tend to be less socially adept on average, and this can make it difficult to obtain the full rewards of social interaction. However, once rationalists learn to socialize with each other, they also become increasingly social towards everyone more generally. This improves your life. A lot.
We are a group of friends to enjoy life alongside, while we try miracle fruit, dance ecstatically until sunrise, actively embarrass ourselves at karaoke, get lost in the woods, and jump off waterfalls. Poker, paintball, parties, go-karts, concerts, camping... I have a community where I can live in truth and be accepted as I am, where I can give and receive feedback and get help becoming stronger. I am immensely grateful to have all of these people in my life, and I look forward to every moment I spend with them. To love and be loved is an unparalleled experience in this world, once you actually try it.
So, you ask, how did all of this get started...?
Genesis, or a Brief History of Nearly Everything
The origin of the NYC chapter was the April 24th, 2009 meetup that Robin Hanson organized when he came to the city for a prediction markets conference. Approximately 15 people attended over the course of the night, and we all agreed that we had way too much fun together not to do this on a regular basis. I handed out my business cards to everyone there, told them to e-mail me, and I would create a mailing list. Thus Overcoming Bias NYC was born.
It was clear from the very beginning that Jasen Murray was the person most interested in seeing this happen, so he became the organizer of the group for the first year of its existence. At first the times and locations were impromptu, but in August Jasen made the brilliant move of precommitting to be at a specific time and place for a minimum of two hours twice per month. Because enough of us liked Jasen and wanted to hang out with him anyway, several people began showing up every time and a regular meetup was established. Going forward we tried a combination of social meetups, focused discussions and game nights. Jasen also attempted to shift coordination from the mailing list to the Meetup group, but Meetup is not a great mailing list and people were loath to use multiple services. That now serves as our public face.
In April 2010, Jasen departed to run the Visiting Fellows progra...
|
Dec 12, 2021 |
Hero Licensing by Eliezer Yudkowsky
01:24:45
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Hero Licensing, published by Eliezer Yudkowsky on the LessWrong.
I expect most readers to know me either as MIRI's co-founder and the originator of a number of the early research problems in AI alignment, or as the author of Harry Potter and the Methods of Rationality, a popular work of Harry Potter fanfiction. I’ve described how I apply concepts in Inadequate Equilibria to various decisions in my personal life, and some readers may be wondering how I see these tying in to my AI work and my fiction-writing. And I do think these serve as useful case studies in inadequacy, exploitability, and modesty.
As a supplement to Inadequate Equilibria, then, the following is a dialogue that never took place—largely written in 2014, and revised and posted online in 2017.
i. Outperforming and the outside view
(The year is 2010. eliezer-2010 is sitting in a nonexistent park in Redwood City, California, working on his laptop. A person walks up to him.)
person: Pardon me, but are you Eliezer Yudkowsky?
eliezer-2010: I have that dubious honor.
person: My name is Pat; Pat Modesto. We haven’t met, but I know you from your writing online. What are you doing with your life these days?
eliezer-2010: I’m trying to write a nonfiction book on rationality. The blog posts I wrote on Overcoming Bias—I mean Less Wrong—aren’t very compact or edited, and while they had some impact, it seems like a book on rationality could reach a wider audience and have a greater impact.
pat: Sounds like an interesting project! Do you mind if I peek in on your screen and—
eliezer: (shielding the screen) —Yes, I mind.
pat: Sorry. Um... I did catch a glimpse and that didn’t look like a nonfiction book on rationality to me.
eliezer: Yes, well, work on that book was going very slowly, so I decided to try to write something else in my off hours, just to see if my general writing speed was slowing down to molasses or if it was this particular book that was the problem.
pat: It looked, in fact, like Harry Potter fanfiction. Like, I’m pretty sure I saw the words “Harry” and “Hermione” in configurations not originally written by J. K. Rowling.
eliezer: Yes, and I currently seem to be writing it very quickly. And it doesn’t seem to use up mental energy the way my regular writing does, either.
(A mysterious masked stranger, watching this exchange, sighs wistfully.)
eliezer: Now I’ve just got to figure out why my main book-writing project is going so much slower and taking vastly more energy... There are so many books I could write, if I could just write everything as fast as I’m writing this...
pat: Excuse me if this is a silly question. I don’t mean to say that Harry Potter fanfiction is bad—in fact I’ve read quite a bit of it myself—but as I understand it, according to your basic philosophy the world is currently on fire and needs to be put out. Now given that this is true, why are you writing Harry Potter fanfiction, rather than doing something else?
eliezer: I am doing something else. I’m writing a nonfiction rationality book. This is just in my off hours.
pat: Okay, but I’m asking why you are doing this particular thing in your off hours.
eliezer: Because my life is limited by mental energy far more than by time. I can currently produce this work very cheaply, so I’m producing more of it.
pat: What I’m trying to ask is why, even given that you can write Harry Potter fanfiction very cheaply, you are writing Harry Potter fanfiction. Unless it really is true that the only reason is that you need to observe yourself writing quickly in order to understand the way of quick writing, in which case I’d ask what probability you assign to learning that successfully. I’m skeptical that this is really the best way of using your off hours.
eliezer: I’m skeptical that you have correctly understood the concept of “off hours.” There’s a ...
|
Dec 12, 2021 |
Common knowledge about Leverage Research 1.0 by BayAreaHuman
08:10
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Common knowledge about Leverage Research 1.0, published by BayAreaHuman on the LessWrong.
I've spoken to people recently who were unaware of some basic facts about Leverage Research 1.0; facts that are more-or-less "common knowledge" among people who spent time socially adjacent to Leverage, and are not particularly secret or surprising in Leverage-adjacent circles, but aren't attested publicly in one place anywhere.
Today, Geoff Anders and Leverage 2.0 are moving into the "Progress Studies" space, and seeking funding in this area (see: Geoff recently got a small grant from Emergent Ventures). This seems like an important time to contribute to common knowledge about Leverage 1.0.
You might conclude that I'm trying to discredit people who were involved, but that's not my aim here. My friends who were involved in Leverage 1.0 are people who I respect greatly. Rather, I just keep being surprised that people haven't heard certain specific, more-or-less legible facts about the past, that seem well-known or obvious to me, and that I feel should be taken into account when evaluating Leverage as a player in the current landscape. I would like to create here a publicly-linkable document containing these statements.
Facts that are common knowledge among people I know:
Members of Leverage 1.0 lived and worked in the same Leverage-run building, an apartment complex near Lake Merritt. (Living there was not required, but perhaps half the members did, and new members were particularly encouraged to.)
Participation in the project involved secrecy / privacy / information-management agreements. People were asked to sign an agreement that prohibited publishing almost anything (for example, in one case someone I know started a personal blog on unrelated topics without permission and received a stern reprimand).
Geoff developed a therapy technique, "charting". He says he developed it based on his novel and complete theory of psychology, called "Connection Theory". In my estimation, "charting" is in the same rough family of psychotherapy techniques as Internal Family Systems, Coherence Therapy, Core Transformation, and similar. Like those techniques, it leads to shifts in clients' beliefs and moods. I know people from outside Leverage who did charting sessions with a "coach" from Paradigm Academy, and reported it helped them greatly. I've also heard people who did lots of charting within Leverage report that it led to dissociation and fragmentation, that they have found difficult to reverse.
Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory "trainer", and to "train" other members. The role of trainer is something like "manager + therapist": that is, both "is evaluating your job performance" and "is doing therapy on you".
Another type of practice done at the organization, and offered to some people outside the organization, was "bodywork", which involved physical contact between the trainer and the trainee. "Bodywork" could in other contexts be a synonym for "massage", but that's not what's meant here; descriptions I heard of sessions sounded to me more like "energy work". People I've spoken to say it was reported to produce deeper and less legible change.
Using psychological techniques to experiment on one another, and on the "sociology" of the group itself, was a main purpose of the group. It was understood among members that they were signing up to be guinea pigs for experiments in introspection, altering one's belief structure, and experimental group dynamics.
The stated purpose of the group was to discover more theories of human behavior and civilization by "theorizing", while building power, and then literally take over US and/or global governance (the vibe was "take over the world"). The purpose of gaining global power was to lead to bett...
|
Dec 12, 2021 |
On saving the world by So8res
25:40
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: On saving the world, published by So8res on LessWrong.
This is the final post in my productivity sequence.
The first post described what I achieved. The next three posts describe how. This post describes why, explaining the sources of my passion and the circumstances that convinced a young Nate to try and save the world. Within, you will find no suggestions, no techniques to emulate, no new ideas to ponder. This is a rationalist coming-of-age story. With luck, you may find it inspiring. Regardless, I hope you can learn from my mistakes.
Never fear, I'll be back to business soon — there's lots of studying to do. But before then, there's a story to tell, a memorial to what I left behind.
I was raised Catholic. On my eighth birthday, having received my first communion about a year prior, I casually asked my priest how to reaffirm my faith and do something for the Lord. The memory is fuzzy, but I think I donated a chunk of allowance money and made a public confession at the following mass.
A bunch of the grownups made a big deal out of it, as grownups are like to do. "Faith of a child", and all that. This confused me, especially when I realized that what I had done was rare. I wasn't trying to get pats on the head, I was appealing to the Lord of the Heavens and the Earth. Were we all on the same page, here? This was the creator. He was infinitely virtuous, and he had told us what to do.
And yet, everyone was content to recite hymns once a week and donate for the reconstruction of the church. What about the rest of the world, the sick, the dying? Where were the proselytizers, the missionary opportunities? Why was everyone just sitting around?
On that day, I became acquainted with civilizational inadequacy. I realized you could hand a room full of people the literal word of God, and they'd still struggle to pay attention for an hour every weekend.
This didn't shake my faith, mind you. It didn't even occur to me that the grownups might not actually believe their tales. No, what I learned that day was that there are a lot of people who hold beliefs they aren't willing to act upon.
Eventually, my faith faded. The distrust remained.
Gaining Confidence
I grew up in a small village, population ~1200. My early education took place in a one-room schoolhouse. The local towns eventually rolled all their school districts into one, but even then, my graduating class barely broke 50 people. It wasn't difficult to excel.
Ages twelve and thirteen were rough — that was right after they merged school districts, and those were the years I was first put a few grades ahead in math classes. I was awkward and underconfident. I felt estranged and lonely, and it was easy to get shoehorned into the "smart kid" stereotype by all the new students.
Eventually, though, I decided that the stereotype was bogus. Anyone intelligent should be able to escape such pigeonholing. In fact, I concluded that anyone with real smarts should be able to find their way out of any mess. I observed the confidence possessed by my peers, even those who seemed to have no reason for confidence. I noticed the ease with which they engaged in social interactions. I decided I could emulate these.
I faked confidence, and it soon became real. I found that my social limitations had been largely psychological, and that the majority of my classmates were more than willing to be friends. I learned how to get good grades without alienating my peers. It helped that I tended to buck authority (I was no "teacher's pet") and that I enjoyed teaching others. I had a knack for pinpointing misunderstandings and was often able to teach better than the teachers could — as a peer, I could communicate on a different level.
I started doing very well for myself. I got excellent grades with minimal effort. I overcame my social anxieties...
|
Dec 12, 2021 |
A Crash Course in the Neuroscience of Human Motivation by lukeprog
01:10:20
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A Crash Course in the Neuroscience of Human Motivation , published by
|
Dec 12, 2021 |
Can the Chain Still Hold You? by lukeprog
06:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Can the Chain Still Hold You?, published by lukeprog on LessWrong.
Robert Sapolsky:
Baboons... literally have been the textbook example of a highly aggressive, male-dominated, hierarchical society. Because these animals hunt, because they live in these aggressive troupes on the Savannah... they have a constant baseline level of aggression which inevitably spills over into their social lives.
Scientists have never observed a baboon troupe that wasn't highly aggressive, and they have compelling reasons to think this is simply baboon nature, written into their genes. Inescapable.
Or at least, that was true until the 1980s, when Kenya experienced a tourism boom.
Sapolsky was a grad student, studying his first baboon troupe. A new tourist lodge was built at the edge of the forest where his baboons lived. The owners of the lodge dug a hole behind the lodge and dumped their trash there every morning, after which the males of several baboon troupes — including Sapolsky's — would fight over this pungent bounty.
Before too long, someone noticed the baboons didn't look too good. It turned out they had eaten some infected meat and developed tuberculosis, which kills baboons in weeks. Their hands rotted away, so they hobbled around on their elbows. Half the males in Sapolsky's troupe died.
This had a surprising effect. There was now almost no violence in the troupe. Males often reciprocated when females groomed them, and males even groomed other males. To a baboonologist, this was like watching Mike Tyson suddenly stop swinging in a heavyweight fight to start nuzzling Evander Holyfield. It never happened.
This was interesting, but Sapolsky moved to the other side of the park and began studying other baboons. His first troupe was "scientifically ruined" by such a non-natural event. But really, he was just heartbroken. He never visited.
Six years later, Sapolsky wanted to show his girlfriend where he had studied his first troupe, and found that they were still there, and still surprisingly violence-free. This one troupe had apparently been so transformed by their unusual experience — and the continued availability of easy food — that they were now basically non-violent.
And then it hit him.
Only one of the males now in the troupe had been through the event. All the rest were new, and hadn't been raised in the tribe. The new males had come from the violent, dog-eat-dog world of normal baboon-land. But instead of coming into the new troupe and roughing everybody up as they always did, the new males had learned, "We don't do stuff like that here." They had unlearned their childhood culture and adapted to the new norms of the first baboon pacifists.
As it turned out, violence wasn't an unchanging part of baboon nature. In fact it changed rather quickly, when the right causal factor flipped, and — for this troupe and the new males coming in — it has stayed changed to this day.
Somehow, the violence had been largely circumstantial. It was just that the circumstances had always been the same.
Until they weren't.
We still don't know how much baboon violence to attribute to nature vs. nurture, or exactly how this change happened. But it's worth noting that changes like this can and do happen pretty often.
Slavery was ubiquitous for millennia. Until it was outlawed in every country on Earth.
Humans had never left the Earth. Until we achieved the first manned orbit and the first manned moon landing in a single decade.
Smallpox occasionally decimated human populations for thousands of years. Until it was eradicated.
The human species was always too weak to render itself extinct. Until we discovered the nuclear chain reaction and manufactured thousands of atomic bombs.
Religion had a grip on 99.5% or more of humanity until 1900, and then the rate of religious adherence plummeted to 85% by th...
|
Dec 12, 2021 |
Checklist of Rationality Habits by AnnaSalamon
13:49
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Checklist of Rationality Habits, published by AnnaSalamon on the LessWrong.
As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical, and more into fine-grained habits.
Below is the checklist of rationality habits we have been using in the minicamps' opening session. It was co-written by Eliezer, myself, and a number of others at CFAR. As mentioned below, the goal is not to assess how "rational" you are, but, rather, to develop a personal shopping list of habits to consider developing. We generated it by asking ourselves, not what rationality content it's useful to understand, but what rationality-related actions (or thinking habits) it's useful to actually do.
I hope you find it useful; I certainly have. Comments and suggestions are most welcome; it remains a work in progress. (It's also available as a pdf.)
This checklist is meant for your personal use so you can have a wish-list of rationality habits, and so that you can see if you're acquiring good habits over the next year—it's not meant to be a way to get a 'how rational are you?' score, but, rather, a way to notice specific habits you might want to develop. For each item, you might ask yourself: did you last use this habit...
Never
Today/yesterday
Last week
Last month
Last year
Before the last year
Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination.
When I see something odd - something that doesn't fit with what I'd ordinarily expect, given my other beliefs - I successfully notice, promote it to conscious attention and think "I notice that I am confused" or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention? (Based on the experience of an actual LWer who failed to notice confusion at this point and missed their plane flight.))
When somebody says something that isn't quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying "stacks". I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of "very competitive." She said that when he’s driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he’s the passenger he gets mad at the driver when they don’t react similarly.)
I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.)
I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.)
I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn't really been asking for money at previous ones. My brain was offering resistance, so I applied the "bad news is good news" pattern to rephrase this as, ...
|
Dec 12, 2021 |
The Treacherous Path to Rationality by Jacob Falkovich
17:55
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Treacherous Path to Rationality, published by Jacob Falkovich on the LessWrong.
Cross-posted, as always, from Putanumonit.
Rats v. Plague
The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.
We started discussing the virus and raising the alarm in private back in January. By late February, as American health officials were almost unanimously downplaying the threat, we wrote posts on taking the disease seriously, buying masks, and preparing for quarantine.
Throughout March, the CDC was telling people not to wear masks and not to get tested unless displaying symptoms. At the same time, Rationalists were already covering every relevant angle, from asymptomatic transmission to the effect of viral load, to the credibility of the CDC itself. As despair and confusion reigned everywhere into the summer, Rationalists built online dashboards modeling nationwide responses and personal activity risk to let both governments and individuals make informed decisions.
This remarkable success did not go unnoticed. Before he threatened to doxx Scott Alexander and triggered a shitstorm, New York Times reporter Cade Metz interviewed me and other Rationalists mostly about how we were ahead of the curve on COVID and what others can learn from us. I told him that Rationality has a simple message: “people can use explicit reason to figure things out, but they rarely do”.
If rationalists led the way in covering COVID-19, Vox brought up the rear
Rationalists have been working to promote the application of explicit reason, to “raise the sanity waterline” as it were, but with limited success. I wrote recently about success stories of rationalist improvement but I don’t think it inspired a rush to LessWrong. This post is in a way a response to my previous one. It’s about the obstacles preventing people from training and succeeding in the use of explicit reason, impediments I faced myself and saw others stumble over or turn back from. This post is a lot less sanguine about the sanity waterline’s prospects.
The Path
I recently chatted with Spencer Greenberg about teaching rationality. Spencer regularly publishes articles like 7 questions for deciding whether to trust your gut or 3 types of binary thinking you fall for. Reading him, you’d think that the main obstacle to pure reason ruling the land is lack of intellectual listicles on ways to overcome bias.
But we’ve been developing written and in-person curricula for improving your ability to reason for more than a decade. Spencer’s work is contributing to those curricula, an important task. And yet, I don’t think that people’s main failure point is in procuring educational material.
I think that people don’t want to use explicit reason. And if they want to, they fail. And if they start succeeding, they’re punished. And if they push on, they get scared. And if they gather their courage, they hurt themselves. And if they make it to the other side, their lives enriched and empowered by reason, they will forget the hard path they walked and will wonder incredulously why everyone else doesn’t try using reason for themselves.
This post is about that hard path.
The map is not the territory
Alternatives to Reason
What do I mean by explicit reason? I don’t refer merely to “System 2”, the brain’s slow, sequential, analytical, fully conscious, and effortful mode of cognition. I refer to the informed application of this type of thinking. Gathering data with real effort to find out, crunching the numbers with a grasp of the math, modeling the world with testable predictions, reflection on your thinking with an awareness of biases. Reason requires good inputs and a lot of...
|
Dec 12, 2021 |
On Caring by So8res
15:13
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: On Caring, published by So8res on the LessWrong.
This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.
1
I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".
Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius, I would still just feel like that's a lot of Earths.
The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.
I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.
This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.
For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.
The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.
Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.
And this is a problem.
2
It's a common trope that courage isn't about being fearless, it's about being afraid but doing the right thing anyway. In the same sense, caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.
My internal care-o-meter was calibrated to deal with about a hundred and fifty people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high.
Humanity is playing for unimaginably high stakes. At the very least, there are billions of people suffering today. At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depends upon the present.
When you're faced with stakes like these, your internal caring heuristics — calibrated on numbers like "ten" or "twenty" — completely fail to gras...
|
Dec 12, 2021 |
Well-Kept Gardens Die By Pacifism by Eliezer Yudkowsky
07:19
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Well-Kept Gardens Die By Pacifism , published by Eliezer Yudkowsky on the LessWrong.
Previously in series: My Way
Followup to: The Sin of Underconfidence
Good online communities die primarily by refusing to defend themselves.
Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
I am old enough to remember the USENET that is forgotten, though I was very young. Unlike the first Internet that died so long ago in the Eternal September, in these days there is always some way to delete unwanted content. We can thank spam for that—so egregious that no one defends it, so prolific that no one can just ignore it, there must be a banhammer somewhere.
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.
It is easy to be naive about the evils of censorship when you already live in a carefully kept garden. Just like it is easy to be naive about the universal virtue of unconditional nonviolent pacifism, when your country already has armed soldiers on the borders, and your city already has police. It costs you nothing to be righteous, so long as the police stay on their jobs.
The thing about online communities, though, is that you can't rely on the police ignoring you and staying on the job; the community actually pays the price of its virtuousness.
In the beginning, while the community is still thriving, censorship seems like a terrible and unnecessary imposition. Things are still going fine. It's just one fool, and if we can't tolerate just one fool, well, we must not be very tolerant. Perhaps the fool will give up and go away, without any need of censorship. And if the whole community has become just that much less fun to be a part of... mere fun doesn't seem like a good justification for (gasp!) censorship, any more than disliking someone's looks seems like a good reason to punch them in the nose.
(But joining a community is a strictly voluntary process, and if prospective new members don't like your looks, they won't join in the first place.)
And after all—who will be the censor? Who can possibly be trusted with such power?
Quite a lot of people, probably, in any well-kept garden. But if the garden is even a little divided within itself —if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer
(for such internal politics often seem like a matter of far greater import than mere invading barbarians)
then trying to defend the community is typically depicted as a coup attempt. Who is th...
|
Dec 12, 2021 |
Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More by Ben Pace
26:12
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More , published by Ben Pace on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
An actual debate about instrumental convergence, in a public space! Major respect to all involved, especially Yoshua Bengio for great facilitation.
For posterity (i.e. having a good historical archive) and further discussion, I've reproduced the conversation here. I'm happy to make edits at the request of anyone in the discussion who is quoted below. I've improved formatting for clarity and fixed some typos. For people who are not researchers in this area who wish to comment, see the public version of this post here. For people who do work on the relevant areas, please sign up in the top right. It will take a day or so to confirm membership.
Original Post
Yann LeCun: "don't fear the Terminator", a short opinion piece by Tony Zador and me that was just published in Scientific American.
"We dramatically overestimate the threat of an accidental AI takeover, because we tend to conflate intelligence with the drive to achieve dominance. [...] But intelligence per se does not generate the drive for domination, any more than horns do."
Comment Thread #1
Elliot Olds: Yann, the smart people who are very worried about AI seeking power and ensuring its own survival believe it's a big risk because power and survival are instrumental goals for almost any ultimate goal.
If you give a generally intelligent AI the goal to make as much money in the stock market as possible, it will resist being shut down because that would interfere with its goal. It would try to become more powerful because then it could make money more effectively. This is the natural consequence of giving a smart agent a goal, unless we do something special to counteract this.
You've often written about how we shouldn't be so worried about AI, but I've never seen you address this point directly.
Stuart Russell: It is trivial to construct a toy MDP in which the agent's only reward comes from fetching the coffee. If, in that MDP, there is another "human" who has some probability, however small, of switching the agent off, and if the agent has available a button that switches off that human, the agent will necessarily press that button as part of the optimal solution for fetching the coffee. No hatred, no desire for power, no built-in emotions, no built-in survival instinct, nothing except the desire to fetch the coffee successfully. This point cannot be addressed because it's a simple mathematical observation.
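A minimal numerical sketch of that observation (the two policies and the shutdown probability below are illustrative assumptions, not Russell's exact construction):

```python
# Toy coffee-fetching MDP: the only reward is +1 for delivering coffee.
# "p_shutdown" is a made-up probability that the human switches the agent off
# before it finishes, if the human is still able to.

def expected_reward(policy, p_shutdown):
    if policy == "fetch":
        # Go straight for the coffee; with probability p_shutdown you are switched off first.
        return (1.0 - p_shutdown) * 1.0
    if policy == "disable_then_fetch":
        # Press the button that switches off the human, removing the only failure mode.
        return 1.0
    raise ValueError(policy)

for p in (0.01, 0.1, 0.5):
    print(p, expected_reward("fetch", p), expected_reward("disable_then_fetch", p))
# For any p_shutdown > 0 the disabling policy strictly dominates, so the optimal
# coffee-fetcher presses the button -- no hatred or survival instinct required.
```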
Comment Thread #2
Yoshua Bengio: Yann, I'd be curious about your response to Stuart Russell's point.
Yann LeCun: You mean, the so-called "instrumental convergence" argument by which "a robot can't fetch you coffee if it's dead. Hence it will develop self-preservation as an instrumental sub-goal."
It might even kill you if you get in the way.
1. Once the robot has brought you coffee, its self-preservation instinct disappears. You can turn it off.
2. One would have to be unbelievably stupid to build open-ended objectives in a super-intelligent (and super-powerful) machine without some safeguard terms in the objective.
3. One would have to be rather incompetent not to have a mechanism by which new terms in the objective could be added to prevent previously-unforeseen bad behavior. For humans, we have education and laws to shape our objective functions and complement the hardwired terms built into us by evolution.
4. The power of even the most super-intelligent machine is limited by physics, and its size and needs make it vulnerable to physical attacks. No need for much intelligence here. A virus is infinitely less intelligent than you, but it can still kill you.
5. A second machine, designed solely to neut...
|
Dec 12, 2021 |
Limits of Current US Prediction Markets (PredictIt Case Study) by aphyer
14:40
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Limits of Current US Prediction Markets (PredictIt Case Study), published by aphyer on the LessWrong.
(Disclaimers: I work in the financial industry, though not in a way related to prediction markets. Anything I write here is my opinion and not that of my employer.
This is a US-centric piece based on a case study of PredictIt: as some people have pointed out in the comments below, if you are outside the US you may have substantially better options.)
SECTION I: INTRODUCTION
So there's an argument that I've seen a lot over the past few years, particularly in LW-adjacent circles, that goes something like this:
You say you believe X is likely to happen. But prediction markets say X is likely not to happen. Since markets are efficient, you must be wrong. Or if you do know better than the market, why aren't you rich? Since you haven't bet on that market to make free money, you must be lying. Or stupid. Or both!
This post is dedicated to disagreeing with that argument, not from an anti-Efficient-Market Hypothesis position, but from a pro-Efficient-Market Hypothesis position. My position is:
The argument above is pretty much sound if we are discussing mainstream financial markets. If someone claims to have better information than a mainstream financial market on the value of Google stock, or of copper, they ought to either use this knowledge to make a huge amount of money or stop talking about it. However, it is not true if we are discussing prediction markets. Current prediction markets are so bad in so many different ways that it simply is not surprising for people to know better than them, and it often is not possible for people to make money from knowing better.
I've been meaning to write this for a while, but got tipped over the edge by the recent post here, which discusses how prediction markets are limited by the correlation between the events they predict and other assets, and their consequent value as hedging instruments. That is...well...it's not wrong exactly, but there are so many other problems that are so much bigger that I felt it was worth laying (some of) them out.
Math follows. I will be focusing on PredictIt for this analysis. Other prediction markets may work a bit differently, but similar analysis is applicable to any of them. If you think the math is wrong I am happy to discuss/make changes, but I very much doubt any changes will materially alter the final message.
As of this writing PredictIt has Donald Trump at 40% to win the election (or, to put it another way, you can pay 40 cents for a share that pays out $1 if Trump wins). Suppose you think he is more/less likely to win. How likely/unlikely does it need to be for Trump to win for you to make money (in expectation)? Or, to put it another way, what range of probabilities for Trump to win are consistent with the prediction market values?
SECTION II: REASONABLY SIMPLE PROBLEMS
1: Spread.
This is only a small problem, but it is non-zero. PredictIt will sell me 'Donald Trump wins' shares for 40 cents, but will sell me 'Donald Trump loses' shares for 61 cents (which, from a finance perspective, works out very similarly to letting me sell 'Donald Trump wins' shares for 39 cents). So if I think there is a 39.5% chance of Trump winning, there is no way for me to make money off of it: I can buy 'Trump wins' shares for 40 cents, or sell them for 39 cents, and if the true value is 39.5 cents both of these will lose me money.
The range of possible probabilities for which you cannot make money starts at 39-40%.
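A small sketch of that arithmetic, using the prices from the example above and ignoring everything else (fees and other frictions come later):

```python
def no_edge_range(buy_yes_cents, buy_no_cents):
    """Range of true win probabilities (in %) for which neither side of the
    market is profitable in expectation, ignoring fees and other frictions."""
    lower = 100 - buy_no_cents  # buying NO at r cents is +EV only if p < 100 - r
    upper = buy_yes_cents       # buying YES at q cents is +EV only if p > q
    return lower, upper

# Prices from the example above: YES at 40 cents, NO at 61 cents.
lo, hi = no_edge_range(buy_yes_cents=40, buy_no_cents=61)
print(f"No profitable trade if the true probability is between {lo}% and {hi}%")
# -> between 39% and 40%, matching the range in the text.
```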
2: Transaction Fees.
PredictIt charges a 10% fee on profits (see). As far as I can tell, it does not net profits against losses before calculating these fees. That is to say, if I make two $100 bets at even odds, win one, and lose the other, PredictIt will charge me a $10 fee on my winnings on the bet I ...
|
Dec 12, 2021 |
The LessWrong 2018 Book is Available for Pre-order by Ben Pace, jacobjacob
10:38
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The LessWrong 2018 Book is Available for Pre-order, published by Ben Pace, jacobjacob on the LessWrong.
For the first time, you can now buy the best new ideas on LessWrong in a physical book set, titled:
A Map that Reflects the Territory: Essays by the LessWrong Community
It is available for pre-order here.
The standard advice for creating things is "show, don't tell", so first some images of the books, followed by a short FAQ by me (Ben).
The full five-book set. Yes, that’s the iconic Mississippi river flowing across the spines.
Each book has a unique color.
The first book: Epistemology.
The second book: Agency.
The third book: Coordination.
The fourth book: Curiosity.
The fifth book: Alignment.
FAQ
What exactly is in the book set?
LessWrong has an annual Review process (the second of which is beginning today!) to determine the best content on the site. We reviewed all the posts on LessWrong from 2018, and users voted to rank the best of them, the outcome of which can be seen here.
Of the over 2000 LessWrong posts reviewed, this book contains 41 of the top voted essays, along with some comment sections, some reviews, a few extra essays to give context, and some preface/meta writing.
What are the books in the set?
The essays have been clustered around five topics relating to rationality: Epistemology, Agency, Coordination, Curiosity, and Alignment.
Are all the essays in this book from 2018?
Yes, all the essays in this book were originally published in 2018, and were reviewed and voted on during the 2018 LessWrong Review (which happened at the end of 2019).
How small are the books?
Each book is 4x6 inches, small enough to fit in your pocket. This was the book size that, empirically, most beta-testers found that they actually read.
Can I order a copy of the book?
Pre-order the book here for $29. We currently sell to North America, Europe, Australia, New Zealand, Israel. (If you bought it by end-of-day Wednesday December 9th and ordered within North America, you'll get it before Christmas.) You'll be able to buy the book on Amazon in a couple of weeks.
How much is shipping?
The price above includes shipping to any location that we accept shipping addresses for. We are still figuring out some details about shipping internationally, so if you are somewhere that is not North America, there is a small chance (~10%) that we will reach out to you to ask you for more shipping details, and an even smaller chance (~6%) that we offer you the option to either pay for some additional shipping fees or get a refund.
Can I order more than one copy at a time?
Yes. Just open the form multiple times. We will make sure to combine your shipments.
Does this book assume I have read other LessWrong content, like The Sequences?
No. It's largely standalone, and does not require reading other content on the site, although it will be enhanced by having engaged with those ideas.
Can I see an extract from the book?
Sure. Here is the preface and first chapter of Curiosity, specifically the essay Is Science Slowing Down? by Scott Alexander.
I'm new — what is this all about? What is 'rationality'?
A scientist is not simply someone who tries to understand how biological life works, or how chemicals combine, or how physical objects move, but is someone who uses the general scientific method in all areas, which allows them to empirically test their beliefs and discover what's true in general.
Similarly, a rationalist is not simply someone who tries to think clearly about their personal life, or who tries to understand how civilization works, or who tries to figure out what's true in a single domain like nutrition or machine learning; a rationalist is someone who is curious about the general thinking patterns that allow them to think clearly in all such areas, and understand the laws and tools that help th...
|
Dec 12, 2021 |
Scope Insensitivity by Eliezer Yudkowsky
05:12
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Scope Insensitivity, published by Eliezer Yudkowsky on the LessWrong.
Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.[1] This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.
Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario, or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.[2] People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”[3] This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”
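A toy illustration of that scope-to-payment relationship (the formula and parameters are made up to mimic the pattern, not fitted to the study):

```python
import math

def willingness_to_pay(n_birds, base=70.0, per_tenfold=5.0):
    # Hypothetical log model: each 10x increase in scope adds a constant increment.
    return base + per_tenfold * math.log10(n_birds)

for n in (2_000, 20_000, 200_000):
    print(f"{n:>7} birds -> ${willingness_to_pay(n):.2f}")
# Scope grows 100-fold across the three conditions, but stated willingness-to-pay
# barely moves -- the log term is added to, not multiplied with, the prototype affect.
```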
An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.
We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of 600—increased willingness-to-pay from $3.78 to $15.23.[4] Baron and Greene found no effect from varying lives saved by a factor of 10.[5]
A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.[6]
The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.
[1] William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010).
[2] Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235; Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215.
3...
|
Dec 12, 2021 |
Procedural Knowledge Gaps by Alicorn
02:09
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Procedural Knowledge Gaps, published by Alicorn on the LessWrong.
I am beginning to suspect that it is surprisingly common for intelligent, competent adults to somehow make it through the world for a few decades while missing some ordinary skill, like mailing a physical letter, folding a fitted sheet, depositing a check, or reading a bus schedule. Since these tasks are often presented atomically - or, worse, embedded implicitly into other instructions - and it is often possible to get around the need for them, this ignorance is not self-correcting. One can Google "how to deposit a check" and similar phrases, but the sorts of instructions that crop up are often misleading, rely on entangled and potentially similarly-deficient knowledge to be understandable, or are not so much instructions as they are tips and tricks and warnings for people who already know the basic procedure. Asking other people is more effective because they can respond to requests for clarification (and physically pointing at stuff is useful too), but embarrassing, since lacking these skills as an adult is stigmatized. (They are rarely even considered skills by people who have had them for a while.)
This seems like a bad situation. And - if I am correct and gaps like these are common - then it is something of a collective action problem to handle gap-filling without undue social drama. Supposedly, we're good at collective action problems, us rationalists, right? So I propose a thread for the purpose here, with the stipulation that all replies to gap announcements are to be constructive attempts at conveying the relevant procedural knowledge. No asking "how did you manage to be X years old without knowing that?" - if the gap-haver wishes to volunteer the information, that is fine, but asking is to be considered poor form.
(And yes, I have one. It's this: how in the world do people go about the supposedly atomic action of investing in the stock market? Here I am, sitting at my computer, and suppose I want a share of Apple - there isn't a button that says "Buy Our Stock" on their website. There goes my one idea. Where do I go and what do I do there?)
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Some AI research areas and their relevance to existential safety by Andrew_Critch
01:25:37
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Some AI research areas and their relevance to existential safety, published by Andrew_Critch on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
Followed by: What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs), which provides examples of multi-stakeholder/multi-agent interactions leading to extinction events.
Introduction
This post is an overview of a variety of AI research areas in terms of how much I think contributing to and/or learning from those areas might help reduce AI x-risk. By research areas I mean “AI research topics that already have groups of people working on them and writing up their results”, as opposed to research “directions” in which I’d like to see these areas “move”.
I formed these views mostly pursuant to writing AI Research Considerations for Human Existential Safety (ARCHES). My hope is that my assessments in this post can be helpful to students and established AI researchers who are thinking about shifting into new research areas specifically with the goal of contributing to existential safety somehow. In these assessments, I find it important to distinguish between the following types of value:
The helpfulness of the area to existential safety, which I think of as a function of what services are likely to be provided as a result of research contributions to the area, and whether those services will be helpful to existential safety, versus
The educational value of the area for thinking about existential safety, which I think of as a function of how much a researcher motivated by existential safety might become more effective through the process of familiarizing with or contributing to that area, usually by focusing on ways the area could be used in service of existential safety.
The neglect of the area at various times, which is a function of how much technical progress has been made in the area relative to how much I think is needed.
Importantly:
The helpfulness to existential safety scores do not assume that your contributions to this area would be used only for projects with existential safety as their mission. This can negatively impact the helpfulness of contributing to areas that are more likely to be used in ways that harm existential safety.
The educational value scores are not about the value of an existential-safety-motivated researcher teaching about the topic, but rather, learning about the topic.
The neglect scores are not measuring whether there is enough “buzz” around the topic, but rather, whether there has been adequate technical progress in it. Buzz can predict future technical progress, though, by causing people to work on it.
Below is a table of all the areas I considered for this post, along with their entirely subjective “scores” I’ve given them. The rest of this post can be viewed simply as an elaboration/explanation of this table:
Existing Research Area | Social Application | Helpfulness to Existential Safety | Educational Value | 2015 Neglect | 2020 Neglect | 2030 Neglect
---|---|---|---|---|---|---
Out of Distribution Robustness | Zero/Single | 1/10 | 4/10 | 5/10 | 3/10 | 1/10
Agent Foundations | Zero/Single | 3/10 | 8/10 | 9/10 | 8/10 | 7/10
Multi-agent RL | Zero/Multi | 2/10 | 6/10 | 5/10 | 4/10 | 0/10
Preference Learning | Single/Single | 1/10 | 4/10 | 5/10 | 1/10 | 0/10
Side-effect Minimization | Single/Single | 4/10 | 4/10 | 6/10 | 5/10 | 4/10
Human-Robot Interaction | Single/Single | 6/10 | 7/10 | 5/10 | 4/10 | 3/10
Interpretability in ML | Single/Single | 8/10 | 6/10 | 8/10 | 6/10 | 2/10
Fairness in ML | Multi/Single | 6/10 | 5/10 | 7/10 | 3/10 | 2/10
Computational Social Choice | Multi/Single | 7/10 | 7/10 | 7/10 | 5/10 | 4/10
Accountability in ML | Multi/Multi | 8/10 | 3/10 | 8/10 | 7/10 | 5/10
The research areas are ordered from least-socially-complex to most-socially-complex. This roughly (though imperfectly) correlates with addressing existential safety problems of increa...
|
Dec 12, 2021 |
Announcing the Alignment Research Center by paulfchristiano
01:25
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Announcing the Alignment Research Center, published by paulfchristiano on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
(Cross-post from ai-alignment.com)
I’m now working full-time on the Alignment Research Center (ARC), a new non-profit focused on intent alignment research.
I left OpenAI at the end of January and I’ve spent the last few months planning, doing some theoretical research, doing some logistical set-up, and taking time off.
For now it’s just me, focusing on theoretical research. I’m currently feeling pretty optimistic about this work: I think there’s a good chance that it will yield big alignment improvements within the next few years, and a good chance that those improvements will be integrated into practice at leading ML labs.
My current goal is to build a small team working productively on theory. I’m not yet sure how we’ll approach hiring, but if you’re potentially interested in joining you can fill out this tiny form to get notified when we’re ready.
Over the medium term (and maybe starting quite soon) I also expect to implement and study techniques that emerge from theoretical work, to help ML labs adopt alignment techniques, and to work on alignment forecasting and strategy.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Whole Brain Emulation: No Progress on C. elegans After 10 Years by niconiconi
09:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Whole Brain Emulation: No Progress on C. elegans After 10 Years, published by niconiconi on the LessWrong.
Since the early 21st century, some transhumanist proponents and futuristic researchers claim that Whole Brain Emulation (WBE) is not merely science fiction - although still hypothetical, it's said to be a potentially viable technology in the near future. Such beliefs attracted significant fanfare in tech communities such as LessWrong.
In 2011 at LessWrong, jefftk did a literature review on the emulation of a worm, C. elegans, as an indicator of WBE research progress.
Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it's difficult to evaluate progress. Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 cells in its entire nervous system. It is extremely well studied and well understood, having gone through heavy use as a research animal for decades. Since at least 1986 we've known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans. At 302 neurons, simulation has been within our computational capacity for at least that long. With 25 years to work on it, shouldn't we be able to 'upload' a nematode by now?
There were three research projects from the 1990s to the 2000s, but all were dead ends that fell short of their full research goals, giving a rather pessimistic vision of WBE. However, immediately after the initial publication of that post, LW readers Stephen Larson (slarson) and David Dalrymple (davidad) pointed out in the comments that they were working on the problem, and their two ongoing projects made the future look promising again.
The first was the OpenWorm project, coordinated by slarson. Its goal is to create a complete model and simulation of C. elegans, and to release all tools and data as free and open source software. Implementing a structural model of all 302 C. elegans neurons in the NeuroML description language was an early task completed by the project.
The next was another research effort at MIT by davidad. David explained that the OpenWorm project focused on anatomical data from dead worms, but very little data exists from living animals' cells. They can't tell scientists about the relative importance of connections between neurons within the worm's neural system, only that a connection exists.
The "connectome" of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you'd get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
What you actually need is to functionally characterize the system's dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
With optogenetic techniques, we are just at the point where it's not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
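A schematic toy version of the perturb-and-model loop davidad describes above (my gloss, not his actual system; the "worm" below is a random stub, and "least-sampled neuron" is a crude stand-in for the real criterion of "most informative perturbation"):

```python
import random

N_NEURONS = 302  # C. elegans has 302 neurons

def stimulate_and_record(neuron):
    """Stand-in for optogenetic write access plus whole-network readout."""
    return [random.gauss(0.0, 1.0) for _ in range(N_NEURONS)]

def characterize_dynamics(n_rounds=1000):
    observations = {i: [] for i in range(N_NEURONS)}  # responses seen per perturbed neuron
    for _ in range(n_rounds):
        target = min(observations, key=lambda i: len(observations[i]))  # choose next perturbation
        response = stimulate_and_record(target)  # write to one neuron, read the whole network
        observations[target].append(response)    # update the crude "dynamics model"
    return observations

model = characterize_dynamics()
```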
In a year or two, he believed, an automated device could be built to gather such dat...
|
Dec 12, 2021 |
Slack by Zvi
07:16
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Slack, published by Zvi on the LessWrong.
Epistemic Status: Reference post. Strong beliefs strongly held after much thought, but hard to explain well. Intentionally abstract.
Disambiguation: This does not refer to any physical good, app or piece of software.
Further Research (book, recommended but not at all required, take seriously but not literally): The Book of the Subgenius
Related (from sam[ ]zdat, recommended but not required, take seriously and also literally, entire very long series also recommended): The Uruk Machine
Further Reading (book): Scarcity: Why Having Too Little Means So Much
Previously here (not required): Play in Hard Mode, Play in Easy Mode, Out to Get You
Leads to (I’ve been scooped! Somewhat.): Sabbath Hard and Go Home
An illustrative little game: Carpe Diem: The Problem of Scarcity and Abundance
Slack is hard to precisely define, but I think this comes close:
Definition: Slack. The absence of binding constraints on behavior.
Poor is the person without Slack. Lack of Slack compounds and traps.
Slack means margin for error. You can relax.
Slack allows pursuing opportunities. You can explore. You can trade.
Slack prevents desperation. You can avoid bad trades and wait for better spots. You can be efficient.
Slack permits planning for the long term. You can invest.
Slack enables doing things for your own amusement. You can play games. You can have fun.
Slack enables doing the right thing. Stand by your friends. Reward the worthy. Punish the wicked. You can have a code.
Slack presents things as they are without concern for how things look or what others think. You can be honest.
You can do some of these things, and choose not to do others. Because you don’t have to.
Only with slack can one be a righteous dude.
Slack is life.
Related Slackness
Slack in project management is the time a task can be delayed without causing a delay to either subsequent tasks or project completion time. The amount of time before a constraint binds.
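A toy sketch of that project-management sense of slack (the task graph and durations are invented; tasks are listed in dependency order so one forward and one backward pass suffice):

```python
tasks = {              # task: (duration in days, prerequisites)
    "design": (3, []),
    "build":  (5, ["design"]),
    "docs":   (2, ["design"]),
    "ship":   (1, ["build", "docs"]),
}

def slack(tasks):
    es, ef = {}, {}                       # earliest start / finish (forward pass)
    for t, (d, pre) in tasks.items():
        es[t] = max((ef[p] for p in pre), default=0)
        ef[t] = es[t] + d
    project_end = max(ef.values())
    lf = {t: project_end for t in tasks}  # latest finish (backward pass)
    for t in reversed(list(tasks)):
        d, pre = tasks[t]
        for p in pre:
            lf[p] = min(lf[p], lf[t] - d)
    return {t: lf[t] - ef[t] for t in tasks}

print(slack(tasks))  # {'design': 0, 'build': 0, 'docs': 3, 'ship': 0}
# "docs" can slip by 3 days without delaying the project; everything else is a
# binding constraint -- zero slack.
```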
Slack the app was likely named in reference to a promise of Slack in the project sense.
Slacks as trousers are pants that are actual pants, but do not bind or constrain.
Slackness refers to vulgarity in West Indian culture, behavior and music. It also refers to a subgenre of dancehall music with straightforward sexual lyrics. Again, slackness refers to the absence of a binding constraint. In this case, common decency or politeness.
A slacker is one who has a lazy work ethic or otherwise does not exert maximum effort. They slack off. They refuse to be bound by what others view as hard constraints.
Out to Get You and the Attack on Slack
Many things in this world are Out to Get You. Often they are Out to Get You for a lot, usually but not always your time, attention and money.
If you Get Got for compact amounts too often, it will add up and the constraints will bind.
If you Get Got even once for a non-compact amount, the cost expands until you have no Slack left. The constraints bind you.
You might spend every spare minute and/or dollar on politics, advocacy or charity. You might think of every dollar as a fraction of a third-world life saved. Racing to find a cure for your daughter’s cancer, you already work around the clock. You could have an all-consuming job or be a soldier marching off to war. It could be a quest for revenge, for glory, for love. Or you might spend every spare minute mindlessly checking Facebook or obsessed with your fantasy football league.
You cannot relax. Your life is not your own.
It might even be the right choice! Especially for brief periods. When about to be run over by a truck or evicted from your house, Slack is a luxury you cannot afford. Extraordinary times call for extraordinary effort.
Most times are ordinary. Make an ordinary effort.
You Can Afford It
No, you can’t. This is the most famou...
|
Dec 12, 2021 |
The Rocket Alignment Problem by Eliezer Yudkowsky
23:40
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Rocket Alignment Problem, published by Eliezer Yudkowsky on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
The following is a fictional dialogue building off of AI Alignment: Why It’s Hard, and Where to Start.
(Somewhere in a not-very-near neighboring world, where science took a very different course.)
ALFONSO: Hello, Beth. I’ve noticed a lot of speculations lately about “spaceplanes” being used to attack cities, or possibly becoming infused with malevolent spirits that inhabit the celestial realms so that they turn on their own engineers.
I’m rather skeptical of these speculations. Indeed, I’m a bit skeptical that airplanes will be able to even rise as high as stratospheric weather balloons anytime in the next century. But I understand that your institute wants to address the potential problem of malevolent or dangerous spaceplanes, and that you think this is an important present-day cause.
BETH: That’s... really not how we at the Mathematics of Intentional Rocketry Institute would phrase things.
The problem of malevolent celestial spirits is what all the news articles are focusing on, but we think the real problem is something entirely different. We’re worried that there’s a difficult, theoretically challenging problem which modern-day rocket punditry is mostly overlooking. We’re worried that if you aim a rocket at where the Moon is in the sky, and press the launch button, the rocket may not actually end up at the Moon.
ALFONSO: I understand that it’s very important to design fins that can stabilize a spaceplane’s flight in heavy winds. That’s important spaceplane safety research and someone needs to do it.
But if you were working on that sort of safety research, I’d expect you to be collaborating tightly with modern airplane engineers to test out your fin designs, to demonstrate that they are actually useful.
BETH: Aerodynamic designs are important features of any safe rocket, and we’re quite glad that rocket scientists are working on these problems and taking safety seriously. That’s not the sort of problem that we at MIRI focus on, though.
ALFONSO: What’s the concern, then? Do you fear that spaceplanes may be developed by ill-intentioned people?
BETH: That’s not the failure mode we’re worried about right now. We’re more worried that right now, nobody can tell you how to point your rocket’s nose such that it goes to the moon, nor indeed any prespecified celestial destination. Whether Google or the US Government or North Korea is the one to launch the rocket won’t make a pragmatic difference to the probability of a successful Moon landing from our perspective, because right now nobody knows how to aim any kind of rocket anywhere.
ALFONSO: I’m not sure I understand.
BETH: We’re worried that even if you aim a rocket at the Moon, such that the nose of the rocket is clearly lined up with the Moon in the sky, the rocket won’t go to the Moon. We’re not sure what a realistic path from the Earth to the moon looks like, but we suspect it might not be a very straight path, and it may not involve pointing the nose of the rocket at the moon at all. We think the most important thing to do next is to advance our understanding of rocket trajectories until we have a better, deeper understanding of what we’ve started calling the “rocket alignment problem”. There are other safety problems, but this rocket alignment problem will probably take the most total time to work on, so it’s the most urgent.
ALFONSO: Hmm, that sounds like a bold claim to me. Do you have a reason to think that there are invisible barriers between here and the moon that the spaceplane might hit? Are you saying that it might get very very windy between here and the moon, more so than on Earth? Both eventualities could be worth preparing for, I suppose, but...
|
Dec 12, 2021 |
Have epistemic conditions always been this bad? by Wei_Dai
06:15
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Have epistemic conditions always been this bad?, published by Wei_Dai on the LessWrong.
In the last few months, I've gotten increasingly alarmed by leftist politics in the US, and the epistemic conditions that it operates under and is imposing wherever it gains power. (Quite possibly the conditions are just as dire on the right, but they are not as visible or salient to me, because most of the places I can easily see, either directly or through news stories, i.e., local politics in my area, academia, journalism, large corporations, seem to have been taken over by the left.)
I'm worried that my alarmism is itself based on confirmation bias, tribalism, catastrophizing, or any number of other biases. (It confuses me that I seem to be the first person to talk much about this on either LW or EA Forum, given that there must be people who have been exposed to the current political environment earlier or to a greater extent than me. On the other hand, all my posts/comments on the subject have generally been upvoted on both forums, and nobody has specifically said that I'm being too alarmist. One possible explanation for nobody else raising an alarm about this is that they're afraid of the current political climate and they're not as "cancel-proof" as I am, or don't feel that they have as much leeway to talk about politics-adjacent issues here as I do.)
So I want to ask, have things always been like this, or have they actually gotten significantly worse in recent (say the last 5 or 10) years? If they've always been like this, then perhaps there is less cause for alarm, because (1) if things have always been this bad, and we muddled through them in the past, we can probably continue to muddle through in the future (modulo new x-risks like AGI), and (2) if there is no recent trend towards worsening conditions then we don't need to worry so much about conditions getting worse in the near future. (Obviously if we go back far enough, say to the Middle Ages, then things were almost certainly as bad or worse, but I'm worried about more recent trends.)
If there are other reasons to not be very alarmed aside from the past being just as bad, please feel free to talk about those as well. But in case one of those reasons is "why be alarmed when there's little that can be done about it", my answer is that being alarmed motivates one to try to understand what is going on, which can help with (1) deciding personal behavior now in expectation of future changes (for example if there's going to be a literal Cultural Revolution in the future, then I need to be really really careful what I say today), (2) planning x-risk strategy, and (3) defending LW/EA from either outside attack or similar internal dynamics.
Here's some of what I've observed so far, which has led me to my current epistemic state:
In local politics, "asking for evidence of oppression is a form of oppression" or even more directly "questioning the experiences of a marginalized group that you don't belong to is not allowed and will result in a ban" has apparently been an implicit norm, and is being made increasingly explicit. (E.g., I saw a FB group explicitly codifying this in their rules.) As a result, anyone can say "Policy X or Program Y oppresses Group Z and must be changed" and nobody can argue against that, except by making the same kind of claim based on a different identity group, and then it comes down to which group is recognized as being more privileged or oppressed by the current orthodoxy. (If someone does belong to Group Z and wants to argue against the claim on that basis, they'll be dismissed based on "being tokenized" or "internalized oppression".)
In academia, even leftist professors are being silenced or kicked out on a regular basis for speaking out against an ever-shifting "party line". ("Party line" is in q...
|
Dec 12, 2021 |
The case for aligning narrowly superhuman models by Ajeya Cotra
53:27
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The case for aligning narrowly superhuman models, published by Ajeya Cotra on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
I wrote this post to get people’s takes on a type of work that seems exciting to me personally; I’m not speaking for Open Phil as a whole. Institutionally, we are very uncertain whether to prioritize this (and if we do where it should be housed and how our giving should be structured). We are not seeking grant applications on this topic right now.
Thanks to Daniel Dewey, Eliezer Yudkowsky, Evan Hubinger, Holden Karnofsky, Jared Kaplan, Mike Levine, Nick Beckstead, Owen Cotton-Barratt, Paul Christiano, Rob Bensinger, and Rohin Shah for comments on earlier drafts.
A genre of technical AI risk reduction work that seems exciting to me is trying to align existing models that already are, or have the potential to be, “superhuman”[1] at some particular task (which I’ll call narrowly superhuman models).[2] I don’t just mean “train these models to be more robust, reliable, interpretable, etc” (though that seems good too); I mean “figure out how to harness their full abilities so they can be as useful as possible to humans” (focusing on “fuzzy” domains where it’s intuitively non-obvious how to make that happen).
Here’s an example of what I’m thinking of: intuitively speaking, it feels like GPT-3 is “smart enough to” (say) give advice about what to do if I’m sick that’s better than advice I’d get from asking humans on Reddit or Facebook, because it’s digested a vast store of knowledge about illness symptoms and remedies. Moreover, certain ways of prompting it provide suggestive evidence that it could use this knowledge to give helpful advice. With respect to the Reddit or Facebook users I might otherwise ask, it seems like GPT-3 has the potential to be narrowly superhuman in the domain of health advice.
But GPT-3 doesn’t seem to “want” to give me the best possible health advice -- instead it “wants” to play a strange improv game riffing off the prompt I give it, pretending it’s a random internet user. So if I want to use GPT-3 to get advice about my health, there is a gap between what it’s capable of (which could even exceed humans) and what I can get it to actually provide me. I’m interested in the challenge of:
How can we get GPT-3 to give “the best health advice it can give” when humans[3] in some sense “understand less” about what to do when you’re sick than GPT-3 does? And in that regime, how can we even tell whether it’s actually “doing the best it can”?
I think there are other similar challenges we could define for existing models, especially large language models.
I’m excited about tackling this particular type of near-term challenge because it feels like a microcosm of the long-term AI alignment problem in a real, non-superficial sense. In the end, we probably want to find ways to meaningfully supervise (or justifiably trust) models that are more capable than ~all humans in ~all domains.[4] So it seems like a promising form of practice to figure out how to get particular humans to oversee models that are more capable than them in specific ways, if this is done with an eye to developing scalable and domain-general techniques.
I’ll call this type of project aligning narrowly superhuman models. In the rest of this post, I:
Give a more detailed description of what aligning narrowly superhuman models could look like, what does and doesn’t “count”, and what future projects I think could be done in this space (more).
Explain why I think aligning narrowly superhuman models could meaningfully reduce long-term existential risk from misaligned AI (more).
Lay out the potential advantages that I think this work has over other types of AI alignment research: (a) conceptual thinking, (b) demos in small-scal...
|
Dec 12, 2021 |
Privileging the Question by Qiaochu_Yuan
04:02
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Privileging the Question, published by Qiaochu_Yuan on the LessWrong.
Related to: Privileging the Hypothesis
Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all.
-- Paul Graham
There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about.
-- Doug Henwood
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
-- Eliezer Yudkowsky
Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?
These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?"
Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.
The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!
Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they'll most easily be able to publish papers on. Neither of these are necessarily well-correlated with the most important questions.
So far I've found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:
What do I plan on doing with an answer to this question?
With the worst privileged questions I frequently find that the answer is "nothing," som...
|
Dec 12, 2021 |
The 5-Second Level by Eliezer Yudkowsky
12:44
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The 5-Second Level, published by Eliezer Yudkowsky on the LessWrong.
To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less. Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.
As our first example, let's take the vital rationalist skill, "Be specific."
Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"
A couple of formative childhood readings that taught me to be specific:
"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
"Say, what are you trying to do, anyway?"
You have pushed him into the clouds. If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about. This habit displays itself in an answer such as this:
"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them. Also, you might go to the fire department and see how their trucks are painted."
-- S. I. Hayakawa, Language in Thought and Action
and:
"Beware, demon!" he intoned hollowly. "I am not without defenses."
"Oh yeah? Name three."
-- Robert Asprin, Another Fine Myth
And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?" Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."
But the real subject of today's lesson is how to see skills like this on the 5-second level. And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.
Over-abstraction happens because it's easy to be abstract. It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign. Abstraction is a path of least resistance, a form of mental laziness.
So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of highly abstract statements unaccompanied by concrete examples, accompanied by an automatic aversion, an ick reaction - this is the trigger which invokes the skill.
Then, you have actionable stored procedures that associate to the trigger. And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure, it doesn't transform the problem into a task. An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."
Or to be more specific on the last mental procedure: Why were you trying to describe redness to someone? Did they just run a red traffic light?
(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights fr...
|
Dec 12, 2021 |
The 3 Books Technique for Learning a New Skill by Matt Goldenberg
03:07
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The 3 Books Technique for Learning a New Skill, published by Matt Goldenberg on the LessWrong.
When I'm learning a new skill, there's a technique I often use to quickly gain the basics of the new skill without getting drowned in the plethora of resources that exist. I've found that just 3 resources that cover the skill from 3 separate viewpoints (along with either daily practice or a project) are enough to quickly get all the pieces I need to learn the new skill.
I'm partial to books, so I've called this The 3 Books Technique, but feel free to substitute books for courses, mentors, or videos as needed.
The "What" Book
The "What" book is used as reference material. It should be a thorough resource that gives you a broad overview of your skill. If you run into a novel situation, you should be able to go to this book and get the information you need. It covers the "surface" section of the learning model from nature pictured above.
Positive reviews of this book should contain phrases like "Thorough" and "Got me out of a pinch more than once." Negative reviews of this book should talk about "overwhelming" and "didn't know where to start."
The "How" Book
The "How" Book explains the step-by-step, nuts and bolts of how to put the skill into practice. It often contains processes, tools, and steps. It covers the "deep" part of the learning model covered above.
Positive reviews of this book should talk about "Well structured" and "Clearly thought out." Negative reviews should mention it being "too rote" or "not enough theory."
The "Why" Book
The "WHY" book explains the mindset and intuitions behind the skill. It tries to get into the authors head and lets you understand what to do in novel situations. It should cover the "transfer" part of the learning model above.
Positive reviews of this book should talk about "gaining intuitions" or "really understanding". Negative reviews should contain phrases like "not practical" or "still don't know what steps to take."
The Project or Practice
Once I have these 3 resources, I'll choose a single project or a daily practice that allows me to practice the skills from the "How" book and the mindsets from the "Why" book. If I get stuck, I'll use the "What" book to help me.
Examples
Overcoming Procrastination
"What" Book: The Procrastination Equation by Piers Steel
"How" Book: The Now Habit by Neil Fiore
"Why" Book: The Replacing Guilt blog sequence by Nate Soares
Project or Practice: Five pomodoros every day where I deliberately use the tools from The Now Habit and the mindsets from Replacing Guilt. If I find myself stuck, I'll choose from the plethora of techniques in The Procrastination Equation.
Learning Calculus
"What" Book: A First Course in Calculus by Serge Lange
"How" Book: The Khan Academy series on Calculus
"Why" Book: The Essence of Calculus Youtube series by 3blue1brown
Project or Practice: Daily practice of the Khan Academy calculus exercises.
Conclusion
This is a simple technique that I've found very helpful in systematizing my learning process. I would be particularly interested in other skills you've learned and the 3 books you would recommend for those skills.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
|
Dec 12, 2021 |
Realism about rationality by Richard_Ngo
07:32
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Realism about rationality, published by Richard_Ngo on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
This is a linkpost for http://thinkingcomplete.blogspot.com/2018/09/rational-and-real.html
Epistemic status: trying to vaguely gesture at vague intuitions. A similar idea was explored here under the heading "the intelligibility of intelligence", although I hadn't seen it before writing this post. As of 2020, I consider this follow-up comment to be a better summary of the thing I was trying to convey with this post than the post itself.
There’s a mindset which is common in the rationalist community, which I call “realism about rationality” (the name being intended as a parallel to moral realism). I feel like my skepticism about agent foundations research is closely tied to my skepticism about this mindset, and so in this essay I try to articulate what it is.
Humans ascribe properties to entities in the world in order to describe and predict them. Here are three such properties: "momentum", "evolutionary fitness", and "intelligence". These are all pretty useful properties for high-level reasoning in the fields of physics, biology and AI, respectively. There's a key difference between the first two, though. Momentum is very amenable to formalisation: we can describe it using precise equations, and even prove things about it. Evolutionary fitness is the opposite: although nothing in biology makes sense without it, no biologist can take an organism and write down a simple equation to define its fitness in terms of more basic traits. This isn't just because biologists haven't figured out that equation yet. Rather, we have excellent reasons to think that fitness is an incredibly complicated "function" which basically requires you to describe that organism's entire phenotype, genotype and environment.
In a nutshell, then, realism about rationality is a mindset in which reasoning and intelligence are more like momentum than like fitness. It's a mindset which makes the following ideas seem natural:
The idea that there is a simple yet powerful theoretical framework which describes human intelligence and/or intelligence in general. (I don't count brute force approaches like AIXI for the same reason I don't consider physics a simple yet powerful description of biology).
The idea that there is an “ideal” decision theory.
The idea that AGI will very likely be an “agent”.
The idea that Turing machines and Kolmogorov complexity are foundational for epistemology.
The idea that, given certain evidence for a proposition, there's an "objective" level of subjective credence which you should assign to it, even under computational constraints.
The idea that Aumann's agreement theorem is relevant to humans.
The idea that morality is quite like mathematics, in that there are certain types of moral reasoning that are just correct.
The idea that defining coherent extrapolated volition in terms of an idealised process of reflection roughly makes sense, and that it converges in a way which doesn’t depend very much on morally arbitrary factors.
The idea that having contradictory preferences or beliefs is really bad, even when there's no clear way that they'll lead to bad consequences (and you're very good at avoiding Dutch books and money pumps and so on).
To be clear, I am neither claiming that realism about rationality makes people dogmatic about such ideas, nor claiming that they're all false. In fact, from a historical point of view I’m quite optimistic about using maths to describe things in general. But starting from that historical baseline, I’m inclined to adjust downwards on questions related to formalising intelligent thought, whereas rationality realism would endorse adjusting upwards. This essay is primarily intended to explain...
|
Dec 12, 2021 |
The EMH Aten't Dead by Richard Meadows
30:45
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The EMH Aten't Dead, published by Richard Meadows on the LessWrong.
Cross-posting from my personal blog, but written primarily for Less Wrong after recent discussion here.
There are whispers that the Efficient-Market Hypothesis is dead. Eliezer's faith has been shaken. Scott says EMH may have been the real victim of the coronavirus.
The EMH states that “asset prices reflect all available information”. The direct implication is that if you don’t have any non-available information, you shouldn’t expect to be able to beat the market, except by chance.
But some people were able to preempt the corona crash, without any special knowledge! Jacob mentioned selling some of his stocks before the market reacted. Wei Dai bought out-of-the-money 'put' options, and took a very handsome profit. Others shorted the market.
These people were reading the same news and reports as everyone else. They profited on the basis of public information that should have been priced in.
And so, the EMH is dead, or dying, or at the very least, has a very nasty-sounding cough.
I think that rumours of the death of efficient markets have been greatly exaggerated. It seems to me the EMH is very much alive-and-kicking, and the recent discussion often involves common misunderstandings that it might be helpful to iron out.
This necessarily involves pissing on people's parade, which is not much fun. So it's important to say upfront that although I don't know Wei Dai, he is no doubt a brilliant guy, that Jacob is my favourite blogger in the diaspora, that I would give my left testicle to have Scott's writing talent and ridiculous work ethic, that Eliezer is a legend whose work I have personally benefited from greatly, etc.
But in the spirit of the whole rationality thing, I want to gently challenge what looks more like a case of 'back-slaps for the boys' than a death knell for efficient markets.
First: how the heck did the market get the coronavirus so wrong?
The Great Coronavirus Trade
Lots of people initially underreacted to COVID-19. We are only human. But the stock market is not only human—it's meant to be better than this.
Here’s Scott, in A Failure, But Not of Prediction:
The stock market is a giant coordinated attempt to predict the economy, and it reached an all-time high on February 12, suggesting that analysts expected the economy to do great over the following few months. On February 20th it fell in a way that suggested a mild inconvenience to the economy, but it didn’t really start plummeting until mid-March – the same time the media finally got a clue. These aren’t empty suits on cable TV with no skin in the game. These are the best predictive institutions we have, and they got it wrong.
But this isn't how it went down. As AllAmericanBreakfast and others pointed out in the comments, the market started reacting in the last week of February, with news headlines directly linking the decline to the 'coronavirus'. By the time we get to mid-March, we're not far off the bottom.
(You can confirm this for yourself in a few seconds by looking at a chart of the relevant time period.)
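(For anyone who wants to go a step beyond eyeballing a chart, here is a minimal sketch of my own, not from the post, that pulls the prices and plots them. It assumes the third-party yfinance and matplotlib packages are installed, and it uses "^GSPC", Yahoo Finance's ticker for the S&P 500 index.)

```python
# A sketch for checking the early-2020 timeline yourself; not from the original post.
import yfinance as yf
import matplotlib.pyplot as plt

spx = yf.download("^GSPC", start="2020-01-01", end="2020-05-01")
close = spx["Close"].squeeze()  # squeeze in case the column comes back as a one-column frame

print("Highest close:", close.idxmax().date())  # expect mid-to-late February 2020
print("Lowest close: ", close.idxmin().date())  # expect late March 2020

close.plot(title="S&P 500 closing prices, Jan-Apr 2020")
plt.show()
```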
EDIT: Scott has explained his rationale here. Although I still think his version of events is incorrect as phrased, I want to make it clear I am not accusing him of deliberately massaging the data or any other such shenanigans, and the next paragraph about revisionist history etc was only meant to be a general observation about how people responded. My apologies to Scott for the unclear wording, as well any perceived slight against his very good reputation.
For whatever reason, COVID-19 seems to be a magnet for revisionist history and/or wishful thinking. In other comments under the same post, the notion that people from our ‘tribe’ did especially well also comes under serious question—in fact, it looks like many of the names ...
|
Dec 12, 2021 |
DARPA Digital Tutor: Four Months to Total Technical Expertise? by JohnBuridan
12:02
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: DARPA Digital Tutor: Four Months to Total Technical Expertise?, published by JohnBuridan on the LessWrong.
DARPA spent a few million dollars around 2009 to create the world’s best digital tutoring system for IT workers in the Navy. I am going to explain their results, the system itself, possible limitations, and where to go from here.
It is a truth universally acknowledged that a single nerd having read Ender’s Game must be in want of the Fantasy Game. The great draw of the Fantasy Game is that the game changes with the player and reflects the needs of the learner growing dynamically with him/her. This dream of the student is best realized in the world of tutoring, which while not as fun, is known to be very, very effective. Individualized instruction can make students jump to the 98 percentile compared to non tutored students. DARPA poked at this idea with their Digital Tutor trying to answer this question: How close to the expertise and knowledge base of well-experienced IT experts can we get new recruits in 16 weeks using a digital tutoring system?
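(A quick aside on where a figure like "98th percentile" comes from: under a normal distribution, scoring roughly two standard deviations above the mean of the comparison group puts a student at about the 98th percentile—the classic "two sigma" tutoring effect. A minimal check, mine rather than the author's, assuming scipy is available:)

```python
from scipy.stats import norm

effect_size_sd = 2.0                   # a two-standard-deviation improvement over the untutored group
percentile = norm.cdf(effect_size_sd)  # fraction of the comparison group scoring below that level
print(f"{percentile:.1%}")             # ~97.7%, i.e. roughly the 98th percentile
```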
I will say the results upfront, but before I do, I want to do two things. First, pause to note the audacity of the project. Some project manager thought, "I bet we can design a system for training that is as good as 5 years of on-the-job experience." This is astoundingly ambitious. I love it! Second, a few caveats. Caveat 1) Don't be confused. Technical training is not the same as education. The goals in education are not merely to learn some technical skills like reading, writing, and arithmetic. Usefully measuring things like inculturation, citizenship, moral uprightness, and social mores is not yet something any system can do, let alone a digital one. Caveat 2) Online classes have notoriously high attrition rates, drop rates, and no-shows. Caveat 3) Going in, we should not expect the digital tutor to be as good as a human tutor. A human tutor can likely catch nuances that a digital tutor, no matter how good, cannot. Caveat 4) Language processing technology, chatbots, and AI systems are significantly better in 2020 than they were in 2009, so we should be forgiving if the DARPA IT program is not as good as it would be if the experiment were rerun today.
All these caveats, I think should give us a reason to adjust our mental score of the Digital Tutor a few clicks upward and give it some credit. However, this charitable read of the Digital Tutor that I started with when reading the paper turned out to be unnecessary. The Digital Tutor students outperformed traditionally taught students and field experts in solving IT problems on the final assessment. They did not merely meet the goal of being as good after 16 weeks as experts in the field, but they actually outperformed them. This is a ridiculously positive outcome, and we need to look closely to see what parts of this story are believable and make some conjectures for why this happened and some bets about whether it will replicate.
The Digital Tutor Experience
We will start with the Digital Tutor student experience. This will give us the context we need to understand the results.
Students (cadets?) were on the same campus and in classrooms with their computers, which ran the Digital Tutor program. A uniformed Naval officer proctored each day of their 16-week course. The last 'period' of the day was a study hall with occasional hands-on practice sessions led by the Naval officer. This set-up is important for a few reasons, in my opinion. There is a shared experience among the students of working on IT training, plus the added accountability of a proctor keeps everyone on task. This social aspect is very important and powerful compared to the dissipation experienced by the lone laborer at home on the computer. This social structure completely counteracts ca...
|
Dec 12, 2021 |
Notes on The Anthropology of Childhood by juliawise
01:21:31
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Notes on The Anthropology of Childhood, published by juliawise on the LessWrong.
Crossposted from The Whole Sky.
I read David Lancy’s “The Anthropology of Childhood: Cherubs, Chattel, and Changelings” and highlighted some passages. A lot of passages, it turns out.
[content note: discussion of abortion and infanticide, including infanticide of children with disabilities, in “Life and Death” section but not elsewhere]
I was a sociology major and understood anthropology to be basically “like sociology, but in Papua New Guinea.” This is the first cultural anthropology book I’ve read, and that was pretty much right. I found it very accessible as a first dive into anthropology. The first chapter summarizes all his points without the examples, so you could try that if you want to get the gist without reading the whole book.
I enjoyed it and would recommend it to people interested in this topic. A few things that shifted for me:
I feel less obliged to entertain my children and intervene in their conflicts. We don’t live with a tribe of extended family, but my two children play with each other all day, which is how most people throughout time have spent their childhoods. Lancy isn’t a child development expert, but I buy his argument that handling conflict (for example about the rules of a game) is a skill children need to learn, rather than having conflicts always mediated by adults.
Even though it doesn’t change anything concrete, I feel some relief that not having endless patience for toddlers seems to be normal. Except where families were very isolated, it’s not normal in traditional societies for one or two adults to watch their own children all day every day. And childcare has traditionally looked mostly like “being sure they don’t hurt themselves too badly.”
It surprised me that childcare by non-parents was so common. Some more modern views treat women’s childcare work as basically free, but traditional cultures have valued women’s labor enough that the society wants to free up their time from childcare. It was striking to me that the expectation that stay-at-home mothers will be responsible for all childcare was a relatively short historical blip. But of course, having childcare done by teenagers and grandmothers requires that those people’s time be available, which usually isn’t the reality we live in.
I was surprised at how apparently universal it is for fathers to be uninvolved. I expect they're typically involved in providing food and other material resources, but that wasn't emphasized in this book.
I’m a little unclear on how valid Lancy’s conclusions are or how much data they’re based on. It seems like an anthropologist could squint at a society and see all kinds of things that someone with a different ideology wouldn’t see.
Big caveat that what Lancy is describing is traditional, non-industrialized societies where children are expected to learn how to fit into the appropriate role in their village, not to develop as an individual or do anything different from what their parents and ancestors did. He stresses that traditional childrearing practices are very poor preparation for school. Given that I want my children to learn things I don’t know, to think analytically, etc, the way I approach learning is very different from how traditional societies approach it.
Lancy periodically complains about how much money Western families spend on fertility treatments, medical care for premature infants, etc. He argues that the same money could be used to provide adequate nutrition for many more children in the societies he’s studied. I’m sympathetic, but assuming that families would donate this money if they weren’t spending it to have a baby is not realistic. I see cutting luxury spending as a much more feasible way that people might do some redistribution.
And now, my no...
|
Dec 12, 2021 |
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms by Kaj_Sotala
26:24
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms, published by Kaj_Sotala on the LessWrong.
Epistemic status: pretty confident. Based on several years of meditation experience combined with various pieces of Buddhist theory as popularized in various sources, including but not limited to books like The Mind Illuminated, Mastering the Core Teachings of the Buddha, and The Seeing That Frees; also discussions with other people who have practiced meditation, and scatterings of cognitive psychology papers that relate to the topic. The part that I’m the least confident of is the long-term nature of enlightenment; I’m speculating on what comes next based on what I’ve experienced, but have not actually had a full enlightenment. I also suspect that different kinds of traditions and practices may produce different kinds of enlightenment states.
While I liked Valentine's recent post on kensho and its follow-ups a lot, one thing that I was annoyed by was the comments claiming that the whole thing can't be explained from a reductionist, third-person perspective. I agree that such an explanation can't produce the necessary mental changes that the explanation is talking about. But it seemed wrong to me to claim that all of this would be somehow intrinsically mysterious and impossible to explain at a level that would give people at least an intellectual understanding of what Looking and enlightenment and all that are. Especially not after I spoke to Val and realized that hey, I actually do know how to Look, and that thing he's calling kensho, that's happened to me too.
(Note however that kensho is a Zen term and I'm unfamiliar with Zen; I don't want to use a term which might imply that I was going with whatever theoretical assumptions Zen might have, so I will just talk about “my experience” when it comes up.)
So here is my attempt to give an explanation. I don’t know if I’ve succeeded, but here goes anyway.
One of my key concepts is going to be cognitive fusion.
Cognitive fusion is a term from Acceptance and Commitment Therapy (ACT), which refers to a person “fusing together” with the content of a thought or emotion, so that the content is experienced as an objective fact about the world rather than as a mental construct. The most obvious example of this might be if you get really upset with someone else and become convinced that something was all their fault (even if you had actually done something blameworthy too).
In this example, your anger isn’t letting you see clearly, and you can’t step back from your anger to question it, because you have become “fused together” with it and experience everything in terms of the anger’s internal logic.
Another emotional example might be feelings of shame, where it’s easy to experience yourself as a horrible person and feel that this is the literal truth, rather than being just an emotional interpretation.
Cognitive fusion isn’t necessarily a bad thing. If you suddenly notice a car driving towards you at a high speed, you don’t want to get stuck pondering about how the feeling of danger is actually a mental construct produced by your brain. You want to get out of the way as fast as possible, with minimal mental clutter interfering with your actions. Likewise, if you are doing programming or math, you want to become at least partially fused together with your understanding of the domain, taking its axioms as objective facts so that you can focus on figuring out how to work with those axioms and get your desired results.
On the other hand, even when doing math, it can sometimes be useful to question the axioms you’re using. In programming, taking the guarantees of your abstractions as literal axioms can also lead to trouble. And while it is useful to perceive something as objectively life-threatening and ...
|
Dec 12, 2021 |
Act of Charity by jessicata
13:07
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Act of Charity , published by jessicata on the LessWrong.
(Cross-posted from my blog)
The stories and information posted here are artistic works of fiction and falsehood. Only a fool would take anything posted here as fact.
Anonymous
Act I.
Carl walked through the downtown. He came across a charity stall. The charity worker at the stall called out, "Food for the Africans. Helps with local autonomy and environmental sustainability. Have a heart and help them out." Carl glanced at the stall's poster. Along with pictures of emaciated children, it displayed infographics about how global warming would cause problems for African communities' food production, and numbers about how easy it is to help out with money. But something caught Carl's eye. In the top left, in bold font, the poster read, "IT IS ALL AN ACT. ASK FOR DETAILS."
Carl: "It's all an act, huh? What do you mean?"
Worker: "All of it. This charity stall. The information on the poster. The charity itself. All the other charities like us. The whole Western idea of charity, really."
Carl: "Care to clarify?"
Worker: "Sure. This poster contains some correct information. But a lot of it is presented in a misleading fashion, and a lot of it is just lies. We designed the poster this way because it fits with people's idea is of a good charity they should give money to. It's a prop in the act."
Carl: "Wait, the stuff about global warming and food production is a lie?"
Worker: "No, that part is actually true. But in context we're presenting it as some kind of imminent crisis that requires an immediate infusion of resources, when really it's a very long-term problem that will require gradual adjustment of agricultural techniques, locations, and policies."
Carl: "Okay, that doesn't actually sound like more of a lie than most charities tell."
Worker: "Exactly! It's all an act."
Carl: "So why don't you tell the truth anyway?"
Worker: "Like I said before, we're trying to fit with people's idea of what a charity they should give money to looks like. More to the point, we want them to feel compelled to give us money. And they are compelled by some acts, but not by others. The idea of an immediate food crisis creates more moral and social pressure towards immediate action, than the idea that there will be long-term agricultural problems that require adjustments.
Carl: "That sounds...kind of scammy?"
Worker: "Yes, you're starting to get it! The act is about violence! It's all violence!"
Carl: "Now hold on, that seems like a false equivalence. Even if they were scammed by you, they still gave you money of their own free will."
Worker: "Most people, at some level, know we're lying to them. Their eyes glaze over 'IT IS ALL AN ACT' as if it were just a regulatory requirement to put this on charity posters. So why would they give money to a charity that lies to them? Why do you think?"
Carl: "I'm not nearly as sure as you that they know this! Anyway, even if they know at some level it's a lie, that doesn't mean they consciously know, so to their conscious mind it seems like being completely heartless."
Worker: "Exactly, it's emotional blackmail. I even say 'Have a heart and help them out'. So if they don't give us money, there's a really convenient story that says they're heartless, and a lot of them will even start thinking about themselves that way. Having that story told about them opens them up to violence."
Carl: "How?"
Worker: "Remember Martin Shkreli?"
Carl: "Yeah, that asshole who jacked up the Daraprim prices."
Worker: "Right. He ended up going to prison. Nominally, it was for securities fraud. But it's not actually clear that whatever security fraud he did was worse than what others in his industry were doing. Rather, it seems likely that he was especially targeted because he was a heartless asshole."
Carl: "But he still brok...
|
Dec 12, 2021 |
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain by Daniel Kokotajlo
23:09
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain, published by Daniel Kokotajlo on the LessWrong.
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
[Epistemic status: Strong opinions lightly held, this time with a cool graph.]
I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable.
In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.
In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the way it does.
The case of birds & planes illustrates this point nicely. Moreover, it is also a precedent for several other short-timelines talking points, such as the human-brain-human-lifetime (HBHL) anchor.
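(For readers who haven't seen the HBHL anchor spelled out: it is the total computation a human brain performs over the years it takes to reach adult competence, used as a rough target for how much training compute might suffice. Here is a back-of-the-envelope sketch with illustrative figures of my own choosing, not numbers from the post; brain-compute estimates in the literature span several orders of magnitude.)

```python
# Illustrative assumptions, not figures from the post.
brain_flop_per_second = 1e15   # a commonly cited rough estimate of brain compute
seconds_of_learning = 1e9      # roughly 30 years of experience, in seconds

hbhl_flop = brain_flop_per_second * seconds_of_learning
print(f"HBHL anchor: ~{hbhl_flop:.0e} FLOP")  # ~1e+24 FLOP of total training compute
```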
Plan:
Illustrative Analogy
Exciting Graph
Analysis
Extra brute force can make the problem a lot easier
Evolution produces complex mysterious efficient designs by default, even when simple inefficient designs work just fine for human purposes.
What’s bogus and what’s not
Example: Data-efficiency
Conclusion
Appendix
1909 French military plane, the Antoinette VII.
By Deep silence (Mikaël Restoux) - Own work (Bourget museum, in France), CC BY 2.5.
Illustrative Analogy
AI timelines, from our current perspective (in parallel with flying machine timelines, from the perspective of the late 1800's):
Shorty: Human brains are giant neural nets. This is reason to think we can make human-level AGI (or at least AI with strategically relevant skills, like politics and science) by making giant neural nets.
Shorty: Birds are winged creatures that paddle through the air. This is reason to think we can make winged machines that paddle through the air.
Longs: Whoa whoa, there are loads of important differences between brains and artificial neural nets: [what follows is a direct quote from the objection a friend raised when reading an early draft of this post!]
- During training, deep neural nets use some variant of backpropagation. My understanding is that the brain does something else, closer to Hebbian learning. (Though I vaguely remember at least one paper claiming that maybe the brain does something that's similar to backprop after all.)
- It's at least possible that the wiring diagram of neurons plus weights is too coarse-grained to accurately model the brain's computation, but it's all there is in deep neural nets. If we need to pay attention to glial cells, intracellular processes, different neurotransmitters etc., it's not clear how to integrate this into the deep learning paradigm.
- My impression is that several biological observations on the brain don't have a plausible analog in deep neural nets: growing new neurons (though unclear how important it is for an adult brain), "repurposing" in response to brain damage, ...
Longs: Whoa whoa, there are loads of important differences between birds and flying machines:
- Birds paddle the air by flapping, whereas current machine designs use propellers and fixed wings.
- It’s at least possible that the anatomical diagram of bones, muscles, and wing surfaces is too coarse-grained to accurately model how a bird flies, but that’s all there is to current machine designs (replacing bones with struts and muscles with motors, that is). If we need to pay attention to the percolation of air through and between feathers, micro-eddies in t...
|