09/12/2016

notable wordwordword


  • dragon-king (n.): An extreme event among extreme events: an outlier even relative to a Pareto distribution. An elaboration on Taleb's black swan metaphor for unforeseeable extreme events. Not sure if it adds much, since the black swan is distribution-independent and Taleb doesn't fixate on power laws iirc.

  • chef's arse (n.): Painful chafing of the buttocks against each other; attends exercise in hot environments.

  • groufie (n.): group selfie, obvs. No less contemptible for the awkward swerve around "groupie".

  • detaliate (mangled v.): To explain. Seen in this Quora answer by a non-native English speaker (possibly Romanian). I want to appropriate it: to detaliate is to respond to casual comments with a fisking.

  • consing (v.): To save on memory allocation by checking new values against existing allocations and reusing a pointer to an existing one on a hit (strictly, hash consing). From Lisp's cons cells, the language's basic pair structure.

  • sadcore (n.): Slow indie. Journo term: avoided by anyone musical for obvious reasons.

  • key-signing party (n.): A meetup in person for secure exchange of cryptographic keys. Not to be confused with the other key party, but I hope someone has.

  • Wadsworth constant (n.): The first 30% of a Youtube video. The part of a video that can be skipped because it will contain "no worthwhile or interesting information".

  • Kegan levels (n.): the ranked stages of a particular theory of adult development. I usually disdain these things, since the data has been so poor and the theorising simultaneously fragile and unfalsifiable. But I find this one really interesting, not least for its falsifiability - e.g. "logical reasoning is a necessary condition of having stable feelings", "most people never reach this stage, and never before age 40".

  • shirt-tail relatives (n.): Can't beat this Quora answer: "people related to me enough that someone in my family circle cares deeply about their lives, but not closely enough related to me that I have any idea what the relationship is, in a biological or legal sense."

  • annalist (pej. n.): An insult between historians: someone who merely records events uncritically. Comes from the official scribes of ancient Rome, who were indeed uncritical to the point of deceit.
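The "consing" entry above is easy to make concrete: keep a table of already-built pairs, and when asked for a pair you have seen before, hand back the existing object instead of allocating a new one. A minimal sketch (the `Cons` and `hash_cons` names are mine, not standard library):

```python
class Cons:
    """A Lisp-style pair (car, cdr)."""
    __slots__ = ("car", "cdr")
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

_table = {}  # (car, cdr) -> the one shared Cons for that pair

def hash_cons(car, cdr):
    """Return a shared Cons, reusing an existing allocation on a hit."""
    key = (car, cdr)
    hit = _table.get(key)
    if hit is None:
        hit = _table[key] = Cons(car, cdr)
    return hit

# Two structurally equal lists end up as one allocation:
a = hash_cons(1, hash_cons(2, None))
b = hash_cons(1, hash_cons(2, None))
assert a is b
```

Because the inner conses are themselves interned, the identity-based hashing of the tuple keys is enough to find the hit.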



21/11/2016

Notes from Effective Altruism "Global" "x" in Oxford, in 2016


(This is about this thing. The following would work better as a bunch of tweets, but seriously screw that.)


###########################################################################################################


Single lines which do much of the work of a whole talk:

"Effective altruism is to the pursuit of the good as science is to the pursuit of the truth." (Toby Ord)

"If the richest gave just the interest on their wealth for a year they could double the income of the poorest billion." (Will MacAskill)

"If you use a computer the size of the sun to beat a human at chess, either you are confused about programming or chess." (Nate Soares)

"Evolution optimised very, very hard for one goal - genetic fitness - and produced an AGI with a very different goal: roughly, fun." (Nate Soares)

"The goodness of outcomes cannot depend on other possible outcomes. You're thinking of optimality." (Derek Parfit)



###########################################################################################################


Owen Cotton-Barratt formally restated the key EA idea: that importance has a highly heavy-tailed distribution. This is a generalisation from the GiveWell/OpenPhil research programme, which dismisses (ahem, "fails to recommend") almost everyone because a handful of organisations are thousands of times more efficient at harvesting importance (in the form of unmalarial children or untortured pigs or an unended world).

Then Sandberg's big talk on power laws built on Cotton-Barratt's, claiming to find the mechanism which generates that importance distribution (roughly: "many morally important things in the world, from disease to natural disasters to info breaches to democides, all fall under a single power-law-outputting process").
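One standard toy mechanism of that kind: repeated proportional (multiplicative) shocks produce a heavy-tailed outcome distribution, in which a handful of draws carry most of the total. A purely illustrative simulation, with made-up parameters:

```python
import random

random.seed(0)

def multiplicative_draw(steps=50):
    """Grow a quantity by a sequence of independent proportional shocks."""
    x = 1.0
    for _ in range(steps):
        x *= random.lognormvariate(0.0, 0.5)  # each step scales x up or down
    return x

draws = sorted((multiplicative_draw() for _ in range(10_000)), reverse=True)
share = sum(draws[:100]) / sum(draws)  # fraction held by the top 1% of draws
print(f"top 1% of draws hold {share:.0%} of the total")
```

The point is only the shape: with additive shocks the top 1% would hold about 1%; with multiplicative shocks it holds the overwhelming majority, which is the distribution the cause-prioritisation argument leans on.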

Cotton-Barratt then formalised the "Impact-Tractability-Neglectedness" model, as a precursor to a full quantitative model of cause prioritisation.
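The ITN model reduces to a product of three factors: the marginal value of extra resources is roughly importance × tractability × neglectedness, with neglectedness usually taken as the reciprocal of resources already committed. A hedged sketch with invented numbers, just to show the shape of the calculation:

```python
# Illustrative ITN scoring; every number here is made up for the example.
causes = {
    # cause: (importance, tractability, current_resources)
    "cause_a": (1000.0, 0.10, 500.0),  # huge, hard, crowded
    "cause_b": (100.0, 0.50, 10.0),    # medium on every axis
    "cause_c": (10.0, 0.90, 1.0),      # small, easy, utterly neglected
}

def itn_score(importance, tractability, resources):
    """Marginal value of one extra unit of resources, ITN-style."""
    neglectedness = 1.0 / resources
    return importance * tractability * neglectedness

ranked = sorted(causes, key=lambda c: itn_score(*causes[c]), reverse=True)
print(ranked)  # the neglected cause wins despite its small importance
```

This also shows why the model eventually needs the dynamics discussed below: the score treats resources as static, when in fact each marginal donor changes the neglectedness term for the next one.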



Then, Stefan Schubert's talk on the younger-sibling fallacy attempted to extend said ITN model with a fourth key factor: awareness of likely herding behaviour and market distortions (or "diachronic reflexivity").

There will come a time - probably now tbf - when the ITN model will have to split in two: into one rigorous model with nonlinearities and market dynamism, and a heuristic version. (The latter won't need to foreground dynamical concerns unless you are 1) incredibly influential or 2) incredibly influenceable in the same direction as everyone else. Contrarianism ftw.)


###########################################################################################################


Catherine Rhodes' biorisk talk made me update in the worst direction: I came away convinced that biorisk is both extremely neglected and extremely intractable to anyone outside the international bureaucracy / national security / life sciences clique. Also that "we have no surge capacity in healthcare. The NHS runs at '98%' of max on an ordinary day."

(This harsh blow was softened a bit by news of Microsoft's mosquito-hunting drones - used for cheap and large-sample disease monitoring, that is, not personalised justice.)


###########################################################################################################




Anders Sandberg contributed to six events, sprinkling the whole thing with his hyper-literate, unclichéd themes. People persisted in asking him things on the order of "whether GTA characters are morally relevant yet". But even these he handled with his rigorous levity.

My favourite was his take on the possible expanded value space of later humans: "chimps like bananas and sex. Humans like bananas, and sex, and philosophy and competitive sport. There is a part of value space completely invisible to the chimp. So it is likely that there is this other thing, which is like whoooaa to the posthuman, but which we do not see the value in."


###########################################################################################################


Books usually say that "modern aid" started in '49, when Truman announced a secular international development programme. Really liked Alena Stern's rebuke to this, pointing out that the field didn't even try to be scientific until the mid-90s, and did a correspondingly low amount of good, health aside. It didn't deserve the word, and mostly still doesn't.


###########################################################################################################


Nate Soares is an excellent public communicator: he broadcasts seriousness without pretension, strong weird claims without arrogance. A catch.


###########################################################################################################


What is the comparative advantage of us 2016 people, relative to future do-gooders?

  • Anything happening soon. (AI risk)
  • Anything with a positive multiplier. (schistosomiasis, malaria, cause-building)
  • Anything that is hurting now. (meat industry)


###########################################################################################################


Dinner with Wiblin. My partner noted that I looked a bit flushed. I mean, I was eating jalfrezi.


###########################################################################################################


Most every session I attended had the same desultory question asked: "how might this affect inequality?" (AI, human augmentation, ...) The answer's always the same: if it can be automated and mass-produced at the usual industrial speed, it won't. If it can't, it will.

It was good to ask (and ask, and ask) this for an ulterior reason though, see the following:


###########################################################################################################


Molly Crockett's research - how a majority of people* might relatively dislike utilitarians - was great and sad. Concrete proposals though: people distrust people who don't appear morally conflicted, who use physical harm for greater good, or more generally who use people as a means. So express confusion and regret, support autonomy whenever the harms aren't too massive to ignore, and put extra effort into maintaining relationships.

These are pretty superficial. Which is good news: we can still do the right thing (and profess the right thing), we just have to present it better.

(That said, the observed effects on trust weren't that large: about 20%, stable across various measures of trust.)



* She calls them deontologists, but that's a slander on Kantians: really, most people are just sentimentalists, in the popular and the technical sense.



###########################################################################################################


Not sure I've ever experienced this high a level of background understanding in a large group. Deep context - years of realisations - mutually taken for granted; and so many shortcuts and quicksteps to the frontier of common knowledge. In none of these rooms was I remotely the smartest person. An incredible feeling: you want to start lifting much heavier things as soon as possible.


###########################################################################################################



Very big difference between Parfit's talk and basically all the others. This led to a sadly fruitless Q&A, people talking past each other by bad choice of examples. Still riveting: emphatic and authoritative though hunched over with age. A wonderful performance with an air of the Last of His Kind.

Parfit handled 'the nonidentity problem' (how can we explain the wrongness of situations involving merely potential people? Why is it bad for a species to cease procreating?) and 'the triviality problem' (how exactly do tiny harms committed by a huge aggregate of people combine to form wrongness? Why is it wrong to discount one's own carbon emissions when considering the misery of future lives?).

He proceeded in the classic (late-C20th) mode: state clean principles that summarise an opposing view, then find devastating counterexamples to them. All well and good as far as it goes. But the new principles he sets upon the rubble - unpublished so far - are sure to have their own counterexamples in production by the grad mill.


The audience struggled through the fairly short deductive chains, possibly just out of unfamiliarity with philosophy's unlikely apodicticity. They couldn't parse it fast enough to answer a yes/no poll at the end. ("Are you convinced of the non-difference view?")

The Q&A questions all had a good core, but none hit home for various reasons:

  • "Does your theory imply that it is acceptable to torture one person to prevent a billion people getting a speck in their eye?" Parfit didn't bite, simply noting, correctly, that 1) Dostoevsky said this in a more manipulative way, and 2) it is irrelevant to the Triviality Problem as he stated it. (This rebuffing did not appear to be a clever PR decision, though it was, since he is indeed a totalist.)

  • "What implications does this have for software design?" Initial response was just a frowning stare. (Sandberg meant: lost time is clearly a harm; thus the designers of mass-market products are responsible for thousands of years of life when they fail to optimise away even 1 second delays.)

  • "I'd rather give one person a year of life than a million people one second. Isn't continuity important in experiencing value?" This person's point was that Parfit was assuming the linearity of marginal life without justification, but this good point got lost in the forum. Parfit replied simply - as if the questioner were making a simple mistake: "These things add up". I disagree with the questioner about any such extreme nonlinearity - they may be allowing the narrative salience of a single life to distract them from the sheer scale of the number of recipients in the other case - but it's certainly worth asking.
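Sandberg's software-design point, and Parfit's "these things add up", are just aggregation arithmetic. With round numbers (the user count and lifespan figures below are my assumptions, not anyone's data):

```python
# How much waking life does a one-second delay cost, aggregated?
delay_seconds = 1
daily_uses = 1_000_000_000           # assumed: a billion uses per day
days_per_year = 365

seconds_lost_per_year = delay_seconds * daily_uses * days_per_year

# Assumed lifespan: ~70 years at 16 waking hours a day.
waking_seconds_per_life = 70 * 365 * 16 * 3600

lives_per_year = seconds_lost_per_year / waking_seconds_per_life
print(f"{lives_per_year:.0f} waking lifetimes lost per year")
```

On these assumptions a single second of needless delay in a mass-market product burns a few hundred waking lifetimes a year, which is why the frowning stare was undeserved.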


We owe Parfit a lot. His emphasis on total impartiality, the counterintuitive additivity of the good, and most of all his attempted cleaving of old, fossilised disagreements to get to the co-operative core of diverse viewpoints: all of these shine throughout EA. I don't know if that's coincidental rather than formative debt.

(Other bits are not core to EA but are still indispensable for anyone trying to be a consistent, non-repugnant consequentialist: e.g. thinking in terms of degrees of personhood, and what he calls "lexical superiority" for some reason (it is two-level consequentialism).)

The discourse has definitely diverged from non-probabilistic apriorism, also known as the Great Conversation. Sandberg is of the new kind of philosopher: a scientific mind, procuring probabilities, but also unable to restrain creativity/speculation because of the heavy, heavy tails here and just around the corner.