
  • Positly covid survey: long covid

    Here are some more careful results from a survey I ran the other day on Positly, to test whether it’s trivial to find people who have had their lives seriously impacted by long covid, and to get a better sense of what people mean by terms like ‘brain fog’ when they report them in bigger, vaguer research efforts.

  • Long covid: probably worth avoiding—some considerations

    I hear friends reasoning, “I’ll get covid eventually and long covid probably isn’t that bad; therefore it’s not worth much to avoid it now”. Here are some things informing my sense that that’s an error:

  • Survey supports ‘long covid is bad’ hypothesis (very tentative)

    I wanted more clues about whether really bad long covid outcomes are vanishingly rare (but heavily concentrated in my Twitter feed), or whether, for instance, a large fraction of the ‘brain fogs’ reported in datasets are anything like the horrors sometimes described. So I took my questions to Positly, hoping that the set of people who would answer questions for money there was fairly random relative to covid outcomes.

    I hope to write something more careful about this survey soon, especially if it is of interest, but figure the basic data is better shared sooner. This summary is not very careful, and may e.g. conflate slightly differently worded questions, fail to exclude obviously confused answers, or slightly miscount.

    This is a survey of ~230 Positly survey takers in the US, all between 20 and 40 years old. Very few of the responses I’ve looked at seem incoherent or botlike, unlike those in the survey I did around the time of the election.

  • Beyond fire alarms: freeing the groupstruck

    Crossposted from AI Impacts

    [Content warning: death in fires, death in machine apocalypse]

    ‘No fire alarms for AGI’

    Eliezer Yudkowsky wrote that ‘there’s no fire alarm for Artificial General Intelligence’, by which I think he meant: ‘there will be no future AI development that proves that artificial general intelligence (AGI) is a problem clearly enough that the world gets common knowledge (i.e. everyone knows that everyone knows, etc) that freaking out about AGI is socially acceptable instead of embarrassing.’

    He calls this kind of event a ‘fire alarm’ because he posits that this is how fire alarms work: rather than alerting you to a fire, they primarily help by making it common knowledge that it has become socially acceptable to act on the potential fire.

    He supports this view with a great 1968 study by Latané and Darley, in which they found that if you pipe a white plume of ‘smoke’ through a vent into a room where participants fill out surveys, a lone participant will quickly leave to report it, whereas a group of three (innocent) participants will tend to sit by in the haze for much longer[^1].

    Here’s a video of a rerun[^2] of part of this experiment, if you want to see what people look like while they try to negotiate the dual dangers of fire and social awkwardness.


  • Punishing the good

    Should you punish people for wronging others, or for making the wrong call about wronging others?

  • Lafayette: empty traffic signals

    Seeking to cross a road on the walk into downtown Lafayette, we met, instead of the normal pedestrian crossing situation, a button with a sign: ‘Push button to turn on warning lights’. I wondered: if I pressed it, would it then be my turn to cross? Or would there just be some warning lights? What was the difference? Do traffic buttons normally do something other than change the lights?

  • Lafayette: traffic vessels

    This week I’m in Lafayette, a town merely twenty-three minutes further from my San Franciscan office than my usual San Franciscan home, thanks to light rail. There are deer in the street and woods on the walk from the train to town.

  • Typology of blog posts that don't always add anything clear and insightful

    I used to think a good blog post should basically be a description of a novel insight.

  • Do incoherent entities have stronger reason to become more coherent than less?

    My understanding is that various ‘coherence arguments’ exist, of the form:

    1. If your preferences diverged from being representable by a utility function in some way, then you would do strictly worse in some way than by having some kind of preferences that were representable by a utility function. For instance, you will lose money, for nothing.
    2. You have good reason not to do that / don’t do that / you should predict that reasonable creatures will stop doing that if they notice that they are doing it.
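    The ‘lose money, for nothing’ point in step 1 is often illustrated with a money pump. Here is a minimal sketch (the items, fee, and starting amounts are all hypothetical, chosen just for illustration): an agent with cyclic preferences, A over B, B over C, and C over A, will pay a small fee for each ‘upgrade’, and so can be traded in a circle back to its starting item with strictly less money.

    ```python
    # Money pump sketch: cyclic (intransitive) preferences let a trader
    # cycle the agent back to its starting item while collecting a fee
    # at every step.

    FEE = 1  # the agent pays this much for any trade it prefers

    # Maps the item currently held -> the item the agent prefers to it.
    # Preferences are cyclic: A > B, B > C, C > A.
    prefers = {"B": "A", "C": "B", "A": "C"}

    def run_money_pump(start_item, start_money, rounds):
        """Trade the agent up to its preferred item `rounds` times,
        charging FEE per trade, and return its final holdings."""
        item, money = start_item, start_money
        for _ in range(rounds):
            item = prefers[item]  # swap to the strictly preferred item...
            money -= FEE          # ...paying the fee for the privilege
        return item, money

    item, money = run_money_pump("A", 10, 3)
    # After three trades the agent holds item A again, but has paid 3 in
    # fees: same goods, less money.
    ```

    Nothing here depends on the fee being money, of course; any resource the agent values works the same way.
    
    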
  • Holidaying and purpose

    I’m on holiday. A basic issue with holidays is that it feels more satisfying and meaningful to do purposeful things, but for a thing to actually serve a purpose, it often needs to pass a higher bar than a less purposeful thing does. In particular, you often have to finish a thing, and do it well, in order for it to achieve its purpose. And finishing things well is generally harder and less fun than starting them, and thus contrary to holidaying in further ways.