
  • An easy coordination problem?

    Common wisdom says that it is incredibly hard to coordinate to not build more dangerous AI. This sounds believable in the abstract: international geopolitics arms race game theory something something.

  • How I love running

    There is a particular flavor of suffering I fear: where something is not just unpleasant, but is requiring active effort from you to continue having the unpleasant thing happen, and so you have to not only suffer the suffering, but also the constant thinking about whether maybe you should stop right now—and so are also having to dip peripherally into questions of free will and will power and who you are and if you will ever do anything and if you are fundamentally bad, and all this while you are already quite taxed by the original suffering.

    The epitome of this kind of suffering to my mind has traditionally been running. What everyday activity was less pleasant than running? Better to be lightly tortured by someone else, than have to do the inflicting as well. (No, I’m probably not a very athletic person.)

    But that was years ago. These days running is often one of the most joyous things I do.

    (I still don’t do it nearly enough, but often when I do I think “oh wow this is so good, I should do this much more often” rather than “can I stop? can I stop? I’m stopping… no, oh god, when is it over?”)

    What changed?

  • Canberra: folk music

    “…was anyone ever so young? I am here to tell you that someone was…”

    - Joan Didion, on being a twenty-year-old in New York City, “Goodbye to All That”

    Well I am here to tell you that someone was even younger than that.

  • We can prevent progress! Conceptual clarity, and inspiration from the FDA

    “We can’t prevent progress” say the people for some reason enthusiastically advocating that we just risk dying by AI rather than even consider contravening this law.

  • AI as a Trojan horse race

    I’ve argued that the AI situation is not clearly an ‘arms race’. By which I mean, going fast is not clearly good, even selfishly.

    I think this is a hard point to get across. Like, these people are RACING. They say they are RACING. They are GOING FAST. If they stop RACING the other side will get there first. How is it not a RACE??

    Which is a fair response.

    It’s like if I said “this isn’t a chess tournament” gesturing at a group of chess champions aggressively playing chess. How could it not be?

  • 'Wicked': thoughts

    I watched Wicked (the 2024 movie) with my ex and his family at Christmas. My current stance is that it was pretty fun but not especially incredible or deep. I could be pretty wrong—watching movies isn’t my strong suit, but I do like chatting about them afterwards. Some thoughts:

  • Association taxes are collusion subsidies

    Under present norms, if Alice associates with Bob, and Bob is considered objectionable in some way, Alice can be blamed for her association, even if there is no sign she was complicit in Bob’s sin.

    An interesting upshot is that as soon as you become visibly involved with someone, you are slightly invested in their social standing—when their social stock price rises and falls, yours also wavers.

  • Mental software updates

    Brains are like computers in that the hardware can do all kinds of stuff in principle, but each one tends to run through some particular patterns of activity repeatedly. For computers you can change this by changing programs. What are big ways brain ‘software’ changes?

  • Winning the power to lose

    Have the Accelerationists won?

    Last November Kevin Roose announced that those in favor of going fast on AI had now won against those favoring caution, with the reinstatement of Sam Altman at OpenAI. Let’s ignore whether Kevin’s was a good description of the world, and deal with a more basic question: if it were so—i.e. if Team Acceleration would control the acceleration from here on out—what kind of win was it they won?

  • Ten arguments that AI is an existential risk

    You can read Ten arguments that AI is an existential risk by Nathan Young and me at the AI Impacts Blog.