
  • Association taxes are collusion subsidies

    Under present norms, if Alice associates with Bob, and Bob is considered objectionable in some way, Alice can be blamed for her association, even if there is no sign she was complicit in Bob’s sin.

    An interesting upshot is that as soon as you become visibly involved with someone, you are slightly invested in their social standing—when their social stock price rises and falls, yours also wavers.

  • Mental software updates

    Brains are like computers in that the hardware can do all kinds of stuff in principle, but each one tends to run through some particular patterns of activity repeatedly. For computers you can change this by changing programs. What are big ways brain ‘software’ changes?

  • Winning the power to lose

    Have the Accelerationists won?

    Last November Kevin Roose announced that those in favor of going fast on AI had now won against those favoring caution, with the reinstatement of Sam Altman at OpenAI. Let’s ignore whether Kevin’s was a good description of the world, and deal with a more basic question: if it were so—i.e. if Team Acceleration would control the acceleration from here on out—what kind of win was it they won?

  • Ten arguments that AI is an existential risk

    You can read Ten arguments that AI is an existential risk by Nathan Young and me at the AI Impacts Blog.

  • Secondary forces of debt

    A general thing I hadn’t noticed about debts until lately:

    • Whenever Bob owes Alice, then Alice has reason to look after Bob, to the extent that increases the chance he satisfies the debt.
    • Yet at the same time, Bob has an incentive for Alice to disappear, insofar as it would relieve him.

    These might be tiny incentives, and might not overwhelm, for instance, Bob’s many other reasons for not wanting Alice to disappear.

  • Podcasts: AGI Show, Consistently Candid, London Futurists

    For those of you who enjoy learning things via listening in on numerous slightly different conversations about them, and who also want to learn more about this AI survey I led, three more podcasts on the topic, and also other topics:

    • The AGI Show: audio, video (other topics include: my own thoughts about the future of AI and my path into AI forecasting)
    • Consistently Candid: audio (other topics include: whether we should slow down AI progress, the best arguments for and against existential risk from AI, parsing the online AI safety debate)
    • London Futurists: audio (other topics include: are we in an arms race? Why is my blog called that?)
  • What if a tech company forced you to move to NYC?

    It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not be just like “oh sure, NYC has higher GDP/capita than my current city, sounds good”.

  • Podcast: Center for AI Policy, on AI risk and listening to AI researchers

    I was on the Center for AI Policy Podcast. We talked about topics around the 2023 Expert Survey on Progress in AI, including why I think AI is an existential risk, and how much to listen to AI researchers on the subject. Full transcript at the link.

  • Is suffering like shit?

    People seem to find suffering deep. Serious writings explore the experiences of all manner of misfortunes, and the nuances of trauma and torment involved. It’s hard to write an essay about a really good holiday that seems as profound as an essay about a really unjust abuse. A dark past can be plumbed for all manner of meaning, whereas a slew of happy years is boring and empty, unless perhaps they are too happy and suggest something dark below the surface. (More thoughts in the vicinity of this here.)

    I wonder if one day suffering will be so avoidable that the myriad hurts of present-day existence will seem to future people like the problem of excrement getting on everything. Presumably a real issue in 1100 AD, but now irrelevant, unrelatable, decidedly not fascinating or in need of deep analysis.

  • Twin Peaks: under the air

    Content warning: low content

    ~ Feb 2021

    The other day I decided to try imbibing work-relevant blog posts via AI-generated recital, while scaling the Twin Peaks—large hills near my house in San Francisco, of the sort that one lives near and doesn’t get around to going to. It was pretty strange, all around.