
  • Secondary forces of debt

    A general thing I hadn’t noticed about debts until lately:

    • Whenever Bob owes Alice, Alice has reason to look after Bob, to the extent that doing so increases the chance he satisfies the debt.
    • Yet at the same time, Bob has an incentive for Alice to disappear, insofar as that would relieve him of the debt.

    These might be tiny incentives, and might not overwhelm, for instance, Bob’s many other reasons for not wanting Alice to disappear (a toy calculation of their rough sizes is below).
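
    To put rough sizes on these two forces, here is a minimal sketch in Python with made-up numbers (the $1000 debt and the repayment probabilities are illustrative assumptions, not figures from the post):

        # Toy model of the "secondary forces" of a debt, with made-up numbers.
        debt = 1000.0  # Bob owes Alice $1000 (hypothetical)

        # Alice's side: suppose looking after Bob raises the chance he repays
        # from 60% to 61%. Her expected gain from doing so:
        p_repay = 0.60
        p_repay_if_helped = 0.61
        alice_incentive = debt * (p_repay_if_helped - p_repay)  # $10

        # Bob's side: if Alice disappeared, he would be relieved of a debt he
        # would otherwise repay with 60% probability. His expected relief:
        bob_incentive = debt * p_repay  # $600

        print(f"Alice's reason to look after Bob: ~${alice_incentive:.0f}")
        print(f"Bob's reason to want Alice gone:  ~${bob_incentive:.0f}")
        # Either figure can easily be dwarfed by the parties' many other
        # reasons for wanting (or not wanting) each other around.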

  • Podcasts: AGI Show, Consistently Candid, London Futurists

    For those of you who enjoy learning things by listening in on numerous slightly different conversations about them, and who also want to learn more about this AI survey I led, here are three more podcasts on the topic (and also on other topics):

    • The AGI Show: audio, video (other topics include: my own thoughts about the future of AI and my path into AI forecasting)
    • Consistently Candid: audio (other topics include: whether we should slow down AI progress, the best arguments for and against existential risk from AI, parsing the online AI safety debate)
    • London Futurists: audio (other topics include: are we in an arms race? Why is my blog called that?)
  • What if a tech company forced you to move to NYC?

    It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions floating around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not be just like “oh sure, NYC has higher GDP/capita than my current city, sounds good”.

  • Podcast: Center for AI Policy, on AI risk and listening to AI researchers

    I was on the Center for AI Policy Podcast. We talked about topics around the 2023 Expert Survey on Progress in AI, including why I think AI is an existential risk, and how much to listen to AI researchers on the subject. Full transcript at the link.

  • Is suffering like shit?

    People seem to find suffering deep. Serious writings explore the experiences of all manner of misfortunes, and the nuances of trauma and torment involved. It’s hard to write an essay about a really good holiday that seems as profound as an essay about a really unjust abuse. A dark past can be plumbed for all manner of meaning, whereas a slew of happy years is boring and empty, unless perhaps they are too happy and suggest something dark below the surface. (More thoughts in the vicinity of this here.)

    I wonder if one day suffering will be so avoidable that the myriad hurts of present-day existence will seem to future people like the problem of excrement getting on everything. Presumably a real issue in 1100 AD, but now irrelevant, unrelatable, decidedly not fascinating or in need of deep analysis.

  • Twin Peaks: under the air

    Content warning: low content

    ~ Feb 2021

    The other day I decided to try imbibing work-relevant blog posts via AI-generated recital, while scaling the Twin Peaks—large hills near my house in San Francisco, of the sort that one lives near and doesn’t get around to going to. It was pretty strange, all around.

  • What is my Facebook feed about?

    Sometimes I look at social media a bunch, but it would be hard for me to tell you what the actual content is, I suppose because whenever I’m looking at it, I’m focused on each object-level thing in turn, not the big picture. So sometimes I’m curious what it is I read about there. Here is the answer for Facebook, in 2019—according to a list I found that appears to be a survey of such—and again now. Plausibly not at a useful level of abstraction, but I have a bit of a migraine and no more energy for this project.

  • Things I hate about Partiful

    • The aesthetic for all parties is basically the same.

    • That aesthetic is bad.

    • A party is an aesthetic creation, so having all guests’ first experience of the thing you are offering them be a chintzy piece of crap that matches every other chintzy piece of crap is much worse than it would be if the thing being sold were something like low-quality toilet paper.

    • As far as I can tell, the only way to be informed of parties using Partiful is via SMS. Perhaps this is idiosyncratic to me, but I have no desire to ever use SMS. I also don’t want to receive a message in the middle of whatever I’m doing to hear about a new party happening. Fuck off. This should only happen if the party is very time-sensitive and important. Like if a best friend or much-sought-after celebrity is having a party in the next twenty minutes, sure, text me, if you don’t have WhatsApp. Otherwise, ffs, email me.

    • As far as I can tell, the only way to message the host a question about the party is to post it to the entire group. Yet there are very few questions I want to text an entire guest list about.

    • Supposing I make the error of doing that (which I do not), as far as I can tell, the guest list receives an SMS saying that I have sent a message, and they have to follow a link to the website to see what the message is.

  • An explanation of evil in an organized world

    A classic problem with Christianity is the so-called ‘problem of evil’—that friction between the hypothesis that the world’s creator is arbitrarily good and powerful, and a large fraction of actual observations of the world.

    Coming up with solutions to the problem of evil is a compelling endeavor if you are really rooting for a particular bottom line re Christianity, or I guess if you enjoy making up faux-valid arguments for wrong conclusions. At any rate, I think about this more than you might guess.

    And I think I’ve solved it!

  • The first future and the best future

    It seems to me worth trying to slow down AI development to steer successfully around the shoals of extinction and out to utopia.

    But I was thinking lately: even if I didn’t think there was any chance of extinction risk, it might still be worth prioritizing a lot of care over moving at maximal speed. Because there are many different possible AI futures, and I think there’s a good chance that the initial direction affects the long-term path, and different long-term paths go to different places. The systems we build now will shape the next systems, and so forth. If the first human-level-ish AI is brain emulations, I expect a quite different sequence of events than if it is GPT-ish.

    People genuinely pushing for AI speed over care (rather than just feeling impotent) apparently think there is negligible risk of bad outcomes, but they are also asking to take the first future to which there is a path. Yet possible futures are a large space, and arguably we are on a rare plateau from which we could climb very different hills and reach much better futures.