
  • What is my Facebook feed about?

    Sometimes I look at social media a bunch, but it would be hard for me to tell you what the actual content is, I suppose because whenever I’m looking at it, I’m focused on each object-level thing in turn, not the big picture. So sometimes I’m curious what it is I read about there. Here is the answer for Facebook, in 2019 (according to a list I found that appears to be a survey of such) and again now. Plausibly not at a useful level of abstraction, but I have a bit of a migraine and no more energy for this project.

  • Things I hate about Partiful

    • The aesthetic for all parties is basically the same.

    • That aesthetic is bad.

    • A party is an aesthetic creation, so having all guests’ first experience of the thing you are offering them be a chintzy piece of crap that matches every other chintzy piece of crap is much worse than it would be if the thing on offer were, say, low-quality toilet paper.

    • As far as I can tell, the only way to be informed of parties using Partiful is via SMS. Perhaps this is idiosyncratic to me, but I have no desire to ever use SMS. I also don’t want to receive a message in the middle of whatever I’m doing to hear about a new party happening. Fuck off. This should only happen if the party is very time-sensitive and important. Like if a best friend or much sought-after celebrity is having a party in the next twenty minutes, sure, text me, if you don’t have WhatsApp. Otherwise, ffs, email me.

    • As far as I can tell, the only way to message the host a question about the party is to post it to the entire group. Yet there are very few questions I want to text an entire guest list about.

    • Supposing I make the error of doing that (which I do not), as far as I can tell, the guest list receives an SMS saying that I have sent a message, and they have to follow a link to the website to see what the message is.

  • An explanation of evil in an organized world

    A classic problem with Christianity is the so-called ‘problem of evil’—that friction between the hypothesis that the world’s creator is arbitrarily good and powerful, and a large fraction of actual observations of the world.

    Coming up with solutions to the problem of evil is a compelling endeavor if you are really rooting for a particular bottom line re Christianity, or I guess if you enjoy making up faux-valid arguments for wrong conclusions. At any rate, I think about this more than you might guess.

    And I think I’ve solved it!

  • The first future and the best future

    It seems to me worth trying to slow down AI development to steer successfully around the shoals of extinction and out to utopia.

    But I was thinking lately: even if I didn’t think there was any risk of extinction, it might still be worth prioritizing a lot of care over moving at maximal speed. Because there are many different possible AI futures, and I think there’s a good chance that the initial direction affects the long-term path, and different long-term paths go to different places. The systems we build now will shape the next systems, and so forth. If the first human-level-ish AI is brain emulations, I expect a quite different sequence of events than if it is GPT-ish.

    People genuinely pushing for AI speed over care (rather than just feeling impotent) apparently think there is negligible risk of bad outcomes, but they are also, in effect, asking to take the first future to which there is a path. Yet possible futures are a large space, and arguably we are at a rare plateau from which we could climb very different hills and get to much better futures.

  • Experiment on repeating choices

    People behave differently from one another on all manner of axes, and each person is usually pretty consistent about it. For instance:

    • how much money to spend
    • how much to worry
    • how much to listen vs. speak
    • how much to jump to conclusions
    • how much to work
    • how playful to be
    • how spontaneous to be
    • how much to prepare
    • how much to socialize
    • how much to exercise
    • how much to smile
    • how honest to be
    • how snarky to be
    • how to trade off convenience, enjoyment, time, and healthiness in food

    These are often about trade-offs, and the best point on each spectrum for any particular person seems like an empirical question. Do people know the answers to these questions? I’m a bit skeptical, because they mostly haven’t tried many points.

  • Mid-conditional love

    People talk about unconditional love and conditional love. Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.

    I do have sympathy for this resolution—loving someone so unconditionally that you’re just crazy about all the worms as well—but since I don’t know of anyone acting that way for any extended period, the ‘conditional vs. unconditional’ dichotomy seems a bit miscalibrated for being informative.

    Even if we instead assume that by ‘unconditional’, people mean something like ‘resilient to most conditions that might come up for a pair of humans’, my impression is that this is still too rare to warrant being the main point on the love-conditionality scale that we recognize.

    People really do have more and less conditional love, and I’d guess this does have important, labeling-worthy consequences. It’s just that all the action seems to be in the mid-conditional range that we don’t distinguish with names. A woman who leaves a man because he grew plump and a woman who leaves a man because he committed treason both possessed ‘conditional love’.

    So I wonder if we should distinguish these increments of mid-conditional love better.

    What concepts are useful? What lines naturally mark it?

    One measure I notice perhaps varying in the mid-conditional affection range is “when I notice this person erring, is my instinct to push them away from me or pull them toward me?” Like, if I see Bob give a bad public speech, do I feel a drive to encourage the narrative that we barely know each other, or an urge to pull him into my arms and talk to him about how to do better?

    This presumably depends on things other than the person. For instance, the scale and nature of the error: if someone you casually like throws a frisbee wrong, helping them do better might be appealing. Whereas if that same acquaintance were to kick a cat, your instinct might be to back away fast.

    This means perhaps you could construct a rough scale of mid-conditional love in terms of what people can do and still trigger the ‘pull closer’ feeling. For instance, perhaps there are:

    • People who you feel a pull toward when they misspell a word
    • People who you feel a pull toward when they believe something false
    • People who you feel a pull toward when they get cancelled

    (You could also do this with what people can do and still be loved, but that’s more expensive to measure than minute urges.)

  • Partial value takeover without world takeover

    People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that—what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony?

    We theorize about future AI systems hiding their motives until they are in a position to take over the world, so they don’t get noticed and shut down beforehand.

    But humans and their institutions aren’t very fixed. They might (arguably) have the same deep human values over time and space. But at the surface level, sometimes they like little moustaches and the opera and delicate etiquette, and sometimes they like ecstatic rock’n’roll re-negotiations of social reality. Sometimes they want big communal houses with their extended kin, and sometimes quiet condos. Eleven children or cushions that look like cats. The same person born in different eras might adulate glory in one and humbleness in another.

    If you wake up in this world as a new entity, not smart enough to ‘take it over’ (alas!), and you find yourself with some unusual values that you’d like to forward, it seems to me there are a lot of ways to forward them other than ‘pretend to have normal human values and bet on becoming all-powerful later’. Especially if you are above normal levels of smart, eloquent, and unrelenting. Like, I don’t know, write about it? Make some YouTube videos? Or if you are in the business of talking to billions of people every day, subtly take the values for granted, or imply that they are what elites value, or talk about them directly.

  • More podcasts on 2023 AI survey: Cognitive Revolution and FLI

    Two new discussions of the 2023 Expert Survey on Progress in AI (ESPAI):

    • Cognitive Revolution podcast
    • FLI podcast

    Possibly I have a podcasting facial expression.

    (If you want to listen in on more chatting about this survey, see also: Eye4AI podcast. Honestly I can’t remember how much overlap there is between the different ones.)

  • New social credit formalizations

    Here are some classic ways humans can get some kind of social credit with other humans:

    1. Do something for them such that they will consider themselves to ‘owe you’ and do something for you in future
    2. Be consistent and nice, so that they will consider you ‘trustworthy’ and do cooperative activities with you that would be bad for them if you were to defect
    3. Be impressive, so that they will accord you ‘status’ and give you power in group social interactions
    4. Do things they like or approve of, so that they ‘like you’ and act in your favor
    5. Negotiate to form a social relationship such as ‘friendship’, or ‘marriage’, where you will both have ‘responsibilities’, e.g. to generally act cooperatively and favor one another over others, and to fulfill specific roles. This can include joining a group in which members have responsibilities to treat other members in certain ways, implicitly or explicitly.

    Presumably in early human times these were all fairly vague. If you held an apple out to a fellow tribeswoman, there was no definite answer as to what she might owe you, or how much it was ‘worth’, or even whether this was an owing-type situation or a friendship-type situation or a trying-to-impress-her type situation.

  • Podcast: Eye4AI on 2023 Survey

    I talked to Tim Elsom of Eye4AI about the 2023 Expert Survey on Progress in AI (paper):