  • The ecology of conviction

    Supposing that sincerity has declined, why?

    It feels natural to me that sincere enthusiasms should be rare relative to criticism and half-heartedness. But I would have thought this was born of fairly basic features of the situation, and so wouldn’t change over time.

    It seems clearly easier and less socially risky to be critical of things, or non-committal, than to stand for a positive vision. It is easier to produce a valid criticism than an idea immune to valid criticism (and easier again to say, ‘this is very simplistic - the situation is subtle’). And if an idea is criticized, the critic gets to seem sophisticated, while the holder of the idea gets to seem naïve. A criticism is smaller than a positive vision, so a critic is usually not staking their reputation on their criticism as much, or claiming that it is good, in the way that the enthusiast is.

    But there are also rewards for positive visions and for sincere enthusiasm that aren’t had by critics and routine doubters. So for things to change over time, you really just need the scale of these incentives to change, whether in a basic way or because the situation is changing.

    One way this could have happened is that the internet (or even earlier changes in the information economy) somehow changed the ecology of enthusiasts and doubters, pushing the incentives away from enthusiasm. e.g. The ease, convenience and anonymity of criticizing and doubting on the internet puts a given positive vision in contact with many more critics, making it basically impossible for an idea to emerge not substantially marred by doubt, teeming with uncertainties, and summarizable as ‘maybe X, but I don’t know, it’s complicated’. This makes presenting positive visions less appealing, reducing the population of positive vision havers, and making them either less confident or more the kinds of people whose confidence isn’t affected by the volume of doubt other people might have about what they are saying. All of which makes them even easier targets for criticism, and makes confident enthusiasm for an idea increasingly correlated with being some kind of arrogant fool. Which in turn decreases the basic respect society offers someone who seems to have a positive vision.

    This is a very speculative story, but something like these kinds of dynamics seems plausible.

    These thoughts were inspired by a conversation I had with Nick Beckstead.

  • In balance and flux

    Someone more familiar with ecology recently noted to me that it used to be a popular view that nature was ‘in balance’ and had some equilibrium state, that it should be returned to. Whereas the new understanding is that there was never an equilibrium state: natural systems are always changing. Another friend who works in natural management also recently told me that their role in the past might have been trying to restore things to their ‘natural state’, but now the goal was to prepare yourself for what your ecology was becoming. A brief Googling returns this National Geographic article by Tik Root, ‘The “balance of nature” is an enduring concept. But it’s wrong.’, along the same lines. In fairness, they seem to be arguing against both the idea that nature is in a balance so delicate that you can easily disrupt it, and the idea that nature is in a balance so sturdy that it will correct anything you do to it, which sounds plausible. But they don’t say that ecosystems are probably in some kind of intermediately sturdy balance, in many dimensions at least. They say that nature is ‘in flux’ and that the notion of balance is a misconception.

    It seems to me though that there is very often equilibrium in some dimensions, even in a system that is in motion in other dimensions, and that that balance can be very important to maintain.

    Some examples:

    • bicycle
    • society with citizens with a variety of demeanors, undergoing broad social change
    • human growing older, moving to Germany, and getting pregnant, while maintaining a narrow range of temperatures and blood concentrations of different chemicals

    So the observation that a system is in flux seems fairly irrelevant to whether it is in equilibrium.

    Any system designed to go somewhere relies on some of its parameters remaining within narrow windows. Nature isn’t designed to go somewhere, so the issue of what ‘should’ happen with it is non-obvious. But the fact that ecosystems always gradually change along some dimensions (e.g. grassland becoming forest) doesn’t seem to imply that there is not still balance in other dimensions, where they don’t change so much, and where changing is more liable to lead to very different and arguably less good states.

    For instance, as a grassland gradually reforests, it might continue to have a large number of plant eating bugs, and bug-eating birds, such that the plant eating bugs would destroy the plants entirely if there were ever too many of them, but as there become more of them, the birds also flourish, and then eat them. As the forest grows, the tree-eating bugs become more common relative to the grass-eating bugs, but the rough equilibrium of plants, bugs, and birds remains. If the modern world was disrupting the reproduction of the birds, so that they were diminishing even while the bugs to eat were plentiful, threatening a bug-explosion-collapse in which the trees and grass would be destroyed by the brief insect plague, I think it would be reasonable to say that the modern world was disrupting the equilibrium, or putting nature out of balance.
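    The grassland story above can be caricatured in a few lines of code. This is a toy simulation with invented parameters, not a calibrated ecological model: one dimension (forest cover) drifts steadily, while another (bugs vs. birds) stays in a rough, self-correcting balance.

    ```python
    # Toy model: bugs grow logistically on plants, birds eat bugs.
    # The forest fraction drifts in one dimension while the
    # bug-bird balance holds in another. All numbers are invented.

    def simulate(steps):
        bugs, birds, forest = 300.0, 10.0, 0.0
        r, K = 0.3, 1000.0   # bug growth rate, plant-limited capacity
        a = 0.01             # rate at which birds eat bugs
        b, d = 0.001, 0.2    # bird growth per bug eaten, bird death rate
        for _ in range(steps):
            bugs, birds = (
                bugs + r * bugs * (1 - bugs / K) - a * bugs * birds,
                birds + b * bugs * birds - d * birds,
            )
            forest = min(1.0, forest + 0.002)  # grassland slowly reforests
        return bugs, birds, forest
    ```

    Run long enough, the bug and bird populations spiral in toward a stable interior balance even as the forest fraction marches to its maximum, which is the sense in which a system can be ‘in flux’ and ‘in equilibrium’ at once. Remove the birds (set `b` to zero) and the bugs instead climb toward the full plant-destroying capacity: the modern-world-harms-birds scenario above.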

    The fact that your bike has been moving forward for miles doesn’t mean that leaning a foot to the left suddenly is meaningless, in systems terms.

  • What is going on in the world?

    Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take:

    • The US is falling apart rapidly (on the scale of years), as evident in US politics departing from sanity and honor, sharp polarization, violent civil unrest, hopeless pandemic responses, ensuing economic catastrophe, one in a thousand Americans dying of infectious disease in 2020, and the abiding popularity of Trump in spite of it all.
    • Western civilization is declining on the scale of half a century, as evidenced by its inability to build things it used to be able to build, and the ceasing of apparent economic acceleration toward a singularity.
    • AI agents will control the future, and which ones we create is the only thing about our time that will matter in the long run. Major subplots:
      • ‘Aligned’ AI is necessary for a non-doom outcome, and hard.
      • Arms races worsen things a lot.
      • The order of technologies matters a lot / who gets things first matters a lot, and many groups will develop or do things as a matter of local incentives, with no regard for the larger consequences.
      • Seeing more clearly what’s going on ahead of time helps all efforts, especially in the very unclear and speculative circumstances (e.g. this has a decent chance of replacing subplots here with truer ones, moving large sections of AI-risk effort to better endeavors).
      • The main task is finding levers that can be pulled at all.
      • Bringing in people with energy to pull levers is where it’s at.
    • Institutions could be way better across the board, and these are key to large numbers of people positively interacting, which is critical to the bounty of our times. Improvement could make a big difference to swathes of endeavors, and well-picked improvements would make a difference to endeavors that matter.
    • Most people are suffering or drastically undershooting their potential, for tractable reasons.
    • Most human effort is being wasted on endeavors with no abiding value.
    • If we take anthropic reasoning and our observations about space seriously, we appear very likely to be in a ‘Great Filter’, which appears likely to kill us (and unlikely to be AI).
    • Everyone is going to die, the way things stand.
    • Most of the resources ever available are in space, not subject to property rights, and in danger of being ultimately had by the most effective stuff-grabbers. This could begin fairly soon in historical terms.
    • Nothing we do matters for any of several reasons (moral non-realism, infinite ethics, living in a simulation, being a Boltzmann brain, ..?)
    • There are vast quantum worlds that we are not considering in any of our dealings.
    • There is a strong chance that we live in a simulation, making the relevance of each of our actions different from that which we assume.
    • There is reason to think that acausal trade should be a major factor in what we do, long term, and we are not focusing on it much and ill prepared.
    • Expected utility theory is the basis of our best understanding of how best to behave, and there is reason to think that it does not represent what we want. Namely, Pascal’s mugging, or the option of destroying the world with all but one in a trillion chance for a proportionately greater utopia, etc.
    • Consciousness is a substantial component of what we care about, and we not only don’t understand it, but are frequently convinced that it is impossible to understand satisfactorily. At the same time, we are on the verge of creating things that are very likely conscious, and so being able to affect the set of conscious experiences in the world tremendously. Very little attention is being given to doing this well.
    • We have weapons that could destroy civilization immediately, which are under the control of various not-perfectly-reliable people. We don’t have a strong guarantee of this not going badly.
    • Biotechnology is advancing rapidly, and threatens to put extremely dangerous tools in the hands of personal labs, possibly bringing about a ‘vulnerable world’ scenario.
    • Technology keeps advancing, and we may be in a vulnerable world scenario.
    • The world is utterly full of un-internalized externalities and they are wrecking everything.
    • There are lots of things to do in the world, we can only do a minuscule fraction, and we are hardly systematically evaluating them at all. Meanwhile massive well-intentioned efforts are going into doing things that are probably much less good than they could be.
    • AI is a powerful force for good, and if it doesn’t pose an existential risk, the earlier we make progress on it, the faster we can move to a world of unprecedented awesomeness, health and prosperity.
    • There are risks to the future of humanity (‘existential risks’), and vastly more is at stake in these than in anything else going on (if we also include catastrophic trajectory changes). Meanwhile the world’s thinking about and responsiveness to these risks is incredibly minor, and they are not taken seriously.
    • The world is controlled by governments, and really awesome governance seems to be scarce and terrible governance common. Yet we probably have a lot of academic theorizing on governance institutions, and a single excellent government based on scalable principles might have influence beyond its own state.
    • The world is hiding, immobilized and wasted by a raging pandemic.

    It’s a draft. What should I add? (If, in life, you’ve chosen among ways to improve the world, is there a simple story within which your choices make particular sense?)

  • Condition-directedness

    In chess, you can’t play by picking a desired end of the game and backward chaining to the first move, because there are vastly more possible chains of moves than your brain can deal with, and the good ones are few. Instead, chess players steer by heuristic senses of the worth of situations. I assume they still back-chain a few moves (‘if I go there, she’ll have to move her rook, freeing my queen’) but just leading from a heuristically worse to a heuristically better situation a short hop away.
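    The chess strategy above is essentially greedy local improvement: no end state in view, just the best short hop from here. A minimal sketch, with a made-up one-dimensional ‘situation’ and an invented heuristic, purely to illustrate the shape of the procedure:

    ```python
    # Condition-directedness as greedy hill climbing: never name an
    # end state, just take whichever local move most improves a
    # heuristic sense of how good the situation is.
    # The situation and heuristic here are invented for illustration.

    def heuristic(situation):
        # Invented 'worth of the situation': best at situation == 7,
        # though the agent below never consults that fact directly.
        return -(situation - 7) ** 2

    def improve_locally(situation):
        """Repeatedly take the best short-hop move, while any move helps."""
        while True:
            best = max((situation - 1, situation + 1), key=heuristic)
            if heuristic(best) <= heuristic(situation):
                return situation  # no local move improves things; rest here
            situation = best
    ```

    Backward chaining would instead start from a chosen final state and search for a path to it; the point of the contrast is that the greedy version needs only a local sense of better and worse, which is cheap, at the cost of being blind to improvements that require first making things heuristically worse.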

    In life, it is often taken for granted that one should pursue goals, not just very locally, but over scales of decades. The alternative is taken to be being unambitious and directionless.

    But there should also be an alternative that is equivalent to the chess one: heuristically improving the situation, without setting your eye on a particular pathway to a particular end-state.

    Which seems like actually what people do a lot of the time. For instance, making your living room nice without a particular plan for it, or reading to be ‘well read’, or exercising to be ‘fit’ (at least insofar as having a nice living space and being fit and well-read are taken as generally promising situations rather than stepping stones immediately prior to some envisaged meeting, say). Even at a much higher level, spending a whole working life upholding the law or reporting on events or teaching the young because these put society in a better situation overall, not because they will lead to some very specific outcome.

    In spite of its commonness, I’m not sure that I have heard of this type of action labeled as distinct from goal-directedness and undirectedness. I’ll call it condition-directedness for now. When people are asked for their five year plans, they become uncomfortable if they don’t have one, rather than proudly stating that they don’t currently subscribe to goal-oriented strategy at that scale. Maybe it’s just that I hang out in this strange Effective Altruist community, where all things are meant to be judged by their final measure on the goal, which perhaps encourages evaluating them explicitly with reference to an envisaged path to the goal, especially if it is otherwise hard to distinguish the valuable actions from doing whatever you feel like.

    It seems like one could be condition-directed and yet very ambitious and not directionless. (Though your ambition would be non-specific, and your direction would be local, and maybe they are the worse for these things?) For instance, you might work tirelessly on whatever seems like it will improve the thriving of a community that you are part of, and always know in which direction you are pushing, and have no idea what you will be doing in five years.

    Whether condition-directedness is a good kind of strategy would seem to depend on the game you are playing, and your resources for measuring and reasoning about it. In chess, condition-directedness seems necessary. Somehow longer term plans do seem more feasible in life than in chess though, so it is possible that they are always better in life, at the scales in question. I doubt this, especially given the observation that people often seem to be condition-directed, at least at some scales and in some parts of life.

    (These thoughts currently seem confused to me - for instance, what is up with scales? How is my knowing that I do want to take the king relevant?)

    Inspired by a conversation with John Salvatier.

  • Opposite attractions

    Is the opposite of what you love also what you love?

    I think there’s a general pattern where if you value A you tend to increase the amount of it in your life, and you end up feeling very positively about various opposites of A—things that are very unlike A, or partially prevent A, or undo some of A’s consequences—as well. At least some of the time, or for some parts of you, or in some aspects, or when your situation changes a bit. Especially if you contain multitudes.

    Examples:

    • Alice values openness, so tends to be very open: she tells anyone who asks (and many people who don’t) what’s going on in her life, and writes about it abundantly on the internet. But when she is embarrassed about something, she feels oppressed by everyone being able to see her so easily. So then she hides in her room, works at night when nobody is awake to think of her, and writes nothing online. Because for her, interacting with someone basically equates to showing them everything, her love of openness comes with a secondary love of being totally alone in her room.
    • Bob values connecting with people, and it seems hard in the modern world, but he practices heartfelt listening and looking people in the eye, and mentally jumping into their perspectives. He often has meaningful conversations in the grocery line, which he enjoys and is proud of. He goes to Burning Man and finds thousands of people desperate to connect with him, so that his normal behavior is quickly leading to an onslaught of connecting that is more than he wants. He finds himself savoring the impediments to connection—the end of an eye-gazing activity, the chance to duck out of a conversation, the walls of his tent—in a way that nobody else at Burning Man is.
    • An extreme commitment to honesty and openness with your partner might lead to a secondary inclination away from honesty and openness with yourself.
    • A person who loves travel also loves being at home again afterward, with a pointed passion absent from a person who is a perpetual homebody.
    • A person who loves jumping in ice water is more likely to also love saunas than someone who doesn’t.
    • A person who loves snow is more likely to love roaring fires.
    • A person who loves walking really enjoys lying down at the end of the day.
    • A person who surrounds themselves with systems loves total abandonment of them during holiday more than he who only had an appointment calendar and an alarm clock to begin with.
    • A person with five children because they love children probably wants a babysitter for the evening more than the person who ambivalently had a single child.
    • A person who loves hanging out with people who share an interest in the principles of effective altruism is often also especially excited to hang out with people who don’t, on the occasions when they do that.
    • A person who directs most of their money to charity is more obsessed with the possibility of buying an expensive dress than their friend who cares less about charity.
    • A person who is so drawn to their partner’s company that they can’t stay away from them at home sometimes gets more out of solitary travel than someone more solitariness-focused in general.
    • A person craving danger also cares about confidence in safety mechanisms.
    • A person who loves the sun wants sunglasses and sunscreen more than a person who stays indoors.

    This pattern makes sense, because people and things are multifaceted, and effects are uncertain and delayed. So some aspect of you liking some aspect of a thing at some time will often mean you ramp up that kind of thing, producing effects other than the one you liked, plus more of the effect that you liked than intended because of delay. And anyway you are a somewhat different creature by then, and maybe always had parts less amenable to the desired thing anyway. Or more simply, because in systems full of negative feedbacks, effects tend to produce opposite effects, and you and the world are such systems.

  • What is it good for? But actually?

    I didn’t learn about history very well prior to my thirties somehow, but lately I’ve been variously trying to rectify this. Lately I’ve been reading Howard Zinn’s A People’s History of the United States, listening to Steven Pinker’s The Better Angels of Our Nature, watching Ken Burns and Lynn Novick’s documentary about the Vietnam War, and watching Oversimplified history videos on YouTube (which I find too lighthearted for the subject matter, but if you want to squeeze extra history learning into your leisure and dessert time, compromises can be worth it).

    There is a basic feature of all this that I’m perpetually confused about: how has there been so much energy for going to war?

    It’s hard to explain my confusion, because in each particular case, there might be plenty of plausible motives given–someone wants ‘power’, or to ‘reunite their country’, or there is some customary enemy, or that enemy might attack them otherwise–but overall, it seems like the kind of thing people should be extremely averse to, such that even if there were plausibly good justifications, they wouldn’t just win out constantly, other justifications for not doing the thing would usually be found. Like, there are great reasons for writing epic treatises on abstract topics, but somehow, most people find that they don’t get around to it. I expect going to some huge effort to travel overseas and die in the mud to be more like that, intuitively.

    To be clear, I’m not confused here about people fighting in defense of things they care a lot about—joining the army when their country is under attack, or joining the Allies in WWII. And I’m not confused by people who are forced to fight, by conscription or desperate need of money. It’s just that across these various sources on history, I haven’t seen much comprehensible-to-me explanation of what’s going on in the minds of the people who volunteer to go to war (or take part in smaller dangerous violence) when the stakes aren’t already at the life or death level for them.

    I am also not criticizing the people whose motives I am confused by–I’m confident that I’m missing things.

    It’s like if I woke up tomorrow to find that half the country was volunteering to cut off their little finger for charity, I’d be pretty surprised. And if upon inquiring, each person had something to say—about how it was a good charity, or how suffering is brave and valiant, or how their Dad did it already, or how they were being emotionally manipulated by someone else who wanted it to happen, or how they wanted to be part of something—each one might not be that unlikely, but I’d still feel overall super confused, at a high level, at there being enough total energy behind this, given that it’s a pretty costly thing to do.

    At first glance, the historical people heading off to war don’t feel surprising. But I feel like this is because it is taken for granted as what historical people do. Just as in stories about Christmas, it is taken for granted that Santa Claus will make and distribute billions of toys, because that’s what he does, even though his motives are actually fairly opaque. But historical people presumably had internal lives that would be recognizable to me. What did it look like from the inside, to hear that WWI was starting, and hurry to sign up? Or to volunteer for the French military in time to fight to maintain French control in Vietnam, in the First Indochina War, that preceded the Vietnam War?

    I’d feel less surprised in a world where deadly conflict was more like cannibalism is in our world. Where yes, technically humans are edible, so if you are hungry enough you can eat them, but it is extremely rare for it to get to that, because nobody wants to be on any side of it, and they have very strong and consistent feelings about that, and if anyone really wanted to eat thousands or millions of people, say to bolster their personal or group power, it would be prohibitively expensive in terms of money or social capital to overcome the universal distaste for this idea.

  • Unexplored modes of language

    English can be communicated via 2D symbols that can be drawn on paper using a hand and seen with eyes, or via sounds that can be made with a mouth and heard by ears.

    These two forms are the same language because the mouth sounds and drawn symbols correspond at the level of words (and usually as far as sounds and letters, at least substantially). That is, if I write ‘ambition’, there is a specific mouth sound that you would use if converting it to spoken English, whereas if you were converting it to spoken French, there might not be a natural equivalent.

    As far as I know, most popular languages are like this: they have a mouth-sound version and a hand-drawn (or hand-typed) version. They often have a braille version, with symbols that can be felt by touch instead of vision. An exception is sign languages (which are generally not just alternate versions of spoken languages), which use 4-D symbols gestured by hands over time, and received by eyes.

    I wonder whether there are more modes of languages that it would be good to have. Would we have them, if there were? It’s not clear from a brief perusal of Wikipedia that Europe had sophisticated sign languages prior to about five hundred years ago. Communication methods generally have strong network effects—it’s not worth communicating by some method that nobody can understand, just like it’s not worth joining an empty dating site—and new physical modes of English are much more expensive than for instance new messaging platforms, and have nobody to promote them.

    Uncommon modes of language that seem potentially good (an uninformed brainstorm):

    • symbols drawn with hands on receiver’s skin, received by touch. I’ve heard of blind and deaf people such as Helen Keller using this, but it seems useful for instance when it is loud, or when you don’t want to be overheard or to annoy people nearby, or for covert communication under the table at a larger event, or for when you are wearing a giant face mask.
    • symbols gestured with whole body, like interpretive dance but with objective interpretation. Good from a distance, when loud, etc. Perhaps conducive to different sorts of expressiveness, like how verbal communication makes singing with lyrics possible, and there is complementarity between the words and the music.
    • symbols gestured with whole body, interpreted by computer, received as written text. What if keyboards were like a Kinect dance game? Instead of using your treadmill desk while you type with your hands, you just type with your arms, legs and body in a virtual reality whole-body keyboard space. Mostly good for exercise, non-sedentariness, feeling alive, etc.
    • drumming/tapping, received by ears or touch. Possibly faster than spoken language, because precise sounds can be very fast. I don’t know. This doesn’t really sound good.
    • a sign version of English. This exists, but is rare. Good for when it is loud, when you don’t want to be overheard, when you are wearing a giant face mask or are opposed to exhaling too much on the other person, when you are at a distance, etc.
    • symbols drawn with hands in one place, e.g. the surface of a phone, or a small number of phone buttons, such that you could enter stuff on your phone by tapping your fingers in place in a comfortable position with the hand you were holding it with, preferably still in your pocket, rather than awkwardly moving them around on the surface while you hold it either with another hand or some non-moving parts of the same hand, and having to look at the screen while you do it. This could be combined with the first one on this list.
    • What else?

    Maybe if there’s a really good one, we could overcome the network effect with an assurance contract. (Or try to, and learn more about why assurance contracts aren’t used more.)
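    The assurance-contract mechanism is simple enough to sketch. A toy model, with names and numbers invented for illustration: each person pledges to adopt the new language mode only if enough others pledge too, so nobody risks learning a mode no one else uses.

    ```python
    # Toy assurance contract for overcoming a network effect:
    # nobody is committed unless the pledge threshold is met, so
    # pledging is (roughly) risk-free. Illustration only.

    def assurance_contract(pledges, threshold):
        """Everyone is committed if enough people pledge; otherwise no one is."""
        pledgers = set(pledges)
        return pledgers if len(pledgers) >= threshold else set()
    ```

    With a threshold of, say, five, three pledgers get an empty commitment set and lose nothing; five pledgers all become committed at once, jumping the group past the empty-dating-site stage in one move. The interesting empirical question the parenthetical raises is why this mechanism, despite its theoretical neatness, sees so little use.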

  • Why are delicious biscuits obscure?

    I saw a picture of these biscuits (or cookies), and they looked very delicious. So much so that I took the uncharacteristic step of actually making them. They were indeed among the most delicious biscuits of which I am aware. And yet I don’t recall hearing of them before. This seems like a telling sign about something. (The capitalist machinery? Culture? Industrial food production constraints? The vagaries of individual enjoyment?)

    Kolakakor

    Why doesn’t the market offer these delicious biscuits all over the place? Isn’t this just the kind of rival, excludable, information-available, well-internalized good that markets are on top of?

    Some explanations that occur to me:

    1. I am wrong or unusual in my assessment of deliciousness, and for instance most people would find a chocolate chip cookie or an Oreo more delicious.
    2. They are harder to cook commercially than the ubiquitous biscuits for some reason. e.g. they are most delicious warm.
    3. They are Swedish, and there are mysterious cultural or linguistic barriers to foods spreading from their original homes. This would also help explain some other observations, to the extent that it counts as an explanation at all.
    4. Deliciousness is not a central factor in food spread. (Then what is?)

    If you want to help investigate, you can do so by carrying out the following recipe and reporting on the percentile of deliciousness of the resulting biscuits. (I do not claim that this is a high priority investigation to take part in, unless you are hungry for delicious biscuits or a firsthand encounter with a moderately interesting sociological puzzle.)

     

    *

     

    Kolakakor

    (Or Kolasnittar. Adapted from House & Garden’s account of a recipe in Magnus Nilsson’s “The Nordic Baking Book”. It’s quite plausible that their versions are better than mine, which has undergone pressure for ease plus some random ingredient substitutions. However I offer mine, since it is the one I can really vouch for.)

    Takes about fifteen minutes of making, and fifteen further minutes of waiting. Makes enough biscuits for about five people to eat too many biscuits, plus a handful left over. (The other recipe calls this about 40 ‘shortbreads’.)

    Ingredients

    • 200 g melted butter (e.g. microwave it)
    • 180 g sugar
    • 50 g golden syrup
    • 50 g honey
    • 300 g flour, ideally King Arthur gluten free flour, but wheat flour will also do
    • 1 teaspoon bicarbonate of soda (baking soda)
    • 2 teaspoons ground ginger
    • 2 good pinches of salt

    Method

    1. Preheat oven: 175°C/347°F
    2. Put everything in a mixing bowl (if you have kitchen scales, put the mixing bowl on them, set scales to zero, add an ingredient, reset scales to zero, add the next ingredient, etc.)
    3. Mix.
    4. Taste [warning: public health officials say not to do this because eating raw flour is dangerous]. Adjust mixedness, saltiness, etc. It should be very roughly the consistency of peanut butter, i.e. probably less firm than you expect. (Taste more, as desired. Wonder why we cook biscuits at all. Consider rebellion. Consider Chesterton’s fence. Taste one more time.)
    5. Cover a big tray or a couple of small trays with baking paper.
    6. Make the dough into about four logs, around an inch in diameter, spaced several inches from one another and the edges of the paper. They can be misshapen; their shapes are temporary.
    7. Cook for about 15 minutes, or until golden and spread out into 1-4 giant flat seas of biscuit. When you take them out, they will be very soft and probably not appear to be cooked.
    8. As soon as they slightly cool and firm up enough to pick up, start chopping them into strips about 1.25 inches wide and eating them.

     

    *

     

    Bonus mystery: they are gluten free, egg free, and can probably easily be dairy free. The contest with common vegan and/or gluten-free biscuits seems even more winnable, so why haven’t they even taken over that market?

  • Cultural accumulation

    When I think of humans being so smart due to ‘cultural accumulation’, I think of lots of tiny innovations in thought and technology being made by different people, and added to the interpersonal currents of culture that wash into each person’s brain, leaving a twenty year old in 2020 much better intellectually equipped than a 90 year old who spent their whole life thinking in 1200 AD.

    This morning I was chatting to my boyfriend about whether a person who went back in time (let’s say a thousand years) would be able to gather more social power than they can now in their own time. Some folk we know were discussing the claim that some humans would have a shot at literally taking over the world if sent back in time, and we found this implausible.

    The most obvious difference between a 2020 person and a 1200 AD person, in 1200 AD, is that the 2020 person has experience with incredible technological advances that the 1200 AD native doesn’t even know are possible. But a notable thing about a modern person is that they famously don’t know what a bicycle looks like, so the level of technology they might actually be able to rebuild on short notice in 1200 AD is probably not at the level of a nutcracker, and they probably already had those in 1200 AD.

    How does 2020 have complicated technology, if most people don’t know how it works? One big part is specialization: across the world, quite a few people do know what bicycles look like. And more to the point, presumably some of them know in great detail what bicycle chains look like, and what they are made of, and what happens if you make them out of slightly different materials or in slightly different shapes, and how such things interact with the functioning of the bicycle.

    But suppose the 2020 person who is sent back is a bicycle expert, and regularly builds their own at home. Can they introduce bikes to the world 600 years early? My tentative guess is yes, but not very ridable ones, because they don’t have machines for making bike parts, or any idea what those machines are like or the principles behind them. They can probably demonstrate the idea of a bike with wood and cast iron and leather, supposing others are cooperative with various iron casting, wood shaping, leather-making know-how. But can they make a bike that is worth paying for and riding?

    I’m not sure, and bikes were selected here for being so simple that an average person might know what their machinery looks like. Which makes them unusually close among technologies to simple chunks of metal. I don’t think a microwave oven engineer can introduce microwave ovens in 1200, or a silicon chip engineer can make much progress on introducing silicon chips. These require other technologies that require other technologies too many layers back.

    But what if the whole of 2020 society was transported to 1200? The metal extruding experts and the electricity experts and the factory construction experts and Elon Musk? Could they just jump back to 2020 levels of technology, since they know everything relevant between them? (Assuming they are somehow as well coordinated in this project as they are in 2020, and are not just putting all of their personal efforts into avoiding being burned at the stake or randomly tortured in the streets.)

    A big way this might fail is if 2020 society knows, between them, everything needed to use 2020 artifacts to get more 2020 artifacts, but not how to use 1200 artifacts to get 2020 artifacts.

    On that story, the 1200 people might start out knowing methods for making c. 1200 artifacts using c. 1200 artifacts, but they accumulate between them the ideas to get them to c. 1220 artifacts with the c. 1200 artifacts, which they use to actually create those new artifacts. They pass to their children this collection of c. 1220 artifacts and the ideas needed to use those artifacts to get more c. 1220 artifacts. But the new c. 1220 artifacts and methods replaced some of the old c. 1200 artifacts and methods. So the knowledge passed on doesn’t include how to use those obsoleted artifacts to create the new artifacts, or the knowledge about how to make the obsoleted artifacts. And the artifacts passed on don’t include the obsoleted ones. If this happens every generation for a thousand years, the cultural inheritance received by the 2020 generation includes some highly improved artifacts plus the knowledge about how to use them, but not necessarily any record of the path that got there from prehistory, or of the tools that made the tools that made the tools that made these artifacts.
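    This generational story can be caricatured in a toy simulation (all of the structure here is invented for illustration, not taken from any real history): each generation invents the recipe for the next level of artifact, but passes on only the recipes that start from artifacts anyone still owns, so the chain back to prehistory is dropped.

```python
# Toy model: each generation inherits only the current best artifacts
# and the recipes that use them. Recipes for superseded artifacts,
# and for the tools that built them, are not passed on.

def simulate(generations):
    # Start in "year 1200": artifacts at level 0, plus (eventually) recipes
    # of the form "use level-n artifacts to make level-(n+1) artifacts".
    known_recipes = set()   # levels n for which the "n -> n+1" recipe is retained
    best_artifact = 0
    for _ in range(generations):
        # This generation invents the next step up from what it has...
        known_recipes.add(best_artifact)
        best_artifact += 1
        # ...and passes on only recipes that start from artifacts still
        # in circulation (the current best), discarding obsolete toolchains.
        known_recipes = {lvl for lvl in known_recipes if lvl >= best_artifact - 1}
    return best_artifact, known_recipes

best, recipes = simulate(40)
print(best)     # 40 -- artifact quality after 40 generations
print(recipes)  # {39} -- but only the most recent recipe survives
```

On this caricature, a transported 2020 society would hold only the top recipe, with no surviving record of how to climb from level 0.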

    This differs from my first impression of ‘cultural accumulation’ in that:

    1. physical artifacts are central to the process: a lot of the accumulation is happening inside them, rather than in memetic space.
    2. humanity is not accumulating all of the ideas it has come up with so far, even the important ones. It is accumulating something more like a best set of instructions for the current situation, and throwing a lot out as it goes.

    Is this how things are, or is my first impression more true?

  • Misalignment and misuse: whose values are manifest?

    AI related disasters are often categorized as involving misaligned AI, or misuse, or accident. Where:

    • misuse means the bad outcomes were wanted by the people involved,
    • misalignment means the bad outcomes were wanted by AI (and not by its human creators), and
    • accident means that the bad outcomes were not wanted by those in power but happened anyway due to error.

    In thinking about specific scenarios, these concepts seem less helpful.

    I think a likely scenario leading to bad outcomes is that AI can be made which gives a set of people things they want, at the expense of future or distant resources that the relevant people do not care about or do not own.

    For example, consider autonomous business strategizing AI systems that are profitable additions to many companies, but in the long run accrue resources and influence and really just want certain businesses to nominally succeed, resulting in a worthless future. Suppose Bob is considering whether to get a business strategizing AI for his business. It will make the difference between his business thriving and struggling, which will change his life. He suspects that within several hundred years, if this sort of thing continues, the AI systems will control everything. Bob probably doesn’t hesitate, in the way that businesses don’t hesitate to use gas vehicles even if the people involved genuinely think that climate change will be a massive catastrophe in hundreds of years.

    When the business strategizing AI systems finally plough all of the resources in the universe into a host of thriving 21st Century businesses, was this misuse or misalignment or accident? The strange new values that were satisfied were those of the AI systems, but the entire outcome only happened because people like Bob chose it knowingly (let’s say). Bob liked it more than the long glorious human future where his business was less good. That sounds like misuse. Yet also in a system of many people, letting this decision fall to Bob may well have been an accident on the part of others, such as the technology’s makers or legislators.

    Outcomes are the result of the interplay of choices, driven by different values. Thus it isn’t necessarily sensical to think of them as flowing from one entity’s values or another’s. Here, AI technology created a better option for both Bob and some newly-minted misaligned AI values that it also created—‘Bob has a great business, AI gets the future’—and that option was worse for the rest of the world. They chose it together, and the choice needed both Bob to be a misuser and the AI to be misaligned. But this isn’t a weird corner case, this is a natural way for the future to be destroyed in an economy.

    Thanks to Joe Carlsmith for conversation leading to this post.

  • Tweet markets for impersonal truth tracking?

    Should social media label statements as false, misleading or contested?

    Let’s approach it from the perspective of what would make the world best, rather than e.g. what rights the social media companies have, as owners of the platforms.

    The basic upside seems to be pragmatic: people share all kinds of false things on social media, that leads to badness, and labeling slows it down.

    The basic problem with it is that maybe we can’t distinguish worlds where social media companies label false things as false, and those where they label things they don’t like as false, or things that aren’t endorsed by other ‘official’ entities. So maybe we don’t want such companies to have the job of deciding what is considered true or false, because a) we don’t trust them enough to give them this sacred and highly pressured job forever, or b) we don’t expect everyone to trust them forever, and it would be nice to have better recourse when disagreement appears than ‘but I believe them’.

    If there were a way to systematically inhibit or label false content based on its falseness directly, rather than via a person’s judgment, that would be an interesting solution that perhaps everyone reasonable would agree to add. If prediction markets were far more ubiquitous, each contentious propositional Tweet could display under it the market odds for the claim.

    Or what if Twitter itself were a prediction market, trading in Twitter visibility? For just-posted Tweets, instead of liking them, you can bet your own cred on them. Then a while later, they are shown again and people can vote on whether they turned out right and you win or lose cred. Then your total cred determines how much visibility your own Tweets get.

    It seems like this would solve:

    • the problem for prediction markets where it is illegal to bet money and hard to be excited about fake money
    • the problem for prediction markets where it’s annoying to go somewhere to predict things when you are doing something else, like looking at Twitter
    • the problem for Twitter where it is full of fake claims
    • the problem for Twitter users where they have to listen to fake claims all the time, and worry about whether all kinds of things are true or not

    It would be pretty imperfect, since it throws the gavel to future Twitter users, but perhaps they are an improvement on the status quo, or on the status quo without the social media platforms themselves making judgments.
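    A minimal sketch of such a cred mechanism (the class, scoring rule, and all numbers here are made up for illustration, not a proposal for how Twitter actually works):

```python
# Toy "tweet market": users stake their own cred on claims; later
# resolution votes pay out or burn the stake, and accumulated cred
# scales how much visibility a user's future tweets get.

class TweetMarket:
    def __init__(self):
        self.cred = {}     # user -> current cred
        self.stakes = {}   # tweet_id -> (user, staked amount)

    def join(self, user, starting_cred=100):
        self.cred[user] = starting_cred

    def post(self, user, tweet_id, stake):
        # Posting a contentious claim means betting your own cred on it.
        assert self.cred[user] >= stake, "can't stake cred you don't have"
        self.cred[user] -= stake
        self.stakes[tweet_id] = (user, stake)

    def resolve(self, tweet_id, vote_fraction_true):
        # Later, the tweet is shown again and voters judge how it held up.
        user, stake = self.stakes.pop(tweet_id)
        if vote_fraction_true > 0.5:
            self.cred[user] += 2 * stake  # vindicated: stake returned doubled
        # otherwise the stake is simply lost

    def visibility(self, user):
        # Visibility of future tweets scales with share of total cred.
        total = sum(self.cred.values())
        return self.cred[user] / total if total else 0.0

m = TweetMarket()
m.join("alice"); m.join("bob")
m.post("alice", "t1", stake=50)
m.resolve("t1", vote_fraction_true=0.9)  # alice's claim held up
print(m.cred["alice"])        # 150
print(m.visibility("alice"))  # 0.6
```

Even this toy version shows the gavel-throwing problem: `resolve` trusts whoever shows up to vote later.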

  • Automated intelligence is not AI

    Sometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor.

    I question this equivalence, looking at past automation. In practice human cognitive labor is replaced by things that don’t seem at all cognitive, or like what we otherwise mean by AI.

    Some examples:

    1. Early in the existence of bread, it might have been toasted by someone holding it close to a fire and repeatedly observing it and recognizing its level of doneness and adjusting. Now we have machines that hold the bread exactly the right distance away from a predictable heat source for a perfect amount of time. You could say that the shape of the object embodies a lot of intelligence, or that intelligence went into creating this ideal but non-intelligent tool.
    2. Self-cleaning ovens replace humans cleaning ovens. Humans clean ovens with a lot of thought—looking at and identifying different materials and forming and following plans to remove some of them. Ovens clean themselves by getting very hot.
    3. Carving a rabbit out of chocolate takes knowledge of a rabbit’s details, along with knowledge of how to move your hands to translate such details into chocolate with a knife. A rabbit mold automates this work, and while this route may still involve intelligence in the melting and pouring of the chocolate, all rabbit knowledge is now implicit in the shape of the tool, though I think nobody would call a rabbit-shaped tin ‘artificial intelligence’.
    4. Human pouring of orange juice into glasses involves various mental skills. For instance, classifying orange juice and glasses and judging how they relate to one another in space, and moving them while keeping an eye on this. Automatic orange juice pouring involves for instance a button that can only be pressed with a glass when the glass is in a narrow range of locations, which opens an orange juice faucet running into a spot common to all the possible glass-locations.

    Some of this is that humans use intelligence where they can use some other resource, because it is cheap on the margin where the other resource is expensive. For instance, to get toast, you could just leave a lot of bread at different distances then eat the one that is good. That is bread-expensive and human-intelligence-cheap (once you come up with the plan at least). But humans had lots of intelligence and not much bread. And if later we automate a task like this, before we have computers that can act very similarly to brains, then the alternate procedure will tend to be one that replaces human thought with something that actually is cheap at the time, such as metal.

    I think a lot of this is that to deal with a given problem you can either use flexible intelligence in the moment, or you can have an inflexible system that happens to be just what you need. Often you will start out using the flexible intelligence, because being flexible it is useful for lots of things, so you have some sitting around for everything, whereas you don’t have an inflexible system that happens to be just what you need. But if a problem seems to be happening a lot, it can become worth investing the up-front cost of getting the ideal tool, to free up your flexible intelligence again.

  • Whence the symptoms of social media?

    A thing I liked about The Social Dilemma was the evocative image of oneself being in an epic contest for one’s attention with a massive and sophisticated data-nourished machine, tended by teams of manipulation experts. The hopelessness of the usual strategies—like spur-of-the-moment deciding to ‘try to use social media less’—in the face of such power seems clear.

    But another question I have is whether this basic story of our situation—that powerful forces are fluently manipulating our behavior—is true.

    Some contrary observations from my own life:

    • The phenomenon of spending way too long doing apparently pointless things on my phone seems to be at least as often caused by things that are not massively honed to manipulate me. For instance, I have recently been playing a lot of nonograms, a kind of visual logic puzzle that was invented by two people independently in the 80s and which I play in one of many somewhat awkward-to-use phone apps, I assume made by small teams mostly focused on making the app work smoothly. My sense is that if I didn’t have nonogram-style games or social media or news to scroll through, then I would still often idly pick up my phone and draw, or read books, or learn Spanish, or memorize geographic facts, or scroll through just anything on offer to scroll through (I also do these kinds of things already). So my guess is that it is my phone’s responsiveness and portability and tendency to do complicated things if you press buttons on it, that makes it a risk for time consumption. Facebook’s efforts to grab my attention probably don’t hurt, but I don’t feel like they are most of the explanation for phone-overuse in my own life.
    • Notifications seem clumsy and costly. They do grab my attention pretty straightforwardly, but this strategy appears to have about the sophistication of going up to someone and tapping them on the shoulder continually, when you have a sufficiently valuable relationship that they can’t just break it off if you annoy them too much. In that case it isn’t some genius manipulation technique, it’s just burning through the goodwill the services have gathered by being valuable in other ways. If I get unnecessary notifications, I am often annoyed and try to stop them or destroy the thing causing them.
    • I do often scroll through feeds for longer than I might have planned to, but the same goes for non-manipulatively-honed feeds. For instance when I do a Google Image search for skin infections, or open some random report and forget why I’m looking at it. So I think scrolling down things might be a pretty natural behavior for things that haven’t finished yet, and are interesting at all (but maybe not so interesting that one is, you know, awake..)1
    • A thing that feels attractive about Facebook is that one wants to look at things that other people are looking at. (Thus for instance reading books and blog posts that just came out over older, better ones.) Social media have this, but presumably not much more than newspapers did before, since a greater fraction of the world was looking at the same newspaper before.

    In sum, I offer the alternate theory that various technology companies have combined:

    • pinging people
    • about things they are at least somewhat interested in
    • that everyone is looking at
    • situated in an indefinite scroll
    • on a responsive, detailed pocket button-box

    …and that most of the attention-suck and influence that we see is about those things, not about the hidden algorithmic optimizing forces that Facebook might have.


    (Part 1 of Social Dilemma review)

    1. My boyfriend offers an alternate theory: that my scrolling instinct comes from Facebook. 

  • But what kinds of puppets are we?

    I watched The Social Dilemma last night. I took the problem that it warned of to be the following:

    1. Social media and similar online services make their money by selling your attention to advertisers
    2. These companies put vast optimization effort into manipulating you, to extract more attention
    3. This means your behavior and attention is probably very shaped by these forces (which you can perhaps confirm by noting your own readiness to scroll through stuff on your phone)

    This seems broadly plausible and bad, but I wonder if it isn’t quite that bad.

    I heard the film as suggesting that your behavior and thoughts in general are being twisted by these forces. But let’s distinguish between a system where huge resources are going into keeping you scrolling, say—at which point an advertiser will pay for their shot at persuading you—and a system where those resources are going into manipulating you directly to do the things that the advertiser would like. In the first case, maybe you look at your phone too much, but there isn’t a clear pressure on your opinions or behavior besides pro phone. In the second case, maybe you end up with whatever opinions and actions someone paid the most for (this all supposing the system works). Let’s call these distorted-looking and distorted-acting.

    While watching, I interpreted the film as suggesting the sort of broad manipulation that would come with distorted-acting, but thinking about it afterwards, isn’t the kind of optimization going on with social media actually distorted-looking? (Followed by whatever optimization the advertisers do to get you to do what they want, which I guess is of a kind with what they have always done, so at least not a new experimental horror.) I actually don’t really know. And maybe it isn’t a bright distinction.

    Maybe optimization for you clicking on ads should be a different category (i.e. ‘distorted-clicking’). This seems close to distorted-looking, in that it isn’t directly seeking to manipulate your behavior outside of your phone session, but a big step closer to distorted-acting, since you have been set off toward whatever you have ultimately been targeted to buy.

    I was at first thinking that distorted-looking was safer than distorted-acting. But distorted-looking forces probably do also distort your opinions and actions. For instance, as the film suggested, you are likely to look more if you get interested in something that there is a lot of content on, or something that upsets you and traps your attention.

    I could imagine distorted-looking actually being worse than distorted-acting: when your opinion can be bought, the change in it is presumably what someone would want. Whereas when your opinion is manipulated as a weird side effect of someone trying to get you to look more, then it could be any random thing, which might be terrible. (Or would there be such weird side effects in both cases anyway?)

  • Yet another world spirit sock puppet

    I have almost successfully made and made decent this here my new blog, in spite of little pre-existing familiarity with relevant tools beyond things like persistence in the face of adversity and Googling things. I don’t fully understand how it works, but it is a different and freer non-understanding than with Wordpress or Tumblr. This blog is more mine to have mis-built and to go back and fix. It is like not understanding why your cake is still a liquid rather than like not understanding why your printer isn’t recognized by your computer.

    My plan is to blog at worldspiritsockpuppet.com now, and cross-post to my older blogs the subset of posts that fit there.

    The main remaining thing is to add comments. If anyone has views about how those should be, er, tweet at me?

  • The bads of ads

    In London at the start of the year, perhaps there was more advertising than there usually is in my life, because I found its presence disgusting and upsetting. Could I not use public transport without having my mind intruded upon continually by trite performative questions?

    London underground

    Sometimes I fantasize about a future where stealing someone’s attention to suggest for the fourteenth time that they watch your awful-looking play is rightly looked upon as akin to picking their pocket.

    Stepping back, advertising is widely found to be a distasteful activity. But I think it is helpful to distinguish the different unpleasant flavors potentially involved (and often not involved—there is good advertising):

    1. Mind manipulation: Advertising is famous for uncooperatively manipulating people’s beliefs and values in whatever way makes them more likely to pay money somehow. For instance, deceptively encouraging the belief that everyone uses a certain product, or trying to spark unwanted wants.

      Painting an ad

    2. Zero-sumness: To the extent advertising is aimed at raising the name recognition and thus market share of one product over its similar rivals, it is zero or negative sum: burning effort on both sides and the attention of the customer for no overall value.

    3. Theft of a precious thing: Attention is arguably one of the best things you have, and its protection arguably worthy of great effort. In cases where it is vulnerable—for instance because you are outside and so do not personally control everything you might look at or hear—advertising is the shameless snatching of it. This might be naively done, in the same way that a person may naively steal silverware assuming that it is theirs to take because nothing is stopping them.

      London underground

    4. Cultural poison: Culture and the common consciousness are an organic dance of the multitude of voices and experiences in society. In the name of advertising, huge amounts of effort and money flow into amplifying fake voices, designed to warp perceptions—and therefore the shared world—to ready them for exploitation. Advertising can be a large fraction of the voices a person hears. It can draw social creatures into its thin world. And in this way, it goes beyond manipulating the minds of those who listen to it. Through those minds it can warp the whole shared world, even for those who don’t listen firsthand. Advertising shifts your conception of what you can do, and what other people are doing, and what you should pay attention to. It presents role models, designed entirely for someone else’s profit. It saturates the central gathering places with inanity, as long as that might sell something.

      Outdoor ads over darkened figures

    5. Market failure: Ideally, whoever my attention is worth most to would get it, regardless of whether it was initially stolen. For instance, if I have better uses for my attention than advertising, hopefully I will pay more to have it back than the advertiser expects to make by advertising to me. So we will be able to make a trade, and I’ll get my attention back. In practice this is probably too complicated, since so many tiny transactions are needed. E.g. the best message for me to see, if I have to see a message, when sitting on a train, is probably something fairly different from what I do see. It is also probably worth me paying a small sum to each person who would advertise at me to just see a blank wall instead. But it is hard for them to collect that money from each person. And in cases where the advertiser was just a random attention thief and didn’t have some special right to my attention, if I were to pay one to leave me alone, another one might immediately replace them.1

      Underground ads over crowd

    6. Ugliness: At the object level, advertising is often clearly detracting from the beauty of a place.

      Ads overwhelming buildings

    These aren’t necessarily distinct—to the extent ugliness is bad, say, one might expect that it is related to some market failure. But they are different reasons for disliking a thing—a person can hate something ugly while having no strong view on the perfection of ideal markets.
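    The market-failure point can be made concrete with a toy Coasean calculation (every number here is invented): even when the viewer values their attention more than the advertiser does, per-transaction overhead can swamp each individual trade, so no attention ever gets bought back.

```python
# Toy numbers, in cents: what seeing one ad is worth to each party,
# and the overhead of arranging one tiny buy-back transaction.
value_to_me = 10          # what I'd pay not to see one ad
value_to_advertiser = 5   # what showing it to me earns them
transaction_cost = 20     # cost of negotiating each micro-trade

ads_per_day = 200
surplus_per_ad = value_to_me - value_to_advertiser  # 5 cents of potential gain

# In aggregate the trade is clearly worth making...
print(ads_per_day * surplus_per_ad)  # 1000 cents/day of unrealized gains

# ...but each individual trade costs more to arrange than it creates,
# so none of them happen and the attention stays stolen.
print(surplus_per_ad > transaction_cost)  # False
```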

    What would good and ethical advertising look like? Maybe I decide that I want to be advertised to now, and go to my preferred advertising venue. I see a series of beautiful messages about things that are actively helpful for me to know. I can downvote ads if I don’t like the picture of the world that they are feeding into my brain, or the apparent uncooperativeness of their message. I leave advertising time feeling inspired and happy.

    Ads: we are building a new story


    Images: London Underground: Mona Eendra, painting ads: Megan Markham, Nescafe ad: Ketut Subiyanto, Coca-Cola: Hamish Weir, London Underground again: Willam Santos, figures in shade under ad: David Geib, Clear ad in train: Life of Wu, Piccadilly Circus: Negative Space, Building a new story: Wilhelm Gunkel.

    1. For advertising in specific public locations, I could in principle pay by buying up the billboard or whatever and leaving it blank. 
