  • Misalignment and misuse: whose values are manifest?

AI-related disasters are often categorized as involving misaligned AI, or misuse, or accident, where:

    • misuse means the bad outcomes were wanted by the people involved,
    • misalignment means the bad outcomes were wanted by AI (and not by its human creators), and
• accident means the bad outcomes were not wanted by those in power but happened anyway due to error.

    In thinking about specific scenarios, these concepts seem less helpful.

    I think a likely scenario leading to bad outcomes is that AI can be made which gives a set of people things they want, at the expense of future or distant resources that the relevant people do not care about or do not own.

    For example, consider autonomous business strategizing AI systems that are profitable additions to many companies, but in the long run accrue resources and influence and really just want certain businesses to nominally succeed, resulting in a worthless future. Suppose Bob is considering whether to get a business strategizing AI for his business. It will make the difference between his business thriving and struggling, which will change his life. He suspects that within several hundred years, if this sort of thing continues, the AI systems will control everything. Bob probably doesn’t hesitate, in the way that businesses don’t hesitate to use gas vehicles even if the people involved genuinely think that climate change will be a massive catastrophe in hundreds of years.

    When the business strategizing AI systems finally plough all of the resources in the universe into a host of thriving 21st Century businesses, was this misuse or misalignment or accident? The strange new values that were satisfied were those of the AI systems, but the entire outcome only happened because people like Bob chose it knowingly (let’s say). Bob liked it more than the long glorious human future where his business was less good. That sounds like misuse. Yet also in a system of many people, letting this decision fall to Bob may well have been an accident on the part of others, such as the technology’s makers or legislators.

Outcomes are the result of the interplay of choices, driven by different values. Thus it doesn’t necessarily make sense to think of them as flowing from one entity’s values or another’s. Here, AI technology created a better option for both Bob and for the newly-minted misaligned AI values it also created—‘Bob has a great business, AI gets the future’—and that option was worse for the rest of the world. They chose it together, and the choice needed both Bob to be a misuser and the AI to be misaligned. But this isn’t a weird corner case; this is a natural way for the future to be destroyed in an economy.

    Thanks to Joe Carlsmith for conversation leading to this post.

  • Tweet markets for impersonal truth tracking?

    Should social media label statements as false, misleading or contested?

Let’s approach it from the perspective of what would make the world best, rather than, say, what rights the social media companies have as owners of the platforms.

The basic upside seems to be pragmatic: people share all kinds of false things on social media, which leads to badness, and labeling slows that down.

The basic problem with it is that maybe we can’t distinguish worlds where social media companies label false things as false from worlds where they label as false things they don’t like, or things that aren’t endorsed by other ‘official’ entities. So maybe we don’t want such companies to have the job of deciding what is considered true or false, because a) we don’t trust them enough to give them this sacred and highly pressured job forever, or b) we don’t expect everyone to trust them forever, and it would be nice to have better recourse when disagreement appears than ‘but I believe them’.

If there were a way to systematically inhibit or label false content based on its falseness directly, rather than via a person’s judgment, that would be an interesting solution that perhaps everyone reasonable would agree to add. If prediction markets were far more ubiquitous, each contentious propositional Tweet could display under it the market odds for the claim.

    Or what if Twitter itself were a prediction market, trading in Twitter visibility? For just-posted Tweets, instead of liking them, you can bet your own cred on them. Then a while later, they are shown again and people can vote on whether they turned out right and you win or lose cred. Then your total cred determines how much visibility your own Tweets get.
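
    For concreteness, here is a minimal sketch of how those mechanics might work. Everything in it (the class names, the winners-double-their-stake payout rule, the majority-vote resolution, the visibility formula) is invented for illustration; it is not a claim about how Twitter works or a worked-out market design.

    ```python
    # Hypothetical cred-betting mechanics for tweets, as sketched above.
    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        cred: float = 100.0  # starting reputation stake

    @dataclass
    class Tweet:
        author: str
        claim: str
        bets: dict = field(default_factory=dict)  # bettor name -> (stake, predicts_true)

    def place_bet(tweet, bettor, stake, predicts_true):
        """Instead of liking a fresh tweet, stake some of your own cred on it."""
        stake = min(stake, bettor.cred)
        bettor.cred -= stake
        tweet.bets[bettor.name] = (stake, predicts_true)

    def resolve(tweet, votes, users):
        """Later the tweet is shown again and voters judge whether it held up.
        Correct bettors get double their stake back; incorrect ones lose it."""
        outcome = sum(votes) > len(votes) / 2  # simple majority of later voters
        for name, (stake, predicted) in tweet.bets.items():
            if predicted == outcome:
                users[name].cred += 2 * stake

    def visibility(user, baseline=1.0):
        """Total cred scales how widely a user's own tweets are shown."""
        return baseline * user.cred / 100.0

    # e.g. bob stakes 10 cred (100 -> 90); the claim resolves true (-> 110).
    bob = User("bob")
    t = Tweet(author="alice", claim="This policy will pass by March.")
    place_bet(t, bob, stake=10, predicts_true=True)
    resolve(t, votes=[True, True, False], users={"bob": bob})
    assert bob.cred == 110
    ```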

It seems like this mechanism would solve:

    • the problem for prediction markets where it is illegal to bet money and hard to be excited about fake money
    • the problem for prediction markets where it’s annoying to go somewhere to predict things when you are doing something else, like looking at Twitter
    • the problem for Twitter where it is full of fake claims
    • the problem for Twitter users where they have to listen to fake claims all the time, and worry about whether all kinds of things are true or not

    It would be pretty imperfect, since it throws the gavel to future Twitter users, but perhaps they are an improvement on the status quo, or on the status quo without the social media platforms themselves making judgments.

  • Automated intelligence is not AI

    Sometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor.

I question this equivalence, looking at past automation. In practice, human cognitive labor is often replaced by things that don’t seem at all cognitive, or anything like what we otherwise mean by AI.

    Some examples:

1. Early in the existence of bread, it might have been toasted by someone holding it close to a fire, repeatedly observing it, recognizing its level of doneness, and adjusting. Now we have machines that hold the bread exactly the right distance away from a predictable heat source for a perfect amount of time. You could say that the shape of the object embodies a lot of intelligence, or that intelligence went into creating this ideal but non-intelligent tool.
    2. Self-cleaning ovens replace humans cleaning ovens. Humans clean ovens with a lot of thought—looking at and identifying different materials and forming and following plans to remove some of them. Ovens clean themselves by getting very hot.
    3. Carving a rabbit out of chocolate takes knowledge of a rabbit’s details, along with knowledge of how to move your hands to translate such details into chocolate with a knife. A rabbit mold automates this work, and while this route may still involve intelligence in the melting and pouring of the chocolate, all rabbit knowledge is now implicit in the shape of the tool, though I think nobody would call a rabbit-shaped tin ‘artificial intelligence’.
4. Human pouring of orange juice into glasses involves various mental skills: for instance, classifying orange juice and glasses, judging how they relate to one another in space, and moving them while keeping an eye on all this. Automatic orange juice pouring involves, for instance, a button that can only be pressed with a glass when the glass is in a narrow range of locations, which opens an orange juice faucet running into a spot common to all the possible glass locations.

Some of this is that humans use intelligence where they could use some other resource, because intelligence is cheap on the margin where the other resource is expensive. For instance, to get toast, you could just leave a lot of bread at different distances from the fire, then eat the one that comes out good. That is bread-expensive and human-intelligence-cheap (once you come up with the plan, at least). But humans had lots of intelligence and not much bread. And if we later automate a task like this, before we have computers that can act very similarly to brains, then the alternate procedure will tend to be one that replaces human thought with something that actually is cheap at the time, such as metal.

I think a lot of this is that to deal with a given problem you can either use flexible intelligence in the moment, or you can have an inflexible system that happens to be just what you need. Often you will start out using the flexible intelligence, because, being flexible, it is useful for lots of things, so you have some sitting around for everything, whereas you don’t usually have an inflexible system that happens to be just what you need. But if a problem seems to be happening a lot, it can become worth paying the up-front cost of building the ideal tool, to free up your flexible intelligence again.
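
    To make that tradeoff concrete, here is a toy sketch in code (the ‘physics’ and all numbers are made up): flexible intelligence is a feedback loop that senses doneness at every step, while the inflexible tool is a single constant, discovered once at design time.

    ```python
    # Toy model: flexible in-the-moment sensing vs. a constant baked into a tool.
    class Bread:
        def __init__(self):
            self.doneness = 0.0

    def heat(bread, seconds):
        bread.doneness += 0.02 * seconds  # pretend physics: steady heat source

    def toast_with_feedback(bread):
        """Flexible: observe and adjust each step, like a person at a fire."""
        while bread.doneness < 1.0:  # the 'observing' costs attention every loop
            heat(bread, seconds=5)
        return bread

    def design_fixed_toaster():
        """One-off design cost: find the right fixed time on a sample loaf."""
        sample, t = Bread(), 0
        while sample.doneness < 1.0:
            heat(sample, seconds=5)
            t += 5
        return t  # the 'intelligence' now lives in this single number

    def toast_with_timer(bread, seconds):
        """Inflexible: no sensing at all, just the setting baked into the tool."""
        heat(bread, seconds=seconds)
        return bread

    TOAST_TIME = design_fixed_toaster()    # pay the up-front cost once
    toast_with_timer(Bread(), TOAST_TIME)  # afterwards, toasting needs no attention
    ```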

  • Whence the symptoms of social media?

    A thing I liked about The Social Dilemma was the evocative image of oneself being in an epic contest for one’s attention with a massive and sophisticated data-nourished machine, tended by teams of manipulation experts. The hopelessness of the usual strategies—like spur-of-the-moment deciding to ‘try to use social media less’—in the face of such power seems clear.

    But another question I have is whether this basic story of our situation—that powerful forces are fluently manipulating our behavior—is true.

    Some contrary observations from my own life:

• The phenomenon of spending way too long doing apparently pointless things on my phone seems to be at least as often caused by things that are not massively honed to manipulate me. For instance, lately I play a lot of nonograms, a kind of visual logic puzzle that was invented by two people independently in the 80s, and which I play in one of many somewhat awkward-to-use phone apps, I assume made by small teams mostly focused on making the app work smoothly. My sense is that if I didn’t have nonogram-style games or social media or news to scroll through, then I would still often idly pick up my phone and draw, or read books, or learn Spanish, or memorize geographic facts, or scroll through just about anything on offer (I also do these kinds of things already). So my guess is that it is my phone’s responsiveness and portability and tendency to do complicated things if you press buttons on it that make it a risk for time consumption. Facebook’s efforts to grab my attention probably don’t hurt, but I don’t feel like they are most of the explanation for phone overuse in my own life.
• Notifications seem clumsy and costly. They do grab my attention pretty straightforwardly, but this strategy appears to have about the sophistication of going up to someone and tapping them on the shoulder continually, when you have a sufficiently valuable relationship that they can’t just break it off if you annoy them too much. That isn’t some genius manipulation technique; it’s just burning through the goodwill the services have gathered by being valuable in other ways. If I get unnecessary notifications, I am often annoyed and try to stop them or destroy the thing causing them.
• I do often scroll through feeds for longer than I might have planned to, but the same goes for non-manipulatively-honed feeds: for instance, when I do a Google Image search for skin infections, or open some random report and forget why I’m looking at it. So I think scrolling down things might be a pretty natural behavior for anything that hasn’t finished yet and is interesting at all (but maybe not so interesting that one is, you know, awake…).1
• A thing that feels attractive about Facebook is that one wants to look at things that other people are looking at. (Thus, for instance, reading books and blog posts that just came out over older, better ones.) Social media have this, but presumably not much more than newspapers did, since a greater fraction of the world was looking at the same newspaper back then.

    In sum, I offer the alternate theory that various technology companies have combined:

    • pinging people
    • about things they are at least somewhat interested in
    • that everyone is looking at
    • situated in an indefinite scroll
    • on a responsive, detailed pocket button-box

    …and that most of the attention-suck and influence that we see is about those things, not about the hidden algorithmic optimizing forces that Facebook might have.


    (Part 1 of Social Dilemma review)

1. My boyfriend offers an alternate theory: that my scrolling instinct comes from Facebook. 

  • But what kinds of puppets are we?

    I watched The Social Dilemma last night. I took the problem that it warned of to be the following:

    1. Social media and similar online services make their money by selling your attention to advertisers
    2. These companies put vast optimization effort into manipulating you, to extract more attention
3. This means your behavior and attention are probably very shaped by these forces (which you can perhaps confirm by noting your own readiness to scroll through stuff on your phone)

    This seems broadly plausible and bad, but I wonder if it isn’t quite that bad.

I heard the film as suggesting that your behavior and thoughts in general are being twisted by these forces. But let’s distinguish between a system where huge resources go into, say, keeping you scrolling—at which point an advertiser will pay for their shot at persuading you—and a system where those resources go into manipulating you directly to do the things the advertiser would like. In the first case, maybe you look at your phone too much, but there isn’t a clear pressure on your opinions or behavior beyond being pro-phone. In the second case, maybe you end up with whatever opinions and actions someone paid the most for (this all supposing the system works). Let’s call these distorted-looking and distorted-acting.

While watching, I interpreted the film as suggesting the sort of broad manipulation that would come with distorted-acting, but thinking about it afterwards, isn’t the kind of optimization going on with social media actually distorted-looking? (Followed by whatever optimization the advertisers do to get you to do what they want, which I guess is of a kind with what they have always done, so at least not a new experimental horror.) I actually don’t really know. And maybe it isn’t a bright distinction.

    Maybe optimization for you clicking on ads should be a different category (i.e. ‘distorted-clicking’). This seems close to distorted-looking, in that it isn’t directly seeking to manipulate your behavior outside of your phone session, but a big step closer to distorted-acting, since you have been set off toward whatever you have ultimately been targeted to buy.

    I was at first thinking that distorted-looking was safer than distorted-acting. But distorted-looking forces probably do also distort your opinions and actions. For instance, as the film suggested, you are likely to look more if you get interested in something that there is a lot of content on, or something that upsets you and traps your attention.

I could imagine distorted-looking actually being worse than distorted-acting: when your opinion can be bought, the change in it is presumably what someone would want. Whereas when your opinion is manipulated as a weird side effect of someone trying to get you to look more, then it could be any random thing, which might be terrible. (Or would there be such weird side effects in both cases anyway?)

  • Yet another world spirit sock puppet

    I have almost successfully made and made decent this here my new blog, in spite of little pre-existing familiarity with relevant tools beyond things like persistence in the face of adversity and Googling things. I don’t fully understand how it works, but it is a different and freer non-understanding than with Wordpress or Tumblr. This blog is more mine to have mis-built and to go back and fix. It is like not understanding why your cake is still a liquid rather than like not understanding why your printer isn’t recognized by your computer.

    My plan is to blog at worldspiritsockpuppet.com now, and cross-post to my older blogs the subset of posts that fit there.

    The main remaining thing is to add comments. If anyone has views about how those should be, er, tweet at me?

  • The bads of ads

    In London at the start of the year, perhaps there was more advertising than there usually is in my life, because I found its presence disgusting and upsetting. Could I not use public transport without having my mind intruded upon continually by trite performative questions?

[Image: London Underground]

    Sometimes I fantasize about a future where stealing someone’s attention to suggest for the fourteenth time that they watch your awful-looking play is rightly looked upon as akin to picking their pocket.

    Stepping back, advertising is widely found to be a distasteful activity. But I think it is helpful to distinguish the different unpleasant flavors potentially involved (and often not involved—there is good advertising):

    1. Mind manipulation: Advertising is famous for uncooperatively manipulating people’s beliefs and values in whatever way makes them more likely to pay money somehow. For instance, deceptively encouraging the belief that everyone uses a certain product, or trying to spark unwanted wants.

[Image: Painting an ad]

    2. Zero-sumness: To the extent advertising is aimed at raising the name recognition and thus market share of one product over its similar rivals, it is zero or negative sum: burning effort on both sides and the attention of the customer for no overall value.

    3. Theft of a precious thing: Attention is arguably one of the best things you have, and its protection arguably worthy of great effort. In cases where it is vulnerable—for instance because you are outside and so do not personally control everything you might look at or hear—advertising is the shameless snatching of it. This might be naively done, in the same way that a person may naively steal silverware assuming that it is theirs to take because nothing is stopping them.

[Image: London Underground]

4. Cultural poison: Culture and the common consciousness are an organic dance of the multitude of voices and experiences in society. In the name of advertising, huge amounts of effort and money flow into amplifying fake voices, designed to warp perceptions—and therefore the shared world—to ready them for exploitation. Advertising can be a large fraction of the voices a person hears. It can draw social creatures into its thin world. And in this way, it goes beyond manipulating the minds of those who listen to it. Through those minds it can warp the whole shared world, even for those who don’t listen firsthand. Advertising shifts your conception of what you can do, and what other people are doing, and what you should pay attention to. It presents role models, designed entirely for someone else’s profit. It saturates the central gathering places with inanity, as long as that might sell something.

[Image: Outdoor ads over darkened figures]

5. Market failure: Ideally, whoever my attention is worth most to would get it, regardless of whether it was initially stolen. For instance, if I have better uses for my attention than advertising, hopefully I will pay more to have it back than the advertiser expects to make by advertising to me. So we will be able to make a trade, and I’ll get my attention back. In practice this is probably too complicated, since so many tiny transactions are needed. E.g. the best message for me to see, if I have to see a message, when sitting on a train, is probably something fairly different from what I do see. It is also probably worth me paying a small sum to each person who would advertise at me to just see a blank wall instead. But it is hard for them to collect that money from each person. And in cases where the advertiser was just a random attention thief and didn’t have some special right to my attention, if I were to pay one to leave me alone, another one might immediately replace them.1 (A toy numeric version of this point is sketched below.)

[Image: Underground ads over crowd]

    6. Ugliness: At the object level, advertising is often clearly detracting from the beauty of a place.

[Image: Ads overwhelming buildings]

These aren’t necessarily distinct—to the extent ugliness is bad, say, one might expect that it is related to some market failure. But they are different reasons for disliking a thing—a person can hate something ugly while having no strong view on the perfection of ideal markets.
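
    To make the market-failure point (number 5) concrete, here is a toy calculation. The dollar figures are invented, and only their relative sizes matter.

    ```python
    # Invented numbers, purely to illustrate the transaction-cost problem.
    value_to_advertiser = 0.05  # what showing me one ad earns the advertiser ($)
    value_to_me = 0.10          # what I'd pay to see a blank wall instead ($)
    transaction_cost = 0.50     # overhead of arranging one tiny trade ($)

    # In principle the trade is worth making: my attention is worth more to me.
    surplus = value_to_me - value_to_advertiser
    assert surplus > 0

    # In practice the overhead swamps the five-cent surplus, so no trade happens,
    # and my attention stays with whoever grabbed it first.
    print(surplus > transaction_cost)  # False: the market fails
    ```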

    What would good and ethical advertising look like? Maybe I decide that I want to be advertised to now, and go to my preferred advertising venue. I see a series of beautiful messages about things that are actively helpful for me to know. I can downvote ads if I don’t like the picture of the world that they are feeding into my brain, or the apparent uncooperativeness of their message. I leave advertising time feeling inspired and happy.

[Image: Ads: we are building a new story]


    Images: London Underground: Mona Eendra, painting ads: Megan Markham, Nescafe ad: Ketut Subiyanto, Coca-Cola: Hamish Weir, London Underground again: Willam Santos, figures in shade under ad: David Geib, Clear ad in train: Life of Wu, Piccadilly Circus: Negative Space, Building a new story: Wilhelm Gunkel.

    1. For advertising in specific public locations, I could in principle pay by buying up the billboard or whatever and leaving it blank. 
