It’s not always better to be more capable. As I mentioned yesterday, it can (famously) be helpful in negotiations to have your hands tied. That is, to be disempowered from giving up everything the other party wants.

I had previously thought of this as a somewhat rare corner case of human behavior—I for one don’t haggle very often—but I now think negotiations with this element are quite common: yesterday I described it in friendly (and honest) negotiations about how to spend time, for instance. And I see a related thing in the practice of dietary commitments.

But is being less capable helpful outside of negotiating? And is this going to become relevant to AI?

Yes and yes!

Commitments: more good things come to those who can commit (e.g. rides out of deserts, secrets, trust, love). ‘Committing’ generally involves cutting off certain options for yourself, whether in practical terms or via being the kind of honorable person who can’t bear to do a thing they promised not to do. These are both kinds of limitations. If you were a more powerful creature—one fully capable of breaking down any barrier, and fully capable of breaking a promise; a creature to whom all options were always open—then commitments would be less available to you.

Transparency: a big way humans know what is going on inside other humans, well enough to trust them, is that there is a connection between what is happening inside them and what is happening on their faces and in their bodies, and they usually can’t control this very well. People who can break this connection and control their external behavior independently tend to be feared and distrusted. It is valuable to be unable to stop these signals escaping.

Consistency: a big way we predict how a specific human will behave in the future is that each human has specific kinds of behavior that come easily to them, and it is hard for them to behave entirely differently. So if you are friends with someone who you have observed being attentive and kind to other people for five years, it is very likely that they will continue behaving that way going forward. Whereas a creature with more freedom of behavior could wholly inhabit that persona for five years, then change to a different one.

Relatedly, we know a lot about what to expect from a human stranger because of our prior knowledge of humans. If humans had the power to rewrite their internal dynamics and become totally different creatures, then we would know much less about what to expect from one.

Scope of risk: people are safer to interact with if you know they are limited in their ability to cause destruction. You might prefer to hire a person who you think would be less able to wrest control of your organization if they wanted to. You might prefer to babysit a child who does not know how to pick locks or set fires. So a person might be more employable, or be taken care of by better babysitters, if they are less capable. Similarly, an extremely capable AI system might be a less desirable accountant than a human, if only the human can be fully trusted not to be up to the task of hacking your accounts.

These are all to do with interacting with other creatures. For a creature alone in the universe, I don’t know of any situation where they are better off being less capable. But when you need to trust another creature, it is better to know more about them, and better to know they are cut off from options that might harm you.

In the usual picture of AI progress, AI is worse than humans at various tasks, and we are waiting for it to surpass us everywhere, at which point humans will be obsolete as labor. But in a world where AI needs to interact with other agents (humans or AIs), the aforementioned value of being less capable complicates things: perhaps there are skills where AI is already more capable than humans, but where that capability is a liability. For instance: lying smoothly and otherwise generating outward behavior that doesn’t reveal internal dynamics, switching between entirely different personas, and hacking. Given that, what does the trajectory look like?