An easy coordination problem?
Common wisdom says that it is incredibly hard to coordinate on not building more dangerous AI. This sounds believable in the abstract: international geopolitics, arms races, game theory, something something.
But pragmatically, what exactly is the difficulty?
I agree there would seem to be obstacles for the average person. But four of the people apparently succumbing to the overpowering arms-race forces, while saying AI poses a huge imminent risk to humanity, are Sam Altman, Elon Musk, Demis Hassabis, and Dario Amodei. Shouldn't this be fairly tractable for them? What exactly is the difficulty?
Like, if they discussed together and decided they wanted to mutually pause, do you think that wouldn’t happen? Do you think they couldn’t get cooperation from other necessary people? Do you think they couldn’t figure out the verification and policing details?
It’s true that one of the necessary people is the leader of China, but what exactly is the problem there? None of the CEOs have his phone number? He won’t talk to them? He is beyond reason or incentives? He is intent on building AI regardless of how dangerous it is to his own country because he is fundamentally bad? They have nothing he wants?
Like, these people are not only incredibly powerful and wealthy and smart, but they also include a Diplomacy world team champion, the acknowledged king of making complex things happen more efficiently than was believed possible, and one of the most gifted social maneuverers in the world. I don't feel like they are bringing their A game to this.
Picture: Zhongnanhai, photo by 維基小霸王 (Wiki Little Overlord)