The ancient Chinese board game Go is the ultimate strategy game. It has only a few simple rules, yet those rules create the most complex strategy game of all time, with more possible games than there are atoms in the observable universe. In 2016, humans lost their status as the best Go players, and Go joined chess and Jeopardy! among the games conquered by artificial intelligence. AlphaGo, the program that beat the world champion, didn't just win; the strategies it used overturned centuries of collective wisdom about Go. Going further, another program called AlphaGo Zero learned by doing nothing but playing against itself over the course of three days, then beat the original AlphaGo 100-0. This is the power of artificial intelligence, and the number of domains in which it is becoming superhuman keeps growing.
Artificial intelligence is being applied in areas outside of board games, areas where mistakes could result in disaster. It can be applied to weapons like drones and tanks that act without any human input, devise strategies, network together, and operate at superhuman speed. Smart viruses are also a major concern, especially given how poor the state of cybersecurity is. There are other concerns too, but the biggest and potentially most dangerous is artificial superintelligence: a hypothetical AI that could outsmart humans in any domain of intelligence, including social and emotional intelligence. We have to worry not only about these systems being used maliciously, but also about the unintended consequences of their use. These programs do what they're programmed to do, not what they're intended to do, and the smarter they are, the greater the consequences. Imagine driving a car at 40 mph down the road. If the steering wheel is slightly off, it's no big deal; you can adjust. If you're going 40,000 mph, a slight misalignment sends you smashing into the side of the road. This is the nature of powerful, value-misaligned intelligence, and the brewing arms race in AI could make issues like this worse. That's a lot to deal with.
So how do we stop humanity from crashing into the side of the road? That's a difficult question, but regardless, there needs to be some sort of governance on an international scale to handle these issues. I looked at a few factors to try to determine what the best governance system might look like. First, instead of letting the car smash into the side of the road and then trying to pick up the pieces, we should avoid that situation to begin with. So this governance system should be anticipatory, working with different stakeholders on capacity building and foresight around these issues. Political factors also need to be taken into consideration so that we can understand what kinds of issues might prevent cooperation. Then, by comparing the strengths, weaknesses, and issues of previous governance frameworks, we can try to determine which is best.
Because AI is such a new technology and there hasn't been much international discussion of the topic, I decided to look at cybersecurity as a proxy field to understand what political issues there might be. I looked at the United Nations' 2016/2017 Group of Governmental Experts meeting on the state use of information and communication technologies (ICTs), which focused on norm building and on interpreting international law in the context of ICTs. To say the least, the meeting failed to reach a conclusion because of conflicting interpretations of international law between Western countries and more authoritarian countries like Russia and China. One of the key issues is what defines sovereignty. Since only the sovereign can define sovereignty, countries like Russia and China view the management of all information going into and out of their borders as key to maintaining their regimes, which conflicts with the West's view of freedom of expression as a human right. This signifies not only that there are fundamental differences in interests and values that could prevent cooperation on AI, but also that the vague and conflicting nature of international law may keep it from being an effective means of creating cooperation.
So, with all this in mind, I looked at four different governance frameworks: soft-law, multi-stakeholder governance, multilateralism, and polycentric governance. Soft-law focuses on the creation of norms and non-binding agreements. Multi-stakeholder governance brings together relevant stakeholders, including corporations and civil society groups, so that everyone can voice their opinions and collectively make and enforce decisions. Multilateralism involves only nation-states creating agreements, as in the United Nations. Finally, polycentric governance consists of multiple self-governing groups with multiple centers of decision-making that are informally networked with one another and pursue problems as they see fit. These organizations are autonomous but are also nested together, with overlapping jurisdictions and different levels of authority, such as states, regional jurisdictions, and international bodies of governance.
Ultimately, after comparing the strengths and weaknesses of these governance frameworks, I concluded that a multilateral version of a specific type of soft-law regime called a 'Governance Coordination Committee' (GCC), one that helps direct actors within a larger polycentric community, might be the best way forward. This GCC would act as an issue conductor, helping to manage the community of people working to reduce risk as well as other stakeholders in industry and government. It would also help find areas of cooperation, sponsor research, guide technological development, and drive norms. While this isn't a final solution to the problem, it can help create the conditions for further agreements in the future. It's time to start governing artificial intelligence, and this GCC-polycentric approach is something that can be implemented now.