Due to the potential global harms developing AI can cause, it would be reasonable to assume that government actors would try to impose safety measures and regulations on actors developing AI, and perhaps even coordinate on an international scale to ensure that all actors developing AI cooperate under an AI Coordination Regime[35] that sets, monitors, and enforces standards to maximize safety. As discussed, there are both great benefits and harms to developing AI, and due to the relevance AI development has to national security, it is likely that governments will take over this development (specifically the US and China). Scholars of civil war have argued, for example, that peacekeepers can preserve lasting cease-fires by enabling warring parties to cooperate with the knowledge that their security will be guaranteed by a third party. This iterated structure creates an incentive to cooperate; cheating in the first round significantly reduces the likelihood that the other player will trust one enough to attempt to cooperate in the future. His argument is: "The information that such an agreement conveys is not that the players will keep it (since it is not binding), but that each wants the other to keep it." For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. If the regime allows for multilateral development, for example, the actors might agree that whoever reaches AI first receives 60% of the benefit, while the other actor receives 40% of the benefit. [5] They can, for example, work together to improve good corporate governance. Furthermore, a unilateral strategy could be employed under a Prisoner's Dilemma in order to effect cooperation.
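The iterated incentive described here can be made concrete with a small simulation. This is a minimal sketch under assumed values: the 3/5/1/0 payoffs and the "grim trigger" strategy are illustrative additions, not taken from the text.

```python
# Iterated Prisoner's Dilemma sketch: a "grim trigger" player cooperates until
# it first observes a defection, then defects forever. Payoff values below are
# illustrative assumptions, not the paper's.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return each player's total payoff."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's past moves
        move_b = strategy_b(history_a)
        total_a += PAYOFFS[(move_a, move_b)]
        total_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

def grim_trigger(opponent_history):
    return "D" if "D" in opponent_history else "C"

def always_defect(opponent_history):
    return "D"

# Mutual trust sustains cooperation; one early defection forfeits it:
# the cheater wins round 1, then is locked into mutual defection.
print(play(grim_trigger, grim_trigger))
print(play(always_defect, grim_trigger))
```

Over ten rounds, sustained mutual cooperation outearns the short-lived gain from first-round cheating, which is the intuition behind the sentence above.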
At the same time, a growing literature has illuminated the risk that developing AI has of leading to global catastrophe[4] and further pointed out the effect that racing dynamics have on exacerbating this risk. This section defines suggested payoff variables that impact the theory and simulates the theory for each representative model based on a series of hypothetical scenarios. Table 5. Under this principle, parties to an armed conflict must always distinguish between civilians and civilian objects on the one hand, and combatants and military targets on the other. This is taken to be an important analogy for social cooperation. The remainder of this section looks at these payoffs and the variables that determine them in more detail.[53] In this book, you will find an introduction to realism, liberalism, and economic structuralism as major traditions in the field, their historical evolution, and some theories they have given birth to. Using their intuition, the remainder of this paper looks at strategy and policy considerations relevant to some game models in the context of the AI Coordination Problem. In international relations, countries are the participants in the stag hunt. Weiss and Agassi wrote about this argument: "This we deem somewhat incorrect since it is an oversight of the agreement that may change the mutual expectations of players ... the result of the game depends on Aumann's assertion that there is no a priori reason to expect agreement to lead to cooperation requires completion; at times, but only at times, there is a posteriori reason for that ... How a given player will behave in a given game, thus, depends on the culture within which the game takes place."[8] For example, Jervis highlights the distinguishability of offensive-defensive postures as a factor in stability. This is visually represented in Table 4, with each actor's preference order explicitly outlined.
Despite this, there still might be cases where the expected benefits of pursuing AI development alone outweigh (in the perception of the actor) the potential harms that might arise. The primary difference between the Prisoner's Dilemma and Chicken, however, is that in Chicken both actors failing to cooperate is the least desired outcome of the game. But, at various critical junctures, including the country's highly contentious presidential elections in 2009 and 2014, rivals have ultimately opted to stick with the state rather than contest it. See Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, & Owain Evans, When Will AI Exceed Human Performance?
Rabbits come in the form of different opportunities for short-term gain by way of graft, electoral fraud, and the threat or use of force. Intriligator and Brito[38] argue that qualitative/technological races can lead to greater instability than quantitative races. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria:[2] one where both players cooperate, and one where both players defect. Payoff variables for simulated Chicken game. At the same time, there are great harms and challenges that arise from AI's rapid development. In this game, "each player always prefers the other to play c, no matter what he himself plays." This table contains a sample ordinal representation of a payoff matrix for a Stag Hunt game. Additionally, both actors perceive the potential returns to developing AI to be greater than the potential harms. A major terrorist attack launched from Afghanistan would represent a kind of equal-opportunity disaster and should make a commitment to establishing and preserving a capable state of ultimate value to all involved. To reiterate, the primary function of this theory is to lay out a structure for identifying what game models best represent the AI Coordination Problem and, as a result, what strategies should be applied to encourage coordination and stability. But, after nearly two decades of participation in the country's fledgling democratic politics, economic reconstruction, and security-sector development, many of these strongmen have grown invested in the Afghan state's survival and the dividends that they hope will come with greater peace and stability. An hour goes by, with no sign of the stag.
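The claim about two pure-strategy Nash equilibria can be checked mechanically. This is a minimal sketch using an assumed illustrative Stag Hunt payoff matrix; the specific numbers are my own, not values from the paper's tables.

```python
# Find the pure-strategy Nash equilibria of a 2x2 Stag Hunt.
# A profile is an equilibrium if neither player gains by deviating unilaterally.

import itertools

# payoff[(row_move, col_move)] = (row player's payoff, column player's payoff)
# Illustrative values: the stag is worth more, but a lone stag hunter fails.
STAG_HUNT = {
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),
}
MOVES = ("Stag", "Hare")

def pure_nash_equilibria(payoff):
    """Return all move profiles where no unilateral deviation is profitable."""
    equilibria = []
    for r, c in itertools.product(MOVES, MOVES):
        row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in MOVES)
        col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in MOVES)
        if row_ok and col_ok:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(STAG_HUNT))  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```

This recovers exactly the two equilibria named in the text: mutual cooperation (Stag, Stag) and mutual defection (Hare, Hare), with no equilibrium in the mixed profiles.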
Additionally, the defector can expect to receive the additional expected benefit of defecting and covertly pursuing AI development outside of the Coordination Regime. Most prominently addressed in Nick Bostrom's Superintelligence, the creation of an artificial superintelligence (ASI)[24] requires exceptional care and safety measures to avoid developing an ASI whose misaligned values and capacity can result in existential risks for mankind. Meanwhile, the escalation of an arms race where neither side halts or slows progress is less desirable to each actor's safety than both fully entering the agreement. Structural conflict prevention refers to long-term interventions that aim to transform key socioeconomic, political, and institutional factors that could lead to conflict. This distribution variable is expressed in the model as d, where differing effects of distribution are expressed for Actors A and B as dA and dB respectively.[54] [21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more societal realms. For example, in a scenario where the United States and Russia are competing to be the one to land on the moon first, the stag hunt would allow the two countries to work together to achieve this goal when they would otherwise have gone their separate ways and attempted the lunar landing on their own. A person's choice to bind himself to a social contract depends entirely on his beliefs about the other person's or people's choice.
[36] Colin S. Gray, The Arms Race Phenomenon, World Politics 24, 1 (1971): 39-79 at 41. You note that the temptation to cheat creates tension between the two trading nations, but you could phrase this much more strongly: theoretically, both players SHOULD cheat. [12] Apple Inc., Siri, https://www.apple.com/ios/siri/. Finally, Jervis[40] also highlights the security dilemma, where increases in an actor's security can inherently lead to the decreased security of a rival state. Charisma unifies people supposedly because people aim to be as successful as the leader. Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. Using the payoff matrix in Table 6, we can simulate scenarios for AI coordination by assigning numerical values to the payoff variables. Type of game model and prospect of coordination. Catching the stag (the peace and stability required to keep Afghanistan from becoming a haven for violent extremism) would bring political, economic, and social dividends for all of them. [6] Moreover, speculative accounts of competition and arms races have begun to increase in prominence[7], while state actors have begun to take steps that seem to support this assessment. We can see through studying the Stag Hunt game that, even though we are selfish, we are still ironically aiming for mutual benefit, and thus we tend to follow such a social contract. [42] Vally Koubi, Military Technology Races, International Organization 53, 3 (1999): 537-565. States are no longer the primary actors in war, having been replaced by "group[s] identified in terms of ethnicity, religion, or tribe," and such forces rarely fight each other in a decisive encounter.
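As a sketch of how assigning numerical values to the payoff variables determines the type of game model, the standard orderings over the four symmetric-game payoffs can be encoded directly. The T/R/P/S labels and the sample values below are my assumptions for illustration, not the contents of Table 6.

```python
# Classify a symmetric 2x2 game from its four payoffs using the standard
# preference orderings for each named game model.

def classify_game(T, R, P, S):
    """T = defect while the other cooperates, R = mutual cooperation,
    P = mutual defection, S = cooperate while the other defects."""
    if T > R > P > S:
        return "Prisoner's Dilemma"
    if R > T and P > S:
        return "Stag Hunt"          # cooperation is best, but defection is safer
    if T > R > S > P:
        return "Chicken"            # mutual defection is the worst outcome
    if T > P > R > S:
        return "Deadlock"           # each actor prefers to defect regardless
    return "Other"

print(classify_game(T=5, R=3, P=1, S=0))  # Prisoner's Dilemma
print(classify_game(T=3, R=4, P=1, S=0))  # Stag Hunt
print(classify_game(T=5, R=3, P=0, S=1))  # Chicken
print(classify_game(T=5, R=1, P=3, S=0))  # Deadlock
```

The point of the exercise is the one the text makes: small changes in perceived payoffs move the actors from one game model, and hence one prospect of coordination, to another.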
Additionally, the feedback, discussion, resource recommendations, and inspiring work of friends, colleagues, and mentors in several time zones (especially Amy Fan, Carrick Flynn, Will Hunt, Jade Leung, Matthijs Maas, Peter McIntyre, Professor Nuno Monteiro, Gabe Rissman, Thomas Weng, Baobao Zhang, and Remco Zwetsloot) were vital to this paper and are profoundly appreciated. 'The "liberal democratic peace" thesis puts the nail into the coffin of Kenneth Waltz's claim that wars are principally caused by the anarchical nature of the international system.' Interestingly enough, the Stag Hunt theory can be used to describe social contracts within society, with the contract being the one to hunt the stag or achieve mutual benefit. Deadlock occurs when each actor's greatest preference would be to defect while their opponent cooperates. These remain real temptations for a political elite that has survived decades of war by making deals based on short time horizons and low expectations for peace. One example is the coordination of slime molds. This may not amount to a recipe for good governance, but it has meant the preservation of a credible bulwark against state collapse. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. As a result, concerns have been raised that such a race could create incentives to skimp on safety. Finally, there are a plethora of other assuredly relevant factors that this theory does not account for or fully consider, such as multiple iterations of game playing, degrees of perfect information, or how other diplomacy-affecting spheres (economic policy, ideology, political institutional setup, etc.) might affect these dynamics.
As described in the previous section, this arms race dynamic is particularly worrisome due to the existential risks that arise from AI's development, and it calls for appropriate measures to mitigate them. From that moment on, the tenuous bonds keeping together the larger band of weary, untrusting hunters will break, and the stag will be lost. If participation is not universal, they cannot surround the stag and it escapes, leaving everyone that hunted stag hungry. Territorial conflicts in international relations follow a strategic logic, but one defined by cost-benefit calculations. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. The complex machinations required to create a lasting peace may well be under way, but any viable agreement (and the eventual withdrawal of U.S. forces that would entail) requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism.
Does a more optimistic or pessimistic perception of an actor's own or opponent's capabilities affect which game model they adopt? It comes with colossal opportunities, but also threats that are difficult to predict. A relevant strategy to this insight would be to focus strategic resources on shifting public or elite opinion to recognize the catastrophic risks of AI. The ultimate resolution of the war in Afghanistan will involve a complex set of interlocking bargains, and the presence of U.S. forces represents a key political instrument in those negotiations. Depending on which model is present (e.g., games such as Chicken and Stag Hunt), we can get a better sense of the likelihood of cooperation or defection, which can in turn inform research and policy agendas to address this. If both sides cooperate in an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from an AI Coordination Regime consists of the probability that each actor believes such a regime would achieve a beneficial AI, expressed as P_(b|A)(AB) for Actor A's belief and P_(b|B)(AB) for Actor B's, times each actor's perceived benefit of AI, expressed as b_A and b_B. Deadlock is a common if little-studied occurrence in international relations, although knowledge about how deadlocks are solved can be of practical and theoretical importance. There are three levels: the man, the structure of the state, and the international system. [3] While (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. As stated before, achieving a scenario where both actors perceive themselves to be in a Stag Hunt is the most desirable situation for maximizing safety from an AI catastrophe, since both actors are primed to cooperate and will maximize their benefits from doing so.
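The mutual-cooperation payoff just described can be written out as a small function. Treating the overall payoff as expected benefit minus expected harm is an assumption for illustration (the harm term is defined later in the section), and the numeric inputs are hypothetical.

```python
# Expected payoff to one actor when both enter the Coordination Regime:
#   P_(b|A)(AB) * b_A  -  P_(h|A)(AB) * h_A
# The subtraction of the harm term is an illustrative modeling assumption.

def cooperation_payoff(p_beneficial, p_harmful, benefit, harm):
    """p_beneficial / p_harmful: the actor's perceived probabilities that a
    joint regime yields a beneficial / harmful AI; benefit / harm: the actor's
    perceived magnitude of each outcome."""
    return p_beneficial * benefit - p_harmful * harm

# Hypothetical Actor A: fairly confident that joint development is safe.
print(cooperation_payoff(p_beneficial=0.8, p_harmful=0.2, benefit=10, harm=20))
```

Raising the perceived probability of harm, or its magnitude, quickly drives this payoff negative, which is why the perception variables matter so much for which game model the actors believe they are in.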
They suggest that new weapons (or systems) that derive from radical technological breakthroughs can render a first strike more attractive, whereas basic arms buildups provide deterrence against a first strike. [20] Will Knight, Could AI Solve the World's Biggest Problems? MIT Technology Review, January 12, 2016, https://www.technologyreview.com/s/545416/could-ai-solve-the-worlds-biggest-problems/. The matrix above provides one example. For example, it is unlikely that even the actors themselves will be able to effectively quantify their perception of capacity, riskiness, magnitude of risk, or magnitude of benefits.
They are the only body responsible for their own protection. [16] On the one hand, these developments outline a bright future. Therefore, if it is likely that both actors perceive themselves to be in a state of Prisoner's Dilemma when deciding whether to agree on AI, strategic resources should be especially allocated to addressing this vulnerability. Specifically, it is especially important to understand where preferences of vital actors overlap and how game theory considerations might affect these preferences. But for the argument to be effective against a fool, he must believe that the others with whom he interacts are not fools who always defect. In short, the theory suggests that the variables that affect the payoff structure of cooperating or defecting from an AI Coordination Regime determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. The Stag Hunt game, derived from Rousseau's story, describes the following scenario: a group of two or more people can cooperate to hunt down the more rewarding stag or go their separate ways and hunt less rewarding hares. Despite the large number of variables addressed in this paper, this is at its core a simple theory, with the aim of motivating additional analysis and research to branch off.
On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c".
Here, both actors demonstrate varying uncertainty about whether they will develop a beneficial or harmful AI alone, but they both equally perceive the potential benefits of AI to be greater than the potential harms. [30] Today, government actors have already expressed great interest in AI as a transformative technology. Dipali Mukhopadhyay is an associate professor of international and public affairs at Columbia University and the author of Warlords, Strongman Governors, and the State in Afghanistan (Cambridge University Press, 2014). Each can individually choose to hunt a stag or hunt a hare. Two, three, four hours pass, with no trace. However, if one doesn't, the other wastes his effort.
Those who play it safe will choose the hare. 'War appears to be as old as mankind, but peace is a modern invention.' The stag may not pass every day, but the hunters are reasonably certain that it will come. Table 11.
If security increases can't be distinguished as purely defensive, this increases instability. [23] United Nations Office for Disarmament Affairs, Pathways to Banning Fully Autonomous Weapons, United Nations, October 23, 2017, https://www.un.org/disarmament/update/pathways-to-banning-fully-autonomous-weapons/. Different social/cultural systems are prone to clash. [52] Stefan Persson, Deadlocks in International Negotiation, Cooperation and Conflict 29, 3 (1994): 211-244.
In the current Afghan context, the role of the U.S. military is not that of third-party peacekeeper, required to guarantee the peace in disinterested terms; it has the arguably less burdensome job of sticking around as one of several self-interested hunters, all of whom must stay in the game or risk its collapse. Stag Hunt is a game in which the players must cooperate in order to hunt larger game; with higher participation, they are able to get a better dinner. As a result, a rational actor should expect to cooperate. [38] Michael D. Intriligator & Dagobert L. Brito, Formal Models of Arms Races, Journal of Peace Science 2, 1 (1976): 77-88. [3] Elon Musk, Twitter Post, September 4, 2017, https://twitter.com/elonmusk/status/904638455761612800. In this section, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. This table contains an ordinal representation of a payoff matrix for a game in Deadlock. [54] In a bilateral AI development scenario, the distribution variable can be described as an actor's likelihood of winning times the percent of benefits gained by the winner (this would be reflected in the terms of the Coordination Regime). I will apply them to IR and give an example for each. Evidence from AI Experts (2017: 11-21), retrieved from http://arxiv.org/abs/1705.08807. But what is even more interesting (even despairing) is that, when the situation is more localized and involves a smaller network of acquainted people, most players still choose to hunt the hare rather than work together to hunt the stag.
Human security is an emerging paradigm for understanding global vulnerabilities whose proponents challenge the traditional notion of national security by arguing that the proper referent for security should be the individual rather than the state. Table 3. [25] For more on the existential risks of superintelligence, see Bostrom (2014) at Chapters 6 and 8. The story is briefly told by Rousseau in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach ..."
The Stag Hunt is a story that became a game. In international relations, examples of Chicken have included the Cuban Missile Crisis and the concept of Mutually Assured Destruction in nuclear arms development. We have recently seen an increase in media acknowledgement of the benefits of artificial intelligence (AI), as well as the negative social implications that can arise from its development. So it seems that, while we are still motivated by our own self-interest, the addition of social dynamics to the two-person Stag Hunt game leads to a tendency of most people agreeing to hunt the stag. Additionally, both actors can expect a greater return if they both cooperate rather than both defect.
These two concepts refer to how states will act in the international community. However, anyone who hunts rabbit can do so successfully by themselves, but with a smaller meal. [9] That is, the extent to which competitors prioritize speed of development over safety (Bostrom 2014: 767).
Throughout history, armed force has been a ubiquitous characteristic of the relations between independent polities, be they tribes, cities, nation-states, or empires. [41] AI, being a dual-use technology, does not lend itself to unambiguously defensive (or otherwise benign) investments. [5] As a result, it is becoming increasingly vital to understand and develop strategies to manage the human process of developing AI. This is visually represented in Table 2, with each actor's preference order explicitly outlined. If either hunts a stag alone, the chance of success is minimal. Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see Skyrms 2004). A hurried U.S. exit will incentivize Afghanistan's various competing factions more than ever before to defect in favor of short-term gains, on the assumption that one of the lead hunters in the band has given up the fight. I introduce the example of the Stag Hunt Game, a short, effective, and easy-to-use activity that simulates Jean-Jacques Rousseau's political philosophy. I refer to this as the AI Coordination Problem. Namely, the probability of developing a harmful AI is greatest in a scenario where both actors defect, while the probability of developing a harmful AI is lowest in a scenario where both actors cooperate.
Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. [28] Once this Pandora's Box is opened, it will be difficult to close. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely. If one side cooperates with and one side defects from the AI Coordination Regime, we can expect their payoffs to be expressed as follows (here we assume Actor A defects while Actor B cooperates): for the defector (here, Actor A), the benefit from an AI Coordination Regime consists of the probability that they believe such a regime would achieve a beneficial AI times Actor A's perceived benefit of receiving AI with distributional considerations [P_(b|A)(AB) * b_A * d_A]. Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of the actor's perceived likelihood that such a regime would create a harmful AI, expressed as P_(h|A)(AB) for Actor A and P_(h|B)(AB) for Actor B, times each actor's perceived harm, expressed as h_A and h_B.
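The defector's payoff can be sketched the same way. Only the bracketed benefit term is given explicitly in the text; the covert-development bonus (mentioned earlier in the section) and the subtraction of the harm term are modeled here as assumptions, and all numeric inputs are hypothetical.

```python
# Payoff to Actor A when A defects while B cooperates, sketched as:
#   P_(b|A)(AB) * b_A * d_A  +  covert bonus  -  P_(h|A)(AB) * h_A
# The covert-bonus and harm terms are illustrative modeling assumptions.

def defection_payoff(p_beneficial, p_harmful, benefit, harm,
                     distribution, covert_bonus):
    """distribution: the defector's share of the benefit (d_A);
    covert_bonus: extra expected gain from covertly developing AI
    outside the Regime."""
    return (p_beneficial * benefit * distribution
            + covert_bonus
            - p_harmful * harm)

# Hypothetical values: defecting alone is riskier (higher perceived p_harmful)
# but the defector keeps a larger share (d_A = 0.6) plus the covert bonus.
print(defection_payoff(p_beneficial=0.5, p_harmful=0.5, benefit=10, harm=20,
                       distribution=0.6, covert_bonus=3))
```

Comparing this value with the mutual-cooperation payoff for the same actor is what determines, in the theory's terms, whether the situation looks like a Stag Hunt (cooperation pays) or a Prisoner's Dilemma (defection pays).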