This essay outlines the major arguments on the topic, with links to relevant camp files and DebateUS! material.
Since affirmative cases often have similar advantages, I think it is useful to group them for discussion, though there are many advantages that are specific to the topic areas and plans.
Although teams may not always be prepared to debate every specific plan, if they can debate the advantages (there really are only so many), they can outweigh the affirmative case with their disadvantages.
Security. Collaboration will be critical to security improvements in one of the areas, and impacts will focus on deterring Russia and/or other actors (especially China), both generally and by preventing them from out-competing the West (the US and NATO countries) in these areas.
While most people associate NATO with Europe and deterrence against Russia, NATO is also taking on an expanded role in deterring China.
Relations/ties. Collaboration is important to improving ties with NATO, and strong relations between the US and NATO are important (the impact). There are other ways the US and NATO can increase ties, so affirmative teams will have to prove the area is a critical place to build ties. They will, of course, also have to contend with the basic response that ties improved dramatically as a result of the Ukraine crisis. The US is also cooperating closely with the European Union.
NATO Unity/Cohesion. Though this will likely be overwhelmed one way or the other due to the Russian invasion of Ukraine (Kutner), fostering unity between NATO countries is important to strengthening deterrence. Cooperative technology development, and co-development of standards related to the technology, could strengthen such cohesion.
Teams will claim technology development advantages within these three areas. Since all military research has civilian applications, it is easy to foresee wider developments flowing from advances in these areas. It will be important for teams to identify why NATO is critical to that technology development, as teams will be able to run counterplans to develop the technology unilaterally, bilaterally with other countries, or with the European Union.
AI development. AI has many military applications. Generally speaking, AI can be used to create autonomous weapons that operate on the battlefield without humans to defeat enemies, and it also has defensive applications: AI systems can generally think faster than humans and potentially make better real-time decisions. The latter can be essential when under attack. There are many civilian applications, including advancements in industry and those that can improve the quality of life. There are many environmental applications as well.
Biotechnology development. Advances in the biological sciences can help save lives in battle and strengthen the ability to fight. Civilian spin-offs include benefits in the economy and health care.
Cybersecurity development. With nearly everything in society moving online, protection against cyber attacks has become essential. With the war in Ukraine magnifying the risks through repeated Russian (and US) cyber attacks, the importance of protecting Western militaries and civil society from cyber attacks has grown. A successful attack could collapse the economy and undermine the military. Cyber coop affirmative and negative (includes Article 5 evidence).
Standards. Many teams are claiming advantages that stem from the cooperative development of standards that govern the technology that is the subject of the case. While this is great strategically (aff teams will use it to answer the harms of the technology and any unilateral action counterplans), weaknesses lie in why such collaboration has to be with NATO (as opposed to the EU or the UN or China) and how effective it will be: What incentive does China have to cooperate on standards setting with the US/NATO? Smart negative debaters will heavily scrutinize the nonsense peddled by the evidence (assuming it actually says that) of these advantages.
And just think about it: if NATO adopts these standards and China and/or Russia do not, those countries will have an enormous military advantage over the US.
Economic growth good. Advances in these areas are likely to strengthen the economy.
Space. Technology improvements in these areas are likely to facilitate outer space exploration.
Technology leadership. Technological advances in these areas are likely to improve the ability of the US (and the West generally) to compete against China. A considerable amount is written about how this is needed to prevent technological global dominance by China.
Hegemony good. As has been the case since the 1980s, teams will argue the plan increases overall US hegemony, which is needed to prevent global conflict. Hegemony good and bad files, hegemony updates
Morality. Some teams may argue that our current approach to NATO is immoral. For example, the case that requires human intervention in AI decision-making claims that failure to include humans is immoral.
It’s still a bit early to write about all of the plans that have been written, but these are some potential plans.
General cooperation in the area of biotechnology. Teams may also choose a specific area.
Biotech cooperation/standards. This Michigan Starter Packet version of the case argues the US needs to cooperate with NATO to set standards and develop biotech so that China does not lead, and so that biotech can be put to good use (food security, reducing biodiversity loss) and kept out of the hands of terrorists. The case claims that NATO setting the standards will draw China in so its biotech operates appropriately (laugh test…).
General cooperation in the area of AI. Teams may also choose a specific area such as drones or tanks. The NAUDL version of the case has a very general plan and claims to strengthen AI in order to deter Russia and China. There are also vague references to standards for cooperation.
AI military logistics pilot projects. This Michigan Starter file version of the case argues that cooperation in AI with NATO in this way is important both to strengthen NATO and facilitate the development of standards that make AI more transparent and governable in a way that hopes to avoid the harms of AI.
Note: In 2 of the 3 cases in the Michigan starter pack, significant advantages are claimed from collaboration; again, I’d heavily scrutinize this argument, especially if you want to win AI Bad.
Ethical AI. Also from the Starter Pack: This affirmative plan calls for human control – a “human in the loop” who has final say over AI functions. It has the US engage NATO in establishing ethical principles, known as RAI (responsible AI use), for all NATO operations and weapons. There are three reasons for this:
Advantage 1 – Cohesion. If everyone in NATO is on the same page and coordinated on technology and tactics, that is called Interoperability. The command, political, and public will to get coordinated is called Cohesion. Interoperability and Cohesion are the foundation of effective coalition military forces and operations. AI threatens to disrupt that cohesion and interoperability. If the US has Fully Autonomous weapons, and Italy does not, then those weapon systems will have trouble interoperating (not sure if that is a word). If the US is using mostly automated systems, and France is sending mostly human troops, then France might ask why they should risk French lives when the US is not. If the US spends A Lot on Fully Autonomous weapons, but Lithuania doesn’t have that technology, then the US might feel that other nations are not sharing the burden. If the British public opposes LAWs, and they are in a conflict allied with the US, who is using LAWs, then the British public might demand that their government stop supporting the operation. Or the US might perceive that the UK would stop supporting it. Dialogue to cooperate to establish norms for AI throughout the alliance would improve cohesion and interoperability, which makes for a more effective and credible NATO, which stops a nuclear war.
Advantage 2 – Crisis Instability. Many of the new AI weapons are “brittle” – they tend to react poorly when the situation goes outside of their programmed parameters. Like if the Ukrainians put reflective tape on stop signs to disorient the Russian drones piloted by AI. For example, without a human in the loop, AI accelerates battle decisions to machine speed. If something goes wrong, it goes Really Wrong Really Fast, without anyone able to check the escalation to nuclear war. Back in the ’80s, Soviet early warning radars detected what they interpreted as a US nuclear attack and signaled that to Soviet Rocket Command. But a Lieutenant Colonel – Petrov – trusted his gut instinct and said “Maybe is birds, comrade.” Crisis averted. AI would have sent the nuclear weapons in a split second. There are many crises in the future for Europe, and miscalculation or instability in them could escalate a conflict accidentally or mistakenly. Human control would provide a stop in the escalation chain to prevent war.
Advantage 3 – Human Dignity. Autonomous AI weapons would make decisions about life and death by algorithm. The soldiers who are killed would just be part of a decision tree – depersonalized and objectified. Civilian casualties would become data points. This is unethical because it does not respect human dignity. Now, to be clear – all war is undignified. A human soldier shooting you leaves you just as dead as an autonomous drone. However, that soldier shares your humanity. They feel compassion, and hope, and empathy, and regret. They know what it means to take a life because they value their own. AI has none of that – it cannot “know” what it means to kill or die, because it cannot “feel” life. For better or worse, if war involves lethal decisions, the choice to kill another person should be made by a person. International and Human Rights laws are founded on the principle of Human Dignity. Our actions, arguments, motives and intentions must respect the Dignity of all persons. Death by algorithm does not respect the dignity of individual people, because AI cannot “know” what it means to be alive. (This is where the Cybernetics K comes in…) This is a deontological argument, not a consequentialist or utilitarian one. It says that before you decide if the plan has good or bad outcomes, you first have to determine whether it is Right or Wrong. If AI weapons are unethical, you never even evaluate if they have good consequences – that is the Ethics First framework argument.
Cyber cooperation. Teams may argue for advances in offensive cyber cooperation, defensive cyber cooperation, or a combination of the two.
Article 5. NATO’s Article 5 commits all countries in NATO to defending any country in NATO that has been attacked. Although Article 5 has been invoked only once in history (after 9/11), it is considered the bedrock of deterrence in the alliance.
There is a camp version of this case (Emory) that argues Article 5 should apply to a cyber attack (Klipstein). Kraterou argues it is already NATO policy but that all of the 30 countries in NATO would be unlikely to agree on what constitutes a significant enough cyber attack to merit a collective response. Lonergram argues the threshold should be increased so that only a massive cyber attack would trigger a collective response.
DOD Trade-Off. New military spending in one area could create a trade-off with military spending in another area (essay and updates).
Politics/Political Capital. While it will probably be difficult for negative teams to win that increasing cooperation with NATO in one of these three areas will require Biden to spend political capital, the disadvantage is always popular. Basically, negative teams will argue that if Biden spends political capital to push the plan through Congress that he will not have it to spend on another agenda item such as Build Back Better, efforts to reduce big tech monopolization and the Iran deal.
Midterms. There are two versions of this disadvantage.
The House. Since almost all evidence argues that Republicans will gain the majority in the House (by a wide margin), negative teams will have to argue that the plan is popular, that Biden will get credit, and that the Democrats will then retain the majority. “Democrats bad” impacts include reductions in military spending and increases in taxes and/or government spending.
The Senate. The Senate races are much closer and it is possible the Democrats could still retain control of the Senate. Given this, teams could argue the plan is unpopular, threatening Democratic control of the Senate and that control of the Senate by the Democrats is important to prevent climate change, for example. These are the key Senate races to watch/argue could be flipped by the plan.
Russian isolationism. There has been a popular disadvantage in debate that says if Russia is pressured/isolated it will lash out. I guess this argument is good in that it has proven true, but it is incredibly non-unique: Russia has lashed out and attacked Ukraine; the US continues to provide billions of dollars worth of weapons to Ukraine; and the US and NATO are continuing to reinforce the alliance’s Eastern flank. The NAUDL files contain a version of this disadvantage that argues Russia will develop more AI if NATO develops more AI.
China isolationism. Similar to the Russia disadvantage (but still unique), this disadvantage argues that putting more pressure on China (perhaps through the development of new weapons) will undermine President Xi, causing China to lash out.
AI Bad. AI technology development could have disastrous consequences, including fully-automated weapons that become sentient and take on a life of their own.
Biotechnology development bad. Similar to the AI arguments, teams will argue that advances in this area, especially in the area of synthetic biology, threaten humankind.
Cybersecurity bad. “Cybersecurity” bad arguments are likely to focus on the development of offensive cyber capabilities that will be used to attack other countries.
Space bad. As discussed above, technology development in these areas could lead to expanded exploration and development of outer space. Teams can argue this is bad.
NATO Bad. While NATO exists and is expanding, both in terms of military capability and the likely addition of two countries (Finland, Sweden), NATO bad arguments can be useful when combined with a non-NATO counterplan and are also useful when extending a kritik to argue that NATO is not needed to solve any essential need. NAUDL put out a “NATO Imperialism” file that just generally says NATO supports imperialism.
Hegemony Bad. This is the age-old debate argument that US leadership promotes military aggression, causes conflict with China/Russia, promotes terrorism, encourages nuclear proliferation, and will eventually overstretch and undermine the US. Hegemony updates and consolidated core files.
State Department Counterplan. From the Michigan Starter Packet:
This counterplan should be treated as a “PIC” out of DoD administration and funding of security cooperation. Activities very similar to “security cooperation” can be accomplished between the US and NATO under the moniker “security assistance.”
The two primary net-benefits, thus far, are:
1-DoD funding/resources tradeoff DA. The DoD tradeoff DA is very simple: the plan requires drawing significant resources from the DoD; the counterplan doesn’t.
2-Diplomatic Credibility internal net-benefit. The Diplomatic Credibility Good net-benefit is more complicated. It argues that the counterplan re-centers the State Department, instead of the military, as the leader of US foreign assistance, which promotes the ability of diplomats, as opposed to servicepeople, to achieve their desired outcomes. That’s good because diplomacy should lead the charge for solving transnational existential threats. Like most internal net-benefits, there are flaws: the counterplan’s single action is likely insufficient to boost credibility, the military and state department can both lead together and the State Department might not be the ideal department to lead challenges in, and out, of NATO.
There are a variety of other net-benefits that will come out in wave 2:
–DoD assistance causes Russian aggression, DoS assistance doesn’t
–DoD assistance precludes effective Congressional oversight, DoS assistance doesn’t
–DoD assistance condones human rights violations, DoS assistance doesn’t
Although I have a start on some of those positions, I chose to exclude them in order to streamline this file as much as possible.
There are two (related) concerns that I want to note. They should be in the back of your mind as you are prepping the file, and should be noted when refuting the argument.
1—I am uncertain that, by September, you will want to think of this as the “security assistance” counterplan. Although the negative competition evidence is pretty good (Kerr is by far the best), there is also very good evidence that describes security cooperation as a subset of security assistance – making the CP plan-plus and/or causing it to link to the disadvantages. Also, “security assistance” often references security sector assistance, which is more about building domestic resilience than establishing war-fighting abilities (depending on the aff, that might be a feature, not a bug). There are certainly other ways to write the counterplan text. After seeing debates in the first week of camp, the second version of the file might look a lot different.
2—I am uncertain if it’s best to think of this CP as a PIC out of the DoD entirely, or a PIC out of DoD administration (still allowing for DoD involvement). This might seem like a semantic distinction, but it matters quite a bit for the way that net-benefits and solvency operate. For example, if the counterplan entirely excludes the DoD, it will likely be a bit less solvent on mostly military matters (like LAWS). If the counterplan includes the DoD, but does not have them fund or administer the cooperation, then the counterplan likely solves a bit better but has a more limited range of net-benefits. The current version of the file certainly allows the neg to say the military is significantly involved in the counterplan, just that the specific initiative would be administered, and funded, by the State Department. DoS can sell military articles, interact with foreign militaries, and even have service-members on staff, which is important to remember when debating the solvency of the counterplan.
When prepping the file, I would start by doing the following:
—Read it. Don’t highlight it the first time; just see what’s there. If you don’t know what’s there, or what you need to refute, you won’t know what to highlight anyway.
—Read every plan, solvency advocate and “military key” card in each of the affirmatives produced
—Write a 1nc shell that’s specific to each of those affirmatives, and a 2nc block responding to each aff’s “military key” claims. Make the decision: internal net-benefit, or not?
—Then, start to highlight the file — specifically highlighting the parts of evidence that help with each strategy.
Unilateralism. The US could simply develop these technologies itself without cooperating with NATO. Net-benefits include NATO bad, politics (with NATO-specific links) and tech theft. This is a strong counterplan, but teams need to be able to debate the “standards setting” arguments discussed above. NAUDL has a version of this counterplan with NATO Bad/Imperialism as the net-benefit.
EU Cooperation. The EU is the European Union, a non-military organization of European countries. Teams could argue this cooperation will strengthen the development of the technologies without pushing them directly into the military sphere. Net-benefits include politics (with NATO specific links), the Militarization K and the Securitization K.
UN Cooperation. Similar to the EU counterplan, the negative can argue for increasing cooperation with the UN. The net-benefits are the same. Teams could also argue that it will produce a more multilateral global order, reducing the negative effects of US hegemony.
China Cooperation. This is a more radical counterplan, but inventive teams could argue the US should increase technology cooperation with China, arguing that China’s global leadership is good, that NATO is bad, and that such cooperation would pull China away from Russia. Politically, the cooperation would be unpopular, so the reverse of the normal politics disadvantage would be a net-benefit, as would NATO bad.
Consult NATO. A popular (and sometimes hated) counterplan in debate is to consult NATO. If we are going to cooperate with NATO, why not consult NATO first? Winning the Consult NATO Counterplan; Contesting the Legitimacy of the Consult NATO Counterplan.
There are many kritiks in debate and most of them can be used on any topic depending on the situation. In this article, I cover some of the key and popular topic-specific kritiks.
Anti-Blackness. Arguments related to race and racism are obviously very popular in policy debate. There are strong arguments that NATO itself and international relations theory are anti-black. See NATO was founded to protect ‘civilized’ people. That means White, Why Is Mainstream International Relations Blind to Racism?, and Why Race Matters in International Relations.
Capitalism. The most popular kritik in policy debate is probably the capitalism kritik, if only for the reason that it can also be used to challenge race-related arguments. The basic argument is that capitalism is the root of the war system and will eventually lead to the extinction of the human race, which is only reinforced by technology development (through the military). For a NATO link, see Global NATO: A 70-Year Alliance of Oppressors in Crisis and Achieving True Cybersecurity is Impossible. DebateUS! Cap K Files
Militarism. This kritik argues that adopting military solutions to problems always results in war and more violence.
Securitization. The Securitization K argues that when we turn problems into security issues, war becomes self-fulfilling. The K, one of the most popular in debate, focuses on the rhetoric of the advocacy. There is a great cybersecurity link card in Achieving True Cybersecurity is Impossible.
There is a NAUDL version of this kritik that is centered around threat construction — worry about threats leads to the adoption of policies that cause other countries to actually become threats.
Feminist IR. The Feminist International Relations kritik (“Fem IR”) argues that aspects of the affirmative advocacy operate in a way that oppresses women and that we instead need to challenge those ideas. The kritik draws on literature that seeks to expose the way both the academic discipline of international relations and the modern international system work against the interests of women, though there are many topic-specific applications. Essay and files.
Cybernetics. From the Michigan Starter Packet: It seems appropriate to begin with Google’s definition of “cybernetics,” which is “the science of communications and automatic control systems in both machines and living things.” This critique is concerned with the ways in which modern and emerging technologies are altering the structural nature of “communications” (both the media we use to communicate, like cellphones, social media, etc, and the ways that we communicate with each other in terms of empathy, aggression, etc) and “automatic control systems” (the systems and infrastructures that the world relies on to communicate, both the physical components such as power cables or satellites and the virtual components such as algorithmic decisionmaking systems or metadata sets). Specifically, this is a criticism of the affirmative’s epistemology, or the way in which it comes to know the world, and the sources, data, and ideological influences that their thought and research draw upon. The core argument is that scholarship which calls for security cooperation with NATO states relies on outdated ideas about the effectiveness and utility of traditional mechanisms of international law, and that these outdated ideas offer useless (if not actively harmful) responses to the crises created by the ongoing cybernetic transformation of the world. Consider the core paradox of autonomous weapons regulation: if the weapons are effectively regulated, then they aren’t truly autonomous, but as soon as they are autonomous, we can’t regulate them without potentially compromising their effectiveness. By and large, the only way for international security organizations like NATO to keep up with the growing number of security threats created by emerging technology is for those organizations to automate their functions, including the process of decisionmaking itself, using other new digital technologies.
Thus, according to the link, the affirmative (and not just its scholarship—the action of the plan itself!) justifies the subtle spread, normalization, and acceptance of digital technologies which make it easier to monitor the population and extract data from them. That process is unsustainable, violent, conducive to fascism, and results in extinction.
The alternative to the continued datafication of the world is to focus on cultivating embodied, corporeal relationships. This should be understood somewhat, but not completely, literally; while the alt authors would certainly advocate more in-person connections and relationships, “corporeal care” is also something of a metaphor for the innumerable, ineffable relationships which make up human sociality and which lose something of their essence when translated into data. How many 30-part Instagram stories from concerts that your friends went to have you watched all the way through and actually enjoyed, let alone gotten anything close to the experience of actually being there? Even the person taking the video doesn’t get the same experience from rewatching it as they did from actually producing that moment of human connection and entertainment, but it’s the next best thing to being able to actually reproduce it. The point is that these moments of connection, collective experience, and relationality are metaphorically corporeal in the sense that they don’t last forever, but they do last materially—the opposite of data—and paying attention to those moments and focusing on generating more of them is a better way of combatting the expanding cybernetic regime than outdated legal instruments.