So, now that I’ve harangued the negative again, let’s write a more helpful post on How to Negate.
I suggest that you start at the top of this list and work your way down it looking for the elements that you can weave together to form an aggressive, multi-faceted, and mostly on-case 1NC. This will work pretty well for most affirmative cases you have to argue against. There are some screwballs that defy convention and will be difficult to handle, and there may even be some affirmative cases that are well researched and being delivered by teams that know their links and evidence inside and out. But generally, this should be a good way to start.
- Killer Stock Issues
The three stock issues that have minimal room for debate are Topicality, Inherency, and Harms. If you can win on them, it’s because the affirmative screwed up. Be warned, however, that judges usually don’t like voting on these issues because they tend to mean that the debate is effectively over in 20 minutes or less and the remaining hour of their time is going to be wasted. So if the affirmative team hasn’t really dropped the ball on a stock issue, don’t try to win with it.
- Topicality is the leader here. “Our opposition has not affirmed the resolution, so you cannot claim on your ballot that they represented the affirmative side. There is no affirmative side in this room. Negative side must win.” Don’t whine about fairness because life isn’t fair. Don’t whine about education when you could be out researching more interesting things if you wanted to be. And because judges don’t like to vote on this, don’t bother bringing it up if there’s not a clear violation of common sense, regardless of how some dictionary defined something. (And if you don’t have the exact and official wording of the resolution, don’t even go here. Just don’t.)
- Inherency is also a common violation, but harder to prove and explain. “This debate has already happened and the plan is, in real life, in [describe the status of legislation here]. The affirmative’s decision to use a plan of known status with no inherent barrier in the status quo reduces this from a policy debate to a politicized interp round. The fait accompli of the real plan means that there’s [either: no negative ground | no affirmative ground] because we know how it’s turned out.”
- Harms is uncommon. Usually people can identify a problem when they see it. But it is possible that “Our opponents are delusional. Unicorns fart rainbows, not methane, so the massive increase of wild herds of unicorns on the plains of Atlantis is not even a problem for us to solve.”
Solvency hasn’t been mentioned yet because it’s a debatable point and a judge may vote for the plan even if solvency is unlikely.
- Basic Mistakes
Take a few minutes of prep looking at the skeleton of their case and see if you can pick out any glaringly obvious problems, the sorts of things that suggest your opponents were not in a sound state of mind while writing. Things like:
- Double Turns: While negative positions often go into self-contradiction, affirmatives sometimes screw this one up as well. For example: Did they just say that anthropocentrism is bad and then talk about how much their plan benefits humans? You should be calling them on this sort of nonsense — either the judge noticed it and wants you to confirm it, or hasn’t noticed it and will be impressed when you do.
- Hopeful Statistics: 87% of statistics are based on a data set that’s structured to provide the result being looked for. You need to look for an alternate way to read those metrics and raise doubts in your judge’s mind about what got excluded. For example: If there’s 17% less obesity near trails and biking tracks, does this indicate that people who live nearby are compelled to use them, or rather that people who already care about their health try to live near trails and biking tracks? The other half of this attack goes into solvency deficit: why isn’t that number higher/lower?
- Hyperbolic Comparisons: A lot of fringe crackpots say a lot of things. And they may be right in most of what they say. But then they go a step too far, which makes it easy to discredit them. For example: poverty is like an ongoing thermonuclear war year after year. Given a distinct lack of actual thermonuclear wars to compare poverty to, this is a bad analogy, and that’s even before we point out that poverty doesn’t have a blast radius where the brunt of the impact is focused.
- Wishful Thinking: This tends to happen in the middle ground between overcoming inherent barriers to change and actually solving a problem, and looks like an unwarranted leap of faith devoid of any “how.” This is a policy debate; the “how” matters a lot. For example: If our current problems are caused by a power structure filled by plutocrats, sock puppets, and their incompetent nepotistic offspring, how are they really going to make this change? This tends to be a byproduct of the common inherency claim that “we can’t do what we want because of those assholes over there,” but then the plan doesn’t solve for “those assholes over there,” such that a plethora of assholes-over-there-induced harms will continue regardless of the plan. This weak link will serve as an indictment of their ability to critically address reality first and then almost certainly chain into…
- Solvency Deficit: There’s virtually always a solvency deficit. If a two-sentence plan could genuinely fix the problems that the affirmative is claiming it could, then it would already be implemented in real life (and they’d be failing the Inherency stock issue). Hunt it down and put a spike in it. For example: If we’re only deploying free mass transit in urban areas, then won’t the poor rural areas with cheap real estate that tend to attract larger firms still be left out of the bright and glorious future we’ve been promised?
- Generic Death: As a specific form of solvency deficit, thinking that the end of the world (in three different ways) can be stopped by a two-sentence plan is pretty astounding to me as a judge. We’re seeing more global warming and less nuclear annihilation this year, so, for example: Given that smog is so bad in Beijing that we’ve not only heard of it over here but they’ve even heard of it over there, how will adding bike lanes really stop climate change from killing us all? If we’re going to die anyway, then I may not worry about whatever the rest of the affirmative plan is.
- Case-Specific Counter-Research
At this juncture, see if you’ve got any direct attacks against their case pre-researched. You probably don’t. Kids these days…
I advise against counterplans, not because I don’t love them but because they’re so rarely done well. I’ve seen two that I really liked and thought were clever and well done and both of them were kicked before the end of the round so I couldn’t even use them to vote for the negative. That said, if your prime attack is on Solvency Deficit, providing a counterplan that has at least a snowball’s chance in hell of getting past that solvency deficit could make a judge more keen on voting for you. Counterplans should be:
- Non-Topical: Do Not Affirm The Resolution. You can be dangerously close to topical, but so long as you make some pointed remark about how you’re not affirming the resolution you should be okay.
- Preferable: You solve, they don’t. Enough said. If you’ve got some cleverness, you can avoid breaking some framework or K or triggering a disadvantage, but the basic counterplan is going to be focused on solving problems better than the affirmative claims to.
- Competitive: Generally this is so that the affirmative can’t just say “perm, do both.” This is a weaker issue, and easier to compensate for than to get right: if you’ve called out a solvency deficit in their plan, and they can’t explain why your counterplan has one as well, then you can claim the perm is invalid because the more-solvent counterplan would be preferred and the less-solvent actual plan would never be executed, leaving the affirmative on non-topical ground and thus no longer affirming the resolution.
Note that the other possibility here is for a technologically more efficient methodology counterplan. Between Drones (UAVs) and Robots, we’re getting a lot of cheap and flexible ways to get moderate amounts of power in unusual places. See if you can use a cheap futuristic gizmo to do what the aff thinks a stodgy, expensive, calcified old government program has to be created to do. After all, a lot of the evidence that the aff has cultivated for their plan is years old at this point and science is marching (flying, gliding, hovering) on.
- The K
There are only three Ks I’ve seen in action that have resonated with me, and I can’t advise ever using more than one as the risk of tripping over yourself and/or confusing a judge is too great. Anyway, if your opponent pulls out terminal impacts — and this works for the aff, too — I’d confirm in cross examination that they’re really dedicated to seeing everybody die horribly and then slap them with one of these three Ks (which you’ll have to dig up a suitable copy of from Google or wherever):
- Politics of Fear: your death-obsessed opponent is trying to scare the judge into handing over too much power for some sort of quasi-magical protection from the paternalistic state.
- Crisis-Based Policy-Making: similar to the Politics of Fear, this one suggests that unforeseen consequences will certainly come up if you’re forced to make a decision that can alter the fate of the world with less than 90 minutes of closed deliberation. It’s based on general incompetence instead of power-hungry malice, though.
- Hyper-Real Nonsense: the flip-side of Crisis-Based Policy-Making is that there isn’t an actual crisis, and nothing will come of the debate other than the attitudes taken in and out of the room inclusive of: the malice of adults putting the world at risk, the joy of finding heinous problems in the world to be solved with mere words in under 90 minutes, and the flippant discussion of extinction in a variety of forms without feeling the need to actually examine one’s (short) life. This can actually then be extended into the Politics of Fear whereby the practice of planning how the Federal Government (or whatever) will save the world (three times over!) encourages students to believe in the quasi-magical protection of the paternalistic state.
Note again, however, that these are totally optional and only good for killing terminal impact silliness. Use them to disarm your opponent, but have something else to actually attack your opponent with. Most other Ks are quasi- or pseudo-intellectual wanking that attempts to prevent any policy decision from ever being made; affirmative cases should be looking out for ones that try to legislate revolutions or say that we’re all dead regardless since how the judge votes won’t alter those claims at all.
- Generic Disadvantages
If you’re this far down the list, then the round is a toss-up. Suffice to say, I’m pretty sad when this is what the negative team comes out with first. Really, the affirmative effectively told me a story about a problem that they’re solving with additional benefits; the negative, if they’re going to sink that story, needs to figure out how to make these generics fit into the narrative. Anyway…
- Stronger: Spending disadvantages, spending avoidance trade-off disadvantages. The tighter you can associate the disadvantage to the plan, the better off you are. Make it sound less generic than it is. Talk about the people making these woe-laden decisions.
- Weaker: Politics, political capital. The point behind fiat is generally to get around this nonsense, and “political capital” is nonsense anyway — just look at how well GWB’s push to privatize Social Security went after winning re-election. Really, it’s hard to form a narrative around “these people are going to be vindictive assholes,” and that’s what the baseline politics disadvantage is about.
- Fatally Dumb: Any disadvantage that grants the affirmative an assumption that you argued against. For example: if you claimed that they’re not topical for spending any money and then run a spending disadvantage, you may find yourself suffering. There are magical words to avoid this, however, and they are “EVEN IF”… as in, “even if we ignore that the affirmative doesn’t appear to be spending any money, then we’re still going to be running into this generic and boring spending disadvantage.” But you’re generally better off not running disadvantages that aren’t going to be triggered. And, of course, don’t run a terminal impact if you ran a K saying that terminal impacts are ridiculous.