Captain America 3 passes the Bechdel test early on in the film by having Natasha give career advice to Wanda over the radio. The scene then turns its attention to an overloaded garbage truck symbolizing the male-geek ego that is used to breach the perimeter of an otherwise peaceful and blissfully unaware location, allowing violent men to come in and create a toxic atmosphere. In the melee that ensues, it’s worth noting that it’s Scarlet Witch — her initials being a J short of SJW — who clears the air but also, subsequently, whose failure to save the right people triggers international backlash in what comes off as analogous tone-policing built on a foundation of male privilege: “Tony Stark may have idiotically unleashed an alien AI that menaced the world and trashed a city, but Wanda’s loss of control when saving all of the poor people in the crowded market — we won’t stand for that!”
So the Secretary of State shows up and insists that the Avengers need to have a U.N. oversight committee. The conversation that doesn’t ensue goes like this:
With all due respect, sir, I'm Captain America and America doesn't accept oversight by the U.N. I mean, golly, we even reject prosecution under international law as gilding on our American Exceptionalism.
That's true. So as part of the accords you'd be promoted to Captain Planet.
Okay, but won't we always be stymied by the U.N. security council since any nation can drop a veto? I mean China and Russia seem like they'd always be interfering with our ability to operate...
Not really. In this world Russia stopped being a nation when the USSR collapsed and China's not even a place. Just look at the students Tony was talking to at MIT: how many of them were from anywhere in Asia? Trust me, our U.N. Security Council is way better than what our audience has in the real world.
Fair point. Let's talk about it in our world: who would be on the council? Would it be the legislators, statesmen, and intelligence agents that were infiltrated by Hydra in Captain America 2? Or the Vice President that was implicated in a coup because he wanted flaming roid-rage in Iron Man 3? Or maybe the President himself, who apparently ordered an unprecedented nuclear attack on New York City in The Avengers -- an attack, I might add, that Tony saved millions of people from? Because while I freely admit that Tony fucked up the whole Ultron thing...
Hey, watch your language -- this is a PG-13 movie!
... but even when a lot of innocent people are dying because of us, our actions are still less dangerous to the world than the actions of all you normal people that try to expand your power and thus personal sense of place and security through unnatural means. I mean, just compare me and Tony -- the heavier the armor, the lower the confidence in the ability to face whatever comes. And we're the superheroes.
I'm sorry, what was that? I couldn't hear you over the noise of this bus you threw me under.
And when we extrapolate this principle out we see that the nuclear strike on New York was authorized not by the powerful President of the United States but by a scared little man when he realized that his capacity for leadership wasn't up to the challenges he was facing. And his weakness, not his strength, made him a target for one of his closest allies and the flaming roid-rage brigade who, again, were disenfranchised and wanting to secure a place for themselves in the world that they seemed otherwise incapable of keeping up with. And the jumbo-murder drones that were going to assassinate a whole lot of people -- including you, Tony; you were a named target and I totally saved your life -- were designed and created by exceptional engineers but ordinary people who were afraid that they wouldn't be able to withstand the future.
I think that's the point, Steve: the world is becoming increasingly scared of us because at the end of every film, we've trashed our enemies and all that's left is us and collateral damage. And that fear is going to cause people and governments to behave irrationally and escalate conflicts.
I get that, but escalate to what? Launching jumbo-murder drones to wipe out most of the population? Major coups? Nuking New York? They -- not us, they -- already did all of that. There is nobody outside this room that I would trust to be on some kind of 'oversight committee' because they're the very people who are reacting irrationally to us.
Okay, maybe Clint. But as far as I'm concerned, even with Tony on the team, the safest hands are still our own.
Gee thanks, I'm feeling the love. But I'm still going to agree to this because if I don't the government will boycott me.
I thought you stopped selling them weapons, Tony.
I did. The current major project is rebuilding the entire power grid.
Wait, you're rebuilding the entire power grid but you're not driving a Tesla? What's up with that?
Uh, yeah. I have some... psychological anomalies? I don't even know. Look, the point is that if we're a non-governmental paramilitary force that goes fighting around the world, then it's very easy for people to label us as terrorists even though we're not, and a side effect of that is that even if the government would turn a blind eye towards our presence -- as they have with Wanda's immigration status -- the government will have to boycott me to avoid being a state-sponsor of terrorism.
Tony, you saved New York from the President trying to nuke it, a point that we've kept quite quiet about. We've got plenty of leverage here. Really: point all this shit out to people and they'll realize that we're more stable than the United States federal government.
You're right, we are -- we totally are. But we're not supposed to admit it because governments have a lot of power and the people in them are vulnerable to inferiority complexes. Which leads to them trying to nuke New York... so they can seem as strong as we are.
Above and beyond which, I need the government to think that it's a strong and capable partner doing the right thing for its people as it funnels truckloads of money into my coffers, licensing or buying my technology -- protected by government-enforced patents, of course -- that it should have been developing through broad public-subsidized research instead. Really, that whole 'September Foundation' nonsense where I'm dumping a truckload of money on kids' random-assed research? That's money the government paid me, and the condition on the grant has me getting free license on and collecting partial royalties from every patent generated by the work that's being funded. That's how neoliberalism works, Steve; that's what I and my family are an icon of: our big money comes from selling to a few people in government, not making a product that's actually popular with the masses.
That's, uh, wow. That's kind of horrifying, Tony.
That may be, but by doing it I'm more likely to get advance warning of brilliant people who are just one lab accident away from starting the next flaming roid-rage brigade. And besides, if I weren't doing this then it'd be goddamned Goldman Sachs or some other high-finance parasite that would be doing it. Most of the money made by extremely wealthy people is just collecting rent on past investment. At least I can say that I'm still doing actual work to try to make the world a better place.
Even when you screw up.
Even when I screw up.
So again, the safest hands are still our own.
But only as long as we pretend like they're not.
Except that we don't have a financial industry in this world because the perpetual chaos and violence would've tanked the stock market so it's a total non-issue here.
Wait, I'm confused; I thought the point behind being the heroes was that we would be protecting people from violent villains, not getting hamstrung by them.
Well that's the problem -- everybody thinks they're trying to be good and maybe even heroic regardless of how laughably incapable they are. And when they realize that they're laughably incapable, a lot of people get really scared and start doing stupid things. Like nuking New York, or trying to stage a coup, or building mega-murder bots. Or asking us to sign our autonomy away at the request of some flunkie who was almost killed by a putting green.
My point is that we should smile and bat our eyelashes and sign their ridiculous papers like the good little superheroes they want us to be, but then keep on saving the world anyway because that's what we do. We don't have to be bound by their papers. Remember: this putz that any of us could wipe the floor with in two seconds works for a chump that tried to nuke us. What do they think they're going to do to top that? Send some sort of 'suicide squad' after us? Hah, no. There's nothing; they've got nothing. But if, by signing, we can get this schmuck to not launch nukes at us then maybe the world will be a slightly safer place for us to protect.
This scene is too damned long. Why hasn't a bomb gone off yet?
But the Secretary is wrong: there were several bombs in there.
First, the whole of the superhero genre — even dating as far back as The Odyssey — doesn’t believe in democracy. It elevates a hero and then slags off everybody around them. From a billion casualties in Independence Day as the catalyst to a feel-good action flick to Odysseus’s long trip that none of his crews survived, the equality of people is simply not part of the narrative. The narrative instead structurally forces the question posed simply in Infernal Affairs: “What thousands must die so that Caesar may become great?” This creates contradiction within characters (like: Why doesn’t Captain America respect the will of the people?) and creates a callous indifference to human life in stark (pun intended) contradiction to the individualistic belief that warrants focus on a nigh-invulnerable hero anyway: when the hero is slaughtering legions of enemies, either in Hercules or in some Chinese historical fantasy, the hero is a person but their victims are just numbers. (This is the unsettling thing about V killing police in his exit from Jordan Tower in V for Vendetta: detective-inspector Finch and Dominic are real people, Creedy’s finger-men are villains, but beat-cops are just there to be culled.) While Captain America 3 does attempt to raise the issue of dehumanizing people down to mere collateral damage, the focus, format, and structure prevent the issue from being taken seriously.
But when we look past the mere assertion of democracy and the will of the people, we promptly encounter the real issue of power and people — particularly weak people — being afraid that their power isn’t enough to maintain itself and thus tending to over-deploy it. Consider: based on a single grainy photograph a kill-squad of normals is organized to assassinate the insanely dangerous super-soldier Bucky. And Black Panther also goes to assassinate Bucky on the same flimsy evidence. And the kill squad shoots everybody with the reckless abandon of people put into a fight-or-flight situation that they were unprepared for. The idea of “due process” being a foundational component of “rule of law” doesn’t enter into this for anybody, and nobody — especially those “upholding” the law (specifically Rhodes and Stark) — notices, not even when a lot of the team gets magically extradited from Germany and thrown into a black-site prison without any kind of trial. The Secretary of State acts with continual extrajudicial power, overextending his authority in a way that makes sense when you remember that the president tried to nuke New York — and we know the president authorized it because Secretary Ross explicitly highlights that he makes sure that nuclear weapons are under control. So what we see playing out is not a contest of the legitimate use of social authority to check the behavior of deviants, but rather — as Foucault would have us see — a few frightened people masquerading as the authority of “the people” to justify action against the deviant to secure and normalize their space.
But that dovetails into another thing that the film gets right: people’s sense or feeling of right and wrong drives their actions far more than their considered lip-service to reasoned policies, with local allegiance having more value than abstracted principle as Hume described in contradiction to enlightenment philosophy’s attempts to blend rationality and moral sentiment. Zemo, the former leader of a death squad, actively murders a bunch of people trying to get revenge for three counts of manslaughter that were not — in the way of The Stranger — adequately grieved, and this is normal behavior for a good family man. T’Challa blames Bucky — based on a grainy photograph — for the death of his father, tries to kill him with no thought to capture or interrogation, and only later thinks to ask “If you weren’t guilty, why did you run?” as if “Because some fucking psycho with vibranium claws and an irrational hate-on was chasing me!” wasn’t the obvious answer. Tony is compromised throughout the entire movie — by physical pain, by an ongoing grieving process, by Pepper walking out when he went back on his promise from the end of Iron Man 3 — but the strength of his ego prevents him from reviewing his current choices despite them being a direct result of all the mistakes that he feels guilty about. And Secretary Ross believes in the rightness of the state, so he’ll happily send a whole lot of normal troops (of unknown allegiance in no clear chain of command) to try to stop the most dangerous and heavily armored people on the planet with shockingly little regard for the lives put at his disposal. These characters all routinely ignore what they would claim to believe about the world so that they can reinforce how they feel about themselves. They all believe they can be good (or at least justified) even when their personal fears, neuroses and shortcomings are making them behave badly.
I would speculate that the alternative to all of this is helping people feel secure in society so that they’re not compelled to dubiously justify to themselves the bad choices that they feel they’ve been forced into. This point even appears in the film: Tony asserts that he’s doing bad things to prevent worse things, the implication of which is that if he were secure against the worse things, then he wouldn’t try to rationalize or do the bad things. But that’s just the theory; I don’t know that it’s in our national character to actually widely practice such a thing.
Second, and related to the individualistic deployment of power, is the film as a piece of neoliberal propaganda. And this is just plain baggage on Iron Man: the weakness of the government, the ineptitude that led to crisis, requires private citizens — often wealthy but routinely powerful — to step up and act for the good of society, and the government should be thankful for it and depend on the private sector to drive the public good rather than doing anything themselves. And the first part is a nice mythos to buy into, that some people like Elon Musk can really try to change the whole world — except if you were watching the Tesla 3 reveal and press, Elon knows that he can’t do it alone, or even with just his employees, which is why he opened his patents to his competitors and was vociferously thanking all of his high-dollar customers. But if we’ve got Musk then we’ve also got Gates with a peculiar mix of ideas both good and bad, and the Koch brothers whose ideas are mostly bad (but they are allegedly starting to fund some principled social justice work), and then also trolls like Sheldon Adelson, Lloyd Blankfein, and Donald Trump. And they all think that government is doing a bad job when it’s not doing what they want it to, and that the money they’ve extracted* from the system is what makes their criticism legitimate, when the truth is that what the government is doing a bad job of is taxing incomes and re-circulating social goods to ensure a solid baseline quality of life for all of its citizens instead of, for example, letting them be poisoned in their homes in current-headline-example Flint.
Neoliberalism could be about expanding equality throughout the world and opening up the horizon for human potential, but it instead has an overwhelming tendency to subvert governance of the people to powerful private interests, with Disney (which owns Marvel) and its egregious, self-interested, copyright-perpetuating behavior being a wholly apropos example. See, it used to be that a leader’s disinterest in a subject was seen as crucial for their objectivity in decision making: Solomon doesn’t give a shit about that baby, so his proposal to commit infanticide to figure out who the mother is is considered “wise” and not “dangerously sociopathic.” And this detachment gave us our notions of the elite: people with enough power to be untroubled should be able to provide more rationally objective guidance to society than people faced with troubles every day. But then it turns out that troubles are complex and elites don’t necessarily grasp the complexity of the situation, so we prefer experts to provide the guidance. But experts tend to bring their legacy interests with them, working to show (reciprocal) favor to their friends and entrenched institutions and making consistent application of rule of law for the good of the people an utter anomaly instead of the foundation of our society.
And then we get into a really strange place where we say we’ve got a really rich society, except that the government won’t collect taxes from the top or mandate higher wages for the bottom to help close its spending deficit because governance isn’t something we really do anymore. So the executives serving as board members for their peers (all advised by the same high-finance management consultants that get paid for action regardless of success or failure) participate in what amounts to a self-indulgent corporate circle-jerk of boosting their private compensation while suppressing pay and forcing business risk down on their labor force. But that’s in the real world; in the fictional world we might ask: how big is Stark Industries’ payroll? How do they do performance management? Do they have a gender pay gap despite Pepper running the show? Is Iron Man 4 going to feature Tony facing the smoldering wrath of 10,000 former employees that just wanted to have modest destinies involving raising kids, sending them off to college, and then a brief retirement featuring grandchildren before a peaceful demise but now that simple dream is denied them because they were laid off by a guy who maintains a cadre of lobbyists asserting that his taxes are too high and welfare programs are too generous while his accountants figure out how to shuffle money to international branches and shell companies to lower their bills? Because that’s how Tony Stark would be the richest guy in the world while the United States continued to be in debt up to its collective eyeballs.
Third is representation of race and sex. While the film passes the Bechdel test, as noted, and does a really nice job of developing Natasha’s personality and letting Wanda grow, and had an older woman — Marisa Tomei — on screen and regarded as attractive for like one whole minute(!), and does have three black men that speak and (mostly) survive the movie, there are some noticeable gaps. First, with the exception of T’Challa’s de-gendered bodyguard, black women exist to be victims and mourners. But this is a step up from, second, Asians that apparently don’t exist at all, to such an extent that I’m pretty certain they were grossly under-represented, especially at MIT.
But things turn strange when we get to the PG-13 representations of female sexuality, because they combine “hotness” with utter asexuality. Natasha puts a lot of legitimate work into caring — in a platonic way — for her friends while Wanda works more on personal exploration and development, and both are good to see played out. But then Black Widow goes into combat with her cleavage prominently on display. And Scarlet Witch’s combat outfit is an impractical corset that drew smirks from my students. The only thing in the women’s outfits that adds to their capacity for action, rather than attracting gaze, would be Natasha’s shock bracelets. Most of the guys are wearing armor — from padding on Captain America to Black Panther’s bulletproof bodysuit to Rhodes’s full-on “War Machine” — but neither of the women is. Hawkeye’s outfit is as minimalist as dudes get here; other than the goggles, Spider-Man’s suit is simultaneously concealing and non-functional.
And this is why Natasha’s “I was sterilized” scene in Avengers 2 seemed strangely out of place despite resonating with a lot of women: these films are running on an abstinence-only sex policy for their powerful women, so being sterilized — and what are the chances that irradiated Bruce’s sperm still work, anyway? — was functionally a non-issue. But think about it: on the prime-time spin-off, Agents of Shield, the women can both get laid and then go kick ass or do amazing things. But on the big screen we’ve only had Pepper, who bags Tony and then also wears his suit and later wields flaming roid-rage superpowers for a short time before… she’s gone, and certainly not pregnant. And admittedly she’s gone in an arc that makes total sense for her character vis-a-vis Tony (and was totally obvious given her absence in Avengers 2 combined with the puerile jealousy of Tony against Bruce and Natasha, even if Paltrow’s 2014 exit made it an easy arc), but the point is that they didn’t even try to re-cast the part. Meanwhile, young Agent Carter does get kissed by 90-something-year-old Captain America (it gets creepier the more you think about it) but is mostly just running errands in this movie. She never wears an unfashionable bullet-proof vest, never even pulls a gun from her fashionable thigh-holster. She spends about four seconds getting tossed into furniture by mind-controlled Bucky, but the rest of the film is being the pretty blond helping out around the office. Speaking of hair, contrary to the pretty flowing hair on the headlining women, Maria Hill’s efficiently cropped Avengers 1 hair doesn’t show up in this film and neither does Maria, despite being employed by Tony (as mentioned in Agents of Shield).
The point is that there’s an aesthetic layer of femininity, especially white femininity, that normalizes/enforces physical vulnerability even while preaching physical empowerment and modeling chastity in the frustrated get-up of a sexually isolated dominatrix. It’s the chainmail bikini updated for the 21st century. What’s more true is what Magneto said to Mystique in X-Men: First Class: “If you’re using half your concentration to look normal, then you’re only half paying attention to whatever else you’re doing.” Which you can read as again advocating for developing a practice of priorities and focusing on getting things done on the personal-and-real level. Or, in terms of Marvel movies, we should at least have Maria Hill suiting up in power armor to be the replacement War Machine.
Side note on what else we didn’t see: we didn’t see Clint’s wife with their infant being left alone while he goes off trying to get himself killed again against no particularly clear and present danger. The good and dutiful home-making wife is invisible, like Penelope in the Odyssey.
But for all of that, it’s not a bad movie if you can buy into the neoliberal iconizing anti-democratic elitism that has been part-and-parcel to the dramatizing of social conflict since (at least) the Greeks. It’s just that we also have to wonder whether the abstractions that get codified into our culture are undermining the aspirations of our society, and critique it at that level.
And we could critique the luck-and-McGuffin scripted plotline, like, with the scene where Bucky explains the other super-soldiers and Steve is like “I can’t call Tony because he wouldn’t believe me” but then also doesn’t call Natasha, whose phone number he has as we saw earlier in the film, and who is specifically sympathetic to him as we saw earlier in the film, and is intimately familiar with Soviet stuff and a native Russian speaker. The phone call that didn’t happen would’ve gone like this: “Hi Nat, it’s me and Bucky and we’re fine, but that weirdo shrink — has anybody re-run a background check on him? Because we’re pretty certain he’s headed to the Hydra base in Siberia to activate a super-soldier death squad. You know the location I’m talking about, right? Great. We’d love to come along, but are kind of in hiding here until you can vindicate us. Thanks.” Heck, they don’t even say “We’re going to Siberia to take out a super-soldier death squad, want to come along?” when she’s right there helping them in the hangar. But as with Star Trek: Into Darkness, we’re supposed to feel this movie rather than think about it, so criticisms of plot devices will not stick to it.
But we can critique it like a geek, too. Because Spider-Man.
See, the thing about this particular reset of Spider-Man is that he comes into being after the terrigen poisoning from the end of Agents of Shield season 2 starts causing an outbreak of mutations; ergo his obscured history here suggests not that he was bitten by a radioactive spider (so passé!) but rather that he had alien spider genes in his history — perhaps one of his ancestors was a Drider? — that were awakened, and the reality is that he’s one of the Inhumans. So this is odd, but we can roll with it. What we can’t roll with is this: Tony Stark sees the kid on YouTube and tracks him down and just picks him up despite all of Hydra, Shield, and the United States Federal Government trying to track down and pick up inhumans for — guessing at timeline based on Peter’s assertion of 6 months — uh, 6 months. So the extreme likelihood is that Peter would have been picked up by Rosalind and stuffed in a freezer that got misappropriated by Hydra, and then he’d have gotten a rude awakening from Lash. And that’s true even if he’s “enhanced” and not “inhuman” because it’s really not clear that there’s any kind of a discernible difference at this juncture.
So when I’m down on the notion of Spider-Man joining the Avengers, it’s not just because I don’t want some obnoxious kid soaking up screen-time. And for everybody who says “but he’s supposed to be obnoxious like that, he’s so well-written” — no, he’s not: my students laughed derisively when the whiz-kid was delighted by passing a mere algebra test while I wondered how he could know that a DVD player left out on the curb was just fine merely by looking at it. No, the real issue is that the late crossover disrupts the continuity of the Marvel cinema universe. Which is normal for the ret-con-happy pulp-comic world that vomits up and forgets all ideas with no regard for their quality, but was something I’d (foolishly) hoped they’d be getting past — although it now seems obvious that Doctor Strange is how they’re going to integrate ret-cons until they start losing money.
As a side note: if they really wanted to bring in a spider-power inhuman despite inhumans being targeted by three different organizations for six months, Maria Hill (again!) was tucked away in Tony’s organization so if she were to acquire powers, hiding them from everybody but Tony would’ve been trivial rather than a major gaffe in continuity. It would’ve also explained how she could have a custom suit on-hand instead of magically fabricated on an insanely tight time-line, oops, while also working against the normalized aesthetic femininity — even though it would sadly foreclose the power armor option.
* Tangent: beware of articles that assert that extremely high-income people are “making” or “earning” money; that’s normative language that tries to equate their work with actual labor, as if there were still a path in this new gilded age for a laborer to start in a mail-room and work their way up to CEO. There is not.
Captain America vs. The Post-Structuralist Critique
The practice of judging and condemning morally, is the favourite revenge of the intellectually shallow on those who are less so, it is also a kind of indemnity for their being badly endowed by nature, and finally, it is an opportunity for acquiring spirit and BECOMING subtle — malice spiritualises. –Nietzsche, Beyond Good and Evil
So I’m reading Haidt’s The Righteous Mind, and while it does a good job of demonstrating that morality is too nebulous to be valued in a debate, and that to clarify it with a specific criterion is to tilt into disaster (page 113), the possibility that morality is too nebulous to be clearly defined in an expansive post-structural society does not occur to Haidt — despite the wide variety of vitriolic feedback he receives on his (fundamentally flawed) work.
But let’s start with the decent part: he’s working from Hume, who argued that people cast moral judgments based on their emotional reactions to circumstances rather than a larger (deontological or teleological) rationalized framework, which is fine but it’s also why we don’t poll random strangers on what we ought to do: if we don’t trust their aesthetic sense, then we’re not going to trust their moral judgment if it’s justified with nothing more than Hume’s emotive base. The (failed) project of morality in the enlightenment was to get a system of determining goodness that anybody could use so that their judgments wouldn’t be arbitrary, but Hume came along and pissed on all of that (as Haidt quotes on page 115 — and yes, it really took 115 pages to get to this point; I’ve not suffered through a writer so gratingly loquacious since Veblen’s Theory of the Leisure Class).
The good part is when we start breaking down morality into its common components (page 153 if you want the short form): there’s caring/harm-prevention, fairness, loyalty, respect for authority/process, and the nebulous “sanctity” or disgust-avoidance (that I suspect Haidt mis-surveys when he finds it missing) — with the ones in bold being useful for debaters to consider in place of valuing mere “morality.” To these Haidt then adds liberty, which gets activated when authority betrays the people it ought to be loyal to and caring for, so despite our common love of the word “liberty,” I disagree with Haidt’s use of it as a vector of morality. What Haidt doesn’t consider is that people touting liberty don’t consider the likelihood of their failure or the consequences they’ll suffer as a result — they want to be free to succeed and want other people to be free to fail, not the other way around. It’s rather like the notion of equality: we only like equality when we’re at (or sympathetic to) a disadvantage — honestly arguing for equality from a position of power goes against self-interest and is thus vanishingly rare.
But the issue of “free to succeed” does point to an embarrassing gap in Haidt’s decomposition of morality: Aspiration. His core metaphor of rationality being a rider on a moving elephant of emotion evokes a teleological understanding that the elephant is going somewhere and not just milling around. His repetition of the narrative of the lazy welfare cheat focuses on the notion of cheating, but ignores the (Puritanical) moral judgment on laziness. And his strange clinging to the proportionality of rewards to defend inequality — without noting that the absurd lack of proportionality was exactly what made economic inequality a flashpoint issue for a great many people — actually justifies Aspiration as a moral vector.
The argument is simple: if somebody has achieved what you want to achieve or is trying to achieve what you want to achieve, you will be sympathetic to their intentions regardless of the disastrous consequences of their actions. Consider the rich, sexually fulfilled, physically fit, super-genius titan of industry Tony Stark and how we’re supposed to ignore the bald fact that he unleashed an extraterrestrial artificial intelligence from the weapon of a villain who was specifically known for his treachery in Age of Ultron because he somehow thought it would be a good idea (despite Captain America 2‘s plot showing that it wasn’t a good idea when Nick Fury was pushing for something similar, either). It’s the moral feeling when parents hope for their children’s future, or are disappointed — quietly or otherwise — by their children’s lack of accomplishment or ambition, with the cross-cultural reference point being in Trigun where the sand-steamer designer shows his work to his son and wonders aloud what his son will accomplish to benefit humanity. It’s where we get the phrase “give ’em an A for effort,” with the evolutionary basis being the people who poked tigers and the like.
These days, the suspension of moral judgment for our Aspiration most obviously relates to the banksters and executives who we would like to hate, but honestly most of us are too jealous of their ability to extract money from the system so we invest more in our 401ks that they’re leeching off of. Veblen would call it an exploit, Nietzsche would call it the will to power, we look at it and have to admit that we wish we could have the opportunity to do such a thing, and that’s the reason I’ve heard given as to why poor Republicans appear to vote against their economic interest: they’re voting on their fiscal aspiration rather than their current fiscal reality. The flip-side is the presumed categorical lack of aspiration in poor people — or, since it’s 2016, Millennials — as demonstrated by their inability to do anything that we only wish we could do (that fairly successful people can also be fairly lazy is a surprise), and, related to that, the seeming smallness of Democrats’ aspirations — “now you have to buy health insurance, and we’re proud of that!” — makes it hard to keep their base charged up.
It may be argued that aspiration is part of sanctity, but this isn’t quite the case. The exact argument — going back to the Greeks per Foucault’s History of Sexuality vol 2 — is that
sexual pleasure was generally characterized as being, not a bearer of evil, but ontologically or qualitatively inferior—for several reasons: it was common to animals and men (and thus did not constitute a specifically human trait); it was mixed with privation and suffering (in contrast to the pleasures of sight and hearing); it depended on the body and its necessities and it was aimed at restoring the organism to its state prior to need.
I mean, consider the different response you’d have to a nymphomaniacal hedonist as opposed to a necrophiliac. You’re likely disgusted by the very notion of necrophilia — and that’s the Sanctity aspect of morality done proper — but lacking any details on the hedonist, you probably started from a position of disappointment because a human should be so much more (unless you’re aspiring to be a nymphomaniacal hedonist, in which case you were probably thinking “nice work if you can get it”). So there is a wedge in there between the actions that are condemned because they’re disgusting and the actions that are judged because, according to the person casting the judgment, they’re not a good enough use of human life, which perhaps sounds really classist and flashes dignity politics warnings — but let’s revisit the Greek argument against sex as it was remixed in Donnie Darko:
The rabbit’s not like us. It has no history books, no photographs, no knowledge of sorrow or regret. I mean, I’m sorry, Miss Pommeroy. Don’t get me wrong. You know, I like rabbits and all. They’re cute and they’re horny. And if you’re cute and you’re horny, then you’re probably happy that you don’t know who you are or why you’re even alive. You just wanna have sex as many times as possible before you die. I just don’t see the point in crying over a dead rabbit, you know, who never even feared death to begin with.
And what we see is that, to the person casting judgment, the qualitatively inferior behaviors and characteristics invite judgment while, contrariwise, the aspirational behaviors and characteristics (of, for example, the modernized Goethe’s Faust we call Tony Stark) are morally justified.
But why are so many people casting moral judgment anyway? Going into the third part of the book, we don’t have an answer. We did, however, get a hint early on: Haidt asked them to. Really, that’s the reason, because these scenarios feature disclaimers like, with regards to eating a dead pet, “nobody saw them do this,” or with incest between consenting non-reproducing adults, “they keep that night as a special secret between them.” Only it’s obvious that it’s not a secret, that there was a witness, because here’s this leering researcher calling upon the interviewee to cast judgment on the people who were allegedly unobserved. Really: they had to be observed for moral judgment to be cast because morality, coming down from when everybody shared a cave, was fundamentally about social practice — MacIntyre’s After Virtue is illuminating on this point, but even Haidt asserts social behaviors in primates and pre-history to justify the vectors he identifies. So why in the name of Cthulhu’s cloaca is he surveying on private behaviors?
His behavior reminds me of the bit of the Bible — John 8 — where the morality police take a woman to Jesus saying “We caught her committing adultery! What now?” and Jesus looks at the lecherous perverts (who didn’t bring in the man, presumably because they aspired to be so libertine themselves) and doesn’t say anything, but starts writing in the dirt. And — the popular narrative among Christians is that — he may’ve been writing the laws most commonly broken until the morality police realized their imperfection and wandered off. Or my alternate speculation is that he may have written the story about Noah cursing his son Canaan for subversive lechery. Regardless, as far as anybody knows, the brief interaction that the woman had with Jesus was brought about only and exclusively by the morality police insisting that there had to be a judgment — ideally one that allowed them to kill the woman or, more nefariously, tell the Roman occupiers that Jesus had subversively taken on the Roman state’s role in ordering an execution. But the bigger picture, that I would like to imagine Jesus saw, is that trying to instigate unnecessary judgments and condemnations is a pretty shitty way of increasing the amount of peace and love in the world.
So when Haidt says “This guy fucks dead chickens! How do you feel about that?” the most honest answer I can come up with is “Dude, you can go fuck yourself and I promise I won’t judge you if you just refrain from telling me about it.”
So what are all the kids talking about these days? “Resolved: To alleviate income inequality in the United States, increased spending on public infrastructure should be prioritized over increased spending on means-tested welfare programs.” Really?
Not to put too fine of a point on it, but what the fuck does that even mean? We live in a world where 62 people have the same net worth as 3,600,000,000 people. And this means, bluntly, that any of those 62 people are really just plain better than you are, so you should get with their program. Really, what did you think Queen Bey meant by “Get in Formation”? But this isn’t just a capitalism thing — it’s part of human nature that gets refracted through our society: there are billionaires in “communist” China, and even Stalin had to use perks to keep highly skilled workers productive. A total alleviation of income inequality would be income equality, which has never been a thing because people pretty consistently want to be more certain of themselves and their future than they are of their neighbors, so what we’ve got here isn’t so much a resolution as it is buzzword bingo.
While I would absolutely agree that acute income inequality correlates to and statistically contributes to a plethora of social maladies, you have to talk about the maladies you want to solve before you choose your angle of attack. And regardless of your angle of attack, inequality will remain because it’s coming out of the human condition. And it’s not just me — for whom neoliberalism has worked out pretty well — saying that; that’s Robert Reich’s position and he’s almost certainly both smarter and richer than you are.
Tangent: If you want an alternate perspective, perhaps you could try Paul Mason’s Postcapitalism — but I just re-arranged my 401k so you’re not going to get that perspective from me. I listen more closely to people like Tim O’Reilly.
Americans who are struggling do not see themselves in abstract language like “the poor” or “poverty.” This is partly because such language is seen as quite pejorative in America. To be poor is to have failed in pursuit of the American Dream. In too many ways, people who are poor are reviled. The first thing we need to do is stop blaming people and start talking about their real lives.
The economy is a result of the rules we create and the choices we make. The people who are struggling to make ends meet do so because we have built — through intentional choice — an economy that produces inadequate incomes for more than one-third of all Americans. So we need to have a real debate about what to do to build an economy that doesn’t produce such misery.
The misery caused by the current structure of our economy is what this debate ought to be coming down to, but at the moment it’s really one-sided. Let’s review:
Means-testing welfare reinforces the stigma of welfare, driving low-income people away from the programs and mitigating their capacity for success, because everybody knows that it’s morally reprehensible to be trapped in a cycle of government dependence.
The most popular welfare programs include Medicaid and the Earned Income Tax Credit which are basically subsidizing private businesses that are too stingy to properly compensate their employees; expanding welfare programs such as these merely socialize the expenses of running a business while the owners and managers privatize the revenue, trapping their profit margin in a vicious cycle of government dependence.
Chronic under-funding of infrastructure leads to disastrous failures, most notably in Flint, Michigan, where tap water is corrosive and leaden, but that sort of failure is happening at a smaller scale all across the US. The impact of this is that all impacted citizens get stuck buying common resources from expensive markets — buying bottled water instead of just using tap water — which drains their scarce resources faster, but only after they know there’s a problem that may have poisoned them.
One of the impacts of that situation was that restaurants and cafes got jacked because the tap water they were relying on was suddenly unreliable (as I heard from my barista friends downtown). The impact was short-lived, but think about what the impact on an economy would be over the long term. If I were to say “I’m going to go open a coffee shop in Flint right now,” you’d think that I was nuts, and you wouldn’t be alone: banks already don’t like lending start-up capital to restaurants et al. (as I learned from the owner of a local bistro) due to the baseline risks in the business, and our failing infrastructure is exacerbating that risk, threatening to collapse the low-wage, low-margin swathes of our economy that lots of people are working in and depending on for what they’ve got.
The good news is that we’ve got a goodly batch of senators — inclusive of Oregon’s senators — that are trying to increase infrastructure funding in the status quo.
A short list will suffice: water treatment, roads, bridges, public housing, passenger and freight rail, marine ports and inland waterways, national parks, broadband, the electric grid, schools, hospitals, government buildings, dams – in other words, to use a medical metaphor, the conditions for the healthy life of a nation.
Put another way, if it’s counted as part of a public service so the government can make demands of it, then it’s probably part of our infrastructure… even if the physical resource is specifically allocated to a means-tested welfare program — e.g.: public housing.
So there’s a lot to talk about up there, and a lot that’s falling apart. Or talk about how rich people clustering in affluent areas and neglecting infrastructure in the spaces in between — like the valley of ashes in The Great Gatsby — entrenches income inequality as documented by Robert Putnam in Our Kids. You can even talk up the internet access you’re likely taking for granted right now to research this topic! And you might think that I’m totally saying that we should alleviate income inequality by prioritizing spending on infrastructure over increased spending on means-tested welfare programs. But we can’t actually prove that so we’re not winning here; the argument is squishy.
So let’s talk strategy: for this topic, you want to be speaking first to sketch out the positions — either
that just spending won’t clearly alleviate income inequality so, given that the resolution presents a false choice, the resolution cannot be affirmed or
that because the resolution is bounded to only two choices, to say that they’re both wrong isn’t a real option — the money is burning a hole in the deficit-spending framers’ pocket — and a negative ballot carries an implicit advocacy for increased spending on means-tested welfare, even if just in-balance with infrastructure.
And the catch is that in order to win with infrastructure, you have to actually attack means-tested welfare. Fortunately, this is easy because throwing money at our current welfare system, as-is, in isolation, is a really shitty thing to do. Here’s the scenario:
Computers encourage both the government and the banks to operate on a scale at which consideration of individual circumstance isn’t really possible. The result is unstoppable error by government (say, the frequent miscalculations that leave welfare recipients at constant risk of being wrongly accused of fraud) and unstoppable fraud by banks (say, robo-signing endlessly repackaged and resold mortgages and credit card debt). For both government and banks, such scaling up inevitably creates injustices for certain individuals, but so long as the victims are powerless there won’t be much of a legal or political reckoning. The person [who gets] tossed into jail for welfare fraud he didn’t commit or tossed out of his house because he was mistakenly judged not to be paying his mortgage may or may not get it all sorted out in the end, but even if he does the feedback loop won’t impose too much pain.
writes Tim Noah for the New York Times in his summary of Taibbi’s The Divide, which specifies that there are 26,000 cases of welfare fraud pursued per year. But let’s double down on this by listening to the Brennan Center for Justice noting that we’ve got 4.4 million ex-convicts whose right to vote has been revoked — and if we expand means-tested welfare with the collateral damage of welfare fraud accusations, then we’re also going to be expanding fraud convictions that many states will then use to disenfranchise voters, ostensibly for committing fraud, but really for being poor. So the choice to not favor infrastructure spending over means-tested welfare ends up with people being unable to vote because they couldn’t make ends meet and our democracy being diminished.
That’s the short form that nails an impact and might fit in PF, but let me quote Taibbi at length:
Over and over again, we hear that if you owe money in a certain way, or if you receive a certain kind of public assistance, you forfeit this or that line item in the Bill of Rights. If you’re a person of means, you get full service for all ten amendments, and even a few that aren’t listed. But if you owe, if you rent, you get a slightly thinner, more tubercular version of the Fourth Amendment, the First Amendment, the Fifth and Sixth Amendments, and so on. … It’s not that it’s written anywhere that if you’re black and you live in the projects, you don’t get protection against illegal searches — it just sort of works out that way. And if this makes any sense at all, it’s not about skin color. This is a cultural kind of bias. White people who live the wrong way get caught in the net, too. And as the income gap gets bigger and bigger, more and more white people are being pushed behind the line.
in the late 1990s and early 2000s: Clinton wrote that “too many of those on welfare had known nothing but dependency all their lives.” She suggested that women recipients were “sitting around the house doing nothing.” She described the “move from welfare to work” as “the transition from dependency to dignity.” Or a “substitute dignity for dependence.” Put more simply, she stated, “these people are no longer deadbeats—they’re actually out there being productive.” … In sum, she has frequently validated a pathologization of poor black women that has often served as a pretext for Republican assaults on the social safety net. She has not repudiated these remarks. … Indeed, Clinton has long embraced welfare reform, a policy more hostile to women than almost any other enacted in recent decades.
But let’s look closer — it’s not the spending that’s the problem, it’s the legislation that’s designed to limit spending:
Passed by a Republican Congress, the bill was signed in 1996 by President Bill Clinton, eager to make good on his pledge to “end welfare as we know it.” … What that meant was a five-year federal limit on receiving welfare. … According to a recent Harper’s story by Virginia Sole-Smith, “For every hundred families with children that are living in poverty, sixty-eight were able to access cash assistance before Bill Clinton’s welfare reform. By 2013, that number had fallen to twenty-six.”
And we can’t just “spend more” to get around that time-boxed limit. The limit we can get around is that “TANF block grants were not set to adjust for inflation and, according to the Center on Budget and Policy Priorities, the program’s buying power has declined by more than a third since 1997.” But that’s only getting to a quarter of people in poverty while putting them at risk of being accused of welfare fraud. So given only a choice of spending between infrastructure and means-tested welfare: while neither is likely to “alleviate income inequality,” at least infrastructure will make our citizens less vulnerable to their government and employers instead of more.
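The buying-power claim is just compound-inflation arithmetic. A minimal sketch, assuming a hypothetical ~2.2% average annual inflation rate over 1997–2016 (the dollar figure and rate here are illustrative round numbers, not from the CBPP report):

```python
# Sketch: why a block grant fixed in nominal dollars shrinks in real terms.
def real_value(nominal, inflation_rate, years):
    """Purchasing power of a fixed nominal amount after `years` of inflation."""
    return nominal / ((1 + inflation_rate) ** years)

grant = 16.5e9  # hypothetical fixed annual block grant, in dollars
remaining = real_value(grant, 0.022, 19)  # 19 years at ~2.2% average inflation
decline = 1 - remaining / grant
print(f"Buying power lost since 1997: {decline:.0%}")  # roughly a third
```

Run with those assumed numbers, the erosion comes out to about a third, which is the shape of the CBPP finding: nobody has to cut anything for the program to shrink.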
If you want to argue a straight negative extolling the virtues of means-tested welfare, I’m afraid I can’t help you with that, and I’m concerned that you’ve not been paying attention, so let’s recap:
An independent panel has concluded that disregard for the concerns of poor and minority people contributed to the government’s slow response to complaints from residents of Flint, Mich., about the foul and discolored water that was making them sick, determining that the crisis “is a story of government failure, intransigence, unpreparedness, delay, inaction and environmental injustice.”
Welfare isn’t going to close the economic gap between a person who qualifies for it and a person with an income like mine, and means-tested welfare does jack-all for people the government may casually poison in their homes. Putting more money into the status quo of welfare is a bad idea and I don’t know how somebody would argue for it.
Let’s start this off with a few foundational statements:
I would rather have kids who can think like me than kids who look like me, and I now have such a student, one who is even keen on a career like mine; this is first and foremost for her.
I volunteer hundreds of hours each year in an attempt to help kids navigate around the boring mistakes that consumed years of my life, and this extends that trend.
A great many mistakes come from believing in symbolic structures without a critical interrogation of the power that wants the symbol to be accepted as true, so we will be inspecting those structures.
While there’s an increasing amount of advice available on this topic, this is the compilation of advice I’m giving. So with that in mind, let’s begin to transition from adolescence into adulthood as an information system analyst or architect.
Misconception: Computer Science is Programming
I am a bad role model for you, but not for the reasons you might think. I’m a bad role model because I don’t know how to lead other people to the kind of success — lots of autonomy, solid paycheck, decent status — that I’ve had as a computer programmer. And this is because companies fetishize credentials, as if having a degree in Computer Science makes a person a capable programmer in a corporate environment: it does not. My degree is in Public Relations — solidly ironic for a reclusive misanthrope — and I don’t know that I’d be able to get past a phone screen if I went out job hunting because there are a lot of pricks in the world. But it turns out I’m not alone:
And you can follow these not-computer-science technologists on Twitter, which is something cool that we didn’t have when I was a teenager.
Minor Tangent: Programming, Actually!
If at any time you are sad that I’m not talking about programming here, pause to watch Laurie Voss (@seldo) talk for two hours about Stuff Everybody Knows. But I’m not talking about programming because it should be different by the time it matters to you. The harder technologies that I suspect are likely to be relevant in 5 years and beyond are security (Morgan Marquis-Boire — @headhntr — is the person I’d start learning from) and Blockchain (of which Bitcoin is only one use, but I don’t know enough to make recommendations here), but I spend almost no time on them — they are Computer Sciencey. Alistair Croll (@acroll) has his shortlist of future-driving tech, and the greater-than-ever capacity for machine learning through both hyperreality and networked experience will soon be challenging — but not necessarily eliminating — our scarcity-oriented socioeconomic structures, a point that Tim O’Reilly (@timoreilly) gets. If you’re looking to intercept an industrial sector in the near-term, there’s a report for that.
The key message here is that what you study in college doesn’t actually matter very much once you get started down a career path, and also that it’s easier to find role-models than ever before. But, counter to that point, it is worth mentioning that credential-fetishizing organizations — and the bigger they are, the more they do it — will likely prefer a Bachelor of Science degree to a Bachelor of Arts degree, and the preference may be reflected in your paycheck. The underlying truth of this point is one of the complexities of life: you have to position your actions to align to the current reality, but then also steer it towards the future you want.
Starting on a Lighter Note: Getting Dressed
Let’s take a turn for the frivolous and talk about fashion for a moment. You may not want to take my word for it; I’m constantly complaining about how hard it is to find pants for my < 29″ waist with legs that can press 540lbs. So I’ll be referencing everybody else.
First, the discussion I had with a director of lawyers (rocking a half-million per year salary) came down to simply this: wear the clothes that nobody you care about will judge you for. Now, to be fair, she was talking about when she was representing a client in front of a jury during high-stakes court proceedings, and not allowing the jury to be distracted by what she wore. But the same principle applies elsewhere: fill your wardrobe with outfits that will not draw attention away from your skills and abilities. And this is a kind of physical manifestation of Postel’s Law that we’ll get to in a moment.
But more specifically, there is a trend towards having a sort of personal uniform to minimize the effort of choosing an outfit. Women are starting to do it, having noticed in Steve Jobs how well it seems to work out for high-status men (who, in years past, were always wearing mostly uniform suits and tuxes anyway). For our territory on the wide edge of technology, I would point to Timoni West’s description of her outfit — pay particular attention to “So when I find a product I like, I buy it in bulk” because it really sucks to find something simple that works perfectly, and then you wear it out and go back for another… and it doesn’t exist anymore (Eddie Bauer, WTF?). Or, to put it another way,
All I want is a black hoodie, in my size, with a zip and thumbholes, that I can afford. Capitalism, what are you even for?
That aside, my advice goes like this:
I basically have two color palettes: the preferred black/blue/grey vs. the occasional green/brown/orange. The only common crossover is switching blue jeans for green cargo pants; other than that, I just decide which palette to use on any given day and grab clothes. If I’m needing to look managerial then I’ll wear a buttoned shirt, otherwise I’m likely to be tossing on a hoodie. Additionally (and this is for anybody who is crazy-lean like me), get clothes that fit your form and express your physicality, even if it means you’re buying the same damned size and cut of jeans that you were 20 years ago, because your movement and gestures are a critical portion of your communication skills, and if you’re floating in clothes too large for you (a habit enforced through years of your parents insisting that you’d grow into them) then you’re mitigating your ability to communicate effectively.
Side note: guys, if you have blue eyes, blue shirts are your friend — or if you’re wearing a suit, go for a blue tie. I speak from experience when I assure you that it will catch the attention of the sorts of women who go for blue eyes.
So, to sum up: populate your wardrobe with a minimal variety of almost nondescript group-and-role-appropriate clothes that fit you well and can be mixed-and-matched with no particular cognitive effort.
But the underlying principle here is Postel’s Law: Be conservative in what you do, but liberal in what you accept from others. He was talking about programming, but it’s really a vital two-part life-lesson:
If you control your public tone to minimize first-impression offenses, then you can ensure that anybody who takes offense anyway really is a colossal asshole you should avoid. And
“Most people mean well. Even when they screw up. You get the best out of people when you keep that in mind.” —Nicole Sullivan
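Postel was, again, talking about protocol code, and the engineering version of the principle is worth seeing once. A minimal sketch under an invented task (the format list and function name are mine, not from any source): accept timestamps in several messy forms, but only ever emit one strict form.

```python
from datetime import datetime

# Postel's Law in miniature: liberal in what we accept,
# conservative in what we emit. (Hypothetical normalization task.)
ACCEPTED_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]

def normalize(raw):
    """Accept a date in any known sloppy format; always emit ISO 8601."""
    for fmt in ACCEPTED_FORMATS:
        try:
            parsed = datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue  # not this format; try the next one
        return parsed.strftime("%Y-%m-%d")
    raise ValueError(f"unrecognized date: {raw!r}")

print(normalize(" 3/14/2016 "))  # -> 2016-03-14
```

The asymmetry is the point: tolerance flows inward, discipline flows outward, and everybody downstream of you has an easier life.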
To re-clarify because this is important:
Postel’s Law is not a reason to not be badass.
There is no reason to not be badass.
If you are ambitious and successful in life, many men will be intimidated. That's a good thing. Those men are best avoided. #tothegirls2016
The most substantial mistake I made was getting married to an art student. And what I learned is that being deeply committed to an utterly misaligned relationship and unequal partner is, frankly, horrible. And while (heteronormatively) women are becoming aware that dumbing-down for men sucks, educated men have been dumbly accepting the isolation of intellectual inequality as a matter of patriarchy for generations. Really, a lot of what the errant neoliberal line of feminism is going “WTF?” about — like “having it all” — are things that patriarchy just stoically ignored for the sake of structure, not because it provided a consistent masculine advantage. The patriarchal structures don’t work for everybody, or even most people, and replacing a few men with a few women won’t change that, hence my assertion of “errant.” Point is: don’t worry about a long-term relationship until you know who you are and what you do well enough to evaluate the appropriateness of a partner. And while there are sociological effects to mating for companionship, they’re hardly a warrant for you to pursue an unequal relationship. Anyway, don’t even worry about it for a decade. It’s good to be single: it gives you more room to grow.
When you see all the big corporations talking about how they’re building for diversity, and recruiting for diversity, and laying the pipeline for diversity (oblivious to the sexual connotation), the important question to ask is “What are you doing to maintain your current level of diversity?” because usually they’re not doing much of anything. Kate Heddleston picks up on the “canary in the coal mine” metaphor, where the women exiting technology show that the environment is toxic — but the managerial response is to bring in more canaries. This only compounds the chronic anxiety of the current employees, with that anxiety spilling out in racist and sexist ways as responses to the “new threat.” It’s ugly, as Laurie Penny describes, and you’re likely to experience it differently, and worse, in your 20s (even starting in college, assuming you’re not in the middle of it already) than when you’re older. You may wish, as many women do, that the #NotAllMen would stand up for you against the ugliness. But this misses the dominance element in harassment: the trolls and troglodytes who are behaving in vicious and perverse ways towards you in front of your peers are doing so to demonstrate their power to the group in a way that the group — being composed of the socially inept and utterly non-dominant — is probably not prepared to deal with at all. It’s ugly, and it’s worse if you’re the target, but it makes all the low-status bystanders who are too confused and cowardly to respond feel shittily aware of their low status, too.
There is no bright spot here: as long as people are afraid that the world would be just fine — or more efficient — without them, they will tend towards being aggressive sadists, or sniveling sycophants, with socially adept people switching their behavior depending on who’s watching. The less-bad news is that sexualized aggression seems to be less common in a corporate structure, so you’re more likely to encounter trolls who are grumpy and bitter about all of the shit that they can’t control. Nicole Sullivan has some solid advice on how to deal with these sorts of people.
But this is getting off topic; we were on the issue of meritocracy and an HR department ensuring you don’t have to fight for fair compensation, which is wrong, but people are assured of it to persuade them not to fight for fair compensation. What is true is what Adam Smith observed when writing The Wealth of Nations: that “Masters are always and every where in a sort of tacit, but constant and uniform combination, not to raise the wages of labour,” with what’s new being that it may be illegal if certain lines are crossed. But there are two critical points here: First, negotiate your starting pay, because when everybody gets a 2% raise, the person with the higher salary gets the bigger raise. Second, manage your manager: get them to commit to what they want from you and how substantially they’ll reward you for it.
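The first point is pure compounding arithmetic, and it's worth seeing with numbers. A quick sketch with hypothetical salaries (the figures are made up for illustration):

```python
# Sketch: identical percentage raises preserve and grow an initial
# salary gap in absolute dollars. (All numbers hypothetical.)
def project(salary, raise_pct, years):
    """Apply the same annual raise for `years` years."""
    for _ in range(years):
        salary *= 1 + raise_pct
    return salary

negotiated = project(70_000, 0.02, 10)  # negotiated $5k more at the start
settled    = project(65_000, 0.02, 10)  # took the first offer as-is
print(round(negotiated - settled))      # the $5,000 gap has grown past $6,000
```

The person who never negotiates doesn't just lose the initial difference; they lose the raises on the difference, every year, forever.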
Here’s a story about the power of negotiating: So this one year, we got pathetic raises, but I kept on being awesome at whatever was thrown at me. Then the next year, I got an apology from the manager because he ran out of decent raises to give, but I was assured he’d make it up to me. Then the next year, I got re-organized under a sniveling sycophant who didn’t like my attitude, and I got screwed in annual review and then punted to a different manager. And this was when I got pissed off and, taking three months of vacation in the face of arbitrary layoffs, told my new manager, for whom I had not yet delivered any value, that headhunters were baiting me with 50% raises (this was true) and if he didn’t promptly fix my compensation then I was going to walk. And he got me the biggest raise I’d seen in 8 years, so I called it good and went back to doing the work. Until another re-organization put me back under a sniveling sycophant. This time I was serendipitously offered a job in an organization that actually wanted me, so I jumped on it and promptly started managing my manager in a friendly way to ensure that I was delivering what he expected to make it easy to give me a raise when the annual review came around, and he gave me an even bigger promotion than the last one I’d gotten, which really crystallized the other misconception about meritocracy which we’ll get to in a moment.
But the big point is that when you don’t expect organizations to treat people fairly, you can be more assertive in taking control of how they treat you. And this is true in your relationship with the corporate manager who wants to keep exploiting your labor, in the relationship you have with the college professor who wants to impart valuable wisdom to somebody, or even with (about half of) the wait-staff in restaurants from whom you’d like a meal and not poison: if you treat them well while making it clear how you expect to be treated, then you’re likely to get the favorable treatment you demand. Yes, this does border on sociopathic social engineering, but it’s different because everybody involved is happy with how things turn out, right? As Hamlet (a sociopath at best, murderous psychopath at worst) advises Polonius: “Use them after your own honour and dignity: the less they deserve, the more merit is in your bounty.” Regardless, self-promotion is absolutely a necessary job skill these days.
Minor Tangential Misconception: Being a Paying Customer Ensures Fair Treatment
It used to be conventional wisdom that if you’re not paying for the product — like shows broadcast on TV — then you are the product: TV was funded by advertisers that paid studios and stations to create and broadcast content that they could advertise on top of; they were paying to bait you into watching their sales pitches. But this is no longer the case. The de-structuralization of power has turned all customers — even paying ones — along with their interaction with the corporate institution into a product to be analyzed, packaged, and re-sold to another institution that thinks it knows how to extract more value from it… that is, from you. There are pernicious effects to this: for example, in order to game their U.S. News & World Report ranking, a college attempted to surreptitiously subject incoming students (already admitted and paying customers) to a sort of culture-test to figure out who was likely to drop out so they could (illegitimately) force those students out before they were counted as students, thus boosting the college’s student retention rate. The president of the university allegedly said that the students were like bunnies who needed to be drowned and threatened with guns.
This is a variation on the last misconception. But just as an organization may be full of sadists and sycophants, the organizations within a company will also have relations with each other. And while it’s always better to be a Vice President than a file clerk (because we’ve already downsized all of the file clerks), not all Vice Presidents are created equal.
Roughly speaking, any significantly large corporation has a unique vertical capability of creating and selling products, which differs from corporation to corporation. These are called profit centers: they exist to make money. Then there are horizontal utility groups that aren’t much different between corporations — things like Information Technology, Human Resources, Legal, and Finance. These exist to mitigate expenses in conventional ways, but are expenses in and of themselves: they are cost centers. While the skills that are useful in the horizontals are more portable, their inability to actually make money makes them low-status and constantly starved for resources. If you want to be in a high-status group, get close to where the main products are being designed or sent to market.
When I say that low-status organizations are starved for resources, I mean that “the biggest raise I’d seen in 8 years” in a low-status organization (IT) was utterly crushed by the raise accompanying my next promotion in a high-status organization (Marketing) that I wasn’t even expecting to get for another year. What hadn’t changed was the content of my job: my entire professional life has been writing and maintaining web applications, but suddenly when I’m in a higher-status organization I start getting paid a lot more.
And it’s not all about money: at the human level status gets you autonomy to do what you do better. Status means your work is worth more, so the company is willing to invest more in you to get more and better work back out of you, instead of wanting you to do a hack job based on guidance from people who don’t know what you’re best at. And if you need a seat next to the window, and to take off early on Tuesdays for debate practice, you’ll have more leverage to get such things when you’ve got status. And if you’re in a high-status organization, you’ll start with more status and find it easy to gain more status.
Warning: In the low-status organization, everybody’s anxiously awaiting the next round of being downsized. Being a top-notch performer will, at some point, be at odds with your manager’s feckless non-direction and turn you into a liability to them.
But here’s a strange story: I stayed friends with one of my managers from the E-NEW — “Everything Nobody Else Wants” — group, so she kept me up to date on how she was moving around from group to group until she ran into a toxic asshole who, tragically, forced her out of the company. But that’s not the story. The story is about recruitment: our CIO (Chief Information Officer, head of IT) sent her out to recruit college graduates to come work for our IT department. And she was sent to big Ivy League schools like Cornell and Columbia. And you may be thinking as I’m recounting this story to you, as I was thinking when she told it to me, as she was thinking while she was doing it: “Why the fuck are we trying to drag expensive Ivy League graduates into our dismal IT department?” Because high-status Ivy League graduates aren’t going to join IT. Hell, even I was smart enough to not join IT — just dumb enough to move too slowly when IT rolled over the group that I had joined. There was a happy outcome of that absurd recruitment drive, though: my ex-manager got to network with other recruiters who lined her up with her next career move when she got tired of the toxic asshole.
There are several lessons about college in all of this:
Companies will recruit graduating students from colleges. When selecting a college, be sure to ask “Which companies recruit here?” because if you’re not getting recruited then you’re going to have to work a lot harder to get that first job. The college may have a cache of resources to help fill in for companies that don’t recruit there, but using those resources is work you’ll have to do. And the gap where you may be unemployed when you leave college but before you start a career is insanely stressful, especially if you have an older sibling that had their first career move solidly lined up a full year before they graduated.
Expensive colleges produce high-status students, a point that is undermined by the usual size of student debt. For example: if you graduate from a high-status school with $120,000 of debt, you will have to get a job that pays not only a living wage but also enough to pay off that debt — this may work out for you. On the other hand, if you graduate with $0 of debt, then you’re free to pursue any job or career that you can convert into food and shelter. There is, presumably, an optimal middle ground where status is maximized and debt is minimized, but it’s of primary importance that you’re confident you can live with whatever you choose to do.
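To make that trade-off concrete, here’s a minimal sketch of the standard loan-amortization formula. The interest rate and repayment term below are illustrative assumptions, not figures from any particular lender or loan program:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment that retires `principal` at `annual_rate`
    nominal annual interest, compounded monthly, over `years` years."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# $120,000 at an assumed 6% over an assumed 10-year repayment term:
print(round(monthly_payment(120_000, 0.06, 10), 2))  # on the order of $1,300/month
```

That payment comes off the top of whatever “living wage” means, every month, for a decade — before the degree starts paying for itself.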
Departments in a university have statuses just like departments in a corporation, and they are not funded equally despite what the Office of Admissions says, in much the same way we tried to sucker Ivy Leaguers into IT. Consider: The college I went to had a big music program and a big nursing program. The part where the computer science department was co-located with the math department in a trailer off campus behind the old gym should have been a warning to me, but I was young and naive. The previous recommendation was “it doesn’t matter much what you major in,” but let’s refine that: choose like three-or-so possible majors that appeal to you and then filter out any colleges that have none of those majors as their highest-status departments, and prefer the colleges where you get multiple matches as that leaves your options open.
If we put the flat cost of tuition together with a low-status and under-performing department, we end up with “General University Requirements” courses. You would be well-advised to figure out how to transfer these in from a low-cost/no-status institution if possible. Summer classes at a community college can probably get you past most things that seem superfluous to your degree. This works because the final degree is from the higher-status institution, and that’s what anybody who cares looks at, so that’s what counts. (Of course, you may cap out your transfer credits with your AP courses and that’s fine too — the real point is to not pay full price for sub-standard content.)
If we combine what we know about the status of departments and the status of people, we can begin to predict that tenure-track professors in high-status departments (one end of the spectrum) will want to impart wisdom to bright young sparks who take after them, while adjunct/temp professors in low-status departments will be afraid of losing their jobs and want a bit of commiseration from the not-quite-peers who are getting closer to the same sort of crap day by day. You can matter to and be well-treated by these sorts of professors pretty easily. In the middle of the spectrum will be the low-status tenured professors who have to keep their under-performing department running and the high-status adjunct professors who are focused on advancing their career rather than their students — but becoming relevant to these sorts of professors is relatively difficult.
If you ignore those lessons and just choose a college on the basis of its football team you’ll discover that it’s disappointingly like high school, but a lot more expensive. Of course you may be disappointed anyway — do bear in mind that your class cohort will be made up of people rather like your current cohort and the football players are still going to be the football players. Sorry about that; I apologize for the continuity of the universe in advance.
Tangent: Play to Your Strengths
One of the dis-services that is pounded into high schoolers today is the need to do everything to be “well-rounded.” There are multiple reasons for this, like the need to discover individual strengths and talents, but not the least of which is — cynically — that a not-employed monomaniacal over-achiever will completely overwhelm a teacher who has 199 other students and is trying to be a functional adult in a low-status profession as well. One of my former students wrote a 200-page paper on Women’s Rights in Afghanistan — a topic she was passionate about — only to have it flatly rejected as unreadable by a teacher who, doing the math, had another 600 pages of (boring-ass mediocre) 3-page papers from 200 students to shovel through. If we actually cared about cultivating the strengths and talents that we discovered through encouraging well-roundedness, we’d be behaving differently, but no matter: the point is that just like a company or university will happily starve some departments while feeding others, you should focus on doing a few things amazingly well and — bluntly — jettison whatever you’re not really committed to.
See, the way we think about students is very binary: they’re either becoming a square person with one skill and that fits if we need square people, or they’re becoming round people who — and this part is strange — won’t actually fit together, but can roll around in pursuit of whatever dreams they happen to have. Imagine civilization as a jigsaw puzzle: square pieces can fit together and be all wrong, round pieces can go anywhere but almost never really fit. So my advice is to synthesize a few things that you’re really good at and carve a niche that fits you.
This strategy of building on strengths to become better at what we’re doing instead of building on weaknesses in the hopes of achieving mediocrity has been obvious for years, gained popularity about a decade ago, and now even has a book that I’m comfortable recommending: Cal Newport’s So Good They Can’t Ignore You.
That said, everybody should still be able to do their own laundry and cooking.
So, to recap, you ought to be doing what you do best in an organization that is respected for what it does having studied at an institution that is respected specifically for what you studied there. And this is all harder than it sounds because companies and colleges bluff past questions of status and you’re being spread too thin to know what you want to focus on, but also harder than it sounds because you may have to maneuver your way around to get to a position with a comfortable level of status. Remember: align yourself to current reality, adjust yourself to steer towards a better future.
Misconception: Sell-Out or Poverty, Choose One!
I’ve spent a lot of time here talking about corporations, paychecks, and tuition fees. This is deeply ingrained in the life I went after and now have. But it’s not the only life out there, not even for a technologist.
First, please consider that you can be a sell-out and a lot of other things as well. I wrote a book, run a non-profit, and insisted that my employer support me as I routinely wander off and coach debate. Remember that you can use status to bolster your autonomy. The more valuable you are to them, the more pliable they’ll be to get your best work from you. Of course, if all you ever ask for is money then being a sell-out is kind of boring.
Second, please consider Idalin Bobé (@IdalinBobe) — she’s building status and leveraging that for an agenda of community building and organization as she describes in this long-form presentation:
Not dissimilarly, there’s Code For America which commonly works to modernize and simplify the technological skeletons of decaying bureaucracies, but is all about technology for local civic engagement. And on the way there, you’re likely to encounter Code 2040, dedicated to improving inclusivity within tech.
But here’s the thing that’s hard for kids these days to believe: We’re mostly doing capitalism badly as a matter of status games played in the arena of public policy. You know who else played status games in the arena of public policy to demonstrate how naive the economic principles that allegedly underpinned their government were? Guys named Stalin and Mao. The funniest part about Stalin’s purging of millions of Soviets was that intellectuals and academics like Sartre and Chomsky actually supported that lunacy because they were so opposed to capitalism.
Let me be very clear: what really matters is building your status and using it to do what you want. That is what we get when we put human nature in civilization. Issues of money, debt, capitalism, communism, even the counter-factual notion of a social contract are all just different views, different shells on how people work to each build their status in a society to ensure their security and their legacy. Sexism and racism are ingrained in our culture to reduce the competition for high-status positions, but nepotism is seen as a natural right of the rich whenever people howl about a “death tax” despite it running completely contrary to our mythological meritocracy.
You may find yourself doubting all this “status” talk. But consider: why does anybody care about U of O’s football program? Or, more generally, why do institutions of higher learning all seem to support a sport that is known to cause brain damage? The answer — going back to Thorstein Veblen’s Theory of the Leisure Class which is an awful and yet valuable read — is the status that they get from having a football team, especially if it can win. The problem is that a good football team will not help you pay for school; quite the opposite actually.
When you can see this all clearly, see the absurdity of the human condition in society, and look for a way to join in and help out that matches whatever skills you choose to make your own, that’s when I’m successful. That’s when I’m successful at getting you past many of the stupid and naive mistakes that I made and that your peers will also make.
But what I’m really hoping is that I’ve provided you with plenty of resources to help you think about how technologies can be applied in under-developed areas. I’m hoping you’ve seen that our subculture of technologists needs socially adept people who can stand in front of crowds and tell stories that are true. And I’m hoping that these give you a legitimate hope that you’ll be able to take control of your future and steer it in a direction of your choosing.
Appendix: Professional Practices
But we haven’t really talked at all about what you’re actually doing for a living. The answer is you’re making something that will help other people become what they want to be, and then helping convince them that they want to become that. Here’s the list of professional practices books that probably don’t get used in class:
Slack by Tom DeMarco on giving yourself enough space to think about solving problems.
Badass by Kathy Sierra on how to design products that can be adopted and adored.
Waltzing with Bears by DeMarco and Lister on how to design a schedule to maximize business value and
The Pragmatic Programmer by Hunt & Thomas on what you should be doing as a professional that you weren’t doing as a student.
Appendix: Personal Practices
In addition to those professional practices, here’s some documentation on how you can help keep random trolls out of your life.
“That wasn’t a nice thing to say; that wasn’t designed to make me feel good. That’s a… kind of a… not too subtle intimidation, and I, uh, get filled with anxiety when you talk about something like that. … You thought it, and then you said it. And now, I’m left with the aftermath of that… You’re holding me hostage. That’s not right.” –Dr. Oatman, Grosse Pointe Blank
I’ve been judging policy debate for several years now, and the thing that consistently makes me sad is how threatening the posturing is. Consider: if I don’t vote for the brief-and-vague plan, then all my friends will die because of global warming, but if I do vote for the plan then all my family members will die in the resulting nuclear holocaust. Regardless, we’re all dehumanized — which is totally worse than being murdered, The Cards have spoken! — because our lives have been reduced to hyperreal bargaining chips in this children’s game of words. Make no mistake: it is only the venue that makes policy debate respectable; tweeting out a typically hyperbolic case would be indistinguishable from trolling — on which books have been written — and doing so while holding a gun would be terrorism.
I’m not in favor of teaching kids to do that.
So let’s answer a ridiculous terminal impact. Borrowing some cards from over here and mixing in several from my own collection, we end up with a Politics of Fear discourse critique. It attacks the mere introduction of a debased terminal impact as an intellectually inept and morally inexcusable attempted exercise of domination to suppress civic participation, pervert education, prevent rational thought, and shorten the lives of all who hear it, all in the unwitting service of the status quo wielders of power (typically the military-industrial complex for violence-oriented scenarios), and it sets a framework of accountability for the judge.
The strategically nice thing about this critique is that it focuses on the already-committed threats that were made, and so it can’t really be permuted. Your opponent might — and should — sever out of their bullshit terminal impacts, but they can’t undo the part where they have already threatened the life of the judge. (I mean, really, can we be more clear on what these terminal impacts really are?) Some cards may be trimmed or replaced, especially for climate change scenarios that don’t involve war, and you should be prepared to attack the link story that connects to the terminal impact to show that it’s impossibly contrived as well (for example, “They’re not ‘solving’ for Chinese emissions, so they can’t ‘solve’ for climate change; threatening you with it is inexcusable”), but the thematic web of internal links in these cards will make them really hard to answer.
There is no reason that policy debate needs to be dominated by preposterous extinction scenarios, either at the high-school or college level. Although I will always have a soft spot for some of the contrived DAs that I read and wrote at various points, it trivializes the activity when these arguments succeed at the expense of realistic discussions of public policy.
By stringing four or five cards together, teams are able to manufacture custom-built extinction scenarios for ANY policy change, no matter how small. These are not serious arguments, for all of the reasons offered in the opening post. They make claims that no one outside of policy debate has even thought of, much less taken seriously. Their popularity doesn’t just make debate look bad; it makes debate bad, period.
It may well be true that the viability of these arguments makes it strategically difficult to initiate discussions about social justice. But that is just one symptom of a deeper problem: by fetishizing long strings of improbable internal links that culminate in extinction, we make it impossible for debaters to restrict themselves to making credible, serious arguments for or against AFFs.
The race to snag improbable extinction impacts also prevents many of the most interesting impact debates from ever occurring. Most of us agree that ending a racist policy is not worth causing nuclear war… but is it worth risking a recession? What if that recession would cause hundreds of people to die substantially earlier than they otherwise would? As long as everything supposedly risks extinction, nothing but extinction will be worth talking about.
But not all discussions of nuclear war are made equal. In my eyes, the problem is not that policy debates often involve extinction scenarios; it is that 95% of those extinction scenarios are made up. That is, they are only discussed in policy debate, not the actual literature about the topic area.
Many affirmatives do actually relate to credible risks of human extinction, which are worth deliberation. Our government’s policies on CO2 emissions may dictate whether today’s debaters will live to be grandparents. And given how terrifyingly close the world has come to nuclear war in the past, our government’s deterrence posture and treaty obligations could determine whether you and everyone you love will die a horrific, painful death. The same is true of myriad issues in policy debate — from the deployment of space weapons to the coordination of disease surveillance — which may have a substantial effect on humanity’s odds of survival.
Or they may not. But the very nature of existential risk and self-selection bias (a.k.a. the anthropic principle) means that their absence from our past does not provide us meaningful evidence about their probability in our future. I fear that many (most?!) members of the community will not understand why that last sentence is true, despite having participated in numerous debates involving extinction claims. That reflects the lack of epistemological discussion in most “traditional” policy debates. In real life, probability analysis and the epistemology of risk are central issues in policymaking.
The phenomenon discussed in the opening post, in which even the most tenuous risks of extinction still trump other impacts, is simply a variation of “Pascal’s mugging.” (See Bostrom, Nick. “Pascal’s mugging.” Analysis 69, no. 3 (2009): 443-445. http://www.nickbostrom.com/papers/pascal.pdf.) Multiplying infinite utility by a finite probability is a conceptually flawed approach, because it allows even the most absurd claims (e.g., the Wage Inflation DA) to receive infinite credence.
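A quick numerical sketch of that flaw: once an argument is allowed to claim an effectively unbounded impact, any nonzero probability — no matter how contrived the chain of internal links — dominates a naive expected-utility comparison. The probabilities and utilities below are made-up illustrations, not measurements of anything:

```python
def expected_utility(probability, utility):
    # naive expected value: probability of the outcome times its payoff
    return probability * utility

# A well-evidenced, plausible impact: a 50% chance of a recession
# "worth" 100 units of harm.
recession = expected_utility(0.5, 100)

# A Frankenstein extinction scenario: five shaky internal links, each
# generously granted a 10% chance, ending in an "infinite" impact that
# we cap at a merely astronomical number.
extinction = expected_utility(0.1 ** 5, 10 ** 12)

# The absurd scenario still swamps the credible one by orders of magnitude.
print(recession, extinction)
```

As the impact's claimed magnitude grows without bound, no amount of discounting the link chain keeps the comparison sane — which is exactly why the judge has to reject the impact's introduction rather than haggle over its probability.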
How can we eliminate the bad extinction arguments while retaining the worthwhile ones? The primary answer is surprisingly simple. Judges should be willing to seriously discount arguments for or against a policy whose thesis cannot be supported by any single author. These Frankenstein arguments, often cobbled together from unrelated newspaper clippings, account for the vast majority of the junk arguments in debate. Eliminating them is essentially an extension of the concept of solvency advocates, broadened to require advocates for DAs and advantages.
This minimum requirement for plausibility does not mean that every argument made in an op-ed piece or a white paper is true or even well thought out. But there is a certain plausibility threshold that a risk of extinction must overcome before any qualified author is willing to discuss it in print. For example, there are plenty of qualified authors who argue that we shouldn’t deploy space weapons because that policy would result in extinction. But good luck finding someone who actually argues that reforming prisons will undermine Obama’s political capital and thereby trigger global thermonuclear war.
The proposed standard is not a bright line, because that would be neither practical nor beneficial. Many credible authors will provide arguments for why an AFF will cause or prevent some obviously important impact, like global economic collapse or certain regional wars. If a team can provide legitimate evidence from another author that the specified impact would result in extinction, then the absence of an express extinction claim from the first author should be relatively unimportant. The alternative would be to reinforce the existing practice of hunting for those authors who make the most explicit extinction claims. This perversely causes teams to seek out and rely on hacks, because careful authors rarely throw around such bold proclamations.
Forcing teams to advance the arguments that are actually found in the literature would make policy debate less absurd to outsiders, who are right to roll their eyes at most of the positions that teams invent. And it would create a powerful incentive for debaters to truly engage the nuances of each topic instead of just trying to hunt for contrived links to the same cobbled together, generic arguments about extinction.
It is the judge’s intellectual and moral responsibility to insist on the quality and integrity of evidence.
Edward Tufte (American statistician and professor emeritus of political science, statistics, and computer science at Yale University — seriously, he kicks your ass), Beautiful Evidence 2006, pg 9
Evidence presentations are seen here from both sides: how to produce them and how to consume them. As teachers know, a good way to learn something is to teach it. The partial symmetry of producers and consumers is a consequence of the theory of analytical design, which is based on the premise that the point of evidence displays is to assist the thinking of producer and consumer alike. Evidence presentations should be created in accord with the common analytical tasks at hand, which usually involve understanding causality, making multivariate comparisons, examining relevant evidence, and assessing the credibility of evidence and conclusions. Thus the practices of evidence display are derived from the universal principles of analytical thinking — and not from local customs, intellectual fashions, consumer convenience, marketing, or what the technologies of display happen to make available. The metaphor for evidence presentations is analytical thinking.
Making an evidence presentation is a moral act as well as an intellectual activity. To maintain standards of quality, relevance, and integrity for evidence, consumers of presentations should insist that presenters be held intellectually and ethically responsible for what they show and tell. Thus consuming a presentation is also an intellectual and a moral activity.
However, the sheer visceral horror of terrorist attacks—even those like the September 11, 2001 attacks—are not, I would argue, sufficient in themselves to explain the scope and depth of the fear and anxiety that now pervades social and political life. After all, acts of political violence are nothing new; in addition, they are always publicly mediated, and the meanings of such events are continuously contested and prone to alteration over time. More importantly, while there are ‘real’ dangers in the world—disease, accidents, and violence (among others) all have life and death consequences—not all dangers are equal and not all risks are interpreted as dangers.3 The world contains a multiplicity of dangers (so many that we cannot even begin to know all that threatens us), but it is only those that are interpreted as threats that society learns to fear. Frequently, as the fear of terrorism illustrates, there is little correspondence between the socially accepted level of threat and the actual risk to individuals: on a statistical scale of risks, for example, terrorism actually ranks somewhere around the risk of being killed in a DIY accident or being struck by lightning. In other words, the gap between society’s perception of the risk of terrorism and the physical reality is created by a socially constructed discourse of danger that normalises that fear.
Security-as-biopower works at all levels of society and results in a state of conflict that suffuses all of society, that is “indeterminate, both spatially and temporally,” and which allows for the possibility that “all of humanity can in principle be united against an abstract concept or practice such as terrorism” (Hardt & Negri 2004, 14-15, 19). As war invades the normal functioning of everyday life, even mundane activities like washing one’s hands or having up-to-date antivirus on one’s computer become acts of “national security.” In short, when war permeates all of society, which must be secured in its entirety on every possible front, then everyone is a potential “warrior.”
The more inured we are to the idea of threat, the more likely we are to uncritically accept similar-sounding horror-stories from a military/industrial complex that wants to justify spending billions of dollars rather than simply win a debate.
“MEDIA FRAMING OF TERRORISM: VIEWS OF “FRONT LINES” NATIONAL SECURITY PRESTIGE PRESS” Heather Davis Epkins, Ph.D., 2011 http://drum.lib.umd.edu/bitstream/1903/11446/1/Epkins_umd_0117E_11895.pdf
Furthermore, while there is evidence that the government may initially set the media agenda, over time the public is also conditioned to understand the historic discourse of a topic, for example terrorism, within a certain framework that is reflected by public opinion (Sadaba & La Porte, 2006). Therefore, knowing this public opinion, both the government and the media appeal to the audience in these well-traveled frames. Scholarship also supports the prevalence of this kind of rhetoric utilization in countries with long histories of terrorism (Sadaba & La Porte, 2006, p.86). The “War on Terror” frame, for example, has become the crux of both reporting and understanding homeland security issues in America (Norris et al., 2003, p. 4).
But no one really claims that emergency policies are the result of the kind of adrenaline-charged panic that seeing a tiger in the jungle induces. The concern is rather a more nuanced one about the dynamics and politics of collective fear over a much longer period of time—more often measured in years rather than in seconds. As history demonstrates, fear tends to lead the populace to seek reassurance from the authorities, and as a result there is always a risk that authorities will exploit fear to their advantage. One need only recall that President Bush’s approval rating, quite unimpressive on September 10, 2001, shot up to over 80 percent almost immediately thereafter. The majority is willing to tolerate much more concentrated executive power, for example, during wartime than during peacetime. Some of this toleration of concentrated power makes sense, to be sure, but if it is driven by irrational fears, there may be an inclination to vest too much power in the executive’s hands during emergencies—and a tendency on the executive’s part to stoke the fires of fear to keep his authority unquestioned. Fear often causes us to make demonstrably irrational decisions even when we have plenty of time to think. Social scientists have found that a variety of influences associated with fear undermine our ability to make rational judgments. One such effect, the “availability heuristic,” leads people to overestimate risks associated with vivid, immediate images and to discount more gradual, long-term, or abstract risks.
The security state then turns in a two-part cycle: first, treat everything as a risk, a behavior perpetuated by our opponents in their tenuous links to extinction — which justifies panoptic surveillance, turning case.
Jeremy W. Crampton, "The Biopolitical Justification for Geosurveillance," Geographical Review 97(3), Jul 2007, http://opengeography.files.wordpress.com/2012/12/as-publishedocr.pdf
The question is not one of identifying which areas are at risk but of seeing everything at risk, to different degrees, as measured against a background of what is normal. Geosurveillance must be coextensive with that risk; that is, everywhere. Blanket geosurveillance is therefore a logical outcome of the state’s representation of its residents as risk factors who need to be controlled, modified, and logged. When we see an instance of surveillance, whether it be by the government or in consumption, such as biometric identification cards and the millions of CCTVs in the United Kingdom (Rosen 2001), cell-phone tracking, RFIDs, biological chipping, warrantless tapping of telephone calls, the Federal Bureau of Investigation’s DNA database (FBI 2001), we should see it in the context of surveillance-risk normalization.
The discourse of fear is one of the central constructions of the war on terrorism. Its main result is a society living in a state of ‘ontological hysteria’—a nation constantly anticipating the next attack, just ‘waiting for terror’. The suffocating power of the counter-terrorism project derives in large part from its ability to project a reality of ubiquitous and impending danger. And yet, as I have demonstrated, the discursive construction of the catastrophic terrorist threat is inherently unstable and susceptible to counter-hegemonic resistance. If the terrorist threat is a social construction, there is no reason why it cannot be deconstructed. From an ethical perspective, there are compelling reasons for actively resisting and working to dismantle the discourse of threat and danger. In the first place, as a great many studies have shown, the social construction of the global terrorist threat has functioned to provide a discursive smokescreen for the pursuit of expansionist imperial policies, such as opening up new regions to American markets and influence, the expansion of a global military presence, the disciplining of potential rivals, and the strategic control of future oil supplies—among others. In effect, the terrorist threat presently fulfils the same ideological and discursive functions that the communist threat played during the cold war. Second, the discourse of threat and danger is cynically employed to de-legitimise domestic dissent and expanding state power through the reassertion of the national security state. Successive reports by Amnesty International have noted that this is occurring all over the world: the war on terror is being used to repress opponents in dozens of countries. In this regard, the politics of fear are proving highly damaging to democratic politics and the functioning of civil society. 
The corrosive effects of the discourse are plainly obvious: antiglobalisation protesters, academics, postmodernists, liberals, pro-choice activists, environmentalists and gay liberationists in America have been accused of being aligned with the evil of terrorism and of undermining the nation’s struggle against terrorism; arms trade protesters are arrested under anti-terrorism legislation in Britain; blacklists of ‘disloyal’ professors, university departments, journalists, writers and commentators are posted on the internet and smear campaigns are launched against them; anti-administration voices are kept away from speaking at public events or in the media; and political opponents of government policy are accused of being traitors. The overall effect of this process is the narrowing of the discursive space for political debate and the suppression of civil society.
There are both ontological and normative reasons why a critical analysis of the current discourse of danger is urgently called for. Ontologically, as a number of important works have reminded us, political reality is a social construct, manufactured through discursive practices and shared systems of meaning. Language does not simply reflect reality, it co-constitutes it. A fully informed understanding of the origins, consequences, and trajectory of the current war on terrorism, therefore, would appear largely unattainable in the absence of a critical investigation of the official language of counter-terrorism. Normatively, the enactment of any large-scale project of political violence—such as war or counter-terrorism—requires a significant degree of political and social consensus, and consensus is not possible without language. The process of inducing consent and normalising the practice of the war requires the construction of a whole new public discourse that manufactures approval while simultaneously suppressing individual doubts and wider political protest. More than this, power itself is a social phenomenon, constantly in need of legitimation; and language is the medium of legitimation. Thus, the deployment of language by politicians is an exercise of power and domination; such power must always be subjected to rigorous public interrogation and critical examination lest it become abusive. This is never truer than during times of national crisis when the authorities assume enhanced powers to deal with what are perceived to be extraordinary public threats.
But for an in-round impact: even just hearing about how not voting for our opponents will result in the horrifying demise of your friends, your family, everyone you love, and you, too — that’s actually bad for your health, and they can’t kick out of it.
Bruce Schneier, “Living in a Code Yellow World,” Schneier on Security, 11/24/2015, https://www.schneier.com/blog/archives/2015/09/living_in_a_cod.html
In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the “combat mind-set.” Here is his summary:
In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.
In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.
In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.
In Red you are in a lethal mode and will shoot if circumstances warrant.
Cooper talked about remaining in Code Yellow over time, but he didn’t write about its psychological toll. It’s significant. Our brains can’t be on that alert level constantly. We need downtime. We need to relax. This is why we have friends around whom we can let our guard down and homes where we can close our doors to outsiders. We only want to visit Yellowland occasionally.
Since 9/11, the US has increasingly become Yellowland, a place where we assume danger is imminent. It’s damaging to us individually and as a society.
I don’t mean to minimize actual danger. Some people really do live in a Code Yellow world, due to the failures of government in their home countries. Even there, we know how hard it is for them to maintain a constant level of alertness in the face of constant danger. Psychologist Abraham Maslow wrote about this, making safety a basic level in his hierarchy of needs. A lack of safety makes people anxious and tense, and the long term effects are debilitating.
The same effects occur when we believe we’re living in an unsafe situation even if we’re not. The psychological term for this is hypervigilance. Hypervigilance in the face of imagined danger causes stress and anxiety. This, in turn, alters how your hippocampus functions, and causes an excess of cortisol in your body. Now cortisol is great in small and infrequent doses, and helps you run away from tigers. But it destroys your brain and body if you marinate in it for extended periods of time.
Not only does trying to live in Yellowland harm you physically, it changes how you interact with your environment and it impairs your judgment. You forget what’s normal and start seeing the enemy everywhere. Terrorism actually relies on this kind of reaction to succeed.
Those of us fortunate enough to live in a Code White society are much better served acting like we do. This is something we need to learn at all levels, from our personal interactions to our national policy. Since the terrorist attacks of 9/11, many of our counterterrorism policies have helped convince people they’re not safe, and that they need to be in a constant state of readiness. We need our leaders to lead us out of Yellowland, not to perpetuate it.
The alternative is, as Tufte says, to “insist that presenters be held intellectually and ethically responsible for what they show and tell,” which means “forcing teams to advance the arguments that are actually found in the literature … instead of just trying to hunt for contrived links to the same cobbled together, generic arguments about extinction,” which Ehrlich suggests is necessary to preserve your intellectual rigor and good health.
The next Public Forum topic is “On balance, standardized testing is beneficial to K-12 education in the United States” and this one is kind of problematic because so much evidence goes all one way. For example, here’s John Oliver with his total disregard for your pathetic little 4-minute time limit.
But I’m going to go a different way with my neg example and see if I can get it in 4 minutes.
There are a lot of standardized tests, starting with the 23-or-more tests mandated by multiple pieces of federal legislation, as well as the SAT and ACT, international calibration assessments (PISA) when they come up, plus whatever tests any particular state mandates for all of its students. Mostly, though, we’ll be looking at the effects of the federally mandated tests, since the resolution is national in scope.
Using data from 2012, The Nation’s Report Card shows that since 2008, National Assessment of Educational Progress scores haven’t substantially improved in math and reading for either 9-year-olds or 17-year-olds. More distressingly, across all 17-year-old students, scores haven’t changed substantially since 1973. So while The Atlantic summarized the results of PISA testing in December 2013 by headlining that, compared to other countries, our schools are “Expensive, Unequal, Bad at Math,” the more disturbing trend is that we’re seeing no benefit from any of the tests added since our parents’ generation was assessed.
What we’re seeing instead is that the Department of Education is tying federal funding to test scores, resulting in a culling of students who test badly: rich kids get drugged, poor kids get forced out.
The dominant paradigm of education in this country has actually led to rising rates of ADHD. In the four years after George W. Bush signed the No Child Left Behind Act into law in 2002, the nationwide rate of ADHD diagnoses increased 22 percent. Why should this be so? The answer lies in the fact that the law tied financial rewards for schools to standardized test performance. Having more children diagnosed with ADHD was a boon to school districts that were lagging behind in test scores. First, scores for children with ADHD could be omitted from the school’s reported test scores. Second, children with the diagnosis got special accommodation, including extra time for taking standardized tests. Extra time, plus stimulant medication, which is a short-term performance enhancer, could very well raise kids’ test scores, in which case the school could decide to include them with the rest of its test scores. As a result, failing schools soon experienced a windfall of ADHD diagnoses.
But in areas where kids aren’t affluent and insured enough to be medicated, the threat to funding causes the barely-funded schools to push under-performing students out, creating what the ACLU refers to as “the school-to-prison pipeline.”
For most students, the pipeline begins with inadequate resources in public schools. Overcrowded classrooms, a lack of qualified teachers, and insufficient funding for “extras” such as counselors, special education services, and even textbooks, lock students into second-rate educational environments. This failure to meet educational needs increases disengagement and dropouts, increasing the risk of later court involvement. Even worse, schools may actually encourage dropouts in response to pressures from test-based accountability regimes such as the No Child Left Behind Act, which create incentives to push out low-performing students to boost overall test scores. … Lacking resources, facing incentives to push out low-performing students, and responding to a handful of highly-publicized school shootings, schools have embraced zero-tolerance policies that automatically impose severe punishment regardless of circumstances. Under these policies, students have been expelled for bringing nail clippers or scissors to school. Rates of suspension have increased dramatically in recent years —from 1.7 million in 1974 to 3.1 million in 2000 — and have been most dramatic for children of color.
Several big states have seen alarming drops in enrollment at teacher training programs. The numbers are grim among some of the nation’s largest producers of new teachers: In California, enrollment is down 53 percent over the past five years. It’s down sharply in New York and Texas as well. [And] The list of potential headaches for new teachers is long, starting with the ongoing, ideological fisticuffs over the Common Core State Standards, high-stakes testing and efforts to link test results to teacher evaluations. Throw in the erosion of tenure protections and a variety of recession-induced budget cuts, and you’ve got the makings of a crisis.
And in the LA Times in September, Harold Kwalwasser wrote that “our exam system is deeply flawed, especially when it comes to teacher evaluation,” and that its “seemingly uncontrollable variability produces great teacher anxiety that is not worth the damage.”
So standardized testing is so bad for teachers that it’s making it hard to recruit new ones, so unhelpful for students that they’ve not improved in decades, and flat-out dangerous when the Department of Education ties financial incentives to it. The way the United States has deployed standardized testing in K-12 education in the status quo is a disaster.