The Lexwerks

Jury Nullification Again

For Aaron.

The next topic for high school Lincoln-Douglas debaters is “In the United States criminal justice system, jury nullification ought to be used in the face of perceived injustice.” This is an important topic because it covers a little-known feature of our criminal justice system, which is itself little-known to high school debaters. There is, however, a massive difference between what the topic wants kids to learn about and how the topic is actually written, as is becoming woefully common.

The issue that the topic wants to tackle is whether it’s okay for a jury to tell a prosecutor to “go fuck yourself; while the defendant did break a law, it was a stupid law and thus there is no guilt for us to find.” That’s jury nullification in a nutshell: the refusal to find guilt in a lawbreaker despite it being overwhelmingly obvious that the law was broken. Juries are allowed to do this, by design, a point which rarely ever gets mentioned to them. And this is super-relevant when we’ve got what one might call “civil disobedience” cases like Bree Newsome and James Tyson, who will soon be on trial for taking down the flag of a group known for militant insurrection against the United States Federal Government and for human rights abuses, abuses that they frequently twisted a common religion to justify — which makes that whole Confederacy thing sound kind of like ISIS, doesn’t it? So while some people might say “civil disobedience,” I’d call the well-reasoned, planned, and staged flag-removal a civil service that helped raise the consciousness of the nation.

Now you might be thinking “they obviously broke the law, so they have to be put on trial,” and you’d be wrong. Opening the first chapter of Suspicion Nation (page 35), Lisa Bloom explains:

Prosecutors are the most powerful players in our criminal justice system. Their day-to-day decisions can literally determine which Americans will lose their liberty, even their lives. In virtually every criminal case in America, prosecutors decide whether to file charges, and if they do, which charges to file — minor crimes (misdemeanors) or major ones (felonies)? Once a criminal defendant is charged, prosecutors choose whether to offer a plea bargain, and if so, what deal to offer (community service? A fine? One year behind bars? Twenty?) or whether to roll the dice and go to trial. Though they are public officials in our democracy, paid with our tax dollars, all of these decisions are made behind closed doors, without transparency or accountability, almost never subject to public review.

Or, to put it another way, the prosecutor who will be representing the state of South Carolina at trial against Bree Newsome and James Tyson has apparently chosen to be a total dick. And this isn’t as uncommon as we’d like to think: Color of Change asserts that “This follows a growing trend of prosecutors from Oakland to Baltimore and across the country overcharging people who take non-violent direct action in defense of Black lives.” And jury nullification is one of the checks on injustice corrupting our criminal justice system — the check that belongs to the ordinary citizens who comprise the jury, as opposed to the prosecutor, judge, and governor, who are all part of the state government.

So if we read the resolution as “jury nullification ought to be used when the prosecutor is a total dick,” I might be leaning affirmative on that.

But the resolution says something else entirely, and we should break it down:

    1. The first problem with “perceived injustice” is that criminal cases are inherently about injustice and trying to restore justice: the criminal behaved unjustly against a citizen and now must be punished to mitigate the resulting injustice. That’s how retribution theoretically works, and the jury ought to find the lawbreaker guilty rather than nullifying the prosecutor’s case against the criminal. Failure to specify the source of the injustice means that the resolution ineptly collapses on itself. (Who even writes these topics?)
    2. The second problem with “perceived injustice” is that you’re relying on the jury (as the only group with access to jury nullification) to perceive the injustice, and juries aren’t reliable in that way. A bunch of racists might consider it unjust that KKK members are facing trial for murdering civil rights workers, for example. Or a bunch of dim-witted people might be confused by prosecutorial incompetence. And between those variations of a “not guilty” verdict, we on the outside of the courtroom have a hard time telling what the verdict really means. So while it’s easy for us to say that Newsome and Tyson are facing unjust prosecution, they’re actually a very specific case from which we should be hesitant to draw general conclusions about what people who may be perceiving different things ought to be doing. (Alternately, if jury nullification is all about the freedom of conscience for jurors, then it’s wrong for us to say what they ought to be perceiving and doing when we’re not jurors.)
    3. But that raises the issue that we’re outsiders. I’ve not actually ever served on a jury, so what am I supposed to do about the injustices I perceive? What is the judge (who can, for example, dismiss the case) supposed to do in the face of injustice? Or the governor or president, each of whom can grant clemency according to their jurisdictions? What recourse is given to the courtroom journalists, or even the defense attorney? See, the very strange thing about the wording of the resolution is that it specifies an action — jury nullification — which implies an actor — the jury — without limiting the scope to the jurors. Which is another way of saying “If you’re not on the jury then you ought to STFU about injustice” because the one action that ought to be used in the face of perceived injustice isn’t available to non-jurors. (No, really, who the hell is writing these topics?)
    4. But the big issue for me is that even if I were on a jury, I wouldn’t be able to see the full extent of injustice in our criminal justice system. John Oliver has multiple Last Week Tonight videos describing the flaws in our criminal justice system, all of which stack the deck so far in favor of the prosecutor that we’ve managed to incarcerate more people than oppressive communist China has, despite having less than a third of the population. For example, we might have multiple laws that cover one offense, and each of those laws may have a mandatory minimum sentence — the prosecutor can then threaten to prosecute using all of the laws (even though there was only one offending action) and sum up all of the mandatory minimums to arrive at an absurdly high prison sentence as a “big stick” against which they can offer a seemingly reasonable “little stick” of a plea-bargain if the defendant simply accepts guilt (and all associated future stigmas of guilt) and forgoes a jury trial. And almost everybody, especially those that can’t afford capable lawyers, ends up taking plea bargains when they’re offered: in 2012, the New York Times reported that “97 percent of federal cases and 94 percent of state cases end in plea bargains, with defendants pleading guilty in exchange for a lesser sentence.” The resolution offers no recourse for any injustices that happen in that section of the population, and in the rare cases where a jury is called, and rarer cases where they perceive injustice, and rarer still where they know they ought to use jury nullification, the actual outcome of the Not Guilty verdict will be the state prosecutors asking the legislature for more laws they can use as leverage to force plea bargains. That’s what the big picture looks like for this little resolution: an exacerbation of injustice. What the affirmative is really valuing here is Spitting Into the Wind, not “justice.”

To counterpoint the resolution, the American Bar Association says that

(b) The prosecutor is an administrator of justice, an advocate, and an officer of the court; the prosecutor must exercise sound discretion in the performance of his or her functions.
(c) The duty of the prosecutor is to seek justice, not merely to convict.

And so, when a prosecutor is relying on intimidation, bullying, and being a total dick to score convictions — that is, in the face of prosecutorial injustice — the response ought to be firing and disbarring the prosecutor. While jury nullification is, by design, an option available to juries, our society ought not be relying on it to fight injustice.

Read previous articles about: Aaron Swartz, too many laws, and jury nullification.

Interlude for Vocational Guidance

All societies are evil, sorrowful, inequitable; and so they will always be. So if you really want to help this world, what you will have to teach is how to live in it. —Joseph Campbell

So one of my former students, now in college and heading towards having A Vocation, asked if I like my work. This seems like a reasonable question from a post-millennial wondering about an old capitalist sell-out like me. But before I actually get to my answer, I’m going to bring in a Monty Python sketch to address the basic concerns:

That aside, the short answer is this: yes, I necessarily like my work. I am responsible to myself (and my cat). That means that the suffering that many other people have, and I used to have, where they dutifully sacrifice their happiness to ensure a better life for their family? That doesn’t apply to me (anymore). And since I’ve rejected the imposition of duty, whatever it is that I’m doing is necessarily something I want to do — whether that’s being a self-satisfied sell-out or shrugging off slow service in a restaurant. So because I’m doing it instead of doing something else, I like it well enough.

So what do I do? I’m the “Technical Program Manager” — that’s Lead Unicorn — for a couple of Intel’s major web sites and the broader data platform that they front. I’m at the (virtualized) table when we talk about what the apps need to do, and how they should look, and how the underlying hardware needs to be configured. My signatures and fingerprints are on the commits of CSS, of JS, of the ASP.NET Razor views, of the C#, of SQL scripts, of the build scripts, on the recovery documentation, and even on some of the code pushes to production.

The joy of the work is the feeling of wielding power. It’s the joy of seeing the graphs of millions of people using my work per month. It’s the joy of connecting a translation engine to a database and ballooning thousands of records into a dozen languages. It’s the joy of writing a little script that does hundreds of modifications on behalf of a suddenly-quite-grateful human. It’s the joy of purging 80% of an application’s source code from existence because it was shoddily done by some unprofessional scrub — which doesn’t sound great, but it also becomes an anecdote to share with the professional peer group, a mark of honor and designation of place within the subculture.

It is work, though, and I’d be remiss if I didn’t mention a few other niceties that I’ve worked hard to get. I have a cube by a window, which is nice, but also managers that don’t quibble about how much I work from home, which is nicer. I have a very comfortable paycheck that I’ve used to do the things that most people let languish on a bucket list — buy a house, travel the world, write a book, start a non-profit — with the point being: don’t knock disposable income until you’ve got some. And I’ve got the tacit and financial support of Intel in coaching high school debate teams, which I explicitly insisted upon when I took my current position because that’s what I’m doing instead of having kids of my own.

If you’ve read Daniel Pink’s Drive, you might get a sense of Autonomy and Mastery if not exactly Purpose here, and I’m feeling pretty lucky with two out of three.

See, prior to my current position (and dangerously adjacent to my current position for the past several months) I was doing almost the exact same job within the IT department. But the IT department didn’t want me to know how much my work was being used. The IT department wanted me to regularly be in my cube, which was near the middle of the building. The IT department didn’t want me to work with live data, or deploy patches to production, or have a seat at the table to negotiate what work was going to get done. In the IT department, autonomy was not much of a thing. In the IT department, there were always more forms to be filled out.

It also turns out that when your manager is a fussy sycophant pining for the approval of his toxic asshat of a manager, it doesn’t matter what work you’re doing: you will be unhappy with your line of work. I don’t think that’s unique to the IT department. But let’s go out and consult the Gallup poll that says that, beyond Pink’s core of Autonomy, Mastery, and Purpose, workers will quit their boss as often as (or more often than) they will quit their job.

Because after you’ve gotten a lackluster annual performance review from a bad manager, you’ll know what’s additionally true: yes, people seem to leave because the money is better elsewhere, but the money is better elsewhere because the bad manager will underutilize and (therefore) undervalue their workers and thus not even try to pay them what their fair market value should be.

White collar individual contributors should always remember this: your manager’s job is to give you work that will be profitable for the company to have you do and then appreciate you doing it. If they suck at giving you profitable work, or can’t figure out how to pay you what you’re worth for the kind of work they have you doing, that should be their incompetent problem, not yours. It’s supposed to go with their pay-grade. That their incompetence routinely becomes your problem is where the distribution of office work often falls apart. And there’s a solid chance that with a flicker of self-esteem — which your incompetent manager has almost certainly attempted to smother — you can walk away from that dysfunction and get a job somewhere else where your skills will be better, if still not fully appreciated. But it doesn’t solve for the problems in the environment you’re leaving behind, or for the structurally diminished status of the engineer, or for how executives are using stock grants as their private fiat currency which is turning the stock-market based 401k retirement plan into a Ponzi scheme.

The big pattern is this: modern capitalism, perhaps more so than many other economic systems, but not unlike — for example — Stalinism, has a problem where people with a lot of status and power continually spend that status and power on securing that status and power to the detriment of anybody who doesn’t have as much status or power with which to secure their own position. Or, more simply, the people at the top are doing woefully little to stop the shit from rolling on down.

And these are the things that kids these days see and hear about, even from their successful white-collar parents, that make them disinterested in joining the Gangs of America.

I don’t know how to solve for that.

What I do know is that I’m on a good team doing substantially appreciated work; that I’ve negotiated, influenced, weaseled, and cajoled an almost embarrassing amount of personal autonomy from the corporation; and that between my team and my expanded autonomy, I’ve been able to wield enough mastery over my technical skills to almost shake off 15 years of “I got a degree in Public Relations — what the hell am I doing programming computers?” impostor syndrome.


Choose Good Health, Low Cholesterol, & Dental Insurance

“Susanna, four days ago… you chased a bottle of aspirin, with a bottle of vodka.” “…I had a headache.” —Girl, Interrupted

To kick off this debate season, we’ve got the LD topic of “Resolved: Adolescents ought to have the right to make autonomous medical choices” for which I’m not going to write a full set of sample cases because I think it’s a worse-than-usual topic: the affirmative simply doesn’t have enough good ground to stand on.

The ground I’d recommend for the affirmative is valuing Security, which can be threatened by disease ergo a criterion of Defense Against Disease. And the argument is that adolescents should be growing aware that their parents (and their parents’ lifestyle choices) may not be the best defense that they, the kids, have against disease and thus they should be free to secure their body against disease in ways that worry, or disagree with, or even contradict their parents. The key example to work with here would be vaccines: if the parent is an anti-vaxxer and the child, now a teenager, has been hearing anti-vax arguments for over a decade and has determined that they’re pseudo-scientific BS, then the adolescent child should be free to autonomously engage in a private doctor-patient relationship to become vaccinated and thus secure their body against diseases. Same deal with parents who are into faith healing; the child should be free to autonomously pursue the security of their body against disease regardless of the parents’ belief systems.

The weakness with this affirmative position is, of course, that it requires the parents to be less rational than their children — and that’s a total corner-case in reality, leaving oodles of doubt for the negative to exploit for the win. So what the affirmative may do, wrongly, is deploy Kant’s definition of autonomy. As Michael Sandel writes in Justice: What’s The Right Thing To Do:

Kant’s conception of autonomy imposes certain limits on the way we may treat ourselves. For, recall: To be autonomous is to be governed by a law I give myself—the categorical imperative. And the categorical imperative requires that I treat all persons (including myself) with respect—as an end, not merely as a means. So, for Kant, acting autonomously requires that we treat ourselves with respect, and not objectify ourselves. We can’t use our bodies any way we please.

And Kant’s name-brand rationality matches pretty well with how Wikipedia begins its summary of Autonomy: “…it is the capacity of a rational individual to make an informed, un-coerced decision. In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility and accountability for one’s actions. In medicine, respect for the autonomy of patients is an important goal, though it can conflict with a competing ethical principle, namely beneficence.”

So that’s cool, except for two things:

  1. The US will, by default but not always, treat a teenager as a “young offender” rather than as a rational adult for purposes of criminal justice. The law will similarly reject the notion that teenagers having sex with adults could have given consent on the grounds that they are not rational adults. So the notion that in the corner-case of “medical choices” teenagers should be regarded as de facto rational adults (when even rational adults can catch flak from medical personnel for trying to get sterilized under the age of 30) goes against multiple long-established policy precedents.
  2. But what’s worse is that rationality is actually pretty subjective and situational, which is part of why the criminal justice system will by-default-but-not-always treat teenagers as young offenders. This is politely glossed in the assertion that “Determining optimality for rational behavior requires a quantifiable formulation of the problem, and making several key assumptions.” And we’ll be grinding on this for a while, but the key point is that to say there’s a “right” to some autonomous territory makes that rational aspect assumed — whether it’s a good assumption or not. See also: the “well-regulated militia” that justifies our right to bear arms. Oh, can’t find one of those? That’s my point exactly: when autonomy is granted, rationality becomes assumed and only gets brought up to defend a person from the consequences of actions they shouldn’t have taken (on the basis that they weren’t rational after all).

But Kant’s particularly irrational hard-on for rationality is shockingly de-humanizing as Michael Crawford explores in The World Beyond Your Head:

Experience is always contingent and particular, and for that reason “unfitted to serve as a ground of moral laws. The universality with which these laws should hold for all rational beings without exception… falls away if their basis is taken from the special consideration of human nature or from the accidental circumstances in which it is placed.” To be rational is, for Kant, precisely not to be situated in the world. … Whether you regard it as infantile or as the highest achievement of the European mind, what we find in Kant are the philosophical roots of our modern identification of freedom with choice. … This is important for understanding our culture because, this understood, choice serves as the central totem of consumer capitalism, and those who present choices to us appear as handmaidens of our own freedom.

So what we’ve got right there is a core irony of Kant’s position: our culture bases our notion of freedom (badly) upon it being wholly internalized and individualistic, and yet we actively treat other people as a means — in violation of the previously mentioned categorical imperative — of expressing our freedom. Put another way: as soon as you need a doctor to fill out a prescription to validate your autonomous medical choice, you’ve compromised the principle of autonomy.

But Crawford continues:

When the choosing will is hermetically sealed off from the fuzzy, hard-to-master contingencies of the empirical world, it becomes more “free” in a sense: free for the kind of neurotic dissociation from reality that opens the door wide for others to leap in on our behalf, and present options that are available without the world-disclosing effort of skillful engagement.

And this is exactly where we get into medical choices where science has bottled a solution to some of your worries; indeed, even vaccines exist so you don’t have to develop an awareness of microbes, with bonus points if you realize that the doctor wants to vaccinate you and use your body in herd immunity as a means of protecting all bodies against certain diseases as some bodies cannot be vaccinated, making the intent to vaccinate people an immoral violation of the categorical imperative. (Please note: both you and your doctor should behave responsibly and you should get vaccinated if you can. Having a deep and personal engagement with the pox will not substantially increase your awareness of microbes in the world, and protecting you and those around you from that realization does not make your doctor an immoral person. The ability to conclude otherwise is why LD debate sometimes annoys the hell out of me.)

But let’s cut back to rationality under “universal laws” at this point: because at the point where a child would rationally disagree with a parent on a course of action — and that’s the conflict we’re generally talking about here — it becomes necessary for the child to denigrate the parent’s choices on child-rearing as fundamentally irrational. And this is a critical problem for Kant in any situation: as soon as we claim that we’ve eliminated the “accidental circumstances” of our current (and yet enduringly human) condition, we attribute the “accidental circumstances” of other people’s humanity to their difference from our position, which makes them not just wrong, but debased and immoral, while our rational(ized) choice is justified by what we errantly consider to be universal principles. And this is how you weasel out of the categorical imperative like a Total Asshole: “I know I’m a rational actor here, and I see that because they would take actions different from my own that they are not rational, therefore I am not imperatively bound to treat them with the same dignity as I would treat all rational actors such as myself.” (For more on this topic, read Mistakes Were Made (But Not By Me).)

We routinely see this in debates: “I present the categorical imperative as a totally rational ethical framework, and while it would be unethical to force my ethical framework on another rational being, my opponent’s disagreement with my ethical framework clearly shows that they are irrational and thus makes it totally fine for me to force my ethical framework on them.”

And that’s the real reason that I’m opposed to this topic: the strongest philosophical ground I know of for the affirmative will turn the person standing on it into an asshole who (ironically) dehumanizes everybody that disagrees with them. And unless their “autonomous medical choice” is being made at the proctologist’s office, turning into an asshole is not a good thing.

But skipping back to how bad of an idea it is to assume that teenagers are, by default, able to make autonomous medical choices (in the same way the 2nd Amendment assumes that gun owners are part of a well-regulated militia), let’s talk about the widespread wrongness of this topic.

Drugs! Specifically, the consumption of prescription drugs in a way that isn’t prescribed. If it’s a medical choice to take Oxycontin or Adderall as prescribed, then it’s a medical choice to also take them as not prescribed (the pharmacist gave us the dosage information and we rationally and autonomously chose to ignore it), and that’s illegal for everybody rather than being a right for adolescents. The affirmative may try to argue that autonomous medical choices should be a right for everybody, except that’s not what the resolution says, and even if it were, drug-permissive countries like Portugal still regard addiction as a public health scourge to be treated, and that’s true whether the drugs being consumed are corporate-produced “medications” or some other innovation of modern chemistry. The point is that the affirmative position leaves the medicine cabinet open with tacit approval for all subsequent abuse by teenagers. (If you want a particular genealogy of “public health,” check out chapter 2 of Foucault: Power.)

But that’s the obvious one. Let’s take it one step worse.

The American Psychiatric Association includes Anorexia in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) — this makes it a medical condition. It quickly leads to malnutrition, which is also a diagnosable medical condition. This means that treating anorexia is a medical choice. And what we immediately encounter is that the people who are suffering from anorexia are the people who are inflicting anorexia upon themselves and have thus, generally and typically, made the autonomous medical decision to forgo treatment of anorexia.

Here’s some of what Laurie Penny — who was hospitalized with/for/because of anorexia — has written about it in Unspeakable Things, with emphasis added:

The other girls on the ward look like every kind of girl I’d grown up afraid of. … It’s bad enough being on a locked ward, but now I have to be locked up with a bunch of frivolous fashion kids? Clearly, these girls have starved themselves to the point of collapse simply because they want to look pretty; I, meanwhile, have perfectly rational, intellectual reasons for doing exactly the same. (p 55)

You do not do this to look beautiful. You know you look like hell. You do it because you want to disappear. … You’re sick of being looked at and judged and found wanting. (p 26)

It is no surprise that so many women and girls have what are delicately called ‘control issues’ around their bodies, from cutting and injuring their flesh to starving or stuffing themselves with food, compulsive exercise, or pathological, unhappy obsession over how we look and dress. Adolescence, for a woman, is the slow realisation that you are not considered as fully human as you hoped. You are a body first, and your body is not yours alone… (p 114)

So many young people are doing ritual violence to their own bodies. Diagnosis of eating disorders, chronic cutting and other, more arcane forms of self-injury has mushroomed over the past decade, especially among girls, young queers, and anybody who’s under extra pressure to fit in. (p 30)

Eating disorders are still seen as diseases peculiar to pretty young white women, which perhaps explains why years of ‘awareness raising’ have led to a great deal of glamour and mystery surrounding this deadliest of mental illnesses and precious little understanding. … [T]he number of people with eating disorders is still rising, and we are no closer to solving one of the great mysteries of modern life… The best answer we seem to have come up with is ‘magazines’. This says rather more about what society thinks goes on in the minds of teenage girls than it does about the cause of an epidemic that kills thousands of young people every year, and leaves countless more living half-existences with the best dreams of their single lives shrunk to the size of a dinner plate. … Eating disorders are what happens when youthful rebellion cannibalises itself. (p 29-30)


And the crucial point of that last block is where Penny observes that “This says rather more about what society thinks goes on in the minds of teenage girls,” because that’s the clue that we’re fighting with competing rationalities. That’s the warning about what else will get unleashed with the affirmation of the resolution. Yes, young women should have reproductive autonomy — but not to the point where we tacitly accept anorexia as, you know, just part of how kids cope with growing up. Yes, children of anti-vaxxers and faith-healers should totally See A Doctor About That — but not to the point where we’re okay with bored suburban kids practicing chemistry in their parents’ medicine cabinet.

As written, the resolution categorically assumes too much of teenagers and too little of their parents and should be regarded very suspiciously for all of the reasons that the teenager on the affirmative will not want to discuss.

And that’s the final vicious point: while the affirmative may passionately advocate for the rights of their peer group, their claim to rationality is undermined by their necessary belief that their presumptive parents (or whoever it is they need autonomy from) must be behaving irrationally (contrariwise, their parents would assert that the children are behaving irrationally in dissent). Their position is not ethically disinterested from the outcome of the debate, and so their capacity for rational deliberation appears compromised on this very specific issue. To put it another way: they may be able to give powerful pathos-laden witness as to why they should have situational autonomy, but they do so to the detriment of their ethos as they disregard the wider implications of the topic that should consistently overwhelm them and result in a negative ballot.

The closing perk for those of you who read this far is the opening clip from Trainspotting, which critiques the prescribed list of socially permissible “rational” choices with non-prescription drugs and from which I took the post title. It is probably the best movie I recommend people avoid watching because the pure wretchedness of its content is so masterfully arranged. (Which is to say: if you think you can take it, you should watch it. Especially if you’re a sheltered suburban teenager. But there is a dead baby on the ceiling, and that’s just one scene.)

* It is at this point in the first draft that I take a break for lunch; a dish of elk, rice, and vegetables, spiced and sauteed in olive oil. This action isn’t quite ironic as it is not exactly autonomous: my cat will cry at me if I’m not eating, partially because he hates it when I get hangry but mostly because he likes to mooch bits of people-food from me. In this decade — since being divorced — I’ve gained almost 40 pounds, almost all of it muscle. This hasn’t changed the fact that I hate the idea of taking up space, of being noticeable, of becoming a target. That I can pass for one of the dudebros at the gym, the sort of people I grew up shying away from, is a nauseating source of cognitive dissonance for me. I project this resentment of my form outward and thus generally expect, as reflection, that I am generally feared and resented — all the more for being atypically non-confrontational, as that confuses the hell out of people who think that I’m what I look like. I am therefore impervious to common flirtation (on the grounds that “that person, as one of those people, fears and resents people that look like me, and thus myself inclusive, ergo she would not actually flirt with me; she obviously and rationally expects that the chiseled musculature and pretty blue eyes are totally a trap, a position which — while I believe it is incorrect — I totally respect and sympathize with, especially since it’s merely what I would think if I were standing where she’s standing, even though I’m not.”) which I suspect has gotten me rightly cussed out as “clueless” or a “stupid boy” behind my back once-or-repeatedly in the past few years. But I’d rather be one of those stupid boys than one of these total assholes, with apologies to Ms. Penny (in the abstract).

In related news, prior to a policy round (in which the genders of exactly nobody actually mattered for competitive purposes) at the state championship, I was asked how I should be addressed and I flippantly responded “Your Majesty.” This was wrong of me, and I apologize. I am to be correctly addressed as “Your Grace” since I am a card-carrying Pope of Eris.

Felicia, Foucault, and the Freakshow

There you stand, beloved freak; Let it shine. –Garbage, “Beloved Freak”

A study asserts that reading literary fiction (versus just fiction, or non-fiction, or nothing) improves empathy because it causes the reader to guess at what’s going on to round out the characters. This is funny in three ways: first, it requires unscientific subjectivity to elevate a book from fiction to literary fiction; second, it requires sloppy incompleteness on the part of the author and — by necessity — misidentification on the part of the reader; third, when this study is cited by speakers at conferences, they’re likely to be sloppily incomplete in their reference to it and merely call out “fiction” as beneficial. I might be a bit snarky and suggest that the study is itself a work of literary fiction as the authors try to construct meaning in their lives, and that I feel for them, but I think that might prove their point, which isn’t what I’m all about here. Because the people I tend to feel for are the people who are doing partial documentation of their lives in the form of memoirs. More specifically, and as I asked a delightful San Francisco executive when the speaker repeated the “fiction” claim: shouldn’t a good memoir do a better job of building empathy for a real person than literary fiction might?

I’ve decided to try this hypothesis on my niece by getting her memoirs of amazing women. I decided to start with Valerie Plame (Wilson)’s memoir of her career as a CIA spy because the CIA helpfully redacted all content that wasn’t age appropriate. As well as all content that was age appropriate. And every word that had a vowel. So that didn’t work. But then Felicia Day wrote a memoir, so I bought that and waited in line for five hours to get it signed for my niece who likely won’t be allowed to read it for a few years because the word “fuck” gets bandied about quite a bit, and Felicia asked me what I thought of her book and I replied half-truthfully that I liked it so far — the full truth was that I was enjoying the content but found the overly-modern style to be atrocious.

The problem of the overly-modern style seems to be associated with not-authors increasingly writing books and taking on a very casual tone that not only tries to intermittently converse with the reader or the author or the editor that is apparently failing at their job, but then also decorates text to suggest that words should sound different and thus mean something different, instead of simply choosing alternate phrasing or — heaven forbid — trusting the good senses of the reader. To be clear, this isn’t unique to Felicia — Aziz Ansari undercut his otherwise fascinating sociological screed on modern romance with it, Jenny Lawson fills out her taxidermied tome with it, and even Nietzsche would abuse his text with long dashes followed –BY ALL CAPS. But I don’t think they’re copying Nietzsche at all; I think their editor is failing to rein in their stylistically obnoxious tendencies. The more-professional writers like David Sedaris, Scott Berkun, and Molly Crabapple (specifically her essay on turning 30 as her memoir is not yet published) all manage to be funny and poignant and touching without being stylistically sloppy.

But given memoirs by both Plame* and Day, we can start to see — as Foucault would have — the populist reversal of power that the Internet has brought to publishing stories of the self. This is not an un-observed change; Dr. Bell has noted that what technology “is changing is how stories are told, and who gets to tell them,” and she certainly knows her Foucault.

For those who can’t quote it from memory, here’s the most-relevant bit of Foucault at this juncture, from Discipline & Punish:

For a long time ordinary individuality – the everyday individuality of everybody – remained below the threshold of description. To be looked at, observed, described in detail, followed from day to day by an uninterrupted writing was a privilege. The chronicle of a man, the account of his life, his historiography, written as he lived out his life formed part of the rituals of his power. The disciplinary methods reversed this relation, lowered the threshold of describable individuality and made of this description a means of control and a method of domination. It is no longer a monument for future memory, but a document for possible use. And this new describability is all the more marked in that the disciplinary framework is a strict one: the child, the patient, the madman, the prisoner, were to become, with increasing ease from the eighteenth century and according to a curve which is that of the mechanisms of discipline, the object of individual descriptions and biographical accounts. This turning of real lives into writing is no longer a procedure of heroization; it functions as a procedure of objectification and subjection.

And yet Plame and Day show two inversions here:

  1. Plame was elevated to public consciousness — heroized, if you will — and thus had a story to tell, but then had that story heavily redacted to not only obfuscate the CIA’s rituals of power, but also to reassert the prior disciplinary relationship with-and-over Plame; that is, the CIA alters the monument and excises the content of the document to prevent possible use. The inversion here is predictable: it is better to control the document than to write the document, especially as the subject of the document.
  2. Day bootstrapped herself on YouTube to gain an audience. The trajectory of her career was simply unfeasible just a few years prior when Plame’s identity was being leaked to Judy Miller and the New York Times. She is keenly aware of this and documents it not as a heroization, but rather as her (rather Erisian as far as I’m concerned) freakishness manifesting in a changing context that leaves her continually vulnerable. The inversion here is that a selection of her weaknesses are turned not to the private disciplinarian gaze, but opened to the public.

The inversion that Day’s memoir demonstrates is less predictable than Plame’s traditional memoir challenge to a bastion of knowledge, but it does answer Foucault’s concluding problem in Discipline & Punish: “At present, the problem lies rather in the steep rise in the use of these mechanisms of normalization and the wide-ranging powers which, through the proliferation of new disciplines, they bring with them.” (Emphasis added.) But what we see with novelty memoirs like Day’s, or Berkun’s, or Crabapple’s is that the audience-finding reach of modern technology is allowing what would have been a document of subjection to instead be disseminated into public consciousness and undermine the mechanisms of normalization.

So while I find it regrettable that we’re losing the particularly considered cognitive style of how we tell our stories (and I blame the editors), I think we should be delighted that we’re also expanding who gets to tell them.

The new problem we’re going to have is discovery: if the new value is subversion of normalization, then how does somebody who doesn’t already have an audience get their unusual story to an audience that wants to read it? Especially given the condition that most adult Americans are reading fewer than 6 books a year — reference, noting that 6 is the median for the 75% that read any books — and we can easily speculate that the majority of those are reading the same books that several of their friends are reading.

This isn’t really a new problem, though; it’s the same old problem: social power favors normalization. We engage in herd behavior and over-rely on stereotyping to stay vaguely engaged on one hand and excuse our disengagement on the other, because being over-engaged is creepy and is why restraining orders exist. And this is also why publishers are still A Thing: their business model is to figure out who’s normal enough for an audience threshold and then buy the rights to distribute that story to the target sub-normalized audience. And thus it becomes a problem for publishers and distributors: how can they identify amorphous demographics that don’t perceptibly exist yet and then connect them to somebody who’s willing to narrate a story for them? And, thinking forward, how will they profit from previous years’ publications that don’t have an actual marketing budget? (Amazon’s recommendation algorithm should be helpful here, but it’s been pretty much crap for over a decade.)

Indeed, Felicia’s initial “big break” with The Guild wasn’t just keying into putting a show about computer gaming on the fledgling YouTube site (good sense and lucky opportunism) — it was having YouTube find and promote her work because they expected it would appeal to their target demographic. And while I’ve known geeks and actors through high school and college that have done not-dissimilar work, the apparent imperfection of their timing crimped their market success. The increase of opportunity comes with increased competition for niche mindshare, exposing luck as a major factor beyond talent or skill.

And it turns out that the particular strain of Impostor Syndrome that says “you’re only here because you got lucky; you couldn’t get here again no matter how hard you tried” seems to be what Felicia and I have in common. That, and having played raiding gnome warlocks — least popular class, represent! — back in the early days of World of Warcraft when raiding was a major social event. (Funny story: when I was waiting for five hours I didn’t really know much of anything about Felicia outside of Dr. Horrible’s Sing-Along Blog and that she’d been doxxed by the hateful jackasses of the Internet. But my niece was super-excited about being able to read the book with her name inscribed in it in a few years, as I expected she would be so I was figuring it’d be worth it regardless and was totally right. More on that in a few moments.)

Make no mistake: we are each skilled at what we do. But the opportunities we took to get where we respectively are? They are not opportunities we can necessarily tell other people to take, thus limiting the value of our experiential advice, nor opportunities that we’d necessarily be able to take if we’d somehow gone back in time to give ourselves good advice. And that makes us tangibly insecure.

See, I was generally a gifted slacker breezing through my rural town’s conservative public school system, completed a public relations degree in three years, realized that I had no professional interest in relating to the public, and got a guy who was some combination of kind-and-naive to give me a job programming computers for Intel; add a fortunate divorce from an unfortunate marriage and a guy I’d helped hire years before offering me his particular job as he vacated it, and I’m doing great. Felicia was born 11 months after I was and went from being homeschooled to college without a high school diploma, studied math and music, and then went into acting, and claims to be mostly writing and producing these days — but can also draw a crowd of 1200 people to a bookstore and make them go “squee” in surprisingly adorable ways.

Neither of us really “belong” where we’re at, and that plays hell with our self-perception (as I was acutely reminded of while live-coding example work for three other actual programmers last week), probably contributing to varying degrees of social anxiety disorders. But there is good news for anybody who comes after us and is lucky and opportunistic, and it is this: Education Is Not Destiny.

And this should be super-important to the teenagers who are my demographic audience because I coach them in debate as they consider college-going strategies, most of which are designed to indenture more than educate:

Being able to identify and exploit opportunities is more important than what you study.

Being a computer science major won’t make you a competent programmer, and being a theater major won’t lead to a successful career on stage and screen. These respective outcomes may be more likely, but they are not guaranteed. Getting into a successful career is all about developing a knack for identifying and exploiting opportunities… which also requires competence and skill, but you can miss a lot of opportunities despite being totally competent and skilled. So feel confident in studying whatever the hell you want to (and can afford to), so long as you’ve got strong contextual awareness of what sorts of opportunities are good for you.

To briefly address the obvious question: why does everybody act like education is destiny, having substituted “What do you want to be when you grow up?” with “What’s your major?” The simple answer is because it’s a stereotypical part of a person’s identity until it isn’t, in the same way performative masculinity or femininity gets associated with male and female, never mind that I’m into baking cakes now. And people attach moral significance to these performances against expected social roles, based on ancient tribalism up through Greece and beyond, when every individual’s contribution to the collective effort of civilization was seemingly pre-ordained as a moral duty to hold the chaos at bay. If you want to cut to the chase in moral philosophy from Aristotle to post-Enlightenment and how it allows people to think that their social constructs are sturdy enough for moral judgments, the book After Virtue is really quite good, but way off topic.

Let’s wrap up this line of discussion with a brief interlude for Nicole Sullivan. She ended up studying economics in college, then became a carpenter because sure-why-not, and is now a badass web programming geek because of Dance Dance Revolution and because Education Is Not Destiny.

I previously mentioned that the five hour wait was totally worth it, and would be remiss if I didn’t mention that Felicia was lovely and delightful even after five hours of book signing, a feat of endurance that was probably fueled by a lot of sugar and caffeine. But here’s what she says about it:

And this keys us into the insight that the “Public Figures” that can tap into the good vibes of their more-accessible-than-ever audience and be thankful for the opportunity to not be pigeonholed into a pre-ordained social role rather than feeling entitled to their celebrity have a professional skill that will help them retain their base when they want to shift their career to something new.

Take, for example, Sara Taylor. She also recently wrote a book in which a couple of teenage girls go and kill a bunch of people. It is, presumably, not exactly a memoir even though the girls are in a band, and Sara (aka Chibi) is the lead singer for the goth rock/darkwave band The Birthday Massacre. Anyway, I went to a Birthday Massacre concert last time they were in town — and the last opening band got me rather worried, because they were sounding like the stereotypical rock stars spewing verbal abuse on the audience from the sanctified space of the stage. But my fears were misguided; this was the typical gesture between Chibi and the crowd:
Chibi <3s Us and We <3 Chibi

And this brings us to our concluding advice: it’s not just nice to have an audience that loves you, but valuable to have an audience that facilitates opportunities you want to pursue. The line between supporter and participant has been smudged by the careers of random, chaotic people, and being able to identify and exploit opportunities from the crowd will be a valuable career-building skill. I have experience in this; my best career move was the one a former associate happened to offer me.

But I did have to be skilled and competent before I could exploit that opportunity, and that’s the positive note that Felicia ends her book on: because technology is changing how we tell our stories and who gets to tell them (going back to Bell), it is advisable for people to cultivate their personal narratives — it’s not just a matter of “writing a book,” but also a matter of aligning the facets of your personality with the work you want to spend a portion of your life doing and letting the world know the company you want to keep.

Raiding ‘locks, represent!


* While she’d reasonably assert that she is Valerie Plame Wilson, she was outed, entered the public consciousness, and became memoir-worthy (as it were) as Valerie Plame. Thus the portion of her life that is relevant here is labeled as “Plame,” preferring consistency to accuracy.

Big Data vs. Affirmative Action

Past performance is not an indicator of future results. –Every Prospectus Ever.

So there was this former executive of a successful San Francisco start-up telling us about how she just couldn’t hire a person who had bad credit.

And as you make sense of that statement, you’re very likely to jump to the wrong conclusion about what kind of a person that particular corporate executive was: you’re likely to suspect that she’s a classist elitist jerk because that’s the kind of story that you’re used to hearing that would conclude with “she just couldn’t hire a person who had bad credit.” And this is the same sort of trap we fall in when we encounter structural racism, or any other systemic -ist form of discrimination by algorithm: we look for the bad actor so that we can continue to play by the neutral-or-good rules while reassuring ourselves that we’re nothing like the bad actor. This would be fine if the rules were truly neutral, but they aren’t, and that’s even assuming that the people who wrote them did so in good faith and with the best of intentions.

So this post is for all you kids who are so not-racist that you think Affirmative Action is reverse discrimination that undermines the notion of equality that you’re totally in favor of. I’m going to start by splitting out the bad actors and looking briefly at what we want to believe is neutral to show how even good intentions can lead to -ism or -ist results. Then I’m going to double down on that point to explain the sort of situation the corporate executive I lead with was in, and why it’s a risk to you. And then I’m going to conclude with a value-based justification of affirmative action, regardless of how badly the policy may be implemented.

(Just to be clear, -ism and -ist are the abstract forms of “any kind of -ist — racist, sexist, ageist — behavior, or any -ism — racism, sexism, ageism — et cetera” because I’m making an abstract argument about how discrimination works here.)

Let’s start with a basic claim:

We all know what interpersonal racism is: it’s these assholes who caused shit like this half a century ago, but then Rosa Parks and MLK fixed almost everything except for… pretty much everything. But this is what schools want to teach because it sounds like progress has been made and the grown-ups are constantly doing a better job than their predecessors — we call this the Myth of Perpetual Progress and it’s off-topic, but I recommend Loewen’s Lies My Teacher Told Me if you’re surprised.

I’m going to briefly argue that structural -ism/-ist behavior is designed into our most-optimized social systems. And I’m drawing upon a bog of knowledge from Cathy O’Neil, Alistair Croll, and danah boyd in case you want to read more of this ilk (you totally should, that’s why I linked it for you). And I’m going to start with the very simple, aspirational, American-dreamy idea of buying a house. Yes, there are bad actors in Real Estate and everywhere else, too, but I’m going to not delve into them directly so I can focus on non-bad actors. And this is important because when things that aren’t bad are actually still bad, then we can’t excuse the badness by saying that we’ve cherry-picked evidence to make things worse than they are.

Because here’s the thing about the non-bad actors like insurers and creditors: they know what kind of a world they live in. Their first bet is a guess on when you’re going to die. But they’re also keenly interested to know how stable your job is, or if you’re likely to be fired — even if it’s a dodgy firing by an -ist boss that doesn’t know you and doesn’t like your demographic. For mortgages, they’re interested to know where you’re buying a home. And in the ideal free-market model, they’ll lower the cost of service for their low-risk customers by charging their high-risk customers more for less service, commensurate with the risk. The service charges (insurance premiums or loan interest rates) would be higher to recoup money earlier, while the service maximum (insurance policy size or mortgage amount) would be capped lower to minimize the maximum amount of loss on payout/default. This is simple: the individual — me, you, any schlub — is specifically paying the institution to take on the fiscal risk of giving some rando — me, you, any schlub — a chunk of change, either to buy some property or in the event of misfortune. The more precisely the institution knows the likelihood that you’ll stop paying them and they’ll lose money on you, the more precisely they can charge you for the favor of taking on your risks to ensure that they make (rather than lose) money.

What this means is that an optimized mortgage equation knows the elevated likelihood of a black man — his name is Jordan, why not? — being spuriously fired by some asshole boss, and will charge Jordan more and loan him less than it would me. The algorithm isn’t trying to be racist; it’s just acknowledging the ugly likelihood that Jordan’s going to have his life disrupted in a way that will impact the business Jordan is doing with the bank and protecting the bank from it while simultaneously working to earn my business. But this still means that Jordan is paying more for less, a negative effect that makes the bank running the racism-aware algorithm appear “racist” rather than “legitimately concerned about racism.” (We’ll get to what “legitimately concerned” looks like when we’re talking about affirmative action.)
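To make the pricing mechanics concrete, here’s a minimal sketch of risk-based loan pricing in Python. It is a toy model, not any actual bank’s: the `price_loan` function, the target return, and all the numbers are my own assumptions, but it shows the shape of the logic described above, where a higher estimated default probability pushes the offered rate up and the loan cap down.

```python
# A toy sketch of risk-based loan pricing -- not any real lender's model.
# Assumption: the lender wants the same expected return from every borrower,
# so a higher estimated default probability means a higher offered rate and
# a lower cap on how much they'll lend.

def price_loan(default_probability, target_return=0.03, max_exposure=200_000):
    """Return (offered_rate, loan_cap) for a one-period toy loan.

    If the borrower defaults with probability p and otherwise repays
    principal plus rate r, the expected return per dollar lent is
        (1 - p) * (1 + r) - 1
    Solving that expression for r at the target return gives the rate.
    """
    p = default_probability
    offered_rate = (1 + target_return) / (1 - p) - 1
    loan_cap = max_exposure * (1 - p)  # bound the worst-case loss
    return offered_rate, loan_cap

# Two applicants who differ only in the risk the model assigns them:
print(price_loan(0.02))   # lower assessed risk:  ~5.1% rate, ~$196k cap
print(price_loan(0.10))   # higher assessed risk: ~14.4% rate, ~$180k cap
```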

Assuming that Jordan does business with the bank anyway, he’s going to promptly run into two downstream effects. First, he can’t buy as much or as nice a property. For example, he might find that properties near a polluted industrial district, or near railroad tracks, or in a higher-crime neighborhood, or under the low flight path of the airport are the only sorts of properties he can afford — and they bring stresses, carcinogens, and a shorter life span. (There was a clear correlation in southern California as I recall, but sadly I cannot remember the book I was reading that was harping on this evidence.) And that’s bad, but secondly there’s an ongoing economic impact: the American middle class keeps a lot of its wealth in its home real estate, and the lower mortgage cap basically caps Jordan’s ability to develop his home as a bastion of wealth.

If you’re in my target audience of teenagers, this may come as a surprise — but CNBC reports that

“Homeownership plays a pivotal role in the U.S. economy and has historically been one of the primary sources of wealth accumulation for middle-class families,” said Lawrence Yun, chief economist for the Realtors. … “Unfortunately, due to an underperforming labor market, insufficient housing supply and overly stringent underwriting standards since the recession, homeownership has plunged to a rate not seen in over two decades,” Yun added. “As a result, the country has become more unequal as the number of homeowners has fallen while the number of renters has significantly risen.”

So let me break down some sample math for you, on the expectation that you’re a teenager and haven’t thought about this at all yet: when I was renting a few years ago, I was spending about $1000 per month on a mediocre apartment — that’s just spent and gone. But now I’ve got a house, and I’m putting $900 per month into the principal (that’s the part of the house I own), spending $400 on interest, and spending $300 on taxes. So clearly my budget is more restricted per month, but both the interest and the taxes are tax-deductible, and the principal counts towards my net worth — so at the end of the year, after tax deductions, I’m only spending — in the “spent and gone” sense of the word — about $500 per month on a nice house. And while rent has gone up noticeably since I bought this house, my mortgage payments have not, even though the market value of the house has increased. Yes, it can be hard to have that principal payment leaning on my budget every month, but when I’m done with this house I should be able to sell it and walk away with a huge pile of cash that I simply wouldn’t have been able to accumulate if I’d been renting.
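If you want to check that arithmetic yourself, here is the back-of-the-envelope version. The monthly figures are the ones above; the 30% marginal tax rate is an assumption I am making for illustration, since the value of the deduction depends on your actual bracket.

```python
# Back-of-the-envelope rent-vs-own math from the paragraph above.
# The $1000/$900/$400/$300 figures are from the post; the 30%
# marginal tax rate is an illustrative assumption.

rent = 1000                                   # per month, spent and gone
principal, interest, taxes = 900, 400, 300    # monthly mortgage breakdown
marginal_tax_rate = 0.30                      # assumed bracket

monthly_outlay = principal + interest + taxes             # $1600 hits the budget
deduction_value = (interest + taxes) * marginal_tax_rate  # ~$210/month back at tax time
spent_and_gone = interest + taxes - deduction_value       # ~$490/month actually gone
equity_built = principal                                   # $900/month added to net worth

print(f"renting, spent and gone: ${rent}/mo")
print(f"owning, monthly outlay:  ${monthly_outlay}/mo")
print(f"owning, spent and gone:  ${spent_and_gone:.0f}/mo")
print(f"owning, equity built:    ${equity_built}/mo")
```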

That’s why Jordan wants to buy a house, wants to buy a house that will nicely increase in value while he owns it, and wants to buy a house that he’ll be able to re-sell when he’s ready to move on: it is an important part of his ability to advance socioeconomically, for both him and his family (if you’ve imagined him having a family), and he needs a nice big loan in order to do it because otherwise he’ll be blowing too much money on rent for too long and not make that socioeconomic advance at, for example, the rate I am.

But look back at Yun’s statement: “overly stringent underwriting standards.” That’s banks being risk-averse toward people who may not be able to pay all the money back, for any reason — inclusive of being a victim of -ist stupidities at their place of (former) employment. Is that kind of algorithmic discrimination legal? Probably not. Does it happen anyway? Yes. Very clever statisticians are cooking up new ways to read historical data that won’t seem like racism — because they’re not really racists, right? — or sexism or any other -ism while more precisely charging customers for the risks they pose to the bank. And that historical data is a sight to behold. Let’s say for a moment that we’re looking at a 30-year fixed-rate mortgage (this is pretty normal; it will take twice as long as you’ve been alive to pay off a house), so we look back 30 years into the past to see how risks played out — and realize that 30 years ago was 1985, when the crack-cocaine epidemic ravaged inner cities and ignited a racially charged moral panic over “at-risk” youth. 30 years before that was a decade before civil rights, so employment for minorities was super-precarious. As humans, we want to believe — as we were taught in school — “Wow, so much has changed!” But the algorithm is just going to crunch its numbers as if nothing ever changes.
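To see why “just crunching the numbers” isn’t neutral, here is a deliberately tiny sketch: invented zip codes, invented loan histories, and a frequency table standing in for a real model. It shows how a risk score trained on historical outcomes carries old discrimination forward without ever touching a protected attribute.

```python
# Deliberately tiny sketch: a "neutral" risk score trained on invented
# historical outcomes. No protected attribute is used anywhere, but
# zip code acts as a proxy for one.

from collections import defaultdict

# Historical loans as (zip_code, defaulted) pairs. Defaults are higher
# in the historically redlined zip, not because its residents are worse
# borrowers but because they were systematically denied the stable jobs
# and credit that prevent defaults.
history = [
    ("94601", True), ("94601", True), ("94601", False), ("94601", False),
    ("94610", False), ("94610", False), ("94610", False), ("94610", True),
]

counts = defaultdict(lambda: [0, 0])   # zip -> [defaults, total loans]
for zip_code, defaulted in history:
    counts[zip_code][0] += int(defaulted)
    counts[zip_code][1] += 1

def risk_score(zip_code):
    """Observed historical default rate for the applicant's zip code."""
    d, n = counts[zip_code]
    return d / n if n else 0.0

# A brand-new applicant from the redlined zip inherits its history:
print(risk_score("94601"))   # 0.50: charged more, lent less
print(risk_score("94610"))   # 0.25
```

Swap in a gradient-boosted model and a thousand features and the mechanism is the same; the proxies just get harder to spot.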

Let me put that another way: large-scale personal discrimination, even-and-especially in the past, informs our algorithms of how dangerous it is to be in a non-powerful demographic in a country that is apparently overrun by -ist assholes. And our algorithms, in as amoral (if not sociopathically dispassionate) a way as they possibly can, return their variable response to those known risks — a response that is rarely questioned, because we trust in the safe neutrality of the rules, because they’re here to protect us from those -ist assholes, right?

You should be legitimately concerned at this point.

But let’s give you an example that’s in your more-immediate future: there was this former executive of a successful San Francisco start-up telling us about how she just couldn’t hire a person who had bad credit, even though she really wanted to. We’ll say she was trying to hire Jordan (because I don’t know who she was actually trying to hire). See, the problem is that no matter how much money she offers Jordan now, even for moving expenses and as a joining bonus, if Jordan has a shitty credit score — a huge menagerie of college loans, maybe a couple of missed credit card payments because OMG Textbooks Are Expensive, maybe a used car loan on a lemon — Jordan will be rejected when he tries to rent an apartment, any apartment, and will thus be unable to move to San Francisco to take the well-paying job he needs to pay off that huge menagerie of college loans. The VP may want to pay Jordan with wheelbarrows full of cash (and she did!), but that desire to improve Jordan’s future means nothing when he’s disallowed from entering the housing market because of the financial 8-ball he’s starting his adult life behind. And that’s not racism at play, that’s just an authoritative algorithm spreading misery from the past into the future. And you should be legitimately concerned at this point.

You should be legitimately concerned because, as danah boyd succinctly put it, “Technology doesn’t produce accountability [and] Removing discretion often backfires.” And when most of our economic lives are governed by black boxes full of trade-secret algorithms replacing the old arcane actuarial tables, then even in a best-possible-faith situation the negative impacts of past -ist behaviors will be used to encode -ism into our future.

I started off this post with the routinely disregarded financial wisdom that “past performance is not an indicator of future results,” a disclaimer that goes on every stock and mutual fund prospectus. But we ought to now repurpose it to serve as a reminder that it’s a mistake to pre-judge somebody’s future simply based on what we can reasonably infer from their past. It’s a mistake to not rent an apartment to a bright young engineer starting a promising career at one of the hottest companies in the Bay Area just because college is a financial tribulation these days. It’s a moral abdication to make people pay more for the risks they incur based on how people, civilization, and chance treat them rather than trying to help them mitigate those very risks. And thus it’s wrong not to take actions that affirm people’s ability to perform better and achieve more once they’re no longer disadvantaged by their demographic’s correlation to risk. That’s what we’re supposed to mean by “Affirmative Action.” It’s not supposed to be a question of quotas (as dumbly implemented as the statistics they’re intended to counteract), nor should it be about lowering standards; it’s about seeing how well somebody does despite being almost certainly disadvantaged and then giving them the opportunity to show what they can do when they’re not disadvantaged anymore.

And so our legitimate concern ought sometimes to move us to moral action that affirms the idea that citizens of a just society would not be historically disadvantaged by other people’s hostile/predatory behaviors; here justice imposes a counter-factual responsibility to look to a person’s possible future regardless of their statistical past. Contrariwise, it is immoral to abdicate this responsibility to a machine built to enact corporate policy. Past performance is not an indicator of future results.

So I’ve covered what I wanted to cover and avoided what I wanted to avoid. But there is one last thing that I should briefly address, so kids, pay attention: affirmative action, even in its dumbest and sloppiest implementation, is not about reverse racism. It’s about extending opportunity to somebody who worked hard to earn it with relative, if not absolute, achievement. It’s not really about you at all. And if you are concerned, then allow me to suggest that your concern is not that other people are being given opportunities, but rather that the adults of the world are doing a piss-poor job of creating opportunities in general, such that opportunities seem scarce in a way that makes The Hunger Games a totally reasonable allegory rather than the obvious horror show unfit for consumption in civil society that it totally is. That’s why you almost certainly mis-attributed a cold elitism to the executive who just couldn’t hire a person who had bad credit before I explained what was going on. But if you doubt this, consider: the simple economic fact of the matter is that “the richest 80 people in the world alone now possess more combined riches than do the poorest half of the world’s population” and that ratio is getting sharper over time.

Or, to put it another way, when “you wouldn’t be so annoyed that Jordan got to interview for that job and you didn’t, except you just know you’d have done better in the knife-fighting part of the interview than Jordan” — the problem isn’t who got the interview (affirmative action), it’s that there was a knife-fighting component to it (scarce opportunity). I know this doesn’t help your situation, but I do hope it helps your perspective.

Update – Further Reading: “When Big Data Becomes Bad Data” by Lauren Kirchner at ProPublica.

Linear Dragons

“We each make our own accommodation with the world and learn to survive in it in different ways.” –Douglas Adams, Last Chance to See

Photo: Tuka the Komodo Dragon at the Sydney Zoo, 2012

Aristotle was an asshole: his theories of virtue, ethics, and morality actively excluded the overwhelming majority of not only the world (no barbarians allowed) but even the residents of Athens (also no women, poor people, or ugly people). This is a crucial point that gets forgotten when people try to tap into ancient Greek philosophy: its aristocratic aspirationality may not scale up so well. And Aristotle’s blind spots make him generally incompatible with the teachings of early Christianity that started in a Roman-subjugated and entirely-underclass Jewish population. And while Nietzsche would later complain that the morality of Christendom was only suitable for herds of slaves that diminished the (romantic) dignity of aspirational — that is, both heroic and fictional — humanity, Nietzsche didn’t provide a form of ethics that scaled well either. And if you want more on this topic, then I’d suggest reading Alasdair MacIntyre’s After Virtue because my intention here is only to call to mind what Aristotle gets right, and what dragons get wrong, and what it means to ordinary modern people.

Aristotle gets two things right: any virtuous behavior tends to lie between too much and too little of that type of behavior, and a person develops that virtue by intentionally practicing hitting that mid-point of virtuous behavior. For example: while too little courage is obviously cowardice, too much courage is suicidal insanity — thus the virtue of courage is exercised and developed by finding the mid-point between timidity and wanton recklessness. Aristotle also asserts that relying on ingrained discipline or intuition could result in the short-term appearance of virtuous behavior, but it wouldn’t be a conscientious practice and development of it. On the whole, existentialists can look back to Aristotle’s insistence on practiced virtue for their core claim of Existence Precedes Essence.

Anyway, dragons. Nobody coming from the European tradition would make a claim as to the virtuousness of dragons. While they may have many very impressive qualities, dragons are considered to be bad by human standards. And this is true not only of the mythological dragons — I’ll get to them — but also of real dragons. Here’s a snippet of conversation Douglas Adams recorded regarding the Komodo Dragon in his book Last Chance to See:

What works as successful behavior for us does not work for lizards, and vice versa.

“For instance,” said Mark, “we don’t eat our own babies if they happen to be within reach when we’re feeling peckish… A baby dragon is just food as far as an adult is concerned… It moves about and has got a bit of meat on it. It’s food. If they ate them all, of course, the species would die out, so that wouldn’t work very well. Most animals survive because the adults have acquired an instinct to not eat their babies. The dragons survive because the baby dragons have acquired an instinct to climb trees. The adults are too big to do it, so the babies just sit up in the trees until they’re big enough to look after themselves. Some babies get caught though, which works fine. It sees them through times when food is scarce and helps to keep the population within sustainable levels. Sometimes they just eat them because they’re there.”

In this case, Aristotle would say the Komodo Dragon is not virtuous despite eating only a moderate quantity of its offspring because it’s not intentionally practicing moderation in eating its young, but instead encountering the constraint imposed by the baby dragons climbing trees to escape. There’s a moral lesson in there for us all, I’m sure.

But let’s talk about the European mythological/symbolic dragon. What we know about them tends to come from our perspective: they’re slain by heroes, most notably by Bard or Saint George, but really anybody trying to recover the treasure of a kingdom and/or a virginal princess has motivation to go after a dragon. And everybody knows where to find the dragon, because by the time the dragon is big enough to have a kingdom’s worth of treasure or to be stealing — or demanding — princesses, it’s already incredibly powerful and noteworthy and almost certainly well into its adult years. Conversely, a young dragon seems generally unremarkable: its lack of power prevents it from abusing its power, so its continual pushing toward its personal maximum of consumptive rapacity — symbolically both in terms of material wealth and sexually, with a standing preference for adolescent virginal girls — goes unremarked until its power has grown to the point of being overwhelming and the consumptive rapacity that it has been practicing for years, decades, maybe even centuries, becomes abusive. And this gets the attention of the heroes and gets the dragon killed.

From our point of view, the dragon should have known better than to irresponsibly and intolerably abuse its power. But what did the dragon know? The symbolic dragon is intelligent, though more cunning than wise. The dragon knew that if it didn’t fully secure its territory, it’d get slaughtered by humans or, presumably, attacked by other dragons. And so as it’s growing up, it is constantly straining its small self to ensure its individual security against a plethora of social threats. And then when it is an adult, it finds that its essence is simply what it was practicing for all those years — only now it’s got this incredible dragon body that’s making it wildly successful at defending its treasure hoard and intimidating humans into virgin sacrifice.

The ingrained insecurity of its youth leads the dragon not to practice finding the mid-points of virtuous behavior, but instead to push toward chronic excess in the hopes of feeling secure. Of course, the success of its excessive behaviors increases its value as a target of valiant heroes and thus undermines its security.

One of the early lessons of Jordan Ellenberg’s How Not to Be Wrong is on non-linear (that is, “not in a straight line”) thinking as exemplified by the Laffer curve. And the lesson is simple and sounds just like Aristotle: the optimal position for a variable policy is somewhere in the middle. Laffer was originally describing the hypothetical curve of government revenue from taxing people too little or too much: if the government doesn’t tax anybody anything then it’ll collect $0, but if it taxes everybody everything, then nobody will bother earning anything and the government will also collect $0. Similarly, the dragon should strategically attune its greed, balancing its need for enough power to fend off incidental challenges against its need to avoid the level of infamy that makes it a ripe target for heroes.
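Ellenberg’s point is easy to see with a toy model. The quadratic shape below is my illustrative assumption, not his claim about what the real curve looks like; the only feature that matters is zero at both ends and a peak in the middle.

```python
# Toy Laffer-style curve: revenue is zero at a 0% tax rate, zero at a
# 100% rate, and peaks somewhere in between. The quadratic shape is an
# illustrative assumption, not an empirical claim.

def revenue(rate, taxable_base=100.0):
    """Revenue collected at a tax rate in [0, 1]."""
    # People earn less (or hide more) as the rate climbs, so the
    # effective base shrinks linearly to zero at 100%.
    return rate * taxable_base * (1 - rate)

rates = [i / 10 for i in range(11)]
for r in rates:
    print(f"rate {r:.0%}: revenue {revenue(r):5.1f}")
print(f"peak at {max(rates, key=revenue):.0%}")   # the middle, not either extreme
```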

Instead, the dragon’s linear more-is-better thought process leads it to a cruel optimism: that it may one day be secure if it just keeps doing what has finally started seeming successful for it with the gold-pillaging and virgin-consuming.

And all of this is actually super-relevant to humans, even the ones that prefer evolutionary biology over symbolic mythology, because our limbic system is what makes us similar to lizards and undermines our ability to break out of linear thinking.

“It doesn’t matter how clever you think you are; those years, they leave all kinds of scars.” –Curve, “Beyond Reach”

I sympathize with the dragons. It’s not just that I am amassing a treasure-trove, nor is it that the keenness of my physique can dissolve the attractive young fitness coach-lady’s professional veneer into flirtatiousness as she probes my pectorals for fat that is not there. But rather like the dragons, it’s that I was not always like this.

Growing up I was always one of the youngest kids in the class, and also tended to have the lowest BMI. I’m average height (5’10”) but until I was 31, my maximum adult weight was 128 lbs — so rewind that to high school and I’ve got the kind of scrawny body that results in girls telling me that I’m causing them body image issues. For most of my formative years, however, I went out of my way to avoid being persecuted. I learned very early — from my brother’s experience as well as popular culture — that there were violently dangerous bastards at school and that I should try to avoid them: give space, or blend in, or — late in high school — stand out by looking spiky, or, in the worst case, use my small size to slip between obstacles and people. Failing that, and knowing that I’d not win a “fair” fight, I should fight dirty and decisively.

I confess that I was temporally lucky; when I was growing up, it was not utterly horrifying to be the quiet inscrutable loner. Taking Machiavelli — “It is better to be feared than to be loved” — to heart was not yet grounds for expulsion. Columbine was still a few years away, so being a bit prickly wasn’t a therapy-inducing problem. Sara Taylor describes that time of my life perfectly in Boring Girls:

I suppose it was around this time that I started nurturing my desire to be feared. I wanted to surprise people who underestimated me, and rather than simply impress them, I wanted them to regret having felt that way. I became fixated on that moment of realization.

And this is simply because if a fight happens, then the fight isn’t really going to end until my opponent realizes the depth of their error in coming after me. I’ve heard that cis-boys can rough each other up to test and reinforce the strength of their hierarchical organizations; that team sports help engender this in them. I don’t understand that way of thinking at all. I think it’s fucking idiotic.

Anyway, my perpetual readiness to fight was thankfully never substantially tested. I suspect I was a step above where Paul Graham describes in his essay “Why Nerds are Unpopular” when he says that “to be unpopular in school is to be actively persecuted” — and indeed, I would assert that to be too popular is to be a precarious target for persecution, just like a dragon with too much treasure — but for me that just meant that I ought to seem complicit in whatever behavior, no matter how wretched, I couldn’t avoid, in order to ensure my personal safety.

And that’s how a kid keeps his social nose clean while running the risk of encountering a predatory gang of bored delinquents, or simply being trampled by some girl who literally runs him down because she’s oblivious to where she’s going, both of which come together later in the form of obnoxious college football jocks, inadvertently confirming that the lower half of campus was a hive of degenerate villainy — as I had long suspected and mostly avoided.

But now I’m “like this,” which means I’ve added 30 lbs of muscle and have traveled solo over much of the world (and surfed on three continents) and so on, and yet I’m still deeply, physiologically anxious about meeting people when I don’t know what they want from me. I still glance at joints, eyes, throat, and kidneys when encountering “sporty” looking guys even though I look almost identical to them. I resent people who are oblivious to how much physical space they’re taking up even though I’m not certain how much additional space I’m taking up.

The insecurities that I used to intentionally practice have persisted with me long past their defensive usefulness, corrupting my memories such that I identify with troubled Hal from Rocket Science more than I do with all the small-town newspaper reports of my long run of youthful accomplishments that my mother lovingly and comprehensively scrapbooked.

It turns out that bending an ingrained behavior into a non-linear virtue is a difficult thing to do; it is difficult to make the dragon lurking in the limbic system curve its habits.

So am I saying that despite appearances I’m actually a cold-blooded monster that will flame-broil you as soon as look at you? That’s certainly one interpretation, but I’d prefer you realize that this

is shockingly wrong linear thinking that fails to see how the taint of power drives nerd culture on a trajectory indistinguishable from bro culture, in the same way that, as MacIntyre observed, power has consistently turned Marxists into Weberians. And this is well known, even specific to historic nerd culture as deconstructed from Revenge of the Nerds: their linear pursuit of one goal overshot the mark of virtue, and in the end their behavior was practically indistinguishable from that of their antagonists. While it may be totally true that those people over there have created a nasty hostile asshole culture, this does not preclude the possibility that we are living in a nasty hostile asshole culture ourselves, especially if we’ve spent an awful lot of time fixated on how much we hate their culture rather than positively cultivating our own.

And so consider the Komodo Dragon: just because the baby Komodo Dragon is able to climb a tree to flee its parents doesn’t mean it won’t try to eat its own babies when it grows up. It seems like we humans should be able to behave better than this, but it’s not at all clear that we’re going to.