The Lexwerks

Felicia, Foucault, and the Freakshow

There you stand, beloved freak; Let it shine. –Garbage, “Beloved Freak”

A study asserts that reading literary fiction (versus just fiction, or non-fiction, or nothing) improves empathy because it causes the reader to guess at what’s going on to round out the characters. This is funny in three ways: first, it requires unscientific subjectivity to elevate a book from fiction to literary fiction; second, it requires sloppy incompleteness on the part of the author and — by necessity — misidentification on the part of the reader; third, when this study is cited by speakers at conferences, they’re likely to be sloppily incomplete in their reference to it and merely call out “fiction” as beneficial. I might be a bit snarky and suggest that the study is itself a work of literary fiction as the authors try to construct meaning in their lives, and that I feel for them, but I think that might prove their point, which isn’t what I’m all about here. Because the people I tend to feel for are the people who are doing partial documentation of their lives in the form of memoirs. More specifically, and as I asked a delightful San Francisco executive when the speaker repeated the “fiction” claim: shouldn’t a good memoir do a better job of building empathy for a real person than literary fiction might?

I’ve decided to try this hypothesis on my niece by getting her memoirs of amazing women. I decided to start with Valerie Plame (Wilson)’s memoir of her career as a CIA spy because the CIA helpfully redacted all content that wasn’t age appropriate. As well as all content that was age appropriate. And every word that had a vowel. So that didn’t work. But then Felicia Day wrote a memoir, so I bought that and waited in line for five hours to get it signed for my niece, who likely won’t be allowed to read it for a few years because the word “fuck” gets bandied about quite a bit. When Felicia asked me what I thought of her book, I replied half-truthfully that I liked it so far — the full truth was that I was enjoying the content but found the overly-modern style atrocious.

The problem of the overly-modern style seems to be associated with not-authors increasingly writing books and taking on a very casual tone that not only tries to intermittently converse with the reader or the author or the editor who is apparently failing at their job, but then also decorates text to suggest that words should sound different and thus mean something different, instead of simply choosing alternate phrasing or — heaven forbid — trusting the good senses of the reader. To be clear, this isn’t unique to Felicia — Aziz Ansari undercut his otherwise fascinating sociological screed on modern romance with it, Jenny Lawson fills out her taxidermied tome with it, and even Nietzsche would abuse his text with long dashes followed –BY ALL CAPS. But I don’t think they’re copying Nietzsche at all; I think their editors are failing to rein in their stylistically obnoxious tendencies. The more-professional writers like David Sedaris, Scott Berkun, and Molly Crabapple (specifically her essay on turning 30, as her memoir is not yet published) all manage to be funny and poignant and touching without being stylistically sloppy.

But given memoirs by both Plame* and Day, we can start to see — as Foucault would have — the populist reversal of power that the Internet has brought to publishing stories of the self. This is not an unobserved change; Dr. Bell has noted that technology “is changing how stories are told, and who gets to tell them” and she certainly knows her Foucault.

For those who can’t quote it from memory, here’s the most-relevant bit of Foucault at this juncture, from Discipline & Punish:

For a long time ordinary individuality – the everyday individuality of everybody – remained below the threshold of description. To be looked at, observed, described in detail, followed from day to day by an uninterrupted writing was a privilege. The chronicle of a man, the account of his life, his historiography, written as he lived out his life formed part of the rituals of his power. The disciplinary methods reversed this relation, lowered the threshold of describable individuality and made of this description a means of control and a method of domination. It is no longer a monument for future memory, but a document for possible use. And this new describability is all the more marked in that the disciplinary framework is a strict one: the child, the patient, the madman, the prisoner, were to become, with increasing ease from the eighteenth century and according to a curve which is that of the mechanisms of discipline, the object of individual descriptions and biographical accounts. This turning of real lives into writing is no longer a procedure of heroization; it functions as a procedure of objectification and subjection.

And yet Plame and Day show two inversions here:

  1. Plame was elevated to public consciousness — heroized, if you will — and thus had a story to tell, but then had that story heavily redacted to not only obfuscate the CIA’s rituals of power, but also to reassert the prior disciplinary relationship with-and-over Plame; that is, the CIA alters the monument and excises the content of the document to prevent possible use. The inversion here is predictable: it is better to control the document than to write the document, especially as the subject of the document.
  2. Day bootstrapped herself on YouTube to gain an audience. The trajectory of her career was simply unfeasible just a few years prior when Plame’s identity was being leaked to Judy Miller and the New York Times. She is keenly aware of this and documents it not as a heroization, but rather as her (rather Erisian as far as I’m concerned) freakishness manifesting in a changing context that leaves her continually vulnerable. The inversion here is that a selection of her weaknesses are turned not to the private disciplinarian gaze, but opened to the public.

The inversion that Day’s memoir demonstrates is less predictable than Plame’s traditional memoir challenge to a bastion of knowledge, but it does answer Foucault’s concluding problem in Discipline & Punish: “At present, the problem lies rather in the steep rise in the use of these mechanisms of normalization and the wide-ranging powers which, through the proliferation of new disciplines, they bring with them.” (Emphasis added.) But what we see with novelty memoirs like Day’s, or Berkun’s, or Crabapple’s is that the audience-finding reach of modern technology is allowing what would have been a document of subjection to instead be disseminated into public consciousness and undermine the mechanisms of normalization.

So while I find it regrettable that we’re losing the particularly considered cognitive style of how we tell our stories (and I blame the editors), I think we should be delighted that we’re also expanding who gets to tell them.

The new problem we’re going to have is discovery: if the new value is subversion of normalization, then how does somebody who doesn’t already have an audience get their unusual story to an audience that wants to read it? Especially given that most adult Americans are reading fewer than 6 books a year — 6 being the median for the 75% who read any books at all — and we can easily speculate that the majority of those are reading the same books their friends are reading.

This isn’t really a new problem, though; it’s the same old problem: social power favors normalization. We engage in herd behavior and over-rely on stereotyping to stay vaguely engaged on one hand and excuse our disengagement on the other, because being over-engaged is creepy and is why restraining orders exist. And this is also why publishers are still A Thing: their business model is to figure out who’s normal enough for an audience threshold and then buy the rights to distribute that story to the target sub-normalized audience. And thus it becomes a problem for publishers and distributors: how can they identify amorphous demographics that don’t perceptibly exist yet and then connect them to somebody who’s willing to narrate a story for them? And, thinking forward, how will they profit from previous years’ publications that don’t have an actual marketing budget? (Amazon’s recommendation algorithm should be helpful here, but it’s been pretty much crap for over a decade.)

Indeed, Felicia’s initial “big break” with The Guild wasn’t just keying into putting a show about computer gaming on the fledgling YouTube site (good sense and lucky opportunism) — it was having YouTube find and promote her work because they expected it would appeal to their target demographic. And while I’ve known geeks and actors through high school and college that have done not-dissimilar work, the apparent imperfection of their timing crimped their market success. The increase of opportunity comes with increased competition for niche mindshare, exposing luck as a major factor beyond talent or skill.

And it turns out that the particular strain of Impostor Syndrome that says “you’re only here because you got lucky; you couldn’t get here again no matter how hard you tried” seems to be what Felicia and I have in common. That, and having played raiding gnome warlocks — least popular class, represent! — back in the early days of World of Warcraft when raiding was a major social event. (Funny story: when I was waiting for five hours I didn’t really know much of anything about Felicia outside of Dr. Horrible’s Sing-Along Blog and that she’d been doxxed by the hateful jackasses of the Internet. But my niece was super-excited about being able to read the book with her name inscribed in it in a few years, as I expected she would be so I was figuring it’d be worth it regardless and was totally right. More on that in a few moments.)

Make no mistake: we are each skilled at what we do. But the opportunities we took to get where we respectively are? They are not opportunities we can necessarily tell other people to take, thus limiting the value of our experiential advice, nor opportunities that we’d necessarily be able to take if we’d somehow gone back in time to give ourselves good advice. And that makes us tangibly insecure.

See, I was generally a gifted slacker breezing through my rural town’s conservative public school system, completed a public relations degree in three years, realized that I had no professional interest in relating to the public, and got a guy who was some combination of kind-and-naive to give me a job programming computers for Intel; add a fortunate divorce from an unfortunate marriage, plus a guy I’d helped hire years before offering me his particular job as he vacated it, and I’m doing great. Felicia was born 11 months after I was and went from being homeschooled to college without a high school diploma, studied math and music, and then went into acting, and claims to be mostly writing and producing these days — but can also draw a crowd of 1200 people to a bookstore and make them go “squee” in surprisingly adorable ways.

Neither of us really “belong” where we’re at, and that plays hell with our self-perception (as I was acutely reminded of while live-coding example work for three other actual programmers last week), probably contributing to varying degrees of social anxiety disorders. But there is good news for anybody who comes after us and is lucky and opportunistic, and it is this: Education Is Not Destiny.

And this should be super-important to the teenagers who are my demographic audience because I coach them in debate as they consider college-going strategies, most of which are designed to indenture more than educate:

Being able to identify and exploit opportunities is more important than what you study.

Being a computer science major won’t make you a competent programmer; being a theater major won’t lead to a successful career on stage and screen. These respective outcomes may be more likely, but they are not guaranteed. Getting into a successful career is all about developing a knack for identifying and exploiting opportunities… which also requires competence and skill, but you can miss a lot of opportunities despite being totally competent and skilled. So feel confident in studying whatever the hell you want to (and can afford to), so long as you’ve got strong contextual awareness of what sorts of opportunities are good for you.

To briefly address the obvious question: why does everybody act like education is destiny, having substituted “What do you want to be when you grow up?” with “What’s your major?” The simple answer is that a major is a stereotypical part of a person’s identity until it isn’t, in the same way performative masculinity or femininity gets associated with male and female, never mind that I’m into baking cakes now. And people attach moral significance to these performances against expected social roles based on ancient tribalism, up through Greece and beyond, when every individual’s contribution to the collective effort of civilization was seemingly pre-ordained as a moral duty to hold the chaos at bay. If you want to cut to the chase on moral philosophy from Aristotle to post-Enlightenment and how it allows people to think that their social constructs are sturdy enough for moral judgments, the book After Virtue is really quite good, but way off topic.

Let’s wrap up this line of discussion with a brief interlude for Nicole Sullivan. She ended up studying economics in college, then became a carpenter because sure-why-not, and is now a badass web programming geek because of Dance Dance Revolution and because Education Is Not Destiny.

I previously mentioned that the five hour wait was totally worth it, and would be remiss if I didn’t mention that Felicia was lovely and delightful even after five hours of book signing, a feat of endurance that was probably fueled by a lot of sugar and caffeine. But here’s what she says about it:

And this keys us into the insight that the “Public Figures” who can tap into the good vibes of their more-accessible-than-ever audience, and who feel thankful for the opportunity not to be pigeonholed into a pre-ordained social role rather than entitled to their celebrity, have a professional skill that will help them retain their base when they want to shift their careers to something new.

Take, for example, Sara Taylor. She also recently wrote a book in which a couple of teenage girls go and kill a bunch of people. It is, presumably, not exactly a memoir even though the girls are in a band, and Sara (aka Chibi) is the lead singer for the goth rock/darkwave band The Birthday Massacre. Anyway, I went to a Birthday Massacre concert last time they were in town — and the last opening band got me rather worried, because they were sounding like the stereotypical rock stars spewing verbal abuse on the audience from the sanctified space of the stage. But my fears were misguided; this was the typical gesture between Chibi and the crowd:
Chibi <3s Us and We <3 Chibi

And this brings us to our concluding advice: it’s not just nice to have an audience that loves you, but valuable to have an audience that facilitates opportunities you want to pursue. The line between supporter and participant has been smudged by the careers of random, chaotic people, and being able to identify and exploit opportunities from the crowd will be a valuable career-building skill. I have experience in this; my best career move was the one a former associate happened to offer me.

But I did have to be skilled and competent before I could exploit that opportunity, and that’s the positive note that Felicia ends her book on: because technology is changing how we tell our stories and who gets to tell them (going back to Bell), it is advisable for people to cultivate their personal narratives — it’s not just a matter of “writing a book,” but also a matter of aligning the facets of your personality with the work you want to spend a portion of your life doing and letting the world know the company you want to keep.

Raiding ‘locks, represent!


* While she’d reasonably assert that she is Valerie Plame Wilson, she was outed, entered the public consciousness, and became memoir-worthy (as it were) as Valerie Plame. Thus the portion of her life that is relevant here is labeled as “Plame,” preferring consistency to accuracy.

Big Data vs. Affirmative Action

Past performance is not an indicator of future results. –Every Prospectus Ever.

So there was this former executive of a successful San Francisco start-up telling us about how she just couldn’t hire a person who had bad credit.

And as you make sense of that statement, you’re very likely to jump to the wrong conclusion about what kind of a person that particular corporate executive was: you’re likely to suspect that she’s a classist elitist jerk because that’s the kind of story that you’re used to hearing that would conclude with “she just couldn’t hire a person who had bad credit.” And this is the same sort of trap we fall in when we encounter structural racism, or any other systemic -ist form of discrimination by algorithm: we look for the bad actor so that we can continue to play by the neutral-or-good rules while reassuring ourselves that we’re nothing like the bad actor. This would be fine if the rules were truly neutral, but they aren’t, and that’s even assuming that the people who wrote them did so in good faith and with the best of intentions.

So this post is for all you kids who are so not-racist that you think Affirmative Action is reverse discrimination that undermines the notion of equality that you’re totally in favor of. I’m going to start by splitting out the bad actors and looking briefly at what we want to believe is neutral to show how even good intentions can lead to -ism or -ist results. Then I’m going to double down on that point to explain the sort of situation the corporate executive I led with was in, and why it’s a risk to you. And then I’m going to conclude with a value-based justification of affirmative action, regardless of how badly the policy may be implemented.

(Just to be clear, -ism and -ist are the abstract forms of “any kind of -ist — racist, sexist, ageist — behavior, or any -ism — racism, sexism, ageism — et cetera” because I’m making an abstract argument about how discrimination works here.)

Let’s start with a basic claim:

We all know what interpersonal racism is: it’s these assholes who caused shit like this half a century ago, but then Rosa Parks and MLK fixed almost everything except for… pretty much everything. But this is what schools want to teach because it sounds like progress has been made and the grown-ups are constantly doing a better job than their predecessors — we call this the Myth of Perpetual Progress and it’s off-topic, but I recommend Loewen’s Lies My Teacher Told Me if you’re surprised.

I’m going to briefly argue that structural -ist behavior is designed into our most-optimized social systems. And I’m drawing upon a body of knowledge from Cathy O’Neil, Alistair Croll, and danah boyd in case you want to read more of this ilk (you totally should, that’s why I linked it for you). And I’m going to start with the very simple, aspirational, American-dreamy idea of buying a house. Yes, there are bad actors in Real Estate and everywhere else, too, but I’m going to not delve into them directly so I can focus on non-bad actors. And this is important because when things that aren’t bad are actually still bad, then we can’t excuse the badness by saying that we’ve cherry-picked evidence to make things worse than they are.

Because here’s the thing about the non-bad actors like insurers and creditors: they know what kind of a world they live in. Their first bet is a guess on when you’re going to die. But they’re also keenly interested to know how stable your job is, or if you’re likely to be fired — even if it’s a dodgy firing by an -ist boss who doesn’t know you and doesn’t like your demographic. For mortgages, they’re interested to know where you’re buying a home. And in the ideal free-market model, they’ll lower the cost of service for their low-risk customers by charging their high-risk customers more for less service, commensurate with the risk. The service charges (insurance premiums or loan interest rates) would be higher to recoup money earlier, while the service maximum (insurance policy size or mortgage amount) would be capped lower to minimize the maximum amount of loss on payout/default. This is simple: the individual — me, you, any schlub — is specifically paying the institution to take on the fiscal risk of giving some rando — me, you, any schlub — a chunk of change, either to buy some property or in the event of misfortune. The more precisely the institution knows the likelihood that you’ll stop paying them and they’ll lose money on you, the more precisely they can charge you for the favor of taking on your risks to ensure that they make (rather than lose) money.
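
To make that mechanism concrete, here’s a minimal sketch in Python of the pricing logic just described. The linear formulas and every number in them are my hypothetical illustration, not any actual bank’s underwriting model:

    # Hypothetical risk-based pricing: riskier customers pay a higher rate
    # (so the institution recoups money earlier) and get a lower cap
    # (so the maximum loss on default is smaller). Illustration only.
    def price_mortgage(default_risk, base_rate=0.04, max_loan=500_000.0):
        """Return (interest_rate, loan_cap) for an estimated default_risk in [0, 1)."""
        rate = base_rate + 0.10 * default_risk  # service charge rises with risk
        cap = max_loan * (1.0 - default_risk)   # service maximum falls with risk
        return rate, cap

    print(price_mortgage(0.02))  # low risk: cheaper money, bigger loan
    print(price_mortgage(0.20))  # high risk: dearer money, smaller loan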

What this means is that an optimized mortgage equation that knows the elevated likelihood of a black man — his name is Jordan, why not? — being spuriously fired by some asshole boss will charge Jordan more and loan him less than it would me. The algorithm isn’t trying to be racist; it’s just acknowledging the ugly likelihood that Jordan’s going to have his life disrupted in a way that will impact the business Jordan is doing with the bank, and protecting the bank from it, while simultaneously working to earn my business. But this still means that Jordan is paying more for less, a negative effect that makes the bank running the racism-aware algorithm appear “racist” rather than “legitimately concerned about racism.” (We’ll get to what “legitimately concerned” looks like when we’re talking about affirmative action.)

Assuming that Jordan does business with the bank anyway, he’s going to promptly run into two downstream effects. First, he can’t buy as much property, or as nice a property. For example, he might find that properties near a polluted industrial district, or near railroad tracks, or in a higher-crime neighborhood, or under the low flight path of the airport are the only sorts of properties he can afford — and they bring stresses, carcinogens, and a shorter life span. (There was a clear correlation in southern California as I recall, but sadly I cannot remember the book I was reading that was harping on this evidence.) And that’s bad, but secondly there’s an ongoing economic impact: the American middle class keeps a lot of its wealth in its home real estate, and the lower mortgage cap basically caps Jordan’s ability to develop his home as a bastion of wealth.

If you’re in my target audience of teenagers, this may come as a surprise — but CNBC reports that

“Homeownership plays a pivotal role in the U.S. economy and has historically been one of the primary sources of wealth accumulation for middle-class families,” said Lawrence Yun, chief economist for the Realtors. … “Unfortunately, due to an underperforming labor market, insufficient housing supply and overly stringent underwriting standards since the recession, homeownership has plunged to a rate not seen in over two decades,” Yun added. “As a result, the country has become more unequal as the number of homeowners has fallen while the number of renters has significantly risen.”

So let me break down some sample math for you on the expectation that you’re a teenager and haven’t thought about this at all yet: when I was renting a few years ago, I was spending about $1000 on a mediocre apartment per month — that’s just spent and gone. But now I’ve got a house, and I’m putting $900 per month into the principal (that’s the part of the house I own) and spending $400 on interest and spending $300 on taxes. So clearly my budget is more restricted per month, but both the interest and the taxes are tax deductible, and the principal counts towards my net worth — so at the end of the year, after tax deductions, I’m only spending — in the “spent and gone” sense of the word — $500 per month on a nice house. And while rent has gone up noticeably since I’ve bought this house, my mortgage payments have not even though the market value of the house has increased. Yes, it can be hard to have that principal payment leaning on my budget every month, but when I’m done with this house I should be able to sell it and walk away with a huge pile of cash that I simply wouldn’t have been able to accumulate if I’d been renting.
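
If you want to check that arithmetic yourself, here’s the same math as a Python back-of-the-envelope. The 28% marginal tax rate is my assumption (the essay never states one); it’s roughly the rate that lands on the quoted $500:

    # Back-of-the-envelope for the paragraph above. Numbers from the essay,
    # except marginal_rate, which is an assumed illustration.
    rent = 1000.0                    # renting: spent and gone every month
    principal = 900.0                # buying: builds equity, counts toward net worth
    interest, taxes = 400.0, 300.0   # buying: deductible monthly costs
    marginal_rate = 0.28             # assumed marginal tax rate

    recovered = (interest + taxes) * marginal_rate   # ~$196/month back via deductions
    spent_and_gone = interest + taxes - recovered    # what's truly gone each month
    print(round(spent_and_gone))                     # ~504: about $500 vs. $1000 rent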

That’s why Jordan wants to buy a house, wants to buy a house that will nicely increase in value while he owns it, and wants to buy a house that he’ll be able to re-sell when he’s ready to move on: it is an important part of his ability to advance socioeconomically, for both him and his family (if you’ve imagined him having a family), and he needs a nice big loan in order to do it because otherwise he’ll be blowing too much money on rent for too long and not make that socioeconomic advance at, for example, the rate I am.

But look back at Yun’s statement: “overly stringent underwriting standards.” That’s when banks are risk-averse to people who may not be able to pay all the money back, for any reason — inclusive of being a victim of -ist stupidities in their place of (former) employment. Is that kind of algorithmic discrimination legal? Probably not. Does it happen anyway? Yes. Very clever statisticians are cooking up new ways to read historical data that won’t seem like racism — because they’re not really racists, right? — or sexism or any other -ism while more-precisely charging customers for the risks they pose to the bank. And that historical data is a sight to behold. Let’s say for a moment that we’re looking at a 30-year fixed interest rate mortgage (this is pretty normal; it will take twice as long as you’ve been alive to pay off a house), so we look back 30 years into the past to see how risks played out — and realize that 30 years ago was 1985, when the crack-cocaine epidemic ravaged inner cities, igniting a racially-charged moral panic over “at-risk” youth. 30 years before that was a decade before civil rights, so employment for minorities was super-precarious. As humans, we want to believe — as we were taught in school — “Wow, so much has changed!” But the algorithm is just going to crunch its numbers as if nothing ever changes.

Let me put that another way: large-scale personal discrimination, even-and-especially in the past, informs our algorithms of how dangerous it is to be in a non-powerful demographic in a country that is apparently overrun by -ist assholes. And our algorithms, in as amoral (if not sociopathically dispassionate) a way as they possibly can, return their variable response to the known risks in a way that is rarely questioned, because we trust in the safe neutrality of the rules because they’re here to protect us from those -ist assholes, right?

You should be legitimately concerned at this point.

But let’s give you an example that’s in your more-immediate future: So there was this former executive of a successful San Francisco start-up telling us about how she just couldn’t hire a person who had bad credit, even though she really wanted to. We’ll say she was trying to hire Jordan (because I don’t know who she was actually trying to hire). See, the problem is that no matter how much money she offers Jordan now, even for moving expenses and as a joining bonus, if Jordan has a shitty credit score — a huge menagerie of college loans, maybe a couple of missed credit card payments because OMG Textbooks Are Expensive, maybe a used car loan on a lemon — Jordan will be rejected when he tries to rent an apartment, any apartment, and will thus be unable to move to San Francisco to take the well-paying job he needs to pay off that huge menagerie of college loans. The executive may want to pay Jordan with wheelbarrows full of cash (and she did!), but that desire to improve Jordan’s future means nothing when he’s disallowed from entering the housing market because of the financial 8-ball he’s starting his adult life behind. And that’s not racism at play; that’s just an authoritative algorithm spreading misery from the past into the future. And you should be legitimately concerned at this point.

You should be legitimately concerned because, as danah boyd succinctly put it, “Technology doesn’t produce accountability [and] Removing discretion often backfires.” And when most of our economic lives are governed by black boxes full of trade-secret algorithms replacing the old arcane actuarial tables, then even in a best-possible-faith situation the negative impacts of past -ist behaviors will be used to encode -ism into our future.

I started off this post with the routinely disregarded financial wisdom that “past performance is not an indicator of future results,” a disclaimer that goes on every stock and mutual fund prospectus. But we ought to now repurpose it to serve as a reminder that it’s a mistake to pre-judge somebody’s future simply based on what we can reasonably infer from their past. It’s a mistake to not rent an apartment to a bright young engineer starting a promising career at one of the hottest companies in the Bay Area just because college is a financial tribulation these days. It’s a moral abdication to make people pay more for the risks they incur based on how people, civilization, and chance treat them rather than trying to help them mitigate those very risks. And thus it’s wrong to not take actions that affirm people’s ability to perform better and achieve more when they’re not being disadvantaged by their demographic correlation to risk. That’s what we’re supposed to mean by “Affirmative Action.” It’s not supposed to be a question of quotas (as dumbly implemented as the statistics they’re intended to counteract), nor should it be about lowering standards; it’s about seeing how well somebody does despite being almost certainly disadvantaged and then giving them the opportunity to show what they can do when they’re not disadvantaged anymore.

And so our legitimate concern ought to sometimes take moral action to affirm the idea that citizens of a just society would not be historically disadvantaged by other people’s hostile/predatory behaviors; here justice imposes a counter-factual responsibility to look to a person’s possible future regardless of their statistical past. Contrariwise, it is immoral to abdicate this responsibility to a machine built to enact corporate policy. Past performance is not an indicator of future results.

So I’ve covered what I wanted to cover and avoided what I wanted to avoid. But there is one last thing that I should briefly address, so kids, pay attention: affirmative action, even in its dumbest and sloppiest implementation, is not about reverse racism. It’s about extending opportunity to somebody who worked hard to earn it with relative, if not absolute, achievement. It’s not really about you at all. And if you are concerned, then allow me to suggest that your concern is not that other people are being given opportunities, but rather that the adults of the world are doing a piss-poor job of creating opportunities in general, such that they seem scarce in a way that makes The Hunger Games a totally reasonable allegory rather than the obvious horror show unfit for consumption in civil society that it totally is. That’s why you almost certainly mis-attributed a cold elitism to the executive who just couldn’t hire a person who had bad credit before I explained what was going on. But if you doubt this, consider: the simple economic fact of the matter is that “the richest 80 people in the world alone now possess more combined riches than do the poorest half of the world’s population” and that ratio is getting sharper over time.

Or, to put it another way, when “you wouldn’t be so annoyed that Jordan got to interview for that job and you didn’t, except you just know you’d have done better in the knife-fighting part of the interview than Jordan” — the problem isn’t who got the interview (affirmative action), it’s that there was a knife-fighting component to it (scarce opportunity). I know this doesn’t help your situation, but I do hope it helps your perspective.

Linear Dragons

“We each make our own accommodation with the world and learn to survive in it in different ways.” –Douglas Adams, Last Chance to See

Photo of Komodo Dragon, 2012, at the Sydney Zoo

Tuka the Komodo Dragon

Aristotle was an asshole: his theories of virtue, ethics, and morality actively excluded the overwhelming majority of not only the world (no barbarians allowed) but even the residents of Athens (also no women, poor people, or ugly people). This is a crucial point that gets forgotten when people try to tap into ancient Greek philosophy: its aristocratic aspirationality may not scale up so well. And Aristotle’s blind spots make him generally incompatible with the teachings of early Christianity that started in a Roman-subjugated and entirely-underclass Jewish population. And while Nietzsche would later complain that the morality of Christendom was only suitable for herds of slaves that diminished the (romantic) dignity of aspirational — that is, both heroic and fictional — humanity, Nietzsche didn’t provide a form of ethics that scaled well either. And if you want more on this topic, then I’d suggest reading Alasdair MacIntyre’s After Virtue because my intention here is only to call to mind what Aristotle gets right, and what dragons get wrong, and what it means to ordinary modern people.

Aristotle gets two things right: any virtuous behavior tends to come between either too much or too little of that type of behavior, and a person develops that virtue by intentionally practicing hitting that mid-point of virtuous behavior. For example: while too little courage is obviously cowardice, too much courage is suicidal insanity — thus the virtue of courage is exercised and developed by finding the mid-point between timidity and wanton recklessness. Aristotle also asserts that relying on ingrained discipline or intuition could result in the short-term appearance of virtuous behavior, but it wouldn’t be a conscientious practice and development of it. On the whole, existentialists can look back to Aristotle’s insistence on practiced virtue for their core claim of Existence Precedes Essence.

Anyway, dragons. Nobody coming from the European tradition would make a claim as to the virtuousness of dragons. While they may have many very impressive qualities, dragons are considered to be bad by human standards. And this is true not only of the mythological dragons — I’ll get to them — but also of real dragons. Here’s a snippet of conversation Douglas Adams recorded regarding the Komodo Dragon in his book Last Chance to See:

What works as successful behavior for us does not work for lizards, and vice versa.

“For instance,” said Mark, “we don’t eat our own babies if they happen to be within reach when we’re feeling peckish… A baby dragon is just food as far as an adult is concerned… It moves about and has got a bit of meat on it. It’s food. If they ate them all, of course, the species would die out, so that wouldn’t work very well. Most animals survive because the adults have acquired an instinct to not eat their babies. The dragons survive because the baby dragons have acquired an instinct to climb trees. The adults are too big to do it, so the babies just sit up in the trees until they’re big enough to look after themselves. Some babies get caught though, which works fine. It sees them through times when food is scarce and helps to keep the population within sustainable levels. Sometimes they just eat them because they’re there.”

In this case, Aristotle would say the Komodo Dragon is not virtuous despite eating only a moderate quantity of its offspring because it’s not intentionally practicing moderation in eating its young, but instead encountering the constraint imposed by the baby dragons climbing trees to escape. There’s a moral lesson in there for us all, I’m sure.

But let’s talk about the European mythological/symbolic dragon. What we know about them tends to come from our perspective: they’re slain by heroes, most notably by Bard or Saint George, but really anybody trying to recover the treasure of a kingdom and/or virginal princess has motivation to go after a dragon. And everybody knows where to find the dragon because by the time the dragon is big enough to have a kingdom’s worth of treasure or be stealing — or demanding — princesses, they’re already incredibly powerful and noteworthy and almost certainly well into their adult years. Conversely, the idea of a young dragon seems generally unremarkable: their lack of power prevents them from abusing their power, so continually pushing themselves to their personal maximum of consumptive rapacity — symbolically both in terms of material wealth and sexually with a standing preference for adolescent virginal girls — is unremarkable until their power has grown to the point of being overwhelming and the consumptive rapacity that they’ve been practicing for years, decades, maybe even centuries, is abusive. And this gets the attention of the heroes and gets the dragon killed.

From our point of view, the dragon should have known better than to irresponsibly and intolerably abuse its power. But what did the dragon know? The symbolic dragon is intelligent, though more cunning than wise. The dragon knew that if it didn’t fully secure its territory, it’d get slaughtered by humans or, presumably, attacked by other dragons. And so as it’s growing up, it is constantly straining its small self to ensure its individual security against a plethora of social threats. And then when it is an adult, it finds that its essence is simply what it was practicing all those years, only now it’s got this incredible dragon body that’s making it wildly successful at defending its treasure hoard and intimidating humans into virgin sacrifice.

The ingrained insecurity of its youth leads the dragon to not practice finding mid-points of virtuous behavior, but instead to push towards chronic excess in the hopes of feeling secure. Of course, the success of its excessive behaviors increases its value as a target of valiant heroes and thus undermines its security.

One of the early lessons of Jordan Ellenberg’s How Not to Be Wrong is on non-linear (that is, “not in a straight line”) thinking as exemplified by the Laffer curve. And the lesson is simple and sounds just like Aristotle: the optimal position for a variable policy is somewhere in the middle. Laffer was originally describing the hypothetical curve of government revenue from taxing people too little or too much: if the government doesn’t tax anybody anything then it’ll collect $0, but if it taxes everybody everything, then nobody will bother earning anything and the government will also collect $0. Similarly, the dragon should strategically attune its greed, balancing its need for power to fend off incidental challenges against its need to avoid the level of infamy that makes it a ripe target for heroes.
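
For the curious, a minimal sketch of that curve: Laffer never specified a functional form, so the quadratic below is merely the simplest function satisfying his two endpoint claims, with its peak between the extremes:

    # Toy Laffer-style revenue curve: R(0) = R(1) = 0, maximum in the middle.
    def revenue(tax_rate, tax_base=100.0):
        """Hypothetical government revenue for a tax_rate in [0, 1]."""
        return tax_base * tax_rate * (1.0 - tax_rate)

    assert revenue(0.0) == 0.0  # tax nothing, collect nothing
    assert revenue(1.0) == 0.0  # tax everything, nobody bothers earning
    best = max(range(101), key=lambda pct: revenue(pct / 100))
    print(best)  # 50: the optimum sits between the two zeros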

Instead, the dragon’s linear more-is-better thought process leads it to a cruel optimism: that it may one day be secure if it just keeps doing what has finally started seeming successful for it with the gold-pillaging and virgin-consuming.

And all of this is actually super-relevant to humans, even the ones that prefer evolutionary biology over symbolic mythology, because our limbic system is what makes us similar to lizards and undermines our ability to break out of linear thinking.

“It doesn’t matter how clever you think you are; those years, they leave all kinds of scars.” –Curve, “Beyond Reach”

I sympathize with the dragons. It’s not just that I am amassing a treasure-trove, nor is it that the keenness of my physique can dissolve the attractive young fitness coach-lady’s professional veneer into flirtatiousness as she probes my pectorals for fat that is not there. But rather like the dragons, it’s that I was not always like this.

Growing up I was always one of the youngest kids in the class, and also tended to have the lowest BMI. I’m average height (5’10”) but until I was 31, my maximum adult weight was 128 lbs — so rewind that to high school and I’ve got the kind of scrawny body that results in girls telling me that I’m causing them body image issues. For most of my formative years, however, I went out of my way to avoid being persecuted. I learned very early — from my brother’s experience as well as popular culture — that there were violently dangerous bastards at school and that I should try to avoid them: give space, or blend in, or — late in high school — stand out by looking spiky, or, in the worst case, use my small size to slip between obstacles and people. Failing that, and knowing that I’d not win a “fair” fight, I should fight dirty and decisively.

I confess that I was temporally lucky; when I was growing up, it was not utterly horrifying to be the quiet inscrutable loner. Taking Machiavelli — “It is better to be feared than to be loved” — to heart was not yet grounds for expulsion. Columbine was still a few years away, so being a bit prickly wasn’t a therapy-inducing problem. Sara Taylor describes that time of my life perfectly in Boring Girls:

I suppose it was around this time that I started nurturing my desire to be feared. I wanted to surprise people who underestimated me, and rather than simply impress them, I wanted them to regret having felt that way. I became fixated on that moment of realization.

And this is simply because if a fight happens, then the fight isn’t really going to end until my opponent realizes the depth of their error in coming after me. I’ve heard that cis-boys can rough each other up to test and reinforce the strength of their hierarchical organizations; that team sports help engender this in them. I don’t understand that way of thinking at all. I think it’s fucking idiotic.

Anyway, my perpetual readiness to fight was thankfully never substantially tested. I suspect I was a step above where Paul Graham describes in his essay “Why Nerds are Unpopular” when he says that “to be unpopular in school is to be actively persecuted” — and indeed, I would assert that to be too popular is to be a precarious target for persecution, just like a dragon with too much treasure — but for me that just meant that I ought to seem complicit in whatever behavior, no matter how wretched, I couldn’t avoid, to ensure my personal safety.

And that’s how a kid keeps his social nose clean while running the risk of encountering a predatory gang of bored delinquents, or simply being trampled by some girl who literally runs him down because she’s oblivious to where she’s going, both of which come together later in the form of obnoxious college football jocks, inadvertently confirming that the lower half of campus was a hive of degenerate villainy — as I had long suspected and mostly avoided.

But now I’m “like this,” which means I’ve added 30 lbs of muscle and have traveled solo over much of the world (and surfed on three continents) and so on, and yet I’m still deeply, physiologically anxious about meeting people when I don’t know what they want from me. I still glance at joints, eyes, throat, and kidneys when encountering “sporty” looking guys even though I look almost identical to them. I resent people who are oblivious to how much physical space they’re taking up even though I’m not certain how much additional space I’m taking up.

The insecurities that I used to intentionally practice have persisted with me long past their defensive usefulness, corrupting my memories such that I identify with troubled Hal from Rocket Science more than I do with all the small-town newspaper reports of my long run of youthful accomplishments that my mother lovingly and comprehensively scrapbooked.

It turns out that bending an ingrained behavior into a non-linear virtue is a difficult thing to do; it is difficult to make the dragon lurking in the limbic system curve its habits.

So am I saying that despite appearances I’m actually a cold-blooded monster that will flame-broil you as soon as look at you? That’s certainly one interpretation, but I’d prefer you realize that this

is shockingly wrong linear thinking that fails to see how the taint of power drives nerd culture on a trajectory indistinguishable from bro culture in the same way that, as MacIntyre observed, power has consistently turned Marxists into Weberians. And this is well known, even specific to historic nerd culture as deconstructed from Revenge of the Nerds: their linear pursuit of one goal overshot the mark of virtue; in the end their behavior was practically indistinguishable from their antagonists’. While it may be totally true that those people over there have created a nasty hostile asshole culture, this does not preclude the possibility that we are living in a nasty hostile asshole culture ourselves, especially if we’ve spent an awful lot of time fixated on how much we hate their culture rather than positively cultivating our own.

And so consider the Komodo Dragon: just because the baby Komodo Dragon is able to climb a tree to flee its parents doesn’t mean it won’t try to eat its own babies when it grows up. It seems like we humans should be able to behave better than this, but it’s not at all clear that we’re going to.

Not a Bug; Loss Aversion is a Feature

TLDR? Loss Aversion is a psychological defense against focus capture, not a bug in our mathematical reasoning.

We’ll come back to that. But first I want to talk about restaurants and food allergies. See, back in 2012 I went from having a dodgy digestive system to having a wrecked digestive system, with my body deciding that an awful lot of otherwise normal foods — like bread — should be treated like poison. And this changed my entire approach to eating, which can interfere with its social aspects. When Jennifer Esposito says that going out to eat isn’t fun anymore, this is the sort of thing she’s talking about: rather than focusing on having fun with your friends or meeting interesting new strangers, the top priority is Don’t Get Poisoned — and that’s even if your social circle understands that you’ve got biological issues and aren’t just trying to be a fad-tastic hipster.

Food allergies have changed the way I tip restaurant staff. Due to the irksome list of things my body rejects, there’s really not much on most menus that I can eat — the result is that my meals are rather repetitive. And what I’ve found is that I really like it when the server figures this out. It assures me that “Don’t Get Poisoned” is under control, so I can have a good meal with friends as if I were a normal person. My iconic instance of this: my friends and I went in near the end of a very rough shift for the young waitress, who was looking worn down but dutifully taking the orders with the last reserves of her chipper customer-focus, and when she got to me, instead of saying “Do you know what you want?” she simply said “I know what you want.” Which was amazingly great as far as I’m concerned. Even my mom will occasionally try to poison me, so if a waitress is taking extra care to assure me that I’m not going to be poisoned then I’m certainly going to leave a larger tip.

To put it another way: I’m not paying more to avoid being poisoned (though the gluten-free bun on a burger often does come with a surcharge) because nobody should be poisoned; I’m paying more to not have to think about being poisoned. I’m paying more in gratitude to people who are thinking about how to prevent my cognitive drain.

So now it’s time to discuss the notion of cognitive drain. Here’s a short article from Kathy Sierra, or her longer video if you want to learn about how it applies to software development, or her even longer book, but the short of it, as relevant here, is this: cognitive resources are finite and shared, so a drain in one place trades off against everything else. Whether you’re holding on to a concern about being poisoned, or exercising the self-control necessary to not yowl at idiocy on parade, or getting distracted by the co-dependently needy notifications of a fussy app, your mental capacity is being diminished.

This plays out with priorities: if you’re focused on being Not Poisoned, then the idiotic buzzing of the phone can bloody well wait, Nick Carr. The immediate priority (No Poison) creates a sort of tunnel vision that crowds out everything else, even when that’s not a good thing to do. Sure, it holds my attention away from being notified that another bot has followed me on Twitter, which is totally fine, but it also distracts me from my friends or from that incoming message from the lady I’m trying to cultivate a relationship with, either of which is clearly more desirable than something we’d all like to take for granted (Not Getting Poisoned). As a bonus, having a clear priority of Not Getting Poisoned also mitigates sexism and lookism when tipping the wait-staff — so let’s hear it for the social progressiveness of… autoimmune digestive disorders?

Anyway, I view the functional value of tunnel vision like a Laffer curve, and if you’ve not been properly introduced to the Laffer curve then I strongly suggest that you go read Jordan Ellenberg’s How Not to Be Wrong because, excepting a bit where he goes off at length on overloading the lottery, it’s the most interesting applied mathematics book I’ve ever seen. But the generalized point about the Laffer curve is that on one side of the curve you’ve got crap and on the other side of the curve you’ve also got crap, and the optimum position is between the two sewage troughs. In dealing with focus, one side is distracted by everything (as many people are concerned about, inclusive of Carr, and presumably Postman, and anybody who looks at what colleges are coming to expect from high school juniors and seniors) while the other side is distracted by nothing regardless of how important or monumental it happens to be.

Tangent: To be fair to Carr et al, I do agree that we are faced with an ongoing firehose of irrelevant alerts, but it didn’t start with our phones. I can readily push the blame back to the rise of competitive 24/7 news channels, especially Fox News with their perpetual state of “News Alert.” If you compare what people learn from the Daily Show (1 day roll up) or, better yet, Last Week Tonight (7 day roll up) versus a Fox News Alert (no roll up at all) this should be awfully clear. But the predominant self-victimizing victims of this are the people affluent enough to have the slack to care. (If you want to read about the functional value of slack, Tom DeMarco has a nice little book with the obvious title of Slack.)

But where I’m really headed with this is into the territory of people who don’t have enough slack to afford distraction and a 90-minute lecture on the cognitive effects of poverty:

Now there’s also a New York Times article or a Guardian article, as well as the referenced book, but there are a few key points I’d like to bring out:

  • Our brain will subconsciously fixate on what it thinks we’re lacking, no matter how much we consciously assure ourselves that everything’s under control. This came out of the Minnesota Starvation Experiment. This results in focus capture. Conversely, if you have more than an optimal amount (think Laffer curve here) of a resource, you may have — as far as economists are concerned — insufficient focus on the resource, leading you to undervalue and possibly squander it. In the lecture, this was comparing discounts on computers and MP3 players.
  • Focus capture is a cognitive drain. In the lecture, this comes out with the word hunt but is a large point. The not-actually-copacetic-at-all office worker is drained by diplomatically smiling at the idiot stakeholder, I’m drained by avoiding poisons, poor people are drained by wondering how they can possibly afford to pay for “this” regardless of what “this” is. All of this drains the resources available for doing other things, like doing the best possible work that we’re paid to do, or properly enjoying a stellar meal, or correctly filling out ridiculously obtuse paperwork which may or may not be some behavioral economist’s IQ test. (In aggregate this extends to “the stable home life of the affluent grants them a biological advantage in maintaining focus on the manufactured tedium of school, helping to maintain classism indefinitely.”)
  • The cascade of short-term thinking “errors” caused by focus capture leads to future difficulty. This is an ongoing point in the lecture in terms of loans, especially those with interest rates that may be termed “predatory” by people who can afford to have no part of them. But that’s the basic economic example for poverty. My focus on being Not Poisoned has me reluctant to try new foods regardless of how amazingly great they may turn out to be. And the ironic opposite of poverty has brazen consumerism leaving affluent people wondering how they can maintain all that they’ve acquired, from the too-big house to the too-full closet and even (dare I say it) the too-many children — or, to put it another way, action now commits future resources.
  • Our public policy does a bad job of being fault-tolerant despite the well-known ability of people to make mistakes, especially on obtuse bureaucratic paperwork. This comes into play at the end of the lecture; the articles get to it faster. But my short evaluation of it is this: politicians are intentionally hard on poor people to prevent “cheating the system” and thus (appear to) protect the ostensible resources of the affluent, with the sadistic irony being that the people who have the resources to cheat the system are the ones who aren’t being distracted by the ongoing/perpetual hustle-and-grind of poverty.

But the point that the lecture didn’t get to, and that is easy to hypothesize, is this: Loss Aversion is not a bug in our reasoning. It’s a feature that prevents focus capture. And this would be because an abstract loss from the status quo drops the person from a secure position into resource poverty, necessitating a sub-optimal focusing behavior until the resource had been re-secured. If the behavior pattern is known to be sub-optimal in an “it caused people to walk into a trap” kind of way, then it should also be the kind of behavior pattern that natural selection has discouraged from showing up again and again. Regardless, the economists who chortle in self-satisfaction at the innumeracy of the common populace should be slapped with a fish and then ignored rather than “regarded as clever,” which has tended to be the biopower-augmenting reaction for some time now.

The point is that keeping your brain from suffering the effects of poverty is why loss aversion is a feature of (and not a bug in) our reasoning; promise fulfilled.

But where this really goes is to Fault Tolerance.

  1. Our anti-poverty programs tend to be badly designed. This is documented in the articles above and mentioned in the lecture, and any further coverage at a bullet-point level would be doing it a disservice. But do tip your waitress generously for taking care to not poison you, especially at the end of the day she’s been having.
  2. The disjointed nature of our educational curriculum, particularly for high schoolers, is stupefying, and it has been for generations now. Context switching between the intimately-related mathematics and physics (to pick on the most obvious example) is an utter waste of kids’ cognitive capacity that could be applied to both concurrently. The artificial isolation of subjects tries to push even our most stable students into brief lanes of tunnel vision while ignoring the amount of focus captured by everything else going on in the rest of their day and then criticizes their performance. (An obvious path to investigate would be collapsing the not-uncommon block schedule slightly: half a day for math and science, half a day for problem-solving/engineering and technology; alternate day mixes history and literature for one half and art and civics for the other. Reducing the loss due to switching is good for cognitive resources; blurring the testability of material is good for fault-tolerance.)
  3. But let’s get personal: if you’re not a politician or school reformer or even necessarily poor, you may still be wasting resources on something as basic as getting dressed. Really, why did modern-day Faustian genius Steve Jobs always wear a black turtleneck? So that he wouldn’t squander his genius on deciding what to wear. That’s it; that’s the reason. Timoni West has a super-great article on how to populate a wardrobe with clothes you don’t have to think about, and I support it immensely in the same way I support Tina Fey and am gobsmacked that any press would criticize an economist over the simplicity of their fashion choices. Regardless, simplifying your wardrobe will save your decision-making capacity for when you need it, but it will also make getting dressed fault-tolerant by ensuring that you don’t don a fashion faux pas while only half-awake: win-win!

While this topic is vast because it goes into the core of reconciling our current actions to our future outcomes, both with underestimated commitments and unintended consequences, I’m short on points that I want to spend time making at this juncture. You should go get dressed and I should go get Not Poisoned.

Moral Panic, Nostalgia, and Phoning It In

I’m going to try to keep this short because I’ve already spent too long watching a Public Forum debate in the wild. The topic was (over)simply “Smart Technology is Making Us Dumb.” And as far as I’m concerned, the affirmative team lost in multiple ways.

Now I might forgive them their squishy failure to define terms. In a public forum, they’re appealing to the biases of what people can see: smart technology is the phone that the kid is texting on, dumb is the kid texting on the phone walking out into traffic. The topic is loaded with implicit value judgments about smart and dumb as Dr. Bell (on the negative) correctly observes, with a definite pejorative feeling about what dumb is. But then both of the speakers on the affirmative reveal that they’re carrying these menacing phones of stupidity-inducing doom as well, so we really shouldn’t trust them: either they’re obviously dumb or they’re lying to us about the effects of the technology that they’re using. Similarly: did the Google engineers Nick cites arrive at a spiritual awakening about the folly of their career path, or were the neuroscientists shocked into changing their relationship to technology? Such ripe possibilities go unmentioned.

But Nick wants us to believe that we’re becoming dumb because of all the notifications our phones give us that compel us to respond to them. Only what Nick is describing isn’t smart technology — it’s stupid technology. If it lacks the common sense (as per the definition of “stupid”) to know that it shouldn’t interrupt now, then it’s a stupid technology. And yes, the part of Android 5.0 where you can no longer easily drop your phone into STFU mode made Android phones stupid. But the notifications are allegedly symptomatic of Information Overload, which is non-unique if not actively untrue; and even if it were true and unique to the loss of gatekeeping on bloggers like me (never mind the explosion of cable TV channels, or the explosion of books due to that nefarious printing press), it would merely indicate that people aren’t very good at being their own gatekeepers.

And this goes to the main point of Nick’s partner, Andrew Keen, who vitriolically objects to the noises that the proles are making with their smart technology. He apparently pines for the days when the mass of men lived lives of quiet desperation instead of subjecting him (due to his lack of filters) to their noisy vulgarity. But this is the affirmative’s false nostalgia, wishing for wise people thinking deep thoughts, and the final audience question cleverly crystallizes this (in short: people without smart phones don’t sit around discussing philosophy or astrophysics; they go out dancing… so what’s your definition of “making”?): the so-called smart technology is merely allowing more people to express their pre-existing vulgarity. Indeed, as Bruce Feiler reported in the New York Times,

A growing body of research indicates how deeply our brains are wired to seek social approval. A study out of Harvard in 2012 showed that humans devote up to 40 percent of our time to self-disclosure, and doing so is as pleasurable as having food or sex… Matthew D. Lieberman, a professor of psychology at U.C.L.A. and the author of “Social: Why Our Brains Are Wired to Connect,” told me that this need for positive social interaction is hardly new. “It’s been there in one form or another since before the dinosaurs 250 million years ago,” he said.

It isn’t making us smarter, but it’s not necessarily making us (in social aggregate) dumber, either, especially if we use the actual definition of “dumb,” which is “mute; unable to speak.” Bonus points if you noticed that “how deeply our brains are wired” directly clashes with Nick Carr’s predominant concern about our brains being melted down into grey goo.

The affirmative concludes with Plato’s Allegory of the Cave, presuming to tell us that we are the prisoners in the cave, shackled there by our mobile phones. Except that they were also carrying their own shiny new cave-shackles, ergo we should not defer to their assertions. And do bear in mind that Plato’s elitism vis-a-vis the cave dwellers reflected his elitism vis-a-vis the majority of his society that were women and slaves. Indeed, it was expected that the women and slaves would never leave their small station and menial concerns to engage with the larger world, and only the great men who had seen the world could protect the state and its small-concerned inhabitants from it. But what we’ve seen is that Plato’s intentions are not reflected in modern democracy with (near) universal suffrage; that even if they were, most people have maintained a small sphere of actionable concern, such that technological advances across thousands of years have changed very little about the common human condition; and that, on the other side of the social spectrum, being a governing elite in no way demonstrates the big-picture mentality the affirmative might expect of it, as demonstrated by, for example, their aggressively pipsqueak denial of climate change.

So for all the moral panic and faux-nostalgia the smartphone-indulging affirmative can conjure in the vein of “In the Old World, we were (Philosopher) Kings!,” the majority of their assertions are ill-based, misguided, or simply non-unique. To sum up the counter-position with the words of another Philosopher-King,

What has been will be again,
what has been done will be done again;
there is nothing new under the sun…
What is crooked cannot be straightened;
what is lacking cannot be counted…
Do not say, “Why were the old days better than these?”
For it is not wise to ask such questions.
(Ecclesiastes 1:9, 1:15 & 7:10)

On Getting Schooled

“In challenging times such as these, there is really no use in pretending all of our children represent the best we have to offer, because in every conceivable sense, that just isn’t the case,” said Education Secretary Arne Duncan, adding that operating under this delusion had cost the nation untold trillions of dollars. (The Onion, March 7, 2012)

I should have posted the negative material against “Resolved: In the United States, students should be guaranteed two years of free tuition to a community or technical college” the other day, but here it is now, really years overdue.

Please be reminded that I’m not a public schoolteacher; I just hang out with teachers an awful lot and am constantly reminded of how bad I would be in their career. While Camus admired actors for portraying the Absurd on stage, I admire public schoolteachers for living it as a career. A side-effect of this, however, is that my writing on the subject tends towards the dismal and veers away from the brief existential sparks of joy that illuminate their systemic gloom. So, dear anonymous students, do your teachers a colossal favor and take them seriously. They’re barely compensated enough to oversee your barely-competent behavior, not compensated enough to correct your almost-cleverness or abject incompetence, and never compensated enough to deal with your bullshit.

Now that I’ve got that out of the way, let’s explain why tuition-free community college is a breathtakingly dumb idea.

The idea that students should be guaranteed 2 years of junior college takes everything you don’t like about public schooling and makes it last even longer.

Contention 1: School insulates, college exacerbates

It wasn’t until the post-World War II economic boom — my parents’ generation, really — that the United States could afford to leave most of its youth in school for 12-or-more years because, as the educators at Glogster note, “more people had the means to stay in school, and the states had increasingly more money to pay for it.”

But there’s a grievous down-side to this development, as venture capitalist Paul Graham explains in his essay “Why Nerds are Unpopular”:

School is a strange, artificial thing, half sterile and half feral. [ … And … ] If life seems awful to kids, it’s neither because hormones are turning you all into monsters (as your parents believe), nor because life actually is awful (as you believe). It’s because the adults, who no longer have any economic use for you, have abandoned you to spend years cooped up together with nothing real to do. Any society of that type is awful to live in.

This situation has only been exacerbated by high schools dropping expensive, real-world, trade-skill-oriented metal and wood shop classes (as documented by Dr. Crawford in his book Shop Class as Soulcraft) to double down on the collegiate focus instead, as evidenced by the distressingly deterministic mantra that “College begins with Kindergarten,” a mantra fulfilled by a resolution that ostensibly aims to make 2 years of community college an extension of 13 years of public education.

Noting that High School only became common when we didn’t have as much economic demand for young workers, do we think that this push for Community College is to skill up workers, or to tamp down on the size of the labor force? The Bureau of Labor Statistics reports that last year, only “51.9 percent of young people [16-24] were employed in July.” (“The month of July typically is the summertime peak in youth employment”) That’s barely more than half during a solid economic recovery… so we have to keep the other half occupied in the traditional way: doing homework.

So the push for free or subsidized continued education

  1. is caused by an over-supply of labor in an increasingly automated economy,
  2. causes the superfluity of education to fetishize itself as its own objective, and
  3. functionally stunts the ability of adults to take their socioeconomic position in civilization.

Contention 2: Publicizing College Ignores & Exacerbates Achievement Gap

Contrary to how much the United States loves its titans of industry, we don’t do much to support our most talented and gifted students, as the Davidson Institute reports in their book Genius Denied — instead we worry about an Achievement Gap that develops during students’ time in school, spending over $11B (2006) on special education for people with learning disabilities to mitigate that gap while special education for students with learning super-abilities goes generally unfunded.

This isn’t to say that abnormally advanced students are universally ignored. Indeed, even in Oregon, at least the Beaverton and Albany school districts help their advanced high schoolers cross-apply community college credit that’s being counted towards an associate’s degree to their high school requirements as well — and that tuition is paid by the school district. This is in the status quo; it doesn’t even look at commonly-available AP credits that typically count for college credit, and it certainly stops short of the online courses available for free to anybody with adequate initiative through Coursera.

So when the affirmative is arguing for 2 years of free community college that’s different from the status quo, they’re really short-changing communal support for our top-performing students even more while simultaneously ignoring the fact that the achievement gap — entrenched in the curricular difference between advanced placement and remedial courses in high school — would essentially be given a longer run-time in public education to develop.

This has three effects:

  1. the rarity of public support exacerbates the tendency of our top students to become elitist jerks buying into the mythology of the self-made person with no sense of noblesse oblige — this is status quo, but worsened with the aff — because
  2. the aff substantially increases the visibly disproportionate amount of public resources allocated to mediocre students in the hope that they’ll somehow become economically viable,
  3. despite not actually closing the achievement gap and instead passing it on to community colleges to try to handle.

Overall, there is a plethora of policy options for improving the 13-year system we’ve got; keeping our kids mired in it for another 2 years isn’t one of them.

And that’s the basic case that runs about 3:54 at a solid clip.

But there’s a lot more going on behind the scenes with education that you should be dialed in on to speak competently to why the resolution is a bad sort of idea.

First, here’s the specific evidence on the Achievement Gap:

Decades of research confirm that summer learning loss is real. According to a report released last month by the RAND Corporation, the average summer learning loss in math and reading for American students amounts to one month per year. More troubling is that it disproportionately affects low-income students: they lose two months of reading skills, while their higher-income peers — whose parents can send them to enriching camps, take them on educational vacations and surround them with books during the summer — make slight gains. A study from Johns Hopkins University of students in Baltimore found that about two-thirds of the achievement gap between lower- and higher-income ninth graders could be explained by summer learning loss during the elementary school years.

The obvious way to try to avoid summer learning loss is to break up summer vacation into smaller chunks and attach them to Winter and Spring breaks and the like. It’s not the amount of time off but the continuity of it that is implicated here. We could get over half of the benefit of the affirmative position with no new costs and no new time spent if we just optimized our existing system for retention — though this may require community colleges to re-tool their course offerings to stay relevant in the future.
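To see how a couple of months per summer can explain two-thirds of a ninth-grade gap, here’s a toy model in Python. The two-month low-income loss and the “slight gains” for higher-income students come from the RAND figures quoted above; the +0.5-month gain, the six elementary summers, and the assumption that the differences simply accumulate are mine.

    # Toy model: cumulative reading gap from summer learning loss alone.
    # Per the RAND figures quoted above, low-income students lose ~2 months
    # of reading skill per summer while higher-income peers make "slight
    # gains" (assumed here to be +0.5 months; my assumption, not RAND's).
    SUMMERS = 6                # assumption: the summers after grades K-5
    LOW_INCOME_DELTA = -2.0    # months of reading skill per summer
    HIGH_INCOME_DELTA = +0.5   # months per summer (assumed "slight gain")

    gap_months = 0.0
    for summer in range(1, SUMMERS + 1):
        gap_months += HIGH_INCOME_DELTA - LOW_INCOME_DELTA
        print(f"after summer {summer}: gap = {gap_months:.1f} months")

    # Six summers in, the toy gap is 15 months of reading instruction --
    # about a year and a half -- before anyone sets foot in high school.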

The point behind the Achievement Gap is simple: the resolution mis-diagnoses the harms entrenched in the current education system and therefore declares “Moar Education!” to be the affirmative solution.

First-and-a-half, the Achievement Gap is undermining the value of time spent in college by leaving students unprepared for college-level work:

  1. Here’s the general trend:

    Every year in the United States, nearly 60% of first-year college students discover that, despite being fully eligible to attend college, they are not ready for postsecondary studies. After enrolling, these students learn that they must take remedial courses in English or mathematics, which do not earn college credits. This gap between college eligibility and college readiness has attracted much attention in the last decade, yet it persists unabated. While access to college remains a major challenge, states have been much more successful in getting students into college than in providing them with the knowledge and skills needed to complete certificates or degrees.

  2. And specific to Community College:

    College students are increasingly spending federal financial aid and taking on debt for high school-level courses that don’t count toward a degree, despite mounting evidence the courses are ineffective and may contribute to higher dropout rates. … The trends reflect a sharp rise over the past decade in enrollment at community colleges, which disproportionately serve low-income, minority and older populations. About 40% of students entering community colleges enroll in at least one remedial course, according to the Education Department; only about 1 in 4 of them will earn a degree or certificate.

And this not only highlights that students are graduating from high school unprepared for college (which began with kindergarten!), but also that the time-binding of the resolution to “2 years” doesn’t really attach to any consistent degree of merit: people who were over-achieving in high school won’t need the full 2 years, while the under-achievers taking remedial courses to catch up will find it nigh-impossible to finish their 2-year degree in a mere 2 years.

Next, let’s handle the common claim that education is a fountain of never-ending economic goodness, to which we’ve got multiple responses:

  1. It is estimated that within 18 years, 45% of jobs will be vulnerable to automation. That’s an average of 2.5% of jobs per year, year after grinding year (see the back-of-the-envelope sketch after this list). Credentialed educational institutions will not be able to re-tool fast enough to keep their course catalogs economically relevant as they’ll be competing with the growing sector of the economy for instructors. (Pin that thought, we’ll come back to it.)
  2. But even now, education is proving none-too-valuable in terms of raw ROI: “The total outstanding student loan balance is $1.08 trillion, and a whopping 11.5% of it is 90+ days delinquent or in default. That’s the highest delinquency rate among all forms of debt and the only one that’s been on the rise consistently since 2003.” If education were a reliable economic benefit, then we should see student loan debt shrinking, and it should be the least prone to default of any kind of debt, because of all the high-paying jobs people are able to get when they’re done being students. But we don’t.
  3. And we don’t because wages for college-educated people have been falling since the turn of the century, clearly showing that economics is about power more than education.

    If inequality was really about education, we would expect to see income growth sorted by education level: wages declining for workers who never made it past high school but rising for those who finished college. But as the graph [of median income for men with bachelor’s degrees, declining 8% since the turn of the century] above shows, that’s not the pattern. Rather, wages have stagnated for college graduates, too. What separates the members of the 99th percentile from their friends in the 91st percentile isn’t a college education. … But there’s a reason Washington prefers talking about education than power. If the answer to inequality is simply more education, than that’s relatively easy: most everyone agrees, conceptually at least, that a better education system would be better. But if the answer to inequality is redistributing economic power, well, that’s more controversial — particularly among those who currently hold the power.

  4. This should surprise us not-at-all when we look at the sorry state of college professors in the modern Community College (or even your public schoolteachers, pretty much all of whom have more formal education than I do).

    Adjunct professors scraping by on assistance from family, charities, and safety net programs like Medicaid and food stamps continue to push for fair compensation and work conditions. Higher education institutions across Colorado employ part-time faculty, but adjuncts in community colleges say their situation is particularly dire. Adjuncts currently represent 4,060 employees, or 78 percent of instructors at the 13 colleges in the Colorado Community College System, and are paid per class, largely without benefits, sick leave or job security.

And this not only shows that money doesn’t necessarily follow education, but also that educational institutions — especially those reliant on scarce public funding — will be hard-pressed to recruit instructors from among people who are in demand with, and therefore well-compensated by, the remaining portion of the economy. (Even if they could pull this off, the sheer tumult in the economy suggests that people who use educational institutions to acquire new skill-sets may well return time and again, spending well over the 2 years afforded by the resolution over the course of their lifetime.)
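Since a few of the numbers in that list are doing real work, here’s the back-of-the-envelope arithmetic from points 1 and 2 laid out in a quick Python sketch; the inputs are from the sources quoted above, and the compounding comparison is my own addition rather than the cited authors’.

    # Back-of-the-envelope checks on the figures cited in points 1 and 2.

    # Point 1: "45% of jobs vulnerable to automation within 18 years."
    vulnerable, years = 0.45, 18
    print(f"simple average: {vulnerable / years:.1%} of jobs per year")  # 2.5%

    # Treated as a constant compounding rate the annual bite is slightly
    # higher, since each year's losses come out of a shrinking remainder:
    rate = 1 - (1 - vulnerable) ** (1 / years)
    print(f"constant-rate equivalent: {rate:.2%} per year")  # ~3.27%

    # Point 2: "$1.08 trillion outstanding, 11.5% of it 90+ days
    # delinquent or in default."
    delinquent_billions = 1.08e12 * 0.115 / 1e9
    print(f"delinquent balance: ~${delinquent_billions:.0f} billion")  # ~$124B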

But while we’re looking forward to the future of work, let’s also look forward to the future of our public schoolteachers, because it’s pretty grim:

This is the canary in the coal mine. Several big states have seen alarming drops in enrollment at teacher training programs. The numbers are grim among some of the nation’s largest producers of new teachers: In California, enrollment is down 53 percent over the past five years. It’s down sharply in New York and Texas as well. In North Carolina, enrollment is down nearly 20 percent in three years. … There are, of course, alternative teacher certification programs across the U.S. including Teach for America. But TFA, too, has seen large drops in enrollment over the past two years. … [Bill] McDiarmid [dean of the University of North Carolina School of Education] points to the strengthening U.S. economy and the erosion of teaching’s image as a stable career. There’s a growing sense, he says, that K-12 teachers simply have less control over their professional lives in an increasingly bitter, politicized environment.

So what we’re seeing is that, bluntly, increasingly shitty working conditions, stagnant pay, and being used as political props (either in the role of martyr or covetous villain) have made becoming a front-line teacher a bad idea. What this means is that, for lack of fresh blood entering the system as old teachers retire or are ground down into attrition statistics, our education system will disintegrate at an accelerating rate in its K-12 base unless/until we address the career satisfaction of our professional educators — and this is looking forward from the status quo achievement gap and all that it entails. When faced with these problems, fussing about free community college is like ensuring that there are enough parachutes on the Titanic.

If you want more on this, go check out the negative case from the last time the PF framers suggested an educational reform.