The Lexwerks

Substandardized Testing

The next Public Forum topic is “On balance, standardized testing is beneficial to K-12 education in the United States” and this one is kind of problematic because so much evidence goes all one way. For example, here’s John Oliver with his total disregard for your pathetic little 4-minute time limit.

But I’m going to go a different way with my neg example and see if I can get it in 4 minutes.

To scope the debate, we’re going to be looking at the 50 million students in public schools, who account for roughly 90% of American K-12 students.

There are a lot of standardized tests, starting with the 23-or-more tests mandated by multiple pieces of federal legislation, as well as SAT and ACT, and international calibration assessments (PISA) when they come up, plus whatever tests are mandated by any particular state for all of its students — but mostly we’ll be looking at the effects of the federally mandated tests as the resolution is national in scope.

To understand “Beneficial” — and to weigh this round — we look to the US Department of Education’s intention for testing. On October 24th, they explained that “While some tests are for accountability purposes only, the vast majority of assessments should be tools in a broader strategy to improve teaching and learning.” So, does standardized testing improve teaching and learning?

Using data from 2012, The Nation’s Report Card shows that since 2008, National Assessment of Educational Progress scores haven’t substantially improved for either 9-year-olds or 17-year-olds in math and reading. More distressingly, across all 17-year-old students, scores haven’t changed substantially since 1973. So while The Atlantic summarized the results of PISA testing in December 2013 by headlining that compared to other countries, our schools are “Expensive, Unequal, Bad at Math,” the more disturbing trend is that we’re not seeing any benefits from the tests that have been added since before our parents were assessed.

What we’re seeing instead is that the Department of Education is tying federal funding to test scores, resulting in a culling of students who test badly: rich kids get drugged, poor kids get forced out.

In A Disease Called Childhood, Marilyn Wedge details how the standardized test regime of NCLB drove an increase in ADHD diagnoses.

The dominant paradigm of education in this country has actually led to rising rates of ADHD. In the four years after George W. Bush signed the No Child Left Behind Act into law in 2002, the nationwide rate of ADHD diagnoses increased 22 percent. Why should this be so? The answer lies in the fact that the law tied financial rewards for schools to standardized test performance. Having more children diagnosed with ADHD was a boon to school districts that were lagging behind in test scores. First, scores for children with ADHD could be omitted from the school’s reported test scores. Second, children with the diagnosis got special accommodation, including extra time for taking standardized tests. Extra time, plus stimulant medication, which is a short-term performance enhancer, could very well raise kids’ test scores, in which case the school could decide to include them with the rest of its test scores. As a result, failing schools soon experienced a windfall of ADHD diagnoses.

But in areas where kids aren’t affluent and insured enough to be medicated, the threat to funding causes the barely-funded schools to push under-performing students out, creating what the ACLU refers to as “the school-to-prison pipeline.”

For most students, the pipeline begins with inadequate resources in public schools. Overcrowded classrooms, a lack of qualified teachers, and insufficient funding for “extras” such as counselors, special education services, and even textbooks, lock students into second-rate educational environments. This failure to meet educational needs increases disengagement and dropouts, increasing the risk of later court involvement. Even worse, schools may actually encourage dropouts in response to pressures from test-based accountability regimes such as the No Child Left Behind Act, which create incentives to push out low-performing students to boost overall test scores. … Lacking resources, facing incentives to push out low-performing students, and responding to a handful of highly-publicized school shootings, schools have embraced zero-tolerance policies that automatically impose severe punishment regardless of circumstances. Under these policies, students have been expelled for bringing nail clippers or scissors to school. Rates of suspension have increased dramatically in recent years — from 1.7 million in 1974 to 3.1 million in 2000 — and have been most dramatic for children of color.

Furthermore, this is a contributing factor to a brewing teacher availability problem: In March, NPR reported that

Several big states have seen alarming drops in enrollment at teacher training programs. The numbers are grim among some of the nation’s largest producers of new teachers: In California, enrollment is down 53 percent over the past five years. It’s down sharply in New York and Texas as well. [And] The list of potential headaches for new teachers is long, starting with the ongoing, ideological fisticuffs over the Common Core State Standards, high-stakes testing and efforts to link test results to teacher evaluations. Throw in the erosion of tenure protections and a variety of recession-induced budget cuts, and you’ve got the makings of a crisis.

And in the LA Times in September, Harold Kwalwasser wrote that “our exam system is deeply flawed, especially when it comes to teacher evaluation” and that “seemingly uncontrollable variability produces great teacher anxiety that is not worth the damage.”

So standardized testing is so bad for teachers that it’s making it hard to recruit new ones, so unhelpful for students that they’ve not improved in decades, and flat-out dangerous when the Department of Education ties financial incentives to it. The way the United States has deployed standardized testing in K-12 education in the status quo is a disaster.

Policy Debate Cross-Examination Primer

So it’s that magical time of the year when we need to talk about what we should be seeing in Cross-Examination Policy Debate rounds, because we’re not seeing it. What we are seeing instead is: failing to get the opponent to contradict what they just read, openly acknowledging that no attention was paid to the last speech, and getting down in the weeds about the sex of an author on cards the team probably didn’t even cut themselves, a claim that nobody has any in-round way to confirm or deny.

This is ridiculous. None of it has anything to do with policy debate. Policy debate is the big How Are We Going To Do This? question. And the basic structure is simply a matter of “X is a problem that is not currently being addressed because Y, but if we do Z — which is relevant to the topic — then we can solve for X.” And the affirmative team can usually present all that in under a minute. Which means that they’re going to spend 7 minutes of the 1AC screwing themselves over, and the first cross-examination period should be the 2N getting the 1A to absurdly over-commit to everything that was just said, and exposing all the links to the arguments the 1N is going to read.
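
To make that template concrete, here’s a made-up sketch on this year’s surveillance topic (the specifics are invented for illustration, not cut from any real case): X = “bulk collection of phone metadata chills free association,” Y = “the status quo interpretation of existing law permits it and Congress hasn’t acted,” Z = “plan: the United States Federal Government ends bulk metadata collection.” Problem, inherent barrier, topical plan, solvency: that’s the entire load-bearing skeleton, and everything else in the 1AC is decoration on it.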

So how can you do that? Well, here are some boilerplate questions — covering all the stock issues of a minimal case — to get you, the 2N, started:

Harms: “You seem to be genuinely concerned about [Aff harms]. That’s the problem you’re focusing on today, right?”

If you don’t know what their specific harms are, guess. If you get it wrong, they’ll correct you. If they just give you a terse “Nope,” then prompt them “okay, what’s the big problem you’re focusing on?” The point here — and with all of these questions — is to get them to commit to things at a pace where the judge can’t possibly mishear their claim. The optional follow-up question would be “Why do you think [Aff harms] is really a problem?” — but generally this gives them too much space to repeat their case. Don’t use it unless you think they’re going after a ridiculously small problem.

Solvency: “And you’re promising our judge you can fix that with your plan?”

A dumb team will say “yes” because they know that solvency matters, and in that very moment they will over-commit their case so you can run solvency deficit arguments against them, arguments which boil down, in the 2NR, to

“They promised you in the first cross-examination that they’d actually solve this problem. They haven’t. In the best case, they’re woefully unaware of how hard of a problem they’re up against; in the worst case, they lied to you. Either way, they can’t keep their promise and haven’t earned your vote.”

A smart team, like you, will hedge your solvency to what your evidence can support: “A problem as big as [our harms] isn’t going to be solved with 10 seconds of good idea, but our plan will certainly mitigate the negative impacts we’re feeling from [our harms] as our evidence shows.”

A cagey team will deny real-world solvency because this is just play-debate and not real policy. Reword the question to “But if we took your plan out into the real world, would it solve [Aff harms]?” If they dither on that question, then rail on them for wasting the judge’s time. If they want to blather vaguely in the comfort of a safely ineffectual zone, there are probably Public Forum rounds going on all over the place.

Topicality: “And you believe your plan to [summary of plan text] qualifies as the United States Federal Government substantially curtailing its domestic surveillance, why?”

You can skip this question if their plan is obviously topical, but it probably isn’t if you’ve got a good set of definitions. That said, judges don’t like voting on topicality because it’s how the judge says “My time was wasted from the 1AC on…” and that’s not an easy concession for a judge to make.

The variation on this question is “And what makes you think your plan to [summarize plan] will do that?” — following up on solvency. This variation is poking at “Effects Topicality” (FXT), where the plan might work but relies on an outside actor (not compelled by fiat) in order to work, so there’s hand-waving and wishful thinking involved. For example, last year the FXT plans usually involved tax credits, like: “Our plan is offshore-wind power tax credits. By the irresistible allure of tax credits, businesses will invest. We get solvency.” The “irresistible allure of tax credits” is wishful thinking, just like “cutting taxes on rich people will cause them to spend more” (no, it allows them to save more, which is what they’re good at — that’s how they became rich). Effects topicality starts as a solvency problem — their plan isn’t guaranteed to deliver what they promise — but because the link to the topic might not actually happen, it’s also a topicality infringement.

Either way, the argument on topicality is

“Our opponents had one job when they started this debate, and that was to affirm the resolution. They have not done that. You may think they’re pretty nice people (we do), you may think they’ve got good ideas (we don’t, we’ll get to that in a moment), but what you can’t honestly do is write on your ballot that they represented an affirmative position. Nobody in this room has presented an affirmative position, so you have no ground to vote in the affirmative this round.”

This is a T argument that involves the judge. Most of them don’t. A typical T shell will say “vote for fairness & education,” which doesn’t happen because the judge is supposed to vote for the winner and not help the utterly crushed losing team out, and education would be better served by learning more new things, topic be damned. (And yes, I’ve seen an Aff team run a horribly off-topic case that the Neg team ran a Fairness & Education generic on, but then the Aff team turned the voting issues and the Neg team was sad, and confused, and lost because they no longer knew why topicality mattered.)

Special Note For The Aff: If your opponent runs a counterplan, or (stupidly) a PIC, you have a counter-T: they’re supposed to be representing a negative position. So when you’re Affirmative and the Negative says “counterplan: curtail surveillance tomorrow,” you can pop right back and say

“Our opponents agree with us in principle, they agree that the resolution should be affirmed, and they demonstrate it by presenting their counterplan that curtails domestic surveillance. We may disagree on some specifics, but everybody in this room is committed to the affirmative position of curtailing domestic surveillance. Nobody’s really against it; there’s no real negative position in this round. So regardless of which team you think is doing the better debating — and it’s us because we at least know what side we’re on — you’re going to be voting for an affirmative position.”

Note that both of these arguments get into the dangerous ground of telling the judge what to do. Before you get to that dangerous ground, you need to make sure the judge is there too. That’s what cross-examination is all about: getting the other team to make their position, with its glaringly obvious flaws, inescapably clear to the judge so that all you have to do is explain what it means for the judge in a speech.

Inherency: “And if your plan is so great and so simple, why hasn’t it been done already?”

And they probably don’t know, or will refuse to say, or — at best — will present a very one-sided opinion (“It’s the fault of Those Assholes Over There.”) and claim that it’s your job to oppose them. This is bullshit. Their evidence is cherry-picked to build their case. You know how we got into the last war in Iraq? The executive branch cherry-picked its evidence from a wide array of reports it had at its disposal; it made its case and hid everything else. New medications? The successful trials get published, the unsuccessful trials don’t — and a lot of unsafe or ineffective drugs get FDA-approved and sent to market. Presidential candidates promise everything they’re going to try to deliver from a seat of power, but never talk about the inherent barrier of a wantonly dysfunctional legislature until it’s time to blame Those Assholes Over There. Prosecutors will wrongly seek convictions of innocent people by suppressing exonerating evidence — and that’s horrifying. The (cruel) goal of this question is to get them to admit that either their plan isn’t that great, which is why nobody’s bothered with it, or that their plan is utterly implausible in the real world (best case), or that they’re disingenuously misrepresenting their research by suppressing contradictory evidence, with an outside chance that their case is already derived from status quo policy so they’re not topical (as they’re not making a significant/substantial change).

In an ideal world, we’d have a “Cherry-Picked Evidence” Rhetoric K for people that shrug off due diligence with inherency, but I haven’t bothered to write one yet.

Hopefully these basic questions will help you get started exposing gaps and drawing out absurdities from affirmative cases for your partner to exploit in the 1NC. They aren’t the only questions you should be asking, obviously, but they should give you starting points to get your opponents to over-commit to unbalanced positions and over-inflate claims that are hollow.

One final stylistic note about the unbalanced and the hollow: many people will try to bludgeon their opponents with questions and maim them in cross-examination. This is crass and vulgar. The style you should be going for is to pierce them where they are so hollow that they don’t even feel the attack, to tip them over when they’re so unbalanced that it doesn’t look like you even touched them.

Good luck, Great skill!

Jury Nullification Again

For Aaron.

The next topic for high school Lincoln-Douglas debaters is “In the United States criminal justice system, jury nullification ought to be used in the face of perceived injustice.” This is an important topic because it covers a little-known feature of our criminal justice system, which is itself little-known to high school debaters. There is, however, a massive difference between what the topic wants kids to learn about and how the topic is actually written, as is becoming woefully common.

The issue that the topic wants to tackle is whether it’s okay for a jury to tell a prosecutor to “go fuck yourself; while the defendant did break a law, it was a stupid law and thus there is no guilt for us to find.” That’s jury nullification in a nutshell: the refusal to find guilt in a lawbreaker despite it being overwhelmingly obvious that the law was broken. Juries are allowed to do this, by design, a point which rarely ever gets mentioned to them. And this is super-relevant when we’ve got what one might call “civil disobedience” cases like Bree Newsome and James Tyson, who will soon be on trial for taking down the flag of a group known for militant insurrection against the United States Federal Government and human rights abuses, abuses that they frequently twisted a common religion to justify — which makes that whole Confederacy thing sound kind of like ISIS, doesn’t it? So while some people might say “civil disobedience,” I’d call the well-reasoned, planned, and staged flag-removal a civil service that helped raise the consciousness of the nation.

Now you might be thinking “they obviously broke the law, so they have to be put on trial,” and you’d be wrong. Opening the first chapter of Suspicion Nation (page 35), Lisa Bloom explains:

Prosecutors are the most powerful players in our criminal justice system. Their day-to-day decisions can literally determine which Americans will lose their liberty, even their lives. In virtually every criminal case in America, prosecutors decide whether to file charges, and if they do, which charges to file — minor crimes (misdemeanors) or major ones (felonies)? Once a criminal defendant is charged, prosecutors choose whether to offer a plea bargain, and if so, what deal to offer (community service? A fine? One year behind bars? Twenty?) or whether to roll the dice and go to trial. Though they are public officials in our democracy, paid with our tax dollars, all of these decisions are made behind closed doors, without transparency or accountability, almost never subject to public review.

Or, to put it another way, the prosecutor who will be representing the state of South Carolina at trial against Bree Newsome and James Tyson has apparently chosen to be a total dick. And this isn’t as uncommon as we’d like to think: Color of Change asserts that “This follows a growing trend of prosecutors from Oakland to Baltimore and across the country overcharging people who take non-violent direct action in defense of Black lives.” And jury nullification is one of the checks on injustice corrupting our criminal justice system — the check held by the ordinary citizens who comprise the jury, as opposed to the prosecutor, judge, and governor, who are all part of the state government.

So if we read the resolution as “jury nullification ought to be used when the prosecutor is a total dick,” I might be leaning affirmative on that.

But the resolution says something else entirely, and we should break it down:

    1. The first problem with “perceived injustice” is that criminal cases are inherently about injustice and trying to restore justice: the criminal behaved unjustly against a citizen and now must be punished to mitigate the resulting injustice. That’s how retribution theoretically works, and the jury ought to find the lawbreaker guilty rather than nullifying the prosecutor’s case against the criminal. Failure to specify the source of the injustice means that the resolution ineptly collapses on itself. (Who even writes these topics?)
    2. The second problem with “perceived injustice” is that you’re relying on the jury (as the only group with access to jury nullification) to perceive the injustice, and juries aren’t reliable in that way. A bunch of racists might consider it unjust that KKK members are facing trial for murdering civil rights workers, for example. Or a bunch of dim-witted people might be confused by prosecutorial incompetence. And between those variations of a “not guilty” verdict, we on the outside of the courtroom have a hard time telling what the verdict really means. So while it’s easy for us to say that Newsome and Tyson are facing unjust prosecution, they’re actually a very specific case from which we should be hesitant to draw general conclusions about what people who may be perceiving different things ought to be doing. (Alternately, if jury nullification is all about the freedom of conscience for jurors, then it’s wrong for us to say what they ought to be perceiving and doing when we’re not jurors.)
    3. But that raises the issue that we’re outsiders. I’ve not actually ever served on a jury, so what am I supposed to do about the injustices I perceive? What is the judge (who can, for example, dismiss the case) supposed to do in the face of injustice? Or the governor or president, each of whom can grant clemency according to their jurisdictions? What recourse is given to the courtroom journalists, or even the defense attorney? See, the very strange thing about the wording of the resolution is that it specifies an action — jury nullification — which implies an actor — the jury — without limiting the scope to the jurors. Which is another way of saying “If you’re not on the jury then you ought to STFU about injustice” because the one action that ought to be used in the face of perceived injustice isn’t available to non-jurors. (No, really, who the hell is writing these topics?)
    4. But the big issue for me is that even if I were on a jury, I wouldn’t be able to see the full extent of injustice in our criminal justice system. John Oliver has multiple Last Week Tonight videos describing the flaws in our criminal justice system, all of which stack the deck so far in favor of the prosecutor that we’ve managed to incarcerate more people than oppressive communist China has, despite having only a third of its population. For example, we might have multiple laws that cover one offense, and each of those laws may have a mandatory minimum sentence — the prosecutor can then threaten to prosecute using all of the laws (even though there was only one offensive action) and sum up all of the mandatory minimums to arrive at an absurdly high prison sentence as a “big stick” against which they can offer a seemingly reasonable “little stick” of a plea-bargain if the defendant simply accepts guilt (and all associated future stigmas of guilt) and forgoes a jury trial; the arithmetic sketched just after this list shows how lopsided that leverage gets. And almost everybody, especially those who can’t afford capable lawyers, ends up taking plea bargains when they’re offered: in 2012, the New York Times reported that “97 percent of federal cases and 94 percent of state cases end in plea bargains, with defendants pleading guilty in exchange for a lesser sentence.” The resolution offers no recourse for any injustices that happen in that section of the population, and in the rare cases where a jury is called, and rarer cases where they perceive injustice, and rarer still where they know they ought to use jury nullification, the actual outcome of the Not Guilty verdict will be the state prosecutors asking the legislature for more laws they can use as leverage to force plea bargains. That’s what the big picture looks like for this little resolution: an exacerbation of injustice. What the affirmative is really valuing here is Spitting Into the Wind, not “justice.”
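
To make that leverage concrete, here’s a hypothetical worked example (the charges and numbers are invented for illustration, not drawn from any actual case): suppose one act of computer intrusion plausibly violates three overlapping statutes carrying mandatory minimums of 5, 7, and 10 years. The prosecutor’s “big stick” is the sum: 5 + 7 + 10 = 22 years if convicted on all counts at trial. The “little stick” is a plea bargain of, say, 18 months. A defendant who believes they have an 80% chance of full acquittal is still weighing an expected 0.2 × 22 = 4.4 years at trial against a certain 1.5 years, so the “rational” move is to plead guilty without ever seeing a jury; that’s how you get to the 97 percent figure above.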

To counterpoint the resolution, the American Bar Association says that

(b) The prosecutor is an administrator of justice, an advocate, and an officer of the court; the prosecutor must exercise sound discretion in the performance of his or her functions.
(c) The duty of the prosecutor is to seek justice, not merely to convict.

And so, when a prosecutor is relying on intimidation, bullying, and being a total dick to score convictions — that is, in the face of prosecutorial injustice — the response ought to be firing and disbarring the prosecutor. While jury nullification is, by design, an option available to juries, our society ought not be relying on it to fight injustice.

Read previous articles about: Aaron Swartz, too many laws, and jury nullification.

Addendum: Here’s Bryan Stevenson talking about a whole lot of injustices that jury nullification cannot address.

Interlude for Vocational Guidance

All societies are evil, sorrowful, inequitable; and so they will always be. So if you really want to help this world, what you will have to teach is how to live in it. —Joseph Campbell

So one of my former students, now in college and heading towards having A Vocation, asked if I like my work. This seems like a reasonable question from a post-millennial wondering about an old capitalist sell-out like me. But before I actually get to my answer on it, I’m going to bring in a Monty Python sketch to address the basic concerns:

That aside, the short answer is that yes, I necessarily like my work. I am responsible to myself (and my cat). That means that the suffering that many other people have, and I used to have, where they dutifully sacrifice their happiness to ensure a better life for their family? That doesn’t apply to me (anymore). And since I’ve rejected the imposition of duty, that means that whatever it is that I’m doing is necessarily something I want to do — whether that’s being a self-satisfied sell-out or shrugging off slow service in a restaurant. So because I’m doing it instead of doing something else, I like it well enough.

So what do I do? I’m the “Technical Program Manager” — that’s Lead Unicorn — for a couple of Intel’s major web sites and the broader data platform that they front. I’m at the (virtualized) table when we talk about what the apps need to do, and how they should look, and how the underlying hardware needs to be configured. My signatures and fingerprints are on the commits of CSS, of JS, of the ASP.NET Razor views, of the C#, of SQL scripts, of the build scripts, on the recovery documentation, and even on some of the code pushes to production.

The joy of the work is the feeling of wielding power. It’s the joy of seeing the graphs of millions of people using my work per month. It’s the joy of connecting a translation engine to a database and ballooning thousands of records into a dozen languages. It’s the joy of writing a little script that does hundreds of modifications on behalf of a suddenly-quite-grateful human. It’s the joy of purging 80% of an application’s source code from existence because it was shoddily done by some unprofessional scrub — which doesn’t sound great, but it also becomes an anecdote to share with the professional peer group, a mark of honor and designation of place within the subculture.

It is work, though, and I’d be remiss if I didn’t mention a few other niceties that I’ve worked hard to get. I have a cube by a window, which is nice, but also managers that don’t quibble about how much I work from home, which is nicer. I have a very comfortable paycheck that I’ve used to do the things that most people let languish on a bucket list — buy a house, travel the world, write a book, start a non-profit — with the point being: don’t knock disposable income until you’ve got some. And I’ve got the tacit and financial support of Intel in coaching high school debate teams, which I explicitly insisted upon when I took my current position because that’s what I’m doing instead of having kids of my own.

If you’ve read Daniel Pink’s Drive, you might get a sense of Autonomy and Mastery, if not exactly Purpose, here, and I’m feeling pretty lucky with two out of three.

See, prior to my current position (and dangerously adjacent to my current position for the past several months) I was doing almost the exact same job within the IT department. But the IT department didn’t want me to know how much my work was being used. The IT department wanted me to regularly be in my cube, which was near the middle of the building. The IT department didn’t want me to work with live data, or deploy patches to production, or have a seat at the table to negotiate what work was going to get done. In the IT department, autonomy was not much of a thing. In the IT department, there were always more forms to be filled out.

It also turns out that when your manager is a fussy sycophant pining for the approval of his toxic asshat of a manager, it doesn’t matter what work you’re doing: you will be unhappy with your line of work. I don’t think that’s unique to the IT department. But let’s go out and consult the Gallup poll that says that, beyond Pink’s core of Autonomy, Mastery, and Purpose, workers will quit their boss as often as (or more often than) they will quit their job.

Because after you’ve gotten a lackluster annual performance review from a bad manager, you’ll know what’s additionally true: yes, people seem to leave because the money is better elsewhere, but the money is better elsewhere because the bad manager will underutilize and (therefore) undervalue their workers and thus not even try to pay them what their fair market value should be.

White-collar individual contributors should always remember this: your manager’s job is to give you work that will be profitable for the company to have you do and then appreciate you doing it. If they suck at giving you profitable work, or can’t figure out how to pay you what you’re worth for the kind of work they have you doing, that should be their problem, not yours; that kind of incompetence is supposed to go with their pay-grade. That their incompetence routinely becomes your problem is where the distribution of office work often falls apart. And there’s a solid chance that with a flicker of self-esteem — which your incompetent manager has almost certainly attempted to smother — you can walk away from that dysfunction and get a job somewhere else where your skills will be better, if still not fully, appreciated. But it doesn’t solve for the problems in the environment you’re leaving behind, or for the structurally diminished status of the engineer, or for how executives are using stock grants as their private fiat currency, which is turning the stock-market-based 401k retirement plan into a Ponzi scheme.

The big pattern is this: modern capitalism, perhaps more so than many other economic systems, but not unlike — for example — Stalinism, has a problem where people with a lot of status and power continually spend that status and power on securing that status and power to the detriment of anybody who doesn’t have as much status or power with which to secure their own position. Or, more simply, the people at the top are doing woefully little to stop the shit from rolling on down.

And these are the things that kids these days see and hear about, even from their successful white-collar parents, that make them uninterested in joining the Gangs of America.

I don’t know how to solve for that.

What I do know is that I’m on a good team doing substantially appreciated work; that I’ve negotiated, influenced, weaseled, and cajoled an almost embarrassing amount of personal autonomy from the corporation; and that between my team and my expanded autonomy, I’ve been able to wield enough mastery over my technical skills to almost shake off 15 years of “I got a degree in Public Relations — what the hell am I doing programming computers?” impostor syndrome.


Choose Good Health, Low Cholesterol, & Dental Insurance

“Susanna, four days ago… you chased a bottle of aspirin, with a bottle of vodka.” “…I had a headache.” —Girl, Interrupted

To kick off this debate season, we’ve got the LD topic of “Resolved: Adolescents ought to have the right to make autonomous medical choices” for which I’m not going to write a full set of sample cases because I think it’s a worse-than-usual topic: the affirmative simply doesn’t have enough good ground to stand on.

The ground I’d recommend for the affirmative is valuing Security, which can be threatened by disease ergo a criterion of Defense Against Disease. And the argument is that adolescents should be growing aware that their parents (and their parents’ lifestyle choices) may not be the best defense that they, the kids, have against disease and thus they should be free to secure their body against disease in ways that worry, or disagree with, or even contradict their parents. The key example to work with here would be vaccines: if the parent is an anti-vaxxer and the child, now a teenager, has been hearing anti-vax arguments for over a decade and has determined that they’re pseudo-scientific BS, then the adolescent child should be free to autonomously engage in a private doctor-patient relationship to become vaccinated and thus secure their body against diseases. Same deal with parents who are into faith healing; the child should be free to autonomously pursue the security of their body against disease regardless of the parents’ belief systems.

The weakness with this affirmative position is, of course, that it requires the parents to be less rational than their children — and that’s a total corner-case in reality, leaving oodles of doubt for the negative to exploit for the win. So what the affirmative may do instead (and this is wrong) is deploy Kant’s definition of autonomy. As Michael Sandel writes in Justice: What’s The Right Thing To Do:

Kant’s conception of autonomy imposes certain limits on the way we may treat ourselves. For, recall: To be autonomous is to be governed by a law I give myself—the categorical imperative. And the categorical imperative requires that I treat all persons (including myself) with respect—as an end, not merely as a means. So, for Kant, acting autonomously requires that we treat ourselves with respect, and not objectify ourselves. We can’t use our bodies any way we please.

And Kant’s name-brand rationality matches pretty well with how Wikipedia begins its summary of Autonomy: “…it is the capacity of a rational individual to make an informed, un-coerced decision. In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility and accountability for one’s actions. In medicine, respect for the autonomy of patients is an important goal, though it can conflict with a competing ethical principle, namely beneficence.”

So that’s cool, except for two things:

  1. The US will, by default but not always, treat a teenager as a “young offender” rather than as a rational adult for purposes of criminal justice. The law will similarly reject the notion that teenagers having sex with adults could have given consent on the grounds that they are not rational adults. So the notion that in the corner-case of “medical choices” teenagers should be regarded as de facto rational adults (when even rational adults can catch flak from medical personnel for trying to get sterilized under the age of 30) goes against multiple long-established policy precedents.
  2. But what’s worse is that rationality is actually pretty subjective and situational, which is part of why the criminal justice system will by-default-but-not-always treat teenagers as young offenders. This is politely glossed in the assertion that “Determining optimality for rational behavior requires a quantifiable formulation of the problem, and making several key assumptions.” And we’ll be grinding on this for a while, but the key point is that to say there’s a “right” to some autonomous territory makes that rational aspect assumed — whether it’s a good assumption or not. See also: the “well-regulated militia” that justifies our right to bear arms. Oh, can’t find one of those? That’s my point exactly: when autonomy is granted, rationality becomes assumed and only gets brought up to defend a person from the consequences of actions they shouldn’t have taken (on the basis that they weren’t rational after all).

But Kant’s particularly irrational hard-on for rationality is shockingly de-humanizing, as Michael Crawford explores in The World Beyond Your Head:

Experience is always contingent and particular, and for that reason “unfitted to serve as a ground of moral laws. The universality with which these laws should hold for all rational beings without exception… falls away if their basis is taken from the special consideration of human nature or from the accidental circumstances in which it is placed.” To be rational is, for Kant, precisely not to be situated in the world. … Whether you regard it as infantile or as the highest achievement of the European mind, what we find in Kant are the philosophical roots of our modern identification of freedom with choice. … This is important for understanding our culture because, this understood, choice serves as the central totem of consumer capitalism, and those who present choices to us appear as handmaidens of our own freedom.

So what we’ve got right there is a core irony of Kant’s position: our culture bases our notion of freedom (badly) upon it being wholly internalized and individualistic, and yet we actively treat other people as a means — in violation of the previously mentioned categorical imperative — of expressing our freedom. Put another way: as soon as you need a doctor to fill out a prescription to validate your autonomous medical choice, you’ve compromised the principle of autonomy.

But Crawford continues:

When the choosing will is hermetically sealed off from the fuzzy, hard-to-master contingencies of the empirical world, it becomes more “free” in a sense: free for the kind of neurotic dissociation from reality that opens the door wide for others to leap in on our behalf, and present options that are available without the world-disclosing effort of skillful engagement.

And this is exactly where we get into medical choices where science has bottled a solution to some of your worries; indeed, even vaccines exist so you don’t have to develop an awareness of microbes, with bonus points if you realize that the doctor wants to vaccinate you and use your body in herd immunity as a means of protecting all bodies against certain diseases as some bodies cannot be vaccinated, making the intent to vaccinate people an immoral violation of the categorical imperative. (Please note: both you and your doctor should behave responsibly and you should get vaccinated if you can. Having a deep and personal engagement with the pox will not substantially increase your awareness of microbes in the world, and protecting you and those around you from that realization does not make your doctor an immoral person. The ability to conclude otherwise is why LD debate sometimes annoys the hell out of me.)

But let’s cut back to rationality under “universal laws” at this point: because at the point where a child would rationally disagree with a parent on a course of action — and that’s the conflict we’re generally talking about here — it becomes necessary for the child to denigrate the parent’s choices on child-rearing as fundamentally irrational. And this is a critical problem for Kant in any situation: as soon as we claim that we’ve eliminated the “accidental circumstances” of our current (and yet enduringly human) condition, we attribute the “accidental circumstances” of other peoples’ humanity to their difference from our position, which makes them not just wrong, but debased and immoral, while our rational(ized) choice is justified by what we errantly consider to be universal principles. And this is how you weasel out of the categorical imperative like a Total Asshole: “I know I’m a rational actor here, and I see that because they would take actions different from my own that they are not rational, therefore I am not imperatively bound to treat them with the same dignity as I would treat all rational actors such as myself.” (For more on this topic, read Mistakes Were Made (But Not By Me).)

We routinely see this in debates: “I present the categorical imperative as a totally rational ethical framework, and while it would be unethical to force my ethical framework on another rational being, my opponent’s disagreement with my ethical framework clearly shows that they are irrational and thus makes it totally fine for me to force my ethical framework on them.”

And that’s the real reason that I’m opposed to this topic: the strongest philosophical ground I know of for the affirmative will turn the person standing on it into an asshole who (ironically) dehumanizes everybody that disagrees with them. And unless their “autonomous medical choice” is being made at the proctologist’s office, turning into an asshole is not a good thing.

But skipping back to how bad of an idea it is to assume that teenagers are, by default, able to make autonomous medical choices (in the same way the 2nd Amendment assumes that gun owners are part of a well-regulated militia), let’s talk about the widespread wrongness of this topic.

Drugs! Specifically, the consumption of prescription drugs in a way that isn’t prescribed. If it’s a medical choice to take Oxycontin or Adderall as prescribed, then it’s a medical choice to also take them as not prescribed (the pharmacist gave us the dosage information and we rationally and autonomously chose to ignore it), and that’s illegal for everybody rather than being a right for adolescents. The affirmative may try to argue that autonomous medical choices should be a right for everybody, except that’s not what the resolution says, and even if it were, drug-permissive countries like Portugal still regard addiction as a public health scourge to be treated, and that’s true whether the drugs being consumed are corporate-produced “medications” or some other innovation of modern chemistry. The point is that the affirmative position leaves the medicine cabinet open with tacit approval for all subsequent abuse by teenagers. (If you want a particular genealogy of “public health,” check out chapter 2 of Foucault: Power.)

But that’s the obvious one. Let’s take it one step worse.

The American Psychiatric Association includes Anorexia in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) — this makes it a medical condition. It quickly leads to malnutrition, which is also a diagnosable medical condition. This means that treating anorexia is a medical choice. And what we immediately encounter is that the people who are suffering from anorexia are the people who are inflicting anorexia upon themselves and have thus, generally and typically, made the autonomous medical decision to forgo treatment of anorexia.

Here’s some of what Laurie Penny — who was hospitalized with/for/because anorexia — has written about it in Unspeakable Things with emphasis added:

The other girls on the ward look like every kind of girl I’d grown up afraid of. … It’s bad enough being on a locked ward, but now I have to be locked up with a bunch of frivolous fashion kids? Clearly, these girls have starved themselves to the point of collapse simply because they want to look pretty; I, meanwhile, have perfectly rational, intellectual reasons for doing exactly the same. (p 55)

You do not do this to look beautiful. You know you look like hell. You do it because you want to disappear. … You’re sick of being looked at and judged and found wanting. (p 26)

It is no surprise that so many women and girls have what are delicately called ‘control issues’ around their bodies, from cutting and injuring their flesh to starving or stuffing themselves with food, compulsive exercise, or pathological, unhappy obsession over how we look and dress. Adolescence, for a woman, is the slow realisation that you are not considered as fully human as you hoped. You are a body first, and your body is not yours alone… (p 114)

So many young people are doing ritual violence to their own bodies. Diagnosis of eating disorders, chronic cutting and other, more arcane forms of self-injury has mushroomed over the past decade, especially among girls, young queers, and anybody who’s under extra pressure to fit in. (p 30)

Eating disorders are still seen as diseases peculiar to pretty young white women, which perhaps explains why years of ‘awareness raising’ have led to a great deal of glamour and mystery surrounding this deadliest of mental illnesses and precious little understanding. … [T]he number of people with eating disorders is still rising, and we are no closer to solving one of the great mysteries of modern life… The best answer we seem to have come up with is ‘magazines’. This says rather more about what society thinks goes on in the minds of teenage girls than it does about the cause of an epidemic that kills thousands of young people every year, and leaves countless more living half-existences with the best dreams of their single lives shrunk to the size of a dinner plate. … Eating disorders are what happens when youthful rebellion cannibalises itself. (p 29-30)


And the crucial point of that last block is where Penny observes that “This says rather more about what society thinks goes on in the minds of teenage girls,” because that’s the clue that we’re fighting with competing rationalities. That’s the warning about what else will get unleashed with the affirmation of the resolution. Yes, young women should have reproductive autonomy — but not to the point where we tacitly accept anorexia as, you know, just part of how kids cope with growing up. Yes, children of anti-vaxxers and faith-healers should totally See A Doctor About That — but not to the point where we’re okay with bored suburban kids practicing chemistry in their parents’ medicine cabinets.

As written, the resolution categorically assumes too much of teenagers and too little of their parents and should be regarded very suspiciously for all of the reasons that the teenager on the affirmative will not want to discuss.

And that’s the final vicious point: while the affirmative may passionately advocate for the rights of their peer group, their claim to rationality is undermined by their necessary belief that their presumptive parents (or whoever it is they need autonomy from) must be behaving irrationally (contrariwise, their parents would assert that the children are behaving irrationally in dissent). Their position is not ethically disinterested in the outcome of the debate, and so their capacity for rational deliberation appears compromised on this very specific issue. To put it another way: they may be able to give powerful pathos-laden witness as to why they should have situational autonomy, but they do so to the detriment of their ethos as they disregard the wider implications of the topic, implications that should consistently overwhelm them and result in a negative ballot.

The closing perk for those of you who read this far is the opening clip from Trainspotting, which critiques the prescribed list of socially permissible “rational” choices with non-prescription drugs and from which I took the post title. It is probably the best movie I recommend people avoid watching because the pure wretchedness of its content is so masterfully arranged. (Which is to say: if you think you can take it, you should watch it. Especially if you’re a sheltered suburban teenager. But there is a dead baby on the ceiling, and that’s just one scene.)

* It is at this point in the first draft that I take a break for lunch; a dish of elk, rice, and vegetables, spiced and sauteed in olive oil. This action isn’t quite ironic as it is not exactly autonomous: my cat will cry at me if I’m not eating, partially because he hates it when I get hangry but mostly because he likes to mooch bits of people-food from me. In this decade — since being divorced — I’ve gained almost 40 pounds, almost all of it muscle. This hasn’t changed the fact that I hate the idea of taking up space, of being noticeable, of becoming a target. That I can pass for one of the dudebros at the gym, the sort of people I grew up shying away from, is a nauseating source of cognitive dissonance for me. I project this resentment of my form outward and thus generally expect, as reflection, that I am generally feared and resented — all the more for being atypically non-confrontational, as that confuses the hell out of people who think that I’m what I look like. I am therefore impervious to common flirtation (on the grounds that “that person, as one of those people, fears and resents people that look like me, and thus myself inclusive, ergo she would not actually flirt with me; she obviously and rationally expects that the chiseled musculature and pretty blue eyes are totally a trap, a position which — while I believe it is incorrect — I totally respect and sympathize with, especially since it’s merely what I would think if I were standing where she’s standing, even though I’m not.”) which I suspect has gotten me rightly cussed out as “clueless” or a “stupid boy” behind my back once-or-repeatedly in the past few years. But I’d rather be one of those stupid boys than one of these total assholes, with apologies to Ms. Penny (in the abstract).

In related news, prior to a policy round (in which the genders of exactly nobody actually mattered for competitive purposes) at the state championship, I was asked how I should be addressed and I flippantly responded “Your Majesty.” This was wrong of me, and I apologize. I am to be correctly addressed as “Your Grace” since I am a card-carrying Pope of Eris.

Felicia, Foucault, and the Freakshow

There you stand, beloved freak; Let it shine. —Garbage, “Beloved Freak”

A study asserts that reading literary fiction (versus just fiction, or non-fiction, or nothing) improves empathy because it causes the reader to guess at what’s going on to round out the characters. This is funny in three ways: first, it requires unscientific subjectivity to elevate a book from fiction to literary fiction; second, it requires sloppy incompleteness on the part of the author and — by necessity — misidentification on the part of the reader; third, when this study is cited by speakers at conferences, they’re likely to be sloppily incomplete in their reference to it and merely call out “fiction” as beneficial. I might be a bit snarky and suggest that the study is itself a work of literary fiction as the authors try to construct meaning in their lives, and that I feel for them, but I think that might prove their point, which isn’t what I’m all about here. Because the people I tend to feel for are the people who are doing partial documentation of their lives in the form of memoirs. More specifically, and as I asked a delightful San Francisco executive when the speaker repeated the “fiction” claim: shouldn’t a good memoir do a better job of building empathy for a real person than literary fiction might?

I’ve decided to try this hypothesis on my niece by getting her memoirs of amazing women. I decided to start with Valerie Plame (Wilson)’s memoir of her career as a CIA spy because the CIA helpfully redacted all content that wasn’t age appropriate. As well as all content that was age appropriate. And every word that had a vowel. So that didn’t work. But then Felicia Day wrote a memoir, so I bought that and waited in line for five hours to get it signed for my niece, who likely won’t be allowed to read it for a few years because the word “fuck” gets bandied about quite a bit. Felicia asked me what I thought of her book, and I replied half-truthfully that I liked it so far — the full truth was that I was enjoying the content but found the overly-modern style to be atrocious.

The problem of the overly-modern style seems to be associated with not-authors increasingly writing books and taking on a very casual tone that not only tries to intermittently converse with the reader or the author or the editor who is apparently failing at their job, but then also decorates text to suggest that words should sound different and thus mean something different, instead of simply choosing alternate phrasing or — heaven forbid — trusting the good senses of the reader. To be clear, this isn’t unique to Felicia — Aziz Ansari undercut his otherwise fascinating sociological screed on modern romance with it, Jenny Lawson fills out her taxidermied tome with it, and even Nietzsche would abuse his text with long dashes followed –BY ALL CAPS. But I don’t think they’re copying Nietzsche at all; I think their editors are failing to rein in their stylistically obnoxious tendencies. The more-professional writers like David Sedaris, Scott Berkun, and Molly Crabapple (specifically her essay on turning 30, as her memoir is not yet published) all manage to be funny and poignant and touching without being stylistically sloppy.

But given memoirs by both Plame* and Day, we can start to see — as Foucault would have — the populist reversal of power that the Internet has brought to publishing stories of the self. This is not an un-observed change; Dr. Bell has noted that technology “is changing how stories are told, and who gets to tell them” and she certainly knows her Foucault.

For those who can’t quote it from memory, here’s the most-relevant bit of Foucault at this juncture, from Discipline & Punish:

For a long time ordinary individuality – the everyday individuality of everybody – remained below the threshold of description. To be looked at, observed, described in detail, followed from day to day by an uninterrupted writing was a privilege. The chronicle of a man, the account of his life, his historiography, written as he lived out his life formed part of the rituals of his power. The disciplinary methods reversed this relation, lowered the threshold of describable individuality and made of this description a means of control and a method of domination. It is no longer a monument for future memory, but a document for possible use. And this new describability is all the more marked in that the disciplinary framework is a strict one: the child, the patient, the madman, the prisoner, were to become, with increasing ease from the eighteenth century and according to a curve which is that of the mechanisms of discipline, the object of individual descriptions and biographical accounts. This turning of real lives into writing is no longer a procedure of heroization; it functions as a procedure of objectification and subjection.

And yet Plame and Day show two inversions here:

  1. Plame was elevated to public consciousness — heroized, if you will — and thus had a story to tell, but then had that story heavily redacted to not only obfuscate the CIA’s rituals of power, but also to reassert the prior disciplinary relationship with-and-over Plame; that is, the CIA alters the monument and excises the content of the document to prevent possible use. The inversion here is predictable: it is better to control the document than to write the document, especially as the subject of the document.
  2. Day bootstrapped herself on YouTube to gain an audience. The trajectory of her career was simply unfeasible just a few years prior when Plame’s identity was being leaked to Judy Miller and the New York Times. She is keenly aware of this and documents it not as a heroization, but rather as her (rather Erisian as far as I’m concerned) freakishness manifesting in a changing context that leaves her continually vulnerable. The inversion here is that a selection of her weaknesses is turned not to the private disciplinarian gaze, but opened to the public.

The inversion that Day’s memoir demonstrates is less predictable than Plame’s traditional memoir challenge to a bastion of knowledge, but it does answer Foucault’s concluding problem in Discipline & Punish: “At present, the problem lies rather in the steep rise in the use of these mechanisms of normalization and the wide-ranging powers which, through the proliferation of new disciplines, they bring with them.” (Emphasis added.) But what we see with novelty memoirs like Day’s, or Berkun’s, or Crabapple’s is that the audience-finding reach of modern technology is allowing what would have been a document of subjection to instead be disseminated into public consciousness and undermine the mechanisms of normalization.

So while I find it regrettable that we’re losing the particularly considered cognitive style of how we tell our stories (and I blame the editors), I think we should be delighted that we’re also expanding who gets to tell them.

The new problem we’re going to have is discovery: if the new value is subversion of normalization, then how does somebody who doesn’t already have an audience get their unusual story to an audience that wants to read it? Especially given that most adult Americans are reading fewer than 6 books a year — reference, noting that 6 is the median for the 75% that read any books at all — and we can easily speculate that the majority of those readers are reading the same books as their friends.

This isn’t really a new problem, though; it’s the same old problem: social power favors normalization. We engage in herd behavior and over-rely on stereotyping to stay vaguely engaged on the one hand and to excuse our disengagement on the other, because being over-engaged is creepy and is why restraining orders exist. And this is also why publishers are still A Thing: their business model is to figure out who’s normal enough to clear an audience threshold and then buy the rights to distribute that story to the target sub-normalized audience. And thus it becomes a problem for publishers and distributors: how can they identify amorphous demographics that don’t perceptibly exist yet and then connect them to somebody who’s willing to narrate a story for them? And, thinking forward, how will they profit from previous years’ publications that don’t have an actual marketing budget? (Amazon’s recommendation algorithm should be helpful here, but it’s been pretty much crap for over a decade.)

Indeed, Felicia’s initial “big break” with The Guild wasn’t just a matter of keying into putting a show about computer gaming on the fledgling YouTube site (good sense and lucky opportunism); it was having YouTube find and promote her work because they expected it would appeal to their target demographic. And while I’ve known geeks and actors through high school and college who have done not-dissimilar work, the apparent imperfection of their timing crimped their market success. The increase of opportunity comes with increased competition for niche mindshare, exposing luck as a major factor beyond talent or skill.

And it turns out that the particular strain of Impostor Syndrome that says “you’re only here because you got lucky; you couldn’t get here again no matter how hard you tried” seems to be what Felicia and I have in common. That, and having played raiding gnome warlocks (least popular class, represent!) back in the early days of World of Warcraft, when raiding was a major social event. (Funny story: while I was waiting for five hours, I didn’t really know much of anything about Felicia beyond Dr. Horrible’s Sing-Along Blog and the fact that she’d been doxxed by the hateful jackasses of the Internet. But my niece was super-excited about being able to read the book with her name inscribed in it in a few years, just as I’d expected she would be, so I figured the wait would be worth it regardless, and I was totally right. More on that in a few moments.)

Make no mistake: we are each skilled at what we do. But the opportunities we took to get where we respectively are? They are not opportunities we can necessarily tell other people to take, which limits the value of our experiential advice; nor are they opportunities we’d necessarily be able to take again if we’d somehow gone back in time to give ourselves good advice. And that makes us tangibly insecure.

See, I was generally a gifted slacker breezing through my rural town’s conservative public school system; I completed a public relations degree in three years, realized that I had no professional interest in relating to the public, and got a guy who was some combination of kind-and-naive to give me a job programming computers for Intel. Add a fortunate divorce from an unfortunate marriage, plus a guy I’d helped hire years before later offering me his particular job as he was vacating it, and I’m doing great. Felicia was born 11 months after I was; she went from homeschooling to college without a high school diploma, studied math and music, and then went into acting, and claims to be mostly writing and producing these days, but can also draw a crowd of 1200 people to a bookstore and make them go “squee” in surprisingly adorable ways.

Neither of us really “belongs” where we’re at, and that plays hell with our self-perception (as I was acutely reminded while live-coding example work for three other actual programmers last week), probably contributing to varying degrees of social anxiety disorder. But there is good news for anybody who comes after us and is lucky and opportunistic, and it is this: Education Is Not Destiny.

And this should be super-important to the teenagers who are my demographic audience because I coach them in debate as they consider college-going strategies, most of which are designed to indenture more than educate:

Being able to identify and exploit opportunities is more important than what you study.

Being a computer science major won’t make you a competent programmer, and being a theater major won’t lead you to a successful career on stage and screen. These respective outcomes may be more likely with the matching major, but they are not guaranteed. Getting into a successful career is all about developing a knack for identifying and exploiting opportunities… which also requires competence and skill, but you can miss a lot of opportunities despite being totally competent and skilled. So feel confident in studying whatever the hell you want to (and can afford to), so long as you’ve got strong contextual awareness of what sorts of opportunities are good for you.

To briefly address the obvious question: why does everybody act like education is destiny, having replaced “What do you want to be when you grow up?” with “What’s your major?” The simple answer is that a major is a stereotypical part of a person’s identity until it isn’t, in the same way that performative masculinity or femininity gets associated with male and female (never mind that I’m into baking cakes now). And people attach moral significance to these performances against expected social roles, a habit inherited from ancient tribalism, up through Greece and beyond, when every individual’s contribution to the collective effort of civilization was seemingly pre-ordained as a moral duty to hold the chaos at bay. If you want to cut to the chase on moral philosophy from Aristotle to post-Enlightenment, and on how it lets people think their social constructs are sturdy enough for moral judgments, the book After Virtue is really quite good, but way off topic.

Let’s wrap up this line of discussion with a brief interlude for Nicole Sullivan. She ended up studying economics in college, then became a carpenter because sure-why-not, and is now a badass web programming geek because of Dance Dance Revolution and because Education Is Not Destiny.

I previously mentioned that the five-hour wait was totally worth it, and I would be remiss if I didn’t mention that Felicia was lovely and delightful even after five hours of book signing, a feat of endurance that was probably fueled by a lot of sugar and caffeine. But here’s what she says about it:

And this keys us into the insight that the “Public Figures” who can tap into the good vibes of their more-accessible-than-ever audiences, and who are thankful for the opportunity not to be pigeonholed into a pre-ordained social role rather than feeling entitled to their celebrity, have a professional skill that will help them retain their base when they want to shift their careers to something new.

Take, for example, Sara Taylor. She also recently wrote a book, one in which a couple of teenage girls go and kill a bunch of people. It is, presumably, not exactly a memoir, even though the girls are in a band and Sara (aka Chibi) is the lead singer for the goth rock/darkwave band The Birthday Massacre. Anyway, I went to a Birthday Massacre concert the last time they were in town, and the last opening band got me rather worried, because they sounded like stereotypical rock stars spewing verbal abuse at the audience from the sanctified space of the stage. But my fears were misguided; this was the typical gesture between Chibi and the crowd:
Chibi <3s Us and We <3 Chibi

And this brings us to our concluding advice: it’s not just nice to have an audience that loves you, but valuable to have an audience that facilitates opportunities you want to pursue. The line between supporter and participant has been smudged by the careers of random, chaotic people, and being able to identify and exploit opportunities from the crowd will be a valuable career-building skill. I have experience in this; my best career move was the one a former associate happened to offer me.

But I did have to be skilled and competent before I could exploit that opportunity, and that’s the positive note that Felicia ends her book on: because technology is changing how we tell our stories and who gets to tell them (going back to Bell), it is advisable for people to cultivate their personal narratives — it’s not just a matter of “writing a book,” but also a matter of aligning the facets of your personality with the work you want to spend a portion of your life doing and letting the world know the company you want to keep.

Raiding ‘locks, represent!


* While she’d reasonably assert that she is Valerie Plame Wilson, she was outed, entered the public consciousness, and became memoir-worthy (as it were) as Valerie Plame. Thus the portion of her life that is relevant here is labeled as “Plame,” preferring consistency to accuracy.