Project Management is People Management

I’m taking a class on project management for grad school. Our textbook is Project Management: The Managerial Process (7th edition, Larson & Gray, 2018) and while it is structurally laid out in a sensible-looking fashion, its actual content includes surprising claims like:

“one could eliminate the risk of choosing the wrong software by developing web applications using both ASAP.NET [sic] and PHP” (p. 217)


Unapproved Curriculum #2: Vocational Guidance

Let’s start this off with a few foundational statements:

  1. I would rather have kids who can think like me than kids who look like me, and I now have such a student, one who is even keen on a career like mine; this is first and foremost for her.
  2. I volunteer hundreds of hours each year in an attempt to help kids navigate around the boring mistakes that consumed years of my life, and this extends that trend.
  3. A great many mistakes come from believing in symbolic structures without a critical interrogation of the power that wants the symbol to be accepted as true, so we will be inspecting those structures.

While there’s an increasing amount of advice available on this topic, this is the compilation of advice I’m giving. So with that in mind, let’s begin to transition from adolescence into adulthood as an information system analyst or architect.

Misconception: Computer Science is Programming

I am a bad role model for you, but not for the reasons you might think. I’m a bad role model because I don’t know how to lead other people to the kind of success — lots of autonomy, solid paycheck, decent status — that I’ve had as a computer programmer. And this is because companies fetishize credentials, as if having a degree in Computer Science makes a person a capable programmer in a corporate environment: it does not. My degree is in Public Relations — solidly ironic for a reclusive misanthrope — and I don’t know that I’d be able to get past a phone screen if I went out job hunting because there are a lot of pricks in the world. But it turns out I’m not alone:

And you can follow these not-computer-science technologists on Twitter, which is something cool that we didn’t have when I was a teenager.

Minor Tangent: Programming, Actually!

If at any time you are sad that I’m not talking about programming here, pause to watch Laurie Voss (@seldo) talk for two hours about Stuff Everybody Knows. But I’m not talking about programming because it should be different by the time it matters to you. The harder technologies that I suspect are likely to be relevant in 5 years and beyond are security (Morgan Marquis-Boire — @headhntr — is the person I’d start learning from) and Blockchain (of which Bitcoin is only one use, but I don’t know enough to make recommendations here), but I spend almost no time on them — they are Computer Sciencey. Alistair Croll (@acroll) has his shortlist of future-driving tech, and the greater-than-ever capacity for machine learning through both hyperreality and networked experience will soon be challenging — but not necessarily eliminating — our scarcity-oriented socioeconomic structures, a point that Tim O’Reilly (@timoreilly) gets. If you’re looking to intercept an industrial sector in the near-term, there’s a report for that.

The key message here is that what you study in college doesn’t actually matter very much once you get started down a career path, and also that it’s easier to find role models than ever before. But, counter to that point, it is worth mentioning that credential-fetishizing organizations — and the bigger they are, the more they do it — will likely prefer a Bachelor of Science degree to a Bachelor of Arts degree, and the preference may be reflected in your paycheck. The underlying truth of this point is one of the complexities of life: you have to position your actions to align to the current reality, but then also steer it towards the future you want.

Starting on a Lighter Note: Getting Dressed

Let’s take a turn for the frivolous and talk about fashion for a moment. You may not want to take my word for it; I’m constantly complaining about how hard it is to find pants for my < 29″ waist with legs that can press 540lbs. So I’ll be referencing everybody else.

First, the discussion I had with a director of lawyers (rocking a half-million per year salary) came down to simply this: wear the clothes that nobody you care about will judge you for. Now, to be fair, she was talking about when she was representing a client in front of a jury during high-stakes court proceedings, and not allowing the jury to be distracted by what she wore. But the same principle applies elsewhere: fill your wardrobe with outfits that will not draw attention away from your skills and abilities. And this is a kind of physical manifestation of Postel’s Law that we’ll get to in a moment.

But more specifically, there is a trend towards having a sort of personal uniform to minimize the effort of choosing an outfit. Women are starting to do it, having noticed in Steve Jobs how well it seems to work out for high-status men (who, in years past, were always wearing mostly uniform suits and tuxes anyway). For our territory on the wide edge of technology, I would point to Timoni West’s description of her outfit — pay particular attention to “So when I find a product I like, I buy it in bulk” — because it really sucks to find something simple that works perfectly, wear it out, and go back for another… and it doesn’t exist anymore. Eddie Bauer, WTF? Or, to put it another way,

But there’s a sexism warning here for women, too, because some (frankly idiotic) people are still sexist with particular regard to clothing. But wait, it gets worse.

That aside, my advice goes like this:
I basically have two color palettes: the preferred black/blue/grey and the occasional green/brown/orange. The only common crossover is swapping blue jeans for green cargo pants; other than that, I just decide which palette to use on any given day and grab clothes. If I need to look managerial I’ll wear a buttoned shirt; otherwise I’m likely to toss on a hoodie. Additionally, and this is for anybody who is crazy-lean like me: get clothes that fit your form and express your physicality, even if it means buying the same damned size and cut of jeans you wore 20 years ago, because your movement and gestures are a critical portion of your communication skills, and if you’re floating in clothes too large for you (a habit reinforced through years of your parents insisting you’d grow into them) then you’re undermining your ability to communicate effectively.

Side note: guys, if you have blue eyes, blue shirts are your friend — or if you’re wearing a suit, go for a blue tie. I speak from experience when I assure you that it will catch the attention of the sorts of women who go for blue eyes.

So, to sum up: populate your wardrobe with a minimal variety of almost nondescript group-and-role-appropriate clothes that fit you well and can be mixed-and-matched with no particular cognitive effort.

But the underlying principle here is Postel’s Law: Be conservative in what you do, but liberal in what you accept from others. He was talking about programming, but it’s really a vital two-part life-lesson:

  1. If you control your public tone to minimize first-impression offenses, then you can ensure that anybody who takes offense anyway really is a colossal asshole you should avoid. And
  2. “Most people mean well. Even when they screw up. You get the best out of people when you keep that in mind.” —Nicole Sullivan

To re-clarify because this is important:

  • Postel’s Law is not a reason to not be badass.
  • There is no reason to not be badass.
  • Ever.

Tangent: Brief Foray into Relationships

The most substantial mistake I made was getting married to an art student. And what I learned is that being deeply committed to an utterly misaligned relationship and unequal partner is, frankly, horrible. And while (heteronormatively) women are becoming aware that dumbing-down for men sucks, educated men have been dumbly accepting the isolation of intellectual inequality as a matter of patriarchy for generations. Really, many of the things that the errant neoliberal line of feminism is going “WTF?” about — like “having it all” — are things that patriarchy just stoically ignored for the sake of structure, not because they provided a consistent masculine advantage. The patriarchal structures don’t work for everybody, or even most people, and replacing a few men with a few women won’t change that, hence my assertion of “errant.” Point is: don’t worry about a long-term relationship until you know who you are and what you do well enough to evaluate the appropriateness of a partner. And while there are sociological effects to mating for companionship, they’re hardly a warrant for you to pursue an unequal relationship. Anyway, don’t even worry about it for a decade. It’s good to be single: it gives you more room to grow.

With that in mind, let’s get to work.

Misconception: Meritocracy Ensures Fair Pay

Leaders of great companies will often tout the rewards that get lavished on people because they want their labor-force “knowing and having faith that the system will actually give you the right raises as you go along,” which is self-serving bullshit that not only allows them to not pay people what their labor is worth, but also allows the executive class to justify their own bloated compensation. But just like getting an A+ in a class doesn’t make you the teacher, being the best laborer won’t make you a boss.

And yet we all love the myth of meritocracy because we’re nerds who were better at understanding grading scales than social status when we were young and probably still take comfort in small systems of easily-mastered rules. But the active dissonance between what we want to believe and the feeling of getting a laughable 2% raise in a good year — “That’s not a raise, that’s a type of milk!” was my precise response — contributes to what you will hear described as the Toxic Environment of career technologists. I doubt that it’s much different from other industries, but we do have people who are well-paid and rather clever and socially oblivious, so the toxic effects are more pronounced. We’ll readily attack each other with tactless honesty at the behest of management instead of politely bolstering everybody, and the more different — “Otherizeable” — you are from the normative core of the group, the more likely you are to get attacked by each of them, either in direct conflict or in an uncharitable review. Most of them won’t really “mean it” per se; they’re just stupid boys from the Introverted Troglodyte department.

Minor Tangent: Let’s Talk About Sex

When you see all the big corporations talking about how they’re building for diversity, and recruiting for diversity, and laying the pipeline for diversity (oblivious to the sexual connotation), the important question to ask is “What are you doing to maintain your current level of diversity?” because usually they’re not doing much of anything. Kate Heddleston picks up on the “canary in the coal mine” metaphor, where the women exiting technology show that the environment is toxic — but the managerial response is to bring in more canaries. This only compounds the chronic anxiety of the current employees, with that anxiety spilling out in racist and sexist ways to respond to the “new threat.” It’s ugly, as Laurie Penny describes, and you’re likely to experience it differently (and worse) in your 20s, starting in college if you’re not in the middle of it already, than when you’re older. You may wish, as many women do, that the #NotAllMen would stand up for you against the ugliness. But this misses the dominance element in harassment: the trolls and troglodytes who are behaving in vicious and perverse ways towards you in front of your peers are doing so to demonstrate their power to the group in a way that the group — being composed of the socially inept and utterly non-dominant — is probably not prepared to deal with at all. It’s ugly, and it’s worse if you’re the target, but it makes all the low-status bystanders who are too confused and cowardly to respond feel shittily aware of their low status, too.

There is no bright spot here: as long as people are afraid that the world would be just fine — or more efficient — without them, they will tend towards being aggressive sadists, or sniveling sycophants, with socially adept people switching their behavior depending on who’s watching. The less-bad news is that sexualized aggression seems to be less common in a corporate structure, so you’re more likely to encounter trolls who are grumpy and bitter about all of the shit that they can’t control. Nicole Sullivan has some solid advice on how to deal with these sorts of people.

But this is getting off topic; we were on the issue of meritocracy and an HR department ensuring you don’t have to fight for fair compensation, which is wrong, but people are assured of it to persuade them not to fight for fair compensation. What is true is what Adam Smith observed when writing The Wealth of Nations: that “Masters are always and every where in a sort of tacit, but constant and uniform combination, not to raise the wages of labour.” What’s new is that it may be illegal if certain lines are crossed. But there are two critical points here: First, negotiate your starting pay, because when everybody gets a 2% raise, the person with the higher salary gets the bigger raise. Second, manage your manager: get them to commit to what they want from you and how substantially they’ll reward you for it.
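To make that first point concrete, here’s a tiny illustrative calculation (every number in it is invented): two people doing the same job get the same 2% annual raise, but one of them negotiated $5,000 more to start.

```python
# Illustrative only: two hires, same job, same 2% annual raise,
# but one negotiated a higher starting salary.
def salary_after(start, annual_raise, years):
    """Salary after compounding the same percentage raise each year."""
    return start * (1 + annual_raise) ** years

meek = salary_after(80_000, 0.02, 10)
negotiator = salary_after(85_000, 0.02, 10)

print(f"After 10 years: ${meek:,.0f} vs ${negotiator:,.0f}")
print(f"The $5,000 gap has grown to ${negotiator - meek:,.0f}")
```

The gap never closes; it compounds, every single year, because the same percentage is applied to a bigger base.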

Here’s a story about the power of negotiating: So this one year, we got pathetic raises, but I kept on being awesome at whatever was thrown at me. Then the next year, I got an apology from the manager because he ran out of decent raises to give, but I was assured he’d make it up to me. Then the next year, I got re-organized under a sniveling sycophant who didn’t like my attitude, and I got screwed in annual review and then punted to a different manager. And this was when I got pissed off and, taking three months of vacation in the face of arbitrary layoffs, told my new manager, for whom I had not yet delivered any value, that headhunters were baiting me with 50% raises (this was true) and if he didn’t promptly fix my compensation then I was going to walk. And he got me the biggest raise I’d seen in 8 years, so I called it good and went back to doing the work. Until another re-organization put me back under a sniveling sycophant. This time I was serendipitously offered a job in an organization that actually wanted me, so I jumped on it and promptly started managing my manager in a friendly way to ensure that I was delivering what he expected to make it easy to give me a raise when the annual review came around, and he gave me an even bigger promotion than the last one I’d gotten, which really crystallized the other misconception about meritocracy which we’ll get to in a moment.

But the big point is that when you don’t expect organizations to treat people fairly, you can be more assertive in taking control of how they treat you. And this is true in your relationship with the corporate manager who wants to keep exploiting your labor, in the relationship you have with the college professor who wants to impart valuable wisdom on somebody, or even with (about half of) the wait-staff in restaurants from whom you’d like a meal and not poison: if you treat them well while making it clear how you expect to be treated, then you’re likely to get the favorable treatment you demand. Yes, this does border on sociopathic social engineering, but it’s different because everybody involved is happy with how things turn out, right? As Hamlet (a sociopath at best, murderous psychopath at worst) advises Polonius: “Use them after your own honour and dignity: the less they deserve, the more merit is in your bounty.” Regardless, self-promotion is absolutely a necessary job skill these days.

Minor Tangential Misconception: Being a Paying Customer Ensures Fair Treatment

It used to be conventional wisdom that if you’re not paying for the product — like shows broadcast on TV — then you are the product: TV was funded by advertisers that paid studios and stations to create and broadcast content that they could advertise on top of; they were paying to bait you into watching their sales pitches. But this is no longer the case. The de-structuralization of power has turned all customers — even paying ones — along with their interaction with the corporate institution into a product to be analyzed, packaged, and re-sold to another institution that thinks it knows how to extract more value from it… that is, from you. There are pernicious effects to this: for example, in order to game their U.S. News & World Report ranking, a college attempted to surreptitiously subject incoming students (already admitted and paying customers) to a sort of culture-test to figure out who was likely to drop out so they could (illegitimately) force those students out before they were counted as students, thus boosting the college’s student retention rate. The president of the university allegedly said that the students were like bunnies who needed to be drowned and threatened with guns.

Misconception: Meritocracy Prevents Organizational Politics

This is a variation on the last misconception. But just as an organization may be full of sadists and sycophants, the organizations within a company will also have relations with each other. And while it’s always better to be a Vice President than a file clerk (because we’ve already downsized all of the file clerks), not all Vice Presidents are created equal.

Roughly speaking, in any significantly large corporation there’s a unique vertical capability of creating and selling products which differs from corporation to corporation. These are called profit centers: they exist to make money. Then there are horizontal utility groups that aren’t much different between corporations — things like Information Technology, Human Resources, Legal, and Finance. These exist to mitigate expenses in conventional ways, but are expenses in and of themselves: they are cost centers. While the skills that are useful in the horizontals — HR, IT, Legal, Finance — are more portable, their inability to actually make money causes them to be low-status and constantly starved for resources. If you want to be in a high-status group, get close to where the main products are being designed or sent to market.

When I say that low-status organizations are starved for resources, I mean that “the biggest raise I’d seen in 8 years” in a low-status organization (IT) was utterly crushed by the raise accompanying my next promotion in a high-status organization (Marketing) that I wasn’t even expecting to get for another year. What hadn’t changed was the content of my job: my entire professional life has been writing and maintaining web applications, but suddenly when I’m in a higher-status organization I start getting paid a lot more.

And it’s not all about money: at the human level status gets you autonomy to do what you do better. Status means your work is worth more, so the company is willing to invest more in you to get more and better work back out of you, instead of wanting you to do a hack job based on guidance from people who don’t know what you’re best at. And if you need a seat next to the window, and to take off early on Tuesdays for debate practice, you’ll have more leverage to get such things when you’ve got status. And if you’re in a high-status organization, you’ll start with more status and find it easy to gain more status.

Warning: In the low-status organization, everybody’s anxiously awaiting the next round of being downsized. Being a top-notch performer will, at some point, be at odds with your manager’s feckless non-direction and turn you into a liability to them.

But here’s a strange story: I stayed friends with one of my managers from the E-NEW — “Everything Nobody Else Wants” — group, so she kept me up to date on how she was moving around from group to group until she ran into a toxic asshole who, tragically, forced her out of the company. But that’s not the story. The story is about recruitment: our CIO (Chief Information Officer, head of IT) sent her out to recruit college graduates to come work for our IT department. And she was sent to big Ivy League schools like Cornell and Columbia. And you may be thinking as I’m recounting this story to you, as I was thinking when she told it to me, as she was thinking while she was doing it: “Why the fuck are we trying to drag expensive Ivy League graduates into our dismal IT department?” Because high-status Ivy League graduates aren’t going to join IT. Hell, even I was smart enough to not join IT — just dumb enough to move too slowly when IT rolled over the group that I had joined. There was a happy outcome of that absurd recruitment drive, though: my ex-manager got to network with other recruiters who lined her up with her next career move when she got tired of the toxic asshole.

There are several lessons about college in all of this:

  1. Companies will recruit graduating students from colleges. When selecting a college, be sure to ask “Which companies recruit here?” because if you’re not getting recruited then you’re going to have to work a lot harder to get that first job. The college may have a cache of resources to help fill in for companies that don’t recruit there, but using those resources is work you’ll have to do. And the gap where you may be unemployed when you leave college but before you start a career is insanely stressful, especially if you have an older sibling who had their first career move solidly lined up a full year before they graduated.
  2. Expensive colleges produce high-status students, a point that is undermined by the usual size of student debt. For example: if you graduate with $120,000 of debt from a high-status school, you will have to get a job that pays not only a living wage, but also enough to pay off your debt — this may work out for you. On the other hand, if you graduate with $0 of debt, then you’re free to pursue any job or career that you can convert into food and shelter. There is, presumably, an optimal middle ground where status is maximized and debt is minimized, but it’s of primary importance that you’re confident you can live with whatever you choose to do.
  3. Departments in a university have statuses just like departments in a corporation, and they are not funded equally despite what the Office of Admissions says, in much the same way we tried to sucker Ivy Leaguers into IT. Consider: The college I went to had a big music program and a big nursing program. The part where the computer science department was co-located with the math department in a trailer off campus behind the old gym should have been a warning to me, but I was young and naive. The previous recommendation was “it doesn’t matter much what you major in,” but let’s refine that: choose like three-or-so possible majors that appeal to you and then filter out any colleges that have none of those majors as their highest-status departments, and prefer the colleges where you get multiple matches as that leaves your options open.
  4. If we put the flat cost of tuition together with a low-status and under-performing department, we end up with “General University Requirements” courses. You would be well-advised to figure out how to transfer these in from a low-cost/no-status institution if possible. Summer classes at a community college can probably get you past most things that seem superfluous to your degree. This works because the final degree is from the higher-status institution, and that’s what anybody who cares looks at, so that’s what counts. (Of course, you may cap out your transfer credits with your AP courses and that’s fine too — the real point is to not pay full price for sub-standard content.)
  5. If we combine what we know about the status of departments and the status of people, we can begin to predict that tenure-track professors in high-status departments (one end of the spectrum) will want to impart wisdom on bright young sparks who take after them, while adjunct/temp professors in low-status departments will be afraid of losing their job and want a bit of commiseration from the not-quite-peers who are getting closer to the same sort of crap day by day. You can matter to and be well-treated by these sorts of professors pretty easily. In the middle of the spectrum will be the low-status tenured professors who have to keep their under-performing department running and the high-status adjunct professors who are focused on advancing their career rather than their students — but becoming relevant to these sorts of professors is relatively difficult.

If you ignore those lessons and just choose a college on the basis of its football team you’ll discover that it’s disappointingly like high school, but a lot more expensive. Of course you may be disappointed anyway — do bear in mind that your class cohort will be made of people rather like your current cohort, and the football players are still going to be the football players. Sorry about that; I apologize for the continuity of the universe in advance.

Tangent: Play to Your Strengths

One of the dis-services that is pounded into high schoolers today is the need to do everything to be “well-rounded.” There are multiple reasons for this, like the need to discover individual strengths and talents, but not the least of which is — cynically — that a not-employed monomaniacal over-achiever will completely overwhelm a teacher who has 199 other students and is trying to be a functional adult in a low-status profession as well. One of my former students wrote a 200-page paper on Women’s Rights in Afghanistan — a topic she was passionate about — only to have it flatly rejected as unreadable by a teacher who, doing the math, had another 600 pages of (boring-ass mediocre) 3-page papers from 200 students to shovel through. If we actually cared about cultivating the strengths and talents that we discovered through encouraging well-roundedness, we’d be behaving differently, but no matter: the point is that just like a company or university will happily starve some departments while feeding others, you should focus on doing a few things amazingly well and — bluntly — jettison whatever you’re not really committed to.

See, the way we think about students is very binary: they’re either becoming a square person with one skill and that fits if we need square people, or they’re becoming round people who — and this part is strange — won’t actually fit together, but can roll around in pursuit of whatever dreams they happen to have. Imagine civilization as a jigsaw puzzle: square pieces can fit together and be all wrong, round pieces can go anywhere but almost never really fit. So my advice is to synthesize a few things that you’re really good at and carve a niche that fits you.

This strategy of building on strengths to become better at what we’re doing instead of building on weaknesses in the hopes of achieving mediocrity has been obvious for years, gained popularity about a decade ago, and now even has a book that I’m comfortable recommending: Cal Newport’s So Good They Can’t Ignore You.

That said, everybody should still be able to do their own laundry and cooking.

So, to recap, you ought to be doing what you do best in an organization that is respected for what it does having studied at an institution that is respected specifically for what you studied there. And this is all harder than it sounds because companies and colleges bluff past questions of status and you’re being spread too thin to know what you want to focus on, but also harder than it sounds because you may have to maneuver your way around to get to a position with a comfortable level of status. Remember: align yourself to current reality, adjust yourself to steer towards a better future.

Misconception: Sell-Out or Poverty, Choose One!

I’ve spent a lot of time here talking about corporations, paychecks, and tuition fees. This is deeply ingrained in the life I went after and now have. But it’s not the only life out there, not even for a technologist.

First, please consider that you can be a sell-out and a lot of other things as well. I wrote a book, run a non-profit, and insisted that my employer support me as I routinely wander off and coach debate. Remember that you can use status to bolster your autonomy: the more valuable you are to them, the more pliable they’ll be in order to get your best work from you. Of course, if all you ever ask for is money then being a sell-out is kind of boring.

Second, please consider Idalin Bobé (@IdalinBobe) — she’s building status and leveraging that for an agenda of community building and organization as she describes in this long-form presentation:

Not dissimilarly, there’s Code For America which commonly works to modernize and simplify the technological skeletons of decaying bureaucracies, but is all about technology for local civic engagement. And on the way there, you’re likely to encounter Code 2040, dedicated to improving inclusivity within tech.

But here’s the thing that’s hard for kids these days to believe: We’re mostly doing capitalism badly as a matter of status games played in the arena of public policy. You know who else played status games in the arena of public policy to demonstrate how naive the economic principles that allegedly underpinned their government were? Guys named Stalin and Mao. The funniest part about Stalin’s purging of millions of Soviets was that intellectuals and academics like Sartre and Chomsky actually supported that lunacy because they were so opposed to capitalism.

Let me be very clear: what really matters is building your status and using it to do what you want. That is what we get when we put human nature in civilization. Issues of money, debt, capitalism, communism, even the counter-factual notion of a social contract are all just different views, different shells on how people work to each build their status in a society to ensure their security and their legacy. Sexism and racism are ingrained in our culture to reduce the competition for high-status positions, but nepotism is seen as a natural right of the rich whenever people howl about a “death tax” despite it running completely contrary to our mythological meritocracy.

You may find yourself doubting all this “status” talk. But consider: why does anybody care about U of O’s football program? Or, more generally, why do institutions of higher learning all seem to support a sport that is known to cause brain damage? The answer — going back to Thorstein Veblen’s Theory of the Leisure Class which is an awful and yet valuable read — is the status that they get from having a football team, especially if it can win. The problem is that a good football team will not help you pay for school; quite the opposite actually.

When you can see this all clearly, see the absurdity of the human condition in society, and look for a way to join in and help out that matches whatever skills you choose to make your own, that’s when I’m successful. That’s when I’m successful at getting you past many of the stupid and naive mistakes that I made and that your peers will also make.

But what I’m really hoping is that I’ve provided you with plenty of resources to help you think about how technologies can be applied in under-developed areas. I’m hoping you’ve seen that our subculture of technologists needs socially adept people who can stand in front of crowds and tell stories that are true. And I’m hoping that these give you a legitimate hope that you’ll be able to take control of your future and steer it in a direction of your choosing.

Appendix: Professional Practices

But we haven’t really talked at all about what you’re actually doing for a living. The answer is you’re making something that will help other people become what they want to be, and then helping convince them that they want to become that. Here’s the list of professional practices books that probably don’t get used in class:

  • Slack by Tom DeMarco on giving yourself enough space to think about solving problems.
  • Badass by Kathy Sierra on how to design products that can be adopted and adored.
  • Waltzing with Bears by DeMarco and Lister on how to design a schedule to maximize business value and
  • The Pragmatic Programmer by Hunt & Thomas on what you should be doing as a professional that you weren’t doing as a student.

Appendix: Personal Practices

In addition to those professional practices, here’s some documentation on how you can help keep random trolls out of your life.

EF-ing Many-to-Many Relationships

Oh dear, that title sounds nasty.

So there are a lot of primers on getting started with Microsoft’s Entity Framework in a Code-First way, and this won’t be one of them. If this disappoints you and you’re Google-impaired, here’s an official video and a related article. But a whole lot of primers only explain how to put a lot of tables next to each other in a data context, stopping short of relating those tables to each other. While this is generally good enough when you’re writing queries, it comes up short if your users are writing queries because you’ve exposed your data as a service… like an OData service. This is especially true if your “tables” are just views with no obvious referential integrity that you’re writing .NET code against after the fact. It turns out that the process to do this is strange enough that you’re unlikely to simply stumble across it, but it’s easy when you know how. So here’s how.

Step 1: Have Your Major Objects. Let’s make a table (or view) for IceCream (id int primary key) and WaffleCones (id int primary key) because we can match any ice cream to any waffle cone. Sure, why not? Note that your primary keys don’t have to be strictly primary keys (for views) so long as you’ve controlled them as if they’re primary keys.

Step 2: Association Table (or View). Now in the middle we’re going to put an IceCreamConeCompatibility (distinct ice_cream_id, waffle_cone_id) based on whatever magical business logic ensures that ice cream is compatible with waffle cones.

Step 3: The Object Decoration. Go into your object classes and add the relationship property like this:

public class IceCream {
     public int id { get; set; }
     //Here's the part that matters:
     public virtual ICollection<WaffleCone> WaffleCones { get; set; }
}

public class WaffleCone {
     public int id { get; set; }
     //Again, adding this new part:
     public virtual ICollection<IceCream> IceCreams { get; set; }
}

Step 4: The Data Context Code. As promised, here’s the freaky part: IceCreamConeCompatibility is not recognized as an actual object in code — but that’s okay. The relationship is contextual so we’re going to mention it to our DataContext class, like this:

public class IceCreamDataContext : DbContext {
  public IceCreamDataContext(string nameOrConnectionString) 
               : base(nameOrConnectionString) { }
  public DbSet<IceCream> IceCreams { get; set; }
  public DbSet<WaffleCone> WaffleCones { get; set; }
  // with me so far? Here's the freakish bit
  protected override void OnModelCreating(DbModelBuilder modelBuilder) {
    // This is how to do *-to-* with a relationship table in the middle.
    modelBuilder.Entity<IceCream>()
        .HasMany(ic => ic.WaffleCones)
        .WithMany(wc => wc.IceCreams)
        .Map(map => map.ToTable("IceCreamConeCompatibility")
                       .MapLeftKey("ice_cream_id")
                       .MapRightKey("waffle_cone_id"));
  }
}

So what you can see is that we’re going into Entity Framework’s black-boxery and force-feeding it a relation when it’s called into being. Put into more-or-less human terms, this relationship is:

Each ice cream (starting on the left) has many waffle cones (moving to the right), with the waffle cones also having many ice creams, and all of this is technically described in the compatibility table, which maps the key of the right-side table (waffle cones) to its “waffle_cone_id” column and the key of the left-side table (ice creams) to “ice_cream_id.”

See, it’s easy when you know how. But I’m not apologizing for the title of this post.

Doing More with LESS

Microsoft’s Visual Studio Web Essentials extension is probably the quickest and easiest way to get LESS stylesheets enabled in your ASP.NET project. They have a page that doesn’t say much about it. Once it’s in, you can “Add new item > StyleSheetLESS” and this will give you three items: the LESS stylesheet that you edit, the CSS stylesheet that you use if you’re bundling and minifying as part of the build after debugging locally, and the minified CSS stylesheet that you may use if you’re not minifying and bundling as part of your build process.

Warnings: Bugs in your LESS stylesheet will cause the CSS file to not generate. And your CSS file may not generate when you want it to — be sure to check “Tools > Options > Web Essentials > LESS” and ensure that “Generate CSS file on save” and “Show preview window” are both set to true to minimize the likelihood that the tool will betray you. Finally, if you’ve bugged your LESS but can’t figure out how because it doesn’t proactively pop up the error list, Error List is about halfway down in the “View” menu (or Ctrl+\, E using default keybindings).

But if you’re like me, you’ve been using CSS all millennium long (so far) and are probably wondering why you should change to this new fad that is on your virtual lawn. And the answer is simple: You Hate CSS. You hate CSS because CSS was not written for programmers. Programmers believe in making things DRY (“Don’t Repeat Yourself”) and CSS has a disgusting tendency towards being SOGgy (“Slimy Organic Growth”) as overrides, conditionals, and one-offs layer over the not-necessarily-used cruft inherited from external agencies, contractors, corporate branding, and that intern from two years ago.

LESS Power #0: Nesting instead of repeating selectors. This one should’ve been obvious 15 years ago:

.big-class {
  font-size: 1.2em;
  margin: 4px;
  .sub-box {
    padding: 4px;
    &:hover {
      text-decoration: underline;
    }
  }
}

I’m going to trust that since you know this compiles down to CSS, I don’t have to spell out that it’ll build rules for .big-class, .big-class .sub-box, and .big-class .sub-box:hover, right?

LESS Power #1: Consistent values via variables. How many times does your current CSS set border radius? I ask, because every one of them could be different. The first basic power of LESS is to set a variable (preferably at the top of your LESS), like this:
@button-border-radius: 3px;
And then you can use it all over the place like this:
.button-green { color: green; border-radius: @button-border-radius; }
.button-blue { color: blue; border-radius: @button-border-radius; }

Or we could take it a step further and have the radius be a mix-in:
.round-corners { border-radius: @button-border-radius; }
.button-green { color: green; .round-corners; }
.button-blue { color: blue; .round-corners; }

The mix-in technique is particularly useful when resetting fonts, margins, and paddings early in the de-crufting process.

Warning: Just because you’re not repeating yourself doesn’t mean that CSS has magically become not-dumb. The compiled CSS copies the content of variables and mix-ins to wherever they’re used, resulting in no savings for your end user. This is preferable to you copying them yourself, but it’s not permission to get sloppy with them.
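For instance, the .round-corners buttons above compile to roughly the following — the mix-in body gets stamped into every rule that uses it:

```css
.round-corners { border-radius: 3px; }
.button-green { color: green; border-radius: 3px; }
.button-blue { color: blue; border-radius: 3px; }
```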

LESS Power #2: Color palette control. How many colors are in your color palette? Probably very few, until they get expanded out to a crazy number of variants and shades and hues and such. LESS can help you bring your colors under control with functions. Let’s start by taking one of our favorite, important colors and defining it:
@link-blue: #0071C5;
Now this is our most important blue, so when we do other blues, we want to keep them in line with this blue. For example, if we want a background blue that’s like this, but more grey so that it’s got a “not clickable” vibe to it, we can use the LESS “desaturate” function like this:
@flat-blue: desaturate(@link-blue, 30%);
LESS documentation naturally features the full list of LESS functions, but I’ll go over a few more here.
Now that we can manipulate our colors a bit, it’s time to mix them into interesting situations. For example, let’s make those blue and green buttons stand out a bit with consistently applied gradients. First we’ll write a parameterized mix-in like this:

.background-gradient (@bgcolor) {
  background: none;
  background-color: @bgcolor;
  background-image: -webkit-linear-gradient(bottom, darken(@bgcolor, 10%) 1%, 
      @bgcolor 20%, lighten(@bgcolor, 10%) 92%);
  background-image: linear-gradient(to top, darken(@bgcolor, 10%) 1%, 
      @bgcolor 20%, lighten(@bgcolor, 10%) 92%);
}

and then we’ll drop it into our buttons:
@selected-green: #1C1;
.button-green { .background-gradient(@selected-green); .round-corners; }
.button-blue { .background-gradient(@link-blue); .round-corners; }

And now when we have to change our gradient declaration to keep up with browser compatibility or simply on the fashionable whims of corporate branding, we’ve got one place to change it for our entire site, regardless of how many colors get used.

For our special bonus, I’d like to give a shout out to the inset box shadow which I use to highlight things (like buttons or table cells when a hover event is detected, though in the future it’ll be tied to eye-tracking, I’m sure):

.highlight-element(@highlight-color) {
  background: @highlight-color;
  box-shadow: inset 0 0 .45em darken(@highlight-color, 40%);
}

Note that this expects a light highlight color and darkens the edges; you can reverse it by changing the darken() in the box-shadow to lighten() and starting with a dark background color. Either way, the color values are pre-computed by the LESS compiler, saving you from any worry about browser compatibility on that point.
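As a sketch of that reversed version (the mix-in name here is hypothetical):

```less
/* Companion mix-in: expects a dark color and lightens the edges instead. */
.highlight-element-dark(@highlight-color) {
  background: @highlight-color;
  box-shadow: inset 0 0 .45em lighten(@highlight-color, 40%);
}
```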

LESS Power #4: Retro sizing calculations that work. If you’re living under the imposing thumb of corporate standards that aren’t yet fluid and responsive — this year they’re “old school,” next year they’ll step it up to “retro” so as to not fall behind the times — then you’re thinking about box sizes, and those box sizes are probably based on the sizes of background graphics that you’re not even using because you’ve got gradients and box-shadows and all manner of great built-in capabilities. Well, let’s add another great capability: calculated sizes. Start by defining some variables:

@core-width: 960px;
@core-pad: 10px;
@element-pad: 8px;
@box-border-width: 1px; /* example value; the .oneThird rule below needs it */

And then we lay down our wide centered block:

.main-center {
  float: none;
  width: @core-width;
  margin: 0 auto @core-pad; /* Center our main block */
  background-color: #fff;
}

Note that I’m not re-defining white because it’s white. If we had any doubts about leaving white as white, I’d have re-defined white. But declaring that @white: white; is silly so I didn’t do it.

Now that we’ve got our block in which all content gets unceremoniously dumped, let’s show what the cool stuff is:

.w-737 { /* left nav + wide right block */
  width: (@core-width * .78 - @core-pad);
}
.w-517 { /* left nav + wider middle + right nav */
  width: (@core-width * .56);
}
.con-rcl, .con-lcl { /* left & right columns */
  width: ((@core-width * .22) - (@core-pad + @element-pad));
  z-index: 2;
  display: inline;
  position: relative;
}
.con-rcl {
  float: right;
}
.oneThird { /* 3 columns */
  width: ((@core-width * .333) - 
      (@element-pad + (@box-border-width * 2) + (@core-pad * .667))); 
  /* 2 core pads div 3 columns; borders not inside these boxen */
  float: left;
  margin-right: @element-pad;
}

So what you can see here is that we can take a quantity of pixels, apply a percentage to it (like “width: 33%” would imply) and then modify it to account for margins, paddings, borders, outlines, and whatever else might be required by our box model. If you need a refresher on box models, this looks like a decent one.
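Plugging in the variables from above (and taking @box-border-width as 1px for illustration), the compiler emits literal pixel widths — roughly:

```css
.w-737 { width: 738.8px; }
.w-517 { width: 537.6px; }
.con-rcl, .con-lcl { width: 193.2px; }
.oneThird { width: 303.01px; }
```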

And that’s what LESS does for us. We started with not repeating our selectors across rules. Then we mixed in not repeating our consistent values or style settings across rules. Then we built an actual color palette from functions instead of just tweaking hex values until they looked okay. And finally we made woefully old-school element widths less woeful using the LESS compiler’s calculations.

These examples came out of a project that’s been around since the days of supporting IE6 and had many hands working over it. Converting over to LESS and eliminating over half our styles while giving the site a UI scrub-down (and no longer pandering to IE8) took me less than a week and while it’s not the prettiest of pigs, it is now wearing a more reasonable amount of lipstick which is confined to its lips.

In terms of business impact, this gives us

  • stylesheets that are easier (and thus cheaper) for developers to maintain,
  • stylesheets that are shorter for bandwidth cost savings,
  • and, when delivered, client browser resource savings which improves user experience.

Put it together and I think you’ve got a compelling case for doing LESS at work.

Twitter Cards: Toying with Metadata

This post is about getting nifty Twitter Summary cards and similar OpenGraph summary cards for Facebook, Google+, and LinkedIn on a single-author copy of WordPress.  It’s not too difficult, except for the misinformation that makes it impossible.

Several hours of woe and tribulation later, I have added the following to my possibly-unnecessary header-article.php (which is called from single.php as get_header("article")):

<?php if(is_single() || is_page()) { ?>
<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@Lexwerks">
<meta name="twitter:creator" content="@Lexwerks">
<meta property="og:url" content="<?php the_permalink(); ?>">
<meta property="og:title" content="<?php the_title(); ?>">
<meta property="og:description" content="<?php the_field( "summary" ); ?>">
<?php if (has_post_thumbnail()) {
$thumb_src = wp_get_attachment_image_src( get_post_thumbnail_id() ); ?>
<meta property="og:image" content="<?php echo $thumb_src[0] ?>">
<?php } ?>
<meta property="og:type" content="article">
<meta property="og:site_name" content="<?php bloginfo('name'); ?>">
<?php } ?>

I installed Elliot Condon’s Advanced Custom Fields plug-in as the easiest way to add a single piece of metadata to an article (I called it “summary”) and fetch it back out again, because adding a summary slug should be simple, and not necessarily an excerpt. And now it is; thanks Elliot! I also added a custom functions.php to my -child theme’s folder to enable “Featured Images” née “Thumbnails” — it is simply <?php add_theme_support( 'post-thumbnails' ); ?> and now I can attach images to my posts. They aren’t currently showing up in my posts because I haven’t asked them to, but they will show up on my cards. I may add a condition in the future to default in my favorite thumbnail if I don’t attach a specific featured image to a post.

I could probably augment this further by including heights and widths for the image in use, picking up my Twitter ID from my WordPress User Profile (under “Users” if you, like me, haven’t seen it before), or going with my numeric twitter:site:id instead of my changeable user name.

My testing shows that Google+ picks right up on this stuff with only a little bit of caching, which is great.  Facebook tends to cache the card for rather longer, so it does work but it’s difficult to test revisions with.  Twitter wants you to grovel for their approval — or at least click a button and wait a couple of weeks after you have implemented Twitter Cards to their algorithmic satisfaction — before you can see the results showing up in tweets. For a company that forces people to abandon grammar and spelling to get ideas out faster, this seems like a particularly backwards maneuver. The up-side to Twitter is that you can feed your card into their validator until you’re happy with it — though that’s not the same as it actually working and adding value in the wild.

I haven’t tested with LinkedIn because I’m not particularly linked in to… yeah.  I’ll get around to it later.

It is worth noting that Twitter seems to have the strictest restrictions on the way they read the card. They want:

  1. The title to max out at 70 characters.
  2. The description to max out at 200 characters (though I’ve also heard 160).
  3. The image to be under 1MB (reasonable) and at least 60px in each direction (really?); anything more than 120px in either direction gets cropped.
  4. The site to load in under 3 seconds for their validating algorithm.

If you follow those rules, your summary should show up fine for Facebook and Google+ as well, assuming FB and/or G+ didn’t cache an earlier copy of your pre-successful metadata.

Awaiting a Good Example

Apparently there’s some confusion on how to use the new Async/Await capabilities of .NET 4.5. I know that there’s confusion on this point because I saw it on slides presented at a conference, and it made me sad. While what was claimed may be technically accurate for client apps, it looked dead wrong for web apps. So I started with one of K. Scott Allen’s articles on Ode To Code and modded his code a bit to demonstrate how Async/Await usually works.

Kicking open VS2012 and starting a WebApi project, I drop in the following code. Spoiler: it turns out that this is the wrong way to do it; keep reading.

public async Task<string[]> Get() {
   var mark1 = await SleeperAsync(8000);
   string mark2 = "Timestamp2: " + DateTime.Now;
   var mark3 = await SleeperAsync(3000);
   string mark4 = "Timestamp4: " + DateTime.Now;

   return new string[] { mark1, mark2, mark3, mark4 };
}

async Task<string> SleeperAsync(int napTime) {
   string outVal = "TaskOne: " + DateTime.Now + " - ";
   await Task.Delay(napTime);
   return outVal + DateTime.Now;
}

And go to path /api/values (because this is the ValuesController) and see the entirely wrong results of

<ArrayOfstring>
<string>TaskOne: 8/11/2012 11:35:20 AM - 8/11/2012 11:35:28 AM</string>
<string>Timestamp2: 8/11/2012 11:35:28 AM</string>
<string>TaskOne: 8/11/2012 11:35:28 AM - 8/11/2012 11:35:31 AM</string>
<string>Timestamp4: 8/11/2012 11:35:31 AM</string>
</ArrayOfstring>

Which shows that awaiting an async method call makes it synchronous. Our first timestamp marks when the first delay finishes, and the second delay starts on that mark; then the second timestamp marks when the second delay finishes: synchronous. We’re awaiting the results (as we should’ve expected from the definition of the word “await,” but apparently C# is supposed to be the bizarro-world of natural language or something).

So let’s move one of those awaits so we can see exactly what’s going on:

public async Task<string[]> Get() {
   var mark1 = SleeperAsync(8000); // await moved to result usage
   string mark2 = "Timestamp2: " + DateTime.Now;
   var mark3 = await SleeperAsync(3000); // We now know this is WRONG.
   string mark4 = "Timestamp4: " + DateTime.Now;

   return new string[] { await mark1, mark2, mark3, mark4 };
}

This gives us

<ArrayOfstring>
<string>TaskOne: 8/11/2012 11:39:50 AM - 8/11/2012 11:39:58 AM</string>
<string>Timestamp2: 8/11/2012 11:39:50 AM</string>
<string>TaskOne: 8/11/2012 11:39:50 AM - 8/11/2012 11:39:53 AM</string>
<string>Timestamp4: 8/11/2012 11:39:53 AM</string>
</ArrayOfstring>

And we can see that first task starts at 11:39:50, but doesn’t block the thread so our timestamp gets 11:39:50 as well and then the other task also starts at 11:39:50. But we’re awaiting the results of that other task, so it blocks the thread until 11:39:53, and then our other timestamp is also marked at 11:39:53. But we have to get that first (8-second task) to finish before we show any of it, so we’ve got an await down at the bottom and our patience is rewarded when the first task finishes at 11:39:58.

So the more-correct way to write the function is:

public async Task<string[]> Get() {
   var mark1 = SleeperAsync(8000);
   string mark2 = "Timestamp2: " + DateTime.Now;
   var mark3 = SleeperAsync(3000);
   string mark4 = "Timestamp4: " + DateTime.Now;

   return new string[] { await mark1, mark2, await mark3, mark4 };
}

But! Spinning off async calls isn’t free, so this “more correct” way of writing the function is still not optimal. The optimal thing to do is balance your long-running asynchronous calls against the overall amount of time that the rest of your still-synchronous code is running. So if I’ve got delays of 7, 3, 2, and 1 seconds, then I should only make the 7-second delay asynchronous, since the 3-, 2-, and 1-second delays total only 6 seconds and the overall method is going to be waiting on the 7-second call to finish anyway.

But! This is still nonsense: who the heck puts 13 seconds of delays in their code? (People doing demos instead of real work, that’s who.) Your real code should require a lot more thought than this, and attempting to balance async to synch performance smacks of Premature Optimization. The rule on premature optimization is: Don’t Do It. More precisely, I say “don’t bother,” because as Miguel de Icaza observed: “You will be wrong about the things that actually mattered.” And while that often seems to be true in life and applies to teleological consequentialism, Miguel really was focused on software. So don’t bother doing it, your performance holes are going to be where you don’t expect them.

So the two things you should’ve learned here are:

  1. Putting await in front of an async call is Epiq Fale; the async method call goes first, the await shows up later at the point where you need the results.
  2. But all of this should be awaiting actual performance data from your code so that you don’t “optimize” something that’s already performing just fine.