Darwin’s Theorem


Science, religion, evolution, romance, action, siphonophores!

Darwin’s Theorem is a story about stories (the working title for a long time was “Metastory”) that’s also a mystery, a romance, an adventure, and various other things besides. Not quite science fiction, excessively didactic… think of it as “Dan Brown meets ‘Origin of Species’.”

If you like to see plot, action and strong characters deployed in the pursuit of big, speculative ideas, you should check it out!

Posted in marketing, writing

How Bad is the Porn Industry?

Some quotes (a few words have been changed or omitted to hide identities):

  • I am overwhelmed with sadness, panic, fear, despair. I know I want to quit, but at this point it’s been 9 years.
  • I am so sick of being poor and pathetic. My life sucks.
  • My coworkers are mean, bitter people, with even more controlling dictatorial people who lead them. It is a toxic place. As soon as I am able, I am leaving.
  • The worst decision of my life. And pay isn’t even that great.
  • I lacked the feeling of connection that I needed to survive. Outwardly I seemed content, but I was dying inside.
  • I finished ten years ago and still walk around feeling bad.
  • But to be honest, I have disliked it for a very long time, and just kept thinking that if I stuck it out things would improve and I would work out the nerves, etc. Also, I have had a really hard time admitting that I may not be cut out for it.
  • I’m learning a lot about myself and about the whole fucking process. Just when I think that my eyes have been opened and I’m finally aware of what a sham all this stuff is and what I need to do to get the hell out of here, I have another realization.
  • So I LEFT MY ABUSIVE MANAGER for another, significantly less abusive one. He was a bigshot and I was terrified he would shut me out and my career would be over because he was unpredictable and I’d heard of similar things happening to other people. NOTHING BAD HAPPENED and my new manager is much better. I feel like I took action, and generally freer. A bit sad that ‘not experiencing direct violence but still anxious and depressed’ counts as ‘much better’, but I wanted to share something slightly hopeful.
  • Over a year ago I told my managers I was suicidally depressed. They told me not to worry because you couldn’t tell from my work. Then none of us ever mentioned it again.
  • …an exercise in managing crushing anxiety and fear, and trying desperately to make them look like passion or talent. I could literally taste the adrenaline in the back of my throat for a whole month. My gums bled spontaneously throughout the day.
  • My faith in my abilities and in myself at this point is almost nonexistent.

Awful, eh? Dreadful. Destructive. Inhumane. Clearly a domain that screws people up, or that attracts screwed-up people. Harassment, violence, humiliation.

The thing is: it’s not porn. All those quotes are taken from blogs about academia.

Posted in epistemology, politics, psychology, science

How Much Information Does Your MP Contain?

Canada is supposed to be a representative democracy, not a democratic oligarchy. “Oligo-” is a prefix meaning “few”, from the Greek ολιγος. In an oligarchy, a small number of people–we’ll call them “party leaders” for both convenience and verisimilitude–dictate policy. In a “democratic oligarchy” the various oligarchs are apportioned votes or influence roughly (sometimes very roughly) in proportion to the support they have amongst the populace.

The important difference is that in a democratic oligarchy, when the party leaders gather together it is only their views and interests that are represented. The views and interests of the people are at best secondary to the views and interests of the party leaders.

In a democratic oligarchy the hierarchy of interests looks like this:

  1. Party Leadership
  2. Senior Party Members
  3. General Party Membership
  4. Citizens

In a representative democracy, the hierarchy of interests is quite different:

  1. Citizens
  2. Members of Parliament
  3. Party Leadership
  4. Senior Party Members
  5. General Party Membership

One could argue with the precise ordering, but the important difference is that there is simply no role for Members of Parliament (MPs) in a democratic oligarchy. They are merely variously-coloured counters used by the party leaderships to keep track of votes. For this reason, in a democratic oligarchy, I refer to MPs as “tiddlywinks”, after the worthless plastic disks that children play with.

In Canada today, are MPs tiddlywinks?

There is a simple and objective way to measure this, based on information theory. To understand it requires a bit of history.

The quantification of information was pioneered by the American mathematician Claude Shannon in the late 1940s, and we typically talk about “Shannon information” (and its counterpart, “Shannon entropy”). The idea is disarmingly simple. Consider a string of characters like:

ABCD…

or

AQUD…

or

THOM…

Now try to guess the next letter in each string. In the first, the odds are really high that it is “E”. In the second, who knows? In the third, the odds favour “A” (as in “THOMAS”), “P” (as in “THOMPSON”) and “S” (as in “THOMSON”). That’s assuming the string is in English.

The more information a string gives us about what follows, the greater confidence we can have in predicting the next letter. If we can predict the next letter with certainty, the letter itself carries no new information: everything it could tell us was already implicit in what came before.

In the example above I’ve used letters from the Western European alphabet, but in the formal mathematics of information theory we generally use binary digits (0s and 1s) and talk about information in terms of bits. If we can predict with certainty what the next value will be in a string of 0s and 1s, we have one bit of information. If we can predict a long sequence of following bits, we have that many bits of information. And if we have only a so-so probability of guessing the next bit correctly, we have only some fraction of a bit of information.
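To make “a fraction of a bit” concrete, here’s a minimal sketch in Python–my own illustration, not anything from Shannon–that treats the information we hold about the next binary digit as one full bit minus the binary entropy of our prediction accuracy:

from math import log2

def binary_entropy(p):
	"""Entropy, in bits, of a binary outcome that occurs with probability p."""
	if p in (0.0, 1.0):
		return 0.0  # a certain outcome carries no uncertainty
	return -(p * log2(p) + (1 - p) * log2(1 - p))

def bits_held(p_correct):
	"""Bits of information we hold about the next binary digit, given that
	we predict it correctly with probability p_correct."""
	return 1.0 - binary_entropy(p_correct)

print(bits_held(1.0))  # 1.0: perfect prediction is one full bit
print(bits_held(0.5))  # 0.0: coin-flip guessing is no information at all
print(bits_held(0.9))  # ~0.53: a so-so predictor holds only a fraction of a bit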

On this basis, we can ask: how many bits of information does your MP contain? Obviously, if we can predict with near-certainty how they are going to vote by only knowing one fact about them, then everything else is irrelevant, where “everything else” includes their personal background, character, integrity, accomplishments, awards, etc. Also irrelevant is who they represent, the geographic locale of their riding, and so on. All that matters is that one fact that lets us predict how they will vote.

We have a good deal of information available to us that lets us answer this question. In particular, the Globe and Mail teamed up with Samara to look at MP voting records. I have pulled the data out of the web page and done some analysis on it to answer the question, “How much information does your MP contain?”

The answer, which I’m sure members of the oligarchy will be “shocked, shocked” to discover, is: almost none.

There are a couple of ways to look at this. One is simply to plot the number of times each individual MP breaks ranks, ever:

[Figure: Democratic Oligarchy]

As can be seen in the image, there are literally only 6 MPs who break ranks more than 1% of the time. The largest outlier, at just over 2%, also cast a relatively small number of votes overall–138 vs a median of 542 in the sample.

All told, of the 311 MPs in the sample (higher than 308 because of by-elections), 259 of them–83%–never broke ranks at all.
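For anyone who wants to re-run the tally, the analysis takes only a few lines. Here’s a hedged sketch, assuming the scraped records were saved to a CSV with one row per MP; the file name and column names are my inventions, not the Globe and Mail’s:

import csv

# Hypothetical export of the scraped voting records: one row per MP,
# with columns votes_cast and votes_against_party.
with open("mp_votes.csv") as f:
	rows = [(int(r["votes_cast"]), int(r["votes_against_party"]))
			for r in csv.DictReader(f)]

n_mps = len(rows)
never_broke = sum(1 for cast, broke in rows if broke == 0)
over_1pct = sum(1 for cast, broke in rows if cast and float(broke) / cast > 0.01)
total_cast = sum(cast for cast, broke in rows)
total_broke = sum(broke for cast, broke in rows)

print(never_broke, "of", n_mps, "MPs never broke ranks")
print(over_1pct, "MPs broke ranks more than 1% of the time")
print("%.2f%% of votes were along party lines" % (100.0 * (1.0 - float(total_broke) / total_cast)))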

Our political parties are getting an “A” grade in controlling “their” MPs’ behaviour.

But what does it mean for a party to control “their” MPs? They aren’t the party’s MPs: they are our MPs. They’re supposed to be representing us.

An enemy of democracy would at this point be rolling their eyes and saying, “How naive!” because they know that it’s difficult to refute a sneer.

This means, in very simple terms: 83% of the time, your MP contains no information. Once you know their party, you can predict with certainty what their vote is going to be. Who they are and who they represent are irrelevant.

In the remaining 17% of cases, your MP contains almost no information, about 1% of a single bit. Which for all the practical difference it makes to our government, may as well be no information at all: of the 161,000 votes counted in this dataset, 99.87% were along party lines.
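For the curious, here’s the back-of-the-envelope behind that figure: a hedged calculation that treats the information an MP’s vote carries beyond their party label as the binary entropy of the observed party-line rate.

from math import log2

p = 0.9987  # observed probability that a vote follows the party line
h = -(p * log2(p) + (1 - p) * log2(1 - p))
print("%.4f bits per vote" % h)  # ~0.014 bits: about 1% of a single bit
print(161000 * (1 - p))          # ~209 rank-breaking votes out of 161,000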

Stop and think about that: this dataset over a few years counted over a hundred and sixty thousand individual votes cast in Parliament, and in just two hundred cases out of that one hundred sixty thousand did an MP break party ranks.

Read that and tell me your MP is more than a counter in the tallying of votes that represent nothing but party interests.

Once upon a time Parliament in Britain represented property. After 1832 or so it came to represent–imperfectly but genuinely–people. Today in Canada it represents parties. There is no other conclusion to draw from these data.

We do not live in a representative democracy, but in a democratic oligarchy. Anyone who says otherwise is either lying or doesn’t understand the data.

One question this raises is: how did this happen? For example, it is illegal to offer any Member of Parliament any material inducement to change their vote. If I were to offer a sitting MP $90,000 to change their vote on some bill before the House, I would go to jail. Well, someone would, anyway.

So how is it that private political organizations routinely and as a matter of course suborn our democratic representatives to vote against the best interests of their constituents, day in and day out, and no one raises their voice against it? I’m not familiar with any section of the Parliament of Canada Act that exempts political parties from section 41:

41. (1) No member of the House of Commons shall receive or agree to receive any compensation, directly or indirectly, for services rendered or to be rendered to any person, either by the member or another person,

(a) in relation to any bill, proceeding, contract, claim, controversy, charge, accusation, arrest or other matter before the Senate or the House of Commons or a committee of either House; or

(b) for the purpose of influencing or attempting to influence any member of either House.

And yet we routinely read that “often MPs are pressured to represent someone else–their party” (“often” in this context means “99.87% of the time”, as seen above… in normal parlance this would be referred to as “always”.)

I know this is tradition, but how is this even legal? MPs are not voting with their party out of the goodness of their hearts or the thoughtfulness of their brains; they are voting because they “receive compensation” and avoid punishment for doing so. They are being suborned by the private interests of their political party.

Political parties in Canada typically have a few tens of thousands to a hundred thousand members–each party comprising less than 1% of the population. I would say that less than 1% of the population counts as “few”, and it is certainly the case that the number of votes each of these tiny, privileged, private organizations has in Parliament depends on how many people vote for them, so as I’ve said, we’re a democratic oligarchy.

And more to the point: we are a criminal democratic oligarchy. As near as I can tell, the bread-and-butter business of political parties is to contravene section 41 of the Parliament of Canada Act, as well as to offend against every basic principle of representative democracy.

We are being governed by a criminal class of less than 1% of the population.

It’s time we started to fix that. I’ll leave thoughts about how to another time. This isn’t an issue that happened overnight, and it won’t be fixed overnight, but there are things we can start doing next year that will help: for one, vote for an independent. I’d far rather have an independent oddball in Parliament who made a sincere effort to represent me than a conventional party member of any party who I know is going to vote with their party 99.87% of the time.

Posted in ethics, politics, probability

Some Other Rules

The title of this post is an extremely obscure joke: sum rules for total transition strength are a way of accounting for all possible transitions without actually measuring them individually. They serve as a sanity check on both theory and measurement. But in grad school it always sounded like the prof was talking about “some rules”, as if there were just some rules that had been randomly rounded up to fill our time in class, which might have been replaced by “some other rules” if circumstances had worked out that way.

I have a lot of rules in my life, mostly for convenience. I have a long list of rules for things I won’t read, simply to keep the list of things I do read down to somewhat manageable proportions (I’ve just finished some American Civil War-era histories, I’m currently reading a pulp space opera, and I have a history of the Boxer Rebellion and a history of early Antarctic exploration on my to-read list, as well as Tolkien’s Arthur and a few other things besides, so my will-not-read list hasn’t stopped me from being fairly eclectic).

I also have rules for writing, mostly about the avoidance of cliches, insofar as that’s possible.

Since I’m now exploring interactive fiction I’m thinking about rules for it. These are rules of thumb, conveniences, not laws of nature. Every competent practitioner of any art has such rules. I remember overhearing a lawyer in the gym once explain to a junior colleague his rules for striking a particular type of bargain with the opposition, and recognized myself in what he was saying: he had been trained to offer a certain type of deal at one point in the proceedings and to be open to a different type of deal at another point, and he had a reasonable expectation that other lawyers would be on the same wavelength.

Professionals do this all the time, and writers as much as anyone else. We have rules about how to introduce characters, how to describe scenes and so on. Poets are even worse: we have rules about everything.

With regard to interactive fiction, I think one of my rules is going to be: the playable character should never be in doubt as to what their fundamental problem is. I played through 9:05 by Adam Cadre today, and it does a wonderful send-up of this common trope. The work was released in 2000, so it’s long past time that this was accepted as a common thing, but I see plenty of recent games where the fundamental mystery is “who the hell am I and what am I supposed to do?”

A little bit of revelation in the early part of the game is fine, but in many it is far too central for my taste.

Lost Pig is a delightful counter-example to this trend, although honestly I found the puzzles somewhat arcane, which will probably make real IF people laugh. I’m extremely simple-minded, and more to the point have all the wrong life-experience for this sort of game. I have actually had to hunt for lost pigs, OK? That kind of thing messes with your head when playing through a game like this, which is perfectly lovely in every other respect.

This is something I find I’m running into a lot in IF: my cultural context or knowledge base is just skewed enough relative to the author’s to be confusing.

I have the same problem with non-interactive fiction: Alice Munro’s story “Runaway”, for example, has some enormous howlers in it for anyone who has ever spent a lot of time at a riding stable in southern Ontario. Leaving a hole in the roof of the ring uncovered, for example, is simply not done, ever, as the next big blow (and in summer there is always a next big blow, coming in with the thunderstorms) will take the whole roof off.

In non-interactive fiction such mismatches are simply irritants. They prevent a certain fraction of the readership from fully appreciating the story, but don’t make it impossible to finish. A writer I know had an opportunity to fix such a problem: shortly before the book was re-issued the publisher’s wife read it, and told the publisher, “Neither /author/ nor /editor/ nor /publisher/ have ever cooked a turkey dinner.” The author (then a young woman) had a scene where the protagonist cooked two turkeys in a single oven, an improbable feat at best. A knowledgeable reader would be troubled by such a thing, but would still be able to finish the story.

In IF, that is not the case: if I miss a culturally-specific cue, if I make an assumption different from the author’s because I’ve had experience hunting lost pigs, if I see a way to use the things I have with me to engineer a solution that the author has not considered… the experience falls apart.

The degree of congruence required between the author’s mind and the interactor’s for a satisfying experience is vastly higher for interactive fiction than non-interactive fiction.

This is a problem that narrows the scope of interactive fiction to a pretty niche appeal. You would expect, on this basis, that the interactive fiction community would be small and closely coupled, and that’s pretty much what you see.

Consider a real-world comparison: sailors. A landlubber can read, or even write, a decent sailing story and enjoy it, but to write an interactive fiction about sailing that could be pleasurably experienced by sailors I think it likely one would have to be a sailor, and an interactive fiction about sailing that was enjoyed by sailors would probably be an exercise in frustration for non-sailors. This has certainly been my experience with non-sailors in the real world: I know of only one person who learned to sail as an adult who has really taken to it. There is a large body of esoteric knowledge and terminology that has to be mastered to interact with the world in the appropriate ways.

I’m not saying it is impossible to write an interactive fiction the enjoyment of which is not sensitively dependent on a shared world-view between the author and interactor, but it seems to me to be far more difficult than is the case with non-interactive fiction.

Posted in interactive fiction, sailing, writing

Puzzles vs Problems

Most of my life has been spent unraveling puzzles that are presented to the human race by nature, or god, or some equally malignant power.

“How do you measure the e+/e- annihilation cross section in the range of 1 – 3 MeV?”

“How do you detect neutrinos from a nearby nuclear reactor?”

“How do you rapidly register low-quality images accurately enough to be clinically useful in radiotherapy?”

“How do you calibrate a large heavy-water neutrino detector?”

“How do you use image guidance to help surgeons doing ACL repairs? Or placing screws in a patient’s spine?”

“How do you identify which genes are significantly more or less active in this type of cancer vs that?”

“How do you find a stable solution to the biomechanics problem represented by a knee plus this additional device?”

And so on… I get bored easily so the topics cover a lot of ground, and I’ve left out some of the weirder ones.

Puzzles that are merely human bore me. The world has enough puzzles to keep us all occupied for the rest of time without us spending time making up more.

And yet… there is something appealing about the murder mysteries of the world, the purely human puzzles that still have significance to us because they are relevant to significant events. Even made-up mysteries of this type I can get behind. The illusion of significance is enough, and story provides sufficient illusion of significance.

That said, in reading about interactive fiction (I’m thinking there needs to be some dreadful Soviet-style compound like “interfic”) I can see that the focus on “puzzles” could be off-putting, at least to interactors like me.

On the other hand, the staple of noninterfic is “problems”. The characters should have a new problem every four pages, as well as the larger framing problem and a number of nested problem levels. In “The Lord of the Rings” the framing problem is destroying the ring and saving the world. There are many other levels of problems: forming the fellowship, maintaining the fellowship, defending Helm’s Deep, all the way down to fighting off or hiding from or suborning Gollum.

So why not talk about interactive fiction in terms of problems rather than puzzles? There may be an excellent reason for doing so, but I’m going to keep the thought in the back of my mind as I explore the medium more deeply.

Posted in interactive fiction, writing

Some Notes on Interactive Fiction

I’m old enough to have played Zork, but I never did. It and similar games were part of the atmosphere of my early academic career.

So I’ve been aware of interactive fiction–although I would not have thought of it in those terms–for a long time. In the past ten years or so I’ve found friends online who are involved in the interactive fiction (IF) community, and have poked around the edges of a couple of games.

That parser thing.

Damn.

I am not by nature a patient man. I can be patient when sufficiently motivated, but it isn’t my default setting.

Two minutes with the parser and I wanted to smack it. This from someone who raised two children without raising a hand to them once.

So my early attempts to interact with interactive fiction were just a failure. I didn’t get it. I was missing something, some spirit of the enterprise. I wasn’t sure if I was supposed to be playing a game, reading a story, or doing something completely different, and whatever I did with the parser, it rejected me. I could look, move, explore, but couldn’t seem to make any kind of meaningful progress.

It was just damned frustrating.

Then a while back I made a concerted attempt to play Jason McIntosh’s The Warbler’s Nest.

It was still frustrating, but the creepy atmosphere of the place drew me in. I felt kind of there.

Then I didn’t, because I failed to grasp the nature of the task, and spent a bunch of time screwing around pointlessly.

But then I found what I was looking for, right where I should have expected to find it, and after that things moved forward. I did mostly the right things, or at least things that weren’t pointless and frustrating (I’m using that word a lot) repetitions of things I’d done before. I solved the mystery, made the only possible choices (for me… YMMV) and came to an ending. Far from frustrating, that was deeply satisfying.

For a few brief moments I saw the potential of the medium, and I’m writing this while it’s still all fresh enough in my mind to capture what one Total Newb’s experience with a good, simple, powerful piece of IF is like.

There were minor things that I got caught on in the writing: amongst my people it would be a tinker, not a tailor, giving eldritch advice. I kept on expecting something to do with clothing, and didn’t recognize the figure for what it was. That’s just to be expected when reading a story from someone with a slightly different cultural background, and it likely only caught me out because I wasn’t expecting it and was using up 99% of my mental capacity on the damned parser.

There was a much bigger thing, though: there was the problem of profluence.

I’m a writer, mostly, and “profluence” (I PRO-nounce it PRO-fluence but I know people who say PROF-luence, sad souls) is John Gardner’s term for the way fiction draws us forward into the continuous dream. Anyone who has taken more than a couple of writing courses will have spent some time struggling with Gardner’s “The Art of Fiction”, in which profluence is made much of.

Looking at my interaction with “The Warbler’s Nest”, and looking at the scheme of the game itself, I have to say the story is perfectly profluent. There were many moments when I felt there, in the dream, and moving forward through it. Profluent as hell, especially given my participation in it.

And then that damned parser, or my ignorance of it… and not just the simple mechanics, but the expectations, the context, the cues and conventions that I was too thick to see or understand.

I imagine new readers must feel like this: they are working their way along through a story OK and then get hit by some weird new word or grammatical construct or narrative trick that kicks them out of the dream and into the WTF?

An important part of profluence is a feeling of continuously fulfilled expectation leavened with the appropriate amount of surprise. It is a mix of both anticipation and discovery, and what we discover must always be what was implied by what came before, but only now that we are in a position to understand what came before in the light of our new discovery.

There were moments when I had that, and it was awesome. But then I’d trip over the technology. Stupid newb.

I don’t have any particular silver bullet to suggest. Maybe IF simply has a learning curve and newbs like me need to suck it up and climb the damned thing. I’m not a big fan of CYOA-style games or stories, which I understand is an alternative to the parser in the IF community, so I’m not drawn in that direction.

I’ve fiddled with a few other games and get the same mix of profluent dreaming and hard landings. I’m not sure how steep the learning curve is, or even if it ever really ends.

My next move is to write a little IF myself. I’ve bought a book (Aaron A. Reed’s “Creating Interactive Fiction with Inform 7”) and have some ideas regarding a story–I’m thinking of tuning up a thing I did six or seven years ago about an Arctic expedition gone wrong, just as an experiment.

I’d like to understand this medium better and see if I can do something interesting in it. It certainly seems like it ought to be powerful, but I’m not sure that the current technological approach is the way forward. I have a few ideas regarding alternatives, but am going to spend some time learning the state of the art before exploring in new directions.

Posted in interactive fiction, writing

Deficits, Deficits, Deficits

Back at the end of 2012 I predicted that the deficit for 2013 would be over $25 billion. I was wrong, maintaining my record as the only person who has ever been wrong about anything.

The deficit for 2013 was stuck at $20 billion, and since the start of 2014 it has apparently come down to $10 billion, although no one can explain how or why.

Posted in economics, politics

Is the US a “Developing Nation”?

I’ve just read “Fault Lines” by Raghuram Rajan, who is now the head of the Reserve Bank of India. He is an American-trained economist but, unlike most Americans, has a decidedly international perspective (this is not particularly a kick at Americans: when your nation is as central to the world as the US is today, it is hard to see outside it).

As such, he draws some interesting parallels between the modern US economy and the economies of developing nations. In particular, the close relationship between the US government and the financial sector starts to look more like the managed or relationship capitalism of the developing world than the market capitalism the US prides itself on.

And then there is access to credit. It is one of the features of the developing world that access to credit is poor, to the extent that this is almost definitional. In Canada, for example, the difficulty that “First Nations” people have in accessing credit has kept them in a state of perpetual poverty and dependency that the citizens of many developing nations would immediately recognize. When given access to credit–which necessarily implies reasonably stable conditions and the rule of law, because without those, credit agreements are unenforceable and therefore effectively non-existent–nations rapidly join the ranks of the developed world.

But in the US, over 8% of households don’t have any bank account whatsoever, and many more have no effective access to credit. It is difficult to find comparable (household) figures in Canada, but the figures we do have indicate lower levels (as low as 3%, growing to 8% only when looking at exclusively lower income households).

Under- and un-banked households in the US are concentrated amongst poorer Americans and those whose skin colour is somewhat darker than that of, say, Dick Cheney.

This all suggests the US is in fact two nations: a developed-world nation with a stable middle-class population capable of getting loans from financial institutions that may fail if they screw up, and a developing-world nation with a few systemically powerful financial institutions and industries plus a population excluded from access to credit (and education, and the rule of law…)

It’s an interesting perspective, and one that casts some additional light on the current state of American society.

The slow growth of jobs in the US after the last three recessions (1991, 2001, 2008) has tended to grow the unbanked class, and the financial crisis has taken the “Greenspan put” from an implicit promise to pick up the pieces after the crash to a realized policy. The current push by the Federal Reserve to keep interest rates low is about as bad as it gets from the point of view of repeating the whole thing again in a few years, but Rajan makes the argument that low interest rates and the housing bubble are an attempt (supported by both parties in the US) to allow poorer people to participate in economic growth at a time when jobs are scarce and wages stagnant.

Posted in economics, politics

Why We Need Anti-Discrimination Laws: a computational approach

My libertarian friends, back when I had libertarian friends, often imagined that anti-discrimination laws were unnecessary because “the market will take care of it”.

The argument goes like this: companies compete for the best employees, and employee quality is a significant determinant of corporate success. Companies that discriminate against a subset of the population will therefore under-perform those that don’t, because they will sometimes forgo the best candidate, which will result in lower-than-average employee quality and an increased rate of corporate failure.

This is an imaginary argument, which is to say: it is not an argument at all. While such propositions stimulate thought, they ask us to do something that is far beyond the capabilities of the human imagination: accurately envision the working out of a diverse set of forces represented by probability distributions.

In particular, the way the argument is usually deployed is intended to focus our limited attentional resources on the high end of the employee skill distribution. But this is wrong: the average person is average, and for discrimination to have an effect, dropping the minority out of the population would somehow have to change the average skill of the available workers. Since the minority’s skill distribution is the same as the majority’s, this is mathematically impossible.

Furthermore, remember that the whole trick of industrial capitalism is to create great products with average workers. This is why Wedgwood and others were able to create so much wealth, and why common items like pottery and clothing are now so cheap we hardly notice them, whereas before industrialization they were so dear that it was possible to live by stealing them.

It follows from this that the average worker in the average industry in the average capitalist economy is… average. Therefore it is mathematically impossible for discrimination against a minority to materially affect the success of a business, because the minority population will have on average the same distribution of skills as the majority population. Dropping out the minority population from consideration in business would therefore have a trivial effect on hiring decisions in the average case, and the exceptional case is not sufficient to punish businesses that discriminate to the point of affecting their success.

It’s worth looking at some examples of distributions before considering a more complete simulation. The image below considers a case where a company interviews 100 workers for a position where there is a 10% minority representation amongst the worker population. Worker “skill” has a Gaussian (bell curve) distribution with a mean of 10 and standard deviation of 5. Anti-skilled workers (people who negatively contribute to the company) exist. Both majority and minority populations have the same distribution.

[Figure: identical skill distributions for majority and minority candidates. Statistically speaking, the best candidate is probably in the majority, because any random worker is probably in the majority.]

If we assume a strict meritocracy where the company chooses the best worker as measured by some particular skill, it is easy to see that discrimination or lack thereof will have no effect on the outcome: the best worker is probably a member of the majority because any randomly selected worker is probably a member of the majority. This is what “majority” means.

Even if we relax the criterion somewhat and say the company takes the first candidate who exceeds some particular level of skill–say 15 on the graph–we can see that the odds are still in favour of the first worker to meet that criterion being in the majority, again because any randomly selected worker is probably in the majority.
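That intuition is easy to check numerically. Here’s a quick Monte Carlo sketch–my own illustration, separate from the full simulation below–using the same numbers as the figure above (100 candidates, 10% minority representation, identical skill distributions):

import numpy as np

rng = np.random.default_rng(42)  # seeded so the run is reproducible
trials = 100000
minority_best = 0

for _ in range(trials):
	is_minority = rng.random(100) < 0.10     # ~10 of 100 candidates are minority
	skill = rng.normal(10.0, 5.0, size=100)  # same distribution for everyone
	minority_best += int(is_minority[np.argmax(skill)])

print("Best candidate was in the minority in %.1f%% of trials"
	  % (100.0 * minority_best / trials))    # ~10%: just the base rate

In other words, even a perfectly discriminatory hirer loses the best candidate only about one time in ten.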

It takes about 200 lines of Python to simulate a situation where there are 100 businesses with 50 – 300 employees (in the SME range) who hire out of a population of about 18000 workers (re-sized to generate 3-6% average unemployment, which can be thought of as the effects of migration or economic fluctuations) with 10% minority representation. Each business has a “discrimination factor” between 0 and 1 that multiplies the skill level of minority workers for comparison during hiring, so a value of 0 means essentially no minority worker is ever hired and 1 means minorities are treated the same as the majority workers.

Every year there is 5% attrition as employees move on for various reasons, and every year companies are tested by the market to see if they have sufficiently skilled workers to survive. The test is applied by finding the average worker skill and adding a random value to it. Worker skill is over 80% of the company test score, so while randomness plays a role (as it does in the real world) worker skill is dominant.

The test has been set up so 10 – 20% of companies fail each year, which is pretty harsh. They are immediately replaced by new companies with random characteristics. If discrimination is a significant effect we will see more discriminatory companies go out of business more rapidly, and slowly the population of companies will evolve toward more egalitarian ones.

The hiring process consists of companies taking the best worker out of ten chosen at random. Worker skills are distributed on a bell curve with a mean of 1 and standard deviation of 0.5, with the additional condition that the skill-level be positive. As noted above, companies multiply worker skills by the corporate “discrimination factor” for comparison during hiring, so some companies almost never hire minority workers, even when they are highly skilled.

Here’s the code (updated to reflect changes discussed in the edit below):


import random

import numpy as np

"""For a population of two types, one of which is in the minority and 
is heavily discriminated against, does the discrimination materially
harm businesses that engage in it?"""

class Person:
	
	def __init__(self, bMinority):
		self.bMinority = bMinority
		self.fSkill = np.random.normal(1.0, 0.5)
		while self.fSkill < 0:	# constrain skill to be positive
			self.fSkill = np.random.normal(1.0, 0.5)
		self.bEmployed = False
		
	def getHireFactor(self, fDiscriminationFactor):
		if self.bMinority:
			return self.fSkill*fDiscriminationFactor
		else:
			return self.fSkill

class Business:
	
	def __init__(self, nEmployees, fChanceFactor, fDiscriminationFactor):
		self.nEmployees = int(nEmployees)
		self.lstEmployees = []
		self.fChanceFactor = fChanceFactor
		self.fDiscriminationFactor = fDiscriminationFactor
		
	def hire(self, lstUnemployed, lstEmployed):
		"""Take the best person out of first 10 at random"""
		if len(self.lstEmployees) < self.nEmployees:
			random.shuffle(lstUnemployed)	# randomize unemployed
			pHire = lstUnemployed[0]
			fBest = pHire.getHireFactor(self.fDiscriminationFactor)
			for pPerson in lstUnemployed[1:10]:
				fFactor = pPerson.getHireFactor(self.fDiscriminationFactor)
				if fFactor > fBest:
					fBest = fFactor
					pHire = pPerson

			pHire.bEmployed = True
			lstEmployed.append(pHire)
			lstUnemployed.remove(pHire)
			self.lstEmployees.append(pHire)
		
	def test(self, fThreshold):
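		# Market test: average employee skill plus a uniform random "chance"
		# term (scaled by fChanceFactor) must clear the failure threshold.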
		fAvg = sum([pEmployee.fSkill for pEmployee in self.lstEmployees])/len(self.lstEmployees)
		fSum = fAvg + random.random()*self.fChanceFactor
		if fSum > fThreshold:
			return True
		else:
			return False
			
	def attrition(self, lstEmployed, lstUnemployed):
		lstMovedOn = []
		for pEmployee in self.lstEmployees:
			if random.random() < 0.05:
				lstMovedOn.append(pEmployee)
				
		for pEmployee in lstMovedOn:
			self.lstEmployees.remove(pEmployee)
			pEmployee.bEmployed = False
			lstEmployed.remove(pEmployee)
			lstUnemployed.append(pEmployee)

nEmployeeRange = 250
fChanceFactor = 1.15 # equal to average employee skill (> 1 due to eliminating negative values)
lstBusinesses = []
nTotalWorkers = 0
nBusinesses = 100	# size of the economy; replacements below keep it constant
for i in range(nBusinesses):
	lstBusinesses.append(Business(50+random.random()*nEmployeeRange, fChanceFactor, random.random()))
	nTotalWorkers += lstBusinesses[-1].nEmployees
	
fMinorityFraction = 0.1
lstUnemployed = []
lstEmployed = []
nMinority = 0.0
nMajority = 0.0
fFullEmploymentFactor = 1.03
nPopulation = int(nTotalWorkers*fFullEmploymentFactor)
print(nPopulation)
for nPeople in range(0, nPopulation):
	lstUnemployed.append(Person(random.random() < fMinorityFraction))
	if lstUnemployed[-1].bMinority:
		nMinority += 1
	else:
		nMajority += 1

print(nMajority, nMinority)
print "Initial hiring. This may take a few minutes..."
while True: # initial hiring phase
	random.shuffle(lstBusinesses)
	nFull = 0
	for pBusiness in lstBusinesses:
		pBusiness.hire(lstUnemployed, lstEmployed)
		nFull += pBusiness.nEmployees == len(pBusiness.lstEmployees)

	if nFull == len(lstBusinesses):
		break

print(len(lstEmployed), len(lstUnemployed))

nMajorityUnemployed = 0.0
nMinorityUnemployed = 0.0
for pPerson in lstUnemployed:
	if pPerson.bMinority:
		nMinorityUnemployed += 1
	else:
		nMajorityUnemployed += 1
print(nMinorityUnemployed/nMinority, nMajorityUnemployed/nMajority)

print "Starting iteration..."

lstLastTenFailCount = []
outFile = open("adjusted.dat", "w")
fTestThreshold = 0.98 # about 2% failure rate overall to start
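# Main yearly loop: test every business against the market, dissolve the
# failures, apply attrition to survivors, found replacement businesses,
# adjust the worker population by migration, then re-hire until all are full.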
while True:	# yearly iteration
	random.shuffle(lstBusinesses)
	
	lstFailed = []	# first test businesses
	for pBusiness in lstBusinesses:
		if not pBusiness.test(fTestThreshold):	
			lstFailed.append(pBusiness)
			for pPerson in pBusiness.lstEmployees:
				pPerson.bEmployed = False
				lstUnemployed.append(pPerson)
				lstEmployed.remove(pPerson)
				
	lstLastTenFailCount.append(len(lstFailed))
	if len(lstLastTenFailCount) > 10:
		lstLastTenFailCount.pop(0)
		nTotalFail = sum(lstLastTenFailCount)
		if nTotalFail < 15:
			fTestThreshold += 0.01
		if nTotalFail > 25:
			fTestThreshold -= 0.01
		print(nTotalFail, fTestThreshold)
			
	for pBusiness in lstFailed:
		lstBusinesses.remove(pBusiness)
		nTotalWorkers -= pBusiness.nEmployees

	for pBusiness in lstBusinesses: # attrition from remaining businesses
		pBusiness.attrition(lstEmployed, lstUnemployed)
	
	while len(lstBusinesses) < nBusinesses: # creation of new businesses
		lstBusinesses.append(Business(50+random.random()*nEmployeeRange, fChanceFactor, random.random()))
		nTotalWorkers += lstBusinesses[-1].nEmployees

	# migration keeps unemployment between 3% and 6%
	nWorkers = len(lstUnemployed)+len(lstEmployed)
	nPopulation = int(nTotalWorkers*(fFullEmploymentFactor + 0.03*random.random()))	# factor varies 1.03-1.06, i.e. ~3-6% unemployment
	random.shuffle(lstUnemployed)
	while nWorkers < nPopulation:
		lstUnemployed.append(Person(random.random() < fMinorityFraction))
		if lstUnemployed[-1].bMinority:
			nMinority += 1
		else:
			nMajority += 1
		nWorkers += 1
	while nWorkers > nPopulation:
		pWorker = lstUnemployed.pop()
		if pWorker.bMinority:
			nMinority -= 1
		else:
			nMajority -= 1
		nWorkers -= 1

	while True: # hiring
		random.shuffle(lstBusinesses)
		for pBusiness in lstBusinesses:
			pBusiness.hire(lstUnemployed, lstEmployed)
		nFull = 0
		for pBusiness in lstBusinesses:
			nFull += pBusiness.nEmployees == len(pBusiness.lstEmployees)
			
		if nFull == len(lstBusinesses):
			break
	
	fDiscrimination = 0.0
	for pBusiness in lstBusinesses:	# how discriminatory are we now?
		fDiscrimination += pBusiness.fDiscriminationFactor
	nMajorityUnemployed = 0.0
	nMinorityUnemployed = 0.0
	for pPerson in lstUnemployed:
		if pPerson.bMinority:
			nMinorityUnemployed += 1
		else:
			nMajorityUnemployed += 1
	outFile.write(str(len(lstFailed))+" "+str(fDiscrimination)+" "+str(nMinorityUnemployed/nMinority)+" "+str(nMajorityUnemployed/nMajority)+" "+str(nMinorityUnemployed)+" "+str(nMinority)+" "+str(nMajorityUnemployed)+" "+str(nMajority)+"\n")
	print(len(lstFailed), fDiscrimination, nMinorityUnemployed/nMinority, nMajorityUnemployed/nMajority, nMinorityUnemployed, nMinority, nMajorityUnemployed, nMajority)

Feel free to run it and tweak it. If you talk about the results please give me attribution for the original.

The only thing I can say with any degree of certainty about this code is that it’s a better, more accurate, more useful representation of the actual market than anything your imagination can possibly run on its own.

Note that so far as the arguments I’m making here go, it doesn’t matter if you once heard a story about a company that failed because they refused to hire a black genius: anecdotal accounts of singular events are not arguments. Social policy should be based on large-scale averages, not one-offs.

So what does the code show?

Unsurprisingly, based on the arguments above, there is insufficient selection effect to drive discriminatory companies out of business on a timescale of less than centuries… by which time all of the companies would be out of business anyway for reasons that have nothing to do with discrimination. This result follows from the fact that the average worker is average, and the strength of capitalism is making great products with average workers.

Here’s a typical run of the code that simulates 100 years of this little micro-economy:

[Figure: Discrimination Over a Century]

The discrimination factor plotted is simply the sum of all companies’ individual discrimination factors, and it can be seen to rise slowly (which is equivalent to decreasing discrimination) by about 20% over the course of a century.

So the notion that “the market will take care of it” isn’t entirely insane; it is merely far too weak to make a meaningful difference over the lifetime of currently discriminated-against workers. Furthermore, the simulation is almost insanely generous to the hypothesis under test. It assumes, for example, that there is zero cost to hiring minority workers, whereas history shows this is false: the US is replete with stories of shutdowns, protests and other work actions by majority workers in the face of minority hiring. If we add even moderate costs to the model it will generate segregation, not integration.

I’m fairly surprised the model shows any effect at all. The effect demonstrated under these extremely favourable assumptions is certainly far too small to be socially significant. In any case the model was not intended to closely emulate the real world, but to explore the numerical reality behind the historical fact that no free market anywhere has ever produced an integrated society without the benefit of regulation.

Edit: what follows is a follow-up to the above. I thought it would be interesting to dig into the model parameters to see how realistic they are, and discovered that the answer was “not very”.

The most important factor in determining how efficiently the market fights discrimination is the rate of business turn-over. Small businesses (< 500 employees) account for the majority of employment in developed countries. It turns out there is quite a bit of data available on the rate at which such businesses fail, and the number is not 15% per year but somewhere around 2%. The UK data linked above give the distribution of business ages, which can be reasonably well modeled with a 3% failure rate, and the US data give a birth/death rate in the mid-400,000s per year, which is a 1.5% turn-over rate on a population of 28.2 million businesses.

So my 15% estimate was wrong by an order of magnitude. It’s also the case that chance plays a bigger role than the model allowed, so I tweaked it such that chance (or rather, anything other than employee skills… it might be CEO competency, better competition, etc.) accounts for about 50% of the overall test score on average, where in the original model worker skill accounted for over 80%. I’ve updated the code above to reflect the changes.

Critics might say that I’ve fine-tuned these parameters to reach my preferred conclusion, which is nonsense on two counts: first, I’d far rather have markets take care of discrimination than leave it to the nice people who work for the government; second, my parameter choices are empirically driven in the first case and extremely generous in the second. I’ve worked for (and owned) small businesses where a few exceptional people were vital to the successes we did have, and which still went broke due to other factors. Anyone who claims that things other than employee skills don’t have a very large impact on business success has never run a business.

My business experience is entirely in high-tech, working in areas where employee skills are vastly more important than in any other area, and it is still a matter of ordinary empirical fact that the success or failure of the business was only weakly tied to the quality of the team.

There is a larger question here, though. Is it reasonable to critique a computational model on the basis of parameter choice when the alternative is the un-aided and highly-fallible human imagination? Is it reasonable to say, “My imaginary argument doesn’t require any difficult parameter choices, so it’s better than your computational argument that does!”?

Does it weaken my argument because you can see how it works, analyze it and criticize it based on that analysis?

I don’t think so.

Most of what passes for argument about social policy between ideologues comes down to disagreements about what they imagine will happen under various conditions. Since we know with as much certainty as we know anything that our imaginations are terrible guides to what is real, ideological critiques of numerical, probabilistic arguments–Bayesian arguments–don’t hold much water.

Yet we very often feel as if a model like the one I’m presenting here is “too simplistic” to capture the reality of the system it is simulating in a way that would give us the power to draw conclusions from it.

It’s true that we should be cautious about over-interpreting models, but given that, how much more cautious should we be about over-interpreting our known-to-be-lousy imaginings?

If this model is too simplistic, it is certainly vastly more sophisticated–and accurate–than the unchecked, untested, unverified results of anyone’s imagination.

And what is the result of this model with somewhat more realistic parameters? I added a little code to adjust the test threshold dynamically to maintain a failure rate between 1.5% and 2.5% (otherwise, as time went on, good companies tended to dominate, suggesting I am still setting the role of chance far too low) and tweaked the role of chance up to about 50%. The results of 250 years are shown in the figure below. Remember, this is more time than most nations in the developed world have existed as nations, so nothing like stable market conditions has held anywhere on Earth for long enough for this experiment actually to be performed.

[Figure: note the scale is 250 years, not 100 as in the previous figure]

The line fitted to the results has a slope of about 0.02/year, so after 1000 years less than half the original bias will be gone from this magically stable population of companies. This is indistinguishable from no progress at all when we try to apply it across the broad sweep of human history, which has in fact seen more and less discriminatory times as companies, industries, nations and empires come and go.

We can also look at the unemployment rate of the majority and minority population.

[Figure: Minority Unemployment Stays High]

The overall unemployment rate varies between 3% and 6%, but the majority never sees much fluctuation. Instead, the minority–whose unemployment rate runs at about twice the majority’s in a typical year–gets hammered. This is also what we see in the real world, which speaks to the model’s basic soundness.

So there you have it. Numerical evidence that the proposition “The market will take care of discrimination” is not very plausible. “I imagine it otherwise” is not a counter-argument, or evidence for the proposition. If you want to argue against me, improve the model, don’t deploy the awesome power of your imagination, because your imagination isn’t any good at this stuff. Neither is mine. Neither is anyone else’s.

Posted in economics, evolution, history, politics, probability

Identity

I’ve been heavily involved in theories and questions of identity in the past, and the question of “Who am I?”, like “What on Earth was I thinking?” and “Do you think we should call the cops?” never really gets old.

In the modern West we expect a lot of people with respect to identity. We have substantially diminished the traditional props that we once used to identify ourselves–religion, ethnicity, family, nation–and are then expected to go out into the world and figure ourselves out.

It isn’t enormously surprising that many of us make a bit of a mess of it.

Human identity has two components: internal and external. External identifiers are the easiest to come by, and there are always people looking to sell them to you. Religions and nations did this very well for centuries, but the price started to look a little high as the body counts from wars mounted and the restrictions on behaviour started to chafe.

Today, explicitly commercial enterprises dominate the sale of identities, although the new media ecosystem has reduced their reach and hold. Apple, Harley-Davidson and a few other brands are still able to sell themselves as part of their customers’ identities, but gone are the days when soft drinks and cigarettes could play a similar role.

Nationalism never goes out of style, even in Canada, but we live in an increasingly internationalized, globalized world, and that’s a good thing.

Religion is obviously a dominant force in the Muslim world, but despite attempts to revive it in the republic to the south of us, there are no meaningfully “Christian” nations in the same sense there are “Muslim” ones. This is also a good thing: the antidote to a toxin is not another toxin.

Sports and teams still play a relatively benign role in many people’s lives as a way of identifying themselves, as do hobbies and pastimes, but for the most part these are too coarse and trivial to be of much use. That I am a poet or canoeist doesn’t really do much to distinguish me from the millions of other poets or canoeists out there, and if identity doesn’t identify, what does it do?

Because that’s the way we expect identity to work in the modern world: to identify us uniquely, not as one of many more-or-less identical units of humanity. The external markers of identity generally serve to identify us as part of some larger group, and we are concerned as much with differentiating ourselves from such suspect, disused or pathological groupings as we are with including ourselves in them.

This is where internal markers of identity come in. As we have weakened and pathologized external axes of identification, we have come to rely much more on our internal sense of who we are. It isn’t entirely surprising that many people aren’t up to the task, or that co-evolving parasites have moved in to sell people their own unique set of personal parameters, most commonly in the form of some relatively non-toxic spiritual practice: yoga, meditation, volunteering in one form or another.

Weird-ass diets are amongst the most successful things in this category. If you are eating Paleo, you’ve purchased someone else’s identity package. There’s no flag you can wave or banner you can march under, which is what makes this an internal identifier, but you’ve still got it from a third party.

There are other sources of identity that are actively harmful to others as well as to yourself. All political ideologies, from feminism to neo-Nazism, fall into this category. By reducing the world to a series of doctrinaire terms, the follower of an ideology places themselves securely on a fixed grid of relationships that identifies them. Unfortunately, the grid never comes close to matching the world in nuance, or even in general shape and contours.

The attraction of ideology to youth is understandable in this context: in a world that is bereft of clear signifiers of external identity, internalizing someone else’s ideology and personalizing it as your very own is an efficient if ultimately self-defeating move.

Of course, some of the less secure followers of ideologies feel it necessary to broadcast their allegiance via everything from haircuts to tee-shirts, but it’s the inner state that creates the identity. Old-fashioned external identifiers like nationalism and religion didn’t make much of what people actually believed–Orwell correctly observed that the British Empire allowed its subjects the privacy of their own minds–but the whole point of ideological identifiers is that they can be used as markers of internal state, not external allegiance.

Not all ideologies are created equal, of course: it is possible to identify as a feminist and not be insane or dangerous. The same cannot be said of neo-Nazis.

As someone who identified with some pretty strange ideologies in his youth, I think ideological identification is something to be managed rather than deprecated. Youthful ideologues are the price we pay for the decline in religious, nationalist and ethnic identification, and that’s a good bargain.

Sexuality is an area where we have seen a blossoming of identities in recent decades, and that’s also a good thing. We are fortunate to live in a time (for those of us who live in Canada, at least) where people are generally free to identify themselves in a vast diversity of ways relative to their sexual and relationship preferences.

But sex is only one area of our humanity, and while we certainly have a wealth of fine-grained divisions in other areas, we rarely take them seriously enough to let them play a significant role in our identities. The arts and sciences in particular deserve more attention in this regard–I am not just a poet, but a metrical, formal poet working in certain mostly English traditions, for example.

It would be nice if the arts and sciences were taken as seriously as sources of identity as sports and sex are today. Perhaps the pursuit of a diversity of diversities should be a project for the 21st century and beyond.

Posted in life