Darwin’s Theorem

Science, religion, evolution, romance, action, siphonophores!

Darwin’s Theorem is a story about stories (the working title for a long time was “Metastory”) that’s also a mystery, a romance, an adventure, and various other things besides. Not quite science fiction, excessively didactic… think of it as “Dan Brown meets ‘Origin of Species’.”

If you like to see plot, action and strong characters deployed in the pursuit of big, speculative ideas, you should check it out!

Posted in marketing, writing | Leave a comment

Some Other Rules

The title of this post is an extremely obscure joke: sum rules for total transition strength are a way of accounting for all possible transitions without actually measuring them individually. They serve as a sanity check on both theory and measurement. But in grad school it always sounded like the prof was talking about “some rules”, as if there were just some rules that had been randomly rounded up to fill our time in class, which might have been replaced by “some other rules” if circumstances had worked out that way.
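
For the curious: the canonical example is the Thomas-Reiche-Kuhn sum rule for dipole oscillator strengths, which (quoting the simplest non-relativistic form from memory) says that however the total strength happens to be divided among the individual transitions out of a state, the sum is fixed by the number of electrons N:

\sum_n f_{n0} = N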

I have a lot of rules in my life, mostly for convenience. I have a long list of rules for things I won’t read, simply to keep the list of things I do read down to somewhat manageable proportions (I’ve just finished some American Civil War-era histories, I’m currently reading a pulp space opera, and I have a history of the Boxer Rebellion and a history of early Antarctic exploration on my to-read list, as well as Tolkien’s Arthur and a few other things besides, so my will-not-read list hasn’t stopped me from being fairly eclectic).

I also have rules for writing, mostly about the avoidance of cliches, insofar as that’s possible.

Since I’m now exploring interactive fiction I’m thinking about rules for it. These are rules of thumb, conveniences, not laws of nature. Every competent practitioner of any art has such rules. I remember overhearing a lawyer in the gym once explain to a junior colleague his rules for striking a particular type of bargain with the opposition, and recognized myself in what he was saying: he had been trained to offer a certain type of deal at one point in the proceedings and to be open to a different type of deal at another point, and he had a reasonable expectation that other lawyers would be on the same wavelength.

Professionals do this all the time, and writers as much as anyone else. We have rules about how to introduce characters, how to describe scenes and so on. Poets are even worse: we have rules about everything.

With regard to interactive fiction, I think one of my rules is going to be: the playable character should never be in doubt as to what their fundamental problem is. I played through 9:05 by Adam Cadre today, and it does a wonderful send-up of this common trope. The work was released in 2000, so the trope has been recognized as over-used for a long time, but I still see plenty of recent games where the fundamental mystery is “who the hell am I and what am I supposed to do?”

A little bit of revelation in the early part of the game is fine, but in many games it is far too central for my taste.

Lost Pig is a delightful counter-example to this trend, although honestly I found the puzzles somewhat arcane, which will probably make real IF people laugh. I’m extremely simple-minded, and more to the point have all the wrong life-experience for this sort of game. I have actually had to hunt for lost pigs, OK? That kind of thing messes with your head when playing through a game like this, which is perfectly lovely in every other respect.

This is something I find I’m running into a lot in IF: my cultural context or knowledge base is just skewed enough relative to the author’s to be confusing.

I have the same problem with non-interactive fiction: Alice Munro’s story “Runaway”, for example, has some enormous howlers in it to anyone who has ever spent a lot of time at a riding stable in southern Ontario. Leaving a hole in the roof of the ring uncovered, for example, is simply not done, ever, as the next big blow (and in summer there is always a next big blow, coming in with the thunderstorms) will take the whole roof off.

In non-interactive fiction such mismatches are simply irritants. They prevent a certain fraction of the readership from fully appreciating the story, but don’t make it impossible to finish. A writer I know had an opportunity to fix such a problem: shortly before the book was re-issued the publisher’s wife read it, and told the publisher, “Neither /author/ nor /editor/ nor /publisher/ has ever cooked a turkey dinner.” The author (then a young woman) had a scene where the protagonist cooked two turkeys in a single oven, an improbable feat at best. A knowledgeable reader would be troubled by such a thing, but would still be able to finish the story.

In IF, that is not the case: if I miss a culturally-specific cue, if I make an assumption that is different from that of the author’s because I’ve had experience hunting lost pigs, if I see a way to use the things I have with me to engineer a solution that the author has not considered… the experience falls apart.

The degree of congruence required between the author’s mind and the interactor’s for a satisfying experience is vastly higher for interactive fiction than non-interactive fiction.

This is a problem that narrows the scope of interactive fiction to a pretty niche appeal. You would expect, on this basis, that the interactive fiction community would be small and closely coupled, and that’s pretty much what you see.

Consider a real-world comparison: sailors. A landlubber can read, or even write, a decent sailing story and enjoy it, but to write an interactive fiction about sailing that could be pleasurably experienced by sailors I think it likely one would have to be a sailor, and an interactive fiction about sailing that was enjoyed by sailors would probably be an exercise in frustration to non-sailors. This has certainly been my experience with non-sailors in the real world: I know of only one person who learned to sail as an adult and has really taken to it. There is a large body of esoteric knowledge and terminology that has to be mastered to interact with the world in the appropriate ways.

I’m not saying it is impossible to write an interactive fiction the enjoyment of which is not sensitively dependent on a shared world-view between the author and interactor, but it seems to me to be far more difficult than is the case with non-interactive fiction.

Posted in interactive fiction, sailing, writing | Leave a comment

Puzzles vs Problems

Most of my life has been spent unraveling puzzles that are presented to the human race by nature, or god, or some equally malignant power.

“How do you measure the e+/e- annihilation cross section in the range of 1 – 3 MeV?”

“How do you detect neutrinos from a nearby nuclear reactor?”

“How do you rapidly register low-quality images accurately enough to be clinically useful in radiotherapy?”

“How do you calibrate a large heavy-water neutrino detector?”

“How do you use image guidance to help surgeons doing ACL repairs? Or placing screws in a patient’s spine?”

“How do you identify which genes are significantly more or less active in this type of cancer vs that?”

“How do you find a stable solution to the biomechanics problem represented by a knee plus this additional device?”

And so on… I get bored easily so the topics cover a lot of ground, and I’ve left out some of the weirder ones.

Puzzles that are merely human bore me. The world has enough puzzles to keep us all occupied for the rest of time without us spending time making up more.

And yet… there is something appealing about the murder mysteries of the world, the purely human puzzles that still have significance to us because they are relevant to significant events. Even made-up mysteries of this type I can get behind. The illusion of significance is enough, and story provides sufficient illusion of significance.

That said, in reading about interactive fiction (I’m thinking there needs to be some dreadful Soviet-style compound like “interfic”) I can see the focus on “puzzles” could be off-putting, at least to interactors like me.

On the other hand, the staple of noninterfic is “problems”. The characters should have a new problem every four pages, as well as the larger framing problem and a number of nested problem levels. In “The Lord of the Rings” the framing problem is destroying the ring and saving the world. There are many other levels of problems: forming the fellowship, maintaining the fellowship, defending Helm’s Deep, all the way down to fighting off or hiding from or suborning Gollum.

So why not talk about interactive fiction in terms of problems rather than puzzles? There may be an excellent reason not to, but I’m going to keep the thought in the back of my mind as I explore the medium more deeply.

Posted in interactive fiction, writing | Leave a comment

Some Notes on Interactive Fiction

I’m old enough to have played Zork, but I never did. It and similar games were part of the atmosphere of my early academic career.

So I’ve been aware of interactive fiction–although I would not have thought of it in those terms–for a long time. In the past ten years or so I’ve found friends online who are involved in the interactive fiction (IF) community, and have poked around the edges of a couple of games.

That parser thing.

Damn.

I am not by nature a patient man. I can be patient when sufficiently motivated, but it isn’t my default setting.

Two minutes with the parser and I wanted to smack it. This from someone who raised two children without raising a hand to them once.

So my early attempts to interact with interactive fiction were just a failure. I didn’t get it. I was missing something, some spirit of the enterprise. I wasn’t sure if I was supposed to be playing a game, reading a story, or doing something completely different, and whatever I did with the parser it rejected me. I could look, move, explore, but couldn’t seem to make any kind of meaningful progress.

It was just damned frustrating.

Then a while back I made a concerted attempt to play Jason McIntosh’s The Warbler’s Nest.

It was still frustrating, but the creepy atmosphere of the place drew me in. I felt kind of there.

Then I didn’t, because I failed to grasp the nature of the task, and spent a bunch of time screwing around pointlessly.

But then I found what I was looking for, right where I should have expected to find it, and after that things moved forward. I did mostly the right things, or at least things that weren’t pointless and frustrating (I’m using that word a lot) repetitions of things I’d done before. I solved the mystery, made the only possible choices (for me… YMMV) and came to an ending. Far from frustrating, that was deeply satisfying.

For a few brief moments I saw the potential of the medium, and I’m writing this while it’s still all fresh enough in my mind to capture what one Total Newb’s experience with a good, simple, powerful piece of IF is like.

There were minor things that I got caught on in the writing: amongst my people it would be a tinker, not a tailor, giving eldritch advice. I kept on expecting something to do with clothing, and didn’t recognize the figure for what it was. That’s just to be expected when reading a story from someone with a slightly different cultural background, which likely only caught me out because I wasn’t expecting it and was using up 99% of my mental capacity on the damned parser.

There was a much bigger thing, though: there was the problem of profluence.

I’m a writer, mostly, and “profluence” (I PRO-nounce it PRO-fluence but I know people who say PROF-luence, sad souls) is John Gardner’s term for the way fiction draws us forward into the continuous dream. Anyone who has taken more than a couple of writing courses will have spent some time struggling with Gardner’s “The Art of Fiction”, in which profluence is made much of.

Looking at my interaction with “The Warbler’s Nest”, and looking at the scheme of the game itself, I have to say the story is perfectly profluent. There were many moments when I felt there, in the dream, and moving forward through it. Profluent as hell, especially given my participation in it.

And then that damned parser, or my ignorance of it… and not just the simple mechanics, but the expectations, the context, the cues and conventions that I was too thick to see or understand.

I imagine new readers must feel like this: they are working their way along through a story OK and then get hit by some weird new word or grammatical construct or narrative trick that kicks them out of the dream and into the WTF?

An important part of profluence is a feeling of continuously fulfilled expectation leavened with the appropriate amount of surprise. It is a mix of both anticipation and discovery, and what we discover must always be what was implied by what came before, but only now that we are in a position to understand what came before in the light of our new discovery.

There were moments when I had that, and it was awesome. But then I’d trip over the technology. Stupid newb.

I don’t have any particular silver bullet to suggest. Maybe IF simply has a learning curve and newbs like me need to suck it up and climb the damned thing. I’m not a big fan of CYOA-style games or stories, which I understand is an alternative to the parser in the IF community, so I’m not drawn in that direction.

I’ve fiddled with a few other games and get the same mix of profluent dreaming and hard landings. I’m not sure how steep the learning curve is, or even if it ever really ends.

My next move is to write a little IF myself. I’ve bought a book (Aaron A. Reed’s “Creating Interactive Fiction with Inform 7”) and have some ideas regarding a story–I’m thinking of tuning up a thing I did six or seven years ago about an Arctic expedition gone wrong, just as an experiment.

I’d like to understand this medium better and see if I can do something interesting in it. It certainly seems like it ought to be powerful, but I’m not sure that the current technological approach is the way forward. I have a few ideas regarding alternatives, but am going to spend some time learning the state of the art before exploring in new directions.

Posted in interactive fiction, writing | Leave a comment

Deficits, Deficits, Deficits

Back at the end of 2012 I predicted that the deficit for 2013 would be over $25 billion. I was wrong, maintaining my record as the only person who has ever been wrong about anything.

The deficit for 2013 was stuck at $20 billion, and since the start of 2014 has apparently come down to $10 billion, although no one can explain how or why.

Posted in economics, politics | Leave a comment

Is the US a “Developing Nation”?

I’ve just read “Fault Lines” by Raghuram Rajan, who is now the head of the Reserve Bank of India. He is an American-trained economist but unlike most Americans has a decidedly international perspective (this is not particularly a kick at Americans: when your nation is as central to the world as the US is today it is hard to see outside it.)

As such, he draws some interesting parallels between the modern US economy and the economies of developing nations. In particular, the close relationship between the US government and the financial sector starts to look more like the managed or relationship capitalism of the developing world than the market capitalism the US prides itself on.

And then there is access to credit. It is one of the features of the developing world that access to credit is poor, to the extent that this is almost definitional. In Canada, for example, the difficulty that “First Nations” people have in accessing credit has kept them in a state of perpetual poverty and dependency that the citizens of many developing nations would immediately recognize. When given access to credit–which necessarily implies reasonably stable conditions and the rule of law, because without those credit agreements are non-enforceable and therefore non-existent–nations rapidly join the ranks of the developed world.

But in the US, over 8% of households don’t have any bank account whatsoever, and many more have no effective access to credit. It is difficult to find comparable (household) figures in Canada, but the figures we do have indicate lower levels (as low as 3%, growing to 8% only when looking at exclusively lower income households).

Under- and un-banked households in the US are concentrated amongst poorer Americans and those whose skin colour is somewhat darker than that of, say, Dick Cheney.

This all suggests the US is in fact two nations: a developed-world nation with a stable middle class population capable of getting loans from financial institutions that may fail if they screw up, and a developing-world nation with a few systemically powerful financial institutions and industries plus a population excluded from access to credit (and education, and the rule of law…)

It’s an interesting perspective, and one that casts some additional light on the current state of American society.

The slow growth of jobs in the US after the last three recessions (1991, 2001, 2008) has tended to grow the unbanked class, and the financial crisis has taken the “Greenspan put” from an implicit promise to pick up the pieces after the crash to a realized policy. The current push by the Federal Reserve to keep interest rates low is about as bad as it gets from the point of view of repeating the whole thing again in a few years, but Rajan makes the argument that low interest rates and the housing bubble are an attempt (supported by both parties in the US) to allow poorer people to participate in economic growth at a time when jobs are scarce and wages stagnant.

Posted in economics, politics | Leave a comment

Why We Need Anti-Discrimination Laws: a computational approach

My libertarian friends, back when I had libertarian friends, often imagined that anti-discrimination laws were unnecessary because “the market will take care of it”.

The argument goes like this: companies compete for the best employees, and employee quality is a significant determinant of corporate success. Companies that discriminate against a subset of the population will therefore under-perform those that don’t, because they will sometimes forgo the best candidate, and the resulting lower-than-average employee quality will raise their rate of failure.

This is an imaginary argument, which is to say: it is not an argument at all. While such propositions stimulate thought, they ask us to do something that is far beyond the capabilities of the human imagination: accurately envision the working out of a diverse set of forces represented by probability distributions.

In particular, the way the argument is usually deployed is intended to focus our limited attentional resources on the high end of the employee skill distribution. But this is wrong: the average person is average, and for discrimination to have an effect, dropping the minority out of the population would have to somehow change the average skill of the available workers, which cannot happen when both populations have the same skill distribution.

Furthermore, remember that the whole trick of industrial capitalism is to create great products with average workers. This is why Wedgwood and others were able to create so much wealth, and why common items like pottery and clothing are now so cheap we hardly notice them, whereas before industrialization they were so dear that it was possible to live by stealing them.

It follows from this that the average worker in the average industry in the average capitalist economy is… average. Therefore it is mathematically impossible for discrimination against a minority to materially affect the success of a business, because the minority population will have on average the same distribution of skills as the majority population. Dropping out the minority population from consideration in business would therefore have a trivial effect on hiring decisions in the average case, and the exceptional case is not sufficient to punish businesses that discriminate to the point of affecting their success.

It’s worth looking at some examples of distributions before considering a more complete simulation. The image below considers a case where a company interviews 100 workers for a position where there is a 10% minority representation amongst the worker population. Worker “skill” has a Gaussian (bell curve) distribution with a mean of 10 and standard deviation of 5. Anti-skilled workers (people who negatively contribute to the company) exist. Both majority and minority populations have the same distribution.

Statistically speaking, the best candidate is probably not in the minority.

If we assume a strict meritocracy where the company chooses the best worker as measured by some particular skill, it is easy to see that discrimination or lack thereof will have no effect on the outcome: the best worker is probably a member of the majority because any randomly selected worker is probably a member of the majority. This is what “majority” means.

Even if we relax the criterion somewhat and say the company takes the first candidate who exceeds some particular level of skill–say 15 on the graph–we can see that the odds are still in favour of the first worker to meet that criterion being in the majority, again because any randomly selected worker is probably in the majority.
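
That claim is easy to check numerically. Here is a minimal sketch (my own illustration, separate from the model below, and assuming numpy is available) that draws 100 candidates with 10% minority representation and the same Gaussian skill distribution for both groups, then counts how often the single best candidate is a minority member; the answer comes out around 10%, i.e. the minority fraction itself.

import numpy as np

# Sketch: 100 candidates per opening, 10% minority, skill ~ N(10, 5) for
# both groups. How often is the single best candidate a minority member?
rng = np.random.default_rng(0)
nTrials = 100000
nCandidates = 100
nMinorityBest = 0
for _ in range(nTrials):
	bMinority = rng.random(nCandidates) < 0.1
	fSkill = rng.normal(10.0, 5.0, nCandidates)
	if bMinority[np.argmax(fSkill)]:
		nMinorityBest += 1
print(nMinorityBest / nTrials)	# ~0.10: the best candidate is usually in the majority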

It takes about 200 lines of Python to simulate a situation where there are 100 businesses with 50 – 300 employees (in the SME range) who hire out of a population of about 18000 workers (re-sized to generate 3-6% average unemployment, which can be thought of as the effects of migration or economic fluctuations) with 10% minority representation. Each business has a “discrimination factor” between 0 and 1 that multiplies the skill level of minority workers for comparison during hiring, so a value of 0 means essentially no minority worker is ever hired and 1 means minorities are treated the same as the majority workers.

Every year there is 5% attrition as employees move on for various reasons, and every year companies are tested by the market to see if they have sufficiently skilled workers to survive. The test is applied by finding the average worker skill and adding a random value to it. Worker skill is over 80% of the company test score, so while randomness plays a role (as it does in the real world) worker skill is dominant.

The test has been set up so 10 – 20% of companies fail each year, which is pretty harsh. They are immediately replaced by new companies with random characteristics. If discrimination is a significant effect we will see more discriminatory companies go out of business more rapidly, and slowly the population of companies will evolve toward more egalitarian ones.

The hiring process consists of companies taking the best worker out of ten chosen at random. Worker skills are distributed on a bell curve with a mean of 1 and standard deviation of 0.5, with the additional condition that the skill-level be positive. As noted above, companies multiply worker skills by the corporate “discrimination factor” for comparison during hiring, so some companies almost never hire minority workers, even when they are highly skilled.

Here’s the code (updated to reflect changes discussed in the edit below):


import random

import numpy as np

"""For a population of two types, one of which is in the minority and 
is heavily discriminated against, does the discrimination materially
harm businesses that engage in it?"""

class Person:
	
	def __init__(self, bMinority):
		self.bMinority = bMinority
		self.fSkill = np.random.normal(1.0, 0.5)
		while self.fSkill < 0:	# constrain skill to be positive
			self.fSkill = np.random.normal(1.0, 0.5)
		self.bEmployed = False
		
	def getHireFactor(self, fDiscriminationFactor):
		if self.bMinority:
			return self.fSkill*fDiscriminationFactor
		else:
			return self.fSkill

class Business:
	
	def __init__(self, nEmployees, fChanceFactor, fDiscriminationFactor):
		self.nEmployees = int(nEmployees)
		self.lstEmployees = []
		self.fChanceFactor = fChanceFactor
		self.fDiscriminationFactor = fDiscriminationFactor
		
	def hire(self, lstUnemployed, lstEmployed):
		"""Take the best person out of first 10 at random"""
		if len(self.lstEmployees) < self.nEmployees:
			random.shuffle(lstUnemployed)	# randomize unemployed
			pHire = lstUnemployed[0]
			fBest = pHire.getHireFactor(self.fDiscriminationFactor)
			for pPerson in lstUnemployed[1:10]:
				fFactor = pPerson.getHireFactor(self.fDiscriminationFactor)
				if fFactor > fBest:
					fBest = fFactor
					pHire = pPerson

			pHire.bEmployed = True
			lstEmployed.append(pHire)
			lstUnemployed.remove(pHire)
			self.lstEmployees.append(pHire)
		
	def test(self, fThreshold):
		fAvg = sum([pEmployee.fSkill for pEmployee in self.lstEmployees])/len(self.lstEmployees)
		fSum = fAvg + random.random()*self.fChanceFactor
		if fSum > fThreshold:
			return True
		else:
			return False
			
	def attrition(self, lstEmployed, lstUnemployed):
		lstMovedOn = []
		for pEmployee in self.lstEmployees:
			if random.random() < 0.05:
				lstMovedOn.append(pEmployee)
				
		for pEmployee in lstMovedOn:
			self.lstEmployees.remove(pEmployee)
			pEmployee.bEmployed = False
			lstEmployed.remove(pEmployee)
			lstUnemployed.append(pEmployee)

nEmployeeRange = 250
fChanceFactor = 1.15 # equal to average employee skill (> 1 due to eliminating negative values)
lstBusinesses = []
nTotalWorkers = 0
nBusinesses = 100	# target number of businesses; reused below when replacing failed ones
for _ in range(nBusinesses):
	lstBusinesses.append(Business(50+random.random()*nEmployeeRange, fChanceFactor, random.random()))
	nTotalWorkers += lstBusinesses[-1].nEmployees
	
fMinorityFraction = 0.1
lstUnemployed = []
lstEmployed = []
nMinority = 0.0
nMajority = 0.0
fFullEmploymentFactor = 1.03
nPopulation = int(nTotalWorkers*fFullEmploymentFactor)
print(nPopulation)
for nPeople in range(0, nPopulation):
	lstUnemployed.append(Person(random.random() < fMinorityFraction))
	if lstUnemployed[-1].bMinority:
		nMinority += 1
	else:
		nMajority += 1

print(nMajority, nMinority)
print "Initial hiring. This may take a few minutes..."
while True: # initial hiring phase
	random.shuffle(lstBusinesses)
	nFull = 0
	for pBusiness in lstBusinesses:
		pBusiness.hire(lstUnemployed, lstEmployed)
		nFull += pBusiness.nEmployees == len(pBusiness.lstEmployees)

	if nFull == len(lstBusinesses):
		break

print(len(lstEmployed), len(lstUnemployed))

nMajorityUnemployed = 0.0
nMinorityUnemployed = 0.0
for pPerson in lstUnemployed:
	if pPerson.bMinority:
		nMinorityUnemployed += 1
	else:
		nMajorityUnemployed += 1
print(nMinorityUnemployed/nMinority, nMajorityUnemployed/nMajority)

print "Starting iteration..."

lstLastTenFailCount = []
outFile = open("adjusted.dat", "w")
fTestThreshold = 0.98 # about 2% failure rate overall to start
while True:	# yearly iteration
	random.shuffle(lstBusinesses)
	
	lstFailed = []	# first test businesses
	for pBusiness in lstBusinesses:
		if not pBusiness.test(fTestThreshold):	
			lstFailed.append(pBusiness)
			for pPerson in pBusiness.lstEmployees:
				pPerson.bEmployed = False
				lstUnemployed.append(pPerson)
				lstEmployed.remove(pPerson)
				
	lstLastTenFailCount.append(len(lstFailed))
	if len(lstLastTenFailCount) > 10:
		lstLastTenFailCount.pop(0)
		nTotalFail = sum(lstLastTenFailCount)
		if nTotalFail < 15:
			fTestThreshold += 0.01
		if nTotalFail > 25:
			fTestThreshold -= 0.01
		print(nTotalFail, fTestThreshold)
			
	for pBusiness in lstFailed:
		lstBusinesses.remove(pBusiness)
		nTotalWorkers -= pBusiness.nEmployees

	for pBusiness in lstBusinesses: # attrition from remaining businesses
		pBusiness.attrition(lstEmployed, lstUnemployed)
	
	while len(lstBusinesses) < nBusinesses: # creation of new businesses
		lstBusinesses.append(Business(50+random.random()*nEmployeeRange, fChanceFactor, random.random()))
		nTotalWorkers += lstBusinesses[-1].nEmployees

	# migration keeps unemployment between 3% and 6%
	nWorkers = len(lstUnemployed)+len(lstEmployed)
	nPopulation = int(nTotalWorkers*fFullEmploymentFactor+fFullEmploymentFactor*random.random())
	random.shuffle(lstUnemployed)
	while nWorkers < nPopulation:
		lstUnemployed.append(Person(random.random() < fMinorityFraction))
		if lstUnemployed[-1].bMinority:
			nMinority += 1
		else:
			nMajority += 1
		nWorkers += 1
	while nWorkers > nPopulation:
		pWorker = lstUnemployed.pop()
		if pWorker.bMinority:
			nMinority -= 1
		else:
			nMajority -= 1
		nWorkers -= 1

	while True: # hiring
		random.shuffle(lstBusinesses)
		for pBusiness in lstBusinesses:
			pBusiness.hire(lstUnemployed, lstEmployed)
		nFull = 0
		for pBusiness in lstBusinesses:
			nFull += pBusiness.nEmployees == len(pBusiness.lstEmployees)
			
		if nFull == len(lstBusinesses):
			break
	
	fDiscrimination = 0.0
	for pBusiness in lstBusinesses:	# how discriminatory are we now?
		fDiscrimination += pBusiness.fDiscriminationFactor
	nMajorityUnemployed = 0.0
	nMinorityUnemployed = 0.0
	for pPerson in lstUnemployed:
		if pPerson.bMinority:
			nMinorityUnemployed += 1
		else:
			nMajorityUnemployed += 1
	outFile.write(str(len(lstFailed))+" "+str(fDiscrimination)+" "+str(nMinorityUnemployed/nMinority)+" "+str(nMajorityUnemployed/nMajority)+" "+str(nMinorityUnemployed)+" "+str(nMinority)+" "+str(nMajorityUnemployed)+" "+str(nMajority)+"\n")
	outFile.flush()	# flush so adjusted.dat can be inspected while the loop is still running
	print(len(lstFailed), fDiscrimination, nMinorityUnemployed/nMinority, nMajorityUnemployed/nMajority, nMinorityUnemployed, nMinority, nMajorityUnemployed, nMajority)

Feel free to run it and tweak it. If you talk about the results please give me attribution for the original.

The only thing I can say with any degree of certainty about this code is that it’s a better, more accurate, more useful representation of the actual market than anything your imagination can possibly run on its own.

Note that so far as the arguments I’m making here go, it doesn’t matter if you once heard a story about a company that failed because they refused to hire a black genius: anecdotal accounts of singular events are not arguments. Social policy should be based on large-scale averages, not one-offs.

So what does the code show?

Unsurprisingly, based on the arguments above, there is insufficient selection effect to drive discriminatory companies out of business on a timescale of less than centuries… by which time all of the companies would be out of business anyway for reasons that have nothing to do with discrimination. This result follows from the fact that the average worker is average, and the strength of capitalism is making great products with average workers.

Here’s a typical run of the code that simulates 100 years of this little micro-economy:

Discrimination Over a Century

The discrimination factor is simply the sum of all the companies’ individual discrimination factors, and it can be seen to rise slowly (which is equivalent to decreasing discrimination) by about 20% over the course of a century.

So the notion that “the market will take care of it” isn’t entirely insane; it is merely far too weak to make a meaningful difference over the lifetime of currently discriminated-against workers. Furthermore, the simulation is almost insanely generous to the hypothesis under test. It assumes, for example, that there is zero cost to hiring minority workers, whereas history shows this is false: the US is replete with stories of shutdowns, protests and other work actions by majority workers in the face of minority hiring. If we add even moderate costs to the model it will generate segregation, not integration.

I’m fairly surprised the model shows any effect at all. The effect demonstrated under extremely favourable assumptions is certainly far too small to be socially significant, and the model was not intended to closely emulate the real world, but to explore the numerical reality behind the historical fact that no free market anywhere ever has produced an integrated society without benefit of regulation.

Edit: the stuff below is follow-up to what goes above, as I thought it interesting to dig into the model parameters to see how realistic they are, and discovered that “not very” was the answer.

The most important factor in determining how efficient the market is in fighting discrimination is the rate of business turn-over. Small businesses (< 500 employees) account for the majority of employment in developed countries. It turns out there is quite a bit of data available on the rate at which such businesses fail, and the number is not 15% per year but somewhere around 2%. The UK data linked above gives the distribution of ages, which can be reasonably well modeled with a 3% failure rate, and the US data gives the mid-400 thousands for the birth/death rate, which is a 1.5% turn-over rate on a population of 28.2 million businesses.

So my 15% estimate was wrong by an order of magnitude. It’s also the case that chance plays a bigger role than the model allows, so I tweaked it so that chance (or rather, anything other than employee skills… it might be CEO competency, better competition, etc) accounts for about 50% of the overall test score on average, where worker skill had accounted for over 80% in the original model. I’ve updated the code above to reflect the changes.

Critics might say that I’ve fine-tuned these parameters to reach my preferred conclusion, which is nonsense on two counts: the first is that I’d far rather have markets take care of discrimination than leaving it to the nice people who work for the government. The second is that my parameter choices are empirically-driven in the first case and extremely generous in the second. I’ve worked for (and owned) small businesses where a few exceptional people were vital to the successes we did have, and which still went broke due to other factors. Anyone who claims things other than employee skills don’t have a very large impact on business success has never run a business.

My business experience is entirely in high-tech, working in areas where employee skills are vastly more important than in any other area, and it is still a matter of ordinary empirical fact that the success or failure of the business was only weakly tied to the quality of the team.

There is a larger question here, though. Is it reasonable to critique a computational model on the basis of parameter choice when the alternative is the un-aided and highly-fallible human imagination? Is it reasonable to say, “My imaginary argument doesn’t require any difficult parameter choices, so it’s better than your computational argument that does!”?

Does it weaken my argument that you can see how it works, analyze it, and criticize it on the basis of that analysis?

I don’t think so.

Most of what passes for argument about social policy between ideologues comes down to disagreements about what they imagine will happen under various conditions. Since we know with as much certainty as we know anything that our imaginations are terrible guides to what is real, ideological critiques of numerical, probabilistic arguments–Bayesian arguments–don’t hold much water.

Yet we very often feel as if a model like the one I’m presenting here is “too simplistic” to capture the reality of the system it is simulating in a way that would give us the power to draw conclusions from it.

It’s true that we should be cautious about over-interpreting models, but given that, how much more cautious should we be about over-interpreting our known-to-be-lousy imaginings?

If this model is too simplistic, it is certainly vastly more sophisticated–and accurate–than the unchecked, untested, unverified results of anyone’s imagination.

And what is the result of this model with somewhat more realistic parameters? I added a little code to adjust the test threshold dynamically to maintain a failure rate of between 1.5 and 2.5% (otherwise as time went on good companies tended to dominate, suggesting I am still setting the role of chance far too low) and tweaked the role of chance up to about 50%. The results of 250 years are shown in the figure below. Remember, this is more time than most nations in the developed world have existed as nations, so there has certainly been nothing like stable market conditions anywhere on Earth over such a span in which this experiment might actually have been performed.

Note scale is 250 years, not 100 as in previous figure

The line fitted to the results has a slope of about 0.02/year, so after 1000 years less than half the original bias will be gone from this magically stable population of companies. This is indistinguishable from no progress at all when we try to apply it across the broad sweep of human history, which has in fact seen more and less discriminatory times as companies, industries, nations and empires come and go.
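
For anyone who wants to reproduce that fit, here is a minimal sketch (assuming numpy is available, and that the column layout of adjusted.dat is as written by the code above: failures, summed discrimination factor, then the unemployment rates and counts).

import numpy as np

# Sketch: fit a line to the summed discrimination factor (column 1 of
# adjusted.dat) against year (the row index).
aData = np.loadtxt("adjusted.dat")
aYears = np.arange(len(aData))
fSlope, fIntercept = np.polyfit(aYears, aData[:, 1], 1)
print(fSlope)	# roughly 0.02/year in the run shown above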

We can also look at the unemployment rate of the majority and minority population.

Minority Unemployment Stays High

The overall unemployment rate varies between 3% and 6%, but the majority never sees much fluctuation. Instead, the minority–whose unemployment rate runs at about twice the majority’s in a typical year–gets hammered. This is also what we see in the real world, which speaks to the model’s basic soundness.

So there you have it. Numerical evidence that the proposition “The market will take care of discrimination” is not very plausible. “I imagine it otherwise” is not a counter-argument, or evidence for the proposition. If you want to argue against me, improve the model, don’t deploy the awesome power of your imagination, because your imagination isn’t any good at this stuff. Neither is mine. Neither is anyone else’s.

Posted in economics, evolution, history, politics, probability | 2 Comments

Identity

I’ve been heavily involved in theories and questions of identity in the past, and the question of “Who am I?”, like “What on Earth was I thinking?” and “Do you think we should call the cops?” never really gets old.

In the modern West we expect a lot of people with respect to identity. We have substantially diminished the traditional props that we once used to identify ourselves–religion, ethnicity, family, nation–and are now expected to go out into the world and figure ourselves out.

It isn’t enormously surprising that many of us make a bit of a mess of it.

Human identity has two components: internal and external. External identifiers are the easiest to come by, and there are always people looking to sell them to you. Religions and nations did this very well for centuries, but the price started to look a little high as the body counts from wars and the restrictions on behaviour started to chafe.

Today, explicitly commercial enterprises dominate the sale of identities, although the new media ecosystem has reduced their reach and hold. Apple, Harley Davidson and a few other brands are still able to sell themselves as part of their customers’ identities, but gone are the days when soft drinks and cigarettes could play a similar role.

Nationalism never goes out of style, even in Canada, but we live in an increasingly internationalized, globalized world, and that’s a good thing.

Religion is obviously a dominant force in the Muslim world, but despite attempts to revive it in the republic to the south of us, there are no meaningfully “Christian” nations in the same sense there are “Muslim” ones. This is also a good thing: the antidote to a toxin is not another toxin.

Sports and teams still play a relatively benign role in many people’s lives as a way of identifying themselves, as do hobbies and pastimes, but for the most part these are too coarse and trivial to be of much use. That I am a poet or canoeist doesn’t really do much to distinguish me from the millions of other poets or canoeists out there, and if identity doesn’t identify, what does it do?

Because that’s the way we expect identity to work in the modern world: to identify us uniquely, not as one of many more-or-less identical units of humanity. The external markers of identity generally serve to identify us as part of some larger group, and we are concerned as much with differentiating ourselves from such suspect, disused or pathological groupings as we are with including ourselves in them.

This is where internal markers of identity come in. As we have weakened and pathologized external axes of identification, we have come to rely much more on our internal sense of who we are. It isn’t entirely surprising that many people aren’t up to the task, or that co-evolving parasites have moved in to sell people their own unique set of personal parameters, most commonly in the form of some relatively non-toxic spiritual practice: yoga, meditation, volunteering in one form or another.

Weird-ass diets are amongst the most successful things in this category. If you are eating Paleo, you’ve purchased someone else’s identity package. There’s no flag you can wave or banner you can march under, which is what makes this an internal identifier, but you’ve still got it from a third-party.

There are other sources of identity that are actively harmful to others as well as to yourself. All political ideologies, from feminism to neo-Nazism, fall into this category. By reducing the world to a series of doctrinaire terms, the follower of an ideology places themselves securely on a fixed grid of relationships that identifies them. Unfortunately, the grid never comes close to matching the world for nuance or even in general shape and contours.

The attraction of ideology to youth is understandable in this context: in a world that is bereft of clear signifiers of external identity, internalizing someone else’s ideology and personalizing it as your very own is an efficient if ultimately self-defeating move.

Of course, some of the less secure followers of ideologies feel it necessary to broadcast their allegiance via everything from hair cuts to tee-shirts, but it’s the inner state that creates the identity. Old fashioned external identifiers like nationalism and religion didn’t make much of what people actually believed–Orwell correctly observed that the British Empire allowed its subjects the privacy of their own minds–but the whole point of ideological identifiers is they can be used as markers of internal state, not external allegiance.

Not all ideologies are created equal, of course: it is possible to identify as a feminist and not be insane or dangerous. The same cannot be said of neo-Nazis.

As someone who identified with some pretty strange ideologies in his youth, I think ideological identification is something to be managed rather than deprecated. Youthful ideologues are the price we pay for the decline in religious, nationalist and ethnic identification, and that’s a good bargain.

Sexuality is an area where we have seen a blossoming of identities in recent decades, and that’s also a good thing. We are fortunate to live in a time (for those of us who live in Canada, at least) where people are generally free to identify themselves in a vast diversity of ways relative to their sexual and relationship preferences.

But sex is only one area of our humanity, and while we certainly have a wealth of fine-grained divisions in other areas, we rarely take them seriously enough to play a significant role in our identities. The arts and sciences in particular deserve more attention in this regard–I am not just a poet, but a metrical, formal poet working in certain mostly English traditions, for example.

It would be nice if the arts and sciences were taken as seriously as sources of identity as sports and sex are today. Perhaps the pursuit of a diversity of diversities should be a project for the 21st century and beyond.

Posted in life | Leave a comment

Two Haiku

Two scenes from a recent camping trip up the Sunshine Coast in BC:

high thin ghost-blown clouds
sweep across the summer sky
never finding home

ephemeral flame
lone dancer to Time’s music
eternal fire

Posted in haiku, life, poem | Leave a comment

Bookshelves

I’ve long been dissatisfied with the state of the bookshelf art, and took it upon myself to prototype a new approach, with the constraints:

  • the shelves should look reasonably good
  • be absolutely minimal in their design
  • not require fine carpentry skills
  • not touch the ground

The last constraint came about for a variety of reasons. The floor in my place is carpet and vacuuming is a big enough pain without the bottoms of bookshelves to work around. And in any case I just thought it would be cool for the shelves to hang from the walls and ceilings. I spent some quality time with the wood engineering literature and was able to demonstrate that the loads I was looking at were comfortably within tolerances, using data (including the pull-out strengths of eyebolts in ceiling joists) from places like the US federal government.
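
The check itself is just arithmetic, and a few lines make it explicit. Every number below is an assumed placeholder for illustration, not a figure I actually used; plug in the rated values for your own hardware and the real weight of your books before trusting it.

# Back-of-the-envelope load check for the hanging shelves. All constants are
# assumed placeholders for illustration, not the values used in the real design.
SHELF_LENGTH_FT = 6.0
BOOK_LOAD_LB_PER_FT = 25.0	# assumed weight of a fully loaded shelf, per foot
SHELVES_PER_WALL = 9
EYEBOLTS_PER_WALL = 4	# assumed number of ceiling eyebolts (one wire run each)

fTotalLoad = SHELF_LENGTH_FT * BOOK_LOAD_LB_PER_FT * SHELVES_PER_WALL
fFrontLoad = fTotalLoad / 2.0	# front half hangs from the wires; back half bears on the stringers
fPerEyebolt = fFrontLoad / EYEBOLTS_PER_WALL

print("total book load per wall: %.0f lb" % fTotalLoad)
print("load per eyebolt / wire run: %.0f lb" % fPerEyebolt)
# Compare these against the published pull-out strength of the eyebolt in a joist
# and the rated breaking strength of the wire, each divided by a healthy safety factor.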

The design is as follows: vertical maple 1×2 stringers screwed to the studs by four 1/4×3 inch steel flathead screws would carry nine rows of additional 1/4×3 screws with their heads sticking out an inch to carry the back of the shelves. The front of the shelves would be carried by 3/32 wire strung from eyebolts in the joists or in 2×2 maple headers along the wall that was parallel to the joists. These can be seen in the picture below:

Overview of design.

The joists run parallel to the wall on the right, and the shelves are 1×8 maple, 6 ft long, so while the first joist is about 14 inches from the wall the eyebolts need to be about 9 inches out (because the 8 inch planks are stood off by the thickness of the 1×2 stringers up the wall–I’m giving unfinished dimensions because I’m lazy, so take off 1/4 here or there as appropriate.) The headers sit on top of the stringers at the back and are screwed into the joists with 1/4×3 inch steel flathead screws. I couldn’t find any 2×2 maple so used a sandwich of two 1×2 pieces, which is sub-optimal but workable. I probably should have glued them together but didn’t really need the extra strength so didn’t bother. You can’t see them in the finished project anyway.

Setting screws in the front of the shelves.

There were a lot of screws in this project. Five stringers on each of the two walls, each held on by four screws and carrying nine shelf-support screws, plus five more to hold the headers up, adds up to 135 1/4×3 screws. Then there were another 90 number 8, 1 1/4 inch brass round head screws for the front of the shelves, plus flat steel washers for each of them. I would use steel screws for the shelf fronts if I were doing this again, as the brass was prone to breaking in the hard maple, even with fairly generous pilot holes. [EDIT: I’ve decided to replace all the brass screws with steel ones: the brass just aren’t strong enough to give the kind of friction I want on the wires, and a few of the shelves are already starting to show a little bit of slide on the wires as the wood beneath them compresses. I may have to add clamps to the wires under the shelves, ultimately, to get a really secure configuration.]

As can be seen in the picture above, I clamped and marked the shelves and then drilled and screwed the brass screws in most of the way. This made actually setting the shelves up fairly easy.

The back of the shelves rests on screws standing out from the stringers, remember. To seat the shelves properly I would place a shelf up by hand, using a vertical level to line it up with the shelf below (the bottom shelf was just eyeballed into place). I would then use a large hammer (with woodblock, of course) to bang the shelf down on the screw heads.

Tools of the trade.

This would mark the positions of the heads, and a quick zap with a counter-sink bit would create a divot that the edge of the head would rest in. It worked fairly well, and the design was as forgiving of my rather cavalier approach to carpentry as I’d hoped.

Putting the shelves up can be done by one person. I worked from the bottom up, marking and drilling each shelf as I went. I had strung the outermost wires over the eyebolts and simply rested the back of the shelf on the screws, making sure the heads fell into the divots I had made, and then adjusted the front using a level. For the bottom shelves I put a bend in the wire to get the height right (having cut the wires a few inches too long for the purpose, to be trimmed later) and found that the 3/32 braided steel wire–which I ordered from someone off Amazon–was more than stiff and strong enough to hold a single shelf with just a hook bent in the end. That made it easy to fiddle with one end while the wire held up the other, and drive the screw in with my trusty old plug-in drill (the cordless drill just didn’t have enough oomph to drive things into the maple.)

The bottom shelves took some readjusting after everything else was up, but overall the process of putting them up wasn’t too difficult.

Shelving in progress.

After getting all the shelves up with the outermost wires only, I strung the inner wires, cut them more-or-less to length, and proceeded to work from the top down to press them in behind the washers. It is strictly friction that is keeping the front of the shelves suspended. I had thought about wrapping the wires around the screws, but the wire is too big and the screws too small, and it would have made tensioning very difficult (it was already a bit tricky). This is the weakest aspect of the engineering, but the shelves show every indication of being strong and stable. Fully loaded I can pull down on them with a good fraction of my body weight and they don’t so much as quiver.

The wires run down on either side of the screws, as shown in the picture below.

Wires and screws.

And the wall takes most of the load via the screws in the stringers (which in fact are long enough to get into the studs underneath):

Wall support.

One nice aspect of the design is there is a bit of room behind to let air circulate. Gotta keep those books well-ventilated!

It’s difficult to see in this picture because I took it before I added them, but I also ran single wires down the butt-ends of the shelves. These act as bookends. On two ends I ran them down in a zig-zag pattern and on the other two they are simply vertical. I think I like the zig-zag more.

Almost done.

During this whole process I was fiddling with the tension on the wires (and replacing brass screws whose heads had come off… I mostly just drilled a new pilot hole adjacent and more-or-less covered the snapped end of the old screw with the washer.) The tensioning is not hugely critical so long as things are pretty even. The bottom shelf wires will always be a bit loose until the system is loaded with books. I did stand on the bottom shelf at times with my full weight to get a sense of what was required, but mostly just aimed for equal distribution of force. The wire ends sometimes became a bit frayed but nothing unmanageable. The wire could be cut with a decent set of pliers.

The proof of the shelving is in the loading, and I gradually built up the load on these ones over the course of a week or so, just to make sure there would be no surprises. There weren’t (yet):

Loaded.

That’s most of my books. There is actually a little room to spare, which is nice. Even though I’m reading mostly ebooks these days, some extra room is always valuable. I’ll likely get rid of some of the older less interesting ones soon to make even more space.

So the design goals have been met, including the minimal waste condition. There are a few dozen screws left over, and not a lot of additional scrap:

Scrap.

The wire was only available in something like a 250 foot roll, so I had to accept that waste, but the wood came out just about perfectly.

Overall, a successful prototype, and I would definitely build this design again. Although it’s nominally built-in, I will take it down and fill all the screw-holes when I move out of this place and expect to get my full damage deposit back.

Posted in making | 2 Comments