Why We Need Anti-Discrimination Laws: a computational approach

My libertarian friends, back when I had libertarian friends, often imagined that anti-discrimination laws were unnecessary because “the market will take care of it”.

The argument goes like this: companies compete for the best employees, and quality of employee is a significant determinant of corporate success, and as such companies that discriminate against a sub-set of the population will under-perform those that don’t because they will necessarily forgo the best candidate in some cases and that will result in a lower-than-average employee quality that will result in an increased rate of corporate failure.

This is an imaginary argument, which is to say: it is not an argument at all. While such propositions stimulate thought, they ask us to do something far beyond the capabilities of the human imagination: accurately envision the working out of a diverse set of forces represented by probability distributions.

In particular, the way the argument is usually deployed is intended to focus our limited attentional resources on the high end of the employee skill distribution. But this is wrong: the average person is average, and for discrimination to have an effect it has to occur in a situation where dropping the minority out of the population somehow changes the average skill of available workers, which is mathematically impossible when both populations draw from the same skill distribution.

Furthermore, remember that the whole trick of industrial capitalism is to create great products with average workers. This is why Wedgwood and others were able to create so much wealth, and why common items like pottery and clothing are now so cheap we hardly notice them, whereas before industrialization they were so dear that it was possible to live by stealing them.

It follows from this that the average worker in the average industry in the average capitalist economy is… average. Therefore it is mathematically impossible for discrimination against a minority to materially affect the success of a business, because the minority population will have on average the same distribution of skills as the majority population. Dropping the minority population from consideration would therefore have a trivial effect on hiring decisions in the average case, and the exceptional case is too rare to punish businesses that discriminate to the point of affecting their success.

It’s worth looking at some examples of distributions before considering a more complete simulation. The image below considers a case where a company interviews 100 workers for a position where there is a 10% minority representation amongst the worker population. Worker “skill” has a Gaussian (bell curve) distribution with a mean of 10 and standard deviation of 5. Anti-skilled workers (people who negatively contribute to the company) exist. Both majority and minority populations have the same distribution.

Statistically speaking, the best candidate is probably not in the minority: the best candidate is probably in the majority because any random worker is probably in the majority.

If we assume a strict meritocracy where the company chooses the best worker as measured by some particular skill, it is easy to see that discrimination or lack thereof will have no effect on the outcome: the best worker is probably a member of the majority because any randomly selected worker is probably a member of the majority. This is what “majority” means.

Even if we relax the criterion somewhat and say the company takes the first candidate who exceeds some particular level of skill–say 15 on the graph–we can see that the odds are still in favour of the first worker to meet that criterion being in the majority, again because any randomly selected worker is probably in the majority.
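
To make those odds concrete, here is a small Monte Carlo sketch. It is my own illustration, not part of the original code below, using the distribution described above: 100 candidates per opening, a 10% minority share, and skills drawn from a Gaussian with mean 10 and standard deviation 5 for both groups. Both the probability that the single best candidate is a minority member and the probability that the first candidate over a skill of 15 is a minority member come out near the minority's 10% share of the population, as argued above.

import random

def simulate(nTrials=20000, fMinorityFraction=0.1, nCandidates=100, fThreshold=15.0):
    """Estimate how often the best candidate, or the first candidate over a skill
    threshold, is in the minority when both groups share the same Gaussian skill
    distribution (mean 10, standard deviation 5)."""
    nBestMinority = 0
    nFirstOverMinority = 0
    nFirstOverTrials = 0
    for _ in range(nTrials):
        lstCandidates = [(random.gauss(10.0, 5.0), random.random() < fMinorityFraction) for _ in range(nCandidates)]
        # strict meritocracy: take the single best candidate
        fBestSkill, bBestMinority = max(lstCandidates)
        nBestMinority += bBestMinority
        # relaxed criterion: take the first candidate over the skill threshold
        for fSkill, bMinority in lstCandidates:
            if fSkill > fThreshold:
                nFirstOverTrials += 1
                nFirstOverMinority += bMinority
                break
    print("P(best candidate is minority):", nBestMinority / nTrials)
    print("P(first candidate over threshold is minority):", nFirstOverMinority / nFirstOverTrials)

simulate()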

It takes about 200 lines of Python to simulate a situation where there are 100 businesses with 50 – 300 employees (in the SME range) who hire out of a population of about 18000 workers (re-sized to generate 3-6% average unemployment, which can be thought of as the effects of migration or economic fluctuations) with 10% minority representation. Each business has a “discrimination factor” between 0 and 1 that multiplies the skill level of minority workers for comparison during hiring, so a value of 0 means essentially no minority worker is ever hired and 1 means minorities are treated the same as the majority workers.

Every year there is 5% attrition as employees move on for various reasons, and every year companies are tested by the market to see if they have sufficiently skilled workers to survive. The test is applied by finding the average worker skill and adding a random value to it. Worker skill is over 80% of the company test score, so while randomness plays a role (as it does in the real world) worker skill is dominant.

The test has been set up so 10 – 20% of companies fail each year, which is pretty harsh. They are immediately replaced by new companies with random characteristics. If discrimination is a significant effect we will see more discriminatory companies go out of business more rapidly, and slowly the population of companies will evolve toward more egalitarian ones.

The hiring process consists of companies taking the best worker out of ten chosen at random. Worker skills are distributed on a bell curve with a mean of 1 and standard deviation of 0.5, with the additional condition that the skill-level be positive. As noted above, companies multiply worker skills by the corporate “discrimination factor” for comparison during hiring, so some companies almost never hire minority workers, even when they are highly skilled.

Here’s the code (updated to reflect changes discussed in the edit below):


import random

import numpy as np

"""For a population of two types, one of which is in the minority and 
is heavily discriminated against, does the discrimination materially
harm businesses that engage in it?"""

class Person:
	
	def __init__(self, bMinority):
		self.bMinority = bMinority
		self.fSkill = np.random.normal(1.0, 0.5)
		while self.fSkill < 0:	# constrain skill to be positive
			self.fSkill = np.random.normal(1.0, 0.5)
		self.bEmployed = False
		
	def getHireFactor(self, fDiscriminationFactor):
		if self.bMinority:
			return self.fSkill*fDiscriminationFactor
		else:
			return self.fSkill

class Business:
	
	def __init__(self, nEmployees, fChanceFactor, fDiscriminationFactor):
		self.nEmployees = int(nEmployees)
		self.lstEmployees = []
		self.fChanceFactor = fChanceFactor
		self.fDiscriminationFactor = fDiscriminationFactor
		
	def hire(self, lstUnemployed, lstEmployed):
		"""Take the best person out of first 10 at random"""
		if len(self.lstEmployees) < self.nEmployees:
			random.shuffle(lstUnemployed)	# randomize unemployed
			pHire = lstUnemployed[0]
			fBest = pHire.getHireFactor(self.fDiscriminationFactor)
			for pPerson in lstUnemployed[1:10]:
				fFactor = pPerson.getHireFactor(self.fDiscriminationFactor)
				if fFactor > fBest:
					fBest = fFactor
					pHire = pPerson

			pHire.bEmployed = True
			lstEmployed.append(pHire)
			lstUnemployed.remove(pHire)
			self.lstEmployees.append(pHire)
		
	def test(self, fThreshold):
		fAvg = sum([pEmployee.fSkill for pEmployee in self.lstEmployees])/len(self.lstEmployees)
		fSum = fAvg + random.random()*self.fChanceFactor
		if fSum > fThreshold:
			return True
		else:
			return False
			
	def attrition(self, lstEmployed, lstUnemployed):
		lstMovedOn = []
		for pEmployee in self.lstEmployees:
			if random.random() < 0.05:
				lstMovedOn.append(pEmployee)
				
		for pEmployee in lstMovedOn:
			self.lstEmployees.remove(pEmployee)
			pEmployee.bEmployed = False
			lstEmployed.remove(pEmployee)
			lstUnemployed.append(pEmployee)

nEmployeeRange = 250
fChanceFactor = 1.15 # equal to average employee skill (> 1 due to eliminating negative values)
lstBusinesses = []
nTotalWorkers = 0
nBusinesses = 100	# fixed number of businesses; failed ones are replaced back up to this count
for nI in range(0, nBusinesses):
	lstBusinesses.append(Business(50+random.random()*nEmployeeRange, fChanceFactor, random.random()))
	nTotalWorkers += lstBusinesses[-1].nEmployees
	
fMinorityFraction = 0.1
lstUnemployed = []
lstEmployed = []
nMinority = 0.0
nMajority = 0.0
fFullEmploymentFactor = 1.03
nPopulation = int(nTotalWorkers*fFullEmploymentFactor)
print(nPopulation)
for nPeople in range(0, nPopulation):
	lstUnemployed.append(Person(random.random() < fMinorityFraction))
	if lstUnemployed[-1].bMinority:
		nMinority += 1
	else:
		nMajority += 1

print(nMajority, nMinority)
print("Initial hiring. This may take a few minutes...")
while True: # initial hiring phase
	random.shuffle(lstBusinesses)
	nFull = 0
	for pBusiness in lstBusinesses:
		pBusiness.hire(lstUnemployed, lstEmployed)
		nFull += pBusiness.nEmployees == len(pBusiness.lstEmployees)

	if nFull == len(lstBusinesses):
		break

print(len(lstEmployed), len(lstUnemployed))

nMajorityUnemployed = 0.0
nMinorityUnemployed = 0.0
for pPerson in lstUnemployed:
	if pPerson.bMinority:
		nMinorityUnemployed += 1
	else:
		nMajorityUnemployed += 1
print(nMinorityUnemployed/nMinority, nMajorityUnemployed/nMajority)

print("Starting iteration...")

lstLastTenFailCount = []
outFile = open("adjusted.dat", "w")
fTestThreshold = 0.98 # about 2% failure rate overall to start
while True:	# yearly iteration
	random.shuffle(lstBusinesses)
	
	lstFailed = []	# first test businesses
	for pBusiness in lstBusinesses:
		if not pBusiness.test(fTestThreshold):	
			lstFailed.append(pBusiness)
			for pPerson in pBusiness.lstEmployees:
				pPerson.bEmployed = False
				lstUnemployed.append(pPerson)
				lstEmployed.remove(pPerson)
				
	lstLastTenFailCount.append(len(lstFailed))
	if len(lstLastTenFailCount) > 10:
		lstLastTenFailCount.pop(0)
		nTotalFail = sum(lstLastTenFailCount)
		if nTotalFail < 15:
			fTestThreshold += 0.01
		if nTotalFail > 25:
			fTestThreshold -= 0.01
		print(nTotalFail, fTestThreshold)
			
	for pBusiness in lstFailed:
		lstBusinesses.remove(pBusiness)
		nTotalWorkers -= pBusiness.nEmployees

	for pBusiness in lstBusinesses: # attrition from remaining businesses
		pBusiness.attrition(lstEmployed, lstUnemployed)
	
	while len(lstBusinesses) < nBusinesses: # creation of new businesses
		lstBusinesses.append(Business(50+random.random()*nEmployeeRange, fChanceFactor, random.random()))
		nTotalWorkers += lstBusinesses[-1].nEmployees

	# migration keeps unemployment between 3% and 6%
	nWorkers = len(lstUnemployed)+len(lstEmployed)
	nPopulation = int(nTotalWorkers*(fFullEmploymentFactor + 0.03*random.random()))	# vary the target population so unemployment runs between roughly 3% and 6%, as described in the text
	random.shuffle(lstUnemployed)
	while nWorkers < nPopulation:
		lstUnemployed.append(Person(random.random() < fMinorityFraction))
		if lstUnemployed[-1].bMinority:
			nMinority += 1
		else:
			nMajority += 1
		nWorkers += 1
	while nWorkers > nPopulation:
		pWorker = lstUnemployed.pop()
		if pWorker.bMinority:
			nMinority -= 1
		else:
			nMajority -= 1
		nWorkers -= 1

	while True: # hiring
		random.shuffle(lstBusinesses)
		for pBusiness in lstBusinesses:
			pBusiness.hire(lstUnemployed, lstEmployed)
		nFull = 0
		for pBusiness in lstBusinesses:
			nFull += pBusiness.nEmployees == len(pBusiness.lstEmployees)
			
		if nFull == len(lstBusinesses):
			break
	
	fDiscrimination = 0.0
	for pBusiness in lstBusinesses:	# how discriminatory are we now?
		fDiscrimination += pBusiness.fDiscriminationFactor
	nMajorityUnemployed = 0.0
	nMinorityUnemployed = 0.0
	for pPerson in lstUnemployed:
		if pPerson.bMinority:
			nMinorityUnemployed += 1
		else:
			nMajorityUnemployed += 1
	outFile.write(str(len(lstFailed))+" "+str(fDiscrimination)+" "+str(nMinorityUnemployed/nMinority)+" "+str(nMajorityUnemployed/nMajority)+" "+str(nMinorityUnemployed)+" "+str(nMinority)+" "+str(nMajorityUnemployed)+" "+str(nMajority)+"\n")
	print(len(lstFailed), fDiscrimination, nMinorityUnemployed/nMinority, nMajorityUnemployed/nMajority, nMinorityUnemployed, nMinority, nMajorityUnemployed, nMajority)

Feel free to run it and tweak it. If you talk about the results please give me attribution for the original.

The only thing I can say with any degree of certainty about this code is that it’s a better, more accurate, more useful representation of the actual market than anything your imagination can possibly run on its own.

Note that so far as the arguments I’m making here go, it doesn’t matter if you once heard a story about a company that failed because they refused to hire a black genius: anecdotal accounts of singular events are not arguments. Social policy should be based on large-scale averages, not one-offs.

So what does the code show?

Unsurprisingly, based on the arguments above, there is insufficient selection effect to drive discriminatory companies out of business on a timescale of less than centuries… by which time all of the companies would be out of business anyway for reasons that have nothing to do with discrimination. This result follows from the fact that the average worker is average, and the strength of capitalism is making great products with average workers.

Here’s a typical run of the code that simulates 100 years of this little micro-economy:

Discrimination Over a Century

The discrimination factor is simply the sum over all companies’ individual discrimination factors, and it can be seen to slowly rise (which is equivalent to decreasing discrimination) by about 20% over the course of a century.

So the notion that “the market will take care of it” isn’t entirely insane, it is merely far too weak to make a meaningful difference over the lifetime of currently discriminated-against workers. Furthermore, the simulation is almost insanely generous to the hypothesis under test. It assumes, for example, that there is zero cost to hiring minority workers, whereas history shows this is false: the US is replete with stories of shutdowns, protests and other work actions by majority workers in the face of minority hiring. If we add even moderate costs to the model it will generate segregation, not integration.

I’m fairly surprised the model shows any effect at all. The effect demonstrated under extremely favourable assumptions is certainly far too small to be socially significant, and the model was not intended to closely emulate the real world, but to explore the numerical reality behind the historical fact that no free market anywhere ever has produced an integrated society without benefit of regulation.

Edit: the stuff below is follow-up to what goes above, as I thought it interesting to dig into the model parameters to see how realistic they are, and discovered that “not very” was the answer.

The most important factor in determining how efficient the market is in fighting discrimination is the rate of business turn-over. Small businesses (< 500 employees) account for the majority of employment in developed countries. It turns out there is quite a bit of data available on the rate at which such businesses fail, and the number is not 15% per year but somewhere around 2%. The UK data linked above gives the distribution of ages, which can be reasonably well modeled with a 3% failure rate, and the US data gives the mid-400 thousands for the birth/death rate, which is a 1.5% turn-over rate on a population of 28.2 million businesses.

So my 15% estimate was wrong by an order of magnitude. It’s also the case that chance plays a bigger role than the model allows, so I tweaked it such that chance (or rather, anything other than employee skills… it might be CEO competency, better competition, etc) accounts for 50% of the overall test score on average, instead of the average of 80% in the original model. I’ve updated the code above to reflect the changes.

Critics might say that I’ve fine-tuned these parameters to reach my preferred conclusion, which is nonsense on two counts: the first is that I’d far rather have markets take care of discrimination than leave it to the nice people who work for the government. The second is that my parameter choices are empirically driven in the first case and extremely generous in the second. I’ve worked for (and owned) small businesses where a few exceptional people were vital to the successes we did have, and which still went broke due to other factors. Anyone who claims things other than employee skills don’t have a very large impact on business success has never run a business.

My business experience is entirely in high-tech, working in areas where employee skills are vastly more important than in any other area, and it is still a matter of ordinary empirical fact that the success or failure of the business was only weakly tied to the quality of the team.

There is a larger question here, though. Is it reasonable to critique a computational model on the basis of parameter choice when the alternative is the un-aided and highly-fallible human imagination? Is it reasonable to say, “My imaginary argument doesn’t require any difficult parameter choices, so it’s better than your computational argument that does!”?

Is my argument weakened because you can see how it works, analyze it and criticize it based on that analysis?

I don’t think so.

Most of what passes for argument about social policy between ideologues comes down to disagreements about what they imagine will happen under various conditions. Since we know with as much certainty as we know anything that our imaginations are terrible guides to what is real, ideological critiques of numerical, probabilistic arguments–Bayesian arguments–don’t hold much water.

Yet we very often feel as if a model like the one I’m presenting here is “too simplistic” to capture the reality of the system it is simulating in a way that would give us the power to draw conclusions from it.

It’s true that we should be cautious about over-interpreting models, but given that, how much more cautious should we be about over-interpreting our known-to-be-lousy imaginings?

If this model is too simplistic, it is certainly vastly more sophisticated–and accurate–than the unchecked, untested, unverified results of anyone’s imagination.

And what is the result of this model with somewhat more realistic parameters? I added a little code to adjust the test threshold dynamically to maintain a failure rate of between 1.5 and 2.5% (otherwise as time went on good companies tended to dominate, suggesting I am still setting the role of chance far too low) and tweaked the role of chance up to about 50%. The results of 250 years are shown in the figure below. Remember, this is more time than most nations in the developed world have existed as nations, so there has certainly been nothing like stable market conditions anywhere on Earth over this time such that this experiment might actually be performed.

Note scale is 250 years, not 100 as in previous figure

The line fitted to the results has a slope of about 0.02/year, so after 1000 years less than half the original bias will be gone from this magically stable population of companies. This is indistinguishable from no progress at all when we try to apply it across the broad sweep of human history, which has in fact seen more and less discriminatory times as companies, industries, nations and empires come and go.

We can also look at the unemployment rate of the majority and minority population.

Minority Unemployment Stays High

The overall unemployment rate varies between 3% and 6%, but the majority never sees much fluctuation. Instead, the minority–whose unemployment rate runs at about twice the majority’s in a typical year–gets hammered. This is also what we see in the real world, which speaks to the model’s basic soundness.

So there you have it. Numerical evidence that the proposition “The market will take care of discrimination” is not very plausible. “I imagine it otherwise” is not a counter-argument, or evidence for the proposition. If you want to argue against me, improve the model, don’t deploy the awesome power of your imagination, because your imagination isn’t any good at this stuff. Neither is mine. Neither is anyone else’s.

Posted in economics, evolution, history, politics, probability

Identity

I’ve been heavily involved in theories and questions of identity in the past, and the question of “Who am I?”, like “What on Earth was I thinking?” and “Do you think we should call the cops?” never really gets old.

In the modern West we expect a lot of people with respect to identity. We have substantially diminished the traditional props that we once used to identify ourselves–religion, ethnicity, family, nation–and are then expected to go out into the world and figure ourselves out.

It isn’t enormously surprising that many of us make a bit of a mess of it.

Human identity has two components: internal and external. External identifiers are the easiest to come by, and there are always people looking to sell them to you. Religions and nations did this very well for centuries, but the price started to look a little high as the body counts from wars and the restrictions on behaviour started to chafe.

Today, explicitly commercial enterprises dominate the sale of identities, although the new media ecosystem has reduced their reach and hold. Apple, Harley Davidson and a few other brands are still able to sell themselves as part of their customers’ identities, but gone are the days when soft drinks and cigarettes could play a similar role.

Nationalism never goes out of style, even in Canada, but we live in an increasingly internationalized, globalized world, and that’s a good thing.

Religion is obviously a dominant force in the Muslim world, but despite attempts to revive it in the republic to the south of us, there are no meaningfully “Christian” nations in the same sense there are “Muslim” ones. This is also a good thing: the antidote to a toxin is not another toxin.

Sports and teams still play a relatively benign role in many people’s lives as a way of identifying themselves, as do hobbies and pastimes, but for the most part these are too coarse and trivial to be of much use. That I am a poet or canoeist doesn’t really do much to distinguish me from the millions of other poets or canoeists out there, and if identity doesn’t identify, what does it do?

Because that’s the way we expect identity to work in the modern world: to identify us uniquely, not as one of many more-or-less identical units of humanity. The external markers of identity generally serve to identify us as part of some larger group, and we are concerned as much with differentiating ourselves from such suspect, disused or pathological groupings as we are with including ourselves in them.

This is where internal markers of identity come in. As we have weakened and pathologized external axes of identification, we have come to rely much more on our internal sense of who we are. It isn’t entirely surprising that many people aren’t up to the task, or that co-evolving parasites have moved in to sell people their own unique set of personal parameters, most commonly in the form of some relatively non-toxic spiritual practice: yoga, meditation, volunteering in one form or another.

Weird-ass diets are amongst the most successful things in this category. If you are eating Paleo, you’ve purchased someone else’s identity package. There’s no flag you can wave or banner you can march under, which is what makes this an internal identifier, but you’ve still got it from a third-party.

There are other sources of identity that are actively harmful to others as well as to yourself. All political ideologies, from feminism to neo-Nazism, fall into this category. By reducing the world to a series of doctrinaire terms, the follower of an ideology places themselves securely on a fixed grid of relationships that identifies them. Unfortunately, the grid never comes close to matching the world for nuance or even in general shape and contours.

The attraction of ideology to youth is understandable in this context: in a world that is bereft of clear signifiers of external identity, internalizing someone else’s ideology and personalizing it as your very own is an efficient if ultimately self-defeating move.

Of course, some of the less secure followers of ideologies feel it necessary to broadcast their allegiance via everything from hair cuts to tee-shirts, but it’s the inner state that creates the identity. Old fashioned external identifiers like nationalism and religion didn’t make much of what people actually believed–Orwell correctly observed that the British Empire allowed its subjects the privacy of their own minds–but the whole point of ideological identifiers is they can be used as markers of internal state, not external allegiance.

Not all ideologies are created equal, of course: it is possible to identify as a feminist and not be insane or dangerous. The same cannot be said of neo-Nazis.

As someone who identified with some pretty strange ideologies in his youth, I think ideological identification is something to be managed rather than deprecated. Youthful ideologues are the price we pay for the decline in religious, nationalist and ethnic identification, and that’s a good bargain.

Sexuality is an area where we have seen a blossoming of identities in recent decades, and that’s also a good thing. We are fortunate to live in a time (for those of us who live in Canada, at least) where people are generally free to identify themselves in a vast diversity of ways relative to their sexual and relationship preferences.

But sex is only one area of our humanity, and while we certainly have a wealth of fine-grained divisions in other areas, we rarely take them seriously enough for them to play a significant role in our identities. The arts and sciences in particular deserve more attention in this regard–I am not just a poet, but a metrical, formal poet working in certain mostly English traditions, for example.

It would be nice if the arts and sciences were taken as seriously as sources of identity as sports and sex are today. Perhaps the pursuit of a diversity of diversities should be a project for the 21st century and beyond.

Posted in life

Two Haiku

Two scenes from a recent camping trip up the Sunshine Coast in BC:

high thin ghost-blown clouds
sweep across the summer sky
never finding home

ephemeral flame
lone dancer to Time’s music
eternal fire

Posted in haiku, life, poem

Bookshelves

I’ve long been dissatisfied with the state of the bookshelf art, and took it upon myself to prototype a new approach, with the constraints:

  • the shelves should look reasonably good
  • be absolutely minimal in their design
  • not require fine carpentry skills
  • not touch the ground

The last constraint came about for a variety of reasons. The floor in my place is carpet and vacuuming is a big enough pain without the bottoms of bookshelves to work around. And in any case I just thought it would be cool for the shelves to hang from the walls and ceilings. I spent some quality time with the wood engineering literature and was able to demonstrate that the loads I was looking at were comfortably within tolerances, including pull-out strength data for eyebolts in ceiling joists from sources like the US federal government.

The design is as follows: vertical maple 1×2 stringers screwed to the studs by four 1/4×3 inch steel flathead screws would carry nine rows of additional 1/4×3 screws with their heads sticking out an inch to carry the back of the shelves. The front of the shelves would be carried by 3/32 wire strung from eyebolts in the joists or in 2×2 maple headers along the wall that was parallel to the joists. These can be seen in the picture below:

Overview of design.

The joists run parallel to the wall on the right, and the shelves are 1×8 maple, 6 ft long, so while the first joist is about 14 inches from the wall the eyebolts need to be about 9 inches out (because the 8 inch planks are stood off by the thickness of the 1×2 stringers up the wall–I’m giving unfinished dimensions because I’m lazy, so take off 1/4 here or there as appropriate.) The headers sit on top of the stringers at the back and are screwed into the joists with 1/4×3 inch steel flathead screws. I couldn’t find any 2×2 maple so used a sandwich of two 1×2 pieces, which is sub-optimal but workable. I probably should have glued them together but didn’t really need the extra strength so didn’t bother. You can’t see them in the finished project anyway.

Setting screws in the front of the shelves.

There were a lot of screws in this project. Five stringers on each wall with four screws to hold them on and nine screws to hold the shelves plus five more to hold the headers up adds up to 135 1/4×3 screws. Then there were another 90 number 8 1 1/4 inch brass round head screws for the front of the shelves, plus flat steel washers for each of them. I would have used steel screws for the shelf fronts if I were doing this again, as the brass was prone to breaking in the hard maple, even with fairly generous pilot holes. [EDIT: I've decided to replace all the brass screws with steel ones: the brass just aren't strong enough to give the kind of friction I want on the wires, and a few of the shelves are already starting to show a little bit of slide on the wires as the wood beneath them compresses. I may have to add clamps to the wires under the shelves, ultimately, to get a really secure configuration.]

As can be seen in the picture above, I clamped and marked the shelves and then drilled and screwed the brass screws in most of the way. This made actually setting the shelves up fairly easy.

The back of the shelves rests on screws set out of the stringers, remember. To seat the shelves properly I would place a shelf up by hand, using a vertical level to line it up with the shelf below (the bottom shelf was just eyeballed into place). I would then use a large hammer (with woodblock, of course) to bang the shelf down on the screw heads.

Tools of the trade.

This would mark the positions of the heads, and a quick zap with a counter-sink bit would create a divot that the edge of the head would rest in. It worked fairly well, and the design was as forgiving of my rather cavalier approach to carpentry as I’d hoped.

Putting the shelves up can be done by one person. I worked from the bottom up, marking and drilling each shelf as I went. I had strung the outermost wires over the eyebolts and simply rested the back of the shelf on the screws, making sure the heads fell into the divots I had made, and then adjusted the front using a level. The bottom shelves I put a bend in the wire to get the height right (having cut the wires a few inches too long for the purpose, to be trimmed later) and found that the 3/32 braided steel wire–which I ordered from someone off Amazon–was more than stiff and strong enough to hold a single shelf with just a hook bent in the end. That made it easy to fiddle with one end while the wire held up the other, and drive the screw in with my trusty old plug-in drill (the wireless drill just didn’t have enough umph to drive things into the maple.)

The bottom shelves took some readjusting after everything else was up, but overall the process of putting them up wasn’t too difficult.

Shelving in progress.

After getting all the shelves up with the outermost wires only, I strung the inner wires, cut them more-or-less to length, and proceeded to work from the top down to press them in behind the washers. It is strictly friction that is keeping the front of the shelves suspended. I had thought about wrapping the wires around the screws, but the wire is too big and the screws too small, and it would have made tensioning very difficult (it was already a bit tricky). This is the weakest aspect of the engineering, but the shelves show every indication of being strong and stable. Fully loaded I can pull down on them with a good fraction of my body weight and they don’t so much as quiver.

The wires run down on either side of the screws, as shown in the picture below.

Wires and screws.

And the wall takes most of the load via the screws in the stringers (which in fact are long enough to get into the studs underneath):

Wall support.

One nice aspect of the design is there is a bit of room behind to let air circulate. Gotta keep those books well-ventilated!

It’s difficult to see in this picture because I took it before I added them, but I also ran single wires down the butt-ends of the shelves. These act as bookends. On two ends I ran them down in a zig-zag pattern and on the other two they are simply vertical. I think I like the zig-zag more.

Almost done.

During this whole process I was fiddling with the tension on the wires (and replacing brass screws whose heads had come off… I mostly just drilled a new pilot hole adjacent and more-or-less covered the snapped end of the old screw with the washer.) The tensioning is not hugely critical so long as things are pretty even. The bottom shelf wires will always be a bit loose until the system is loaded with books. I did stand on the bottom shelf at times with my full weight to get a sense of what was required, but mostly just aimed for equal distribution of force. The wire ends sometimes became a bit frayed but nothing unmanageable. The wire could be cut with a decent set of pliers.

The proof of the shelving is in the loading, and I gradually built up the load on these ones over the course of a week or so, just to make sure there would be no surprises. There weren’t (yet):

Loaded.

That’s most of my books. There is actually a little room to spare, which is nice. Even though I’m reading mostly ebooks these days, some extra room is always valuable. I’ll likely get rid of some of the older less interesting ones soon to make even more space.

So the design goals have been met, including the minimal waste condition. There are a few dozen screws left over, and not a lot of additional scrap:

Scrap.

The wire was only available in something like a 250 foot roll, so I had to accept that waste, but the wood came out just about perfectly.

Overall, a successful prototype, and I would definitely build this design again. Although it’s nominally built-in, I will take it down and fill all the screw-holes when I move out of this place and expect to get my full damage deposit back.

Posted in making

Discipline and Practice

I’m moderately successful by the standards of the world, despite being a profoundly flawed human being. I wonder about why this is so, sometimes. Some of the answer is pure luck: I was born in the best country on Earth at the best time in its history, into a family that allowed me to take full advantage of the benefits of our institutions and infrastructure. I am also fortunate to have a fairly sharp brain and a good memory.

But a lot of people with those same advantages or more have not had the same degree of success I have, either in their personal or professional lives, whereas I am in a position such that there are only one or two ambitions I’ve yet to fulfill, and I’ve knocked off some fairly large ones along the way. I’ve been a scientist, an engineer and a businessperson. I’ve raised a family. I’ve gained a deeper understanding of certain quantum mysteries that were important to me. I’ve canoed, sailed, hiked, run and swum a good deal.

When I ask what I’ve done beyond being lucky to achieve those things, I find two closely related things: discipline and practice.

There are also things that I have found to be necessary but not sufficient conditions for success: planning and scheduling.

A plan and/or a schedule without discipline and practice is useless, and I’ve seen a lot of people fail while investing a great deal in plans they don’t follow and schedules they don’t keep, because they don’t have the discipline to do so. When you find yourself investing heavily in planning and scheduling as your primary path to success, you should treat it as a sign of insufficient discipline. This is not to say that planning and scheduling are not of value: they are, but only to the disciplined individual.

Discipline comes first.

Discipline is about what you are doing right now. It is not about intent, not about tomorrow, not about “someday” or “eventually”.

Right now I am writing a blog post on discipline and practice. There are a million other things I might be doing and definitely some other things I’d like to be doing, but I’m doing this and not them. This is because I’m disciplined about doing the things that I have decided are my highest priority right now.

Planning and scheduling can help you keep track of what’s most important right now–which sometimes includes goofing off, because relaxation is important too–but they can’t give you that inner force of will, that discipline, that says, “Yes” to executing on plans and schedules moment-by-moment.

I once spent six months commuting 800 km (each way) every couple of weeks. It was… unpleasant. The work was valuable and good, and the money paid for a commitment I’d made. But the commute was nasty, at times through torrential rains and other bad conditions. I remember sitting in the car one Sunday around noon, ready to head out for another week on-site. It was blisteringly hot. I knew the road depressingly well by then. I had solved the core hard problem and was struggling with a bunch of minor and annoying issues. I really didn’t want to get on the road and go. But… discipline is about what you are doing right now. Right then I needed to be on the road to fulfill my plans and schedules and commitments. So on the road I got, and Did the Job.

We are given moments of choice in our life, and that was one of them. It was notable because I made the right choice, which I have not always done. But the right choice has usually been to take the more difficult path, to turn the boat upwind and beat against the storm. The act of doing so, moment-by-moment, is the essence of discipline.

This does not happen in isolation. We do not make choices in a vacuum, atomically, but in the context of all that has gone before. My life choices are more strongly conditioned by my failures than anything else: places where I have screwed up and incidents I do not want to repeat.

Discipline is not generic. We are not, cannot be, and should not be equally disciplined in all things.

Discipline is what we practice. Properly speaking, we are always engaged in “a discipline”, not just “discipline”. We are at any moment practicing some specific discipline (which includes goofing off, hopefully in moderation.)

Practice is under-rated in Canadian society. We have adopted too much the notion that people are born good or bad, talented or not, although the data show this is simply false. Talent matters, but a less talented person can beat a more talented one by dint of practice.

Because discipline is about what you are doing right now, it is practiced moment-by-moment. Right now I am practicing the discipline of writing. In a few minutes I will be practicing the discipline of relaxing. The thing that makes these disciplines, rather than just drifting, is mindfulness: I am aware, more-or-less, most of the time, of what I’m doing and generally have some notion of why. Mindful practice (of which there has rightfully been much made in the literature of learning and achievement of late) is fundamental to developing discipline in various arts.

Discipline is hard, but it can be learned, it can be practiced. I am not a fan of virtue ethics, but Aristotle’s observation that we can build moral habits is not wrong. We are what we do habitually: I am kind because I go out of my way to ease the difficulties of others; I am snarky because I don’t suffer fools gladly; I am a poet because I actually write poetry fairly often, and have read and studied in a somewhat systematic way the forms and structures of English poetry in the past thousand years.

Practice builds discipline, discipline eases practice.

Posted in ethics, life, psychology

Where is the Warming? Part 2: Vancouver Again

Science is more of an art than a science.

I did a number of obviously stupid things in my previous quick look at Vancouver temperature data. Fortunately, this blog doesn’t have any readers so I get to catch my own mistakes.

I had intended this series–which will likely continue intermittently for a while–to be a pretty superficial look at various weather stations to see where the quite dramatic upswing in temperatures that appears in the physically meaningless arithmetic average temperature so often cited in the news actually shows up, but the data deserve more than that.

The data are sacred, remember: they can do what nothing else–not prayer, not scripture, not revelation, not imagination, not guesswork–can do. They can give us knowledge of the way the world actually is.

Knowledge is inherently uncertain, so we can often dig even more deeply into the data to find out more. When we do that, we find some interesting things.

As I said previously, “global mean temperature” is an utterly meaningless number. It is some kind of geometrically weighted arithmetic average that is simply physically meaningless. The only correct way to compute it is to do the following:

  1. Measure temperature and humidity on a reasonably fine grid
  2. Infer the local tropospheric heat content from those data
  3. Sum the local tropospheric heat content over the whole globe
  4. Divide by the average tropospheric heat capacity, or the heat capacity of some kind of standard or model atmosphere

The trick is that the wetness of the troposphere significantly affects its heat capacity, and not by a small amount. The heat capacity of water is a thousand times that of air, so going from 0% to 100% humidity can make a big difference. 25 C air at 100% relative humidity has 20 grams of water vapour for every kilogram of air, so the water vapour has 20 times the heat capacity of the air. This is why arithmetic “global average temperature” is meaningless: it tells you nothing about the temperature that the system would have if allowed to come into global equilibrium. For example, a cubic metre of 10 C air at 100% relative humidity mixing with a cubic meter of air at 20 C and 100% relative humidity would reach an equilibrium temperature of 17 C or so. If it mixed with dry air at 20 C the equilibrium temperature would be about 10 C: there just isn’t enough energy in dry air to matter.
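
Here is a minimal sketch of steps 1 and 2 for a single grid point. This is my illustration rather than anything from the original analysis; it assumes the standard Tetens approximation for saturation vapour pressure and textbook values for the heat capacities and latent heat. It reproduces the roughly 20 g of water vapour per kilogram of dry air at 25 C and 100% relative humidity, and shows how much that vapour adds to the heat content of the parcel:

import math

# Assumed textbook constants (not from the original post)
CP_DRY = 1005.0   # J/(kg K), specific heat of dry air at constant pressure
CP_VAP = 1860.0   # J/(kg K), specific heat of water vapour
L_VAP = 2.45e6    # J/kg, latent heat of vaporisation near room temperature
P_KPA = 101.325   # kPa, surface pressure

def saturation_vapour_pressure(t_c):
    """Tetens approximation, in kPa, for temperature in degrees Celsius."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def mixing_ratio(t_c, rel_humidity):
    """Kilograms of water vapour per kilogram of dry air."""
    e = rel_humidity * saturation_vapour_pressure(t_c)
    return 0.622 * e / (P_KPA - e)

def moist_heat_content(t_c, rel_humidity):
    """Approximate heat content in J per kg of dry air, relative to dry air at 0 C:
    sensible heat of the air plus sensible and latent heat of its water vapour."""
    w = mixing_ratio(t_c, rel_humidity)
    return CP_DRY * t_c + w * (L_VAP + CP_VAP * t_c)

print(f"mixing ratio at 25 C, saturated: {1000 * mixing_ratio(25.0, 1.0):.1f} g/kg")
print(f"heat content of 25 C dry air:    {moist_heat_content(25.0, 0.0) / 1000:.0f} kJ/kg")
print(f"heat content of 25 C wet air:    {moist_heat_content(25.0, 1.0) / 1000:.0f} kJ/kg")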

However, if global temperatures are going up, local temperatures must also be going up, on average. That is, if we look at a bunch of ground-stations we should see temperatures increasing over the past century much more often than not.

So here is my attempt to reduce the temperature data from one city to a single graph and a couple of numbers:

Seasonal One-Century Temperature Change in Vancouver, BC

The graph shows the one-century temperature change implied by a linear fit to the temperature on any given day of the year, so day 0 is January 1st, day 10 is January 10th, and so on until day 360, which is December 25th or so, depending on leap-year-ish-ness of the year in question. By pulling out single days like this we can get rid of the effect of the yearly variation on the kind of quick fit I did yesterday, and we can see the effects of seasons as well as the day/night effect.
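
For anyone who wants to reproduce this kind of plot, the per-day fit is just a straight line of temperature against year for each day of the year, scaled to degrees per century. The sketch below is mine, not the code behind the figure, and it assumes the station records have already been parsed into (year, day_of_year, temperature) tuples:

import numpy as np

def century_trend_by_day(records, min_years=30):
    """records: iterable of (year, day_of_year, temperature) tuples.
    Returns a dict mapping day_of_year to the fitted temperature change in
    degrees C per century, from a straight-line fit of temperature against year."""
    by_day = {}
    for year, day, temp in records:
        by_day.setdefault(day, []).append((year, temp))

    trends = {}
    for day, pairs in by_day.items():
        if len(pairs) < min_years:  # skip days with too little history
            continue
        years = np.array([y for y, _ in pairs], dtype=float)
        temps = np.array([t for _, t in pairs], dtype=float)
        slope, _ = np.polyfit(years, temps, 1)  # degrees C per year
        trends[day] = 100.0 * slope             # degrees C per century
    return trends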

The summary numbers from the full fit are also shown, but in this case the full fit includes a sinusoidal component to account for the yearly variation in temperatures. As we saw yesterday, the daytime maxima don’t show much warming, but the nighttime minima do, as would be expected if the warming was dominated by the urban heat island effect.

The seasonal data are much richer than these two numbers suggest, however. As can be seen in the figure, both daytime and nighttime winter temperatures are up by about 2 C in the past century (winter is roughly days 0 – 90, spring is days 90 – 180, summer is days 180 – 270 and fall is days 270 – 360). This is consistent with my father’s observation that when he was growing up in Vancouver in the ’20s there was usually enough snow in the winter to require milk to be delivered by horse-drawn sleighs because the trucks of the day couldn’t handle the hills in those conditions. Winter daytime temperatures have gone from around 3.0 C average to 5.0 C average, which is quite a dramatic effect.

In the spring/summer, however, only the nighttime temperature is up. The daytime temperatures actually dropped by a degree, with the effect most pronounced in the spring.

You can see how a case can be made from these data for “warming”, for “no warming” and for “cooling”, depending on what you choose to look at. This is why the reduction of the public policy discussion around climate change to a single number is stupid, and it is equally stupid when done by Warmists[*] and Denialists. If the scientific debate had been put in physically meaningful terms to begin with we might not be where we are today.

Furthermore, when people talk about the “signature” of anthropogenic climate change, they are or should be talking about things like the winter/summer difference in these data.

The tropospheric temperature field is reasonably smooth–if it was not then the coarse-grid assumption of climate models would be simply wrong, and arithmetic averaging of temperatures would be mathematically as well as physically illegitimate. This smoothness means that within a given model volume element, we should expect ground temperatures to follow roughly similar trends to the model predictions for that volume. The absolute temperature is rarely going to be the best measure of comparison, but things like the day/night difference and the seasonal difference seen in these data may well be more robust.

I’m not looking at models yet, although I have in the past. For now I’m looking at the data in an effort to produce a few simple comparators that might be used to tease out the signature of climate change in a way that looks plausible to all sides.

As a computational physicist who is skeptical that unphysical models like GCMs could produce anything remotely resembling the actual future climate, I’m doing what good scientists do: being very suspicious of the things we want most to believe, and seeing if we can test the idea to destruction. That creates the possibility of learning something new, which cannot be done if we merely try to confirm what we already know. It is primarily by proving ourselves wrong that we create new knowledge.

[*] I have been told that “Warmists” don’t exist. I define them as people who don’t know or understand the science, but have a large political stake in a variety of abstinence-only measures aimed at limiting energy use, while opposing nuclear power and research into geo-engineering. To be a “Warmist” in my definition you have to hold the anti-scientific belief that “the science is settled” and be hostile to anything outside of the “abstinence only” envelope of “solutions”.

Posted in epistemology, physics, science, thermodynamics

Where is the Warming? Part 1: Vancouver

“Global mean temperature” is an utterly meaningless number. It isn’t quite on the scale of “kWhr of power” but it’s close. It is some kind of geometrically weighted arithmetic average that is simply physically meaningless. The only correct way to compute it is to do the following:

  1. Measure temperature and humidity on a reasonably fine grid
  2. Infer the local tropospheric heat content from those data
  3. Sum the local tropospheric heat content over the whole globe
  4. Divide by the average tropospheric heat capacity, or the heat capacity of some kind of standard or model atmosphere

The trick is that the wetness of the troposphere significantly affects its heat capacity, and not by a small amount. The heat capacity of water is a thousand times that of air, so going from 0% to 100% humidity can make a big difference. 25 C air at 100% relative humidity has 20 grams of water vapour for every kilogram of air, so the water vapour has 20 times the heat capacity of the air. This is why arithmetic “global average temperature” is meaningless: it tells you nothing about the temperature that the system would have if allowed to come into global equilibrium. For example, a cubic metre of 10 C air at 100% relative humidity mixing with a cubic meter of air at 20 C and 100% relative humidity would reach an equilibrium temperature of 17 C or so. If it mixed with dry air at 20 C the equilibrium temperature would be about 10 C: there just isn’t enough energy in dry air to matter.

Maybe it’s just me, but I like my public policy to be based on moderately correct physics.

So rather than taking a meaningless arithmetic average, it should be possible to look at ground stations and find a bunch of them that show dramatic warming. We’ll start with Vancouver, BC, Canada, just because. The maximum temperature looks like this:

Maximum Daily Temperature in Vancouver, 1898 – 2013


The fit to the data shows negligible warming: 2.4 mC/year, with an error of +/- 1.0 mC/year.

If we look at the minimum temperature the picture changes a bit:

Minimum Daily Temperature in Vancouver, 1898 – 2013

In this case the warming is closer to the canonical 1 C per century, but the fact that the effect is confined to nighttime temperatures tells us it is more probably urban heat island effects, not global warming. As cities grow and land use changes the nighttime temperature increases quite dramatically. These data are from stations at Vancouver airport and from a station labeled “Vancouver PMO” whose location I’m not sure of. The airport data cover 1935 onward and so dominate the time series, but are still expected to show an urban heat island increase with time.

To count as a climate change signal I would expect both nighttime and daytime temperatures would be significantly affected, but in this case the nighttime effect is five times larger than the daytime effect, which is far more consistent with an urban heat island than global climate change.

So we don’t see a lot of warming in Vancouver.

My expectation is that over the country there will be a tendency to see large effects as we move inland and we move north. I’ll be poking around at different datasets as time goes on and I get time for it.

Posted in physics, science, thermodynamics

More on Algorithms

A few months back I wrote a long and rambling post on why computing may be hard. I promised to write a simple model that showed how the bimodal mark distribution in CS1 could be reproduced on the assumption that there was a single skill or small set of skills that–when they fall below a particular threshold–result in abject failure, and when they are above it they result in exceptional performance.

Here is the model:


import math
import random

def sigmoid(x):
  return 1 / (1 + math.exp(-x))

# data from:
# Computer Science Education Vol 20, No. 1, March 2010, 37-71
# Anthony Robins, "Learning Edge Momentum: a new account of outcomes in CS1"
lstData = [38, 9, 8, 18, 26]
lstThreshold = [40, 50, 65, 80, 101] # upper bounds (exclusive); 101 so a perfect score still counts as an A

# Can a simple sigmoidal barrier in a single characteristic reproduce the data?
lstMarks = [0, 0, 0, 0, 0]
for nI in range(0, 100): # students
  fAbility = -5+12*random.random()
  fProbability = sigmoid(fAbility/2)
  nMark = 0
  for nJ in range(0, 100): # marks
    if random.random() < fProbability:
      nMark += 1
  for (nK, fThreshold) in enumerate(lstThreshold):
    if nMark < fThreshold:
      lstMarks[nK] += 1
      break
print(lstMarks)

It considers a class of 100 students in a course with 100 available marks, and asks how likely each student is to get them if there is a sigmoidal barrier to entry, which represents some basic skill with a threshold such that if you're over it the marks are easy and if you're under it the marks are hard.

With a little bit of hand-tweaking of the parameters, the model does a reasonable job of reproducing the data, where the Data and Model columns give the percentage of students who fall into the given mark range:

Mark (range) Data Model
E (0-39%) 38 34
D (40-49%) 9 8
C (50-64%) 8 12
B (65-79%) 18 14
A (80-100%) 26 32

This could probably be improved upon by fitting the parameters to the data, but it's enough to make the point as-is. This is no great surprise--or shouldn't be. A model with a nonlinear response to an incremental increase in skill results in an outcome that is nonlinear relative to skill. But the specific bimodal form is not necessarily what one would naively expect, and my intent is to focus attention on the question: is there some skill (or small set of skills) that varies widely in the population and that we know is critical to programming success?

I believe this is almost trivially true: developing ordered sequences of events to carry out specific tasks--algorithm development--is a completely unnatural activity that many people are really bad at. The average person is mediocre at following instructions. Actually creating them is much more difficult. And our evolutionary history does not suggest that the ability to create ordered sequences of instructions to carry out specific tasks has ever been strongly selected for, so we'd expect it--like any weakly selected trait--to have a pretty broad distribution in the population.

The model demonstrates that a broad distribution of ability when applied against a skill that has a highly non-linear contribution to success will result in a bimodal distribution of results of the kind observed in introductory computing (CS1). This does not mean that there is a "gene for programming" any more than there is a "gene for height", but that one or a small number of skills that must be above a given threshold to succeed in CS1 can easily explain the data.

I've suggested that the ability to create new linear temporal sequences to generate particular outcomes could be one such skill. Representational thinking may be another--the ability to let one thing stand for another, often nested several deep.

As such, I see no need for more complex models to explain the data until this one is exhausted, and it has not been yet. Until it has, the contention that "some people just don't get it"--that is, some people have the continuously-distributed skill in too small a degree to get over the sharp barrier to entry--remains plausible.

[Edited for clarity 2014-06-24]

Posted in psychology, software

So How About That Hiatus, Eh?

There’s been much ballyhooing about the “hiatus” in the thermodynamically meaningless “global average (dry bulb) temperature”. Everyone reading this knows what “dry bulb” means, of course, because it would be ridiculous to have a strong opinion on climate change and not know the difference between dry-bulb and wet-bulb temperatures, and while humanity is ridiculous (I know I am) it is simply unimaginable that we are that ridiculous, and obviously what we can’t imagine can’t be true. Right?

Snark aside (or as far aside as I am ever able to put snark… say 10 cm), what’s with the growing discrepancy between unphysical model results and thermodynamically meaningless global average temperatures?

In one corner we have Denialists who can’t imagine humanity having a big effect on the climate, claiming that climate models are now in error by more than their error bars.

In the other corner we have Warmists claiming it’s all cherry picking and 1998 was an ultra-warm year and Benghazi! No, wait, that’s the other bunch of irrational bullies.

In my professional life I do a lot of robust estimation, so I’m sensitive to issues like cherry picking, and the Warmists aren’t entirely wrong when they say there is something fishy about “statistics” that claim the world hasn’t been this warm since the last time it was this warm.

As such, I thought it might be fun to look at the problem like a physicist, rather than a climatologist or an economist. So this is what I did…

The obvious way to avoid cherry picking the start date for any particular fit to the data is to work backward, from today, and see what happens as you either move a window along or extend a window wider, taking in more and more of the past.

The obvious way to fit the data is with a median fitter. Median fitting finds a line such that half the data are higher and half the data are lower. As such, outlier years like 1998 don’t count for much. It’s hard to make a median fit go wrong.

So I ran a median fit on the most recent Hadcrut data for the thermodynamically meaningless “global average temperature”, starting from the present day and working backward.

The first thing I did was simply take in a larger and larger window, going back from the present. I started with a one year window, fitting the monthly data and running the window wider and wider (if I were doing this for publication I’d probably do the yearly data to show that the averaging method is irrelevant, but this is a rainy evening spent with a bottle of wine and I can’t be arsed to dig that deeply.)
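
Here is a rough sketch of the expanding-window calculation. It is not the code behind the figures: it uses a Theil-Sen style estimator (the median of all pairwise slopes), which is one common way of doing a robust median fit, and it assumes the monthly anomalies are already loaded into a plain list with the most recent month last:

import numpy as np

def median_slope(values):
    """Theil-Sen style robust slope: the median of all pairwise slopes,
    with time measured in years (12 samples per year)."""
    t = np.arange(len(values)) / 12.0  # months converted to years
    slopes = [(values[j] - values[i]) / (t[j] - t[i])
              for i in range(len(values)) for j in range(i + 1, len(values))]
    return np.median(slopes)

def expanding_window_slopes(monthly_anomalies, min_years=1):
    """Median slope of an expanding window anchored at the most recent month,
    growing backward one year at a time."""
    results = []
    years = min_years
    while years * 12 <= len(monthly_anomalies):
        window = monthly_anomalies[-years * 12:]
        results.append((years, median_slope(window)))
        years += 1
    return results

The brute-force pairwise version is slow for a 150-year monthly record; scipy.stats.theilslopes does the same job far more efficiently, but the explicit version makes the idea plain. A sliding one-decade window is the same loop with a fixed window length.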

The median-fit slope for an expanding window from the present day (May 2014)

The result is pretty clear: there is some noise in the smaller windows, but when things settle down there is a real minimum on a decade times-scale. This is clear long before 1998 gets folded into the mix, although you can see the sharp dip that 1998 produces in the graph (the “Year” is the lower end of the fitting window… the upper end is fixed at May 2014.)

So please let’s shut up about “cherry picking”. That low slope value in the early 2000s? Not cherry picked. It’s just there, in the data. It’s real.

Now obviously, it would be anti-scientific and wrong to just leave it at that. We see that there is a minimum in the slope of the thermodynamically meaningless “global average temperature” on a one-decade time-scale. There’s also a peak on about a thirty-year timescale, and a long-term positive value that looks pretty significant.

If we plot the fitted curves for different window sizes against the data we can see this effect pretty clearly:

“Global Average Temperature” Anomaly vs Median Fits at Various Timescales

The green and blue curves show fits to the full dataset (back to 1860) and to the past 90 years. As expected, they are not too different.

The purple line for the fit over the past 40 years shows the strength of the recent warming, and the turquoise line for the last decade shows the “hiatus”: the fit is almost flat, independently of any particular cherry-picked starting point. I just fit a decade. Eight or twelve years would also have the same result. It’s not hugely robust, but it doesn’t require selecting one particular instant in time, either.

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. This analysis so far suggests the idea that there is something interesting going on when we look at the last decade. The obvious way to test this is to look at every decade in the dataset. That’s what I did next, fitting one-decade windows across the full range of the data. The median slope has some interesting features.

One Decade Median Slope As a Function of Window End Year

The minimum in the decadal median slope can be seen in recent years. This is a real feature of the data, and anyone who denies it is in a state of sin. So is anyone who pays much attention to the anomaly in 1961, which is due to a particularly low period in the early ’50s. Even on a one-decade timescale climate is pretty noisy.

But… there are rather a lot of places where the decadal slope is considerably lower than today, eh what?

Furthermore, there seems to be… something of a trend. I committed a sin myself by fitting the first derivative to a straight line (this was not a median fit, but rather least-squares) and getting the green line shown in the graph. The best fit to the overall decadal median slope of the thermodynamically meaningless “global average temperature” is positive–which indicates the slope is increasing–and the size of the coefficient is about ten times the standard error.
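
For completeness, that last step is just an ordinary least-squares line through the decadal slopes themselves. Again a sketch of mine, assuming the one-decade fits have been collected into (end_year, slope) pairs:

import numpy as np

def slope_trend(decade_slopes):
    """decade_slopes: list of (end_year, slope) pairs from one-decade median fits.
    Returns the least-squares trend in the decadal slope and its standard error,
    whose ratio is the significance figure quoted above."""
    years = np.array([y for y, _ in decade_slopes], dtype=float)
    slopes = np.array([s for _, s in decade_slopes], dtype=float)
    coeffs, cov = np.polyfit(years, slopes, 1, cov=True)
    return coeffs[0], float(np.sqrt(cov[0, 0]))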

So how about that hiatus, eh?

Posted in physics, science, thermodynamics

Spammers

May I just say you are wasting your time. I’ve read your post on this topic and it’s amazing! I’ll definitely be marking as spam the tiny fraction of your annoying gibberish that gets past Akismet.

Posted in Blog