Darwin’s Theorem


Science, religion, evolution, romance, action, siphonophores!

Darwin’s Theorem is a story about stories (the working title for a long time was “Metastory”) that’s also a mystery, a romance, an adventure, and various other things besides. Not quite science fiction, perhaps excessively didactic… think of it as “Dan Brown meets ‘Origin of Species’.”

If you like to see plot, action and strong characters deployed in the pursuit of big, speculative ideas, you should check it out!

Posted in marketing, writing

My Career So Far

Career (v. intr.): To rush headlong or carelessly; to move rapidly straight ahead, especially in an uncontrolled way.

In thirty years I’ve covered a fair bit of ground, and it’s definitely been headlong, careless and uncontrolled. I’m not sure “straight ahead” quite fits, so maybe I’ve had a “careen” more than a “career” so far.

Everything is an exploration. I got into engineering because I wanted to know how things were made, so I could make things. I got into physics because I wanted to know how the universe is made, although I don’t currently have any plans for making one. I was fortunate to go to a school that had an engineering physics program, and that let me move on to grad school, first in a fairly applied area and finally in some pretty esoteric stuff for my terminal degree.

For reasons I still don’t understand I got a post-doc at Caltech, which let me explore the rarefied atmosphere of top-tier American academia, as well as the less rarefied atmosphere of Los Angeles. During that time I designed a small neutrino detector that actually detected neutrinos using some fairly clever techniques to suppress backgrounds that were many orders of magnitude larger than the signal. That was also during the time of the 17 keV neutrino controversy, and I got to contribute to the untangling of it.

I wanted to raise my kids in Canada, though, so my excursion in the US didn’t last long, and as I changed countries I changed fields to medical physics, and spent a year up to my neck in megavoltage imaging. The new high-speed desktop computers–the 386 and 486–and peripherals were letting us do things that had been impossible a few years before, like capture and process images in realtime on a few thousand dollars worth of hardware. It was an exciting time, and the “pseudocorrelation” image registration algorithm I invented at that time turns out to be one of the more useful things I’ve done. It was based on some numerical techniques I’d learned as a pure physicist running radiation transport simulations, but applied to an imaging problem. Cross-pollination at its best.

The lure of pure physics brought me back to Kingston and the Sudbury Neutrino Observatory, where I worked on detector calibration issues for several years. I will always be grateful to Queen’s for giving me such an enormous range of opportunities. Pretty much every opportunity I’ve had is directly traceable to Queen’s.

I wanted to stay in Canada and also give my kids a life beyond the poverty line, so leaving academia seemed like a wise move, although it was easily one of the most difficult things I’ve done. I had always assumed I’d be a career academic, but there was both a lack of jobs in my field and a lack of interest on my part in doing the things one needed to do to climb that particular ladder.

I’ve always believed that it’s incoherent to say you want to do something if you don’t want to do the things that doing that thing entails. Want to be a musician? Better want to practice and study diligently, to go where the jobs are, to manage your career, and so on. Those things are what “being a musician” means. Likewise, being a professor means a lot of things, many of which I wasn’t very well-suited to. I loved hands-on research, but that’s not what profs spend most of their time doing. It was a disappointing and difficult realization, and one that took years for me to really wrap my head around. I can be kind of slow that way.

In the meantime, the dot-com boom was echoing across the land. A local company–Andyne Computing–was hiring anyone who was warm and breathing and had a little Unix experience. I had expanded my job search outside of academia and was sending off résumés everywhere that looked remotely plausible. They turned me down for the job I’d originally applied to but would “keep my application on file”, which turned out not to be a euphemism for trashing it. A few months later I got a call and within a month or so I had jumped from academia to industry.

It was probably the most challenging time of my life, personally. The academic world has a lot going for it, but it has toxic elements that are difficult to see from the inside. The most important one is its hyper-competitive nature. In academia, everyone is a threat to everyone else’s advancement. Resources are few and finite, and any piece of the pie that someone else gets is one that you don’t. For all its superficial collegiality, the academic world is necessarily tense and hostile under the skin.

The business world is totally different. There, everyone is a resource to help grow the company. There is still politics, obviously, but one of my biggest revelations in my years at Andyne (later Hummingbird) was how much healthier the place was from a psychological point of view than even the best of the academic milieu. Academics who haven’t experienced both worlds will probably sneer at this, but for me at least the business world was just a much better place to be.

At the same time, I was getting interested in starting my own business, so I eventually moved on to explore the far end of the commercial ocean, where small businesses and startups live, and which turned out to be more frequently swept by storms and squalls than the sunny climes of large corporations.

I’d been through a downsizing at Hummingbird, but as the dot-com boom turned into the crash, smaller places were getting killed off at a rate sufficient to make f’dcompany a high-traffic website. The startup where I had been the first employee, putting together a computer-assisted surgery platform from scratch based on a lot of academic work done at Queen’s and KGH, folded, followed a year later by the genomics startup I jumped to after that. During that process I got to see the other side of downsizing, as I shrank my team to help lengthen the company’s runway.

Facing unemployment, I cut a deal with Parteq, the intellectual property arm of Queen’s, and licensed my former employer’s technology as part of a deal to help find a permanent home for it. My own company, Predictive Patterns Software Inc, came out of that deal.

I made some good judgements and some bad judgements over the next few years, and by the time the dust had settled PPS was a scientific and software consulting company with as much business as I could handle, including a great contract implementing my pseudo-correlation algorithm for an extremely complex intra-operative multi-modal registration procedure. I worked on everything from ophthalmic ultrasound to cardiac imaging to advanced database design for clients ranging from university labs to some of the largest corporations in the world. The only thing I wouldn’t work on was stuff whose primary purpose is to kill people: I have no interest in being a deadweight loss.

When the financial crisis came rolling across the world my client base went strangely quiet. I was in the business of outsourcing rocket science, and in those uncertain times no one was spending money on anything the least bit speculative. Fortunately, one of my earliest clients had just been bought by a multi-national, and I was asked to come on board full-time. I took that opportunity, and while I continued to do a little consulting as the world recovered from the crisis, I’ve been living that alternative lifestyle known as “having a job” for almost six years now, which is almost twice as long as I’ve ever been employed by anyone other than myself.

Today, as it happens, is the last day of that employment. Corporate strategies and needs change, and what was once a business unit dedicated to new product development is being turned into one dedicated to supporting existing products. C’est la vie.

This leaves me at a juncture. I’ve done a lot of stuff in the past thirty years, and I’ve got another decade or two left in me. What to do with it?

As I said above, I’ve always been an explorer, and as such I’m drawn toward the new. While I’ve done a lot of project management over the years, it has rarely been my sole responsibility, and I’m thinking that may be one place to go next. While I’m proud of the technical work I’ve done, some of my most satisfying memories are of solving managerial problems, because they make the biggest impact on the people immediately around me.

I’m applying to some writing positions because that’s something else I’ve always wanted to do for a living. I’ve been selling my work in a small way for decades. Maybe it’s time to turn that into something more. It’s a difficult business, but what worthwhile thing has ever been easy?

There are other avenues of approach too. I’m not completely closing the door on further technical positions, or even going back to consulting, but again: I’d rather do something new than repeat anything I’ve done before. That means I don’t capitalize quite so much on past experience as I might like, because companies increasingly want to hire people who have already done the exact job they are looking for, but that just means I won’t end up working for people quite that myopic.

Or I could do something completely different, so different that I’ve not even thought of it yet.

So it’s an interesting time to be alive, and since I’ve had about five weeks vacation in the past fifteen years, I’m going to take advantage of the current gap to enjoy the sunshine and some of what the city has to offer while I look for the next landscape to explore.

Posted in life

Making a Scene

Carrie and I have just completed the ICI Open Scenes gym, and despite only having half an operating brain right now I want to get a few notes down before it all goes “poof”. The instructor, Brian Anderson, was excellent–consistently high-quality instruction is one of ICI’s trademarks–and the class really drove home several points for me.

1) Quiet is good. This is something that has been emphasized throughout the ICI program, but it takes some getting used to. I am a big, loud guy, and I like large, dramatic scenes. But lesson the second is…

2) …it ain’t about what I like. It’s about what the audience likes (this comes as a shock, I’m sure.)

3) There are a whole bunch of concrete techniques for giving the audience a good experience.

4) Show, don’t tell. The flip-side of my enjoyment of big, loud scenes is that I’m an intellectual and like to over-analyze things and over-talk things.

The class was a great mix of diverse talents. People have different styles and proclivities, and if I wanted to divide them into three types, I’d call them “dramatic”, “intellectual” and “quiet”. None of those names are very good, and everyone has some of each, but in most people one of the three runs a bit ahead of the others. Maybe “quiet” should be “characterful” or something. I dunno.

“Dramatic” is probably my dominant tendency, as I said, followed by “intellectual”. But “quiet” forms the best foundation for a scene. This again has been talked about a lot in various ICI courses: let the scene start slowly, quietly, and build up naturally. We covered this in Armando and Story in particular. Let the scene eventually take off into the stratosphere if you like, but lay the solid foundation of the ordinary first. It works better for the audience and creates better theatre.

I believed this going into the course, but it was a great exercise in seeing the principle in action, and maybe learning some habits that will help me start scenes quietly and let them build from an unassuming foundation.

One of the most delightful and illuminating exercises was done in only one scene, by three people: use just one-word sentences. It was set in heaven, and it was brilliant. The restriction to single words both required the actors to “show don’t tell” and forced them into quiet mode. Almost everything was communicated by emotional reaction and body language.

Which is another lesson that had been covered before, but that I’m still wrapping my head around well enough to remember to apply in the moment:

5) A big emotional reaction can imbue another player’s work with great significance to the story and the audience.

Sometimes all you have to do is react. It doesn’t have to be extremely well-thought-out. Just treat what your scene partners have done as important… because it is, and if you treat it as important the audience will treat it as important. Which matters, because apparently we are up there to entertain the audience. I keep emphasizing this because although it’s obviously true, it’s so much fun to be on stage with the people I’ve met at ICI that it can be easy to forget that entertaining the audience, not enjoying myself, is the point.

Also, on reflection, pretty much ignoring the audience is a bad artistic habit I have from being a poet. Since no one (to a good approximation) is ever going to read my poetry, I can be entirely self-indulgent in what I create, writing for myself alone, plus Hilary and Carrie and maybe one or two others. In improv, even a small audience–even just the rest of the class–is comfortably larger than the audience for most of my poetry, and they are right there in front of the stage, not scattered across the world from Serbia to Brazil. As such, it probably makes sense to pay more attention to what they want.

Brian had a lot of insights into what audiences generally want and how to give it to them, from the observation that low-status happy characters are instantly sympathetic to suggestions as to how to exploit characters’ vulnerabilities for the audience’s entertainment. If a character wants something badly it’s gotta look like they aren’t going to get it, so the audience will get a rush when they finally do (or, on hopefully rare occasions, don’t.)

Those practical lessons were valuable, but the biggest thing was the practice of the quiet scene. I think (I hope) over the course of the course my scene work got lower-key, in keeping with my long-term goal of talking less, being less loud, less dramatic.

Quiet scenes are not the only good scenes by any means, but quiet scenes–especially scenes that start quietly and develop naturally–have the best chance of success. Maybe they’ll get big and dramatic. Maybe they’ll get allegorical and intellectual (it does happen) but either way they’ll have a chance to do it on a foundation of some well-established characters in a solid environment with strong relationships. Quietly.

Posted in improv, life, poetry

Many Interacting Worlds

There’s a quite clever spin on the Many Worlds Interpretation, just published, that solves some of the big problems but remains implausible to my jaundiced eye.

The idea is disarmingly simple and inherently non-local: suppose that there exist multiple worlds so each particle has N “avatars” (my term, not theirs) that exist in different universes. By hypothesis, the avatars don’t interact with each other except when every avatar in one universe is in almost exactly the same state as their companion avatar in an adjacent universe.

The inherent non-locality is manifest: it’s as if you put the finger-tips of your two hands together, but unless every finger was lined up with its opposite number, there would be no interaction between them. Try to put just the index fingers together and they’d pass through each other. But put all four fingers and your thumbs together and you can use one hand to push on the other. This is as non-local as it gets.

There is a sense in which they are reconceptualizing the abstract configuration space of the de Broglie-Bohm theory as a concrete set of real universes, which is definitely a more satisfactory ontology. Their inter-universe potential is closely related to Bohm’s quantum potential, and they motivate the theory with reference to Bohm’s work.

They also get to Born’s rule without a lot of flailing around, although their argument isn’t completely general (it requires that the initial configuration be a solution of Schrödinger’s equation, amongst other things.) Still, this is a very nice feature, because the need to impose Born’s rule by hand is one of the most compelling critiques of Everett’s original Many Worlds idea.


Their analysis of quantum tunneling, while clever, reveals what looks to me to be a fundamental problem.

Their analysis goes like this: imagine a single particle with two avatars (in two different but closely adjacent universes) approaching a potential barrier that they don’t have enough energy to get over. The inter-universe potential results in a repulsive force between the avatars, causing one of them to speed up and the other to slow down. If the universes are close enough together, the speedier avatar will gain enough energy to pass over the barrier, while the slowed one will not.

The problem is: in the universe where the avatar makes it over the barrier, where does the energy go?

In real experiments, particles that penetrate the barrier have the same energy they started out with, once they’re through. If the inter-universe potential were the explanation for barrier penetration we would be wondering, “Where does the extra energy come from?” not “How does a particle with insufficient energy get over the barrier?”
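The energy bookkeeping is easy to make concrete in a toy model. The sketch below is my own construction, not the model from the paper: two classical avatars of a single particle move in one dimension toward a Gaussian barrier, with a short-range Gaussian repulsion standing in for the inter-universe potential. All the parameters are invented for illustration.

```python
import math

# Toy 1-D sketch (my own, illustrative parameters): each avatar alone has
# kinetic energy 0.5, below the barrier height V0 = 0.6, so neither should
# classically get over. The inter-avatar repulsion changes that.
V0, W = 0.6, 0.5   # barrier height and width: V(x) = V0 * exp(-(x/W)**2)
A, S = 0.5, 0.5    # repulsion strength and range: U(r) = A * exp(-(r/S)**2)

def barrier_force(x):
    # F = -dV/dx for the Gaussian bump centred at x = 0
    return 2.0 * V0 * x / W**2 * math.exp(-(x / W) ** 2)

def pair_force(r):
    # repulsive F = -dU/dr between the two avatars, with r = x1 - x2
    return 2.0 * A * r / S**2 * math.exp(-(r / S) ** 2)

def simulate(dt=1e-3, steps=40_000):
    x1, x2 = -5.0, -5.2   # leading and trailing avatar
    v1, v2 = 1.0, 1.0     # same initial velocity in both universes
    for _ in range(steps):  # semi-implicit Euler integration
        f = pair_force(x1 - x2)
        v1 += (barrier_force(x1) + f) * dt
        v2 += (barrier_force(x2) - f) * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return x1, v1, x2, v2

x1, v1, x2, v2 = simulate()
print(f"leading avatar:  x = {x1:+.1f}, KE = {0.5 * v1 * v1:.2f}")
print(f"trailing avatar: x = {x2:+.1f}, KE = {0.5 * v2 * v2:.2f}")
```

The leading avatar gets pushed over the barrier and, once the pair has separated, keeps the energy it gained from the repulsion: it emerges with kinetic energy well above its initial 0.5, while the trailing avatar is reflected. That surplus is exactly the problem: real tunneling particles come out with the energy they went in with.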

It may be that in a more complete model this problem can be fixed up, but it isn’t obvious how. The whole point of this approach is that the dynamics are completely classical, up to the non-local, configuration-dependent strength of the inter-universe quantum potential. As such, it should be that any account of tunneling ultimately hits up against the same problem: the particles (avatars) we observe passing through the barrier should have more than enough energy to do so. That isn’t what we see, which calls the whole approach into question.

Posted in physics, prediction, quantum

Some Notes on Armando

Carrie and I just completed an ICI Gym program in Armando, which is a variant of long-form improv. Long-form tends to have a basically similar structure: there is a (possibly repeating) source of inspiration that the improvers draw from. (I like to use “improver”, pronounced “improv-er”, rather than “improviser”, both because it’s more specific to the art of improv than to, say, improvising a fan belt out of a pair of pantyhose, and because it evokes “improve”, which is what we’re all trying to do.)

In Armando the source of inspiration is a “monologist” who typically gives three or four monologues over the course of 20 or 30 minutes. The monologues are true stories about the monologist. There are various reasons for this: it increases emotional authenticity and tones down the more risqué stuff, both of which are good things. Sex makes for easy humour, and who wants to do the easy thing?

We had monologues tonight about people learning to steal as kids, about the annoying behaviour of parents, about the stupid things we do as young adults, and so on. All gold-mines of material.

The way the material is used is as a source of inspiration, not as a literal plot. The improvers don’t act out the story of the monologue, but rather take some point in it and use that as the basis for a scene. The scene may be long or short, it may involve only one person or half a dozen. It may be intense, laid-back, wandering, directed, whatever. It isn’t always obvious (to me, anyway) how it links back to the monologue, but one of the fun things about improv is it gives you a glimpse inside other people’s minds: how they think, what seems natural to them.

In the Age of the Internet the notion that other people see the world in fundamentally different ways should not be a surprise, but C.S. Lewis’ observation that every child at the age of ten believes “the kind of fish-knives used in her father’s house were the proper or normal or ‘real’ kind, while those of the neighbouring families were ‘not real fish-knives’ at all” would seem to still apply to adults as well. We just don’t often find ourselves in situations where it matters what kind of fish-knives we or anyone else thinks are “real”.

Improv forces us to actively seek out and understand our fellow-player’s everyday understanding of common-place ideas, and we frequently discover it differs a good deal from our own. This is often quite fun.

The Armando class happened to have a bunch of people who had played together before, either in previous classes or in informal jams. This made for a great learning environment, facilitated by a great teacher, Margaret Nyfors.

These notes are just my quick impression of the class overall, which ran for four weeks in two-hour evening sessions. This seemed like a really good format, because the time just flew by and the work was exhausting.

At the beginning it was really challenging. Figuring out how to recognize and pull out ideas, figuring out how to contribute to a scene or when not to, and figuring out how to end a scene–which is usually done by “swiping”, in which a player (who may be in the scene) runs across the front to signal the end–were all difficult. By the third class we were getting the hang of it: the group was friendly and positive, and we all realized we were having similar issues and making sure everyone was supported as we struggled with them.

Here is an incomplete list of things I think I learned, in no particular order, noted down for my own use, but maybe they’ll be useful to others as well. Some are specific to Armando, most are more general:

  1. Any idea will do. Today there was one scene started on the basis of the monologist stumbling over a word that she couldn’t remember. The player starting the scene began telling a story to someone she drew in from the group, and as more and more people gathered round to hear it she began fumbling with words, blanking on completely commonplace terms, and finally getting into an argument over her story-telling ability. It was a great demonstration of how you can pick up one minor quirk of delivery and turn it into a scene.
  2. Variety, variety, variety. Scene length. Number of players. Active vs talky. High drama vs everyday. The scenes will eventually fill out some kind of envelope of possibility, or enough of it to make it time for a new monologue, and you want to cover some ground both for the sake of the story and the sake of the audience. Too much of anything gets monotonous. If the last scene was high drama, tone the next one down a bit. If a few two-person scenes have been done, do a group scene, or a single-person scene. Change it up.
  3. Trust. I’m not a person to whom trust comes naturally, and I’ve been trained as a scientist to be even less trusting. Many years ago a colleague came and asked me about a particular piece of apparatus I was using. It was sitting on the bench but the label plate wasn’t visible and so I just told him the model number. He insisted on getting in behind it to have a look. I started to think, “How rude” when I realized I would have done exactly the same thing. We are trained not to trust, especially when there’s an easy check.

    Improv isn’t like that: trust is the default, and the more you play with people the more you trust them. I can feel the degree of trust I’m able to extend getting larger, and changing in kind as the environment I’m playing in gets more free. In a game the level of trust is not huge because the structure keeps everything safe. The more free-form the scene is the more trust is required and the more powerful trust becomes. In Armando trust spans many dimensions: you can start a scene with half an idea and trust your fellow players will find the other half. You trust that you’ll find a story to tell, or a way to close the scene. You trust that someone will swipe the scene at a sensible time.

    All that trust lets you focus on doing your job in the scene, which may be staying out of it entirely. I’m still working on this.

  4. Low-drama scenes are often better. Let the story develop. Let the characters speak and behave quasi-naturally. Let the humour come out of their authentic interaction. Don’t start too high, but let the absurdity of the scene ramp up over time, starting with something normal and finding an unforced path to something weird, but logical in its own way.
  5. Cats are nice (not relevant to Armando, but I’m writing this with a cat on my lap and I figure if I mention him he might settle down and stop clawing my arm and let me get it done.)
  6. Special for men: be charming. If you’re a jerk, make sure you get your comeuppance before the scene is done.
  7. Special for women: be nice to each other. Don’t be catty or hostile.
  8. It’s always about the relationship. Sure a scene may be in a video store, but it’s not about videos. It’s about the relationships between the people there.
  9. Who am I, where am I, and what am I doing? If you can answer those three questions you know why you are in the scene and probably what it is about.
  10. Who or what is the scene about? Once this is established, be wary of further offers that could confuse or overwhelm it. The scene doesn’t need them. Don’t think that because you have a great idea that the scene needs it. I’m still working on this.
  11. For the monologist: pay careful attention to the scenes and decide when to swipe the whole thing and start a new monologue. The monologist is the onlie begetter in Armando, and is responsible for providing fertile ground to the players and deciding when the ideas from one monologue have been mined out and it’s time to start another.

That’s all I can think of at the moment. It’s late, I’m tired, and it was a great class. ICI rocks.

Posted in improv

Some Notes on Being Bad at Stuff

I’m not a big fan of the Sapir-Whorf hypothesis, which says the language we use limits the ways we can think. The fact is we routinely create new language when we need to think new thoughts. But old language can tend to nudge us in problematic directions.

Consider the following statements:

1) “I am a student of X.”
2) “I am bad at X.”
3) “I am working on learning X.”
4) “Every time I do X I make big mistakes.”

We tend to sort these statements into two categories: “learner/student: good!” and “mistakes/bad: bad!” Yet in fact the “negative” statements are necessary consequences of the “positive” statements. Although there are lots of reasons we might be bad at something, one of the more common ones is that we’re just learning how to do it.

We need a word for “student-bad” rather than “incompetent-bad”. As it stands, we always say “I am bad at this” regardless of the reason, and that tends to make learning more emotionally difficult than it needs to be. At any level of learning there are certain mistakes we don’t want to make. But at the level we are at, there are mistakes we need to make, mistakes we are here to make, because making those mistakes–recognizing them for what they are, watching and analyzing how we went wrong, and figuring out ways to avoid them–is precisely how we learn in almost all cases.

“Book learning” is a bit different: it is a way to absorb information and maybe even knowledge without making many mistakes, and that’s a good and useful thing, but it only covers a small fraction of the totality of knowledge and skill.

The Maker movement has been not bad about lionizing the willingness to make mistakes, but we still don’t have a word for “Whenever I do X I make-the-kind-of-mistakes-I-need-to-make-to-learn-how-to-do-X-better,” and we need one.

Maybe such a word exists in some other language that we can steal in the finest tradition of English… my first suggestion is “litost”, from the Czech meaning “a state of torment created by the sudden sight of one’s own misery”.

Posted in epistemology, language, life, making

A Closed-Form Argument about Climate Change

I’ve been a critic of over-sold climate models for many years now. I am a computational physicist, and therefore–unlike climate scientists–am professionally qualified to judge the predictive quality of climate models.

I mention this because apparently many people think it important that whenever a physicist like Freeman Dyson says anything about climate change that people be reminded that physicists “are not climate scientists” as if that invalidates the points they make. In the present case, those self-same people ought therefore to be willing to dismiss anything climate scientists say about climate model accuracy in favour of the judgement of someone who is properly professionally qualified.

Personally, I think that’s a load of bollocks, but as I say: a surprising number of people take this notion very seriously.

And in fairness, climate scientists’ lack of expertise in the broader field of computational physics really is problematic. Because they haven’t spent most of their careers working with models that can be tested in detail, they seem to have very little notion of the insurmountable difficulties involved in building predictive models from imperfect physics.

In any case, I’ve been very interested in constructing a simple argument that does not depend on climate models in any kind of detail, and that answers the question, “Is climate change anthropogenic or not?”

That Earth’s climate is changing is pretty clear. Direct measurements indicate that about 1 W/m**2 is being added to the Earth’s heat budget, and a number of additional measurements, particularly ocean temperature profiles, support this view.
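To get a feel for the size of that number, a back-of-envelope calculation helps (the round figures below are my own, not from any particular dataset):

```python
import math

# 1 W/m**2 integrated over the whole Earth for one year
EARTH_RADIUS = 6.371e6      # m
SECONDS_PER_YEAR = 3.156e7
FORCING = 1.0               # W/m**2, the imbalance quoted above

area = 4 * math.pi * EARTH_RADIUS ** 2   # total surface, ~5.1e14 m**2
joules_per_year = FORCING * area * SECONDS_PER_YEAR

print(f"excess energy: {joules_per_year:.1e} J/yr")
```

That works out to the order of 10**22 joules per year. Since roughly ninety percent of any surface imbalance ends up in the ocean, a real imbalance of this size should show up as an ocean heat content trend of the same order, which is the kind of cross-check the temperature profiles provide.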

From that fact alone, however, there are two things we can’t infer:

1) Where the extra heat is coming from


2) What the effects of the extra heat will be.

The first question is what I am addressing here. Granted that additional heat of about 1 W/m**2 is being added–because this is a highly plausible proposition based on multiple independent measures–how do we know if human activity is responsible? Particularly, how do we know if greenhouse gases are responsible?

There are two steps to the argument.

The first is that greenhouse gas warming has particular signatures that are not shared by any known alternatives.

The first of these is the day/night effect: nights are seeing more warming than days. For changes in external sources, like the sun, we would expect the opposite effect. For greenhouse gases, which prevent the Earth’s surface from losing heat to space at night, this is what we would expect.

The second is the altitude effect: the surface is warming faster than the upper atmosphere (which is in fact cooling). This is consistent with heat being trapped near the Earth’s surface, as happens with any greenhouse gas model.

The third(ish) is the latitude effect: higher latitudes are warming faster than the tropics. This is more-or-less consequent on the altitude effect. Because the surface is warming faster than the upper atmosphere, snow and ice are melting, which exposes more dark surface (rocks) which leads to more warming.

These effects are signatures of greenhouse gas warming.

The second part of the argument is: where are the greenhouse gases coming from? Are there natural sources?

The major greenhouse gases (other than water) are CO2 and methane, both of which have concentrations that are measurably increasing, and both of which have large human sources that we can compute by simple arithmetic. We know how much coal and oil we burn, and we have a reasonable idea of how much methane is emitted by various industrial and agricultural processes, so we can both estimate and measure our impact on the amounts of these greenhouse gases, and the numbers are roughly consistent.
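That “simple arithmetic” can be written out in a few lines for CO2. The figures below are round numbers of my own choosing, not from any specific inventory, but they are the right order of magnitude:

```python
# Does the human source cover the observed atmospheric rise?
GT_CO2_PER_PPM = 7.8       # ~2.13 GtC per ppm of CO2, times 44/12

fossil_emissions = 36.0    # GtCO2/yr, rough fossil-fuel + cement figure
observed_rise = 2.3        # ppm/yr growth in atmospheric CO2

rise_gt = observed_rise * GT_CO2_PER_PPM   # GtCO2/yr staying airborne
airborne_fraction = rise_gt / fossil_emissions

print(f"atmospheric increase: {rise_gt:.0f} GtCO2/yr")
print(f"airborne fraction:    {airborne_fraction:.2f}")
```

We emit roughly twice what accumulates: about half the CO2 stays airborne and the rest is taken up by the oceans and biosphere, which are therefore net sinks. The rise cannot be coming from some unmeasured natural source, because the natural reservoirs are, on net, absorbing.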

So here is the closed-form argument:

Observed warming has signatures (day/night and altitude, which leads to latitude) that are only consistent with greenhouse gases. We can measure and calculate the amount of greenhouse gases we dump into the air, and we know there are no other major sources (despite repeated lies about volcanoes etc.). Furthermore, the magnitude of observed warming is consistent with reasonable estimates–mostly from climate models–of the warming those emissions should produce. While climate models are not going to get the details right, we should expect them to get the size of the overall effect roughly correct: whatever else we can say, the observed warming is not orders of magnitude different from the predicted warming. They are basically on par. Given this, it is most plausible that anthropogenic greenhouse gas emissions are responsible for the observed warming.

It is not clear what the effects of the excess heat will be, because climate models are not adequate for the kind of detailed prediction that would allow us to say anything about that with much confidence. We can, however, say that if the changes are more than extremely modest, they will likely be quite expensive, because our current economy is finely tuned to our current climate, so almost any change is likely to be economically disruptive on a global scale. Hardly the end of the world, but given we have the means at hand in terms of both policy and technology to tweak global, industrial capitalism to deal with climate change without impoverishing ourselves or engaging in some known-failed strategy like “changing everything”, it behooves us to do it.

Shifting from income to carbon taxes, building solar and storage capacity, building advanced/modular nuclear capacity and investing in thorium-cycle research and fusion research as well as carbon capture and geo-engineering research, are all good policy, and carbon pricing in particular will go a long way toward fixing the problem all by itself. We know how to do this. We don’t have to turn it into a titanic battle of good against evil. All we have to do is engage in the kind of ordinary, evidence-based policy-making that has improved public health and living standards so much over most of the past century.

So let’s get on that, shall we?

Posted in economics, physics, politics, prediction, science, software, technology | Comments Off on A Closed-Form Argument about Climate Change

What Is Game Theory a Theory Of?

I’ve written about the Prisoner’s Dilemma before but wanted to revisit the point.

Game theory purports to be a theory of “rational self-interested actors” or “rational maximizers.” These are individuals who are only interested in playing the game to win. All they know about the other player is that they are also a rational maximizer, and this is necessary for the theory to say anything interesting.

If the players were not both rational maximizers, then the rational player might need a completely different strategy depending on the nature of their opponent, and game theory would be a theory of nothing much. It is only as a theory of rational maximizers that it has anything interesting to say at all.

Note that here I am talking about classic game theory, not newfangled modern inventions that study iterated games amongst semi-rational players and ask what the optimal strategy is in such circumstances. I’m talking about the historical foundations of the modern field, not the modern field.

The problem with the classic theory of deterministic symmetrical games is that in a deterministic symmetrical game all rational maximizers will necessarily choose the same strategy. To claim anything else would be to claim that we can rely on some rational maximizers to behave differently from others. That would only be the case when there are multiple strategies with identical payoffs, which is not generally true of symmetrical games, and is specifically not true of the Prisoner’s Dilemma.

Again: stochastic game theory, where no agent can be relied upon to be a rational maximizer, is a different animal. In this case, a rational maximizer’s strategy is neither unique nor obvious, so a great deal of the supposed power of game theory goes away.

But for a theory of strict rational maximizers there is no more chance of one rational maximizer in a pairwise game making a different choice than the other than there is of one mass in a physics problem falling down while an otherwise identical mass falls up. Classical physics is a deterministic theory of massive bodies, and as such all massive bodies are predicted to behave in the same way in the same situation.

In a deterministic theory of rational maximizers all rational maximizers are predicted to behave in the same way in the same situation.

It follows from this that the off-diagonal elements of the payoff matrix for a symmetric one-off game between rational maximizers are irrelevant. No rational maximizer would ever consider them, because they know that, as a matter of causal necessity, whatever they choose the other actor will choose as well.

To claim otherwise is to claim that one actor is a rational maximizer and the other actor is a random number generator, which is not what classical game theory purports to be about.
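To make the argument concrete, here is a toy Prisoner’s Dilemma with conventional illustrative payoffs (the numbers are mine, not from this post). Once symmetry confines play to the diagonal, the rational maximizer’s choice flips from defection to cooperation:

```python
# Prisoner's Dilemma: payoff[(my_move, their_move)] = my reward, higher is better.
# Conventional illustrative values: T=5, R=3, P=1, S=0.
payoff = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

# The standard analysis uses the whole matrix: defection dominates (5 > 3, 1 > 0).
# Under the symmetry argument above, identical rational maximizers necessarily
# make the same choice, so only the diagonal entries are reachable:
diagonal = {move: payoff[(move, move)] for move in ('C', 'D')}
best = max(diagonal, key=diagonal.get)
print(best)   # 'C': on the diagonal, mutual cooperation beats mutual defection
```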

I’m belabouring this point for a reason: this error of imposing an asymmetric assumption on a symmetric situation is incredibly common–to the point of being our default assumption–and it is more often than not wrong.

To take a trivial example: Patrick Rothfuss’ novella “The Slow Regard of Silent Things” was pretty well received by his first readers, but they all thought no one else would like it even though they themselves did.

This is the Prisoner’s Fallacy: the rejection of the idea that the best, most robust, first-order predictor of other people’s behaviour is your own behaviour.

The opposite of this is the Law of Common Humanity: “To first order, They are pretty much like Us.”

The Prisoner’s Fallacy comes to us so naturally that an entire industry of very smart people failed to notice it in the roots of classical game theory. Of course the players of a symmetrical game could behave differently! How could this not be?

More interestingly: how could it be? How could we come to impose asymmetry on such a symmetrical situation?

Is it simply because we cannot see from any point of view but our own, and as soon as we think about the problem we project ourselves into the mind of the nominal rational maximizer, and so spontaneously break the symmetry of the problem? Maybe. But we have no warrant to do so.

This is not a small problem. The most extreme case results in the War Puzzle: the question of why anyone would go to war when there are always better alternatives available. The reason seems to be in part that we humans tend to expect others will behave differently than ourselves: we would fight back vigorously when attacked, but they will capitulate at the sound of the first shot.

Decentering our point of view is hard. There are entire books written on it, and none of them have made much difference to the world. I don’t have any amazingly clever solution. I just wanted to point out how pervasive and easy this error is to make–so much so that I’m sure most people versed in classical game theory will deny the premise of this post, and insist that no one ever claimed classical game theory was just a theory of rational maximizers, but rather some other theory that explicitly adopted fixes to allow non-rational actors into the mix. This may be, but every popular exposition of game theory I’ve read, as well as the more technical introductions, tends to say things like: game theory is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.”

Yet no theory of intelligent rational decision-makers will admit of the possibility that in a symmetric game there will be anything other than symmetrical behaviour on the part of identical entities.

Posted in economics, ethics, politics, war | Comments Off on What Is Game Theory a Theory Of?

More on Orbital Integration

My previous post on orbital motion and climate change climbed pretty far up on Hacker News, much to my surprise–I didn’t submit it there, so to whoever did: thanks. There were a number of useful comments, mostly along the lines of “If you had bothered to be smart you would have used a symplectic integrator.”

Since I’m not very smart–or so I’m told–and I’m well-known for being the only person who has ever been wrong about anything on the ‘Net, this was surely good advice to take. I’ve not worked in areas where symplectic integrators would be very useful for very nearly longer than symplectic integrators have been a thing, so it’s always nice to get an opportunity to play with a technology that wasn’t well-known outside of a couple of specialist communities back in my post-doc days.

“Symplectic” is from the Greek, meaning roughly “co-braided”, if that’s helpful. Symplectic integrators have a very nice property: they preserve the phase-space volume in the vicinity of the trajectory. Since this is a property that reality has, having it built into a numerical method is a nice thing. In mathematical physics this principle of constant phase-space density near the physical trajectory is known as Liouville’s Theorem.

It turns out the literature on symplectic integrators still tends to the abstract. The very good book by Hairer, Lubich and Wanner (Geometric Numerical Integration: structure-preserving algorithms for ordinary differential equations) is slightly mis-titled, as it contains very few actual algorithms. You can see this effect in the Wikipedia page on the subject, which unlike the page on Runge-Kutta integrators doesn’t really give you enough information to start writing actual code unless you already have a pretty sophisticated level of understanding.

The waters are further muddied for clueless newbies because “symplectic integrators” are often casually compared to “RK integrators” or “Euler integrators”, which is like comparing “sports cars” to “German cars” as if they named disjoint categories. “Symplecticity” is a general property that can be possessed by almost any integrator, including some Euler and RK integrators, if they have the right structure and coefficients.
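A minimal illustration of what the property buys you, using the simplest pairing I know–explicit Euler versus symplectic Euler on a unit harmonic oscillator (a toy of my own, not the orbital code under discussion): the explicit method’s energy grows without bound, while the symplectic method’s energy merely oscillates in a narrow band around the true value.

```python
# Explicit vs symplectic Euler on a unit harmonic oscillator (x'' = -x).
# Toy illustration of symplecticity; not the orbital code discussed here.

def explicit_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + v * dt, v - x * dt   # both updates use the OLD values
    return x, v

def symplectic_euler(x, v, dt, steps):
    for _ in range(steps):
        v = v - x * dt                  # update the momentum first...
        x = x + v * dt                  # ...then the position with the NEW momentum
    return x, v

def energy(x, v):
    return 0.5 * (x * x + v * v)

dt, steps = 0.01, 100_000               # about 160 oscillator periods
e0 = energy(1.0, 0.0)
ee = energy(*explicit_euler(1.0, 0.0, dt, steps))
es = energy(*symplectic_euler(1.0, 0.0, dt, steps))
print(ee / e0)   # huge: explicit Euler's energy grows exponentially
print(es / e0)   # close to 1: symplectic Euler's energy stays bounded
```

Note that the only difference is a one-line reordering–using the updated momentum in the position update–which is exactly the sense in which symplecticity cuts across the Euler/RK classification rather than naming a disjoint category.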

After a bit of digging, and working my way through Hairer et al–whose parable on projection methods ought to be required reading for everyone who has ever integrated anything, and which I’ll talk about at a later date–I found that boost, inevitably, has a collection of integrators that includes various symplectic ones. If I were smart–instead of merely wanting to see how far I could push my adaptive RK4 (ARK4) integrator–I would have encountered this in the preliminary search that I deliberately didn’t do, wanting to see for myself what problems I’d run into. Sometimes the best way to learn is to reinvent the wheel.

It turns out there is even a nice example of almost precisely the problem I was trying to solve, albeit in the inevitably funny units that people always seem to find necessary. I’m an engineer as well as a physicist, and my numerical work comes into contact with engineering reality often enough that I’m pretty insistent on using SI units throughout. Keeping numbers close to unity will always give us the highest numerical precision, but by being rigidly consistent about units I wipe out a range of trivially-easy-to-make errors of the kind that destroy spacecraft now and then.

If I were smarter–and by “smarter” I mean “smarter than the people at Lockheed and NASA”–maybe flipping back and forth between units wouldn’t be such a big deal for me, but I only have so many brain cells to go around and I really do find total consistency in this regard easier and simpler. Part of this may be that I find my eye naturally saccades over the middle of the number, catching only the first few digits and the exponent, which is all that matters in most cases anyway. Many people find it harder to pick out the relevant bits from the sea of digits.

In any case I fiddled around with the example code from the link above–the masses are in solar masses, distances are in AU and time is in days, by the way: the first two are obvious, the last needs to be worked out from the value they give for G or looked up in Hairer et al, who document their units. After I’d put it into a form I liked, I ran it.
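Going the other way–assuming the time unit is the day and converting the SI value of G into AU/solar-mass/day units–recovers the constant such examples use. This is a sketch with round textbook constants, not the example’s exact values:

```python
# Convert G from SI units into AU / solar-mass / day units.
# Round textbook values; good to three-ish significant figures.

G_SI = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m
DAY = 86400.0          # s

G_au = G_SI * M_SUN * DAY**2 / AU**3
print(G_au)            # roughly 2.96e-4 AU^3 / (M_sun * day^2)
```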

My first observation was: wow, fast. There is no doubt it’s pretty quick. But how accurate is it? And does it conserve energy? My original post on this subject contained a major mistake: the energy calculation wasn’t adding in the gravitational potential properly, which made the code look like it was doing rather badly on energy conservation, rather than better than 1 part in 10^10, which is actually the case. So this second look at a different technique at least found me a bug in my own work, and I always like that.
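The bookkeeping that bit me is easy to get wrong–double-counting pairs, or leaving the potential out entirely–so here is a minimal sketch of the correct sum, in SI units as I advocate above. This is my own illustration, not the actual ARK4 code; the sanity check uses the closed-form energy of a circular two-body orbit, E = -GMm/(2r).

```python
import math

G = 6.674e-11  # SI units throughout, as the text advocates

def total_energy(masses, positions, velocities):
    """Kinetic plus gravitational potential energy of an N-body system."""
    kinetic = sum(0.5 * m * sum(vi * vi for vi in v)
                  for m, v in zip(masses, velocities))
    potential = 0.0
    n = len(masses)
    for i in range(n):                 # each pair counted exactly once
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            potential -= G * masses[i] * masses[j] / r
    return kinetic + potential

# Toy check: sun + earth on a circular 1 AU orbit.
M, m, r = 1.989e30, 5.972e24, 1.496e11
v = math.sqrt(G * M / r)               # circular orbital speed
E = total_energy([M, m], [(0, 0), (r, 0)], [(0, 0), (0, v)])
ratio = E / (-G * M * m / (2 * r))
print(ratio)                           # close to 1: matches -GMm/(2r)
```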

When I dug more deeply into the results, it looked like the length of the year was just a tiny bit off, although energy is well-conserved. I computed the Earth’s orbital radius (instantaneously relative to the sun) for my ARK4 code, my symplectic code (which uses boost::numeric::odeint::symplectic_rkn_sb3a_mclachlan, as does the example code), and NASA’s ground truth, all going back 2000 years before present. The following plot sums up the result nicely:

ARK4 vs Symplectic Radial Error over 2000 years

In both cases there is a wedge-shaped error envelope that oscillates yearly, due almost entirely to a slight drift in the length of the year. The effect is much bigger in the symplectic code than the ARK4 code.
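That wedge shape is just what a small, fixed fractional error in the orbital period looks like through Earth’s slightly eccentric orbit: the sun-Earth distance oscillates yearly, and a slowly accumulating phase drift turns into a radial disagreement whose envelope grows roughly linearly with time. A toy model (all parameters illustrative, not fitted to the actual runs) shows the effect:

```python
import math

# Toy model of the sun-Earth distance for a slightly eccentric orbit,
# r(t) ~ a * (1 - e*cos(2*pi*t/T)), comparing two orbits whose years
# differ by a tiny fraction `delta`.  All parameters are illustrative.

a, ecc = 1.0, 0.0167        # semi-major axis (AU) and Earth's eccentricity
T = 365.25                  # days per year
delta = 1e-6                # tiny fractional error in the length of the year

def radius(t, period):
    return a * (1.0 - ecc * math.cos(2.0 * math.pi * t / period))

peaks = {}
for n in (100, 500, 1000, 2000):
    # peak radial disagreement over the year starting n years in
    peaks[n] = max(abs(radius(n * T + d, T) - radius(n * T + d, T * (1.0 + delta)))
                   for d in range(366))
    print(n, peaks[n])
# The peak disagreement grows roughly linearly with n: the yearly cosine
# supplies the oscillation, the accumulating phase drift supplies the wedge.
```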

The fixed time-step for the symplectic calculation was 10,000 seconds (comparable to the ARK4 adaptive step, although there were cases where the adaptive step got much smaller) and the runtime was just over an hour on a machine where the full ARK4 calculation took about five hours, so symplectic wins on speed by a moderate factor. It does not, however, win on accuracy, where it is about a factor of eight less accurate than ARK4 at the end of 2000 years.

This is weird because the Symplectic RKN McLachlan solver in odeint is 6th order, so you’d expect it to do a better job than my 4th order one. The error seems to be independent of step-size as well–over a range from 1-day to 2-hour steps–so I’m not convinced that cutting down the step size and accepting a much longer integration time would help. Nor can I be arsed to really dig into the code and figure out what’s going on, although I’m thankful that open source makes that possible.

I’m running at long double precision–the same as the ARK4 runs–so it’s unlikely to be a simple numerical effect, although it might have to do with the order of operations in the equations for the derivatives of the Hamiltonian, some of which do involve very large numbers. I reran the code with scaled units to check this and it made no difference, except it cut the runtime by a factor of two: maybe the math processor is clever enough to figure out when the full long double width is not required, and saves some movs and whatnot?

In the meantime, I’m pleased to say that while slow, my ARK4 solver actually does pretty well on this nominally unsuitable problem. It also has a brain-dead simplicity and generality that is nice, and a kind of physical transparency that I personally find useful. It’s easy to add dissipative terms, for example, which I may not have realized were important when first setting a problem up. So as a tool for early exploration it is particularly useful and robust, especially for someone like me, who isn’t smart enough to correctly anticipate all the things that might ultimately turn out to be important in a simulation.

Finally, while doing all this playing about, I was reminded of a trick I implemented for simulating a double-focusing beta spectrometer, back in the day, and I’ve come to wonder if that trick can’t be generalized and integrated into my ARK4 solver in a way that would make it even more powerful. So that’s what I’m going to look at next. [Edit: having looked at it, I’ve decided there’s no suitable generalization.]

Posted in mechanics, physics, science, software | Comments Off on More on Orbital Integration

What’s Wrong with Marx

Today the new finance minister of Greece wrote about his fascination with Marx.

His analysis points to the fundamental flaw in Marxism while not just ignoring but praising it: it rests on the notion that “binary oppositions” completely dominate world history and social dynamics. Class struggle, class opposition, is what defines Marxist thinking.

He’s not wrong that Marx relied on manufacturing a sense of binary oppositions to drive his cartoon of history, but while such a story is great for creating drama it does a demonstrably, repeatedly, empirically terrible job as a way of either understanding history or changing the world, unless by “changing the world” you mean “changing the world into one vast prison camp”.

It is far too easy to pass from benign academic twaddle about “binary oppositions” to “us vs them” to “you’re either with us or against us… and we get to say which you are, not you.” Marxism is made for the power-mad, precisely because of this obsession with the binary oppositions Varoufakis is so enamoured of.

The insistence that binary oppositions dominate history does strongly inform the labour theory of value, but the analysis is transparently false. Labour is special because human beings have a special place in any political economy, simply because there wouldn’t be one without us. But the “binary opposition” he sees as being unique to labour is nonsense, and this would be obvious if the role of binary oppositions in his theory didn’t serve as a major distraction to analysis.

Consider for a moment the “binary opposition” between electricity’s value-creating potential, which can never be quantified in advance, and electricity as a quantity that can be sold for a price. A kWh that goes into a supercomputer to calculate the optimal shape of a machine part creates value of a kind, and in amounts, utterly unlike a kWh that goes into driving a washing machine at a laundromat.

This is precisely the “binary opposition” that Varoufakis touts as being unique to labour. It is nothing of the kind. It was nothing of the kind in the days when a lump of coal could be used to heat a pauper’s hut or fire a steel mill. All economic inputs have both an unquantifiable-in-advance value-creating capacity and a market value. Regardless of whether these aspects of any thing are opposed, they are in no way unique to labour, and so resting your theory of value on a uniqueness that labour does not have is not a winning move.

But we tend not to notice that because we are distracted. Humans are fascinated by conflict. Present us with a conflict and it will grab our limited attention, leaving very little over to say, “Hey, all this sound and fury is kind of signifying nothing.”

This is the most important role of “binary oppositions” in Marxism (and its bastard step-child, post-modernism.) It uses this simple flaw in our attentional structure to allow people to smuggle in right under our noses claims that we would otherwise easily see to be false. Our critical attention is suppressed by our fascination with conflict, and so we uncritically accept falsehoods.

Do capital and labour have somewhat different interests? Sure. Do they have many interests that are also shared, based on their common nature as human beings? Absolutely. This is not a binary opposition. This is an argument for a democratic clearing house where differences are aired and decisions made. There needs to be eternal vigilance that one side or the other (mostly the other) doesn’t gain undue influence in such a place, but the false belief that the world is dominated by black-and-white “binary oppositions” is completely unhelpful in this enterprise. It sheds no light on our legitimate differences, and trying to fix things up later on by talking about “intersectionality” between the various 1’s and 0’s is a poor patch on a broken analysis.

This phenomenon is not unique to political discourse. Many years ago I wrote a number of papers covering the theoretical, computational and experimental analysis of metal-phosphor screens for megavoltage imaging. In one of the papers I derived the correct equation for the signal-to-noise ratio in such screens, which required an understanding of how light scatters in the material that makes up the screen, which consists of fine crystals in a plastic matrix. The old theory, which mine replaced, started with the assumption that the screen was a transparent single crystal with high refractive index and infinite scattering length, and then tried to fix up the consequences of these false assumptions with heuristic correction factors. So theories that are broken in their most basic assumptions and then fixed up with heuristics are familiar territory to me, and when I look at the discourse on intersectionality it has that smell about it.

We are not a set of 1’s and 0’s in binary opposition to each other, some with the bit flipped to “labour”, some to “capital”. We are human beings, full of contradictions far more complex and diverse than this ridiculous scientistic reductionism can possibly encompass, and any theory should acknowledge this from the outset. Marxism, with its central focus on binary oppositions–particularly with regard to class struggle, but elsewhere as well, as Varoufakis correctly points out–is not such a theory. It is at best a toy model, useful for getting a sense of how some aspects of a real theory might work, but not suitable for practical analysis of the real world.

Posted in economics, history, politics, psychology | Comments Off on What’s Wrong with Marx

Dark Matter, Aether, Caloric and Neutrinos

It is fairly common today to see laypeople compare dark matter to the luminiferous aether, that bugaboo of 19th century physics whose existence was disproven by the Michelson-Morley experiment and which was subsequently made redundant by Einstein’s kinematic relativity.

Aether was invented to explain the behaviour of light–if light was a wave, the reasoning went, something must be waving–but it turned out that light didn’t have the behaviour that the existence of aether implied. We know this because Messrs Michelson and Morley built a large and sensitive optical apparatus and kept it in carefully controlled conditions while the Earth moved around the sun, relative to the supposed sea of aetheric fluid. Since the apparatus would change its velocity relative to the aether as the Earth moved, the interference fringes it created were predicted to move. They did not. Ergo, no aether, at least not of the appropriate kind. There were some variants, which also might have been subject to test had not Einstein made it all unnecessary.

It is notable what no one–not Michelson, not Morley, not Einstein, not Mach, not anyone–did: they did not draw an analogy between aether and caloric or phlogiston, both theoretical entities invoked to explain the behaviour of heat in the early 19th century, and subsequently shown to be non-viable.

There is a good reason no one did this: it is not a comparison that sheds any light on the matter. The proposition “Light is carried by the luminiferous aether” has a plausibility that is not changed one whit by the observation that “Luminiferous aether is a theoretical entity created specifically to account for an otherwise inexplicable phenomenon, just like caloric and phlogiston were.”

The reason for this is simple: the history of physics is chock full of theoretical entities created and given properties specifically to explain some otherwise inexplicable phenomenon.

So as well as comparing dark matter to aether, we might compare it to neutrinos, which were a theoretical entity invoked to explain a particular set of observations on radioactivity (the shape of the beta spectrum). Neutrinos turned out to exist.

As such, while the comparison to aether is superficially apt, it is not something we can draw any conclusions from, because the comparison to the neutrino is equally apt, and it would require us to draw the opposite conclusion.

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. It is not the discipline of testing ideas by making analogies to other ideas. There is a reason for this: making analogies to other ideas has consistently proven to be almost completely useless for creating knowledge of reality, while the discipline of science has been wildly successful.

Nor are the properties of caloric, aether, neutrinos or dark matter “magical”, which is something people who compare dark matter to aether sometimes say. The properties theoretical entities are assumed to have are merely the ones required of an entity that is able to explain our observations in each instance. In the case of caloric it turned out to have self-contradictory properties, when the full deductive closure of the theory was teased out. In the case of aether it turned out to have properties that made predictions that were false. In the case of neutrinos the required properties made predictions that were true.

In the case of dark matter: we don’t know yet, and the only way we will ever know is if we continue on with our program of systematic observation, controlled experiment and Bayesian inference. There is no other way to know.

Posted in bayes, Blog, epistemology, history, physics, science | Comments Off on Dark Matter, Aether, Caloric and Neutrinos