Darwin’s Theorem


Science, religion, evolution, romance, action, siphonophores!

Darwin’s Theorem is a story about stories (the working title for a long time was “Metastory”) that’s also a mystery, a romance, an adventure, and various other things besides. Not quite science fiction, excessively didactic… think of it as “Dan Brown meets ‘Origin of Species’.”

If you like to see plot, action and strong characters deployed in the pursuit of big, speculative ideas, you should check it out!

Posted in marketing, writing | Leave a comment

Some Notes on Armando

Carrie and I just completed an ICI Gym program in Armando, which is a variant of long-form improv. Long-form tends to have a broadly similar structure: there is a (possibly repeating) source of inspiration that the improvers draw from. (I like to use “improver”, pronounced “improv-er”, rather than “improviser”, both because it’s more specific to the art of improv rather than, say, improvising a fan belt out of a pair of pantyhose, and because it evokes “improve”, which is what we’re all trying to do.)

In Armando the source of inspiration is a “monologist” who typically gives three or four monologues over the course of 20 or 30 minutes. The monologues are true stories about the monologist. There are various reasons for this: it increases emotional authenticity and tones down the more risqué stuff, both of which are good things. Sex makes for easy humour, and who wants to do the easy thing?

We had monologues tonight about people learning to steal as kids, about the annoying behaviour of parents, about the stupid things we do as young adults, and so on. All gold-mines of material.

The way the material is used is as a source of inspiration, not as a literal plot. The improvers don’t act out the story of the monologue, but rather take some point in it and use that as the basis for a scene. The scene may be long or short, it may involve only one person or half a dozen. It may be intense, laid-back, wandering, directed, whatever. It isn’t always obvious (to me, anyway) how it links back to the monologue, but one of the fun things about improv is it gives you a glimpse inside other people’s minds: how they think, what seems natural to them.

In the Age of the Internet the notion that other people see the world in fundamentally different ways should not be a surprise, but C.S. Lewis’ observation that every child at the age of ten believes “the kind of fish-knives used in her father’s house were the proper or normal or ‘real’ kind, while those of the neighbouring families were ‘not real fish-knives’ at all” would seem to still apply to adults as well. We just don’t often find ourselves in situations where it matters what kind of fish-knives we or anyone else thinks are “real”.

Improv forces us to actively seek out and understand our fellow players’ everyday understanding of commonplace ideas, and we frequently discover it differs a good deal from our own. This is often quite fun.

The Armando class happened to have a bunch of people who had played together before, either in previous classes or in informal jams. This made for a great learning environment, facilitated by a great teacher, Margaret Nyfors.

These notes are just my quick impression of the class overall, which ran for four weeks in two-hour evening sessions. This seemed like a really good format, because the time just flew by and the work was exhausting.

At the beginning it was really challenging. Figuring out how to recognize and pull out ideas, figuring out how to contribute to a scene or when not to, and figuring out how to end a scene–which is usually done by “swiping”, in which a player (who may be in the scene) runs across the front to signal the end–were all difficult. By the third class we were getting the hang of it: the group was friendly and positive, we all realized we were having similar issues, and we made sure everyone was supported as we struggled with them.

Here is an incomplete list of things I think I learned, in no particular order, noted down for my own use, but maybe they’ll be useful to others as well. Some are specific to Armando, most are more general:

  1. Any idea will do. Today there was one scene started on the basis of the monologist stumbling over a word that she couldn’t remember. The player starting the scene began telling a story to someone she drew in from the group, and as more and more people gathered round to hear it she began fumbling with words, blanking on completely commonplace terms, and finally getting into an argument over her story-telling ability. It was a great demonstration of how you can pick up one minor quirk of delivery and turn it into a scene.
  2. Variety, variety, variety. Scene length. Number of players. Active vs talky. High drama vs everyday. The scenes will eventually fill out some kind of envelope of possibility, or enough of it to make it time for a new monologue, and you want to cover some ground both for the sake of the story and the sake of the audience. Too much of anything gets monotonous. If the last scene was high drama, tone the next one down a bit. If a few two-person scenes have been done, do a group scene, or a single-person scene. Change it up.
  3. Trust. I’m not a person to whom trust comes naturally, and I’ve been trained as a scientist to be even less trusting. Many years ago a colleague came and asked me about a particular piece of apparatus I was using. It was sitting on the bench but the label plate wasn’t visible and so I just told him the model number. He insisted on getting in behind it to have a look. I started to think, “How rude” when I realized I would have done exactly the same thing. We are trained not to trust, especially when there’s an easy check.

    Improv isn’t like that: trust is the default, and the more you play with people the more you trust them. I can feel the degree of trust I’m able to extend getting larger, and changing in kind as the environment I’m playing in gets more free. In a game the level of trust is not huge because the structure keeps everything safe. The more free-form the scene is the more trust is required and the more powerful trust becomes. In Armando trust spans many dimensions: you can start a scene with half an idea and trust your fellow players will find the other half. You trust that you’ll find a story to tell, or a way to close the scene. You trust that someone will swipe the scene at a sensible time.

    All that trust lets you focus on doing your job in the scene, which may be staying out of it entirely. I’m still working on this.

  4. Low-drama scenes are often better. Let the story develop. Let the characters speak and behave quasi-naturally. Let the humour come out of their authentic interaction. Don’t start too high, but let the absurdity of the scene ramp up over time, starting with something normal and finding an unforced path to something weird, but logical in its own way.
  5. Cats are nice (not relevant to Armando, but I’m writing this with a cat on my lap and I figure if I mention him he might settle down and stop clawing my arm and let me get it done.)
  6. Special for men: be charming. If you’re a jerk, make sure you get your comeuppance before the scene is done.
  7. Special for women: be nice to each other. Don’t be catty or hostile.
  8. It’s always about the relationship. Sure a scene may be in a video store, but it’s not about videos. It’s about the relationships between the people there.
  9. Who am I, where am I, and what am I doing? If you can answer those three questions you know why you are in the scene and probably what it is about.
  10. Who or what is the scene about? Once this is established, be wary of further offers that could confuse or overwhelm it. The scene doesn’t need them. Don’t think that because you have a great idea the scene needs it. I’m still working on this.
  11. For the monologist: pay careful attention to the scenes and decide when to swipe the whole thing and start a new monologue. The monologist is the onlie begetter in Armando, and is responsible for providing fertile ground to the players and deciding when the ideas from one monologue have been mined out and it’s time to start another.

That’s all I can think of at the moment. It’s late, I’m tired, and it was a great class. ICI rocks.

Posted in improv | Comments Off on Some Notes on Armando

Some Notes on Being Bad at Stuff

I’m not a big fan of the Sapir-Whorf hypothesis, which says the language we use limits the ways we can think. The fact is we routinely create new language when we need to think new thoughts. But old language can tend to nudge us in problematic directions.

Consider the following statements:

1) “I am a student of X.”
2) “I am bad at X.”
3) “I am working on learning X.”
4) “Every time I do X I make big mistakes.”

We tend to sort these statements into two categories: “learner/student: good!” and “mistakes/bad: bad!” Yet in fact the “negative” statements are necessary consequences of the “positive” statements. Although there are lots of reasons we might be bad at something, one of the more common ones is that we’re just learning how to do it.

We need a word for “student-bad” rather than “incompetent-bad”. As it stands, we always say “I am bad at this” regardless of the reason, and that tends to make learning more emotionally difficult than it needs to be. At any level of learning there are certain mistakes we don’t want to make. But at the level we are at, there are mistakes we need to make, mistakes we are here to make, because making those mistakes–recognizing them for what they are, watching and analyzing how we went wrong, and figuring out ways to avoid them–is precisely how we learn in almost all cases.

“Book learning” is a bit different: it is a way to absorb information and maybe even knowledge without making many mistakes, and that’s a good and useful thing, but it only covers a small fraction of the totality of knowledge and skill.

The Maker movement has been not bad about lionizing the willingness to make mistakes, but we still don’t have a word for “Whenever I do X I make_the_kind_of_mistakes_I_need_to_make_to_learn_how_to_do_X_better,” and we need one.

Maybe such a word exists in some other language that we can steal in the finest tradition of English… my first suggestion is “litost”, from the Czech meaning “a state of torment created by the sudden sight of one’s own misery”.

Posted in epistemology, language, life, making | Comments Off on Some Notes on Being Bad at Stuff

A Closed-Form Argument about Climate Change

I’ve been a critic of over-sold climate models for many years now. I am a computational physicist, and therefore–unlike climate scientists–am professionally qualified to judge the predictive quality of climate models.

I mention this because apparently many people think it important that whenever a physicist like Freeman Dyson says anything about climate change that people be reminded that physicists “are not climate scientists” as if that invalidates the points they make. In the present case, those self-same people ought therefore to be willing to dismiss anything climate scientists say about climate model accuracy in favour of the judgement of someone who is properly professionally qualified.

Personally, I think that’s a load of bollocks, but as I say: a surprising number of people take this notion very seriously.

And in fairness, climate scientists’ lack of expertise in the broader field of computational physics really is problematic. Because they haven’t spent most of their careers working with models that can be tested in detail, they seem to have very little notion of the insurmountable difficulties involved in building predictive models from imperfect physics.

In any case, I’ve been very interested in constructing a simple argument that does not depend on climate models in any kind of detail, and that answers the question, “Is climate change anthropogenic or not?”

That Earth’s climate is changing is pretty clear. Direct measurements indicate that about 1 W/m**2 is being added to the Earth’s heat budget, and a number of additional measurements, particularly ocean temperature profiles, support this view.

From that fact alone, however, there are two things we can’t infer:

1) Where the extra heat is coming from

and

2) What the effects of the extra heat will be.

The first question is what I am addressing here. Granted that additional heat of about 1 W/m**2 is being added–because this is a highly plausible proposition based on multiple independent measures–how do we know if human activity is responsible? Particularly, how do we know if greenhouse gases are responsible?

There are two steps to the argument.

The first is that greenhouse gas warming has particular signatures that are not shared by any known alternatives.

The first of these is the day/night effect: nights are warming more than days. For changes in external sources, like the sun, we would expect the opposite. For greenhouse gases, which slow the rate at which the Earth’s surface loses heat to space at night, this is exactly the signature we would expect.

The second is the altitude effect: the surface is warming faster than the upper atmosphere (which is in fact cooling). This is consistent with heat being trapped near the Earth’s surface, as happens with any greenhouse gas model.

The third(ish) is the latitude effect: higher latitudes are warming faster than the tropics. This is more-or-less consequent on the altitude effect. Because the surface is warming faster than the upper atmosphere, snow and ice are melting, which exposes more dark surface (rocks) which leads to more warming.

These effects are signatures of greenhouse gas warming.

The second part of the argument is: where are the greenhouse gases coming from? Are there natural sources?

The major greenhouse gases (other than water vapour) are CO2 and methane, both of which have concentrations that are measurably increasing, and both of which have large human sources that we can compute by simple arithmetic. We know how much coal and oil we burn, and we have a reasonable idea of how much methane is emitted by various industrial and agricultural processes, so we can both estimate and measure our impact on the concentrations of these gases, and the numbers are roughly consistent.
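
As an illustration of how simple that arithmetic is, here is a rough back-of-the-envelope sketch in Python. All of the inputs are round numbers of my own choosing (roughly 35 Gt of CO2 emitted per year, an atmospheric mass of about 5.1E18 kg, an observed rise of about 2 ppm per year), so treat it as an order-of-magnitude check rather than a careful accounting:

    # Back-of-the-envelope check that human CO2 emissions are big enough to
    # account for the observed rise in atmospheric concentration.
    # All numbers are rough, round figures used purely for illustration.
    M_ATMOSPHERE = 5.1e18    # kg, total mass of the atmosphere (approx.)
    M_AIR = 28.97e-3         # kg/mol, mean molar mass of air
    M_CO2 = 44.01e-3         # kg/mol, molar mass of CO2

    emissions_gt_per_year = 35.0   # Gt CO2/yr from fossil fuels and cement (rough)
    observed_rise_ppm = 2.0        # ppm/yr observed rise in concentration (rough)

    # Mass of CO2 corresponding to 1 ppm (by mole fraction) of the atmosphere.
    kg_per_ppm = M_ATMOSPHERE * 1e-6 * (M_CO2 / M_AIR)
    gt_per_ppm = kg_per_ppm / 1e12          # 1 Gt = 1e12 kg

    potential_rise = emissions_gt_per_year / gt_per_ppm
    print(f"1 ppm of CO2 is about {gt_per_ppm:.1f} Gt")
    print(f"Emissions alone could raise concentration ~{potential_rise:.1f} ppm/yr")
    print(f"Observed rise is ~{observed_rise_ppm} ppm/yr, so roughly "
          f"{observed_rise_ppm / potential_rise:.0%} of what we emit stays in the air")

The punchline is that reported fossil-fuel burning is more than enough to explain the observed rise; only about half of what we emit stays in the atmosphere, the rest being taken up by the oceans and the biosphere.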

So here is the closed-form argument:

Observed warming has signatures (day/night and altitude, which leads to latitude) that are only consistent with greenhouse gases. We can measure and calculate the amount of greenhouse gases we dump into the air, and we know there are no other major sources (despite repeated lies about volcanoes etc.) Furthermore, the magnitude of observed warming is consistent with reasonable estimates–mostly from climate models–of the warming expected from our greenhouse gas emissions. While climate models are not going to get the details right, we should expect them to get the size of the overall effect roughly correct, so whatever else we can say, it’s not as if the observed warming is orders of magnitude different from the predicted warming. They are basically on a par. Given this, it is most plausible that anthropogenic greenhouse gas emissions are responsible for the observed warming.

It is not clear what the effects of the excess heat will be, because climate models are not adequate for the kind of detailed prediction that would allow us to say anything about that with much confidence. We can, however, say that if the changes are more than extremely modest, they will likely be quite expensive, because our current economy is finely tuned to our current climate, so almost any change is likely to be economically disruptive on a global scale. Hardly the end of the world, but given we have the means at hand in terms of both policy and technology to tweak global, industrial capitalism to deal with climate change without impoverishing ourselves or engaging in some known-failed strategy like “changing everything”, it behooves us to do it.

Shifting from income to carbon taxes, building solar and storage capacity, building advanced/modular nuclear capacity and investing in thorium-cycle research and fusion research as well as carbon capture and geo-engineering research, are all good policy, and carbon pricing in particular will go a long way toward fixing the problem all by itself. We know how to do this. We don’t have to turn it into a titanic battle of good against evil. All we have to do is engage in the kind of ordinary, evidence-based policy-making that has improved public health and living standards so much over most of the past century.

So let’s get on that, shall we?

Posted in economics, physics, politics, prediction, science, software, technology | Comments Off on A Closed-Form Argument about Climate Change

What Is Game Theory a Theory Of?

I’ve written about the Prisoner’s Dilemma before but wanted to revisit the point.

Game theory purports to be a theory of “rational self-interested actors” or “rational maximizers.” These are individuals who are only interested in playing the game to win. All they know about the other player is that they are also a rational maximizer, and this is necessary for the theory to say anything interesting.

If both players were not rational maximizers then the rational player might need a completely different strategy depending on the nature of their opponent. Game theory would then be a theory of nothing much. It is only as a theory of rational maximizers that it has anything interesting to say at all.

Note that here I am talking about classic game theory, not newfangled modern inventions that study iterated games amongst semi-rational players and ask what the optimal strategy is in such circumstances. I’m talking about the historical foundations of the modern field, not the modern field.

The problem with the classic theory of deterministic symmetrical games is that in such a game all rational maximizers will necessarily choose the same strategy. To claim anything else is to claim that some rational maximizers can be relied upon to behave differently from others, which could only be the case if multiple strategies had identical payoffs. That is not generally true of symmetrical games, and it is specifically not true of the Prisoner’s Dilemma.

Again: stochastic game theory, where no agent can be relied upon to be a rational maximizer, is a different animal. In this case, a rational maximizer’s strategy is neither unique nor obvious, so a great deal of the supposed power of game theory goes away.

But for a theory of strict rational maximizers there is no more chance of one rational maximizer in a pairwise game making a different choice than the other than there is of one mass in a physics problem falling down while an otherwise identical mass falls up. Classical physics is a deterministic theory of massive bodies, and as such all massive bodies are predicted to behave in the same way in the same situation.

In a deterministic theory of rational maximizers all rational maximizers are predicted to behave in the same way in the same situation.

It follows from this that the off-diagonal elements of the payoff matrix for a symmetric one-off game between rational maximizers are irrelevant. No rational maximizer would ever consider them, because they know that as a matter of causal necessity whatever they choose the other actor will choose as well.
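
To make the point concrete, here is a small Python sketch using the conventional textbook payoff numbers for the Prisoner’s Dilemma (the specific values are illustrative assumptions, not anything from the original literature):

    # If both players of a symmetric one-off game are guaranteed to make the
    # same choice, only the diagonal of the payoff matrix matters.
    # Payoffs are the usual textbook values (higher = better for the row player).
    payoff = {
        ("cooperate", "cooperate"): 3,   # reward
        ("cooperate", "defect"):    0,   # sucker's payoff
        ("defect",    "cooperate"): 5,   # temptation
        ("defect",    "defect"):    1,   # punishment
    }
    strategies = ("cooperate", "defect")

    # Standard analysis: treat the other player's choice as independent,
    # in which case defection dominates row by row.
    best_vs_cooperate = max(strategies, key=lambda s: payoff[(s, "cooperate")])
    best_vs_defect = max(strategies, key=lambda s: payoff[(s, "defect")])
    print("Treating the other player as independent:", best_vs_cooperate, best_vs_defect)

    # The symmetric argument: identical rational maximizers make identical
    # choices, so only the diagonal entries are reachable, and cooperation wins.
    best_symmetric = max(strategies, key=lambda s: payoff[(s, s)])
    print("Restricting to the diagonal:", best_symmetric)

The first print gives the standard “defect, defect” answer; the second gives “cooperate”, which is the whole point of the argument above.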

To claim otherwise is to claim that one actor is a rational maximizer and the other actor is a random number generator, which is not what classical game theory purports to be about.

I’m belabouring this point for a reason: this error of imposing an asymmetric assumption on a symmetric situation is incredibly common, to the point of being our default assumption, and it is more often than not wrong.

To take a trivial example: Patrick Rothfuss’ novella “The Slow Regard of Silent Things” was pretty well received by his first readers, but they all thought no one else would like it even though they themselves did.

This is the Prisoner’s Fallacy: the rejection of the idea that the best, most robust, first-order predictor of other people’s behaviour is your own behaviour.

The opposite of this is the Law of Common Humanity: “To first order, They are pretty much like Us.”

The Prisoner’s Fallacy comes to us so naturally that an entire industry of very smart people failed to notice it in the roots of classical game theory. Of course the players of a symmetrical game could behave differently! How could this not be?

More interestingly: how could it be? How could we come to impose asymmetry on such a symmetrical situation?

Is it simply because we cannot see from any point of view but our own, and as soon as we think about the problem we project ourselves into the mind of the nominal rational maximizer, and so spontaneously break the symmetry of the problem? Maybe. But we have no warrant to do so.

This is not a small problem. The most extreme case results in the War Puzzle: the question of why anyone would go to war when there are always better alternatives available. The reason seems to be in part that we humans tend to expect others will behave differently than ourselves: we would fight back vigorously when attacked, but they will capitulate at the sound of the first shot.

Decentering our point of view is hard. There are entire books written on it and none of them have made much difference to the world. I don’t have any amazingly clever solution. I just wanted to point out how pervasive and easy this error is to make, so much so that I’m sure most people versed in classical game theory will deny the premise of this post, and insist that no one ever claimed classical game theory was just a theory of rational maximizers, but rather some other theory that explicitly adopted fixes to allow non-rational actors into the mix. This may be, but every popular exposition of game theory I’ve read, as well as more technical introductions, tends to say things like: game theory is “the study of mathematical models of conflict and cooperation between intelligent rational decision-makers.”

Yet no theory of intelligent rational decision-makers will admit of the possibility that in a symmetric game there will be anything other than symmetrical behaviour on the part of identical entities.

Posted in economics, ethics, politics, war | Comments Off on What Is Game Theory a Theory Of?

More on Orbital Integration

My previous post on orbital motion and climate change climbed pretty far up on Hacker News, much to my surprise–I didn’t submit it there, so to whoever did: thanks. There were a number of useful comments, mostly along the lines of “If you had bothered to be smart you would have used a symplectic integrator.”

Since I’m not very smart–or so I’m told–and I’m well-known for being the only person who has ever been wrong about anything on the ‘Net, this was surely good advice to take. I’ve not worked in areas where symplectic integrators would be very useful for very nearly longer than symplectic integrators have been a thing, so it’s always nice to get an opportunity to play with a technology that wasn’t well-known outside of a couple of specialist communities back in my post-doc days.

“Symplectic” is from the Greek, meaning roughly “co-braided”, if that’s helpful. Symplectic integrators have a very nice property: they preserve the phase-space volume in the vicinity of the trajectory. Since this is a property that reality has, having it built in to a numerical method is a nice thing. In mathematical physics this principle of constant phase-space density near the physical trajectory is known as Liouville’s Theorem.
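
To make “symplectic” concrete before going further: velocity Verlet (a.k.a. leapfrog) is about the simplest symplectic method there is, and a minimal Python sketch of it on a Kepler orbit looks like the following. This is purely illustrative, in SI units with the sun held fixed; it is not the boost code discussed below.

    # Velocity Verlet on a two-body Kepler orbit: a minimal symplectic example.
    # SI units throughout; the sun is held fixed at the origin.
    import math

    G = 6.673e-11          # m**3/(kg*s**2)
    M_SUN = 1.989e30       # kg
    AU = 1.496e11          # m

    def accel(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -G * M_SUN * x / r3, -G * M_SUN * y / r3

    def verlet_step(x, y, vx, vy, dt):
        ax, ay = accel(x, y)
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax2, ay2 = accel(x, y)
        vx += 0.5 * (ax + ax2) * dt
        vy += 0.5 * (ay + ay2) * dt
        return x, y, vx, vy

    def energy(x, y, vx, vy):            # energy per unit mass of the orbiting body
        return 0.5 * (vx * vx + vy * vy) - G * M_SUN / math.hypot(x, y)

    x, y, vx, vy = AU, 0.0, 0.0, 29780.0     # roughly the Earth's orbital speed
    e0 = energy(x, y, vx, vy)
    dt = 10000.0                             # s, same order as the steps discussed below
    for _ in range(int(100 * 3.156e7 / dt)): # about 100 years
        x, y, vx, vy = verlet_step(x, y, vx, vy, dt)
    print("relative energy drift:", abs(energy(x, y, vx, vy) - e0) / abs(e0))

The selling point is that the energy error stays bounded (it oscillates) rather than drifting steadily as it would for a naive non-symplectic method, which is the property the fancier symplectic Runge-Kutta-Nyström methods share.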

It turns out the literature on symplectic integrators still tends to the abstract. The very good book by Hairer, Lubich and Wanner (Geometric Numerical Integration: structure-preserving algorithms for ordinary differential equations) is slightly mis-titled, as it contains very few actual algorithms. You can see this effect in the Wikipedia page on the subject, which unlike the page on Runge-Kutta integrators doesn’t really give you enough information to start writing actual code unless you already have a pretty sophisticated level of understanding.

The waters are further muddied for clueless newbies because “symplectic integrators” are often casually compared to “RK integrators” or “Euler integrators”, which is like comparing “sports cars” to “German cars” as if they named disjoint categories. “Symplecticity” is a general property that can be possessed by almost any integrator, including some Euler and RK integrators if they have the right structure and coefficients.

After a bit of digging, and working my way through Hairer et al–whose parable on projection methods ought to be required reading for everyone who has ever integrated anything, and which I’ll talk about at a later date–I found that boost, inevitably, has a collection of integrators that includes various symplectic ones. If I were smart–instead of merely wanting to see how far I could push my adaptive RK4 (ARK4) integrator–I would have encountered this in the preliminary search that I deliberately didn’t do, wanting to see for myself what problems I’d run into. Sometimes the best way to learn is to reinvent the wheel.

It turns out there is even a nice example of almost precisely the problem I was trying to solve, albeit in the inevitably funny units that people always seem to find necessary. I’m an engineer as well as a physicist, and my numerical work comes into contact with engineering reality often enough that I’m pretty insistent on using SI units throughout. Keeping numbers close to unity will always give us the highest numerical precision, but by being rigidly consistent about units I wipe out a range of trivially-easy-to-make errors of the kind that destroy spacecraft now and then.

If I were smarter–and by “smarter” I mean “smarter than the people at Lockheed and NASA”–maybe flipping back and forth between units wouldn’t be such a big deal for me, but I only have so many brain cells to go around and I really do find total consistency in this regard easier and simpler. Part of this may be that I find my eye naturally saccades over the middle of the number, catching only the first few digits and the exponent, which is all that matters in most cases anyway. Many people find it harder to pick out the relevant bits from the sea of digits.

In any case I fiddled around with the example code from the link above–the masses are in solar masses, distances are in AU and time is in days, by the way: the first two are obvious, the last needs to be worked out from the value they give for G or looked up in Hairer et al, who document their units. After I’d put it into a form I liked, I ran it.
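
For what it’s worth, the conversion is a one-liner once you know the sun’s GM in SI units (the value below is the standard heliocentric gravitational parameter; treat the digits as approximate):

    # Express the sun's GM in AU**3/day**2, the natural "G" for a simulation
    # that works in solar masses, AU and days.
    GM_SUN = 1.32712e20      # m**3/s**2, heliocentric gravitational parameter
    AU = 1.496e11            # m
    DAY = 86400.0            # s

    gm_au_day = GM_SUN * DAY**2 / AU**3
    print(gm_au_day)         # ~2.96e-4 AU**3/day**2, i.e. G ~ 2.96e-4 in these units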

My first observation was: wow, fast. There is no doubt it’s pretty quick. But how accurate is it? And does it conserve energy? My original post on this subject contained a major mistake: the energy calculation wasn’t adding in the gravitational potential properly, which made the code look like it was doing rather badly on energy conservation, rather than better than 1 part in 10**10, which is actually the case. So this second look at a different technique at least found me a bug in my own work, and I always like that.
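
For the record, the bookkeeping involved is simple: the total energy of an N-body state is the kinetic energy of every body plus the gravitational potential energy of every pair, counted exactly once. A minimal Python sketch of that check (not the actual code from these runs):

    # Total energy of an N-body state: kinetic for each body plus the
    # gravitational potential of each *pair*, counted once.
    G = 6.673e-11  # m**3/(kg*s**2)

    def total_energy(bodies):
        """bodies: list of dicts with 'm' (kg), 'pos' and 'vel' as (x, y, z) tuples in SI."""
        kinetic = sum(0.5 * b["m"] * sum(v * v for v in b["vel"]) for b in bodies)
        potential = 0.0
        for i, a in enumerate(bodies):
            for b in bodies[i + 1:]:                      # each pair exactly once
                r = sum((p - q) ** 2 for p, q in zip(a["pos"], b["pos"])) ** 0.5
                potential -= G * a["m"] * b["m"] / r
        return kinetic + potential

Tracking the relative drift of this quantity over a run is what figures like “1 part in 10**10” refer to.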

When I dug more deeply into the results, it looked like the length of the year was just a tiny bit off, although energy is well-conserved. I computed the Earth’s orbital radius (instantaneously relative to the sun) for my ARK4 code, my symplectic code (which uses boost::numeric::odeint::symplectic_rkn_sb3a_mclachlan, as does the example code), and NASA’s ground truth, all going back 2000 years before present. The following plot sums up the result nicely:

[Plot: ARK4 vs Symplectic Radial Error over 2000 years]

In both cases there is a wedge-shaped error envelope that oscillates yearly, due almost entirely to a slight drift in the length of the year. The effect is much bigger in the symplectic code than the ARK4 code.

The fixed time-step for the symplectic calculation was 10,000 seconds (comparable to the ARK4 adaptive step, although there were cases where the latter got much smaller) and the runtime was just over an hour on a machine where the full ARK4 calculation took about five hours, so symplectic wins on speed by a moderate factor. It does not, however, win on accuracy, where it is about a factor of eight less accurate than ARK4 at the end of 2000 years.

This is weird because the Symplectic RKN McLachlan solver in odeint is 6th order, so you’d expect it to do a better job than my 4th order one. The error seems to be independent of step-size as well–over a range from 1 day to 2 hour steps–so I’m not convinced that cutting down the step size and accepting a much longer integration time would help. Nor can I be arsed to really dig into the code and figure out what’s going on, although I’m thankful that open source makes that possible.

I’m running at long double precision–the same as the ARK4 runs–so it’s unlikely to be a simple numerical effect, although it might have to do with the order of operations in the equations for the derivatives of the Hamiltonian, some of which do involve very large numbers. I reran the code with scaled units to check this and it made no difference, except it cut the runtime by a factor of two: maybe the math processor is clever enough to figure out when the full long double width is not required, and saves some movs and whatnot?

In the meantime, I’m pleased to say that while slow, my ARK4 solver actually does pretty well on this nominally unsuitable problem. It also has a brain-dead simplicity and generality that is nice, and a kind of physical transparency that I personally find useful. It’s easy to add dissipative terms, for example, which I may not have realized were important when first setting a problem up. So as a tool for early exploration it is particularly useful and robust, especially for someone like me, who isn’t smart enough to correctly anticipate all the things that might ultimately turn out to be important in a simulation.

Finally, while doing all this playing about, I was reminded of a trick I implemented for simulating a double-focusing beta spectrometer, back in the day, and I’ve come to wonder if that trick can’t be generalized and integrated into my ARK4 solver in a way that would make it even more powerful. So that’s what I’m going to look at next [Edit: having looked at it, I’ve decided there’s no suitable generalization.]

Posted in mechanics, physics, science, software | Comments Off on More on Orbital Integration

What’s Wrong with Marx

Today the new finance minister of Greece wrote about his fascination with Marx.

His analysis points to the fundamental flaw in Marxism while not just ignoring but praising it: it rests on the notion that “binary oppositions” completely dominate world history and social dynamics. Class struggle, class opposition, is what defines Marxist thinking.

He’s not wrong that Marx relied on manufacturing a sense of binary oppositions to drive his cartoon of history, but while such a story is great for creating drama it does a demonstrably, repeatedly, empirically terrible job as a way of either understanding history or changing the world, unless by “changing the world” you mean “changing the world into one vast prison camp”.

It is far too easy to pass from benign academic twaddle about “binary oppositions” to “us vs them” to “you’re either with us or against us… and we get to say which you are, not you.” Marxism is made for the power-mad, precisely because of this obsession with the binary oppositions Varoufakis is so enamoured of.

The insistence that binary oppositions dominate history does strongly inform the labour theory of value, but the analysis is transparently false. Labour is special because human beings have a special place in any political economy, simply because there wouldn’t be one without us. But the “binary opposition” he sees as being unique to labour is nonsense, and this would be obvious if the role of binary oppositions in his theory didn’t serve as a major distraction to analysis.

Consider for a moment the “binary opposition” between electricity’s value-creating potential that can never be quantified in advance, and electricity as a quantity that can be sold for a price. A kWhr that goes into a supercomputer to calculate the optimal shape of a machine part creates value of a kind, and in amounts, utterly unlike a kWhr that goes into driving a washing machine at a laundromat.

This is precisely the “binary opposition” that Varoufakis touts as being unique to labour. It is nothing of the kind. It was nothing of the kind in the days when a lump of coal could be used to heat a pauper’s hut or fire a steel mill. All economic inputs have both an unquantifiable-in-advance value-creating capacity and a market value. Regardless of whether these aspects of any thing are opposed, they are in no way unique to labour, and so resting your theory of value on a uniqueness that labour does not have is not a winning move.

But we tend not to notice that because we are distracted. Humans are fascinated by conflict. Present us with a conflict and it will grab our limited attention, leaving very little left over to say, “Hey, all this sound and fury is kind of signifying nothing.”

This is the most important role of “binary oppositions” in Marxism (and its bastard step-child, post-modernism.) It uses this simple flaw in our attentional structure to allow people to smuggle in right under our noses claims that we would otherwise easily see to be false. Our critical attention is suppressed by our fascination with conflict, and so we uncritically accept falsehoods.

Do capital and labour have somewhat different interests? Sure. Do they have many interests that are also shared, based on their common nature as human beings? Absolutely. This is not a binary opposition. This is an argument for a democratic clearing house where differences are aired and decisions made. There needs to be eternal vigilance that one side or the other (mostly the other) doesn’t gain undue influence in such a place, but the false belief that the world is dominated by black-and-white “binary oppositions” is completely unhelpful in this enterprise. It sheds no light on our legitimate differences, and trying to fix things up later on by talking about “intersectionality” between the various 1’s and 0’s is a poor patch on a broken analysis.

This phenomenon is not unique to political discourse. Many years ago I wrote a number of papers covering the theoretical, computational and experimental analysis of metal-phosphor screens for megavoltage imaging. In one of the papers I derived the correct equation for the signal-to-noise ratio in such screens, which required an understanding of how light scatters in the material that makes up the screen, which consists of fine crystals in a plastic matrix. The old theory, which mine replaced, started with the assumption that the screen was a transparent single crystal with high refractive index and infinite scattering length, and then tried to fix up the consequences of these false assumptions with heuristic correction factors. So theories that are broken in their most basic assumptions and then fixed up with heuristics are familiar territory to me, and when I look at the discourse on intersectionality it has that smell about it.

We are not a set of 1’s and 0’s in binary opposition to each other, some with the bit flipped to “labour”, some to “capital”. We are human beings, full of contradictions far more complex and diverse than this ridiculous scientistic reductionism can possibly encompass, and any theory should acknowledge this from the outset. Marxism, with its central focus on binary oppositions–particularly with regard to class struggle, but elsewhere as well, as Varoufakis correctly points out–is not such a theory. It is at best a toy model, useful for getting a sense of how some aspects of a real theory might work, but not suitable for practical analysis of the real world.

Posted in economics, history, politics, psychology | Comments Off on What’s Wrong with Marx

Dark Matter, Aether, Caloric and Neutrinos

It is fairly common today to see laypeople compare dark matter to the luminiferous aether, that bugaboo of 19th century physics whose existence was disproven by the Michelson-Morley experiment and which was subsequently made redundant by Einstein’s kinematic relativity.

Aether was invented to explain the behaviour of light–if light was a wave, the reasoning went, something must be waving–but it turned out that light didn’t have the behaviour that the existence of aether implied. We know this because Messrs Michelson and Morley built a large and sensitive optical apparatus and kept it under carefully controlled conditions while the Earth moved around the sun, relative to the supposed sea of aetheric fluid. Since the apparatus would change its velocity relative to the aether as the Earth moved, the interference fringes it created were predicted to shift. They did not. Ergo, no aether, at least not of the appropriate kind. There were some variants, which also might have been subject to test had not Einstein made it all unnecessary.

It is notable what no one–not Michelson, not Morley, not Einstein, not Mach, not anyone–did: they did not draw an analogy between aether and caloric or phlogiston, both theoretical entities invoked to explain the behaviour of heat and combustion in the 18th and early 19th centuries, and subsequently shown to be non-viable.

There is a good reason no one did this: it is not a comparison that sheds any light on the matter. The proposition “Light is carried by the luminiferous aether” has a plausibility that is not changed one whit by the observation that “Luminiferous aether is a theoretical entity created specifically to account for an otherwise inexplicable phenomenon, just like caloric and phlogiston were.”

The reason for this is simple: the history of physics is chock full of theoretical entities created and given properties specifically to explain some otherwise inexplicable phenomenon.

So as well as comparing dark matter to aether, we might compare it to neutrinos, which were a theoretical entity invoked to explain a particular set of observations on radioactivity (the shape of the beta spectrum.) Neutrinos turned out to exist.

As such, while the comparison to aether is superficially apt, it is not something we can draw any conclusions from, because the comparison to the neutrino is equally apt, and it would require us to draw the opposite conclusion.

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. It is not the discipline of testing ideas by making analogies to other ideas. There is a reason for this: making analogies to other ideas has consistently proven to be almost completely useless for creating knowledge of reality, while the discipline of science has been wildly successful.

Nor are the properties of caloric, aether, neutrinos or dark matter “magical”, which is something people who compare dark matter to aether sometimes say. The properties theoretical entities are assumed to have are merely the ones required of an entity that is able to explain our observations in each instance. In the case of caloric it turned out to have self-contradictory properties, when the full deductive closure of the theory was teased out. In the case of aether it turned out to have properties that made predictions that were false. In the case of neutrinos the required properties made predictions that were true.

In the case of dark matter: we don’t know yet, and the only way we will ever know is if we continue on with our program of systematic observation, controlled experiment and Bayesian inference. There is no other way to know.

Posted in bayes, Blog, epistemology, history, physics, science | Comments Off on Dark Matter, Aether, Caloric and Neutrinos

On Interpretation

“The cat is on the mat” is a reasonably clear statement, not subject to a huge range of interpretation. If anyone reading it claimed it justified killing blasphemers most people would look at them funny. And remember: this is the Internet, so there is a near-certainty that someone will interpret it as justifying the killing of blasphemers.

The twin belief that:

a) it is possible to determine the One True Interpretation of any proposition

and

b) anyone who disagrees with that interpretation is mad, stupid or evil

is remarkably prevalent, particularly given that we are continually given evidence that our interpretations of other people’s words are mistaken.

Anyone who has ever had a fight with their significant other about a misunderstanding has experienced the reality of interpretive failure up-close-and-personal, and therefore should be aware of this fundamental truth: misinterpretation is possible.

This is particularly true when it comes to scripture. Unlike “the cat is on the mat”, scriptural texts tend to be strongly allusive and in any case were mostly written hundreds or thousands of years ago in contexts radically different from today, when what is now common knowledge was far in the incomprehensible future.

Consider the entire industry of Biblical interpretation, in which various gibberish-merchants sell their false wares to a more-or-less ignorant audience.

You can claim anything you want as “interpretive principles”, and indeed someone out there has:

  • Holistic historico-grammatical interpretation with a dollop of context to taste (because it’s not like “context” hasn’t been used to interpret exactly the same words in radically different ways across the course of history, right?)
  • the guidance of the Holy Spirit, which is invoked by absolutely everyone who has ever interpreted the Bible, all in contradiction with one another. Where is the interpreter who says, “Oh, by the way, I totally ignored the guidance of the Holy Spirit in this work. No prayer, no spiritual reflection or anything like it went into my interpretation”? I’ve certainly never heard of any such thing, so pointing out a “principle” that is agreed upon by virtually everyone who has ever created one of the thousands of mutually contradictory interpretations of the Bible is sort of stupid. It’s not as if hewing to the Holy Spirit will result in a single clear interpretation, or as if some guy with a website is the First Person Ever to think of trusting to the Holy Spirit to guide their interpretation. People have been doing that for over a thousand years and have been disagreeing violently over where the Holy Spirit guides them.
  • Metaphysical interpretation which appears to mean “whatever just makes sense to me without any attempt at justification beyond whatever plausible bullshit I pull out of my ass”.
  • Racist interpretation, which I’m going to point to via a skeptics site rather than subjecting myself to the real thing. But anyone who knows anything about the big business of Bible interpretation knows that the Good Book has been used to justify some very bad things using principles of interpretation that are no different from those used by more civilized people.
  • And so on…

The fact that such a diversity of interpretations and interpretive principles exists tells us something: there is no correct, justifiable, narrow envelope of interpretation of the kind that exists for “The cat is on the mat” or “F=ma” or “we hold these truths to be self-evident”. If there were, people would be able to find it and agree on it.

After all, we can find and agree on the laws of motion, which presumably were also laid down by god. How can it be that we can agree on how to interpret the world god made but not the book god supposedly wrote, dictated or inspired? Isn’t that a little odd?

It surely cannot be the case that something so subtle, complex and just plain weird as the universe in all its relativistic and quantum peculiarity should be subject to widely-agreed-upon interpretation but the Bible, the Quran and the Guru Granth Sahib should be completely incoherent and incomprehensible. And yet that is what they manifestly are: completely incoherent and incomprehensible, because if they were not the One True Interpretation would be at least as widely agreed upon as the general consensus in the sciences about the world god made.

There is no way around this: different people from different cultures come to the world god made with different biases and backgrounds, and all end up in pretty much the same place. They don’t insist on imposing arbitrary and essentially meaningless interpretive principles on the study of the world god made, because none are necessary. Close observation of the world is sufficient to reveal its deep secrets to us. There is a tiny amount of residual disagreement, of course, because we are human. But no one is claiming “F=mv” is the correct force law or “E=m²c” is the correct relativistic energy relation, which is what would be the case if the same range of interpretations existed for the world as for scripture.

This is just a fact: scripture is subject to a vast range of completely contradictory interpretations, and unlike the interpretation of the world god made, no one has ever been able to come up with a set of interpretive principles that have gained anything like widespread agreement.

Yet on the face of it interpreting words is far easier than interpreting the universe. People were interpreting words for thousands of years before we started to get our interpretation of the universe even approximately right. “All men by nature desire to know” was written down 2500 years ago, and no one has ever had much difficulty in figuring out how to interpret it.

So we have failed at the easier task while succeeding brilliantly at the harder. It’s almost as if scripture was full of contradictory gibberish, and therefore incapable of being interpreted coherently at all.

Posted in epistemology, history, language, religion, science | Comments Off on On Interpretation

Some Notes on Orbital Mechanics and Climate Change

What is the role of orbital variations in climate change?

We know from direct measurement that the Earth is warming by about 0.6 W/m**2. Climate models give a number that is about 1.6 W/m**2 from greenhouse gas emissions and it is likely aerosols are buying us about 1 W/m**2 back.

0.6 W/m**2 is not very much. The Earth is 149.6 million kilometres from the sun, on average, and the solar constant–the amount of power per unit area reaching the Earth from the sun, called insolation–is 1366 W/m**2 at the top of the Earth’s atmosphere [a commenter on Hacker News pointed out there is a better number: http://onlinelibrary.wiley.com/doi/10.1029/2010GL045777/pdf]. The solar constant varies as 1/r**2, so if we put those numbers together we find that a variation of 32,000 km in the mean distance between Earth and sun is enough to produce a 0.6 W/m**2 change in insolation. That’s only about a tenth of the distance to the moon, which is not very much at all.
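
Spelling out that arithmetic (a quick Python sketch; the 1366 W/m**2 and 0.6 W/m**2 figures are just the round numbers quoted above):

    # Since insolation S goes as 1/r**2, a small fractional change dS/S
    # corresponds to dr/r = (1/2)*dS/S in the mean Earth-sun distance.
    r_mean = 149.6e6      # km, mean Earth-sun distance
    s0 = 1366.0           # W/m**2, solar constant at the top of the atmosphere
    ds = 0.6              # W/m**2, the imbalance we want to account for

    dr = 0.5 * (ds / s0) * r_mean
    print(f"Equivalent change in mean distance: {dr:,.0f} km")        # ~33,000 km
    print(f"Fraction of the Earth-moon distance: {dr / 384400:.2f}")  # ~0.09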

This question is interesting in part because unlike many other influences on climate, it is computationally tractable on your average computer. Furthermore, the physics are both painfully simple and absolutely fundamental. Newton’s law of universal gravitation is what started the ball rolling, science-wise, and it’s so simple it can be dealt with using quill and parchment in many interesting cases.

So this is an opportunity to demonstrate some fundamental issues with the computational physics of climate, and explain why I am cautious about drawing any very strong conclusions from climate models. I am not a climate scientist, but rather a computational and experimental physicist. This means I have spent most of my career dealing with systems where I can check my computational results in the lab, and I am painfully aware that they don’t always agree, even when internal error checking seems to say my computational results are basically OK.

Note, however, that nothing I say here implies “climate change is a hoax” or that climate scientists aren’t doing their best to understand an enormously complex system. I believe the general confidence in climate models is unjustifiably high for reasons that I hope this little exploration will make apparent, but I also believe that nuclear fission, solar and wind power should be strongly supported, coal power should be curtailed as rapidly as possible, and taxes shifted away from income and toward carbon emissions.

Dumping gigatonnes of garbage into the atmosphere is not a great idea, although honestly if it were a choice between that and the revolutionary overthrow of the capitalist order I would go down fighting for capitalism. We don’t know how climate change comes out, but we do know that attempts to “change everything” always end neck-deep in human blood, and personally I’d like to avoid that. It’s just the kind of evil capitalist roader I am.

So much for politics. What about science?

Orbital mechanics at the level I’m interested in is governed by a single law:

F = m1*m2*G/r**2

This is Newton’s law of gravitation. The force between two masses (m1 and m2) is equal to their product divided by the square of the distance between them (r) multiplied by a universal constant, G = 6.673E-11 N*m**2/kg**2. This is a small number because gravity is a weak force. It is only because the masses of the planets and stars are huge that we notice it at all.
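
Plugging in rough textbook values for the masses and distances gives a feel for the scale of the forces involved (a quick Python check, with all numbers approximate):

    # Newton's law with rough numbers, to get a feel for the scale of the forces.
    G = 6.673e-11        # N*m**2/kg**2
    M_SUN = 1.989e30     # kg
    M_EARTH = 5.972e24   # kg
    M_MOON = 7.35e22     # kg
    R_SUN_EARTH = 1.496e11   # m, mean Earth-sun distance
    R_EARTH_MOON = 3.844e8   # m, mean Earth-moon distance

    f_sun = G * M_SUN * M_EARTH / R_SUN_EARTH**2
    f_moon = G * M_MOON * M_EARTH / R_EARTH_MOON**2
    print(f"Sun-Earth force:  {f_sun:.2e} N")    # ~3.5e22 N
    print(f"Moon-Earth force: {f_moon:.2e} N")   # ~2.0e20 N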

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. As such, it is fundamentally about exploration. We have ideas, we test them, we let the results of those tests guide where we go next. In the end, we publish, because if not made public it is not science. People were investigating reality long before the Proceedings of the Royal Society saw the light of day, but they didn’t accumulate knowledge because they didn’t publish. Science is both intensely individualistic and profoundly communal.

In the present case, the first idea I had is “I bet I can get some decent orbital mechanics results using that adaptive Runge-Kutta solver I wrote years ago for a quite different project.” So let’s put that to the test. [A number of commentors on this article on Hacker News have pointed out that symplectic integrators are a better choice for this problem. They are correct, but I didn’t have a symplectic integrator lying around. I’ve since explored this alternative and found it faster but less accurate than the work described here]

This is the way the computational physics of orbital mechanics works. We have a body like the Earth moving around the sun. We know its current location and velocity, and we know the gravitational force acting on it. And we know some basic kinematics, which is the mathematical description of motion (dynamics is about the causes of motion, kinematics is about the description of motion.)

Kinematically, we have the equations:

dx = v*dt
dv = a*dt

where dx = change in position, dv = change in velocity, a = acceleration, and dt = a small time interval. I’ve written these as one-dimensional equations but there are similar equations for y and z as well. Newtonian gravity has the nice property that the x, y and z components are all independent of each other so we can compute them separately (Einstein’s gravitational theory does not have this property, and as a result is… somewhat more complicated.)

All these equations say is that position changes linearly with velocity and time, and velocity changes linearly with acceleration and time. Acceleration is given by Newton’s law of gravitation via Newton’s second law: F = m*a, or a = F/m, which gives us:

a = m2*G/r**2

Because gravity is proportional to mass, m1 is divided out of the force law, making the gravitational acceleration proportional only to the mass doing the attracting. This is why feathers and hammers drop at the same rate on the surface of the moon, or in a giant vacuum chamber.

Taken together, these relations give us a second order differential equation for the position of an object moving under the influence of gravity. The second derivative of position (which is also the rate of change of velocity) is just equal to the acceleration (by definition):

d2x/dt2 = a

We can get the acceleration of m1 from any number of bodies by summing up their gravitational influence:

d2x/dt2 = m2*G/r12**2 + m3*G/r13**2 + … + mi*G/r1i**2

where r1i is the distance between the mass of interest and the ith body in the simulation.
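
As a sketch in code, the sum looks like this. The text above uses the one-dimensional shorthand; here the direction of each contribution is included explicitly:

    # Acceleration of a body at pos1 due to a list of (mass, position) pairs.
    # SI units; positions are (x, y, z) tuples.
    G = 6.673e-11  # m**3/(kg*s**2)

    def acceleration(pos1, others):
        ax = ay = az = 0.0
        x1, y1, z1 = pos1
        for m, (x2, y2, z2) in others:
            dx, dy, dz = x2 - x1, y2 - y1, z2 - z1
            r2 = dx * dx + dy * dy + dz * dz
            r = r2 ** 0.5
            f = G * m / (r2 * r)   # G*m/r**2, with an extra 1/r to normalize the direction
            ax += f * dx
            ay += f * dy
            az += f * dz
        return ax, ay, az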

Conceptually, to solve this equation means simply integrating the effects of gravity along the curve of motion determined by the effects of gravity. If that sounds a bit recursive it’s because it is: to know the effect of gravity over each step we have to know the effect of gravity at the end of the step as well as at the beginning, but we don’t know the effect of gravity at the end of the step until we’ve taken the step and found out where it ends up. It turns out we can deal with this using some simple corrections, because god loves us and made the universe second-order-smooth.

Starting from known initial conditions we can step forward small but finite increments in time. To figure out the effect of the endpoint we break the whole interval into parts and use estimates of the values in the middle points to correct the value at the end for the change of gravity over the interval. There is a whole family of methods for doing this but the workhorse is 4th order Runge-Kutta (RK4), named after the pair of German mathematicians who devised it.

Many years ago I wrote an adaptive RK4 solver that varies the time step to maintain some error bound. It’s rather stupid about it, just dividing the desired time step in two and asking “After taking two half-steps am I sufficiently close to the same place as when I took one full step?” It does, however, work reasonably well.
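
The step-doubling idea is simple enough to sketch in a few lines of Python. This is a minimal illustration of the scheme described above, not the ARK4 code used for the runs in this post:

    # Adaptive RK4 by step doubling: take one full step and two half steps,
    # and halve the step until they agree to within a tolerance.
    import numpy as np

    def rk4_step(f, t, y, dt):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def adaptive_step(f, t, y, dt, tol):
        """One accepted step; returns (new_t, new_y, suggested_dt)."""
        while True:
            full = rk4_step(f, t, y, dt)
            half = rk4_step(f, t + dt / 2, rk4_step(f, t, y, dt / 2), dt / 2)
            if np.max(np.abs(half - full)) <= tol:
                return t + dt, half, dt * 1.5   # accept; be a bit more ambitious next time
            dt /= 2                             # too much error: halve the step and retry

    # Example: the decay equation dy/dt = -y, which should track exp(-t).
    t, y, dt = 0.0, np.array([1.0]), 0.5
    while t < 5.0:
        t, y, dt = adaptive_step(lambda t, y: -y, t, y, dt, 1e-9)
    print(y[0], np.exp(-t))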

Here’s where computational physics starts to get fun.

We represent the world with numbers, but the real world has more-or-less infinite precision while computers are decidedly finite. A typical double-precision floating point number is 64 bits, which allows it to represent values between about 1E-308 and 1E+308. More important is the value ε, which is the smallest number that, when added to 1.0, results in a different value. For 64-bit floating point numbers this is 2.220446049250313E-16, which is not all that small, as we shall see.
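
Here is what ε means in practice, in a few lines of Python:

    # Machine epsilon: the gap between 1.0 and the next representable double.
    import sys
    eps = sys.float_info.epsilon
    print(eps)                   # 2.220446049250313e-16
    print(1.0 + eps == 1.0)      # False: eps is just big enough to register
    print(1.0 + eps / 2 == 1.0)  # True: anything smaller simply vanishes next to 1.0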

We are going to be integrating the orbits of the planets, and doing so over a few thousand years because this turns out to be about the limit of computation for my laptop. A full solar system simulation covering 1E11 seconds (about 3200 years) takes a day or so to run. Patience is a virtue for computationalists as it is for experimentalists.

The size of the time step for our simulation is going to turn out to be about 10,000 seconds, and even with adaptive RK4 integration the error bound turns out to be 1E-18 to get decent internal precision. 1E-18 is less than ε for double-precision floating point, so we’re going to have to go with 128 bit long doubles (implemented by the nasty trick of “#define double long double” because I’m basically an evil bastard.)

How did I decide this?

First, by setting up a dumb-as-rocks simulation of a faux Earth around an approximate sun, and letting the simulation run to cover 1E11 seconds and then changing the signs of all the velocities and letting it run backward for another 1E11 seconds. With perfect numerical precision, this would result in the final positions of the sun and the Earth being identical to their initial positions.

There was a lot of flailing about at this stage, and it took a few days and some head-to-head comparisons between the Python and C++ versions of the code to tease out a few minor issues that resulted in improperly accumulating errors. The Python code is far too slow to run over the full simulation time, but it served as a very useful check on the C++ over shorter times (when both are run at 64 bit double precision the results have to be byte-for-byte identical or something is wrong.)

I mention this because we rarely see what goes on behind the scientific curtain. There is always a lot of flailing around, a lot of trial and error. Young people in particular may not be aware of this and feel that their own flailings are uniquely embarrassing. They are not. Flailure is always an option, and often quite a good one.

Having spent a few days flailing I was ready to plug in some real numbers for the Earth, the sun, and the planets. I decided eventually to leave the Moon out of it until the end, because I thought it shouldn’t have a huge effect on the Earth’s orbit about the sun. Silly me, as it turned out.

I also elected to leave out all corrections from general relativity, from tidal forces, and so on, which really are trivially small over the time scales I care about. As I said at the outset, what I wanted to know was: is the effect of the planets on the Earth’s orbit comparable to the effect of humans on climate?

NASA of course has all the numbers one might want with regard to the positions of the planets and so I turned to them for the current data, which I set as the positions and velocities of the sun and planets at midnight CT January 1 2015. NASA also has detailed results of their own solar system model available, which gave me a target to aim for.

Having a target to aim for is the holy grail of computational physics, of course: a measurement or gold-standard calculation that can be used to validate the code. Validation is done by setting any physical constants in the code based on independent measurements and then ensuring in known cases that the simulation reproduces reality. It is not about tuning unphysical parameters so the code matches one particular reality. It is about testing without adjustment.

The way NASA actually developed their model was based on a dialog with nature. They had excellent measurements going back decades or centuries, and required that their simulations matched those observations. I am simply using NASA’s validated simulation results as a surrogate for observation, and since they really are ridiculously accurate–they are the basis for spacecraft navigation in the solar system, and have corrections for things like the wobbling of Mars under the influence of its two tiny moons–this is more than good enough for what I’m doing.

The first thing I did was simulate Earth going around the sun with no other planets. This is the simplest possible system and I wanted to get a sense of what it looked like over the past few thousand years. For convenience I ran 1E11 seconds, which is a little over 3000 years and took an hour or so to run. The initial results were promising but a little weird, as initial results often are. I decided to add in other planets, thinking that perhaps I was asking the simulation to do something impossible: I was giving it a precise starting point in terms of the Earth’s position and velocity (and the sun’s) but leaving out all the other planets, whose presence was actually necessary to create that position and velocity.

I wasn’t sure which planet after Jupiter was most important in perturbing Earth’s orbit–I thought it was probably Saturn–so I ran a bit of code to find the maximum gravitational force between each of the planets and Earth, and got a bit of a surprise:

    Body     Orbital Radius (AU)    Gravity (N)
    Sun      0.0030246192898        3.69616319e+22
    Jupiter  5.41567069072          1.79863405677e+18
    Saturn   10.1361271569          1.2579933068e+17
    Venus    0.744294082795         1.37535974505e+18
    Mars     1.41241143847          6.97067495351e+16
    Mercury  0.407065807622         1.7349744085e+16
    Uranus   20.37633197            4.2709974665e+15
    Neptune  30.5218921949          2.1709083215e+15
    Moon     0.00261101798674       1.99083093078e+20
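
A back-of-the-envelope version of that scan is easy to reproduce. The masses and orbital radii below are rounded textbook values rather than the simulation state, so the numbers come out close to, but not identical with, the table above:

    # Peak gravitational pull of each body on the Earth, estimated at closest approach
    # (coplanar circular orbits assumed, so these are rough upper bounds).
    G = 6.674e-11          # m^3 kg^-1 s^-2
    AU = 1.496e11          # m
    M_EARTH = 5.97e24      # kg

    # (mass in kg, orbital radius in AU); the Moon's entry is its distance from Earth
    bodies = {
        "Sun":     (1.99e30, 0.0),
        "Moon":    (7.35e22, 0.00257),
        "Venus":   (4.87e24, 0.723),
        "Jupiter": (1.90e27, 5.20),
        "Saturn":  (5.68e26, 9.58),
        "Mars":    (6.42e23, 1.52),
        "Mercury": (3.30e23, 0.387),
        "Uranus":  (8.68e25, 19.2),
        "Neptune": (1.02e26, 30.1),
    }

    for name, (mass, r_orbit) in bodies.items():
        if name == "Sun":
            d = 1.0 * AU                 # Earth-sun distance
        elif name == "Moon":
            d = r_orbit * AU             # already a distance from Earth
        else:
            d = abs(r_orbit - 1.0) * AU  # closest approach to Earth's orbit
        print(f"{name:8s} {G * mass * M_EARTH / d**2:.3e} N")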

As the table shows, Venus has about ten times the influence on Earth that Saturn does, and when it was added into the simulation along with Jupiter and Saturn things started to look a bit better. There was still some weirdness with the precise length of the Earth’s orbit… or was I misconverting from fractional Julian day numbers to seconds? This kind of trivial conversion problem is far more common than you might think.

Adding Mars changed things very little, and at this point I started doing a more detailed comparison with the NASA data and noticed a couple of oddities. In particular, the Earth’s distance from the sun at aphelion (the point in the orbit furthest from the sun) was off:

One year of orbital simulation with no moon. Green line is present work, red line is NASA

And if I looked out a few hundred years the length of the year was clearly wrong:

After a few hundred years a small error accumulates…

When in doubt, look carefully at the data. Don’t be too quick to jump to conclusions. Unlike every other approach to knowing, science rewards being open to alternative ideas. Advocates of non-scientific approaches often say they want people to be open to “other ideas”, but in the end they all want you to reach some conclusion that they have already decided is the right one, like “Mohammed was the last true prophet” or “big pharma and GMOs are causing all the ills of the world.” Science, on the other hand, is just a discipline, and all it wants is that you practice it honestly, publicly testing ideas by systematic observation, controlled experiment and Bayesian reasoning.

When someone lectures you on the importance of being open minded, try responding with something along the lines of, “I’m open to any idea that we can figure out how to test, because if it can’t be tested it probably can’t explain anything. If it can explain a phenomenon, then it can act as a cause on other things, so it can be tested, because we can make predictions of other things that it is likely to cause. If it can’t cause anything other than what it’s being invoked to explain, then it really doesn’t have any meaning beyond ‘the thing that causes that’, and that’s pretty boring. So give me an idea–one idea, not ten–and let’s talk about how to test it, what other consequences it would have that we can look into. I’m totally open to any idea like that.”

The practice of science is very much about learning where to look for plausible ideas, and focusing on looking rather than imagining as the first step on any search for a more plausible proposition. This is difficult because it requires us to accept that we don’t know what the problem is and we might not be able to find the actual source. Imagination is much more comforting than reality at times like this. But we should learn to trust ourselves, and look to the world to guide us. That’s the only way we’ll find more plausible propositions. People were imagining answers to hard questions for thousands of years before we learned the discipline of science, and we have precious little besides some decent poetry to show for it. We certainly never cured disease or ended hunger that way.

So rather than imagining what the problem might be, I looked more carefully at the various components of position, and what I saw in the z-component (the distance from the plane of the ecliptic) was striking:

Red line is simulation, wobbly green line is reality

Instead of the smooth sinusoid of my simulation, reality is undergoing a weirdly jagged drunkard’s walk with about thirteen wobbles per year. Thirteen is the number of lunar months in a year, more-or-less, which suggests the culprit is the moon.

I hadn’t given the moon much thought at this point. While it has a large interaction with the Earth, as a terrestrial satellite it didn’t seem like it could have a very big influence on the Earth’s orbit around the sun. I basically viewed it as an idler, swinging along beside the Earth but having no net influence to speak of.

Thinking about it, the biggest effect of the moon is on the initial conditions: I have pulled the Earth’s orbital parameters out at a single moment in time, and its motion at that time has been influenced by the moon. Take the moon away, and it will not track back along its physical course but along some other curve. The effect is most obvious in the z-component, where the 5 degree tilt of the moon’s orbit relative to the plane of the ecliptic results in the Earth being wobbled as the moon travels around it, but the overall effect shows up in other places too, as we shall see. My simulation starts off in the correct place but with one small missing influence, and after a few hundred years lands in a completely unphysical place.

This is the great lesson of computational physics: errors in the model almost always accumulate. They never average out.

When I added the moon, things snapped into place:

Simulation and reality with the moon in place

It is worth noting that both with and without the moon the internal error on my ten-year computation was small, less than half a meter after running things forward and backward. Over that simulated time span my model Earth covered nearly 20 billion kilometres, and the equations brought it back to within 42 cm of its starting point. That it did this even without the moon–which turns out to be vital to matching reality in so many other ways–is an example of how internal consistency is necessary but not sufficient for high accuracy.

Now I was ready to run a long simulation with all the planets and the Moon. I experimented with relaxed error bounds to get the simulation time down to a day or so. The moon, because of its tightly curved orbit, is a particular pain to simulate accurately, and so slows the simulation dramatically when the error bound is at 1E-18. After running a few one-decade simulations with different error bounds I settled on 1E-13 for the long run. If it didn’t work I would only have lost an overnight run.

I still had that weirdness in the phase over hundreds of years. Either I was converting the NASA data incorrectly, or I was goofing up the time conversion on my own results somehow (the results are output in seconds but converted to years for display) or the simulation was still not right.
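
As an illustration of how innocuous such a slip can look, consider nothing more than the choice of year length used to relabel the time axis (a toy example, not a diagnosis of the actual bug):

    SECONDS_PER_DAY = 86400.0
    JULIAN_YEAR = 365.25 * SECONDS_PER_DAY   # 31,557,600 s, the standard astronomical year
    LAZY_YEAR = 365.0 * SECONDS_PER_DAY      # a tempting shortcut

    t = 400 * JULIAN_YEAR                    # 400 years of output, in seconds
    mislabelled_years = t / LAZY_YEAR        # ~400.27 "years"
    print((mislabelled_years - 400.0) * 365.25)  # ~100 days of spurious phase shift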

When the simulation that included the moon was about half done I ran the post-processing script on the incomplete output file and had a look at the results:

With the moon in place the year has the right length

So… if asked, “Do you think the Earth’s moon changes the length of the year?” would you have said, “Yeah, enough to lose about six months over the course of 400 years or so?” I certainly wouldn’t have. Yet it does, because the true and correct initial conditions put the Earth in a slightly different orbit without the moon’s influence to correct it. Depending on where in the lunar month I had started the simulation, the results would have been quite different, because the relative contribution of the lunar momentum to the Earth’s changes over the month.

[Edit: I originally had an error in the energy calculation that cleverly left out the potential term. Oops. When added back in, the simulation has a fractional energy error of 6.7E-11, which is good enough for going on with.] After the full 3200 year run, forward and back, the Earth comes out of place by just a bit more than its own radius–6350 km–which is not bad following a six trillion kilometre journey.

At this point I was confident I understood the Earth’s orbit pretty well, and could set out to answer the question I started with: how does it affect the insolation?

As can be seen, the distance to the sun varies quite a lot over the year due to orbital eccentricity, so some kind of averaging is necessary. Averaging over sinusoidal waveforms is surprisingly tricky because how you handle the ends tends to dominate the whole integral. If you get one end-point just a little bit wrong, it won’t cancel the contribution at the other endpoint, and you get a significantly non-zero value that’s just a numerical artifact.

One way to avoid that is to fit the curve to a known shape–in this case just a cosine function plus a flat baseline–and use the fit parameters to estimate the properties you care about. B + A*cos(ωt+φ) is the function to fit, and the thing I care about is B, the baseline value. The rest is just a wobble around that. With t in years ω is fixed at 2*π, so the fit only has three parameters, with the phase φ just correcting for wherever the fit happens to start for each fitting period, as the data points don’t fall exactly on year boundaries. Fitting over two-year intervals–which resulted in slightly cleaner results than one-year fits–yields the following curve for the baseline B parameter, which is the average insolation:

insolation

There is a bit of a step around 1500 years ago, but it is less than 20% of the heat imbalance measured in the modern day, and it is not reproduced in the NASA data–the purple line that runs back 2000 years, which is all I downloaded–so it is likely a result of remaining imperfections in the model or analysis. The thin blue line at the top shows the magnitude of the empirically observed heat imbalance from the nominal solar constant of 1366 W/m**2. If the moon is left out of the model the simulated insolation is about as high again above the blue line as the blue line is above the correct value.
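
For the curious, the three-parameter fit described above takes only a few lines with scipy. The data here is synthetic (a 1366 W/m**2 baseline with an annual wobble and a little noise), since the simulated insolation series itself isn’t reproduced in this post:

    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, B, A, phi):
        """Baseline plus an annual wobble; with t in years, omega is fixed at 2*pi."""
        return B + A * np.cos(2.0 * np.pi * t + phi)

    # Synthetic stand-in for the simulated insolation over a two-year fitting window.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 800)
    data = 1366.0 + 45.0 * np.cos(2.0 * np.pi * t + 0.3) + rng.normal(0.0, 0.5, t.size)

    (B, A, phi), _ = curve_fit(model, t, data, p0=[1360.0, 40.0, 0.0])
    print(B, A, phi)   # B is the average insolation; A tracks the eccentricity-driven swing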

There is also a clearly observable effect of decreasing orbital eccentricity over time, as can be seen in the A parameter, which measures the variation over the course of the year:

eccentricity_insolation

While the change in eccentricity is quite significant–and very close to that found in the NASA dataset, as shown in green–it doesn’t have a direct driving effect on climate.

It does, however, have an indirect effect. More intense sunlight during part of the year will be offset on average by less intense sunlight six months later… except that the sun won’t be falling on the same places. Perhaps the perihelion–the time of closest approach and most intense insolation–happens during boreal summer. That would mean the more intense heating will occur when the northern hemisphere is already at its hottest, and the cooling will happen during austral summer, when the northern hemisphere is covered in snow. If things were the other way around, the extra heating would occur when the Great Southern Ocean was warming up, and cooling would happen during northern summer.

If the Earth were a smooth and featureless billiard ball none of this would matter, but that is not the case. How the planet responds to additional heating depends on seasonal conditions in the north and the south, and understanding that response is the business of climate modelling.

And climate modelling requires physically accurate software if it is not to be subject to the kinds of error accumulation I’ve explored here. The difference between precise models and noisy models is that in precise models (like orbital simulations) cumulative errors are easy to see, and in noisy models (like climate) cumulative errors are hard to see.

This is where this little excursion ends, but I’ll come back to this topic later, with a closer look at some of the physics in specific climate models that I find problematic. Climate is determined by a number of large competing factors. The amount of energy that reaches the Earth’s surface is only a fraction of the 1366 W/m**2 incident at the top of the atmosphere. Some is reflected, some is absorbed, some is scattered. Small changes to any of these processes can result in large effects on the climate. We are concerned with changes at the 0.1% level: 1.4 W/m**2. This is the magnitude of the energy imbalance driving climate change.

Building models that are accurate to 0.1% over a century is not possible unless the physics they embody is at least that accurate. This is my professional judgement as a computational physicist, and the argument presented here is supposed to illustrate that judgement, not prove it. When I set out to perform this analysis I knew pretty much what the results would be, because I knew that tiny aberrations in the physics almost always result in significant excursions from reality at the end of the simulation. I didn’t know precisely what would be the major source of inaccuracy, but I was sure I would run into one. Unsurprisingly, I did.

A small under- or over-estimate of any major process in any simulation will result in unphysical integrations, just as we have seen here with planetary orbits. The moon’s effect on the Earth’s orbit is tiny–its direct gravitational influence is only about 0.5% of the sun’s. But its presence or absence in the model has a huge effect on the properties of the orbit. Leaving the moon out with the particular initial conditions I’ve used results in an average orbit that has the insolation wrong by 1.7 W/m**2, about the same size as the total effect from greenhouse gases.

That’s an absolute effect, mind. Climate models are attempting to get at relative effects. But they are doing so by subtracting large terms from each other, and there is no reason to believe their errors will cancel.

This is not a political statement. As I have said, I think anthropogenic climate change is a serious problem and that it should be approached via a mix of technology and public policy, particularly nuclear power development, solar power development, energy storage development, rapid curtailment of thermal coal development, and a shift from income taxes to carbon taxes. I don’t need physically accurate climate models to tell me these are all good things, because they are good things regardless of the long-term effect of carbon on the Earth’s climate. Carbon-based fuels have enough problems to justify moving away from them regardless, and the substantial and plausible risk posed by anthropogenic climate change simply adds to the argument.

But all that said: until climate models are as accurate as orbital simulations, it is difficult to claim that the science is settled. That doesn’t mean we can’t say anything, of course. Even without accuracy we can be fairly sure of the following:

1) Adding CO2 to the atmosphere will increase global heat content. There is simply no way to avoid this. We have directly measured human contributions to CO2 and other greenhouse gases and we know they are sufficient to add a significant amount of heat to the climate. We are working on directly measuring the effect of aerosols and the indications are they are reducing the amount of heat in the climate. We have directly measured the heat balance of the oceans and it is showing effects that are in the range of 0.5 to 1.5 W/m**2, which is the magnitude of effect we expect from models.

2) Increasing global heat content will have a disruptive influence on economic processes that are tuned to the current climate.

What we don’t know, and what I am arguing that we can’t know due to the limitations on the physics in climate models, is the detailed ways in which the climate will change. You can read this, if you like, as an apology for the “hiatus”, which I’ve been warning about for over a decade. Unphysical models will not reproduce reality in detail and it was a terrible mistake to sell climate models to the public as if they could.

Who can predict the effect of what we leave out of climate models? We leave out–or approximate in unphysical ways–precisely the things we don’t understand well enough to include properly. We hope the effects of those approximations will be small. But because error always accumulates–I have never run a model of any kind where this was not the case–we know that those unphysical approximations will tend to carry our models further and further away from reality as time passes. So we can’t tell what effect our omissions will have until we have data to compare to. And that’s OK.

However, putting forward detailed model results as if they were strong predictions about the real future is a mistake, because given the level of physical detail and the vagaries of long-time integrations, it would be astonishing if climate model results bore more than a vague resemblance to reality, just as it would be astonishing–given what I know now–if a model of the Earth’s orbital motion that left out the influence of the moon was particularly close to the physical motion of the planet.

Nor do internal checks on model precision reflect very strongly on model accuracy. Only comparison with ground truth can do that, and in the case of climate we won’t have that until it’s rather too late to do anything about it.

Fortunately, we don’t have to wait around. There is ample reason to build nuclear power stations, to close coal plants, to build solar farms and energy storage systems, and to shift the tax burden from incomes to carbon emissions today. It’s time we stopped listening to lunatic calls that we must “change everything” or engage in the kind of failed revolutionary action that characterized so much of the 20th century, and started to focus on changing the few things that will actually help the climate without bringing global industrial civilization to an end.

Hysterical shills for big coal and big oil will oppose sensible policies as much as the looney left does, but while I don’t think the science of climate change is remotely settled, the arguments in favour of technological and policy changes in favour of evolving our global, market-based, industrial civilization in the direction of a low-carbon future are compelling regardless.

Posted in mechanics, physics, science, software | Tagged | Comments Off on Some Notes on Orbital Mechanics and Climate Change

The Narrative of Fire

I’m a very lucky person. I was born in the best nation at the best time in the best place on Earth, into circumstances that at once gave me a certain degree of humility (although I often keep it well-hidden) and the opportunity to do pretty much whatever I want.

One of the things I grew up with was fire, and my luck being what it is, I am now renting a place with a real fireplace.

I’m pretty sure everyone reading this knows what I mean by “real”: it burns wood, not gas.

I mostly burn prefab logs in it, for a variety of reasons. The chimney hasn’t been swept in a good long time, for example.

Contemplating the difference between gas fireplaces and real fireplaces, even real fireplaces burning fake logs, I am struck by this: real fire has a narrative. Every real fire tells a story. It grows from almost nothing–or perhaps a spectacular initiation ceremony of the kind Scout leaders used to put on–and goes through a complex evolution until you wake beside the quiet ashes in the morning, the girl from the night before still warm against your body even though the fire has gone cold.

This is the narrative of fire, and it is older than humankind.

Our Homo erectus ancestors knew the narrative of fire, a million years ago. We evolved from them, and brought fire with us.

It told its tale in the night time, survived as a pale shadow through the day, and spoke again in the evening.

Hominids watched the fire burn, and listened to the stories it told, and became human.

Is this the ultimate source of our narrative proclivity? Is this where the stories come from? Not from the tales told in the circle of firelight, but from the fire itself?

A fire is a process of combustion in three acts. It begins with a small flame, grows to the point where it is no longer tentative but inevitable, unstoppable, beyond the point of accidental extinguishment, and as it burns through the second act it plays upon the wood until a midpoint is reached… it is a mature fire, fulfilled in all its potentials. The last log has been fed into it but is not consumed, not homogenized. The fire and the wood now fight for longevity, consuming and defending until, inevitably, we reach the third act, when all is hot, the totality is ready for whatever end awaits. The sparks rise up, the climax is reached, the denouement begun and the ashes die down toward the grey light of dawn.

This is the narrative of fire: eternal, beyond the realm of human life, fundamental to all that comes after.

So long as there is fire, there is life. So long as there is fire there is something human in the world, telling its stories to the empty sky, warding off the darkness and the rain.

Posted in life, religion, story, thermodynamics | Comments Off on The Narrative of Fire