Darwin’s Theorem


Science, religion, evolution, romance, action, siphonophores!

Darwin’s Theorem is a story about stories (the working title for a long time was “Metastory”) that’s also a mystery, a romance, an adventure, and various other things besides. Not quite science fiction, excessively didactic… think of it as “Dan Brown meets ‘Origin of Species’.”

If you like to see plot, action and strong characters deployed in the pursuit of big, speculative ideas, you should check it out!

Posted in marketing, writing | Leave a comment

More on Orbital Integration

My previous post on orbital motion and climate change climbed pretty far up on Hacker News, much to my surprise–I didn’t submit it there, so to whoever did: thanks. There were a number of useful comments, mostly along the lines of “If you had bothered to be smart you would have used a symplectic integrator.”

Since I’m not very smart–or so I’m told–and I’m well-known for being the only person who has ever been wrong about anything on the ‘Net, this was surely good advice to take. I’ve not worked in areas where symplectic integrators would be very useful for very nearly longer than symplectic integrators have been a thing, so it’s always nice to get an opportunity to play with a technology that wasn’t well-known outside of a couple of specialist communities back in my post-doc days.

“Symplectic” is from the Greek, meaning roughly “co-braided”, if that’s helpful. Symplectic integrators have a very nice property: they preserve the phase-space volume in the vicinity of the trajectory. Since this is a property that reality has, having it built in to a numerical method is a nice thing. In mathematical physics this principle of constant phase-space density near the physical trajectory is known as Liouville’s Theorem.

It turns out the literature on symplectic integrators still tends to the abstract. The very good book by Hairer, Lubich and Wanner (Geometric Numerical Integration: structure-preserving algorithms for ordinary differential equations) is slightly mis-titled, as it contains very few actual algorithms. You can see this effect in the Wikipedia page on the subject, which unlike the page on Runge-Kutta integrators doesn’t really give you enough information to start writing actual code unless you already have a pretty sophisticated level of understanding.

The waters are further muddied for clewless newbies because “symplectic integrators” are often casually compared to “RK integrators” or “Euler integrators”, which is like comparing “sports cars” to “German cars” as if they named disjoint categories. “Symplecticity” is a general property that can be possessed by almost any integrator, including some Euler and RK integrators if they have the right structure and coefficients.
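To make the distinction concrete, here is a minimal sketch in Python (the names are mine, not from any code discussed here) of two first-order steppers for a separable Hamiltonian H = p**2/(2m) + V(q). They have the same cost and the same order; the only difference is that the symplectic version advances the position using the already-updated momentum, and that small structural change is what buys exact preservation of phase-space volume.

    def explicit_euler_step(q, p, dt, dVdq, m=1.0):
        # Ordinary explicit Euler: both updates use the old state. Not symplectic.
        return q + dt * p / m, p - dt * dVdq(q)

    def symplectic_euler_step(q, p, dt, dVdq, m=1.0):
        # Semi-implicit (symplectic) Euler: update the momentum first, then
        # advance the position using the *new* momentum.
        p_new = p - dt * dVdq(q)
        return q + dt * p_new / m, p_new

    # Harmonic oscillator test: V(q) = q**2/2, so dV/dq = q, and H should stay near 0.5.
    q, p = 1.0, 0.0
    for _ in range(100000):
        q, p = symplectic_euler_step(q, p, 0.01, lambda q: q)
    print(0.5 * p**2 + 0.5 * q**2)   # bounded oscillation around 0.5, no secular drift

As I understand them, the higher-order symplectic Runge-Kutta-Nyström methods in boost are playing the same structural game, just with more sub-steps and carefully chosen coefficients.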

After a bit of digging, and working my way through Hairer et al–whose parable on projection methods ought to be required reading for everyone who has ever integrated anything, and which I’ll talk about at a later date–I found that boost, inevitably, has a collection of integrators that includes various symplectic ones. If I were smart–instead of merely wanting to see how far I could push my adaptive RK4 (ARK4) integrator–I would have encountered this in the preliminary search that I deliberately didn’t do, wanting to see for myself what problems I’d run into. Sometimes the best way to learn is to reinvent the wheel.

It turns out there is even a nice example of almost precisely the problem I was trying to solve, albeit in the inevitably funny units that people always seem to find necessary. I’m an engineer as well as a physicist, and my numerical work comes into contact with engineering reality often enough that I’m pretty insistent on using SI units throughout. Keeping numbers close to unity will always give us the highest numerical precision, but by being rigidly consistent about units I wipe out a range of trivially-easy-to-make errors of the kind that destroy spacecraft now and then.

If I were smarter–and by “smarter” I mean “smarter than the people at Lockheed and NASA”–maybe flipping back and forth between units wouldn’t be such a big deal for me, but I only have so many brain cells to go around and I really do find total consistency in this regard easier and simpler. Part of this may be that I find my eye naturally saccades over the middle of the number, catching only the first few digits and the exponent, which is all that matters in most cases anyway. Many people find it harder to pick out the relevant bits from the sea of digits.

In any case I fiddled around with the example code from the link above–the masses are in solar masses, distances are in AU and time is in days, by the way: the first two are obvious, the last needs to be worked out from the value they give for G or looked up in Hairer et al, who document their units. After I’d put it into a form I liked, I ran it.

My first observation was: wow, fast. There is no doubt it’s pretty quick. But how accurate is it? And does it conserve energy? My original post on this subject contained a major mistake: the energy calculation wasn’t adding in the gravitational potential properly, which made the code look like it was doing rather badly on energy conservation, rather than better than 1 part in 10^10, which is actually the case. So this second look at a different technique at least found me a bug in my own work, and I always like that.

When I dug more deeply into the results, it looked like the length of the year was just a tiny bit off, although energy is well-conserved. I computed the Earth’s orbital radius (instantaneously relative to the sun) for my ARK4 code, my symplectic code (which uses boost::numeric::odeint::symplectic_rkn_sb3a_mclachlan, as does the example code), and NASA’s ground truth, all going back 2000 years before present. The following plot sums up the result nicely:

ARK4 vs Symplectic Radial Error over 2000 years

In both cases there is a wedge-shaped error envelope that oscillates yearly, due almost entirely to a slight drift in the length of the year. The effect is much bigger in the symplectic code than the ARK4 code.

The fixed time-step for the symplectic calculation was 10,000 seconds (comparable to the ARK4 adaptive step, although there were cases where the adaptive step got much smaller) and the runtime was just over an hour on a machine where the full ARK4 calculation took about five hours, so symplectic wins on speed by a moderate factor. It does not, however, win on accuracy, where it is about a factor of eight less accurate than ARK4 at the end of 2000 years.

This is weird because the Symplectic RKN McLachlan solver in odeint is 6th order, so you’d expect it to do a better job than my 4th order one. The error seems to be independent of step-size as well–over a range from 1 day to 2 hour steps–so I’m not convinced that cutting down the step size and accepting a much longer integration time would help. Nor can I be arsed to really dig into the code and figure out what’s going on, although I’m thankful that open source makes that possible.

I’m running at long double precision–the same as the ARK4 runs–so it’s unlikely to be a simple numerical effect, although it might have to do with the order of operations in the equations for the derivatives of the Hamiltonian, some of which do involve very large numbers. I reran the code with scaled units to check this and it made no difference, except it cut the runtime by a factor of two: maybe the math processor is clever enough to figure out when the full long double width is not required, and saves some movs and whatnot?

In the meantime, I’m pleased to say that while slow, my ARK4 solver actually does pretty well on this nominally unsuitable problem. It also has a brain-dead simplicity and generality that is nice, and a kind of physical transparency that I personally find useful. It’s easy to add dissipative terms, for example, which I may not have realized were important when first setting a problem up. So as a tool for early exploration it is particularly useful and robust, especially for someone like me, who isn’t smart enough to correctly anticipate all the things that might ultimately turn out to be important in a simulation.

Finally, while doing all this playing about, I was reminded of a trick I implemented for simulating a double-focusing beta spectrometer, back in the day, and I’ve come to wonder if that trick can’t be generalized and integrated into my ARK4 solver in a way that would make it even more powerful. So that’s what I’m going to look at next.

Posted in mechanics, physics, science, software | Comments Off

What’s Wrong with Marx

Today the new finance minister of Greece wrote about his fascination with Marx.

His analysis points to the fundamental flaw in Marxism while not just ignoring but praising it: it rests on the notion that “binary oppositions” completely dominate world history and social dynamics. Class struggle, class opposition, is what defines Marxist thinking.

He’s not wrong that Marx relied on manufacturing a sense of binary oppositions to drive his cartoon of history, but while such a story is great for creating drama it does a demonstrably, repeatedly, empirically terrible job as a way of either understanding history or changing the world, unless by “changing the world” you mean “changing the world into one vast prison camp”.

It is far too easy to pass from benign academic twaddle about “binary oppositions” to “us vs them” to “you’re either with us or against us… and we get to say which you are, not you.” Marxism is made for the power-mad, precisely because of this obsession with the binary oppositions Varoufakis is so enamoured of.

The insistence that binary oppositions dominate history does strongly inform the labour theory of value, but the analysis is transparently false. Labour is special because human beings have a special place in any political economy, simply because there wouldn’t be one without us. But the “binary opposition” he sees as being unique to labour is nonsense, and this would be obvious if the role of binary oppositions in his theory didn’t serve as a major distraction to analysis.

Consider for a moment the “binary opposition” between electricity’s value-creating potential that can never be quantified in advance, and electricity as a quantity that can be sold for a price. A kWhr that goes into a supercomputer to calculate the optimal shape of a machine part creates value of a kind and in amounts that are utterly unlike a kWhr that goes into driving a washing machine at a laundromat.

This is precisely the “binary opposition” that Varoufakis touts as being unique to labour. It is nothing of the kind. It was nothing of the kind in the days when a lump of coal could be used to heat a pauper’s hut or fire a steel mill. All economic inputs have both an unquantifiable-in-advance value-creating capacity and a market value. Regardless of whether these aspects of any thing are opposed, they are in no way unique to labour, and it is transparently false to claim they are.

But we tend not to notice that because we are distracted. Humans are fascinated by conflict. Present us with a conflict and it will grab our limited attention, leaving very little over to ask, “Hey, isn’t all this sound and fury kind of signifying nothing?”

This is the most important role of “binary oppositions” in Marxism (and its bastard step-child, post-modernism.) It uses this simple flaw in our attentional structure to allow people to smuggle in right under our noses claims that we would otherwise easily see to be false. Our critical attention is suppressed by our fascination with conflict, and so we uncritically accept falsehoods.

Do capital and labour have somewhat different interests? Sure. Do they have many interests that are also shared, based on their common nature as human beings? Absolutely. This is not a binary opposition. This is an argument for a democratic clearing house where differences are aired and decisions made. There needs to be eternal vigilance that one side or the other (mostly the other) doesn’t gain undue influence in such a place, but the false belief that the world is dominated by black-and-white “binary oppositions” is completely unhelpful in this enterprise. It sheds no light on our legitimate differences, and trying to fix things up later on by talking about “intersectionality” between the various 1’s and 0’s is a poor patch on a broken analysis.

Many years ago I wrote a number of papers covering the theoretical, computational and experimental analysis of metal-phosphor screens for megavoltage imaging. In one of the papers I derived the correct equation for the signal-to-noise ratio in such screens, which required an understanding of how light scatters in the material that makes up the screen, which consists of fine crystals in a plastic matrix. The old theory, which mine replaced, started with the assumption that the screen was a transparent single crystal with high refractive index and infinite scattering length, and then tried to fix up the consequences of these false assumptions with heuristic correction factors. So theories that are broken in their most basic assumptions and then fixed up with heuristics are familiar territory to me, and when I look at the discourse on intersectionality it has that smell about it.

We are not a set of 1’s and 0’s in binary opposition to each other, some with the bit flipped to “labour”, some to “capital”. We are human beings, full of contradictions far more complex and diverse than this ridiculous scientistic reductionism can possibly encompass, and any theory should acknowledge this from the outset. Marxism, with its central focus on binary oppositions–particularly with regard to class struggle, but elsewhere as well, as Varoufakis correctly points out–is not such a theory. It is at best a toy model, useful for getting a sense of how some aspects of a real theory might work, but not suitable for practical analysis of the real world.

Posted in economics, history, politics, psychology | Comments Off

Dark Matter, Aether, Caloric and Neutrinos

It is fairly common today to see laypeople compare dark matter to the luminiferous aether, that bugaboo of 19th century physics whose existence was disproven by the Michelson-Morley experiment and which was subsequently made redundant by Einstein’s kinematic relativity.

Aether was invented to explain the behaviour of light–if light was a wave, the reasoning went, something must be waving–but it turned out that light didn’t have the behaviour that the existence of aether implied. We know this because Messrs Michelson and Morley built a large and sensitive optical apparatus and kept it in carefully controlled conditions while the Earth moved around the sun, relative to the sea of aetheric fluid. Since the apparatus would change its velocity relative to the aether as the Earth moved, the interference fringes it created were predicted to move. They did not. Ergo, no aether, at least not of the appropriate kind. There were some variants, which also might have been subject to test had not Einstein made it all unnecessary.

It is notable what no one–not Michelson, not Morley, not Einstein, not Mach, not anyone–did: they did not draw an analogy between aether and caloric or phlogiston, both theoretical entities invoked to explain the behaviour of heat in the early 19th century, and subsequently shown to be non-viable.

There is a good reason no one did this: it is not a comparison that sheds any light on the matter. The proposition “Light is carried by the luminiferous aether” has a plausibility that is not changed one whit by the observation that “Luminiferous aether is a theoretical entity created specifically to account for an otherwise inexplicable phenomenon, just like caloric and phlogiston were.”

The reason for this is simple: the history of physics is chock full of theoretical entities created and given properties specifically to explain some otherwise inexplicable phenomenon.

So as well as comparing dark matter to aether, we might compare it to the neutrino, a theoretical entity invoked to explain a particular set of observations on radioactivity (the shape of the beta spectrum.) Neutrinos turned out to exist.

As such, while the comparison to aether is superficially apt, it is not something we can draw any conclusions from, because the comparison to the neutrino is equally apt, and it would require us to draw the opposite conclusion.

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. It is not the discipline of testing ideas by making analogies to other ideas. There is a reason for this: making analogies to other ideas has consistently proven to be almost completely useless for creating knowledge of reality, while the discipline of science has been wildly successful.

Nor are the properties of caloric, aether, neutrinos or dark matter “magical”, which is something people who compare dark matter to aether sometimes say. The properties theoretical entities are assumed to have are merely the ones required of an entity that is able to explain our observations in each instance. In the case of caloric it turned out to have self-contradictory properties, when the full deductive closure of the theory was teased out. In the case of aether it turned out to have properties that made predictions that were false. In the case of neutrinos the required properties made predictions that were true.

In the case of dark matter: we don’t know yet, and the only way we will ever know is if we continue on with our program of systematic observation, controlled experiment and Bayesian inference. There is no other way to know.

Posted in bayes, Blog, epistemology, history, physics, science | Comments Off

On Interpretation

“The cat is on the mat” is a reasonably clear statement, not subject to a huge range of interpretation. If anyone reading it claimed it justified killing blasphemers most people would look at them funny. And remember: this is the Internet, so there is a near-certainty that someone will interpret it as justifying the killing of blasphemers.

The twin belief that:

a) it is possible to determine the One True Interpretation of any proposition

and

b) anyone who disagrees with that interpretation is mad, stupid or evil

is remarkably prevalent, particularly given that we are continually given evidence that our interpretations of other people’s words are mistaken.

Anyone who has ever had a fight with their significant other about a misunderstanding has experienced the reality of interpretive failure up-close-and-personal, and therefore should be aware of this fundamental truth: misinterpretation is possible.

This is particularly true when it comes to scripture. Unlike “the cat is on the mat”, scriptural texts tend to be strongly allusive and in any case were mostly written hundreds or thousands of years ago in contexts radically different from today, when what is now common knowledge was far in the incomprehensible future.

Consider the entire industry of Biblical interpretation, in which various gibberish-merchants sell their false wares to a more-or-less ignorant audience.

You can claim anything you want as “interpretive principles”, and indeed someone out there has:

  • Holistic historico-grammatical interpretation with a dollop of context to taste (because it’s not like “context” hasn’t been used to interpret exactly the same words in radically different ways across the course of history, right?)
  • the guidance of the Holy Spirit, which is invoked by absolutely everyone who has ever interpreted the Bible, all in contradiction with one another. Where is the interpreter who says, “Oh by the way, I totally ignored the guidance of the Holy Spirit in this work. No prayer, no spiritual reflection or anything like it went into my interpretation”? I’ve certainly never heard of any such thing, so pointing out a “principle” that is agreed upon by virtually everyone who has ever created one of the thousands of mutually contradictory interpretations of the Bible is sort of stupid. It’s not as if hewing to the Holy Spirit will result in a single clear interpretation, or that some guy with a website is the First Person Ever to think of trusting to the Holy Spirit to guide their interpretation. People have been doing that for over a thousand years and have been disagreeing violently over where the Holy Spirit guides them.
  • Metaphysical interpretation which appears to mean “whatever just makes sense to me without any attempt at justification beyond whatever plausible bullshit I pull out of my ass”.
  • Racist interpretation that I’m going to point to via a skeptics site rather than subjecting myself to the real thing. But anyone who knows anything about the big business of Bible interpretation knows that the Good Book has been used to justify some very bad things using principles of interpretation that are no different from those used by more civilized people.
  • And so on…

The fact that such a diversity of interpretations and interpretive principles exists tells us something: there is no correct, justifiable, narrow envelope of interpretation of the kind that exists for “The cat is on the mat” or “F=ma” or “we hold these rights to be inalienable”. If there were, people would be able to find it and agree on it.

After all, we can find and agree on the laws of motion, which presumably were also laid down by god. How can it be that we can agree on how to interpret the world god made but not the book god supposedly wrote, dictated or inspired? Isn’t that a little odd?

It surely cannot be the case that something so subtle, complex and just plain weird as the universe in all its relativistic and quantum peculiarity should be subject to widely-agreed-upon interpretation but the Bible, the Quran and the Guru Granth Sahib should be completely incoherent and incomprehensible. And yet that is what they manifestly are: completely incoherent and incomprehensible, because if they were not the One True Interpretation would be at least as widely agreed upon as the general consensus in the sciences about the world god made.

There is no way around this: different people from different cultures come to the world god made with different biases and backgrounds, and all end up in pretty much the same place. They don’t insist on imposing arbitrary and essentially meaningless interpretive principles on the study of the world god made, because none are necessary. Close observation of the world is sufficient to reveal its deep secrets to us. There is a tiny amount of residual disagreement, of course, because we are human. But no one is claiming “F=mv” is the correct force law or “E=m^2c” is the correct relativistic energy relation, which is what would be the case if the same range of interpretations existed for the world as for scripture.

This is just a fact: scripture is subject to a vast range of completely contradictory interpretations, and unlike the interpretation of the world god made, no one has ever been able to come up with a set of interpretive principles that have gained anything like widespread agreement.

Yet on the face of it interpreting words is far easier than interpreting the universe. People were interpreting words for thousands of years before we started to get our interpretation of the universe even approximately right. “All men by nature desire to know” was written down 2500 years ago, and no one has ever had much difficulty in figuring out how to interpret it.

So we have failed at the easier task while succeeding brilliantly at the harder. It’s almost as if scripture was full of contradictory gibberish, and therefore incapable of being interpreted coherently at all.

Posted in epistemology, history, language, religion, science | Comments Off

Some Notes on Orbital Mechanics and Climate Change

What is the role of orbital variations in climate change?

We know from direct measurement that the Earth is gaining heat at about 0.6 W/m**2. Climate models give a number that is about 1.6 W/m**2 from greenhouse gas emissions and it is likely aerosols are buying us about 1 W/m**2 back.

0.6 W/m**2 is not very much. The Earth is 149.6 million kilometres from the sun, on average, and the solar constant–the amount of power per unit area reaching the Earth from the sun, called insolation–is 1366 W/m**2 at the top of the Earth’s atmosphere [a commenter on Hacker News pointed out there is a better number: http://onlinelibrary.wiley.com/doi/10.1029/2010GL045777/pdf]. The solar constant varies as 1/r**2, so if we put those numbers together we find that a variation of 32,000 km in the mean distance between Earth and sun is enough to produce 0.6 W/m**2 change in insolation. That’s only a tenth of the distance to the moon, which is not very much at all.
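For the record, here is that arithmetic spelled out, using only the numbers quoted above and the first-order consequence of the inverse-square law, dS/S = -2*dr/r:

    S  = 1366.0      # solar constant at top of atmosphere, W/m**2
    dS = 0.6         # measured heat imbalance, W/m**2
    r  = 1.496e11    # mean Earth-sun distance, m

    # S varies as 1/r**2, so to first order dS/S = -2*dr/r
    dr = (dS / S) * r / 2.0
    print(dr / 1e3, "km")   # roughly 33,000 km, about a tenth of the Earth-moon distance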

This question is interesting in part because unlike many other influences on climate, it is computationally tractable on your average computer. Furthermore, the physics are both painfully simple and absolutely fundamental. Newton’s law of universal gravitation is what started the ball rolling, science-wise, and it’s so simple it can be dealt with using quill and parchment in many interesting cases.

So this is an opportunity to demonstrate some fundamental issues with the computational physics of climate, and explain why I am cautious about drawing any very strong conclusions from climate models. I am not a climate scientist, but rather a computational and experimental physicist. This means I have spent most of my career dealing with systems where I can check my computational results in the lab, and I am painfully aware that they don’t always agree, even when internal error checking seems to say my computational results are basically OK.

Note, however, that nothing I say here implies “climate change is a hoax” or that climate scientists aren’t doing their best to understand an enormously complex system. I believe the general confidence in climate models is unjustifiably high for reasons that I hope this little exploration will make apparent, but I also believe that nuclear fission, solar and wind power should be strongly supported, coal power should be curtailed as rapidly as possible, and taxes shifted away from income and toward carbon emissions.

Dumping gigatonnes of garbage into the atmosphere is not a great idea, although honestly if it were a choice between that and the revolutionary overthrow of the capitalist order I would go down fighting for capitalism. We don’t know how climate change comes out, but we do know that attempts to “change everything” always end neck-deep in human blood, and personally I’d like to avoid that. It’s just the kind of evil capitalist roader I am.

So much for politics. What about science?

Orbital mechanics at the level I’m interested in is governed by a single law:

F = m1*m2*G/r**2

This is Newton’s law of gravitation. The force between two masses (m1 and m2) is equal to their product multiplied by a universal constant, G = 6.673E-11 N*m**2/kg**2, and divided by the square of the distance between them (r). G is a small number because gravity is a weak force. It is only because the masses of the planets and stars are huge that we notice it at all.

Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. As such, it is fundamentally about exploration. We have ideas, we test them, we let the results of those tests guide where we go next. In the end, we publish, because if not made public it is not science. People were investigating reality long before the Proceedings of the Royal Society saw the light of day, but they didn’t accumulate knowledge because they didn’t publish. Science is both intensely individualistic and profoundly communal.

In the present case, the first idea I had is “I bet I can get some decent orbital mechanics results using that adaptive Runge-Kutta solver I wrote years ago for a quite different project.” So let’s put that to the test. [A number of commenters on this article on Hacker News have pointed out that symplectic integrators are a better choice for this problem. They are correct, but I didn’t have a symplectic integrator lying around. I’ve since explored this alternative and found it faster but less accurate than the work described here]

This is the way the computational physics of orbital mechanics works. We have a body like the Earth moving around the sun. We know its current location and velocity, and we know the gravitational force acting on it. And we know some basic kinematics, which is the mathematical description of motion (dynamics is about the causes of motion, kinematics is about the description of motion.)

Kinematically, we have the equations:

dx = v*dt
dv = a*dt

where dx = change in position, dv = change in velocity, a = acceleration, and dt = a small time interval. I’ve written these as one-dimensional equations but there are similar equations for y and z as well. Newtonian gravity has the nice property that the x, y and z components are all independent of each other so we can compute them separately (Einstein’s gravitational theory does not have this property, and as a result is… somewhat more complicated.)

All these equations say is that position changes linearly with velocity and time, and velocity changes linearly with acceleration and time. Acceleration is given by Newton’s law of gravitation via Newton’s second law: F = m*a, or a = F/m, which gives us:

a = m2*G/r**2

Because gravity is proportional to mass, m1 is divided out of the force law, making the gravitational acceleration proportional only to the mass doing the attracting. This is why feathers and hammers drop at the same rate on the surface of the moon, or in a giant vacuum chamber.

Taken together, these relations give us a second order differential equation for the position of an object moving under the influence of gravity. The second derivative of position (which is also the rate of change of velocity) is just equal to the acceleration (by definition):

d^2x/dt^2 = a

We can get the acceleration of m1 from any number of bodies by summing up their gravitational influence:

d^2x/dt^2 = m2*G/r12**2 + m3*G/r13**2 + … + mi*G/r1i**2

where r1i is the distance between the mass of interest and the ith body in the simulation.
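In code, that sum looks something like the following sketch (Python, written in full vector form rather than one component at a time; the function name and layout are mine, not anything from the production code):

    import numpy as np

    G = 6.673e-11  # N*m**2/kg**2

    def accelerations(positions, masses):
        # positions: (N, 3) array in metres; masses: (N,) array in kg.
        # Returns the (N, 3) gravitational acceleration on each body from all the others.
        n = len(masses)
        acc = np.zeros((n, 3))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                r_vec = positions[j] - positions[i]      # points from body i toward body j
                r = np.linalg.norm(r_vec)
                acc[i] += G * masses[j] * r_vec / r**3   # magnitude G*mj/r**2, direction r_vec/r
        return acc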

Conceptually, to solve this equation means simply integrating the effects of gravity along the curve of motion determined by the effects of gravity. If that sounds a bit recursive it’s because it is: to know the effect of gravity over each step we have to know the effect of gravity at the end of the step as well as at the beginning, but we don’t know the effect of gravity at the end of the step until we’ve taken the step and found out where it ends up. It turns out we can deal with this using some simple corrections, because god loves us and made the universe second-order-smooth.

Starting from known initial conditions we can step forward small but finite increments in time. To figure out the effect of the endpoint we break the whole interval into parts and use estimates of the values in the middle points to correct the value at the end for the change of gravity over the interval. There is a whole family of methods for doing this but the workhorse is 4th order Runge-Kutta (RK4), named after the pair of German mathematicians who devised it.

Many years ago I wrote an adaptive RK4 solver that varies the time step to maintain some error bound. It’s rather stupid about it, just dividing the desired time step in two and asking “After taking two half-steps am I sufficiently close to the same place as when I took one full step?” It does, however, work reasonably well.
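For the curious, here is a stripped-down Python sketch of that step-doubling scheme. This version only ever shrinks the step; a production solver also wants logic to grow the step back when the error is comfortably small, which is omitted here.

    import numpy as np

    def rk4_step(f, t, y, h):
        # One classical 4th-order Runge-Kutta step for dy/dt = f(t, y), with y a numpy array.
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def adaptive_rk4_step(f, t, y, h, tol):
        # Step-doubling error control: take one full step and two half steps,
        # and keep halving h until the two answers agree to within tol.
        while True:
            y_full = rk4_step(f, t, y, h)
            y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
            if np.max(np.abs(y_full - y_half)) <= tol:
                return t + h, y_half, h   # keep the (more accurate) two-half-step answer
            h /= 2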

Here’s where computational physics starts to get fun.

We represent the world with numbers, but the real world has more-or-less infinite precision while computers are decidedly finite. A typical double-precision floating point number is 64 bits which allows it to represent values between about 1E-308 and 1E+308. More important is the value ε, which is the smallest number that when added to 1.0 results in a different value. For 64-bit floating point numbers this is 2.220446049250313E-16, which is not all that small, as we shall see.
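You can poke at these limits directly (assuming numpy is available; what the longdouble line prints depends on the platform):

    import numpy as np

    print(np.finfo(np.float64).eps)       # 2.220446049250313e-16
    print(1.0 + 1e-18 == 1.0)             # True: 1e-18 simply vanishes next to 1.0 in a double
    print(np.finfo(np.longdouble).eps)    # smaller on platforms with 80-bit extended precision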

We are going to be integrating the orbits of the planets, and doing so over a few thousand years because this turns out to be about the limit of computation for my laptop. A full solar system simulation covering 1E11 seconds (about 3200 years) takes a day or so to run. Patience is a virtue for computationalists as it is for experimentalists.

The size of the time step for our simulation is going to turn out to be about 10,000 seconds, and even with adaptive RK4 integration the error bound turns out to be 1E-18 to get decent internal precision. 1E-18 is less than ε for double-precision floating point, so we’re going to have to go with 128 bit long doubles (implemented by the nasty trick of “#define double long double” because I’m basically an evil bastard.)

How did I decide this?

First, by setting up a dumb-as-rocks simulation of a faux Earth around an approximate sun, and letting the simulation run to cover 1E11 seconds and then changing the signs of all the velocities and letting it run backward for another 1E11 seconds. With perfect numerical precision, this would result in the final positions of the sun and the Earth being identical to their initial positions.
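If you want to run the same kind of reversibility check without writing a solver, a rough stand-in using scipy’s stock integrator on a faux Earth circling a fixed sun looks something like this (a sketch with made-up tolerances and a shorter run than the 1E11 seconds used here):

    import numpy as np
    from scipy.integrate import solve_ivp

    G, M_sun = 6.673e-11, 1.989e30

    def two_body(t, y):
        # State y = [x, y, z, vx, vy, vz] of a test Earth; the sun is fixed at the origin.
        r = y[:3]
        a = -G * M_sun * r / np.linalg.norm(r)**3
        return np.concatenate([y[3:], a])

    y0 = np.array([1.496e11, 0, 0, 0, 29.78e3, 0])   # roughly circular orbit at 1 AU
    T = 1e9                                          # seconds; about 32 years

    fwd = solve_ivp(two_body, (0, T), y0, rtol=1e-12, atol=1e-3)
    yT = fwd.y[:, -1].copy()
    yT[3:] *= -1                                     # flip the velocities
    back = solve_ivp(two_body, (0, T), yT, rtol=1e-12, atol=1e-3)

    # With perfect arithmetic and a perfect integrator this would be exactly zero.
    print(np.linalg.norm(back.y[:3, -1] - y0[:3]), "metres of closure error")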

There was a lot of flailing about at this stage, and it took a few days and some head-to-head comparisons between the Python and C++ versions of the code to tease out a few minor issues that resulted in improperly accumulating errors. The Python code is far too slow to run over the full simulation time, but it served as a very useful check on the C++ over shorter times (when both are run at 64 bit double precision the results have to be byte-for-byte identical or something is wrong.)

I mention this because we rarely see what goes on behind the scientific curtain. There is always a lot of flailing around, a lot of trial and error. Young people in particular may not be aware of this and feel that their own flailings are uniquely embarrassing. They are not. Flailure is always an option, and often quite a good one.

Having spent a few days flailing I was ready to plug in some real numbers for the Earth, the sun, and the planets. I decided eventually to leave the Moon out of it until the end, because I thought it shouldn’t have a huge effect on the Earth’s orbit about the sun. Silly me, as it turned out.

I also elected to leave out all corrections from general relativity, from tidal forces, and so on, which really are trivially small over the time scales I care about. As I said at the outset, what I wanted to know was: is the effect of the planets on the Earth’s orbit comparable to the effect of humans on climate?

NASA of course has all the numbers one might want with regard to the positions of the planets and so I turned to them for the current data, which I set as the positions and velocities of the sun and planets at midnight CT January 1 2015. NASA also has detailed results of their own solar system model available, which gave me a target to aim for.

Having a target to aim for is the holy grail of computational physics, of course: a measurement or gold-standard calculation that can be used to validate the code. Validation is done by setting any physical constants in the code based on independent measurements and then ensuring in known cases that the simulation reproduces reality. It is not about tuning unphysical parameters so the code matches one particular reality. It is about testing without adjustment.

The way NASA actually developed their model was based on a dialog with nature. They had excellent measurements going back decades or centuries, and required that their simulations matched those observations. I am simply using NASA’s validated simulation results as a surrogate for observation, and since they really are ridiculously accurate–they are the basis for spacecraft navigation in the solar system, and have corrections for things like the wobbling of Mars under the influence of its two tiny moons–this is more than good enough for what I’m doing.

The first thing I did was simulate Earth going around the sun with no other planets. This is the simplest possible system and I wanted to get a sense of what it looked like over the past few thousand years. For convenience I ran 1E11 seconds, which is a little over 3000 years and took an hour or so to run. The initial results were promising but a little weird, as initial results often are. I decided to add in other planets, thinking that perhaps I was asking the simulation to do something impossible: I was giving it a precise starting point in terms of the Earth’s position and velocity (and the sun’s) but leaving out all the other planets, whose presence was actually necessary to create that position and velocity.

I wasn’t sure which planet after Jupiter was most important in perturbing Earth’s orbit–I thought it was probably Saturn–so I ran a bit of code to find the maximum gravitational force between each of the planets and Earth, and got a bit of a surprise:

Body     Orbital Radius (AU)   Max Force on Earth (N)
Sun      0.0030246192898       3.69616319e+22
Jupiter  5.41567069072         1.79863405677e+18
Saturn   10.1361271569         1.2579933068e+17
Venus    0.744294082795        1.37535974505e+18
Mars     1.41241143847         6.97067495351e+16
Mercury  0.407065807622        1.7349744085e+16
Uranus   20.37633197           4.2709974665e+15
Neptune  30.5218921949         2.1709083215e+15
Moon     0.00261101798674      1.99083093078e+20

As can be seen, Venus has about ten times the influence on Earth as Saturn does, and when it is added into the simulation along with Jupiter and Saturn things started to look a bit better. There was still some weirdness with the precise length of the Earth’s orbit… or was I misconverting from fractional Julian day number to seconds? This kind of trivial conversion problem is far more common than you might think.

Adding Mars changed things very little, and at this point I started doing a more detailed comparison with the NASA data, and noticed a couple of oddities. In particular, the Earth’s distance to the sun at aphelion (furthest from the sun) was off:

One year of orbital simulation with no moon. Green line is present work, red line is NASA

And if I looked out a few hundred years the length of the year was clearly wrong:

After a few hundred years a small error accumulates…

When in doubt, look carefully at the data. Don’t be too quick to jump to conclusions. Unlike every other approach to knowing, science rewards being open to alternative ideas. Advocates of non-scientific approaches often say they want people to be open to “other ideas”, but in the end they all want you to reach some conclusion that they have already decided is the right one, like “Mohammed was the last true prophet” or “big pharma and GMOs are causing all the ills of the world.” Science, on the other hand, is just a discipline, and all it wants is that you practice it honestly, publicly testing ideas by systematic observation, controlled experiment and Bayesian reasoning.

When someone lectures you on the importance of being open minded, try responding with something along the lines of, “I’m open to any idea that we can figure out how to test, because if it can’t be tested it probably can’t explain anything. If it can explain a phenomenon, then it can act as a cause on other things, so it can be tested, because we can make predictions of other things that it is likely to cause. If it can’t cause anything other than what it’s being invoked to explain, then it really doesn’t have any meaning beyond ‘the thing that causes that’, and that’s pretty boring. So give me an idea–one idea, not ten–and let’s talk about how to test it, what other consequences it would have that we can look into. I’m totally open to any idea like that.”

The practice of science is very much about learning where to look for plausible ideas, and focusing on looking rather than imagining as the first step on any search for a more plausible proposition. This is difficult because it requires us to accept that we don’t know what the problem is and we might not be able to find the actual source. Imagination is much more comforting than reality at times like this. But we should learn to trust ourselves, and look to the world to guide us. That’s the only way we’ll find more plausible propositions. People were imagining answers to hard questions for thousands of years before we learned the discipline of science, and we have precious little besides some decent poetry to show for it. We certainly never cured disease or ended hunger that way.

So rather than imagining what the problem might be, I looked more carefully at the various components of position, and what I saw in the z-component (the distance from the plane of the ecliptic) was striking:

Red line is simulation, wobbly green line is reality

Instead of the smooth sinusoid of my simulation, reality is undergoing a weirdly jagged drunkard’s walk with about thirteen wobbles per year. Thirteen is the number of lunar months in a year, more-or-less, which suggests that the culprit is the moon.

I hadn’t given the moon much thought at this point. While it has a large interaction with the Earth, as a terrestrial satellite it didn’t seem like it could have a very big influence on the Earth’s orbit around the sun. I basically viewed it as an idler, swinging along beside the Earth but having no net influence to speak of.

Thinking about it, the biggest effect of the moon is on the initial conditions: I have pulled the Earth’s orbital parameters out at a single moment in time and its motion at that time has been influenced by the moon. Take the moon away, and it will not track back along its physical course but some other curve. The effect is most obvious in the z-component, where the 5 degree tilt of the moon’s orbit relative to the plane of the ecliptic results in the Earth being wobbled as the moon travels around it, but the overall effect shows up in other places too, as we shall see. My simulation starts off in the correct place but with one small missing influence, and after a few hundred years lands in a completely unphysical place.

This is the great lesson of computational physics: errors in the model almost always accumulate. They never average out.

When I added the moon, things snapped into place:

Simulation and reality with the moon in place

It is worth noting that both with and without the moon the internal error on my ten-year computation was small, less than half a meter after running things forward and backward. Over that simulated time span my model Earth covered nearly 20 billion kilometres, and the equations brought it back to within 42 cm of its starting point. That it did this even without the moon–which turns out to be vital to matching reality in so many other ways–is an example of how internal consistency is necessary but not sufficient for high accuracy.

Now I was ready to run a long simulation with all the planets and the Moon. I experimented with relaxed error bounds to get the simulation time down to a day or so. The moon, because of its tightly curved orbit, is a particular pain to simulate accurately, and so slows the simulation dramatically when the error is at 1E-18. After running a few one-decade simulations with different error bounds I settled on 1E-13 for the long run. If it didn’t work I would only have lost an overnight.

I still had that weirdness in the phase over hundreds of years. Either I was converting the NASA data incorrectly, or I was goofing up the time conversion on my own results somehow (the results are output in seconds but converted to years for display) or the simulation was still not right.

When the simulation that included the moon was about half done I ran the processor on the incomplete output file and had a look at the results:

With the moon in place the year has the right length

So… if asked, “Do you think the Earth’s moon changes the length of the year?” would you have said, “Yeah, enough to lose about six months over the course of 400 years or so?” I certainly wouldn’t have. Yet it does, because the true and correct initial conditions put the Earth in a slightly different orbit without the moon’s influence to correct it. Had I started the simulation at a different point in the lunar month, the results would have been quite different, because the moon’s momentum relative to the Earth’s changes over the course of the month.

[Edit: I originally had an error in the energy calculation that cleverly left out the potential term. Oops. When added back in, the simulation has a fractional energy error of 6.7E-11, which is good enough for going on with.] After the full 3200 year run, forward and back, the Earth comes out of place by just a bit more than its own radius–6350 km–which is not bad following a six trillion kilometre journey.

At this point I was confident I understood the Earth’s orbit pretty well, and could set out to answer the question I started with: how do variations in the orbit affect the insolation?

As can be seen, the distance to the sun varies quite a lot over the year due to orbital eccentricity, so some kind of averaging is necessary. Averaging over sinusoidal waveforms is surprisingly tricky because how you handle the ends tends to dominate the whole integral. If you get one end-point just a little bit wrong, it won’t cancel the contribution at the other endpoint, and you get a significantly non-zero value that’s just a numerical artifact.

One way to avoid that is to fit the curve to a known shape–in this case just a cosine function plus a flat baseline–and use the fit parameters to estimate the properties you care about. B + A*cos(ωt+φ) is the function to fit, and the thing I care about is B, the baseline value. The rest is just a wobble around that. With t in years ω is fixed at 2*π, so the fit only has three parameters, with the phase φ just correcting for wherever the fit happens to start for each fitting period, as the data points don’t fall exactly on year boundaries. Fitting over two-year intervals–which resulted in slightly cleaner results than one-year fits–yields the following curve for the baseline B parameter, which is the average insolation:

insolation
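For concreteness, the fit itself looks something like this sketch (Python with scipy’s curve_fit, run here on fake data standing in for the simulation output; the real analysis fit two-year windows of the computed insolation):

    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, B, A, phi):
        # Baseline plus a yearly wobble: with t in years, omega is fixed at 2*pi.
        return B + A * np.cos(2 * np.pi * t + phi)

    def fit_baseline(t_years, insolation):
        # Returns the baseline B (average insolation), the wobble amplitude A
        # (the eccentricity signal) and the phase phi.
        p0 = [np.mean(insolation), np.ptp(insolation) / 2, 0.0]
        params, _ = curve_fit(model, t_years, insolation, p0=p0)
        return params

    # Fake two-year window just to show the shape of the call.
    t = np.linspace(0.0, 2.0, 2000)
    s = 1366.0 + 45.0 * np.cos(2 * np.pi * t + 0.3) + np.random.normal(0.0, 0.1, t.size)
    print(fit_baseline(t, s))   # approximately [1366, 45, 0.3]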

There is a bit of a step around 1500 years ago, but it is less than 20% of the heat imbalance measured in the modern day, and it is not reproduced in the NASA data–the purple line that runs back 2000 years, which is all I downloaded–so is likely a result of remaining imperfections in the model or analysis. The thin blue line at the top shows the magnitude of the empirically observed heat imbalance from the nominal solar constant of 1366 W/m**2. If the moon is left out of the model the simulated insolation is about as high again above the blue line as the blue line is above the correct value.

There is also a very observable effect of decreasing orbital eccentricity over time, as can be seen in the A parameter, which measures the variation over the course of the year:

eccentricity_insolation

While the change in eccentricity is quite significant–and very close to that found in the NASA dataset, as shown in green–it doesn’t have a direct driving effect on climate.

It does, however, have an indirect effect. More intense sunlight during part of the year will be offset on average by less intense sunlight six months later… except that the sun won’t be falling on the same places. Perhaps the perihelion–the time of closest approach and most intense insolation–happens during boreal summer. That would mean the more intense heating will occur when the northern hemisphere is already at its hottest, and the cooling will happen during austral summer, when the northern hemisphere is covered in snow. If things were the other way around, the extra heating would occur when the Great Southern Ocean was warming up, and cooling would happen during northern summer.

If the Earth were a smooth and featureless billiard ball none of this would matter, but that is not the case. How the planet responds to additional heating depends on seasonal conditions in the north and the south, and understanding that response is the business of climate modelling.

And climate modelling requires physically accurate software if it is not to be subject to the kinds of error accumulation I’ve explored here. The difference between precise models and noisy models is that in precise models (like orbital simulations) cumulative errors are easy to see, and in noisy models (like climate) cumulative errors are hard to see.

This is where this little excursion ends, but I’ll come back to this topic later, with a closer look at some of the physics in specific climate models that I find problematic. Climate is determined by a number of large competing factors. The amount of energy that reaches the Earth’s surface is only a fraction of the 1366 W/m**2 incident at the top of the atmosphere. Some is reflected, some is absorbed, some is scattered. Small changes to any of these processes can result in large effects on the climate. We are concerned with changes at the 0.1% level: 1.4 W/m**2. This is the magnitude of the energy imbalance driving climate change.

Building models that are accurate to 0.1% over a century is not possible unless the physics they embody is at least that accurate. This is my professional judgement as a computational physicist, and the argument presented here is supposed to illustrate that judgement, not prove it. When I set out to perform this analysis I knew pretty much what the results would be, because I knew that tiny aberrations in the physics almost always result in significant excursions from reality at the end of the simulation. I didn’t know precisely what would be the major source of inaccuracy, but I was sure I would run into one. Unsurprisingly, I did.

A small under or over estimate of any major process in any simulation will result in unphysical integrations, just as we have seen here with planetary orbits. The moon’s effect on the Earth’s orbit is tiny–its direct gravitational influence is just 0.3% of the sun’s. But its presence or absence in the model has a huge effect on the properties of the orbit. Leaving the moon out with the particular initial conditions I’ve used results in an average orbit that has the insolation wrong by 1.7 W/m**2, about the same size as the total effect from greenhouse gases.

That’s an absolute effect, mind. Climate models are attempting to get at relative effects. But they are doing so by subtracting large terms from each other, and there is no reason to believe their errors will cancel.

This is not a political statement. As I have said, I think anthropogenic climate change is a serious problem and that it should be approached via a mix of technology and public policy, particularly nuclear power development, solar power development, energy storage development, rapid curtailment of thermal coal development, and a shift from income taxes to carbon taxes. I don’t need physically accurate climate models to tell me these are all good things, because they are good things regardless of the long-term effect of carbon on the Earth’s climate. Carbon-based fuels have enough problems to justify moving away from them regardless, and the substantial and plausible risk posed by anthropogenic climate change simply adds to the argument.

But all that said: until climate models are as accurate as orbital simulations, it is difficult to claim that the science is settled. That doesn’t mean we can’t say anything, of course. Even without accuracy we can be fairly sure of the following:

1) Adding CO2 to the atmosphere will increase global heat content. There is simply no way to avoid this. We have directly measured human contributions to CO2 and other greenhouse gases and we know they are sufficient to add a significant amount of heat to the climate. We are working on directly measuring the effect of aerosols and the indications are they are reducing the amount of heat in the climate. We have directly measured the heat balance of the oceans and it is showing effects that are in the range of 0.5 to 1.5 W/m**2, which is the magnitude of effect we expect from models.

2) Increasing global heat content will have a disruptive influence on economic processes that are tuned to the current climate.

What we don’t know, and what I am arguing that we can’t know due to the limitations on the physics in climate models, is the detailed ways in which the climate will change. You can read this, if you like, as an apology for the “hiatus”, which I’ve been warning about for over a decade. Unphysical models will not reproduce reality in detail and it was a terrible mistake to sell climate models to the public as if they could.

Who can predict the effect of what we leave out of climate models? We leave out–or approximate in unphysical ways–precisely the things we don’t understand well enough to include properly. We hope the effects of those approximations will be small. But because error always accumulates–I have never run a model of any kind where this was not the case–we know that those unphysical approximations will tend to carry our models further and further away from reality as time passes. So we can’t tell what effect our omissions will have until we have data to compare to. And that’s OK.

However, putting forward detailed model results as if they were strong predictions about the real future is a mistake, because given the level of physical detail and the vagaries of long-time integrations, it would be astonishing if climate model results bore more than a vague resemblance to reality, just as it would be astonishing–given what I know now–if a model of the Earth’s orbital motion that left out the influence of the moon was particularly close to the physical motion of the planet.

Nor do internal checks on model precision reflect very strongly on model accuracy. Only comparison with ground truth can do that, and in the case of climate we won’t have that until it’s rather too late to do anything about it.

Fortunately, we don’t have to wait around. There is ample reason to build nuclear power stations, to close coal plants, to build solar farms and energy storage systems, and to shift the tax burden from incomes to carbon emissions today. It’s time we stopped listening to lunatic calls that we must “change everything” or engage in the kind of failed revolutionary action that characterized so much of the 20th century, and started to focus on changing the few things that will actually help the climate without bringing global industrial civilization to an end.

Hysterical shills for big coal and big oil will oppose sensible policies as much as the looney left does, but while I don’t think the science of climate change is remotely settled, the arguments for technological and policy changes that evolve our global, market-based, industrial civilization in the direction of a low-carbon future are compelling regardless.

Posted in mechanics, physics, science, software | Tagged | Comments Off

The Narrative of Fire

I’m a very lucky person. I was born in the best nation at the best time in the best place on Earth, into circumstances that at once gave me a certain degree of humility (although I often keep it well-hidden) and the opportunity to do pretty much whatever I want.

One of the things I grew up with was fire, and my luck being what it is, I am now renting a place with a real fireplace.

I’m pretty sure everyone reading this knows what I mean by “real”: it burns wood, not gas.

I mostly burn prefab logs in it, for a variety of reasons. The chimney hasn’t been swept in a good long time, for example.

Contemplating the difference between gas fireplaces and real fireplaces, even real fireplaces burning fake logs, I am struck by this: real fire has a narrative. Every real fire tells a story. It grows from almost nothing–or perhaps a spectacular initiation ceremony of the kind Scout leaders used to put on–and goes through a complex evolution until you wake beside the quiet ashes in the morning, the girl from the night before still warm against your body even though the fire has gone cold.

This is the narrative of fire, and it is older than humankind.

Our Homo erectus ancestors knew the narrative of fire, a million years ago. We evolved from them, and brought fire with us.

It told its tale in the night time, survived as a pale shadow through the day, and spoke again in the evening.

Hominids watched the fire burn, and listened to the stories it told, and became human.

Is this the ultimate source of our narrative proclivity? Is this where the stories come from? Not from the tales told in the circle of firelight, but from the fire itself?

A fire is a process of combustion in three acts. It begins with a small flame, grows to the point where it is no longer tentative but inevitable, unstoppable, beyond the point of accidental extinguishment, and as it burns through the second act it plays upon the wood until a midpoint is reached… it is a mature fire, fulfilled in all its potentials. The last log has been fed into it but is not consumed, not homogenized. The fire and the wood now fight for longevity, consuming and defending until, inevitably, we reach the third act, when all is hot, the totality is ready for whatever end awaits. The sparks rise up, the climax is reached, the denouement follows, and the ashes die down toward the grey light of dawn.

This is the narrative of fire: eternal, beyond the realm of human life, fundamental to all that comes after.

So long as there is fire, there is life. So long as there is fire there is something human in the world, telling its stories to the empty sky, warding off the darkness and the rain.

Posted in life, religion, story, thermodynamics | Comments Off

Some Notes on Case-Control Studies

Case-control studies do the following:

  1. Find a bunch of entities that have instances of the effect you are interested in. These are the “cases”.
  2. Find an equal or greater number of entities that are matched with your cases in every respect except the factors that you think might be the causes of the effect you are interested in. These are the “controls”.
  3. Compare the size of the purported causes in the two populations and see if they are significantly different (see the sketch just after this list).
  4. Publish before doing any further investigation, sanity-checking, or corrections for multiple experiments.
  5. Have your institution give out the most hyperbolic press-release that your conclusions can be tortured into confessing support for, because torture works so well.
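
Step 3 is usually reported as an odds ratio computed from a two-by-two table of exposed and unexposed counts. Here is a minimal sketch of that arithmetic; the counts are invented, and the rough 95% interval uses the standard log-odds (Woolf) formula:

import math

# purely hypothetical counts: exposed/unexposed among cases and controls
nCaseExposed, nCaseUnexposed = 40, 460
nControlExposed, nControlUnexposed = 55, 945

fOddsRatio = (nCaseExposed * nControlUnexposed) / float(nCaseUnexposed * nControlExposed)
fLogSE = math.sqrt(1.0/nCaseExposed + 1.0/nCaseUnexposed + 1.0/nControlExposed + 1.0/nControlUnexposed)
fLow = math.exp(math.log(fOddsRatio) - 1.96*fLogSE)
fHigh = math.exp(math.log(fOddsRatio) + 1.96*fLogSE)

print(fOddsRatio, fLow, fHigh)

If the interval excludes 1.0 the difference gets called "significant", and steps 4 and 5 follow with depressing regularity.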

Admittedly, the last two steps are not strictly required for a well-designed case-control study. They are just extremely popular. I’m being insensitive to people who conduct case-control studies here. Hopefully none of them will murder me. Because I hear that’s a thing now, killing people who mock your irrational beliefs. [*]

And thinking case-control studies are a good idea is nothing if not an irrational belief. That very nice article on 538 takes the case-control results from Swedish studies and applies them to the American population over the past couple of decades. Since cell phone use has increased dramatically, you can compute an expected increase in the rare brain cancer that the Swedes say is increased by cell phone use, and you can see trivially that no such increase has occurred.
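
The arithmetic behind that check is simple enough to sketch. Every number below is a placeholder I've made up for illustration–not the Swedish risk estimate, not the real incidence, not the real usage figures:

# how many cases per year would the claimed relative risk imply at population scale?
fBaselineRate = 3.0e-5     # assumed baseline incidence of the rare cancer, per person-year
fRelativeRisk = 2.0        # assumed relative risk for heavy cell phone users
fUsersThen = 0.01          # assumed fraction of heavy users a couple of decades ago
fUsersNow = 0.70           # assumed fraction of heavy users today
nPopulation = 300000000    # round-number US population

def nExpectedCases(fUserFraction):
	# the population is a mixture of exposed and unexposed people
	fRate = fBaselineRate*(fUserFraction*fRelativeRisk + (1.0 - fUserFraction))
	return int(fRate*nPopulation)

print(nExpectedCases(fUsersThen), nExpectedCases(fUsersNow))

Under those made-up numbers the yearly case count should have climbed by more than half as cell phones spread. The registries show nothing like that, which is exactly the 538 argument.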

Ergo, cell phones don’t cause brain cancer.

But why would anyone believe they did in the first place?

The problem with case-control studies is that you are attributing any difference between your cases and your controls to the causes you don’t control for. And for rare events there will almost always be differences. There are ways around this that I’ll get to below, but let’s first look at how it happens.

To illustrate it I wrote a little Python code:

import numpy as np
import random

import scipy.stats.mstats

# a bunch of characteristics with mean 10 and width 5
fMean = 10.0
fWidth = 5.0
nCharacteristics = 10

# one characteristic is going to have a trivial boost
# in the case population, just 'cause randomness happens
fCorrelation = 1.05

# three match criteria, three possible "causes"
nTestCriteria = 3
nMatchCriteria = 3

nPatients = 500
nControls = 2*nPatients

for nShots in range(0, 100):
	lstIndices = list(range(nCharacteristics)) # needs to be a list so shuffle works; don't hard-code 10
	random.shuffle(lstIndices)

	lstPatients = []
	for nI in range(0, nPatients):
		lstPatients.append(np.random.normal(loc = fMean, scale = fWidth, size=nCharacteristics))
		for nIndex in lstIndices[-nTestCriteria:]: # causes are tweaked
			lstPatients[-1][nIndex] *= fCorrelation
		
	lstControls = []
	while len(lstControls) < nControls:
		lstTest = np.random.normal(loc = fMean, scale = fWidth, size=nCharacteristics)
		for lstPatient in lstPatients:
			nCount = sum([abs(lstTest[nI]-lstPatient[nI]) < 1 for nI in lstIndices[0:nMatchCriteria]])
			if nCount == nMatchCriteria: # match on uncorrelated criteria
				lstControls.append(lstTest)
				break

	lstRatio = [] # odds of rare events are based on tails of distributions!
	for nIndex in lstIndices[-nTestCriteria:]:
		nPatientCount = 0
		for lstPatient in lstPatients:
			if lstPatient[nIndex] > fMean+2*fWidth:
				nPatientCount += 1
		nControlCount = 0
		for lstControl in lstControls:
			if lstControl[nIndex] > fMean+2*fWidth:
				nControlCount += 1

		lstRatio.append((float(nPatientCount)/nPatients)/(float(nControlCount)/nControls))

	nJMax = 0 # now take the BIGGEST difference in effect!
	fRatioMax = 0
	for (nJ, fRatio) in enumerate(lstRatio):
		if fRatio > fRatioMax:
			nJMax = nJ
			fRatioMax = fRatio

	# compare the distributions... are they different?
	# map the winning slice position back to the actual (shuffled) characteristic column
	nIndexMax = lstIndices[nJMax - nTestCriteria]
	lstPatientData = []
	for lstPatient in lstPatients:
		lstPatientData.append(lstPatient[nIndexMax])
	lstControlData = []
	for lstControl in lstControls:
		lstControlData.append(lstControl[nIndexMax])
	
	fT, fProb = scipy.stats.mstats.ttest_ind(lstPatientData, lstControlData)

	print(fRatioMax, fT, fProb)

This is an illustrative cheat, nothing more. I could have built a fancier model but there’s only so much time you can spend on irrational nutjobs, like people who believe in case-control studies.

The results of the simulation are shown below:

Odds Ratio vs T-Test Results

The point is that with an undetectably small tweak to the underlying distribution (the T-test p-values are almost all > 0.05) it is trivially easy to get factors of two or more difference between case and control groups.

This is possible in part because I’ve allowed multiple possible causes and selected and reported on the one that showed an effect. This is a criminally bad thing to do, utterly illegitimate and wrong. If you’re going to do it, you need to a) define the categories of cause beforehand and b) correct all your p-values for the fact that you’ve gone on a fishing expedition. The odds of something being correlated with your effect are as near as anything to a certainty. The more different things you look at as “possible causes” the more likely it is that you will find one that is correlated by chance.
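
For b), the bluntest correction is Bonferroni: multiply every p-value by the number of "possible causes" you went fishing among before calling anything significant. A minimal sketch, with invented p-values:

# invented raw p-values from a fishing expedition over several candidate causes
lstPValues = [0.012, 0.21, 0.047, 0.33, 0.004]
nTests = len(lstPValues)

# Bonferroni: a result only survives if it is small even after paying for every look
for fRaw in lstPValues:
	fCorrected = min(1.0, fRaw*nTests)
	print(fRaw, fCorrected, "significant" if fCorrected < 0.05 else "not significant")

Notice that the 0.047 that would have made a lovely press release dies on contact with the correction.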

The importance of the stunt I've pulled here is that by any ordinary statistical standard (and the T-test is as ordinary as you can possibly get) the distributions are not different, but the specific procedure used to tease out the effect produces apparently dramatic consequences. Statistically identical distributions are generating differences of a factor of two or more!

This is another way of saying: if you need a case-control study to detect the effect you are looking for, it is probably so small as to be irrelevant to public policy. The money spent on all those case-control studies on cell phones and brain cancer would have saved far more lives had it been spent on almost anything else: auto safety, anti-smoking campaigns, etc.

One of the reasons I don't work in radiotherapy any more, after a brief and productive stint in the field in the early '90s, is that I realized all the money we were spending would be far better put into anti-smoking and other campaigns against the small number of things we knew pretty well caused cancer, instead of marginally improving radiotherapy treatment, which a) was already pretty good and b) showed no significant likelihood of improving much (spoiler: it didn't).

There are ways case-control studies can be improved to generate results that are more reliable guides to reality. In particular, any decent case-control study should look at exactly one possible cause, or correct very aggressively for multiple experiments. It is hard to overstate how rapidly the statistical power of data decreases as hypotheses multiply, particularly if they are allowed to work in combination, or if the data are sub-setted, so instead of looking at “brain cancer” you end up looking at “this particularly rare form of brain cancer”.
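
How fast things fall apart as hypotheses multiply is easy to compute: if each of m independent looks at the data carries a 5% false-positive rate, the chance of at least one spurious "finding" is 1 - 0.95^m. A quick sketch:

fAlpha = 0.05  # per-test false positive rate
for nHypotheses in [1, 3, 10, 20, 50]:
	# probability of at least one false positive across nHypotheses independent tests
	fFamilywise = 1.0 - (1.0 - fAlpha)**nHypotheses
	print(nHypotheses, round(fFamilywise, 3))

By twenty hypotheses–and sub-setting the data gets you there fast–you are more likely than not to "discover" something that isn't there.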

Secondly, additional non-causal variables should be investigated that have similar scope to the potentially causal ones, and their distributions should be analyzed and reported alongside the purportedly causal ones. Ideally this should be done blindly.

That is, if you’re investigating cell phone use and brain cancer, you should also question participants on how often they talk to their mother, or how often they go out with friends, or what their favourite colour is, and so on. Everything is correlated with everything else, of course, so it’ll be difficult to find truly independent variables… which should give you pause when executing a hyper-sensitive test for correlations. Because maybe cell phone use correlates with how often you talk to your mother, or how often you go out with friends, or what your favourite colour is (seriously: colour preferences exhibit age and cultural differences that could easily correlate with cell phone use.)

By measuring and reporting nominally unrelated variables, the ridiculousness of supposedly positive results will be highlighted.

Thirdly, in the “Methods” section, the rate at which case-control studies produce results that are later shown to be nonsense should be mentioned. A simple sentence would do: “Case-control studies have been used in this area of research for the past 20 years. We have found 253 studies in the literature. Only three of them identified effects that were later confirmed by less problematic forms of investigation.”

If you expect me to believe a result, you need to show me that the method you are using has a good track record of confirmed results in the past. It is true that because of their hyper-sensitivity it will be very difficult to confirm many results from case-control studies by other means, but again: that suggests perhaps redirecting scarce research funding toward areas that have a big enough impact on human life to actually measure.

Fourthly: commit to publishing all results, and get a commitment from your institution’s PR people to make the same amount of noise when you find no association as when you do find an association. Put that message, “New study shows no correlation between cell phone use and scurvy!” out there. Try to ensure the same amount of money is spent promoting negative results as positive. Yeah, I know, I’m into the realm of total fantasy here.

Finally: case-control studies should where possible focus strongly on the dose-response curve. In the absence of randomized controlled trials, the dose-response curve is by far the best indicator of causation. If the effect can be graded by levels of severity then the level of severity should be correlated with the level of the cause. If it is not, then the results are probably noise. This may not be possible in all cases, but when it is, not doing it is inexcusable.
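
One way to run that check is a Spearman rank correlation between dose and severity, offered here purely as an example, assuming you can put a number on both exposure and outcome for each case; the data below are invented:

import scipy.stats

# invented per-case data: exposure (say, hours per week) and a graded severity score
lstDose = [0.5, 1.0, 2.0, 2.5, 4.0, 5.0, 6.5, 8.0]
lstSeverity = [1, 1, 2, 1, 2, 3, 2, 4]

# a real dose-response relationship should show up as a strong positive rank correlation
fRho, fP = scipy.stats.spearmanr(lstDose, lstSeverity)
print(fRho, fP)

If the rank correlation is indistinguishable from zero then, whatever the headline odds ratio says, you are probably looking at noise.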

Case-control studies do have a use in guiding future research, but they are so fraught with problematic aspects that they should never be used to imply causation without a strong dose-response result. This review is rather more generous to them than I am. I am not aware of any research into how often case-control studies are confirmed on follow-up, and any evidence-based researcher (and what other kind is there?) should be bothered by that.

Here is a nice example of a case-control study that doesn’t do everything wrong. They have a single hypothesis, they have a causal account, they do what they can to poke at their results within the limits of their data, and they don’t draw grandiose conclusions (I’d like to see the press release associated with the work, though, which probably says something about mothers taking anti-depressants killing babies.)

Even so, tests that are hyper-sensitive to correlations should come with an outsized warning regarding the lack of correlation between correlation and causation, and whenever you read about a case-control study you should think, “This is more likely than not due to random chance and poor research methods, and even if the effect is real it is so small they had to use a test that was hyper-sensitive to correlations to find it, and in any case they don’t show any dose-response data so it doesn’t constitute more than the tiniest incremental evidence in favour of the proposition under test.”


[*] Yes, I am still pretty much incandescently angry regarding the murders by blasphemophobes last week in Paris, and am likely to remain so for a good long time. I get that way when irrational people take it upon themselves to kill people specifically because of characteristics I have. That means I’m still a monkey underneath, rooting for members of my troop, and I won’t deny it. Oook. That said, I have also been angry for a long time at the killings perpetrated by the US and others against innocents in the Middle East and elsewhere–being more-or-less an innocent myself–and have written and spoken about it extensively, so this is not cherry-picking. Dropping the innocent and focusing just on the largest monkey-troop of all–human beings–here’s something I wrote on Facebook after the death of Osama bin Laden: “I do not celebrate the death of a human being. The impulse to solve our problems by killing people is what got us into this mess. It will not get us out of it.” But though I do not support killing, I reserve the right to be absolutely furious with killers.

Posted in science | Comments Off

Reification Revisited

I think very slowly and I’m not very smart. Unlike every other human being on Earth, who “just knows” exactly what is the right thing to do in all circumstances and is never, ever wrong about it, I sometimes make mistakes. Sometimes I even figure them out and correct them.

This is unique to me: no one else ever makes a mistake, so no one else is ever under any circumstances required to change their mind. It’s quite remarkable, really, and I’ve never understood how people manage it. How they are so damned sure all the time about what is right (what they believe) and what is wrong (what I believe.)

It’s a trick I’ve never mastered, despite being willing to put forward opinions on many things. I’ve learned that it’s a good way to figure out stuff, because if you put forward any opinion whatsoever someone will always tell you why it’s wrong. Since I am the person with the least intelligence and least knowledge of everything on the entire planet, this allows me to get the benefit of the far more intelligent and knowledgeable people who all just know the truth.

It is a little weird that they never agree with each other, though.

In any case, as I analyzed my own argument about religious reification in my slow and rather unintelligent way, I found a contradiction with other arguments I have made.

Unlike every other human being on the planet, who can hold an infinite number of disparate facts in mind at once, I can only manage five or ten. Therefore as I plod methodically through my thinking, I often miss stuff on the first pass, and therefore take up positions that are contradictory to other positions that I hold.

Although I’m extremely slow and stupid, I do value consistency. Perhaps this is what makes me slow and stupid, as I’ve noticed that all the people who tell me how much smarter and more knowledgeable they are than me are rarely much interested in consistency. And while I like that joke about an unhealthy consistency being the hobgoblin of small minds, I do think consistency is worth a little brain power, even for someone who has as little to spare as I do.

I’ve argued here that the doctrinaire feminist analysis of rape as a “man/woman” problem is wrong, and it is better addressed as a “predator/citizen” problem. Because the vast majority of men are a) not rapists and b) not like rapists in the relevant respects and c) often (we don’t have much knowledge of how often, but “often” definitely covers it) the victims of rape themselves… because of all these things it makes more sense to analyze rape using a model in which “people who rape” are the target of our wrath, not “men”.

Applying the same logic to Muslims and blasphemophobes and homophobes and transphobes, it is likewise true that many–perhaps most, but certainly many–Muslims are not those things. Irshad Manji is none of those things, I’m sure, and she describes herself as a Muslim. Irfan Wasara, the major Sufi character in my novel, is none of those things, and he is based on a lot of research into the diversity of Muslim beliefs.

Does the existence of Quakers mean that the statement “Christianity is a violent, intolerant religion” is false?

Why am I even asking that?

It isn’t a particularly useful or interesting question. It isn’t even well-formed in a Bayesian sense.

What is interesting is the following: many Muslims use particular arguments to justify their blasphemophobia, transphobia and/or homophobia. Many other Muslims use different arguments to oppose one or in some cases all of those things.

My interest is two-fold.

First off, because I am a blasphemer and I have gay and trans friends, it is in my interest to promote interpretations of Islam that are on the lower end of the blasphemophobic, homophobic and transphobic scales. This is consistent with my general mission to increase the amount of human decency in the world, although I try not to sully myself with the gibberish arguments of scripturalists.

Secondly, because there is a clear causal association between believing any variant of Islam and being blasphemophobic, homophobic and/or transphobic, it is in my interests to reduce the number of Muslims and the degree to which they adhere to their faith. This is fully consistent with my general mission to reduce the amount of faith in the world.

While “being a Muslim” is causally associated with being blasphemophobic, transphobic and homophobic, this does not mean that it is sufficient to focus on that causation, any more than the undoubted causal association between “being a man” and “being a rapist” is sufficient to justify an exclusive focus on that causation.

Being a Muslim is neither necessary nor sufficient to being blasphemophobic, homophobic or transphobic, although it does help in each case.

The more interesting question is why do some Muslims become blasphemophobic, transphobic and homophobic, just as it is more interesting to ask why some men become rapists. It is simply not “the fact of being a man”, despite the petulant screeds of misandric feminists to that effect.

With regard to Islam, as with regard to Christianity, there are cultural rather than scriptural components in play. Biblical and Quranic guidance on these matters is inconsistent, which is why there is a diversity of belief across their communities.

Curiously, I am reminded of my arguments with feminists in this regard, as they often insist that misandric feminists–who exist–are not “real feminists”, just as some Muslims argue that homophobic, transphobic and blasphemophobic Muslims aren’t “real Muslims”.

As is usually the case with questions of reification, it’s pretty much a matter of taste what you call such people. The interesting question, the important question, is what to do with them.

How does one argue a Muslim of any kind out of their faith-based commitment to homophobia, transphobia or blasphemophobia?

That is the question that matters. I have a few ideas as to the answers, and will explore some of them in the fullness of time.

In the meantime, in my slow, stupid, ignorant way, I’ve brought myself back into a state of something like consistency. It must be much easier to simply never question whether or not one’s beliefs make sense relative to reality or each other. I suppose this is why everyone else is so much smarter than I am: they have all the brain-power that I use to laboriously think things through to power their amazing insights into the minds of people they have never met based on words they have not bothered to properly read.

Posted in bayes, death, ethics, life, politics, religion | Comments Off

Religious Reification

I’ve written before about our tendency to treat certain ways of putting people into abstract groups as “real”, and how this can distort our relationship to reality, often in socially and personally negative ways.

Scott Lynch tweeted today:

“Humans” or “sapient hominids” is the highest available level of abstract category when discussing morality and behaviour in our current context, although one day hyper-intelligent shades of blue may cause us to rethink our stance on inclusive language.

I’m generally an advocate of talking about humans in these situations, but over the past few days I’ve found myself talking about Muslims.

Is this remotely justified, or am I simply being an asshole, reifying the most convenient group and othering the hell out of them?

The rest of this is totally self-serving and may be entirely hypocritical. It’s hard to tell, when you’re close to the issues, but after casting about a bit I think I’ve come up with a justifiable reason for my choice to focus on Muslims rather than humans in this case.

I will say that it took me a few minutes to do this. I did what I usually do in such cases, which is to “cover the ground” around the question, asking myself what I’ve thought about such things before, digging up possibly relevant facts and arguments to see how and in what respects my current position differs from my previous one. I really have argued that judicious choice of what to reify can make the world a better place, and that we too often reify based on anger and hostility.

There is no doubt that in my monkey brain I am angry and hostile toward Muslims. Oook. And I want to destroy their religion as much or more as I want to destroy any religion. Faith is wrong, and it is an enormously destructive force in the world. But all people have faith, so why am I getting all in a knot about Islam in particular just now? What is it to me?

I figured it out eventually, because it is exactly the argument I’ve already made: I am a blasphemer, and Islam is a hotbed of blasphemophobia.

If I say, “The Prophet had an extremely rudimentary grasp of a few Bible stories and the theology of the Quran is childish and stupid”, some nutjob might take it upon themselves to kill me for it. It behooves me to pay particular attention to such a group.

If I say, “Jesus was a Jewish revolutionary whose temporal mission failed and whose spiritual position was hijacked and distorted almost beyond recognition by Pauline intellectual mercenaries invading Roman society” I might get a few arguments, angry words and dirty looks, but in decades of engaging with Christians of all stripes that’s the worst that has happened.

If I say, “Marx was a wanker and his theory was responsible for the death of millions” I might get pushed around a bit, but that’s the most physically threatening I’ve ever seen a Lefty get.

If I say, “Misandric Feminism is unhelpful to men, stifling of free debate, and a bigger danger to the construction of a new and more healthy–for men–masculine identity than all the MRAs in the world combined” I am likely to get some extremely strident screaming and calls for my castration (misandric feminists are kind of monotonic and predictable) and various attempts to ban me from $CAMPUS_OF_YOUR_CHOICE as a speaker, but for all that I think doctrinaire feminism in general is tendentious, overblown and stupid, the number of feminists actually killing people is small.

I am a blasphemer. Give me an orthodoxy and I’ll heterodox it.

There are lots of people who take umbrage at that. A century or three ago I would have had to be on the lookout for Christians coming after me. If I lived in India today I’d have to worry about Hindu fundamentalists coming after me. If I lived in Burma it might be Buddhist fundamentalists I’d have to worry about.

Lefty nutjobs still rule China, and you can be killed there for speaking your mind. Russia is not doing so well in that regard either.

And yet despite that there is empirical evidence that blasphemophobia is more prevalent amongst Muslims than any other group today.

I might be wrong about that. It might be that over the past thirty years there has been a spate of blasphemophobic attacks on Western blasphemers that I have missed.

But under the circumstances, I don’t think it’s unreasonable for me to focus on Muslims, because I am a blasphemer, and where and when I am Islam is by far the biggest threat to blasphemers. Hatred and fear of blasphemy is embedded in Sharia law, which is supported by a substantial minority of Muslims world-wide, including well-regarded, prominent Muslims in Canada.

In the same way, I worry about Lefties who want to nationalize industries, and will continue to do so as long as the most prominent left-wing party in my home province continues to have “public ownership of the means of production” in their constitution. It isn’t entirely illegitimate to reify such groups, who have unified themselves around an organizing principle that is inherently hostile to my way of life. Particularly not when some group members are apt to take the law into their own hands.

Posted in death, ethics, history, politics, religion | Comments Off

Blasphemophobia

I have always had unpopular opinions, and argued for them forthrightly, vigorously and–I like to think–honestly. I have sought out the weaknesses of my own positions and done my best to modify them in the face of new evidence. I’ve learned a lot as the years have passed, and changed my mind about many things. Some propositions I argued for passionately in my youth I now believe to be quite wrong.

I have learned.

I am the only person on Earth to ever admit he was wrong on the Internet, and in fact may be the only person on Earth to have ever been wrong about anything.

This has never won me friendship, respect or admiration. I have lost friends, been subject to denigration, insult and anger. I have been told I am a fool, and given simple-minded lectures on banal trivialities as if I’d never heard them before. Maybe I’m just not a very nice person. But I have learned.

The cartoonists at Charlie Hebdo were members of my tribe, or at least a tribe in the next valley to mine, who my people didn’t go to war with very often.

They lived, by all accounts, pretty lonely, marginal lives, Charb most of all: “I have no kids, no wife, no car, no credit. It may sound a bit pompous, but I’d rather die standing than live on my knees.”

His colleague Ris: “We do not want to be afraid, but to laugh, to take life lightly. We’re just trying to make something funny. Humor is a language that fundamentalists do not understand… no. They rely on fear.”

And Charb again: “I don’t think I harm anyone with a pen. I do not put lives in danger. When activists need excuses to justify their violence, they always find them.”

I’ve lived a life more about knowledge than humour, but by the same token, I don’t think I harm anyone with ideas. I do not put lives at risk.

And I know the difference between words and killing.

I am by education and experience both an engineer and an experimental and computational physicist. I have worked in pure physics and in medical physics in various capacities, as well as robotics and embedded systems. I have deliberately stayed away from anything that will kill people. But I know about such things. In my business they are part of the landscape. I’ve turned down jobs, left money on the table, because it would have meant building machines that kill people. And ideas have come to me unbidden in the night: thoughts, designs for machines whose primary use would be killing people. I have let them quietly pass away in the silence of my mind, undeveloped, unborn.

I know what killing is.

And I know what words are. Mocking, caring, angry, loving, silly, stupid, thoughtful, beautiful words. Words in all shapes, sizes and uses. Words for every occasion.

Words do not kill.

Anyone who suggests otherwise has never actually done the job and seen the body.

But there are hundreds of millions of people alive today, mostly Muslims but some unreconstructed Soviets as well, plus the odd Maoist here and there, and certainly a Christian, Sikh and Hindu or two, who are so afraid of words that they support laws that impose the death penalty for speaking them.

This is not a slur against Muslims but rather a statement of perfectly ordinary fact: most Muslims world-wide support Sharia law and many forms of Sharia law in practice include the death penalty for blasphemy or the closely-related crime of apostasy.

I am aware that there are many Muslim scholars who argue Sharia should not contain the death penalty for blasphemy, or any penalty at all. I am also aware that Pakistan and Saudi Arabia, amongst others, have the death penalty for blasphemy under various forms of Sharia, so scholarly insistence to the contrary, this is a real thing.

Saying “not all Muslims” are in favour of the death penalty for blasphemers does not change the fact that hundreds of millions of Muslims are. If you want to argue against the point I am making here it will not do to say–truthfully–that something like 70% of Muslims are perfectly fine with me calling the Prophet an ignorant psychopath. The fact remains that the other 30% or so are more-or-less supportive of a legal system that would make me a criminal for saying that, and likely on the order of 10%–still comfortably over a hundred million people–would be quite happy with the penalty being death.

Pakistan alone has almost 200 million people, a corrupt but still sort-of functional democratic government, 84% support for Sharia law, and a blasphemy clause in its criminal code that is very broad. Beyond lesser forms of blasphemy such as hurting anyone’s religious feelings (including non-Muslims), any utterance or writing that directly or indirectly defiles the name of Muhammad, upon conviction, carries an automatic death penalty. There are 17 people on death row for blasphemy in Pakistan right now.

They are being killed for speaking words. For expressing opinions. For being of the wrong religious persuasion, or none at all.

Just like Charb.

So it is not wrong to say that there are hundreds of millions of Muslims who are in favour of the death penalty for blasphemy, and again: pointing to any number of Muslims who are not does nothing to change this fact.

And that’s a problem, because when people like Charb blaspheme, some Muslims decide to take the law into their own hands.

The distance between “this is an act that ought to be illegal and punishable by death” and “it is right to kill a person committing this act even in the absence of such a law” is not as large as one might like.

We have seen this with fundamentalist Christians in the US who bomb abortion clinics, and those actions make anyone who believes that abortion is a crime deserving of the death penalty rightly suspect as a potential murderer or arsonist, and the whole pro-life movement is routinely treated as suspect: “Yet every time that happens we instantly have congressional investigations, Justice Department press conferences, Presidential denunciations, round-ups of pro-life activists, new federal laws passed, non-violent pro-life groups investigated, United States Marshals assigned to protect abortion clinics, and front-page coverage in every newspaper in America.”

Anyone writing against the kind of commentary I am making on Muslims here: please point me to your comparable writings against the vilification of all conservative Christians as potential murderers, because the two are based on precisely the same logic and I see no reason why anyone would defend Muslims but not conservative Christians in this regard.

I find it quite reasonable to question any American Christian with regard to their support for clinic bombers and killers, and I’ve done so. For the same reason I find it quite reasonable to question any Muslim–certainly any Muslim outside of Canada–with regard to their support for killing blasphemers. There is a significant minority within each group who believe such killings are, if not precisely justified, only wrong because they were not carried out under the auspices of a properly constituted religious court.

Outside of the US, Christians are much less conservative, so the question is less reasonable there. Inside Canada Muslims are pretty liberal, although I’ve met ones whose views on homosexuality disgust me, and even “pretty liberal” Muslims are frequently my enemies when it comes to blasphemy laws.

Just as homophobia is widespread within the Muslim community, so is blasphemophobia: fear of blasphemers. Fear of words. Fear of ideas.

I am a blasphemer.

Always have been, always will be.

Name your religion, I will denigrate it.

Identify your god, your prophet, your holy scriptures, I will question them, poke fun at their implausibility, ridicule their inconsistencies, impossibilities, idiocies. I will do it carefully, thoughtfully, annoyingly and thoroughly. I will learn more about your religion just so I can criticize it–and you–more deeply and accurately.

I will treat nothing–absolutely nothing–as sacred, or above and beyond questioning, investigation and public testing by systematic observation, controlled experiment and Bayesian inference.

I even wrote a book that posits the Christian God is an evolved non-human entity bent on using us for its own ends–because I think that’s not a bad metaphor–and includes an evil villain who is a Christian fundamentalist. He has a Biblical-literalist minion who understands that on a literal reading the Bible contains, amongst other things, a manual for how to properly rape your prisoners of war. Is saying that blasphemy, or blasphemous libel?

I’m not always particularly nice about my criticism of faith because I am not a particularly nice person, and haven’t always been treated particularly nicely by religious people–or non-religious people for that matter. Maybe I was just born this way. But “not being a particularly nice person” is not a crime, and certainly not a crime worthy of the death penalty.

I am a blasphemer, and I’m fed up with people like me being killed by people who are afraid of us.

It’s time to talk about blasphemophobia.


[Edited slightly for clarity and to shill my book, with an additional link from my friend Scott on what a bunch of marvelously cantankerous bastards Charlie Hebdo was constituted by.]

Posted in death, ethics, life, politics, religion | Comments Off