What is the role of orbital variations in climate change?
We know from direct measurement that the Earth is absorbing about 0.6 W/m**2 more energy than it radiates back to space. Climate models attribute about 1.6 W/m**2 of forcing to greenhouse gas emissions, and it is likely aerosols are buying us about 1 W/m**2 back.
0.6 W/m**2 is not very much. The Earth is 149.6 million kilometres from the sun, on average, and the solar constant--the amount of power per unit area reaching the Earth from the sun, called insolation--is 1366 W/m**2 at the top of the Earth's atmosphere [a commenter on Hacker News pointed out there is a better number: http://onlinelibrary.wiley.com/doi/10.1029/2010GL045777/pdf]. Insolation varies as 1/r**2, so if we put those numbers together we find that a variation of 32,000 km in the mean distance between Earth and sun is enough to produce a 0.6 W/m**2 change in insolation. That's only a tenth of the distance to the moon, which is not very much at all.
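That 32,000 km figure can be checked in a few lines, using only the numbers quoted above:

```python
# Since insolation S goes as 1/r**2, a small change obeys dS/S = -2*dr/r,
# so the distance change producing a given imbalance dS is dr = r*dS/(2*S).
r = 149.6e6     # mean Earth-sun distance, km
S = 1366.0      # solar constant, W/m**2
dS = 0.6        # measured heat imbalance, W/m**2

dr = r * dS / (2 * S)
print(round(dr), "km")   # on the order of 33,000 km
```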
This question is interesting in part because unlike many other influences on climate, it is computationally tractable on your average computer. Furthermore, the physics are both painfully simple and absolutely fundamental. Newton's law of universal gravitation is what started the ball rolling, science-wise, and it's so simple it can be dealt with using quill and parchment in many interesting cases.
So this is an opportunity to demonstrate some fundamental issues with the computational physics of climate, and explain why I am cautious about drawing any very strong conclusions from climate models. I am not a climate scientist, but rather a computational and experimental physicist. This means I have spent most of my career dealing with systems where I can check my computational results in the lab, and I am painfully aware that they don't always agree, even when internal error checking seems to say my computational results are basically OK.
Note, however, that nothing I say here implies "climate change is a hoax" or that climate scientists aren't doing their best to understand an enormously complex system. I believe the general confidence in climate models is unjustifiably high for reasons that I hope this little exploration will make apparent, but I also believe that nuclear fission, solar and wind power should be strongly supported, coal power should be curtailed as rapidly as possible, and taxes shifted away from income and toward carbon emissions.
Dumping gigatonnes of garbage into the atmosphere is not a great idea, although honestly if it were a choice between that and the revolutionary overthrow of the capitalist order I would go down fighting for capitalism. We don't know how climate change comes out, but we do know that attempts to "change everything" always end neck-deep in human blood, and personally I'd like to avoid that. It's just the kind of evil capitalist roader I am.
So much for politics. What about science?
Orbital mechanics at the level I'm interested in is governed by a single law:
F = m1*m2*G/r**2
This is Newton's law of gravitation. The force between two masses (m1 and m2) is equal to their product divided by the square of the distance between them (r) multiplied by a universal constant, G = 6.673E-11 N*m**2/kg**2. This is a small number because gravity is a weak force. It is only because the masses of the planets and stars are huge that we notice it at all.
Science is the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference. As such, it is fundamentally about exploration. We have ideas, we test them, we let the results of those tests guide where we go next. In the end, we publish, because if not made public it is not science. People were investigating reality long before the Proceedings of the Royal Society saw the light of day, but they didn't accumulate knowledge because they didn't publish. Science is both intensely individualistic and profoundly communal.
In the present case, the first idea I had is "I bet I can get some decent orbital mechanics results using that adaptive Runge-Kutta solver I wrote years ago for a quite different project." So let's put that to the test. [A number of commenters on this article on Hacker News have pointed out that symplectic integrators are a better choice for this problem. They are correct, but I didn't have a symplectic integrator lying around. I've since explored this alternative and found it faster but less accurate than the work described here.]
This is the way the computational physics of orbital mechanics works. We have a body like the Earth moving around the sun. We know its current location and velocity, and we know the gravitational force acting on it. And we know some basic kinematics, which is the mathematical description of motion (dynamics is about the causes of motion, kinematics is about the description of motion.)
Kinematically, we have the equations:
dx = v*dt
dv = a*dt
where dx = change in position, dv = change in velocity, a = acceleration, and dt = a small time interval. I've written these as one-dimensional equations but there are similar equations for y and z as well. Newtonian gravity has the nice property that the x, y and z components are all independent of each other so we can compute them separately (Einstein's gravitational theory does not have this property, and as a result is... somewhat more complicated.)
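In code, a single naive first-order (Euler) update built from those two relations looks like this sketch; the numbers in the example are purely illustrative:

```python
def euler_step(x, v, a, dt):
    """First-order (Euler) update: position changes linearly with
    velocity, and velocity changes linearly with acceleration.

    One dimension shown; Newtonian gravity keeps x, y and z
    independent, so the same update runs once per component.
    """
    return x + v * dt, v + a * dt

# Illustrative 1D numbers: an upward throw at 7 m/s under surface gravity.
x, v = euler_step(0.0, 7.0, -9.8, 0.1)
print(x, v)
```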
All these equations say is that position changes linearly with velocity and time, and velocity changes linearly with acceleration and time. Acceleration is given by Newton's law of gravitation via Newton's second law: F = m*a, or a = F/m, which gives us:
a = m2*G/r**2
Because gravity is proportional to mass, m1 is divided out of the force law, making the gravitational acceleration proportional only to the mass doing the attracting. This is why feathers and hammers drop at the same rate on the surface of the moon, or in a giant vacuum chamber.
Taken together, these relations give us a second order differential equation for the position of an object moving under the influence of gravity. The second derivative of position (which is also the rate of change of velocity) is just equal to the acceleration (by definition):
d2x/dt2 = a
We can get the acceleration of m1 from any number of bodies by summing up their gravitational influence:
d2x/dt2 = m2*G/r12**2 + m3*G/r13**2 + ... + mi*G/r1i**2
where r1i is the distance between the mass of interest and the ith body in the simulation.
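As a sketch of how that summation looks in practice (written in vector form rather than component by component; the masses and positions in the example are rough textbook values, not the simulation's inputs):

```python
import numpy as np

G = 6.673e-11   # N*m**2/kg**2

def acceleration(pos, positions, masses):
    """Sum the gravitational acceleration at pos from every other body.

    Vector form of the per-component sum in the text: each body i
    contributes G*m_i/r**2 directed along the unit vector toward it.
    """
    a = np.zeros(3)
    for p, m in zip(positions, masses):
        d = p - pos                 # vector from our body to body i
        r = np.linalg.norm(d)
        if r > 0:                   # skip the body itself
            a += G * m * d / r**3   # G*m/r**2 times the unit vector d/r
    return a

# Illustrative check: Earth's acceleration due to the sun alone.
earth = np.array([1.496e11, 0.0, 0.0])   # metres
a = acceleration(earth, np.array([[0.0, 0.0, 0.0]]), np.array([1.989e30]))
print(a)   # about 5.9e-3 m/s**2, pointing back toward the sun
```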
Conceptually, to solve this equation means simply integrating the effects of gravity along the curve of motion determined by the effects of gravity. If that sounds a bit recursive it's because it is: to know the effect of gravity over each step we have to know the effect of gravity at the end of the step as well as at the beginning, but we don't know the effect of gravity at the end of the step until we've taken the step and found out where it ends up. It turns out we can deal with this using some simple corrections, because god loves us and made the universe second-order-smooth.
Starting from known initial conditions we can step forward in small but finite increments of time. To correct the endpoint for the change of gravity over the interval, we break each step into parts and use estimates of the values at intermediate points. There is a whole family of methods for doing this, but the workhorse is 4th order Runge-Kutta (RK4), named after the pair of German mathematicians who devised it.
Many years ago I wrote an adaptive RK4 solver that varies the time step to maintain some error bound. It's rather stupid about it, just dividing the desired time step in two and asking "After taking two half-steps am I sufficiently close to the same place as when I took one full step?" It does, however, work reasonably well.
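The step-doubling idea in miniature (a generic scalar sketch of the same check, not the production solver):

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def adaptive_step(f, t, y, h, tol):
    """Halve the step until two half-steps land close enough to one
    full step, then keep the (more accurate) two-half-step result."""
    while True:
        full = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h/2, rk4_step(f, t, y, h/2), h/2)
        if abs(half - full) <= tol:
            return t + h, half
        h /= 2

# Exponential decay as a toy test: dy/dt = -y, exact answer exp(-t).
t1, y1 = adaptive_step(lambda t, y: -y, 0.0, 1.0, 0.1, 1e-10)
print(t1, y1)
```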
Here's where computational physics starts to get fun.
We represent the world with numbers, but the real world has more-or-less infinite precision while computers are decidedly finite. A typical double-precision floating point number is 64 bits which allows it to represent values between about 10**-308 and 10**+308. More important is the value ε, which is the smallest number that when added to 1.0 results in a different value. For 64-bit floating point numbers this is 2.220446049250313E-16, which is not all that small, as we shall see.
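You can see ε directly in Python:

```python
import sys

# Machine epsilon for 64-bit IEEE floats: the gap between 1.0 and the
# next representable number. Adding anything much smaller to 1.0 is a
# no-op, which is exactly how small contributions silently vanish.
eps = sys.float_info.epsilon
print(eps)                      # 2.220446049250313e-16
print(1.0 + eps == 1.0)         # False: the sum is representable
print(1.0 + eps / 2 == 1.0)     # True: the increment rounds away
```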
We are going to be integrating the orbits of the planets, and doing so over a few thousand years because this turns out to be about the limit of computation for my laptop. A full solar system simulation covering 1E11 seconds (about 3200 years) takes a day or so to run. Patience is a virtue for computationalists as it is for experimentalists.
The size of the time step for our simulation is going to turn out to be about 10,000 seconds, and even with adaptive RK4 integration the error bound turns out to be 1E-18 to get decent internal precision. 1E-18 is less than ε for double-precision floating point, so we're going to have to go with 128 bit long doubles (implemented by the nasty trick of "#define double long double" because I'm basically an evil bastard.)
How did I decide this?
First, by setting up a dumb-as-rocks simulation of a faux Earth around an approximate sun, and letting the simulation run to cover 1E11 seconds and then changing the signs of all the velocities and letting it run backward for another 1E11 seconds. With perfect numerical precision, this would result in the final positions of the sun and the Earth being identical to their initial positions.
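The skeleton of that reversibility test looks like this sketch: a faux Earth on a circular orbit around a sun fixed at the origin, run with fixed-step RK4 rather than the adaptive solver, and with toy parameters:

```python
import numpy as np

G = 6.673e-11                  # N*m**2/kg**2
M = 1.989e30                   # faux sun mass, kg (sun held at the origin)
r0 = 1.496e11                  # initial Earth-sun distance, m
v0 = np.sqrt(G * M / r0)       # speed for a circular orbit

def deriv(state):
    """state = [x, y, vx, vy]; two-body gravity around a fixed sun."""
    x, y, vx, vy = state
    r = np.hypot(x, y)
    k = -G * M / r**3
    return np.array([vx, vy, k * x, k * y])

def rk4(state, h):
    k1 = deriv(state)
    k2 = deriv(state + h/2 * k1)
    k3 = deriv(state + h/2 * k2)
    k4 = deriv(state + h * k3)
    return state + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Forward, flip the velocities, then forward again: with perfect
# arithmetic the Earth would land exactly back on its starting point.
h, n = 10000.0, 500            # ~58 days out at the ~10,000 s step size
state = np.array([r0, 0.0, 0.0, v0])
for _ in range(n):
    state = rk4(state, h)
state[2:] *= -1.0              # time reversal via velocity reversal
for _ in range(n):
    state = rk4(state, h)

miss = np.hypot(state[0] - r0, state[1])
print(miss, "metres of round-trip error")
```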
There was a lot of flailing about at this stage, and it took a few days and some head-to-head comparisons between the Python and C++ versions of the code to tease out a few minor issues that resulted in improperly accumulating errors. The Python code is far too slow to run over the full simulation time, but it served as a very useful check on the C++ over shorter times (when both are run at 64 bit double precision the results have to be byte-for-byte identical or something is wrong.)
I mention this because we rarely see what goes on behind the scientific curtain. There is always a lot of flailing around, a lot of trial and error. Young people in particular may not be aware of this and feel that their own flailings are uniquely embarrassing. They are not. Flailure is always an option, and often quite a good one.
Having spent a few days flailing I was ready to plug in some real numbers for the Earth, the sun, and the planets. I decided eventually to leave the Moon out of it until the end, because I thought it shouldn't have a huge effect on the Earth's orbit about the sun. Silly me, as it turned out.
I also elected to leave out all corrections from general relativity, from tidal forces, and so on, which really are trivially small over the time scales I care about. As I said at the outset, what I wanted to know was: is the effect of the planets on the Earth's orbit comparable to the effect of humans on climate?
NASA of course has all the numbers one might want with regard to the positions of the planets and so I turned to them for the current data, which I set as the positions and velocities of the sun and planets at midnight CT January 1 2015. NASA also has detailed results of their own solar system model available, which gave me a target to aim for.
Having a target to aim for is the holy grail of computational physics, of course: a measurement or gold-standard calculation that can be used to validate the code. Validation is done by setting any physical constants in the code based on independent measurements and then ensuring in known cases that the simulation reproduces reality. It is not about tuning unphysical parameters so the code matches one particular reality. It is about testing without adjustment.
The way NASA actually developed their model was based on a dialog with nature. They had excellent measurements going back decades or centuries, and required that their simulations matched those observations. I am simply using NASA's validated simulation results as a surrogate for observation, and since they really are ridiculously accurate--they are the basis for spacecraft navigation in the solar system, and have corrections for things like the wobbling of Mars under the influence of its two tiny moons--this is more than good enough for what I'm doing.
The first thing I did was simulate Earth going around the sun with no other planets. This is the simplest possible system and I wanted to get a sense of what it looked like over the past few thousand years. For convenience I ran 1E11 seconds, which is a little over 3000 years and took an hour or so to run. The initial results were promising but a little weird, as initial results often are. I decided to add in other planets, thinking that perhaps I was asking the simulation to do something impossible: I was giving it a precise starting point in terms of the Earth's position and velocity (and the sun's) but leaving out all the other planets, whose presence was actually necessary to create that position and velocity.
I wasn't sure which planet after Jupiter was most important in perturbing Earth's orbit--I thought it was probably Saturn--so I ran a bit of code to find the maximum gravitational force between each of the planets and Earth, and got a bit of a surprise:
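The ranking calculation amounts to one line per body. Here is a sketch using textbook masses and rough closest-approach distances; these stand-in numbers are not the ones behind the table below, so the forces land near, but not exactly on, the tabulated values:

```python
# Rough peak Newtonian pull of each body on the Earth, using textbook
# masses (kg) and approximate closest-approach distances (AU).
G = 6.673e-11
AU = 1.496e11                 # metres per astronomical unit
M_EARTH = 5.972e24

bodies = {                    # name: (mass, closest approach to Earth in AU)
    "Venus":   (4.867e24, 0.28),
    "Mars":    (6.417e23, 0.5),
    "Jupiter": (1.898e27, 4.0),
    "Saturn":  (5.683e26, 8.5),
}

force = {name: G * m * M_EARTH / (d * AU)**2
         for name, (m, d) in bodies.items()}

for name in sorted(force, key=force.get, reverse=True):
    print(f"{name:8s} {force[name]:.2e} N")
```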
| Body    | Orbital Radius (AU) | Gravity (N)       |
|---------|---------------------|-------------------|
| Sun     | 0.0030246192898     | 3.69616319e+22    |
| Jupiter | 5.41567069072       | 1.79863405677e+18 |
| Saturn  | 10.1361271569       | 1.2579933068e+17  |
| Venus   | 0.744294082795      | 1.37535974505e+18 |
| Mars    | 1.41241143847       | 6.97067495351e+16 |
| Mercury | 0.407065807622      | 1.7349744085e+16  |
| Uranus  | 20.37633197         | 4.2709974665e+15  |
| Neptune | 30.5218921949       | 2.1709083215e+15  |
| Moon    | 0.00261101798674    | 1.99083093078e+20 |
As can be seen, Venus has about ten times the influence on Earth as Saturn does, and when it is added into the simulation along with Jupiter and Saturn things started to look a bit better. There was still some weirdness with the precise length of the Earth's orbit... or was I misconverting from fractional Julian day number to seconds? This kind of trivial conversion problem is far more common than you might think.
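For the record, the conversion itself is trivial--a Julian day is exactly 86,400 SI seconds--which is precisely why these bugs are so easy to write and so hard to spot. A sketch (the epoch below is the Julian day number for midnight UT, January 1 2015; the post used midnight CT, so treat it as illustrative):

```python
# A Julian day is exactly 86400 SI seconds, so converting a fractional
# Julian day number to seconds since an epoch is one multiplication.
JD_EPOCH = 2457023.5   # midnight UT, January 1 2015 (illustrative epoch)

def jd_to_seconds(jd):
    return (jd - JD_EPOCH) * 86400.0

# The classic bug lives elsewhere: a Julian year is 365.25 days, and any
# comparison that silently assumes 365 drifts a quarter-day per year.
print(jd_to_seconds(JD_EPOCH + 365.25))   # 31557600.0 seconds per Julian year
```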
Adding Mars changed things very little, and at this point I started doing a more detailed comparison with the NASA data, and noticed a couple of oddities. In particular, the Earth's distance to the sun at aphelion (the point furthest from the sun) was off:
And if I looked out a few hundred years the length of the year was clearly wrong:
When in doubt, look carefully at the data. Don't be too quick to jump to conclusions. Unlike every other approach to knowing, science rewards being open to alternative ideas. Advocates of non-scientific approaches often say they want people to be open to "other ideas", but in the end they all want you to reach some conclusion that they have already decided is the right one, like "Mohammed was the last true prophet" or "big pharma and GMOs are causing all the ills of the world." Science, on the other hand, is just a discipline, and all it wants is that you practice it honestly, publicly testing ideas by systematic observation, controlled experiment and Bayesian reasoning.
When someone lectures you on the importance of being open minded, try responding with something along the lines of, "I'm open to any idea that we can figure out how to test, because if it can't be tested it probably can't explain anything. If it can explain a phenomenon, then it can act as a cause on other things, so it can be tested, because we can make predictions of other things that it is likely to cause. If it can't cause anything other than what it's being invoked to explain, then it really doesn't have any meaning beyond 'the thing that causes that', and that's pretty boring. So give me an idea--one idea, not ten--and let's talk about how to test it, what other consequences it would have that we can look into. I'm totally open to any idea like that."
The practice of science is very much about learning where to look for plausible ideas, and focusing on looking rather than imagining as the first step on any search for a more plausible proposition. This is difficult because it requires us to accept that we don't know what the problem is and we might not be able to find the actual source. Imagination is much more comforting than reality at times like this. But we should learn to trust ourselves, and look to the world to guide us. That's the only way we'll find more plausible propositions. People were imagining answers to hard questions for thousands of years before we learned the discipline of science, and we have precious little besides some decent poetry to show for it. We certainly never cured disease or ended hunger that way.
So rather than imagining what the problem might be, I looked more carefully at the various components of position, and what I saw in the z-component (the distance from the plane of the ecliptic) was striking:
Instead of the smooth sinusoid of my simulation, reality is undergoing a weirdly jagged drunkard's walk with about thirteen wobbles per year. Thirteen is the number of lunar months in a year, more-or-less, which suggests the culprit is the moon.
I hadn't given the moon much thought at this point. While it has a large interaction with the Earth, as a terrestrial satellite it didn't seem like it could have a very big influence on the Earth's orbit around the sun. I basically viewed it as an idler, swinging along beside the Earth but having no net influence to speak of.
Thinking about it, the biggest effect of the moon is on the initial conditions: I have pulled the Earth's orbital parameters out at a single moment in time, and its motion at that time has been influenced by the moon. Take the moon away, and it will not track back along its physical course but along some other curve. The effect is most obvious in the z-component, where the 5 degree tilt of the moon's orbit relative to the plane of the ecliptic results in the Earth being wobbled as the moon travels around it, but the overall effect shows up in other places too, as we shall see. My simulation starts off in the correct place but with one small missing influence, and after a few hundred years lands in a completely unphysical place.
This is the great lesson of computational physics: errors in the model almost always accumulate. They never average out.
When I added the moon, things snapped into place:
It is worth noting that both with and without the moon the internal error on my ten-year computation was small, less than half a meter after running things forward and backward. Over that simulated time span my model Earth covered nearly 20 billion kilometres, and the equations brought it back to within 42 cm of its starting point. That it did this even without the moon--which turns out to be vital to matching reality in so many other ways--is an example of how internal consistency is necessary but not sufficient for high accuracy.
Now I was ready to run a long simulation with all the planets and the Moon. I experimented with relaxed error bounds to get the simulation time down to a day or so. The moon, because of its tightly curved orbit, is a particular pain to simulate accurately, and so slows the simulation dramatically when the error is at 1E-18. After running a few one-decade simulations with different error bounds I settled on 1E-13 for the long run. If it didn't work I would only have lost an overnight.
I still had that weirdness in the phase over hundreds of years. Either I was converting the NASA data incorrectly, or I was goofing up the time conversion on my own results somehow (the results are output in seconds but converted to years for display) or the simulation was still not right.
When the simulation that included the moon was about half done I ran the processor on the incomplete output file and had a look at the results:
So... if asked, "Do you think the Earth's moon changes the length of the year?" would you have said, "Yeah, enough to lose about six months over the course of 400 years or so?" I certainly wouldn't have. Yet it does, because the true and correct initial conditions put the Earth in a slightly different orbit without the moon's influence to correct it. Depending on where in the lunar month I had started the simulation--and hence on the moon's momentum relative to the Earth's at that moment--the results would have been quite different.
[Edit: I originally had an error in the energy calculation that cleverly left out the potential term. Oops. When added back in, the simulation has a fractional energy error of 6.7E-11, which is good enough for going on with.] After the full 3200 year run, forward and back, the Earth comes out of place by just a bit more than its own radius--6350 km--which is not bad following a six trillion kilometre journey.
At this point, I was confident I understood the Earth's orbit pretty well, and could set out to answer the question I started with: how do these orbital variations affect insolation?
As can be seen, the distance to the sun varies quite a lot over the year due to orbital eccentricity, so some kind of averaging is necessary. Averaging over sinusoidal waveforms is surprisingly tricky because how you handle the ends tends to dominate the whole integral. If you get one end-point just a little bit wrong, it won't cancel the contribution at the other endpoint, and you get a significantly non-zero value that's just a numerical artifact.
One way to avoid that is to fit the curve to a known shape--in this case just a cosine function plus a flat baseline--and use the fit parameters to estimate the properties you care about. B + A*cos(ωt+φ) is the function to fit, and the thing I care about is B, the baseline value. The rest is just a wobble around that. With t in years, ω is fixed at 2*π, so the fit has only three parameters, with the phase φ just correcting for wherever the fit happens to start in each fitting period, as the data points don't fall exactly on year boundaries. Fitting over two-year intervals--which gave slightly cleaner results than one-year fits--yields the following curve for the baseline B parameter, which is the average insolation:
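One way to do this fit without a nonlinear solver--a sketch of the technique, not necessarily what I did--is to expand the cosine so the problem becomes ordinary linear least squares (the synthetic data stands in for a two-year window of simulated insolation):

```python
import numpy as np

# B + A*cos(2*pi*t + phi) expands to B + C*cos(2*pi*t) + D*sin(2*pi*t),
# which is linear in (B, C, D), so a least-squares solve recovers the
# parameters directly; A and phi follow from C and D.
def fit_insolation(t, s):
    M = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t),
                         np.sin(2 * np.pi * t)])
    (B, C, D), *_ = np.linalg.lstsq(M, s, rcond=None)
    A = np.hypot(C, D)            # amplitude of the annual wobble
    phi = np.arctan2(-D, C)       # phase offset of the fit window
    return B, A, phi

# Synthetic stand-in for a two-year window of simulated insolation.
t = np.linspace(0.0, 2.0, 200)
s = 1366.0 + 45.0 * np.cos(2 * np.pi * t + 0.3)
B, A, phi = fit_insolation(t, s)
print(B, A)   # baseline and amplitude recovered from the fit
```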
There is a bit of a step around 1500 years ago, but it is less than 20% of the heat imbalance measured in the modern day, and it is not reproduced in the NASA data--the purple line that runs back 2000 years, which is all I downloaded--so it is likely a result of remaining imperfections in the model or analysis. The thin blue line at the top shows the magnitude of the empirically observed heat imbalance from the nominal solar constant of 1366 W/m**2. If the moon is left out of the model the simulated insolation is about as far above the blue line as the blue line is above the correct value.
There is also a clearly visible effect of decreasing orbital eccentricity over time, as can be seen in the A parameter, which measures the variation over the course of the year:
While the change in eccentricity is quite significant--and very close to that found in the NASA dataset, as shown in green--it doesn't have a direct driving effect on climate.
It does, however, have an indirect effect. More intense sunlight during part of the year will be offset on average by less intense sunlight six months later... except that the sun won't be falling on the same places. Perhaps the perihelion--the time of closest approach and most intense insolation--happens during boreal summer. That would mean the more intense heating will occur when the northern hemisphere is already at its hottest, and the cooling will happen during austral summer, when the northern hemisphere is covered in snow. If things were the other way around, the extra heating would occur when the Great Southern Ocean was warming up, and cooling would happen during northern summer.
If the Earth were a smooth and featureless billiard ball none of this would matter, but that is not the case. How the planet responds to additional heating depends on seasonal conditions in the north and the south, and understanding that response is the business of climate modelling.
And climate modelling requires physically accurate software if it is not to be subject to the kinds of error accumulation I've explored here. The difference between precise models and noisy models is that in precise models (like orbital simulations) cumulative errors are easy to see, and in noisy models (like climate) cumulative errors are hard to see.
This is where this little excursion ends, but I'll come back to this topic later, with a closer look at some of the physics in specific climate models that I find problematic. Climate is determined by a number of large competing factors. The amount of energy that reaches the Earth's surface is only a fraction of the 1366 W/m**2 incident at the top of the atmosphere. Some is reflected, some is absorbed, some is scattered. Small changes to any of these processes can result in large effects on the climate. We are concerned with changes at the 0.1% level: 1.4 W/m**2. This is the magnitude of the energy imbalance driving climate change.
Building models that are accurate to 0.1% over a century is not possible unless the physics they embody is at least that accurate. This is my professional judgement as a computational physicist, and the argument presented here is supposed to illustrate that judgement, not prove it. When I set out to perform this analysis I knew pretty much what the results would be, because I knew that tiny aberrations in the physics almost always result in significant excursions from reality by the end of the simulation. I didn't know precisely what the major source of inaccuracy would be, but I was sure I would run into one. Unsurprisingly, I did.
A small under- or over-estimate of any major process in any simulation will result in unphysical integrations, just as we have seen here with planetary orbits. The moon's effect on the Earth's orbit is tiny--its direct gravitational influence is just 0.3% of the sun's. But its presence or absence in the model has a huge effect on the properties of the orbit. Leaving the moon out with the particular initial conditions I've used results in an average orbit that has the insolation wrong by 1.7 W/m**2, about the same size as the total effect from greenhouse gases.
That's an absolute effect, mind. Climate models are attempting to get at relative effects. But they are doing so by subtracting large terms from each other, and there is no reason to believe their errors will cancel.
This is not a political statement. As I have said, I think anthropogenic climate change is a serious problem and that it should be approached via a mix of technology and public policy, particularly nuclear power development, solar power development, energy storage development, rapid curtailment of thermal coal development, and a shift from income taxes to carbon taxes. I don't need physically accurate climate models to tell me these are all good things, because they are good things regardless of the long-term effect of carbon on the Earth's climate. Carbon-based fuels have enough problems to justify moving away from them regardless, and the substantial and plausible risk posed by anthropogenic climate change simply adds to the argument.
But all that said: until climate models are as accurate as orbital simulations, it is difficult to claim that the science is settled. That doesn't mean we can't say anything, of course. Even without accuracy we can be fairly sure of the following:
1) Adding CO2 to the atmosphere will increase global heat content. There is simply no way to avoid this. We have directly measured human contributions to CO2 and other greenhouse gases and we know they are sufficient to add a significant amount of heat to the climate. We are working on directly measuring the effect of aerosols and the indications are they are reducing the amount of heat in the climate. We have directly measured the heat balance of the oceans and it is showing effects that are in the range of 0.5 to 1.5 W/m**2, which is the magnitude of effect we expect from models.
2) Increasing global heat content will have a disruptive influence on economic processes that are tuned to the current climate.
What we don't know, and what I am arguing that we can't know due to the limitations on the physics in climate models, is the detailed ways in which the climate will change. You can read this, if you like, as an apology for the "hiatus", which I've been warning about for over a decade. Unphysical models will not reproduce reality in detail and it was a terrible mistake to sell climate models to the public as if they could.
Who can predict the effect of what we leave out of climate models? We leave out--or approximate in unphysical ways--precisely the things we don't understand well enough to include properly. We hope the effects of those approximations will be small. But because error always accumulates--I have never run a model of any kind where this was not the case--we know that those unphysical approximations will tend to carry our models further and further away from reality as time passes. So we can't tell what effect our omissions will have until we have data to compare to. And that's OK.
However, putting forward detailed model results as if they were strong predictions about the real future is a mistake, because given the level of physical detail and the vagaries of long-time integrations, it would be astonishing if climate model results bore more than a vague resemblance to reality, just as it would be astonishing--given what I know now--if a model of the Earth's orbital motion that left out the influence of the moon was particularly close to the physical motion of the planet.
Nor do internal checks on model precision reflect very strongly on model accuracy. Only comparison with ground truth can do that, and in the case of climate we won't have that until it's rather too late to do anything about it.
Fortunately, we don't have to wait around. There is ample reason to build nuclear power stations, to close coal plants, to build solar farms and energy storage systems, and to shift the tax burden from incomes to carbon emissions today. It's time we stopped listening to lunatic calls that we must "change everything" or engage in the kind of failed revolutionary action that characterized so much of the 20th century, and started to focus on changing the few things that will actually help the climate without bringing global industrial civilization to an end.
Hysterical shills for big coal and big oil will oppose sensible policies as much as the looney left does, but while I don't think the science of climate change is remotely settled, the arguments for technological and policy changes that evolve our global, market-based, industrial civilization toward a low-carbon future are compelling regardless.