## Damped Simple Harmonic Motion

Previously I showed that the rocking suitcase demonstrates something tolerably close to simple harmonic motion (SHM) even though the restoring force acting on it is a step function that changes sign at zero, rather than a linear ramp.

The frequency of the motion is amplitude-dependent, with the relation:

ω ~ sqrt(12*g/(5*L*A0))

where A0 is the amplitude in radians, L is the distance between the wheels (assumed equal to the width of the suitcase) and g is a well-known constant.

For a rotational simple harmonic oscillator the characteristic frequency is:

ω = sqrt(k/I)

where k is the torsion constant (the restoring force per angular displacement) and I is the moment of inertia about the centre of rotation. We can use this to get an “effective k” from measurement or simulation of the rocking suitcase. Otherwise, from the simple theory, the effective k is:

k = m*g*L/A0

This is important because damping is k-dependent, and in particular, critical damping is k-dependent. Knowing the effective k is going to let us figure out what the optimal damping constant is.

For SHM, there is a state known as “critical damping” in which the system returns to equilibrium as rapidly as possible. Less than critical damping and things oscillate for a while. More than critical damping and things come back to equilibrium on a slower, longer curve.

The condition for critical damping is:

c/(2*sqrt(k*I)) = 1

where c is the damping force coefficient and k and I are defined as above. The left-hand side is called the “damping ratio” and usually given the symbol ζ (Greek letter zeta). There is a rich convention of notation in physics, with Greek letters being used for angular quantities (α for angles, ω for angular frequencies) and particular concepts within specific fields. It is not uncommon to see the letter take over as the name of the concept, as in the “plasma β” for the charged-fluid analog of pressure, for example.

Based on the simple theory (critical damping means c = 2*sqrt(k*I) = 2*I*ω), we get a c-value of:

c = 2*I*sqrt(12*g/(5*L*A0))

which gives us a damping coefficient around 21 N*m*s/radian for half a radian of motion. Plugging this into the equations of motion, we get the following result:

Critically Damped Motion

Once again the value of SHM as a model for things that aren’t necessarily very close to the underlying assumptions comes to the fore. I’ve based the damping coefficient on a ridiculously simple approximation to the actual case, and yet it results in something that is remarkably close to critically damped motion, with a tiny overshoot. The weirdly linear shape of the ramp is presumably due to the constant force law interacting with the damping force.
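For anyone who wants to reproduce the plot, here is a minimal sketch of the damped simulation. The mass and dimensions are assumptions chosen to be representative (a 15 kg suitcase, wheels 0.4 m apart, twice as tall as wide), not measurements; the equation of motion is the constant sign-flipping torque plus a viscous damping term:

```python
import math

# Assumed parameters (illustrative, not measured): a 15 kg suitcase,
# wheels 0.4 m apart, twice as tall as wide, half-radian amplitude.
m, L, A0, g = 15.0, 0.4, 0.5, 9.8
w, h = L, 2 * L
I = m * (h**2 + w**2) / 12                # moment of inertia about the rocking axis

omega = math.sqrt(12 * g / (5 * L * A0))  # effective SHM angular frequency
c = 2 * I * omega                         # critical damping: c = 2*sqrt(k*I) = 2*I*omega
T0 = m * g * L / 2                        # constant restoring torque magnitude

def accel(alpha, alpha_dot):
    """Constant opposing torque plus viscous damping, divided by I."""
    torque = -T0 * math.copysign(1.0, alpha) if alpha else 0.0
    return (torque - c * alpha_dot) / I

def simulate(alpha, alpha_dot, dt=1e-3, t_end=2.0):
    """4th-order Runge-Kutta on the pair (alpha, alpha_dot)."""
    trace = [alpha]
    for _ in range(int(t_end / dt)):
        k1x, k1v = alpha_dot, accel(alpha, alpha_dot)
        k2x, k2v = alpha_dot + 0.5*dt*k1v, accel(alpha + 0.5*dt*k1x, alpha_dot + 0.5*dt*k1v)
        k3x, k3v = alpha_dot + 0.5*dt*k2v, accel(alpha + 0.5*dt*k2x, alpha_dot + 0.5*dt*k2v)
        k4x, k4v = alpha_dot + dt*k3v, accel(alpha + dt*k3x, alpha_dot + dt*k3v)
        alpha     += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        alpha_dot += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
        trace.append(alpha)
    return trace

trace = simulate(0.5, 0.0)
print(round(c, 1))               # ~21.7 N*m*s/rad with these assumed numbers
print(abs(trace[-1]) < 0.01)     # settled essentially at equilibrium: True
```

With these numbers the motion dies in well under a second, with only the tiny overshoot visible in the plot.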

The question remains, then: how does one implement, physically, the damping described by the equation above? This is where physics shades into engineering. Physics and engineering involve very different kinds of creativity. The former requires that we figure out what laws are operative (sometimes inventing new laws in the process) and sometimes invent new ways of applying them. Physics in the “normal science” case tells us what will happen if we create configuration X.

Engineering, in contrast, is about coming up with configurations such that they are described by the equations physics gives us. What physical system will result in a damping coefficient that scales like the reciprocal of the square root of the amplitude of the motion? I have a couple of ideas, but they are more about intuition than anything else, and will no doubt require some empirical testing to validate, disprove, or improve.

The important thing is that I have a fair idea of the damping required, and a good idea of the scaling law that describes the damping. How significant the latter is remains an open question: maybe a constant, small damping force coefficient will be enough to prevent things from ever getting large. Maybe there is a characteristic scale to the driving impulses (the size of the bumps wheeled suitcases hit and the speed they are traveling) that is such that a constant damping force coefficient will kill the motion in that regime before it ever has a chance to get large. These are not questions that can be answered without actually gathering some data.

I’m pretty sure I have a board or two with an accelerometer on it. Maybe it’s time to instrument a suitcase and visit the airport…

## Simple Harmonic Oscillators

It’s curious that popular science articles are happy to discuss black holes and warped space and holograms on the brane, but they never mention simple harmonic oscillators (SHOs) or simple harmonic motion (SHM).

SHM is the cornerstone of physics. The goal of any physicist when first confronted with a problem is to figure out how to turn it into an SHM problem, where “motion” is understood inclusively: it can apply to mechanical or electrical or fluid or completely abstract oscillations.

Consider a system that is in equilibrium, so given the positions of its components nothing has any force acting on it. I am in equilibrium with my chair right now: the spring force of the cushion acting on me is equal to the gravitational force of me acting on it. Furthermore, my equilibrium is stable. If the cat jumps on my lap (oof) I press down on the cushion a little more and it compresses a bit, increasing its opposing force so neither the cat nor I plunge through the substance of the chair toward the centre of the Earth. Likewise, if the cat jumps off, I bounce up a little and the cushion decompresses, reducing the force it supplies so I don’t go flying toward the ceiling.

SHM occurs whenever we have a restoring force that is more-or-less proportional to the displacement of the system from its equilibrium position. “More-or-less” is a pretty elastic term here, because to first order everything is a simple proportionality: the force is just the displacement multiplied by a (negative) constant. The negative sign is important because it ensures the force is restoring… otherwise the system is in unstable equilibrium and really will go flying off in all directions when perturbed.

For the wheeled suitcase example I wrote about previously, the restoring force is due to all the weight being transferred onto a single wheel by the least little bump in the road. This is about as non-linear as you can get, which makes it a nice illustration of how a tool like SHM can be used to think about and understand systems that only very approximately fulfill the conditions that nominally create it.

SHM is a direct consequence of Newton’s 2nd Law: F = m*a or more causally, a = F/m (the acceleration of an object is equal to the force on it divided by its mass.)

Acceleration is the rate of change of velocity with time: a = dv/dt, where “d” refers to an infinitesimally small increment (the continuous version of δ). And velocity is the rate of change in position (x): v = dx/dt, which means a = d(dx/dt)/dt = d2x/dt2 (the latter expression is just a notational simplification, like so much math.)

The equation that governs SHM is, from Newton’s 2nd Law:

F = -k*x

where k is the “force constant”: it just gives the proportionality between the displacement (x) and the force. The negative sign is a convenience, to remind us that the force is restoring, and in this formulation k is a positive number. Physicists spend an astonishing amount of time worrying about dropped negative signs: I once worked on an experiment whose original designer–a very careful and clever guy–had dropped a negative sign which resulted in an important background term cancelling out instead of adding up, which meant that by the time I joined the team they had been collecting data for two years that was almost all background events. The geometry meant the background looked the same as the real phenomenon, and the absolute rate calculation was very difficult to do, making it hard to identify the issue. My contribution was an independent calculation of the relative and absolute rates, which revealed the issue and forced us to substantially redesign the experiment.

Re-writing this equation in terms of acceleration:

m*a = -k*x

or:

d2x/dt2 = -(k/m)*x

This is the fundamental equation of simple, undamped harmonic motion. For most interesting cases there is also a damping term that is proportional to velocity, representing the effects of friction:

d2x/dt2 + (β/m)*dx/dt = -(k/m)*x

The damping term will become interesting in a bit.

In the meantime, how is this equation supposed to apply to the suitcase example, where the force term is a step function, traditionally represented by Θ(x), rather than a linear function of the displacement? What I’m interested in doing here is figuring out the characteristic frequency so that I can figure out a damping term that will tend to kill the motion.

For the simple equation, the solution is:

x(t) = A*exp(i*sqrt(k/m)*t)

As is typical of differential equations, we don’t solve them in any algebraic sense; we rather figure out the important constraints on the solution and use our (hopefully extensive) pre-existing knowledge of various functions to find one that fulfills those constraints. In this case, the equation can be restated as the constraint that the second derivative is equal to a negative constant multiplied by the function, which means that the function must be proportional to its own second derivative. That means we are looking at exponentials and sine/cosine functions, as they are the only ones we know of that fulfill that constraint. Because the constant is negative we have to have a complex exponential, which is where the factor i = sqrt(-1) comes in.

The amplitude of the oscillation (A) drops out of the equation, although in the general case of the damped, driven harmonic oscillator it will also be a function of time.
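As a sanity check, it is easy to verify numerically that the real part of that solution, x(t) = A*cos(sqrt(k/m)*t), satisfies d2x/dt2 = -(k/m)*x, using a central-difference second derivative (the values of A, k and m here are arbitrary):

```python
import math

A, k, m = 1.0, 4.0, 1.0          # arbitrary illustrative values
omega = math.sqrt(k / m)

def x(t):
    return A * math.cos(omega * t)   # real part of A*exp(i*omega*t)

h = 1e-4  # finite-difference step
for t in (0.1, 0.5, 1.3):
    # central-difference approximation to the second derivative
    d2x = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    assert abs(d2x + (k / m) * x(t)) < 1e-5
print("x'' = -(k/m)*x holds")
```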

The (angular) frequency of the oscillation is sqrt(k/m), so again: in the case of a step function, what is a “reasonable” estimate for k? And for that matter, since we are considering a rotating system we really need to be thinking in terms of I (the moment of inertia) and the angular displacement α, rather than the mass and linear displacement, but the functional forms are all the same. And we should be talking about torques rather than forces:

α(t) = A*exp(i*sqrt(k/I)*t)

If we assume the amplitude of the motion is fixed, then letting the average torque equal the true torque at half displacement seems like a reasonable thing to do. The true torque with one wheel on the ground for the suitcase is:

T0 = m*g*L/2

where L is the distance between the wheels. With an amplitude of A0 and the true torque reached at half the amplitude, this gives:

k*A0 = m*g*L

so:

k = m*g*L/A0

and the angular frequency of the motion is:

ω = sqrt(m*g*L/(I*A0))

and I = m*(h^2 + w^2)/12

where h and w are the height and width of the suitcase, as we are assuming the motion is a rotation about the axis running along the suitcase’s depth (the suitcase is approximately upright).

Ergo:

ω = sqrt(12*g*L/((h^2 + w^2)*A0))

And to simplify further, we’ll assume h = 2*w (a reasonable approximation for most wheeled suitcases), so:

ω = sqrt(12*g*L/(5*w^2*A0))

If the wheels are the full width of the suitcase apart (as is typical) we get:

ω ~ sqrt(12*g/(5*L*A0))

For an amplitude of half a radian (~30 degrees) and L = 0.4 m we get:

ω ~ sqrt(12*10/(5*0.4*0.5)) ≈ 11 s^-1

or about 1.7 Hz, which is not insanely wrong (originally I managed to forget the square root, and so my answer was badly off). The correct answer (from observation) is about 1 Hz, and in fact modelling the system gives 1.4 Hz.
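Carrying an extra digit through the arithmetic, using only the numbers already quoted:

```python
import math

# Numbers from the text: g ~ 10 m/s^2, L = 0.4 m, A0 = 0.5 rad
g, L, A0 = 10.0, 0.4, 0.5
omega = math.sqrt(12 * g / (5 * L * A0))  # angular frequency, rad/s
freq = omega / (2 * math.pi)              # cycles per second, Hz
print(round(omega, 1), round(freq, 2))    # 11.0 1.74
```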

This is the great thing about science: you can generally tell when you’ve screwed up, and more to the point, so can everyone else. People who are insecure or cowards–philosophers obsessed with “certainty”, for example–aren’t able to do science, because they can’t take being provably wrong with equanimity. Scientists automatically look for multiple independent ways of measuring the same thing, because that is the only way we know of catching our own errors, and we fully expect that most of what we do will be in error.

And here’s the thing about simple harmonic motion:

Simple Harmonic Motion

The green line is a cosine function, fitted to match the output of the differential equation solver (which is a 4th order Runge-Kutta solver, as that is almost always the right choice for a 2nd order ordinary differential equation.) The actual motion, with the torque set to a constant value that changes sign to always oppose the displacement, is shown by the red crosses. Despite F0*2*(Θ(x)-0.5) being a lousy approximation to -k*x, you can see that the modeled motion and the simple harmonic motion are remarkably close to each other.
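The simulation behind that plot is easy to reproduce. Here is a minimal sketch (the suitcase parameters are assumed, representative values, not measurements): integrate the constant sign-flipping torque with RK4, take a quarter period from the first zero crossing, and compare the trajectory to a cosine at the fitted frequency:

```python
import math

# Assumed parameters: 15 kg suitcase, wheels 0.4 m apart, h = 2w,
# released from rest at a half-radian displacement.
m, L, A0, g = 15.0, 0.4, 0.5, 9.8
w, h = L, 2 * L
I = m * (h**2 + w**2) / 12
T0 = m * g * L / 2             # constant torque, always opposing the displacement

def accel(alpha):
    return (-T0 / I) * math.copysign(1.0, alpha) if alpha else 0.0

# 4th-order Runge-Kutta on (alpha, v); the torque depends only on alpha
dt, t, alpha, v = 1e-4, 0.0, A0, 0.0
ts, alphas = [0.0], [alpha]
while t < 1.5:
    k1x, k1v = v, accel(alpha)
    k2x, k2v = v + 0.5*dt*k1v, accel(alpha + 0.5*dt*k1x)
    k3x, k3v = v + 0.5*dt*k2v, accel(alpha + 0.5*dt*k2x)
    k4x, k4v = v + dt*k3v, accel(alpha + dt*k3x)
    alpha += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v     += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    t += dt
    ts.append(t); alphas.append(alpha)

# The first zero crossing marks a quarter period of the rocking motion
t_quarter = next(t for t, a in zip(ts, alphas) if a <= 0)
omega_fit = (math.pi / 2) / t_quarter
worst = max(abs(a - A0 * math.cos(omega_fit * t)) for t, a in zip(ts, alphas))
print(round(omega_fit, 1))     # fitted angular frequency, rad/s
print(worst / A0 < 0.1)        # within ~10% of a pure cosine: True
```

The parabolic arcs of the constant-torque motion never stray more than a few percent of the amplitude from the fitted cosine, which is the whole point of the plot.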

This is why physicists love SHM: it is almost always a useful first approximation, and it puts us on familiar ground, which is always a good place to be when you are rummaging around in the unknown.

## Will Bill C-23 Prevent Me From Voting?

I moved recently, and in BC the provincial government has an interesting way of updating your driver’s license when you move:

Voter ID?

There has been a great deal of quite justified uproar regarding the Harper government’s latest assault on Canadian democracy, which reduces the powers of Elections Canada and increases the requirements for voter ID in a way that is all-but-certain to differentially disenfranchise non-Conservative voters, notably young people, poor people, less educated people and people whose first language is not English.

Today, Laurie Hawn, sent to Ottawa by the voters of Edmonton-Centre to represent the interests of the Conservative Party of Canada, spoke before the Procedure and House Affairs committee and claimed there was “no legitimate reason” for anyone not to have acceptable ID by the time the next federal election rolls around in 2015.

This made me realize that I may be disenfranchised in the next election.

My driver’s license is the only piece of legitimate, legally issued, proper and correct government ID I have with my address on it. That little peel-off sticker is the only “proof” I have of where I live.

My passport, my Medical Services Plan card, my membership cards in the PEO and APEGBC… none of these things have my address on them.

Were I of a mind to commit fraud and take my vote to some more hotly contested riding than my own heavily Liberal one, it would be trivial to do so. Anyone in BC with a driver’s license can easily find an appropriately-sized sticker at their local Staples and print themselves a new address, which will give them a piece of ID that is indistinguishable from the real thing.

So…

Either Bill C-23 will do nothing to prevent voter fraud… or my duly issued, legally correct government ID will be rejected by the partisan poll supervisor because it has an easily-fakable address on it, which will illegitimately prevent me from exercising my right to vote.

Which is it?

I would ask all advocates of Bill C-23 to tell me: how is it that this bill can both prevent the trivial voter fraud I have outlined above and still allow me–a white, Anglo, middle-aged, middle-class, educated professional from the least-marginalized group in all of Canada–to exercise my right to vote?

And I would say to anyone who is complacent about this bill because they are not amongst the marginalized groups it appears to target: don’t kid yourself. If you are a citizen, the architects of this bill consider you their enemy, and your rights are as much under threat as those of the marginalized groups that this bill will do so much to disenfranchise.

## The Mechanics of Book-Building: print

E-books are a lot simpler than print.

The great thing about e-books is you’ve got an HTML layout engine doing all the heavy lifting for you, and while you can’t control what the finished product will look like, you’ll at least know that it won’t look hideous.

Print is a different kettle of fish, because it can look really good, and when it deviates from that, you notice.

I’m using CreateSpace as a printer and distributor, and they have some stuff on their site that’s helpful, notably a Word doc template that has the right layouts for a 6×9 inch novel. All dimensions in the printing industry are still in Imperial, which tells you something.

Word and Wordalikes are of course completely useless for doing serious page layout in long documents, and should never be used for, well, anything more than a few pages, really. I screwed around with LibreOffice (at which I’m pretty expert) for a few hours before it crashed and left the document in an unrecoverable state. I’ve seen Carrie go through the same thing with Word recently while writing her dissertation, and had similar experiences with AbiWord and OpenOffice, so my conclusion is that all these tools are broken and writers should not use them. I’m currently moving to gedit for my writing, as it has decent UTF-8 support and a spell-checker, which is really all I need.

What Wordalikes are good for is the layout of the first few pages of the book: the title page, the copyright page, and so on. I generated an eight-page document called frontmatter.doc that contained all that, and then used LaTeX’s \includepdf command to pull it into the actual book.

LaTeX is a document layout system built on top of the TeX engine, which is used throughout academia, at least in the non-technophobic parts. I’m using the texlive distribution, which is really good and complete. I’m using the TeXWorks editor for fine tuning and experimentation, but as always I’m using custom Python to generate the actual .tex files from my “plain-text” (UTF-8) master.

The critical thing in any LaTeX document is the first dozen-odd lines. Here are mine:

\documentclass[11pt]{book}
\usepackage[size=novel, gutter=0.5in, top=0.75in, bottom=0.75in,
noicc=true, paper=white, color=false, preview=false, nopdf]
{createspace}

\title{Darwin's Theorem}

\usepackage{pdfpages}

\usepackage{fontspec}
\setmainfont{Times New Roman}

\begin{document}

\sloppy

\includepdf[pages={1, 2, 3, 4, 5, 6, 7, 8}]{frontmatter.pdf}

\markboth{\hfill - Darwin's Theorem - \hfill}{\hfill - TJ Radcliffe - \hfill}
\begingroup
\let\cleardoublepage\relax


There is a lot to love here, and it represents the collective work of hundreds and hundreds of people over decades. Text purists will be screaming that anyone who uses Times New Roman in the 21st century will be first against the wall when the revolution comes, but text purists are funny that way (and several others.)

Most of the hard work is being done by the createspace style, which can be downloaded from GitHub. It lets you mess with the margins, set the gutter (the extra margin on the inside of the page for the binding) and so on. It even has the CreateSpace standard sizes pre-defined (“novel” is 6×9) and some colour-space stuff that I’ve turned off as I don’t need it (noicc).

There are a few minor issues with the style: it complains about preview being on even when it is off (which is the default but I’ve also turned it off explicitly). The gutter is not added to the margin to get the full width, so you get warnings about the gutter being too narrow when it is not. Unless you set “nopdf”, which turns off some of the PDF setup stuff, you get warnings about being in draft mode.

The headings stuff puts the title and author name on facing pages (myheadings style treats chapter starts specially so they are unmarked.)

The “sloppy” directive is there to get rid of the overly strict justification settings that LaTeX uses by default (half-a-dozen text purists just swore solemn vows to hunt me down and kern me.) Without it you get an overfull hbox every few pages; with it you get an underfull hbox (which amounts to words spaced out a bit more than a text purist would approve of) four or five times in the whole book, and I will happily bet that without being told where they are, no self-proclaimed text purist would ever notice them.

The grouping and cleardoublepage stuff gets rid of the book style’s insistence on starting every chapter on a right-hand page. I undo it at the end so the author’s note can start on a right-hand page, but as it turns out that’s the way it happens naturally anyway.

I experimented a lot with fonts and came away mostly unsatisfied. My final choice was between Times New Roman and Liberation Serif, and I could barely tell the difference except that the latter looked slightly colder and harder. There was some screwing around with em-dashes and quotes, and attempts to set the renderer to basic and the ligatures to TeX, but in the end I simply translated my ASCII em-dashes (--) to U+2014 during the LaTeX conversion and all was well.
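The translation step amounts to something like this (a simplified sketch, not the actual conversion script, and assuming the double-hyphen convention for em-dashes in the master text):

```python
def to_em_dashes(text: str) -> str:
    """Replace ASCII double-hyphen em-dashes with U+2014."""
    return text.replace("--", "\u2014")

print(to_em_dashes("Look--a suitcase!"))  # Look—a suitcase!
```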

One of the nice things about Times New Roman is that I could be reasonably sure of getting the same font from LibreOffice in the front matter as I was getting in the main body with LaTeX. They certainly look the same. Likewise, the page sizes and everything look the same as well. Spending oodles of time staring at the PDF helped.

Orphan lines at the end of chapters were, as always, far more common than statistics would suggest, including, inevitably, the last line of the book. There are about 41 lines per page, and 30 chapters, so there is a 1/40th chance of a page ending on any given line, which leads one to expect 0.75 single-line end-of-chapter pages, instead of the actual number, which was 3. A quick Poisson calculation suggests that the odds of that are about 4%, which is weird because it happens to me every single time I lay out anything. Maybe it’s just me.
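The quick calculation, for anyone checking my work, is the Poisson tail probability of seeing three or more single-line endings when 0.75 are expected:

```python
import math

lam = 30 / 40   # 30 chapters, ~1/40 chance each ends one line into a page
# P(X >= 3) for a Poisson-distributed count with mean lam
p = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(3))
print(round(p, 3))   # about 4%
```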

I could have tweaked the margins to fix these (and induce others, presumably) but instead was able to tweak the text. This has got to be the strong AI problem: if a computer can lay out a book without human intervention, it is intelligent.

With regard to PDF generation itself, I used LuaLaTeX out of TeXWorks (you can select the generator in the upper left drop-down) and it worked well. Not all the generators supported the font package I am using.

PDF generation from LibreOffice is a simple matter of export, making sure you set the type to PDF/A-1a or whatever it is (this embeds fonts). There is a warning about a transparent object being rendered as opaque, but I can’t see anything, so apparently it is an invisible opaque object.

Uploading the book to CreateSpace is straightforward, and they do a bunch of checks on interiors. I had to tweak the top and bottom margins to get the page numbers in-bounds.

Cover design is a pain, and I did it using the CreateSpace online thing, which worked well. I used the same basic cover layout as for the e-book, with a back-cover that had the same background, and added text as appropriate. It is nice that CreateSpace shows you where the barcode goes.

It is worth noting that depending on the template selected you only have a limited choice of spine options. I was (and still am) tempted to download the preview CreateSpace generates, tweak the image, and upload it as a stand-alone image. Maybe next time.

The CreateSpace review process takes about 24 hours. Pay close attention before you submit! (I did, but had to revise anyway.)

That’s all I can think of right now. There will certainly be more later.

CreateSpace wants to push the finished book to KDP for Kindle publishing, but I don’t want to do that. For one, the ISBNs are different. For two, my e-book has a table of contents (for ease of navigation) whereas the print edition does not.

With regard to pricing, I’ve set the Kindle price to $4.37, which gives me a $3.00 royalty, and the print book to $13.29, which gives me the same royalty when sold on Amazon.com, somewhat more in the CreateSpace store, and much, much less when sold in bookstores (assuming that ever happens.) It seemed reasonable to have the same royalty for both formats, and if I sell a few hundred copies, ever, it will give me some pocket change for coffee.

That’s everything I can think of at this point. There will be more when the book is actually ready to go… soon!

## The Mechanics of Book-Building: e-books

This is focused on issues of interest to Canadian writers, so if you aren’t Canadian a lot of it will be irrelevant or misleading.

I’ve just gone through the Amazon “Kindle Direct Publishing” (KDP) process, and am in that delightful limbo between having the book ready to go and actually pushing the button. This is because the print edition is also in the works and it takes longer.

Putting together the e-book was a straightforward process thanks to Sigil, an e-book editor. I write using a simple text editor, and am currently looking at moving to gedit for all my non-software editing needs. It has an integrated spell-checker and decent UTF-8 support. We are at that point where it is clear that UTF-8 is the way the world is going, and it is time for everyone to go there. This may actually drive my adoption of Python 3, which abstracts away character encodings and forces explicit conversions between byte sequences (which can contain encoded characters) and strings (which contain abstract characters.)

The biggest late-breaking problems I had in the e-book building process were to do with broken character encodings. I only use a couple of non-ASCII characters, and fortunately HTML has entities for dealing with them. For non-programmers, writing directly in Sigil is probably an option, and simply using entities like &eacute; for é is likely the best approach.
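For the programmers, the substitution is a few lines; something like this (the mapping here is illustrative, my actual character list is tiny and may differ):

```python
# Illustrative mapping: a handful of non-ASCII characters replaced with
# named HTML entities before the text goes anywhere near the e-book.
ENTITIES = {"\u00e9": "&eacute;", "\u00e8": "&egrave;", "\u2014": "&mdash;"}

def entity_escape(text: str) -> str:
    return "".join(ENTITIES.get(ch, ch) for ch in text)

print(entity_escape("caf\u00e9\u2014closed"))  # caf&eacute;&mdash;closed
```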

I did a lot of review of the generated e-book, and I’m as sure as anything that it still contains errors. Such is life. My goal is to reduce them to an acceptable minimum, and to not let any really egregious ones slip through, although I wouldn’t doubt that I’ve failed at that as well. I usually do.

The front matter took an inordinate amount of time to get right. The copyright page in particular tends to look very different on different devices, and it bugs me when things spill over to the next page, which they inevitably will.

Getting an ISBN was straightforward: I signed up with CISS (the Canadian ISBN Service System) as a publisher. Becoming a publisher was a big decision, so I guess I should talk about it; I’ll get to the process of getting the ISBN itself in a separate post.

When you are publishing a book you are doing the things that a publisher does, or you are doing it wrong. Things are what they are, regardless of what categories we as knowing subjects put them in, but putting things in categories–assigning labels to them and treating them as an instance of a named class–is how we think, and using this process well can help us along through life.

As such, rather than thinking of myself as an “author who publishes his own books” I’m choosing to think of myself as a “publisher” as well as an “author”. Two different categories for two different activities. The name of my imprint is Siduri Press, and I plan to use it as a vehicle for other publications in future, not just my own stuff.

This lets me specify Siduri Press as the publisher, rather than KDP or whatever.

I elected not to embed any fonts in the e-book, relying on system fonts and the user’s settings. This seems to me a reasonable thing, as the user can always override the fonts anyway. E-readers just don’t let the author control the look of the book very much, and that is as it should be. My goal was to get a decent default look, and leave it at that.

The cover image was the biggest hassle. The recommended size is fairly big, and I commissioned an image before I really understood the requirements, so even compressed it weighs almost as much as the text. Beyond the image, there is the actual cover design. I experimented with some different things and after some back-and-forth with Hilary came up with something that looks pretty good on both colour and e-ink displays. My Sony e-reader is the bottom of the heap, but it still makes the image look OK.

The big trick in this case was to put the text in near-white (on Hilary’s recommendation I added just a hint of colour to it, and it does look better). E-readers have typically just 16 grey levels, so a hint of colour will still get flattened to white, but make things look better on tablets.

The biggest aid in all of this is Amazon’s previewer, which you may be able to find on the KDP website. It lets you look at your e-book on all kinds of different devices, and is good for finding problems. I tended to review by comparison: I’d look at the cover on all devices, the title page, the copyright page, and so on. At various times I had problems with special characters, italics, and layout.

It is worth noting that the Adobe Reader software seems to contain a completely broken layout engine that does horrible things to your xhtml and css. I was able to tweak the CSS to get something that looked good in a conforming renderer (Sigil uses WebKit) and was still OK in Adobe’s broken piece of garbage, which did bizarre things with perfectly ordinary horizontal rules.

Overall, the layout experience for the e-book was not at all bad, now that I know everything I need to know. I’ve probably left out a huge whack of stuff here that would be useful, but I want to get some notes down on CreateSpace and LaTeX before it all falls out of my brain.

## The Bonferroni Correction

Despite sounding a bit like the title of a Robert Ludlum novel, the Bonferroni Correction is a somewhat controversial fix for a common issue in statistical analysis. The issue itself is nicely illustrated by this particular bit of wisdom:

> June territory, which has not fared well in past midterm election years in the second year of a presidential cycle

Read those conditions: mid-term election years in the second year of a presidential cycle. First off, I’m not totally sure what this means, as don’t mid-term elections always fall in the second year of a presidential cycle? Or does it mean the president’s second term?

Regardless, it’s easy to see where this is going: once you allow market predictions to be conditioned on, well, pretty much anything, you have a huge range of possible conditions to choose from. Give me any pattern of market behaviour and I’ll find something–from the infinite possibilities–that correlates well enough to make it look usefully predictive.

And that’s the problem: “infinite possibilities”.

As soon as we move from total probabilities (how often something happens regardless of other factors) to conditional probabilities (how often something happens given a particular condition exists) we are in deep water, because unless we restrict the conditions of interest we are bound to find one that makes our probabilities look significant.

The Bonferroni correction attempts to take this into account. It has passed through many hands, and the most recent incarnation is the Holm-Bonferroni correction, but the principle is the same: if we test N hypotheses, as N becomes large the probability of us finding one that fits the data by chance approaches unity.
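To make that concrete: with 100 independent tests of true null hypotheses at the usual 5% threshold, a spurious “significant” result somewhere is all but guaranteed. Bonferroni divides the threshold by N; Holm’s step-down version applies that division only to the smallest remaining p-value. A quick sketch:

```python
# With N independent true-null tests at threshold alpha, the chance of at
# least one spurious "significant" result approaches 1 as N grows:
alpha, N = 0.05, 100
print(round(1 - (1 - alpha) ** N, 3))       # 0.994
# Bonferroni: test each hypothesis at alpha/N, restoring a ~5% family rate
print(round(1 - (1 - alpha / N) ** N, 3))   # 0.049

def holm(p_values, alpha=0.05):
    """Holm-Bonferroni: step down through sorted p-values, rejecting while
    p <= alpha / (number of hypotheses not yet dealt with)."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    reject = [False] * len(p_values)
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (len(p_values) - rank):
            reject[i] = True
        else:
            break
    return reject

print(holm([0.01, 0.04, 0.03, 0.005]))      # [True, False, False, True]
```

Note that the correction buys its protection by making each individual test much harder to pass, which is exactly the trade-off discussed below.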

The thing that makes all corrections of this type problematic is the difficulty in answering the question, “What should N be?”

If I happen to know I have 10 hypotheses under test, N = 10, but that’s hardly ever the case. A great deal of data analysis is exploratory, and as such tends to involve an open-ended set of hypotheses. Furthermore, it is generally impossible to put a cap on the number a priori. We simply don’t know enough until we’ve rolled around with the data a bit, at which point we are covered in the stuff: thoroughly contaminated from a statistical point of view, with priors coming out of our ears.

Formal statistical analysis requires discipline. After taste and good judgement it is the most important thing an analyst has. The problem with corrections of this kind is that they do not admit much leeway in the discipline department.

Real analysis is a spiral into the data, and on any loop of the spiral we may step back and revise our priors. The problem is, we can easily mislead ourselves, and smuggle in the answer we want using various tricks. Underestimating the scope of available conditions is one of the best.

Bonferroni-type corrections are where the subjectivity of plausibilities comes to the fore, because really, if we genuinely have an unbiased prior anything is equally plausible, ab initio. That means our space of available conditions is effectively infinite, and our discovery of some interesting correlation is by definition spurious, which is silly.

This is why Bayesianism requires far more intelligence and discipline than imaginary arguments do. Imaginary arguments just require making up some plausible bullshit. Bayesian arguments require taste and good judgement and discipline: you can’t just accept a genuinely unbiased prior, you must put some thought into the problem and limit the scope of hypotheses you are willing to entertain. Of course, this may throw out the baby of truth with the bathwater of speculation, and that’s why Bayesians value diversity: with many different starting points we are more likely to capture a broader range of significant hypotheses.

## Those Damned Wobbly Suitcases

I was using a little hand-truck to move some stuff the other day and it did that characteristic dance that such things do. Wheeled suitcases do the same thing: take them over the smallest bump and they start to wobble uncontrollably, dancing from side to side until they fall over, or you stop moving and let them settle down.

Rolling it along a rough sidewalk made the problem a lot worse, and being in the rain made me want to walk even faster than usual. So it was a pain, and it got me thinking about the cause. It isn’t obvious, and it particularly isn’t obvious why the motion is so damned aggressive. Even walking backward with both hands on the handle it was difficult to control.

What do you think causes this? Feel free to use your imagination on the problem, and imagine a solution using your imaginary analysis. What do you think would reduce this motion? Why does it seem so violent and difficult to control?

Me, my imagination is useless for that kind of analysis, so instead I use ideas that have been tested by systematic observation and controlled experiment. In this case, the idea is the Free Body Diagram (FBD), which replaces all the materials around the object of interest with the forces they exert, and then puts mathematical constraints on those forces such that they reproduce the physical constraints on the object.

The FBD for this problem is shown below:

There are four forces involved: the (small) upward force from the person’s hand (Fh), the downward force of gravity (mg) and the upward forces from the wheels (Wl and Wr).

Since the suitcase neither leaps into the air nor plunges to the centre of the Earth, we can put the constraint on these forces that:

mg – Fh – Wl – Wr = 0

We know that Fh is relatively small and constant (we aren’t putting much effort into holding the suitcase up… that’s the whole point), so we can treat it as zero and see that the weight (mg) must be about equal to the sum of the forces on the wheels:

mg ~ Wl + Wr

Now, when both wheels are on the ground, the weight will be distributed pretty much evenly (Wl ~ Wr) and the torque on the suitcase will be small.

However, suitcase frames are pretty stiff, which means that if we hit a bump, even a little one, we generally lift one of the wheels–say the left one–off the ground. But again, the suitcase doesn’t immediately plunge to the centre of the Earth, so the net force must still be pretty much zero. But that means that the force on the right wheel must double!
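The arithmetic is trivial, but worth making concrete. A sketch of that force balance, using an assumed 20 kg suitcase (the mass cancels out of the ratio, so the exact number doesn’t matter):

```python
g = 9.8      # m/s^2
m = 20.0     # kg: assumed suitcase mass, illustrative only
weight = m * g

# Both wheels on the ground, Fh ~ 0: the load is shared evenly.
Wl = Wr = weight / 2

# Left wheel lifts off a bump: its force drops to zero, but the net vertical
# force must still be ~zero, so the right wheel carries everything.
Wl_bump = 0.0
Wr_bump = weight - Wl_bump

print(Wr_bump / Wr)  # 2.0: the right-wheel force doubles
```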

That’s where the torque is coming from, enough to drive the left wheel back down hard, jumping the right wheel off the ground and suddenly reversing the torque. No wonder the wobble is so hard to damp: it’s being driven by the whole weight of the suitcase!

As each wheel comes off the ground, the other wheel has to support the whole weight, creating a torque that alternates from side-to-side, making the wobbles bigger and bigger.
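That alternating, constant-magnitude torque is a step function of the tilt angle, and a toy integration shows it drives sustained rocking. All the numbers here (mass, wheel spacing, the crude moment of inertia) are assumptions for illustration:

```python
import math

m, g, L = 20.0, 9.8, 0.4    # kg, m/s^2, m: assumed mass and wheel spacing
I = m * (L / 2) ** 2        # crude moment of inertia about the rocking axis

def simulate(theta0=0.1, dt=1e-4, t_end=2.0):
    """Integrate the tilt angle under a torque that flips sign with the tilt."""
    theta, omega, t = theta0, 0.0, 0.0
    crossings = 0
    prev = theta
    while t < t_end:
        # Constant-magnitude restoring torque, always opposing the tilt.
        torque = -math.copysign(m * g * L / 2, theta)
        omega += (torque / I) * dt   # semi-implicit Euler keeps energy bounded
        theta += omega * dt
        if prev * theta < 0:         # count zero crossings: the rocking
            crossings += 1
        prev = theta
        t += dt
    return crossings

print(simulate())  # many zero crossings: the suitcase rocks back and forth
```

Because the torque magnitude is constant rather than proportional to the angle, the period grows with amplitude, which is why this motion is only approximately simple harmonic.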

Note that putting the wheels farther apart won’t help this, except insofar as it adds a little flexibility to the frame. If the frame bends, then the torque will be lower, at least for smaller bumps.

What would help:

- adding springs to the wheels
- making the frame between the wheels much more springy
- putting the wheels on a separate bar attached to the suitcase by a single central pivot
- making the wheels out of something squishy so they act as springs themselves
- replacing the two wheels with a single central one or a short roller (which would still have to be squishy or springy)
- moving the wheels much closer together and adding a soft (inelastic) rubber nub to each corner of the suitcase

There are likely other solutions as well, but only by analyzing the problem correctly and understanding the cause of the motion can we begin to imagine solutions that might help. If we instead make the mistake of using our imagination to invent the cause of the problem, we are very unlikely to get anywhere. To contend otherwise is to be left wondering why the industrial revolution got going 50 years after Newton, not 5000 years before him.

## Reflections

It’s been a busy couple of months. The first part of this was written a few weeks ago and a whole lot has already changed, mostly for the good.

I’ve sold my boat, and therefore have experienced the other of the two happiest days in a sailor’s life. The new owners made a good purchase for them, I think, so I’m happy it’s going to a good home. It was the right boat for me to buy and the right time for me to sell. New adventures await.

I’ve found a new place to live: larger, less urban and closer to Carrie, as well as to a good yacht club with a community club. It’s also got decent views, and is close to an excellent coffee shop, which may even be better than my current local one.

The larger space, as much as anything else, is a big draw. I’m living in a postage-stamp-sized apartment right now, and while I’m enjoying being right downtown, the new place is all of 20 minutes away on foot, 10 by transit. The biggest change is I’ll have to take a bus to get to the SkyTrain to the airport, which is a hardship I can live with.

Things continue to go well with my employer, which is gratifying. Despite the usual teething troubles that any new technology has, we’re proving ourselves and I expect this will be our year to really start getting traction in the market. As a consultant, I saw a lot of projects through a narrow phase of their lifespan. It’s nice to see this one go all the way.

Carrie and I continue to figure out what life means to us, and how to mix our needs for space and independence with our needs for togetherness and stuff. We’ve been making movies with a local meetup group, and I’ve written a couple of short scripts for them, which is fun.

Theatre has been awesome. I’ve seen some Chekhov and “The Odd Couple”, and there’s the symphony tomorrow and the opera next week. This is no bad thing.

Writing goes well: my novel is through the copy-editing process and I’ve got the final cover-art from the cover-artist, and I’m a week or two away from launch, I think. I’m working on a novel-length version of Songs of Albion while taking a writing course at SFU that I’m really enjoying and learning a lot from.

The world at large is a mess. Russians are invading Ukraine. Crimea is set to explode. Venezuela is a mess. Fighting continues in Syria. People everywhere still think killing people is going to solve all their problems, because it has always worked so well every time it has ever been tried.

I don’t really see that there’s much I can do about it, and that annoys and frustrates me.

I’ve been writing a lot about Bayes here, and that may be useful. People generally want certainty, and are willing to pay almost any price for it. Many people would rather be certain of a lie than hold a doubtful truth. But it is the nature of truth to be doubtful: that is what Bayes’ Rule tells us. Our degree of belief in a truth will always be open to modification by new data, if we hold that truth rationally in the first place. If we are certain, we are doing it wrong, even if what we are certain of is true.

We know–with mathematical certainty, which while not perfect is as certain as we can reasonably get–that Bayes’ Rule is the only consistent way of updating our beliefs in the face of new evidence. Any other way of doing things will result in the same evidence producing different conclusions depending on the ordering of our thoughts. Since consistency is necessary to act on the world–we cannot both do and not do the same thing in the same respect at the same time–having consistent beliefs is quite useful, if we want our thinking to guide our actions.

If we want to use something else to guide our actions, like our feelings, then we don’t need to worry about consistency or anything else, although we do need to take full responsibility for the mess we make of things. Feelings are facts–about us–and like any facts they should be taken into account when we make choices. It would be stupid and irrational to ignore our feelings, because it is stupid and irrational to ignore relevant facts, but likewise it would be stupid and irrational to behave as if our feelings in any way provided justification for action on their own.

As someone who has written and argued for various positions over a good thirty years, and never convinced anyone of anything in that whole time, taking up the cause of Bayes at this point in my life may seem rather foolish. But everyone needs a hobby, and maybe this will be mine in the next few decades.

## Why Speculation About MH370 is Evidence of Innumeracy

Modern air travel is ridiculously safe. Aircraft are not designed using prayer, or crystals, or chi, or any other pre-scientific or anti-scientific “way of knowing” that is demonstrably far less effective than publicly testing ideas by some combination of systematic observation, controlled experiment and Bayesian inference.

Pilots are not trained by looking to the Bible or the Quran or the Guru Granth Sahib as a guide, but using principles that have been worked out by publicly testing ideas by some combination of systematic observation, controlled experiment and Bayesian inference (wouldn’t it be great if we had a word for that discipline that everyone understood, so we could use that word and not have some ignoramus smugly declare that publicly testing ideas by some combination of systematic observation, controlled experiment and Bayesian inference couldn’t prove everything?).

In any case, thanks to all that work by people “who do not teach their God will rouse them/just before the bolts work loose” major airline disasters are unbelievably rare, which is to say: extremely improbable.

That means that when a disaster does happen, the cause is almost certain to be some extremely improbable confluence of events, be it multiple failures of independent systems or some unexpected interaction of systems in combination (the Ariane V explosion was of the latter kind: all the individual sub-systems worked properly, but in combination they destroyed the rocket.)

When we speculate on the possible causes for an event, we are properly limited to things that are not vastly less probable than the most common known causes. The famous medical dictum, “When you hear hoofbeats, think horses, not zebras” applies. There are a variety of causes for the sound of hoofbeats, and the most probable ones will be the cause most of the time because horses, even in modern cities with modern police forces, are just not that rare. I don’t think a year has gone by in my adult life that I didn’t encounter a police horse in a downtown area somewhere.

The relatively high probability of the most common cause in such cases sharply limits the range of speculation, because there just aren’t that many things that are comparably probable.

In the case of air disasters, however, the most common causes are incredibly low probability events. There is a huge range of things that have comparably low probability, and that means the field for speculation is very nearly unbounded, so we can wander across it almost endlessly, never getting any wiser, never getting any closer to the truth.

Speculation in such cases adds nothing. It is not like the case where there are a small number of highly probable causes. In such cases we might be able to exhaustively examine the minutia of the evidence and distinguish between them. But that is only possible because they are so few.

In the case where the most common cause is wildly improbable, it is simply not possible to pluck one hypothesis from the vast array of more-or-less equally plausible ones and study it to the point where it can be significantly raised or lowered in plausibility. For one, it is the nature of vastly improbable events to be very sensitive to detailed assumptions, so the lack of knowledge that surrounds air disasters in their early stages leaves room for different speculators to come to vastly different conclusions based on tiny differences in how they fill in the huge gaps in available information.

As such, engaging in speculation as to what happened to MH370 as if that speculation will ever carry us one whit closer to understanding what happened is strong evidence of innumeracy. The people who are doing this simply do not understand the numerical realities of Bayesian inference in such situations.

This is not to say that such speculation can’t be entertaining, and if people want to entertain each other by making up stories around indistinguishably implausible hypotheses, I’m going to consider them somewhat heartless, cruel and inhumane–because this is after all the tragic disappearance of over 200 human beings–but I won’t call them innumerate.

Still, apart from the rather ghoulish entertainment value, we should all understand that this is a time for mourning, and silence, and careful study of the few data we have in the hope that physically searching–which is nothing but the testing of the ideas “MH370 is at location X/Y” using systematic observation–will lead us to evidence of what actually occurred. The thing we can be practically certain of is that speculation will not.


## Market Predictions

You may have seen that “scary chart” that some idiot is promoting as presaging the possible end of the world. It’s so spectacularly stupid I figured I’d do a few minutes of actual analysis of the “argument” and how to test it, and then spend some time wondering just how much of a moron you have to be to get a syndicated financial advice column.

The argument behind the chart is simple: “The past 17 or 18 months ‘look like’ the run-up to the Crash of ’29, and since charts that look similar in the past are more likely than average to look similar in the future, we should be scared!!! that the next few months will look like October-December of 1929.”

There is no other way of making this argument. It has three simple phases:

1) the past 18 months are highly similar to the 18 months leading up to the Crash
2) high similarity over the past 18 months implies a predictively useful chance of high similarity over the next 3 months
3) Therefore we have a useful prediction that the market will crash in the next three months.

“Predictively useful” is more restrictive than “statistically significant”. It is easy to get statistically significant correlations that are orders of magnitude too weak to be predictively useful. There is a very significant diminution of traffic on weekends, for example, but I would never use that to predict it was safe to cross the road with my eyes closed.

The simplest measure of similarity between two time-series is the cross-correlation, which for all its well-known issues does correlate well with any more sensible measure. So I ran the cross-correlation between the 250 days leading up to the Crash of ’29 and the recent market data, and found that the current values are pretty modest. Here’s a look at the last few thousand trading days (five or ten years worth):
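I don’t have the original script handy, but here is a self-contained sketch of the similarity measure, using synthetic daily returns as stand-ins for the real market data (everything here is illustrative, not the actual analysis):

```python
import math
import random

def pearson(x, y):
    """Normalized cross-correlation (Pearson r) at zero lag between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
# Stand-ins: 250 daily returns "before the Crash" and 250 recent daily returns.
reference = [random.gauss(0.001, 0.01) for _ in range(250)]
recent = [random.gauss(0.0, 0.01) for _ in range(250)]

print(round(pearson(reference, reference), 3))  # 1.0: perfect self-similarity
print(pearson(reference, recent))  # near zero for unrelated return series
```

Note that working on returns (day-to-day changes) rather than raw prices matters here: price series are random-walk-like, and correlating random walks directly is notorious for producing spurious matches.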

Raw correlation

If we were strongly correlated with the 250 days before the crash, there’d be a big spike at the end. See it? Neither do I.

Even if we had a good correlation, would it tell us anything about the future? Here’s a graph of future (60 day) correlation vs past (250 day) correlation, again using the 250 days before the Crash:

Predictive Correlation?

If there was a predictive correlation we’d see a linear blob, not a nicely circular one like we’ve got.
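Again without the original data, the shape of that test can be sketched with synthetic returns (all names and numbers are stand-ins): compute past-window similarity and future-window similarity at many offsets, and see whether one predicts the other.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(2)
PAST, FUT = 250, 60
series = [random.gauss(0.0, 0.01) for _ in range(6000)]     # stand-in daily returns
ref = [random.gauss(0.0, 0.01) for _ in range(PAST + FUT)]  # stand-in reference window

past_sim, fut_sim = [], []
for start in range(0, len(series) - PAST - FUT, 10):
    past_sim.append(pearson(series[start:start + PAST], ref[:PAST]))
    fut_sim.append(pearson(series[start + PAST:start + PAST + FUT], ref[PAST:]))

# If past similarity predicted future similarity, this correlation would be
# well away from zero (a linear blob); for noise it is a circular blob.
print(pearson(past_sim, fut_sim))
```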

So a trivial bit of Python tells us that:

a) the market doesn’t look objectively similar to the run-up to the Crash of ’29.

b) even if it did, it wouldn’t be predictively significant.

Financial pundits are idiots. Film at 11.