Science: the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference.

My intent for this site is that it act as a unified location for my various interests. The main blog is likely to continue to focus on poetry and the like, but there need to be places for more technical material, and this is one of them.

The current model of scholarly publication is well and truly broken, and we have yet to figure out a replacement. In days of yore publications were typically “communicated” by members of various scholarly societies, so once one became a member of the Royal Society, for example, one could publish virtually anything. Thus we have missives from people like Robert Hooke on the birth of a deformed cow in his neighbourhood: whatever took a member’s fancy might wind up in publication.

In the early 20th century science was clearly having a significant commercial impact. Scientific gunnery had changed warfare from an idiotic exercise in incompetent fumbling into an idiotic exercise in incompetent fumbling that actually killed more people than the factors of disease and starvation that had routinely decimated armies in the past.

It simply wouldn’t do to let mere thinkers decide what got published, and the system of peer review was introduced as a way of democratizing the crucial public element of the scientific process.

The first time Einstein had a paper subjected to peer review and got the reviewer’s comments back, he wrote a stern note to the editor of the journal asking what on earth was going on. When the new policy was explained to him he wrote back to say that he did not approve of this system whereby some of his colleagues were denied access to his work until after others had given it a thorough going-over. It seemed to him unprofessional, unegalitarian and undemocratic. He resisted publishing in peer-reviewed journals thereafter.

Regardless of Einstein’s disapproval, peer review came to dominate scientific and scholarly publishing in the 20th century, and it served not too badly to let good work get published and keep bad work out of the journals. But having been on both ends of the peer-review process, I can’t say I like any part of it very much, and you’d be hard-pressed to find anyone who does.

The primary failing of peer review is a consequence of the dead-tree publishing model: too little information is available for reviewers to do an adequate job of vetting new work, particularly with regard to fraud. The recent revelation that the anti-vaccine movement funded fraudulent work that was used to promote their cause is just one prominent example. Get a few scientists in any discipline together and after a few drinks ask them about fraud. You’ll be shocked by what you hear.

My scientific career has spanned two decades and several fields, and to my certain knowledge I have encountered fraud at least once, and possibly several more times. Fraud can be difficult to detect even for people close to the question, and the boundary between “fraud” and “really bad data analysis” is fuzzy.

Had the people involved published their raw data, the various controversies would never have got off the ground. That wasn’t practical in those days. Today it is, even for very large, very complex experiments.

Two groups have been especially good at this: the genomics community and the astronomical community. In time, one hopes, the very notion of scientific publication will become vastly more inclusive than it currently is. In particular, there is nothing to prevent us from making terabytes of data available online for costs that are not out of line compared to ordinary journal publication.

At this point all the old barriers to entry become almost irrelevant, with “real” publication in a peer-reviewed journal only useful for scientific career advancement.

The nice thing about Web-based publication is that the work can speak for itself, and it provides a venue for things that would be difficult to publish otherwise.

One of my hobbies over the past twenty years, for example, has been the computational study of perpetual motion machines (PMMs). These are interesting for two main reasons: the first is purely as examples of really hard computational problems, because the degree of exactitude required to ensure that a many-particle simulation obeys the 2nd Law of Thermodynamics is considerable. The second is that it is often not obvious, even to physicists, why PMMs won’t work, and being able to see the dynamics of a Szilard engine or similar unfold in simulation is often quite enlightening. Watching the dynamics play out grounds the abstract principles in a concrete understanding.
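To give a flavour of the bookkeeping involved, here is a toy model of a one-particle Szilard engine, not any of the simulations alluded to above. The names (`szilard_cycle`, the `accuracy` parameter) and the simplified work/erasure accounting are my own assumptions for the sketch: the demon measures which half of the box the particle occupies, extracts kT·ln 2 when it guesses right, and pays Landauer’s kT·ln 2 to reset its one-bit memory each cycle.

```python
import math
import random

K_T = 1.0  # work in units where k_B * T = 1


def szilard_cycle(rng, accuracy=1.0):
    """One cycle of a toy one-particle Szilard engine.

    The demon measures which half of the box the particle is in
    (correctly with probability `accuracy`), inserts a partition,
    and couples a piston to the side it believes is occupied.
    A correct guess extracts kT*ln(2) via isothermal expansion;
    a wrong guess loses kT*ln(2) as the particle drives the piston
    the other way. Resetting the demon's one-bit memory costs
    kT*ln(2) regardless (Landauer's principle).
    """
    correct = rng.random() < accuracy
    extracted = K_T * math.log(2) if correct else -K_T * math.log(2)
    erasure = K_T * math.log(2)
    return extracted - erasure


rng = random.Random(0)
cycles = 100_000
net = sum(szilard_cycle(rng, accuracy=0.9) for _ in range(cycles))
print(f"mean net work per cycle: {net / cycles:.4f} kT")
# A 90%-accurate demon loses energy on average; even a perfect
# demon (accuracy=1.0) at best breaks even, never gains.
```

The point of the exercise is that once the cost of handling the measurement record is included, no choice of `accuracy` yields positive net work, which is precisely why this class of PMM fails.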

The only people interested in PMMs, however, are a tiny number of physicists and a large number of nuts. As such, getting space to talk about them in academic journals is quite properly difficult.

I’ve thought for a while I really should get around to writing all this stuff up, and this page is a tiny step in that direction. I am as always woefully busy, but unless I publish the results of my investigations they are not science, and I can’t be having with that.
