One of them is in new physics, but that isn’t the most likely place to find them.
I’ve only given the paper a quick read at this point, but a few things jump out at me:
- The proton pulses are broad (10.5 μs)
- The muons decay over a 1 km distance
- The instrumental correction is big (1048 ns)
- The analysis appears to have been done with binned data
All of these are sources of significant error, which in this context does not mean “mistake” but “uncertainty”.
To understand the first two points on the list above, think of the experiment this way: you have a machine that fires 1-second-long bursts of ball bearings. They travel over a known distance, which takes them about 200 seconds. Your ball bearing detector unfortunately isn’t very good, and it only detects one bearing for every 10^15 you shoot. For the vast majority of bursts, therefore, you detect nothing at all, and the ones you do detect are distributed across the 1 second width of the burst. An individual detection tells you very little about the time of flight: you have a ±0.5 s uncertainty in the departure time simply because you don’t know where in the burst that particular bearing happened to be.
So you shoot a whole lot of bursts, and you time things precisely, but your detector adds an extra time delay that you calculate ought to be 0.106 seconds but is in fact 0.100 seconds. So your total time of flight is 200.100 seconds rather than the 200.106 seconds you expect. But remember, your burst length is 1.0 seconds. So you need to understand the structure of the burst extremely well or it’s going to be pretty easy to add an error that is much larger than the effect you’re looking for.
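The analogy above is easy to check numerically. Here is a minimal sketch, using only the numbers from the analogy itself (1 s bursts, 200 s flight time); the variable names and the event count are my own illustrative choices:

```python
import random

random.seed(42)

TOF = 200.0        # true time of flight, seconds
BURST_WIDTH = 1.0  # burst length, seconds
N = 10000          # detections accumulated over many, many bursts

# Each detected bearing was emitted at an unknown, uniformly random point
# within its burst and arrives TOF seconds later. Knowing only the burst
# start time, each single arrival carries a +/- 0.5 s uncertainty.
arrivals = [random.uniform(0, BURST_WIDTH) + TOF for _ in range(N)]

# Averaging many detections recovers the mean delay: the burst centroid
# (0.5 s here) plus the time of flight.
mean_delay = sum(arrivals) / N
tof_estimate = mean_delay - BURST_WIDTH / 2

# Statistical precision goes as (width / sqrt(12)) / sqrt(N): a few ms
# with N = 10000. But if the assumed burst centroid is off by 10 ms, the
# estimate shifts by that same 10 ms, no matter how many events you have.
print(tof_estimate)
```

The averaging works, but only to the extent that the burst shape (here, perfectly uniform) is known: any mismodeling of the burst structure goes straight into the answer as a systematic shift, which is exactly the point.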
The first two points on my list are ones that affect the shape of the original distribution of neutrinos, which must be very well known. The researchers think it is.
The third point is the size of the instrumental correction: the time delays as the signal passes through the detector electronics. In this respect, I’m at a bit of a loss as to quite why the data are presented the way they are. For example, Figure 11 compares the ν/p PDFs before and after the instrumental correction, but nowhere do we see the comparison of the PDFs with and without the extra 60 ns, a difference that is over ten times smaller than the physically irrelevant comparison that is displayed.
It is always best to focus your presentation on the physically interesting aspects of the data, not the incidental instrumental effects, and I am quite certain that at least one news outlet will reprint Figure 11 claiming that it shows the size of the difference between the expected arrival time and the actual arrival time.
Be that as it may, the 1048 ns delay is a great place to look for mistakes that might contribute to the anomalous result, but only the people with access to the hardware can do that.
The final point is a much bigger deal. The analysis procedure is not described in sufficient detail to reproduce what they have done, but it is clear that they are somehow building up a probability density function (PDF) of neutrino arrival times and then computing a maximum likelihood over the mean proton pulse shape. The display in Figure 11 shows the PDF in terms of events per 150 ns, while Figure 12 uses events per 50 ns. In either case the granularity of the PDF is far too coarse to meaningfully extract a δt of 60 ns from the data, and if they have simply run their computation on such coarsely binned data, their result is almost certainly an artifact of their analysis method, not a result of miscalibrating their source terms or instrumental delays.
In general, analysis should be done on unbinned data, although in this case building up a quasi-continuous proton PDF is reasonable, since the source (proton) waveform can be measured on a 1 ns timescale, much smaller than the effect in question, while the small number of neutrino events (16111 in all) precludes anything comparable on the neutrino side. The neutrino data should therefore be left with their individual arrival times, and the perfectly well-posed statistical question asked: “What is the probability that these arrival times resulted from this source distribution with a time delay of δt?” No binning artifacts can then enter the answer.
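To make the unbinned approach concrete, here is a toy sketch. A Gaussian pulse stands in for the measured proton waveform (the real analysis would use the measured shape, and every number and name here is illustrative, not OPERA’s): events keep their individual arrival times, and a maximum-likelihood scan recovers the injected delay on a 1 ns grid with no event binning anywhere.

```python
import math
import random

random.seed(1)

# Toy stand-in for the proton waveform: a Gaussian pulse sampled on a
# 1 ns grid. Purely illustrative; the real analysis would use the
# measured pulse shape at this granularity.
GRID_NS = 1.0
CENTER, SIGMA, SPAN = 5250.0, 1000.0, 10500
pulse = [math.exp(-0.5 * ((t - CENTER) / SIGMA) ** 2) for t in range(SPAN)]
norm = sum(pulse) * GRID_NS
pdf = [p / norm for p in pulse]  # normalized source PDF

def log_likelihood(arrivals, dt):
    """Unbinned log-likelihood: each event keeps its own arrival time,
    and the source PDF is simply evaluated at (arrival - dt)."""
    total = 0.0
    for t in arrivals:
        i = int(round((t - dt) / GRID_NS))
        total += math.log(pdf[i]) if 0 <= i < SPAN else -700.0
    return total

# Simulate 5000 individual arrival times drawn from the pulse shape,
# shifted by a true delay of 60 ns (rejection sampling).
TRUE_DT = 60.0
events = []
while len(events) < 5000:
    t = random.uniform(0, SPAN)
    if random.random() < pulse[int(t)]:
        events.append(t + TRUE_DT)

# Scan dt on a 1 ns grid and keep the maximum-likelihood value. The
# events are never binned, so no binning artifact can enter.
best_dt = max(range(-200, 201), key=lambda d: log_likelihood(events, float(d)))
print(best_dt)
```

With a few thousand events the scan lands within the statistical precision (roughly SIGMA/sqrt(N), here about 15 ns) of the injected 60 ns delay, and the only granularity in the problem is the 1 ns sampling of the source PDF, not any binning of the events themselves.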
Binned vs unbinned analysis can be a bit of a religious matter amongst physicists, but I am a member of the Church of the Unbinned, and believe that the freedom from artifacts is more than worth the additional computational complexity. Continuous data should be analyzed continuously.
So it is in the subtleties of their analysis that I would go looking for the missing 60 ns, first by running an unbinned analysis to answer the probability question above. There are other places to look as well, including new physics, but this most mundane task is the one I would focus on first.