
BREAKING: Neutrinos possibly moving at superluminal speeds at CERN!


When I was a kid and was bored in Chem classes, I would occasionally daydream of a messenger arriving at the classroom door to tell the late, great Fr Farrell of a scientific breakthrough.

Of course, in later years I always assumed that any breakthrough would take a long time to filter down to High School Chem.

But today may be just such a day.

According to a BBC report from CERN (HT: WUWT):

Neutrinos sent through the ground from Cern toward the Gran Sasso laboratory 732km away seemed to show up a tiny fraction of a second early.

The result – which threatens to upend a century of physics – will be put online for scrutiny by other scientists.

In the meantime, the group says it is being very cautious about its claims.

“We tried to find all possible explanations for this,” said report author Antonio Ereditato of the Opera collaboration.

“We wanted to find a mistake – trivial mistakes, more complicated mistakes, or nasty effects – and we didn’t,” he told BBC News.

“When you don’t find anything, then you say ‘Well, now I’m forced to go out and ask the community to scrutinise this.'”

One to keep an eye on.

Further details include:

Neutrinos come in a number of types, and have recently been seen to switch spontaneously from one type to another.

The team prepares a beam of just one type, muon neutrinos, sending them from Cern to an underground laboratory at Gran Sasso in Italy to see how many show up as a different type, tau neutrinos.

In the course of doing the experiments, the researchers noticed that the particles showed up a few billionths of a second sooner than light would over the same distance.

The team measured the travel times of neutrino bunches some 15,000 times, and have reached a level of statistical significance that in scientific circles would count as a formal discovery.

But the group understands that what are known as “systematic errors” could easily make an erroneous result look like a breaking of the ultimate speed limit, and that has motivated them to publish their measurements.

“My dream would be that another, independent experiment finds the same thing – then I would be relieved,” Dr Ereditato said.

But for now, he explained, “we are not claiming things, we want just to be helped by the community in understanding our crazy result – because it is crazy”.

“And of course the consequences can be very serious.”

Ole Albert must be peering over the balcony to see what we are going to do. Isaac, standing next to him, is taking the bets. END

Comments
"Once we are beyond that 500 – 1,000 bit threshold"
I don't have a sense of how prevalent these kinds of changes are, or of their rate of appearance. Is it on the order of 1 per phylum, 1 per order, 1 per genus, or something even more frequent, like 1 per 10 generations of a given species? This ties in with my previous question about micro- versus macro-evolution and changes to karyotypes. Chromosomal rearrangement (CR) need not add much (if any) new information and yet can still create reproductive barriers that lead to speciation. Does that mean the best explanation for CRs is a purely natural process? If so, then does that not imply that intelligent designers wait until CR events occur before implementing new FSCI elements?

rhampton7
September 23, 2011 at 12:50 PM PDT
This sampling challenge is what gives teeth to the log-reduced form of the Dembski Chi threshold metric for inferring to design as best explanation: Chi_500 = I*S - 500, bits beyond the solar-system threshold.

If you want to address something like the OOL and want to be quite cautious, you can go up to a 1,000-bit threshold; that swamps the scope of search of the cosmos in a hay bale that makes the above look like a joke -- we are talking 1 in 10^150 here. You couldn't even make such a hay bale to draw a one-straw sample from, using the resources of the observed cosmos. And yet we are here talking about 143 ASCII characters' worth of information, again trivially small for composing code to say something significant or, better, to control something significant. DNA for minimal observed life forms is 100,000 - 1,000,000 bits.

So, in terms of our observed cosmos, the Dembski-type threshold remains quite reasonable. (To suggest a quasi-infinite multiverse is to go beyond science into metaphysical speculation and comparative-difficulties debates, and that opens up a whole world of challenges on which worldview best makes sense of observed reality.) So it is still quite reasonable to argue based on the Dembski-type bound. It gives analytical reason for grounding the empirical observation that the only known cause of functionally specific, complex information of 500 - 1,000 bits or more is design.

Which is why it is patently obvious that the notion that linear, digitally coded, algorithmic strings and data structures such as we find in DNA etc. wrote themselves from blind chance and/or mechanical necessity is -- pardon directness -- blatantly absurd. And PT hangers-on et al., evolutionary biologists and the like are not particularly equipped as experts to dismiss such an information-origination challenge; hence the gross blunder of locking a whole field of science into an information-origins absurdity.

Once we are beyond that 500 - 1,000 bit threshold, the only known and reasonable source of FSCI is intelligence, and it is quite reasonable to onward infer that if we see a system that passes on FSCI using, say, a von Neumann Self-Replicator (as we can see for cell-based life), this is a mechanism for by and large conserving the info and replicating it within machinery that puts it to work, not a viable mechanism for it to have come about by blind chance plus mechanical necessity. That absurdity is imposed by an a priori commitment to materialism, often motivated by the sort of hostility and irrational fear we see in the latter-day dictum that one cannot let a Divine Foot in the door of the hallowed halls of science. (Cf. discussion here.)

It is high time to think again, soberly, about the information-origins challenge for biology: starting with OOL, then addressing the OO of info for major body plans, and onwards to the origin of key systems in our own and other relevant body plans. Obfuscatory squid-ink clouds of mathematicised rhetoric aside, the only empirically and analytically credible source of FSCI is design, and the only credible function of vNSR-type self-replicating mechanisms is to more or less preserve and pass on such FSCI, with some room for modest hill-climbing within islands of function. In case this is not recognised, I am stating the relevant form of the law of origin and conservation of functionally specific, complex information.

G'day, GEM of TKI

kairosfocus
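In code terms, here is a minimal sketch of the Chi_500 metric quoted above, under my own reading of the symbols: I as the information measure in bits and S as a 0/1 dummy that is 1 only when the string is judged functionally specific. The function name and the 7-bits-per-character figure are illustrative assumptions, not anything from the comment itself beyond the 143-character example.

```python
# Minimal sketch (my reading, not the commenter's own code) of the
# log-reduced metric quoted above: Chi_500 = I*S - 500.
# Assumptions: I is measured in bits; S is a 0/1 specificity dummy.

def chi(info_bits: float, specific: bool, threshold: int = 500) -> float:
    """Bits beyond the chosen threshold; positive values pass the filter."""
    s = 1 if specific else 0
    return info_bits * s - threshold

# The comment's example scale: 143 ASCII characters at 7 bits per character.
msg_bits = 143 * 7                                   # 1001 bits
print(chi(msg_bits, specific=True))                  # 501  (past the 500-bit solar-system threshold)
print(chi(msg_bits, specific=True, threshold=1000))  # 1    (just past the cautious 1,000-bit threshold)
print(chi(msg_bits, specific=False))                 # -500 (no specificity, no inference)
```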
September 23, 2011 at 12:33 AM PDT
AMC: The first thing is: what does a reported 60 ns time difference in time of flight over 732 km [~2.44 ms at light speed] mean? That is what the physicists involved are trying to work out; as in, is there some subtle error in their calculation or experimental method that has produced a difference of 25 parts per million? Next, you will immediately see that the differences involved are very small, relatively speaking, and would not affect the order-of-magnitude type of result that Dembski is working with.

Planck came up with a distance, the Planck length, based on relations of physical parameters, such that we find a way to deduce a length from the key parameters h [reduced to h-bar = h/(2*pi)], G and c: l_P = SQRT[h-bar*G/c^3] = 1.616*10^-35 m. This is a tiny fraction of the diameter of, say, a proton: about 1/10^20 of it. It is a reasonable minimum yardstick for length; no length shorter than that is physically meaningful. (BTW, Planck was building a system of natural units based on physical constants.) The speed of light is still a useful relevant speed, and the Planck time is the time light takes to travel one Planck length. In duration, it is 5.391*10^-44 s. The yardstick significance of this time is that the fastest nuclear interactions take about 10^20 t_P and the fastest -- ionic -- chemical interactions take about 10^30 t_P. In short, you will see that the order-of-magnitude effects taken up in Dembski's estimates have long since swallowed up the sort of differences being suggested.

Dembski rounds the t_P down to 10^-45 s and rounds the number of atoms in the cosmos to 10^80 [a particle-number estimate], then uses a threshold of thermodynamic lifespan, 10^25 s, which is tens of millions of times the usual timeline since the Big Bang, 13.7 BYA. Multiplying through, you get 10^150 Planck-time quantum states for the atoms across that span. And recall, the fastest -- ionic -- chemical interactions will use up about 10^30 such states. That's why a latter-day limit of 10^120 physical binary operations [on an effective storage register of 10^90 bits] is very reasonable, whatever one may want to think about Seth Lloyd's calculations.

We may use these sorts of estimates to set up a yardstick for how many shortest-time, atomic-level things can happen on relevant scales, from our planet to our solar system [our effective cosmos for direct chemical-style interactions] to the observed universe. (You may want to look at David Abel's estimates here, and the discussion of their significance here.)

Boiling down: our solar system has about 10^57 atoms, and across its reasonable lifespan to date would have an upper bound of about 10^102 PTQSs; i.e. a sample of at most 1 in 10^48 of the 3*10^150 possible configs for a register of 500 bits. A sample of the possibilities scaled to the solar system's lifespan to date would be like drawing a single straw from a cubical hay bale a light-month across. Even if a solar system were buried in it, you would be overwhelmingly likely to pick up only straw. Samples like this tend to be overwhelmingly representative of the BULK of a distribution of possibilities, so unusual and specific configs of such complexity are maximally unlikely to be reached by blind, chance-based trial-and-error/success approaches. Hence the issues on islands of specific, complex function. Cf. discussions here.

(In practice, if you want to get 73 or so ASCII characters' worth of functionally useful and specific coded info or the like, you do not set a solar system's worth of monkeys, typewriters, paper forests and factories, plus banana plantations and trains, to work banging it out by trial and error.) [ . . . ]

kairosfocus
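For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch of the figures above. The rounding of c to 3.0*10^8 m/s is my assumption; the 60 ns, 732 km, 10^80-atom, 10^45-ticks-per-second and 10^25 s figures are the ones quoted in the comment.

```python
import math

# Back-of-the-envelope checks of the figures in the comment above.
# Assumption: c rounded to 3.0e8 m/s; other figures as quoted above.

c = 3.0e8                             # speed of light, m/s (rounded)
baseline_m = 732e3                    # Cern -> Gran Sasso baseline, m
light_time_s = baseline_m / c         # ~2.44e-3 s, i.e. ~2.44 ms at light speed
early_s = 60e-9                       # reported early arrival, 60 ns
print(early_s / light_time_s * 1e6)   # ~25 parts per million

# Dembski-style cosmic resource count as laid out above:
# ~10^80 atoms x ~10^45 Planck-time ticks per second x 10^25 s.
ptqs = 1e80 * 1e45 * 1e25
print(math.log10(ptqs))               # ~150, i.e. ~10^150 Planck-time quantum states

# A 500-bit register has 2^500 ~ 3.27e150 configurations, the same order
# of magnitude, which is where the 500-bit threshold comes from.
print(math.log2(ptqs))                # ~498 bits
print(2**500 / 1e150)                 # ~3.27
```

A 25-parts-per-million change in the relevant speed plainly does not move any of these exponents, which is the point of the order-of-magnitude argument above.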
September 23, 2011 at 12:33 AM PDT
I came to this site this morning to see whether this report was mentioned. Reading it in an online newspaper immediately raised a question in my mind... I am listening to an audiobook of Dembski's Design Revolution. In a previous chapter he defines something he calls (something like) the Universal Probability Bound, which is calculated by reference to the number of particles in the universe, the age of the universe and some other factors. From memory (although I could be wrong), one of the factors was the idea that nothing travels faster than the speed of light. While his is a very conservative estimate, if I am right in recalling that a premise is that 'nothing travels faster than the speed of light', could the universe have more 'probability resources' than he calculates, on the basis of this research?

AMC
September 22, 2011 at 4:38 PM PDT
