
ID Metrics and an Active Information Tutorial


One of my favorite parts of ID is that it is creating good tools for biologists to use. ID is often misconceived as a conclusion about whether or not X was designed. Instead, ID presupposes only the *possibility* that something was designed, and that intelligent agents are not mechanistic. In accordance with this, several metrics have been developed.

The first metric that I am aware of is CSI (Complex Specified Information). The method for measuring CSI was originally developed by Dembski in The Design Inference. The main problem with CSI is the difficulty of actually taking the measurements it requires.

The second metric (though "metric" probably isn't quite the right word; it's a qualitative measure) is Irreducible Complexity, as described in Darwin's Black Box. As originally proposed by Behe, Irreducible Complexity is fully testable, and it has been tested successfully by Minnich and Meyer in the lab. While Irreducible Complexity as proposed by Behe only argues conceptually against Darwinism, further theoretical work shows more specifically, on computational grounds, why Irreducible Complexity argues for intelligence, and points to practical uses of ID in biological research.

The third metric, however, is my favorite. It is a simpler conception, yet very powerful, and is based directly on the No Free Lunch theorems: "Active Information". Active Information measures how much information a search algorithm has about the pattern of the search space it is searching, and it is measured by comparing the algorithm's performance against a blind search. The paper describing it is here. This concept has been further applied to measure the amount of active information used by the immune system during somatic hypermutation (about 22 bits), and additional research is ongoing to apply it more generally to cells in hypermutable states.
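To make the comparison concrete, here is a minimal Python sketch of the arithmetic (the probabilities are illustrative, not figures from the paper): if a blind search succeeds with probability p and the assisted search succeeds with probability q, the active information is log2(q/p).

    import math

    def active_information(p_blind: float, q_assisted: float) -> float:
        # Active information, log2(q/p): the bits an assisted search
        # gains over a blind search toward the same target.
        return math.log2(q_assisted / p_blind)

    # Illustrative numbers only: a target that blind search hits with
    # probability 1e-9 but an assisted search hits with probability 1e-3
    # carries log2(1e-3 / 1e-9), roughly 19.9 bits of active information.
    print(active_information(1e-9, 1e-3))

A negative result is meaningful too: if q < p, the "assistance" actually steers the search away from the target.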

Anyway, Active Information has huge potential in biology to help detect which processes have front-loaded information, and how much information the cell is actually supplying to mutational processes. Below, Robert Marks gives a *great* lecture on information generally, and ends the lecture specifically talking about Active Information in evolutionary systems.

Comments
Exactly. If it works, it’s advantageous. Can you let Behe know?
Cute, Elizabeth. Anything substantive to say? Of course you were just having fun, right? You don't actually think that this is something Behe disagrees with, right?
Eric Anderson
March 20, 2012 at 07:59 AM PDT
johnnyb:
ID is often misconceived as a conclusion about whether or not X was designed. Instead, ID presupposes only the *possibility* that something was designed, and that intelligent agents are not mechanistic.
I think I understand what you are trying to say, but I fear the above may be misconstrued.
- It is always logically possible, independent of ID, that something was designed.
- ID is indeed interested in whether we can conclude that something was designed. It is not a deductive conclusion, but an inductive one. The enterprise is quite interested in whether there are objective indicia of design that can be applied to determine whether something was designed.
I think what you're suggesting is that, in the context of applied science, giving researchers the freedom to consider whether something was designed lets them explore new avenues and gain new insights. I certainly agree with that aspect.
Eric Anderson
March 20, 2012 at 07:55 AM PDT
Why? Behe accepts naturalistic evolution with common descent. His point concerns what is achievable *without* directed mutations. The answer is: not much. Therefore, one needs directed mutations to account for important changes in the history of life.
johnnyb
January 22, 2012 at 06:56 PM PDT
Actually, it may have been Foster who switched sides (or maybe both!). My memory is hazy at the moment.
johnnyb
January 22, 2012 at 06:33 PM PDT
Nick - Again, you have completely misunderstood ID. Perhaps if you stopped hating ID for just a few minutes - long enough to figure out what it says - you might at least see what we are saying. For instance, you made the following false claims about my position:

"Which is the same thing as saying that normal evolution as scientists understand it, i.e. evolution by natural processes, doesn't work."

Actually, my proposed idea (and that of Behe) consists ENTIRELY of natural processes. In what sense is a mechanism a non-natural process? Are computers non-natural processes? I am simply positing that there was an existing information source (much like a hard drive on a computer) which held information to assist the changes, and giving criteria for when we should be looking for such information. What part of that is non-natural? I have other ideas for non-natural processes (you can come to the Engineering and Metaphysics conference this summer to see them if you are interested), but they were not discussed in this paper at all, and, in fact, aren't even about evolution.

"None of the events I described (gene duplication, mutation to increase binding, selection for regulation for efficiency) is difficult or rare"

I didn't claim otherwise.

"and the mutation events didn't even have to happen together."

*Pieces* of the mutation events don't have to happen together, assuming that you already have proto-FleN. So, if you assume half of your argument, then, amazingly, you are halfway there ;) The fact, though, is that for them to work well together, there is likely a need for some amount of coordination.

In addition, you are very wrong about the directed mutation controversy. In fact, some of the major players who were *against* directed mutations have switched sides (such as Rosenberg). The problem is that they put an evidential requirement on the hypothesis that was just silly. For instance, in the Lac+ mutation, it was assumed that the Lac gene wasn't targeted because they found mutations in other genes. However, most of the genes they found mutations in were other genes for metabolizing sugars! So it seemed that while the cell did not score a 100% hit of the right mutation on the right gene, it had heuristics about which genes were likely to contain the correct hit.

Another issue is that most of the selection experiments involved lethal selection. It is true that in the face of *lethal* selection, only the cells in which the mutation pre-existed survived. However, many of the researchers noted that after a while (several days after one would normally perform a fluctuation test), additional colonies would start to acquire the mutation in a pattern which did *not* follow the Luria-Delbruck distribution. These were apparently organisms which did not get a lethal dose the first time, and therefore could trigger a mutational response.

Likewise, a clear example of directed mutation is in the adaptive immune system, where the mutation machinery skips over 99% of the genome to land mutations in the correct *half* of the correct gene. The mutations are targeted by an upstream non-coding sequence which points to where the mutations should go (if you move the sequence, you move the location of the mutations). As you can see, this process is directed. In addition, the selection is artificial rather than natural. That is, the cells that die aren't the ones lacking a sufficient metabolism to keep going; they are the ones which don't perform the proper function.

Therefore, the selection is not about *survivability*, but *aptness* - or, to put it another way, whether the cells match the teleology of the organism. So I wouldn't be so quick to put the nail in the coffin of the directed mutation hypothesis. If you allow for semi-directed mutations (after all, *no one* claimed that the cells were omniscient), then all of a sudden the tests used to show that mutations aren't directed don't seem so worthwhile. That's why I advocate using Active Information as a metric - it gives an actual value to the amount of directedness that any mutation has, given the present selection pressures.
johnnyb
January 22, 2012 at 06:22 PM PDT
Claims that accumulations of random mutations actually construct multi-protein machines never pan out. That is why scientists are skeptical of them. And Nick, if you testified in a court case involving ID and tried to use gene duplication, you would get laughed out of the courtroom.
Joe
January 22, 2012 at 04:53 PM PDT
The question is, what makes it “faulty” if it is producing successful adaptations?
Nice to see this question asked by an ID proponent :D Exactly. If it works, it's advantageous. Can you let Behe know? ;)
Elizabeth Liddle
January 22, 2012 at 03:18 PM PDT
You should look into recent research regarding mutation theory – more and more we are seeing that mutations – whether gene duplications, SNPs, or other fun, are actually orchestrated by existing information. And the goal of my paper is simple – to show the kinds of evolutionary events which appear to presuppose orchestration.
Claims of adaptive mutation, directed mutation, etc. have been made again and again for decades. They almost always get a splash in the press, then fail to pan out, turning out to be due to something the scientists missed - usually a selective step the researchers didn't detect, an experimental problem, or contamination. That's why most scientists are skeptical of most of it. You cite one interesting study, but it looks like a very special case, not any sort of evidence for a general mechanism, whereas random mutation and natural selection is a very general mechanism.
You keep on arguing as if ID says that these are totally unevolvable. I didn’t say that. Behe didn’t say that. Dembski didn’t say that. What we said is that information/orchestration is required.
Which is the same thing as saying that normal evolution as scientists understand it, i.e. evolution by natural processes, doesn't work. Which is what you were arguing, except that your arguments didn't work because they are contradicted by known facts. None of the events I described (gene duplication, mutation to increase binding, selection for regulation for efficiency) is difficult or rare, and the mutation events didn't even have to happen together. To seriously examine the question, we'd have to get a survey of all the related systems, phylogenies of FleN, FleQ, and relatives, etc., and then see how plausible my scenario is versus your assertions about why a gradual, natural evolutionary pathway wouldn't work. I've already pointed out some basic problems with your claims based on the little we know now. If you're going to claim that natural evolution is impossible, or ridiculously improbable, on the basis of the FleQ/FleN system, you'd better do better than that, and frankly you should at the very least do the above minimal research work for your readers. Too bad the peer-reviewers at that creationist journal didn't insist on this. Any real science journal would.
NickMatzke_UD
January 22, 2012 at 02:51 PM PDT
Johnnyb:
This appears to be a backup system. A complex of Crp and cAMP is what normally switches on glycerol production. However, this mutational process is for E. coli with a Crp deletion. Therefore, in the absence of Crp/cAMP, the genome is marked in such a way that a backup mutation can provide the necessary promoter.
This "backup" mutation does not provide a new promoter, it activates the original one. However, it is interesting that the mutations that relieve the starvation stress arise at high rates only when the mutation would diminish the stress, it exhibits the features of Lamarckian evolution.Starbuck
Starbuck
January 21, 2012 at 10:18 PM PDT
Nick - You have again misread me and, likewise, the whole ID movement. I never said that gene duplication is impossible. My point is that when it occurs, there are enough issues that *orchestration* is required for it to occur. You have already presupposed three incredibly unlikely pieces to explain one previously unlikely event. You are making the problem worse, not better. You should look into recent research regarding mutation theory - more and more we are seeing that mutations - whether gene duplications, SNPs, or other fun, are actually orchestrated by existing information. And the goal of my paper is simple - to show the kinds of evolutionary events which appear to presuppose orchestration.

Imagine, for instance, if you told me that a blind person drove from one side of town to the other, by themselves. I'm not going to believe you. "But," you would argue, "if they turned left on Seventh, right on Bristleblock, and then did a hard left on Mistletoe Street, they'd get there just fine." That may be true enough, but the point is irrelevant. If the driver is blind, they probably aren't going to do that. On the other hand, if the driver had a navigation computer which said "turn left now," "turn right now," then it is possible for them to navigate. The point of the paper is to show which types of systems are likely to require organization to evolve.

You keep on arguing as if ID says that these are totally unevolvable. I didn't say that. Behe didn't say that. Dembski didn't say that. What we said is that information/orchestration is required.

One interesting read is this paper by Zhang and Saier. In it, they describe an orchestrated mutational event. However, you should note that it took, I think, 13 separate experiments to determine that the mutation was part of a regulated system rather than being haphazard. It would not have taken any experiment for them to have declared (quite authoritatively) that the mutational event was haphazard and fortuitous. Biology does not currently require one to prove one's position when claiming that a mutation was fortuitous, but it requires an enormous amount of data to prove that a mutation was orchestrated. As such, I imagine there are many more mutations which are part of orchestrated systems than are currently noted. The point of my paper was to present the scenarios likely to need orchestration to occur.
johnnyb
January 21, 2012 at 05:30 PM PDT
Don’t forget that FleN is regulated by FleQ.
FleQ just turns up expression of the operon containing FleN, along with operons containing 20+ other early flagellum genes (mostly the basal body genes homologous to nonflagellar T3SS). FleN then downregulates FleQ. This isn't how regulation works even in somewhat closely-related flagellated bacteria, like the standard model systems in E. coli and Salmonella. Knocking out FleN in Pseudomonas doesn't even knock out motility. So even on Behe's IC argument, you don't have a case. The FleQ homolog in Bacillus subtilis, when knocked out, produced no obvious unusual motility phenotype. You need to explain why it would be so amazingly hard to evolve such a system from a standard multiflagellated ancestor, and to have any hope at all your argument must not contradict well-known facts.
NickMatzke_UD
January 21, 2012 at 05:10 PM PDT
What did “proto-FleN” do?
It could have been any of a large number of downstream genes producing some accessory flagellar protein (or even some non-flagellar protein that gets transposed into the flagellar operon).
What happened to that function when it changed?
You've never heard of gene duplication, I guess. You don't have an objection, you have "I personally haven't looked into this *at all*, but I'm going to declare the downfall of Darwinian evolution based on it nevertheless."
That isn’t an argument, or even an alternative, it’s just B.S.
You're the one whose "argument" relies on the nonexistence of gene duplication, the nonexistence of functional shifts, and the nonexistence of successful bacteria (a) without this system and (b) which happily produce multiple flagella.
NickMatzke_UD
January 21, 2012 at 04:37 PM PDT
Nick - Don't forget that FleN is regulated by FleQ. For this to work, you have to have coordinated mutations: the receptors for FleN must come under FleQ's regulation, and FleN must acquire the capacity to regulate FleQ. Now, my point was NOT that it couldn't evolve. In fact, the whole point of the paper was this - what would it mean if it did! The point was that half-baked, biologically unrealistic stories like yours are intrinsically unlikely. You didn't even bother to give a realistic story. It isn't science to say "what if things were different and they magically changed incrementally."

What did "proto-FleN" do? What happened to that function when it changed? That isn't an argument, or even an alternative, it's just B.S. It's as if, every time I looked at a thermometer, I thought, "Man, I wonder what kind of non-thermal force might have been at work to produce the temperature reading." Is it true that it is possible for non-thermal processes to interfere with thermometers? Sure. But if you invoke them for every temperature reading, you're not doing science, you're doing make-believe.

What is needed are people who will think about the informational requirements for change, and how those informational requirements could be met, and then experiment to see to what extent those informational resources actually exist in the cell. You should take a look at books like Caporale's "The Implicit Genome," which covers a lot of topics such as this.
johnnyb
January 20, 2012 at 09:09 PM PDT
johnnyb: Very interesting. I was not aware of that calculation, but it is obviously right. I have discussed many times the algorithm that generates the increase of antibody affinity after the primary immune response as a wonderful example of intelligent protein engineering embedded in our immune system. I usually refer to the process described in your reference as "targeted mutation". It is, indeed, a very good example of added information. But it is not the only one in that specific system. Two more information-adding mechanisms highly contribute to the final result: a) the hypermutating system, which is complex and implies many enzymes; and b) a very powerful process of intelligent selection, based on the affinity of the new clones to the original epitope, stored in the antigen-presenting cells.

Those mechanisms are summed up in the following quote from Wikipedia:

"Experimental evidence supports the view that the mechanism of SHM involves deamination of cytosine to uracil in DNA by an enzyme called Activation-Induced (Cytidine) Deaminase, or AID. A cytosine:guanine pair is thus directly mutated to a uracil:guanine mismatch. Uracil residues are not normally found in DNA; therefore, to maintain the integrity of the genome, most of these mutations must be repaired by high-fidelity DNA mismatch repair enzymes. The uracil bases are removed by the repair enzyme uracil-DNA glycosylase. Error-prone DNA polymerases are then recruited to fill in the gap and create mutations. The synthesis of this new DNA involves error-prone DNA polymerases, which often introduce mutations either at the position of the deaminated cytosine itself or at neighboring base pairs. During B cell division the immunoglobulin variable region DNA is transcribed and translated. The introduction of mutations in the rapidly-proliferating population of B cells ultimately culminates in the production of thousands of B cells, possessing slightly different receptors and varying specificity for the antigen, from which the B cells with the highest affinities for the antigen can be selected. The B cells with the greatest affinity will then be selected to differentiate into plasma cells producing antibody and long-lived memory B cells contributing to enhanced immune responses upon reinfection."

So the whole process is a very good example of how an intelligent process of protein engineering, based on targeted hypermutation and intelligent selection by direct measurement of a specific function, can easily enough find a functional target, optimizing an existing function (a toy sketch of such a loop appears at the end of this comment).

Finally, I would like to add that dFSCI (a metric I often use here) is only a subset of CSI, defined by the kind of objects considered (only digital sequences) and the kind of specification (functional specification). Essentially, the concept is the same.
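A toy sketch of that mutate-and-select loop, in Python. Everything here is an illustrative stand-in, not a biological parameter: the 8-base "variable region", the mutation rate, the population size, and the match-count affinity function are all hypothetical.

    import random

    TARGET = "ACGTACGT"   # stands in for the epitope-matching sequence
    ALPHABET = "ACGT"

    def affinity(seq: str) -> int:
        # Matches to TARGET stand in for binding affinity.
        return sum(a == b for a, b in zip(seq, TARGET))

    population = ["AAAAAAAA"] * 20  # identical naive clones
    for _ in range(30):
        # Targeted hypermutation: only the variable region mutates, at a high rate.
        mutants = ["".join(random.choice(ALPHABET) if random.random() < 0.1 else base
                           for base in clone)
                   for clone in population]
        # Selection by measured function: keep the best binders and re-expand them.
        mutants.sort(key=affinity, reverse=True)
        population = mutants[:5] * 4

    best = max(population, key=affinity)
    print(best, affinity(best))  # converges toward TARGET

Restricting mutation to the "variable region" while selecting on measured affinity is what makes the loop converge quickly; unrestricted mutation over a genome-sized string would not.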
gpuccio
January 20, 2012 at 07:50 PM PDT
Nick Matzke:
Starting point: System with just FleQ. Bacterium produces multiple flagella, and not even an exact number of them, just like many bacteria living today. Next step: Selection for fewer flagella (for efficiency or whatever) selects mutation(s) on FleQ and one of the downstream proteins (proto-FleN) such that they interact a little more strongly. First FleQ is weakly downregulated, producing slightly fewer flagella. Later it is more strongly downregulated.
So in order to explain the origin of one IC config you rely on another? Talk about a totally uncooked argument. No thinking required when you start out with the very stuff you need to explain in the first place. How about starting with a flagella-less system...
Joe
January 20, 2012 at 06:24 PM PDT
Pallen and Matzke (2006) argue for the exaptational origin of the flagellum. As we’ve shown, just the FleQ/FleN pathway makes the evolution of this system solely by natural selection unlikely.
How so? The FleQ/FleN pathway isn't even universally required in known flagellar systems. You write in the paper:
An example of a relatively irreducibly complex mechanism, then, would be the control of the flagellar assembly in the bacterium Pseudomonas aeruginosa, which uses a multilevel control system to regulate the formation of the flagellum. FleQ is a transcription factor that regulates a number of other genes used in flagellar assembly. One of the downstream products of the assembly is FleN. FleN interacts with FleQ to deactivate it, preventing multiflagellation (Dasgupta et al. 2003). The regulation of FleQ is done downstream of FleQ itself, making a step-at-a-time evolution of the pathway extremely difficult.
This doesn't work either. E.g.: Starting point: System with just FleQ. Bacterium produces multiple flagella, and not even an exact number of them, just like many bacteria living today. Next step: Selection for fewer flagella (for efficiency or whatever) selects mutation(s) on FleQ and one of the downstream proteins (proto-FleN) such that they interact a little more strongly. First FleQ is weakly downregulated, producing slightly fewer flagella. Later it is more strongly downregulated.

"Just not so" stories are pretty worthless when the author makes no effort whatsoever to look up relevant data or consider the plausibility of alternative contentions. Unfortunately, this is almost universally all that creationists/IDists do - throw out some half-baked argument that "seems obvious" to them without doing anything like the serious research and thinking required. Then they get mad when they don't get taken seriously by real biologists.
NickMatzke_UD
January 20, 2012 at 05:32 PM PDT
The short answer - if, like Broadway Danny Rose, I may interject, Pierre - is that they are terrified of a paradigm change which would make them look as foolish as the stridency with which they have proclaimed non-believers to be irrational half-wits. I believe psychologists call it 'projection'. There was an article on here recently, I believe, in which it was recounted that a student questioned her professor on an aspect of 'evamolution', and he 'went ballistic'. Imagine if he had addressed her question with integrity on its face value. If he had yielded to the merit of her question, it would have opened a whole Pandora's box for him, because one truth would have led to another - and where would his career have gone from there, eh? Those truths would have been festering away subliminally, but were, necessarily, fiercely repressed. If he had not felt threatened, then, instead of exploding with rancour, he would have addressed her as I am addressing you, and patiently explained to her, perhaps with a slightly pained demeanour: "My dear child..." Scripture tells us we must bear one another's burdens, so a little kindness, however patronising in tone, goes a long way - as I'm sure you would agree.
Axel
January 20, 2012 at 04:07 PM PDT
Peter:
Then why do the vast majority of scientists (i.e. the people closest to the evidence) of all religions and faiths and none at all, all over the world, disagree with you?
This is really the weakest of all the arguments put forward against ID. Anyone with a passing familiarity with the history of science knows that the fact that a theory is endorsed by the majority of scientists is hardly a guarantee that it is correct.
So, as you claim to have "scientific evidence" for the existence of life from an ID perspective, could you please share that with me?
The following is a partial list, but it will get you started:
- Evolution: A Theory in Crisis by Michael Denton
- Darwin's Black Box and The Edge of Evolution by Michael Behe
- Signature in the Cell by Stephen Meyer
- The Design Inference and No Free Lunch by William Dembski
- Genetic Entropy and the Mystery of the Genome by J.C. Sanford
- Douglas Axe and Ann Gauger's papers, available at their Web site.
Bruce David
January 20, 2012 at 04:01 PM PDT
Starbuck - The question is, what makes it "faulty" if it is producing successful adaptations? Historically, "faulty" has described polymerases which did not copy with 100% fidelity, but what if that is the point? A benefit of Active Information is that it quantifies this question, so we can see whether such a process does or does not supply active information. The nice thing about the SMH example is that we know biologically that the cell is restricting itself to a certain space, so the calculations were simplified. For more global mutators it is harder to assess, but it is something I've been working on (a rough sketch of the comparison appears below). If you want to talk about it more, send me an email - jonathan.bartlett@blythinstitute.org.
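As a sketch of how that quantification could go (every number below is an assumption for illustration, not a measurement): compare the chance that a uniform, genome-wide mutation hits an adaptive site with the chance under the mutator in question. A uniformly "faulty" polymerase leaves the distribution over sites unchanged and contributes zero bits; only a mutator that concentrates mutations near adaptive sites contributes positive active information.

    import math

    GENOME_BP = 4.6e6   # assumed genome size (roughly E. coli-scale)
    TARGET_BP = 10      # assumed number of sites conferring the adaptation

    def active_info_bits(window_bp: float, target_in_window_bp: float) -> float:
        # Active information of a mutator confined to a window,
        # relative to uniform genome-wide mutation: log2(q/p).
        p = TARGET_BP / GENOME_BP            # blind: uniform over the genome
        q = target_in_window_bp / window_bp  # mutator: uniform over its window
        return math.log2(q / p)

    print(active_info_bits(GENOME_BP, TARGET_BP))  # global mutator: 0.0 bits
    print(active_info_bits(1_000, 10))             # focused window: ~12.2 bits
    print(active_info_bits(1_000, 1))              # partial overlap: ~8.8 bits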
johnnyb
January 20, 2012 at 11:00 AM PDT
I hold a theistic interpretation of quantum mechanics:

"As a man who has devoted his whole life to the most clear headed science, to the study of matter, I can tell you as a result of my research about atoms this much: There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter." - Max Planck, the father of quantum mechanics, Das Wesen der Materie [The Nature of Matter], speech at Florence, Italy (1944)

(Of note: Max Planck was a devoted Christian from early life to death, was a churchwarden from 1920 until his death, and believed in an almighty, all-knowing, beneficent God. http://en.wikiquote.org/wiki/Max_Planck)

Alain Aspect and Anton Zeilinger, by Richard Conn Henry, Physics Professor, Johns Hopkins University. Excerpt: "Why do people cling with such ferocity to belief in a mind-independent reality? It is surely because if there is no such reality, then ultimately (as far as we can know) mind alone exists. And if mind is not a product of real matter, but rather is the creator of the 'illusion' of material reality (which has, in fact, despite the materialists, been known to be the case, since the discovery of quantum mechanics in 1925), then a theistic view of our existence becomes the only rational alternative to solipsism (solipsism is the philosophical idea that only one's own mind is sure to exist)." (Dr. Henry's referenced experiment and papers: "An experimental test of non-local realism" by S. Gröblacher et al., Nature 446, 871, April 2007; "To be or not to be local" by Alain Aspect, Nature 446, 866, April 2007.) http://henry.pha.jhu.edu/aspect.html

Wave function. Excerpt: "wave functions form an abstract vector space ... This vector space is infinite-dimensional, because there is no finite set of functions which can be added together in various combinations to create every possible function." http://en.wikipedia.org/wiki/Wave_function#Wave_functions_as_an_abstract_vector_space

Explaining Information Transfer in Quantum Teleportation, Armond Duwell, University of Pittsburgh. Excerpt: "In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1)." http://www.cas.umt.edu/phil/faculty/duwell/DuwellPSA2K.pdf

Quantum Computing, Stanford Encyclopedia. Excerpt: "Theoretically, a single qubit can store an infinite amount of information, yet when measured (and thus collapsing the Quantum Wave state) it yields only the classical result (0 or 1)." http://plato.stanford.edu/entries/qt-quantcomp/#2.1

Single photons to soak up data. Excerpt: "the orbital angular momentum of a photon can take on an infinite number of values. Since a photon can also exist in a superposition of these states, it could - in principle - be encoded with an infinite amount of information." http://physicsworld.com/cws/article/news/7201

It is important to note that the following experiment actually encoded information into a photon while it was in its quantum wave state, thus destroying the notion, held by many, that the wave function was not 'physically real' but merely 'abstract', i.e. how can information possibly be encoded into something that is not physically real but merely abstract?

Ultra-Dense Optical Storage - on One Photon. Excerpt: "Researchers at the University of Rochester have made an optics breakthrough that allows them to encode an entire image's worth of data into a photon, slow the image down for storage, and then retrieve the image intact." http://www.physorg.com/news88439430.html

The following paper mathematically corroborated the preceding experiment and cleaned up some pretty nasty probabilistic incongruities that arose from a purely statistical interpretation, i.e. it seems that stacking a 'random infinity' (parallel universes to explain quantum wave collapse) on top of another 'random infinity' to explain quantum entanglement leads to irreconcilable mathematical absurdities within quantum mechanics:

Quantum Theory's 'Wavefunction' Found to Be Real Physical Entity, Scientific American, November 2011. Excerpt: "David Wallace, a philosopher of physics at the University of Oxford, UK, says that the theorem is the most important result in the foundations of quantum mechanics that he has seen in his 15-year professional career. 'This strips away obscurity and shows you can't have an interpretation of a quantum state as probabilistic,' he says." http://www.scientificamerican.com/article.cfm?id=quantum-theorys-wavefunction

The quantum (wave) state cannot be interpreted statistically, November 2011. http://lanl.arxiv.org/abs/1111.3328
bornagain77
January 20, 2012 at 09:51 AM PDT
Ahh, I found it here: http://www.blythinstitute.org/images/data/attachments/0000/0005/EstimatingActiveInformationPoster_final.pdf
Starbuck
January 20, 2012 at 09:48 AM PDT
Yeah, I read that part, but there's no actual calculation there. For example, how exactly did you get 22 bits? I was thinking more of a case where a faulty polymerase actually helps: some bacteria that could not survive on some carbon source develop the ability thanks to this polymerase. Many biologists would probably be fooled into thinking this was evidence of directed mutation, when it was a faulty polymerase that worked briefly and then died out. How much information was applied to the search? How would I go about figuring that out?
Starbuck
January 20, 2012 at 09:41 AM PDT
Perhaps it would be worthwhile to provide an FAQ which includes terminology. But you would need to add one from my paper - RIC, Relative Irreducible Complexity.
johnnyb
January 20, 2012 at 09:36 AM PDT
Starbuck - I provided examples - specifically of somatic hypermutation. This is from the linked abstract:
Historically, Active Information has only been applied to computer-based evolutionary algorithms. However, it can also be applied to biological systems. Applying Active Information to the Somatic Hypermutation (SMH) process for refining binding sites during an immune response makes an excellent test case for using this concept biologically. Because SMH primarily works by restricting the physical range of base pairs which it mutates, it simplifies the Active Information calculations. One can simply subtract log2(SMH mutation space) from log2(whole genome search space) and estimate that the SMH process contributes approximately 22 bits of Active Information to the search. Additional factors can complicate this estimate, such as taking into account the number of mutations required for hitting the target.
The example you reference does not apply to active information, because it is not a biological search. For it to be a biological search, you would need to specify the selection pressures. Then we could ask that question using Active Information, and see how the process contributes (positively or negatively) to Active Information. Note that it is entirely possible to come up with *negative* Active Information (meaning that the evolutionary program of the genome points *away* from likely solutions). I think those cases are just as important, because they help you find which specific problems genomes are built to be modified for. And that brings me to the other nice part of Active Information - you don't have to know about a mechanism ahead of time to make the calculation, but if there is significant positive active information, that tells you there is probably a mechanism worth finding in the system. (A minimal numeric sketch of the 22-bit subtraction follows.)
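Here is the abstract's subtraction as a minimal numeric sketch; the genome and window sizes are assumed round figures chosen to land near 22 bits, not values taken from the poster:

    import math

    genome_bp = 3.2e9    # assumed whole-genome search space
    smh_window_bp = 800  # assumed SMH mutation window (illustrative)

    # log2(whole genome search space) - log2(SMH mutation space)
    bits = math.log2(genome_bp) - math.log2(smh_window_bp)
    print(f"{bits:.1f} bits")  # ~21.9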
johnnyb
January 20, 2012 at 09:34 AM PDT
"Can you name a single fact that ID/Creationism has discovered that Darwinism has not? If not, are not claims of it’s potential utility in research premature?" That's easy - genetics. It was discovered by Mendel in use as an anti-evolutionary argument (read the conclusion of Mendel's paper if you don't believe me). I think that the Price equation is similar, though its original founder was a little more coy about it's origin from theology. If you want to see the relevance of Creation in the history of biology, you should check out my article on the subject: The Doctrine of Creation and the Making of Modern Biologyjohnnyb
johnnyb
January 20, 2012 at 09:27 AM PDT
Peter, without wasting bandwidth, I suggest you simply demonstrate the emergence of control, functionality, and symbolic information processing in nature by chance and necessity alone. OK?
Eugene S
January 20, 2012 at 09:26 AM PDT
Peter - That actually wasn't what I was referring to. It amazes me how many people argue vehemently against ID without reading the technical papers. If you had bothered to actually read the paper, you would have found that you quoted from section 3.3 but skipped over 3.1, 3.2, and 3.4, which have applications to non-creationary aspects of biology, irrespective of the truth or falsity of common descent. Did you intentionally skip over those? So, in 3.1, I discuss the following:
Pallen and Matzke (2006) argue for the exaptational origin of the flagellum. As we’ve shown, just the FleQ/FleN pathway makes the evolution of this system solely by natural selection unlikely. However, that does not completely nullify the argument of exaptation. Because we lack total knowledge, this system is an RIC system. However, as discussed in section 2.6, this leaves open a few possibilities for its evolution. If it is evolvable, then it means that the traversed sequence space has been somehow regularized. An analogous (though not functionally homologous) way of looking at the possible evolution is to compare it to the V(D)J recombination system in which specific gene regions, designated as either variable (V), diversity (D), or joining (J) regions of the immune system, are randomly selected and assembled. In the V(D)J recombination system, the formation of immunoglobulin genes is facilitated by recombination signal sequences (RSSs), which mark segments of functionality. These, in turn, are then assembled in a regularized way, and the whole process resembles a computer metaprogram—a program which generates other programs (Bartlett 2006). These pathways are not deterministic, but information is the main driving force in their generation. The RSSs provide the information within the genome to guide the recombination towards likely functional paths. The FleQ/FleN pathways (and others) could be evolved through an analogous system which put together pieces of functionality based on templates. Rigoutsos et al. (2006) have claimed to have found gene sequences that match such a description. Whatever the exact mechanism, the RIC concept indicates that although an unguided evolution of the flagellum by exaptation is unlikely, it would be possible if the evolution was regularized in some way.
The point is that if an RIC system (similar to IC, but see the paper for the definition) is found to be evolvable, then we have direct evidence of mutational machinery at play. Therefore, by identifying evolvable RIC systems, we can use RIC to detect higher-order levels of evolution occurring. (A toy sketch of the segment-assembly idea follows.)
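A toy sketch of that segment-assembly idea, in Python; the segment pools are hypothetical placeholders, and real V(D)J recombination also introduces junctional diversity, which is omitted here:

    import random

    V_SEGMENTS = ["V1", "V2", "V3"]  # variable-region segments (RSS-marked)
    D_SEGMENTS = ["D1", "D2"]        # diversity-region segments
    J_SEGMENTS = ["J1", "J2", "J3"]  # joining-region segments

    def recombine() -> str:
        # Assemble one gene by drawing a segment from each marked pool:
        # the choice is random, but only over pre-marked functional pieces,
        # so every outcome is a well-formed V-D-J assembly.
        return "-".join(random.choice(pool)
                        for pool in (V_SEGMENTS, D_SEGMENTS, J_SEGMENTS))

    print(recombine())  # e.g. "V2-D1-J3"

The point of the analogy: the accessible outcomes are regularized in advance, so a random process explores a tiny, pre-structured space rather than raw sequence space.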
johnnyb
January 20, 2012 at 09:22 AM PDT
I'm missing where the actual calculation of a specific case is shown; it seems just glossed over, unless I'm missing something. I would like to see it applied to a case where a mistake in protein synthesis results in a faulty polymerase.
Starbuck
January 20, 2012 at 09:20 AM PDT
BA, which interpretation of quantum mechanics do you hold to be most accurate? Perhaps the Ensemble interpretation? Or perhaps de Broglie-Bohm? Whatever it is, why choose that one over another?
Peter Griffin
January 20, 2012 at 09:19 AM PDT
The following describes how quantum entanglement is related to functional information:
Quantum Entanglement and Information Excerpt: A pair of quantum systems in an entangled state can be used as a quantum information channel to perform computational and cryptographic tasks that are impossible for classical systems. http://plato.stanford.edu/entries/qt-entangle/
Anton Zeilinger, a leading researcher in Quantum mechanics, relates how quantum entanglement is related to quantum teleportation in this following video;
Quantum Entanglement and Teleportation – Anton Zeilinger – video http://www.metacafe.com/watch/5705317/
A bit more detail on how teleportation is actually achieved, by extension of quantum entanglement principles, is here:
Quantum Teleportation Excerpt: To perform the teleportation, Alice and Bob must have a classical communication channel and must also share quantum entanglement — in the protocol we employ*, each possesses one half of a two-particle entangled state. http://www.cco.caltech.edu/~qoptics/teleport.html
And quantum teleportation has now shown that atoms, which are supposed to be the basis from which ALL functional information 'emerges' in the atheistic neo-Darwinian view of life, are now shown to be, in fact, reducible to the transcendent functional quantum information that the atoms were supposed to be the basis of in the first place!
Ions have been teleported successfully for the first time by two independent research groups Excerpt: In fact, copying isn’t quite the right word for it. In order to reproduce the quantum state of one atom in a second atom, the original has to be destroyed. This is unavoidable – it is enforced by the laws of quantum mechanics, which stipulate that you can’t ‘clone’ a quantum state. In principle, however, the ‘copy’ can be indistinguishable from the original (that was destroyed),,, http://www.rsc.org/chemistryworld/Issues/2004/October/beammeup.asp Atom takes a quantum leap – 2009 Excerpt: Ytterbium ions have been ‘teleported’ over a distance of a metre.,,, “What you’re moving is information, not the actual atoms,” says Chris Monroe, from the Joint Quantum Institute at the University of Maryland in College Park and an author of the paper. But as two particles of the same type differ only in their quantum states, the transfer of quantum information is equivalent to moving the first particle to the location of the second. http://www.freerepublic.com/focus/news/2171769/posts
Thus the burning question, that is usually completely ignored by the neo-Darwinists that I’ve asked in the past, is, “How can quantum information/entanglement possibly ‘emerge’ from any material basis of atoms in DNA, or any other atoms, when entire atoms are now shown to reduce to transcendent quantum information in the first place in these teleportation experiments?” i.e. It is simply COMPLETELY IMPOSSIBLE for the ’cause’ of transcendent functional quantum information, such as we find on a massive scale in DNA and proteins, to reside within, or ever ‘emerge’ from, any material basis of particles!!! Despite the virtual wall of silence I’ve seen from neo-Darwinists thus far, this is not a trivial matter in the least as far as developments in science have gone!!
Does Quantum Biology Support A Quantum Soul? – Stuart Hameroff - video (notes in description) http://vimeo.com/29895068
Here is a clear example of ‘quantum computation’ in the cell:
Quantum Dots Spotlight DNA-Repair Proteins in Motion - March 2010 Excerpt: "How this system works is an important unanswered question in this field," he said. "It has to be able to identify very small mistakes in a 3-dimensional morass of gene strands. It's akin to spotting potholes on every street all over the country and getting them fixed before the next rush hour." Dr. Bennett Van Houten - of note: A bacterium has about 40 team members on its pothole crew. That allows its entire genome to be scanned for errors in 20 minutes, the typical doubling time.,, These smart machines can apparently also interact with other damage control teams if they cannot fix the problem on the spot. http://www.sciencedaily.com/releases/2010/03/100311123522.htm
Of note: DNA repair machines ‘Fixing every pothole in America before the next rush hour’ is analogous to the traveling salesman problem. The traveling salesman problem is an NP-hard (read: very hard) problem in computer science; the problem involves finding the shortest possible route between cities, visiting each city only once. ‘Traveling salesman problems’ are notorious for keeping supercomputers busy for days.
NP-hard problem http://en.wikipedia.org/wiki/NP-hard
Since it is obvious that there is not a material CPU (central processing unit) in the DNA, or cell, busily computing answers to this monster logistic problem, in a purely ‘material’ fashion, by crunching bits, then it is readily apparent that this monster ‘traveling salesman problem’, for DNA repair, is somehow being computed by ‘non-local’ quantum computation within the cell and/or within DNA; verses and music:
John 1:1-3 In the beginning was the Word, and the Word was with God, and the Word was God. He was with God in the beginning. Through him all things were made; without him nothing was made that has been made. 1 Corinthians 2:14 The natural person does not accept the things of the Spirit of God, for they are folly to him, and he is not able to understand them because they are spiritually discerned. Brooke Fraser – Lord of Lords(Legendado Português) - http://www.youtube.com/watch?v=rkF3iVjOZ1I
bornagain77
January 20, 2012 at 09:18 AM PDT