
More On T-urf13 – A Response To Arthur Hunt And Others


A week ago I blogged about Arthur Hunt's failure to refute Michael Behe's concepts of irreducible complexity and the edge of evolution. My article quickly ignited a heated debate that homed in on a host of different issues. Within just 48 hours of publication, more than 70 responses had been posted (as of now, there are more than 170!). Among those who commented was none other than Arthur Hunt himself, who raised a few criticisms of his own. This is my response to those criticisms.

I have distilled the objections down to three headings. These are:

  1. The challenge I (and others) pose to those of a neo-Darwinian persuasion to provide a detailed and testable account of the origin of biochemical systems is an exercise in the pot calling the kettle black. After all, have ID proponents articulated a testable theory which explicates how a biochemical system might have arisen by virtue of intelligent causality?
  2. Arthur Hunt claims that the processes of random recombination are demonstrably sufficient to account for the T-urf13 gene, and that there is evidence to support that this is actually what happened.
  3. Arthur Hunt provides a link to a blog where he critiques the Axe (2004) paper which I cited regarding the prohibitive rarity of functional protein folds with respect to the vast sea of combinatorial sequence space.

Let’s address each of these in turn.

1. Is The Pot Calling The Kettle Black?

This is an objection I hear rather frequently: while it may be true that those of a neo-Darwinian persuasion have failed to provide a detailed, step-by-step account of any complex biochemical system (and thus to demonstrate the feasibility of their model), ID proponents have likewise failed to provide detailed accounts of how the design came to be implemented in living systems. This objection, however, misses a subtle point: a rigorous and testable scientific theory of design detection is quite different from a rigorous and testable scientific theory of design implementation. With regard to the latter, I freely concede that, though one or two interesting ideas have been advanced (see, for instance, this pro-ID paper by Michael Sherman, not to be confused with Michael Shermer, in the journal Cell Cycle), we currently lack any historical narrative which outlines, in testable fashion, how, say, the bacterial flagellum came into existence by virtue of intelligent causality. What we do have, however, is a rigorous and testable scientific theory of how design might be detected. The ability to detect the products of intelligent design is not contingent upon the ability to discern how that design came to be implemented. One may justifiably infer that, let's say, a complex computer system is best explained by intelligent causation without knowing anything about the identity of the designer or how the system came to be assembled.

Moreover, it is still unclear how mind is able to interact with matter, even in the context of human minds (I lean in the direction of substance dualism with respect to my concept of brain and mind). We make design inferences every day, and it is not clear at all that one requires independent knowledge of the designer or the agent’s modus operandi in order to justify that inference.

Intelligent design is consistent with some concepts of mechanism with regard to the history of life (such as certain variants of front-loading), but it is by no means committed to such scenarios. De novo creation of independent taxa is one possibility, and one towards which I am personally sympathetic.

2. Recombination – A Sufficient Causal Explanation?

Arthur Hunt asks:

[A]re you suggesting that the mechanisms known to be involved in homologous and/or non-homologous recombination (the processes that pretty clearly gave rise to the T-urf13 gene) do not obey the rules of chemistry? This is the only way I can make sense of your argument.

Stated another way, you seem to be claiming that T-urf13 must be beyond the reach of “Darwinian” processes, that in this case must involve recombination. Since T-urf13 obviously exists, and since we know when, where, and how (at least in general terms) it arose, then your argument seems to boil down to an assertion that recombinational mechanisms are either insufficient (obviously wrong) or guided in some (unstated) way. If you are trying to say something else, a bit of clarification would be appreciated.

Again, what is being discussed is the causal sufficiency of random (that is, unguided) processes in accounting for the de novo gene (in this case, T-urf13): did ample probabilistic resources exist to justify attributing such an event to chance? It seems fairly clear to me that they did not. Let me reiterate: a demonstration of molecular homology or common ancestral derivation is not, in and of itself, a causal explanation. It is not that the processes of homologous and non-homologous recombination do not obey the laws of chemistry. Rather, we are positing that agency is an indispensable factor in accounting for novel functional information, just as a combination of agency and chemistry is indispensable in producing this blog post. Moreover, the transmembrane system described by Hunt is substantially simpler in composition than almost all of the systems raised by Behe (see, for instance, my detailed discussion of the complexity of the flagellar apparatus here).

The Uncommon Descent writer and commenter “PaV” made note of one or two things which I had neglected in my original piece. Here are his remarks (found at comment #101 in my previous post); I spell out the arithmetic behind his point (4) in a short sketch immediately after the quoted material:

(1) The method of inheritance of mitochondria is not the same as that of nuclear DNA—the benchmark of neo-Darwinism.

(2) The idea of three CCCs is hypothetical, and nothing more. The "third" CCC that Hunt proposes (a binding site between a toxin and the gated ion-channel) can just as easily, and more plausibly, be explained by the toxin 'evolving' a binding site for the ion-channel.

(3) The kind of "recombination" that takes place in (plant) mitochondria is not your normal Mendelian recombination. Hunt alludes to this when he says at [40]: "Jonathan, are you suggesting that the mechanisms known to be involved in homologous and/or non-homologous recombination (the processes that pretty clearly gave rise to the T-urf13 gene) do not obey the rules of chemistry?"

Notice his use of "rules of chemistry" and not, e.g., the "rules of biology". This is because in plant mitochondria, 'sub-circles' of mitochondrial DNA can accumulate through 'intra-molecular' recombination. There apparently is some kind of machinery that takes parts of the original 'circular' mitochondrial genome and fashions them into other circles. This machinery (notice that this term presupposes some kind of inter-purposiveness) obeys not genetic, but chemical, rules, with the result that a huge diversity of "recombinations" can be cobbled together. As someone pointed out in Hunt's blog back when T-urf13 was being discussed, there are great similarities between the diversity brought about in antibody production and that of plant mitochondrial recombination. This, quite obviously, falls outside of normal "Darwinian" mechanisms. It seems a little bit disingenuous that Art is now correctly referring to this quite different type of recombination as following chemical rules, yet insists that the CCCs he finds here dispute Behe's claim that "Darwinian" mechanisms can't produce much more than 2 CCCs of complexity.

(4) Let's just be aware that Behe uses White's number of 1 in 10^20 in EoE, a number that represents not theoretical figures of probability, but actual in vivo probabilities; i.e., this is what is found in the lab. As to 'theoretical figures', the number should be 1 in 10^16, and, hence, 3 CCCs would theoretically represent an improbability of 1 in 10^48, under the UPB used by most scientists. I add this simply for the sake of clarity.

(5) In a recent paper, a "de novo" gene was being touted. Guess what? It turns out that a portion of a "non-coding" gene and its flanking element was involved in the manufacturing of this "de novo" gene. This is exactly what we find in T-urf13. Hence, when I used the term "machinery" in (3) above, indeed, this seems to be a maneuver that living cells have at their disposal, thus warranting a search for this new mechanism and not the false claim of truly "de novo" genes. As in the case with T-urf13, the "new" gene is nothing more than the demolishing of another gene: i.e., the critical portion of the "de novo" gene represented no more than a portion of another gene. (The transcribed portion of T-urf13 that provides this 'amazing' gated ion-channel is only a very small part of the "de novo" gene.) Again, this is consistent with Behe's latest article.

(6) There are two "nuclear restorers" that can restore the plant to 'male fertility' from the sterile condition found in the Texas maize from which T-urf13 is derived. Interestingly, and provocatively, when the "nuclear restorers" work their magic, guess what? T-urf13 is no longer found: evidence, again, that a "mechanism" and "machinery" are at play.

Getting back to the original post here, Jonathan quite correctly demonstrates that Darwinian mechanisms are being assumed to be at work in the manufacture of the URF13 protein, and yet, from all indications, whatever is happening to the maize has very little to do with true Darwinian mechanisms. I would hope Arthur Hunt might acknowledge this.

Let's just finish here by pointing out again that T-urf13 involves a kind of degradation of maize. In the case of the Texas maize (hence the "T"), T-urf13 was located by researchers because it was there that the toxin that decimated the corn grown in Texas in the late 1960s attached itself. So the "manufacturing" of this "de novo" gene proved to make the maize less fit. This is in keeping with Behe's latest findings.
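To make the exponent arithmetic in PaV's point (4) explicit: if each CCC is treated as an independent event of probability p, then the probability of obtaining k of them is p^k. Here is a minimal sketch of that calculation (my own illustration, using the figures as PaV states them; it is not drawn from Behe's or Hunt's own writing, and it assumes independence, which is of course part of what is in dispute):

    # Treating each CCC as an independent event of probability p,
    # the joint probability of k such events is p**k.

    def joint_probability(p_single, k):
        """Probability of k independent events, each with probability p_single."""
        return p_single ** k

    print(joint_probability(1e-16, 3))  # PaV's theoretical figure: (10^-16)^3 = 10^-48
    # The same arithmetic applied to Behe's in vivo figure of 1 in 10^20:
    print(joint_probability(1e-20, 3))  # (10^-20)^3 = 10^-60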

3. Is Douglas Axe Wrong?

Hunt continues,

Axe’s views about the nature of the functional space of protein sequences are plainly wrong. I try to illustrate this here (I also explain why the ID party line with respect to the 2004 JMB paper is incorrect.) Given this, the objections that rely on the alleged fantastic isolation of functionality in sequence space are pretty irrelevant. (This is, of course, one of the points of the T-urf13 example. It shows quite clearly that functionality, even irreducible complexity, is not the stupendous impossibility that is claimed by Behe and others.)

Hunt thus directed me to a blog post (which I have come across before) that he wrote in 2007 critiquing Douglas Axe's 2004 JMB paper. A thorough critique of that essay is likely to be fairly lengthy and time-consuming. I am also not a specialist in protein bioscience, and it has been a while since I read Axe's 2004 JMB paper in any great detail. But I do think that Hunt raises a number of points in his blog which need to be appropriately and adequately addressed. I thus intend to write about this in more detail, and at greater length, when time permits. For the time being, let me point out that Axe's conclusions regarding the rarity of functional protein catalytic domains in sequence space are not an isolated result. In addition to the Keefe and Szostak (2001) paper which I mentioned in my previous post, a similar result was obtained by Taylor et al. in their 2001 PNAS paper. That paper examined the AroQ-type chorismate mutase and arrived at a similarly low prevalence: 1 in 10^24 for the 93-amino-acid enzyme which, when adjusted to a chain of the same length as the 150-amino-acid section of beta-lactamase analysed by Axe, yields a figure of 1 in 10^53. Yet another paper, by Sauer and Reidhaar-Olson (1990), reported on "the high level of degeneracy in the information that specifies a particular protein fold," putting the figure at 1 in 10^63. I could easily continue in the same vein for some time. I also strongly encourage Arthur Hunt and others to read Douglas Axe's excellent review article in BIO-Complexity, which covers this topic in more detail. Axe has also contributed an excellent chapter to the recently published The Nature of Nature: Examining the Role of Naturalism in Science (a book which I highly recommend), which explains the relevant concepts at a level accessible to non-experts.
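To give a rough feel for how prevalence estimates of this kind are usually framed (this is a simplified illustration of my own, not the specific method used in any of the papers cited above): if, on average, only m of the 20 amino acids are tolerated at each of n positions in a domain, then the fraction of sequence space compatible with the fold is on the order of (m/20)^n, which shrinks exponentially as n grows.

    import math

    # Simplified illustration only: assume an average of m of the 20 amino acids
    # is tolerated at each of n positions; the compatible fraction is then (m/20)**n.

    def rough_prevalence(avg_tolerated, length):
        """Approximate fraction of sequence space compatible with a given fold."""
        return (avg_tolerated / 20.0) ** length

    p = rough_prevalence(avg_tolerated=4, length=92)    # hypothetical numbers
    print(f"roughly 1 in 10^{abs(math.log10(p)):.0f}")  # roughly 1 in 10^64

The published estimates differ in how they arrive at the per-position tolerance and in what they count as functional, but the exponential form is why modest per-position differences translate into tens of orders of magnitude overall.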

[UPDATE: Further discussion of Axe’s 2004 JMB result, in the context of Hunt’s essay, is now available here.]

Comments
Jonathan M, I think my post previous to this may have been bumped off the front page too quickly for you to see it. I'm still carrying on the conversation in the previous thread, but based on the responses thus far I don't think that I'm likely to get an answer to my questions there. If you have the time to provide a mathematically rigorous definition of CSI and show how to calculate it for the four scenarios I describe in post 17 of this thread, I believe it would be worth a new topic on UD's front page. It would certainly help me to test some of the claims frequently made by ID proponents here. Thank you for your time.
MathGrrl
March 8, 2011 at 02:14 PM PDT

Yes, Mathgrrl, I was referring to Dembski’s explanatory filter which makes use of the concept of CSI. This concept is rigorously and mathematically defined in Dembski’s books, “The Design Inference” and “No Free Lunch”. Have you read those books?
I have read No Free Lunch and skimmed The Design Inference. I was unable to find a mathematically rigorous definition of CSI that I could implement in software. I would appreciate your help in reaching that point, if you are willing. On the other thread I posted four scenarios and asked how to measure CSI for each of them. They are:

1) A biological system with the specification of "Produces X amount of protein Y." A simple gene duplication, even without subsequent modification of the duplicate, can increase production from less than X to greater than X.

2) Tom Schneider's ev uses the specification of "A nucleotide that binds to exactly N sites of width M within the genome." Using only simplified forms of known, observed evolutionary mechanisms, ev routinely evolves genomes that meet the specification. The length of the genome required to meet this specification can be quite long, depending on the values of M and N. (ev is particularly interesting because it is based directly on Schneider's PhD work with real biological organisms.)

3) Tom Ray's Tierra routinely results in digital organisms with a number of specifications. One I find interesting is "Acts as a parasite on other digital organisms in the simulation."

4) Steiner Problem solutions use the specification "Computes a close approximation to the shortest connected path between a set of points." The length of the genomes required to meet this specification depends on the number of points, but can certainly be hundreds of bits.

How would one measure the CSI in each of these genomes?
MathGrrl
March 7, 2011 at 10:18 AM PDT

Arthur Hunt:
The fact is that well-understood biochemical mechanisms are involved in the origination of T-urf13.
That does not make it a blind watchmaker process. Arthur Hunt:
1. The mode of inheritance of mitochondrial genes has absolutely nothing to do with the issue here – that T-urf13 arose via well-understood processes that do not involve ID and that defy the claims that such proteins and systems require probabilistic resources far in excess of what was available.
What methodology was used to determine that the process is a blind watchmaker process?
2. As I explained in the essay on T-urf13, Behe’s own “derivation” equates a CCC with a small-molecule binding site (that would be the site to which the antimalarial binds). To argue that a polyketide binding site is not analogous to an antimalarial binding site makes no sense to me. (The fact of the matter is that a polyketide is much more like an oligopeptide than is chloroquine.)
Chapter 8 of "The Edge of Evolution" takes care of your claims rather nicely, Art. That you can ignore what Behe wrote says more about you than about his claims. So what we have is Arthur Hunt refusing to understand Dr. Behe's and ID's claims and blathering on as if it means something. Nice work, Art.
Joseph
March 7, 2011 at 05:37 AM PDT

Yes, Mathgrrl, I was referring to Dembski's explanatory filter which makes use of the concept of CSI. This concept is rigorously and mathematically defined in Dembski's books, "The Design Inference" and "No Free Lunch". Have you read those books?
Jonathan M
March 7, 2011 at 01:39 AM PDT

Arthur Hunt -- Please note the update to this post. Thanks.
Jonathan M
March 7, 2011 at 01:31 AM PDT

myname, I'm not sure I understand the terms, but "copy and paste" is probably closer to the situation here than "cut and paste". One small correction - in my comment above, "back and white" should read "black and white". Sorry for any confusion.
Arthur Hunt
March 6, 2011 at 03:56 PM PDT

Jonathan M, One sentence from your original post has significant bearing on the conversation taking place in the previous thread about Dr. Hunt's work:
What we do have, however, is a rigorous and testable scientific theory of how design might be detected.
Are you referring to Dembski's CSI or something else? I have been attempting, without success, to get an ID proponent in the other thread to provide a mathematically rigorous definition of CSI so that I can run a simulation to test the claim that it is an unambiguous indicator of intelligent agency. If you are referring to CSI, could I ask you to please provide references to such a definition? If you are referring to a different "rigorous and testable" mechanism, could you please provide more detail? Thank you.
MathGrrl
March 6, 2011 at 03:44 PM PDT

OT video; loved this 98 minute dialogue between theist mathematician Dr. John Lennox and atheist chemist, Dr. Peter Atkins. Very good discussion between two clear speakers. Dr. John Lennox and Dr. Peter Atkins. http://www.vimeo.com/15799514
bornagain77
March 6, 2011 at 12:31 PM PDT

Art Hunt, maybe you could clarify something. Is the T-urf13 due to a copy&paste or a cut&paste mechanism?
myname
March 6, 2011 at 08:37 AM PDT

@ Jonathan, Thanks a lot for getting me access to the article by Shapiro and Sternberg. I really appreciate it.
noam_ghish
March 5, 2011 at 03:40 AM PDT

Hi Jonathan, Thanks for your continued interest in my essays. Getting to your latest (in order): You said:
I have distilled the objections down to three headings. These are: 1. The challenge I (and others) pose to those of a neo-Darwinian persuasion to provide a detailed and testable account of the origin of biochemical systems is an exercise in the pot calling the kettle black. After all, have ID proponents articulated a testable theory which explicates how a biochemical system might have arisen by virtue of intelligent causality?
Um, I really don’t care about pots, kettles, and the like. The fact is that well-understood biochemical mechanisms are involved in the origination of T-urf13. Recombination has been studied for decades, and we know enough about the enzymes to know that the attendant chemical mechanisms are all that are needed to promote the genomic shuffling that gave rise to T-urf13. If you don’t believe me, then you are welcome to call upon the research literature, and/or study the enzymes in the lab, and point out evidence that argues otherwise.
Again, what is being discussed is the causal sufficiency of random (that is, unguided) processes in accounting for the de novo gene (in this case, T-urf13): did ample probabilistic resources exist to justify attributing such an event to chance? It seems fairly clear to me that they did not. Let me reiterate: a demonstration of molecular homology or common ancestral derivation is not, in and of itself, a causal explanation.
Well, when we have a good understanding of the enzymology that underlies the homology and origination of T-urf13, I would say that we do indeed have a causal explanation. Are you claiming that there is evidence that suggests that recombination did not give rise to the gene encoding T-urf13? If so, some pointers would be appreciated.
The Uncommon Descent writer and commenter “PaV” made note of one or two things which I had neglected in my original piece.
PaV’s contributions fall short of any sort of refutation of my points. Without quoting extensively (refer to the original post for details) but instead following PaV’s numbering:

1. The mode of inheritance of mitochondrial genes has absolutely nothing to do with the issue here – that T-urf13 arose via well-understood processes that do not involve ID and that defy the claims that such proteins and systems require probabilistic resources far in excess of what was available.

2. As I explained in the essay on T-urf13, Behe’s own “derivation” equates a CCC with a small-molecule binding site (that would be the site to which the antimalarial binds). To argue that a polyketide binding site is not analogous to an antimalarial binding site makes no sense to me. (The fact of the matter is that a polyketide is much more like an oligopeptide than is chloroquine.)

3. The concept of “Darwinian” recombination is an odd one indeed. Recombination is catalyzed by enzymes. Period. Whether it happens in the nucleus, organelle, or cytoplasm (or anywhere else) is quite irrelevant, and the location of the genome (and events) in question has no bearing on the fact that the “probabilistic resources” provided by the process fall far short of those demanded by ID advocates. In spite of this, T-urf13 exists.

4. I invite PaV (and Jonathan M) to read the entire paragraph and section of the review from which Behe pulls the 10^20 number. I’ve discussed this matter elsewhere, and will just state the obvious (well, it’s obvious if one has access to the complete article, and not just to Behe’s book) – this number is at best only distantly-related to the events needed to generate a “CCC”.

5. The stuff about “de novo” genes is not relevant to T-urf13. I ask that you go back and read all of my essays on the subject.

6. Nuclear restorers are near and dear to my heart, as they are usually RNA-binding proteins that have evolved to acquire new RNA sequence preferences. These new functionalities promote the degradation of T-urf13-encoding mRNAs. (To state things in a way that will perk up the ears of readers here, they have acquired new functional information. And without design. Go figure.) Why PaV thinks that these in some way deflate the reality of the T-urf13 example escapes me.

Finally, PaV’s comments about the physiological consequences of T-urf13 expression tell me that he (and, I presume, most ID proponents) does not think much about the creative possibilities attendant with apoptotic processes, and, on top of that, that females are evolutionary dead ends. Neither of these opinions makes any sort of biological sense, and I don’t think that this sort of nonsense has much of an impact on my arguments. (T-urf13, in a nutshell, is a gene that converts a hermaphrodite into a female. This sort of evolution is seen in the wild and is one of the many fascinating reproductive strategies that plants have evolved. The toxin sensitivity is an afterthought and serves to illustrate that, in biology, it is almost impossible to find a trait that is exclusively, back-or-white, beneficial or detrimental.)
Arthur Hunt provides a link to a blog where he critiques the Axe (2004) paper which I cited regarding the prohibitive rarity of functional protein folds with respect to the vast sea of combinatorial sequence space.
I ask that you take your time and understand all of the points I raise in my review of Axe’s work. The criticisms you raise here are addressed in the essay, such that they render your points irrelevant. I will wait for your promised post on my essay to elaborate (or not, as I am hopeful that you will understand why my assertion is true).
Arthur Hunt
March 4, 2011 at 06:20 PM PDT

Jonathan: I believe this is the article I was referring to in the previous post. Although I can't seem to get to the actual nucleotide readouts of the putative "de novo" gene(s), I'm rather sure this is what I looked at.
PaV
March 4, 2011 at 02:38 PM PDT

@ Jonathan M You say that the necessary "probabilistic resources" don't exist for the T-urf13 to evolve. Could it not be that Behe's calculation is mistaken?
myname
March 4, 2011 at 09:43 AM PDT

Here you go! http://shapiro.bsd.uchicago.edu/Shapiro&Sternberg.2005.BiolRevs.pdf
Jonathan M
March 4, 2011 at 04:10 AM PDT

noam_ghish, I'm sorry, I don't have access to that article either, perhaps someone who does have access on UD can help???
bornagain77
March 4, 2011 at 03:42 AM PDT

Bornagain77, I have an account with a university and have access to a great number of science articles. But there are still some articles that I cannot get. Is there any way you can get access to this article and email it to me? It's by Sternberg on why repetitive DNA is important. My email is bobsmith99 at catholic.org http://onlinelibrary.wiley.com/doi/10.1017/S1464793104006657/abstract?systemMessage=Due+to+scheduled+maintenance%2C+access+to+Wiley+Online+Library+will+be+disrupted+on+Saturday%2C+5th+Mar+between+10%3A00-12%3A00+GMT
noam_ghish
March 3, 2011 at 11:39 PM PDT

Jonathan, thanks for the follow-up article.
Upright BiPed
March 3, 2011 at 02:15 PM PDT

fn; The Failure Of Local Realism - Materialism - Alain Aspect - video http://www.metacafe.com/w/4744145 The falsification for local realism (materialism) was recently greatly strengthened: Physicists close two loopholes while violating local realism - November 2010 Excerpt: The latest test in quantum mechanics provides even stronger support than before for the view that nature violates local realism and is thus in contradiction with a classical worldview. http://www.physorg.com/news/2010-11-physicists-loopholes-violating-local-realism.html This following study adds to Alain Aspect's work in Quantum Mechanics and solidly refutes the 'hidden variable' argument that has been used by materialists to try to get around the Theistic implications of the instantaneous 'spooky action at a distance' found in quantum mechanics. Quantum Measurements: Common Sense Is Not Enough, Physicists Show - July 2009 Excerpt: scientists have now proven comprehensively in an experiment for the first time that the experimentally observed phenomena cannot be described by non-contextual models with hidden variables. http://www.sciencedaily.com/releases/2009/07/090722142824.htm (of note: hidden variables were postulated to remove the need for 'spooky' forces, as Einstein termed them — forces that act instantaneously at great distances, thereby breaking the most cherished rule of relativity theory, that nothing can travel faster than the speed of light.)
bornagain77
March 3, 2011 at 01:57 PM PDT

Another very interesting post Jonathan, thank you. I look forward to reading your defense of Axe's work in regards to the rarity of functional proteins in sequence space. Of a somewhat related note, the other day I realized that finding quantum entanglement in biology,,, Quantum Information/Entanglement In DNA & Protein Folding - short video http://www.metacafe.com/watch/5936605/ ,,, has bypassed the 'probability argument' of finding functional sequences in sequence space since quantum entanglement is exactly what was used by Alain Aspect, and company, to falsify local realism/ reductive materialism in the first place.,,, The reason why this presents a insurmountable problem for neo-Darwinism, that bypasses the 'probability argument' is, how in blue blazes, can quantum entanglement, in biology, possibly be explained by the materialistic framework of neo-Darwinism when Alain Aspect and company falsified the validity of local realism (reductive materialism) in the first place with quantum entanglement? It is simply ludicrous to appeal to the materialistic framework, which undergirds the entirety of the neo-Darwinian paradigm, that has been falsified by the very same quantum entanglement effect that one is seeking to explain in biology! Probability arguments, which have been the staple of the arguments against neo-Darwinism, simply do not apply in trying to explain quantum entanglement in biology, since it is shown to be impossible for quantum entanglement to be explained by the materialistic framework in the first place! i.e. It does not follow for neo-Darwinism to even begin to presume itself to be sufficient to be the rational explanation for the effect in question! Moreover, the implications of finding entanglement in molecular biology are non-trivial, to put it mildly, For quantum entanglement is actually the blatant violation of time and space, a blatant violation of time and space which Einstein himself called 'spooky'. i.e. With quantum entanglement being found in biology it is now clearly demonstrated that there is a integral 'higher dimensional' component to our 'physical' makeup which is not limited to time or space!!! This is certainly a non-trivial finding for the theistic perspective! Furthermore finding 'higher dimensional' quantum entanglement/information in biology goes a long way towards explaining the enigmatic 'quarter power scaling',,, The predominance of quarter-power (4-D) scaling in biology Excerpt: Many fundamental characteristics of organisms scale with body size as power laws of the form: Y = Yo M^b, where Y is some characteristic such as metabolic rate, stride length or life span, Yo is a normalization constant, M is body mass and b is the allometric scaling exponent. A longstanding puzzle in biology is why the exponent b is usually some simple multiple of 1/4 (4-Dimensional scaling) rather than a multiple of 1/3, as would be expected from Euclidean (3-Dimensional) scaling. http://www.nceas.ucsb.edu/~drewa/pubs/savage_v_2004_f18_257.pdf “Although living things occupy a three-dimensional space, their internal physiology and anatomy operate as if they were four-dimensional. Quarter-power scaling laws are perhaps as universal and as uniquely biological as the biochemical pathways of metabolism, the structure and function of the genetic code and the process of natural selection.,,, The conclusion here is inescapable, that the driving force for these invariant scaling laws cannot have been natural selection." 
Jerry Fodor and Massimo Piatelli-Palmarini, What Darwin Got Wrong (London: Profile Books, 2010), p. 78-79 https://uncommondescent.com/evolution/16037/#comment-369806 4-Dimensional Quarter Power Scaling In Biology - video http://www.metacafe.com/w/5964041/ Though Jerry Fodor and Massimo Piatelli-Palmarini rightly find it inexplicable for 'random' Natural Selection to be the rational explanation for the scaling of the physiology, and anatomy, of living things to four-dimensional parameters, they do not seem to fully realize the implications this 'four dimensional scaling' of living things presents. This 4-D scaling is something we should rightly expect from a Intelligent Design perspective. This is because Intelligent Design holds that 'higher dimensional transcendent information' is more foundational to life, and even to the universe itself, than either matter or energy are. This higher dimensional 'expectation' for life, from a Intelligent Design perspective, is directly opposed to the expectation of the Darwinian framework, which holds that information, and indeed even the essence of life itself, is merely an 'emergent' property of the 3-D material realm. supplemental note,,,, It is also very interesting to point out that the 'light at the end of the tunnel', reported in many Near Death Experiences(NDEs), is also corroborated by Special Relativity when considering the optical effects for traveling at the speed of light. Please compare the similarity of the optical effect, noted at the 3:22 minute mark of the following video, when the 3-Dimensional world 'folds and collapses' into a tunnel shape around the direction of travel as an observer moves towards the 'higher dimension' of the speed of light, with the 'light at the end of the tunnel' reported in very many Near Death Experiences: Traveling At The Speed Of Light - Optical Effects - video http://www.metacafe.com/watch/5733303/
bornagain77
March 3, 2011 at 01:56 PM PDT

