Uncommon Descent Serving The Intelligent Design Community

Message Theory – A testable ID alternative to Darwinism – Part 1


Message Theory is a testable scientific explanation of life’s major patterns.

That claim should intrigue you. If I heard such a claim, I would nearly leap across the room to demand more details; otherwise I couldn’t sleep that night. That is because I highly value testability, just as all scientists do (in physics, chemistry, geology, medicine, engineering, and so on) – and just as evolutionists do in all their court cases.

Message Theory should even intrigue evolutionists, because it offers what they repeatedly demanded from their opponents – a testable, scientific alternative to evolution. Yes, that is exactly what they demanded. In reality, the evolutionists’ response has been exceedingly superficial, falling into two categories: (1) Silence; or (2) They misrepresent Message Theory. (If you are aware of exceptions, let me know.) Therefore, my posts here will not much address the evolutionists’ response to Message Theory, since a serious response doesn’t much exist.

The creationist/ID response has been more varied, and I focus on that here. Many see Message Theory as exciting and promising. For example, Origins Magazine reviewed it, saying, “I can give no greater accolade than urging that this book should now be the starting point for all of our discussions.” Phillip E. Johnson calls it “Bold and fascinating … a comprehensive theory.” Carl Wieland calls it “Masterpiece … incredible … of immense value.” Michael Behe and many others have given glowing reviews (see this link). To which I say, Thanks! That’s a good start.

However, some creationists/ID-ists are hesitant to investigate Message Theory, and the central reason is its claim of testability – its claim to make numerous coherent, risky predictions about what we should see, and should not see. Unfortunately, many creationists/ID-ists do not value testability, and some aggressively dislike testability. Before they know any details about Message Theory, we encounter their leading objection – testability.

For example, some creationists say, “Aren’t you claiming to test God?” To which I answer: No. Message Theory is about life’s data – many observations that must be explained – and Message Theory explains those observations in a testable (falsifiable, vulnerable, empirically risky) manner. It meets all the criteria for a scientific theory. A theory is tested, not God. The thought process is no different than with, say, the Piltdown fossils, which needed an explanation. These fossils were a hoax created by an intelligent designer – a testable explanation that no scientist disputes. We need not test the intelligent designer (indeed, the designer of the Piltdown Hoax remains unidentified); rather, we test the theory. In science we test explanations (i.e., theories), not God.

Also, deep down, many creationists want the ‘certainty of faith,’ and they are not yet comfortable with the inherent riskiness of science – they haven’t learned to balance the two types of thought: risk and certainty.

The classic creationist organizations (ICR, AIG, CRS) often do not value testability, (and sometimes they explicitly oppose testability). Instead, they use a different criterion of science; a different value system. They claim “science must be repeatable, and since origins are not repeatable, creation and evolution are equally unscientific.” They are deeply mistaken. For example, we frequently execute murderers (which is not a flimsy thing to do) based solely on scientific evidence, even though the murder is not repeatable.

Instead, repeatability is how we identify naturalistic laws (as opposed to the work of intelligent beings); therefore the creationists’ demand for ‘repeatability’ is implicitly a demand that science must be purely naturalistic and cannot include an intelligent designer. They are shooting themselves in the foot!

Thankfully the ID organizations don’t take that approach. They take a more sophisticated approach, yet they tend to undervalue testability nonetheless, (sometimes through redefining it into obscurity).

In my many discussions with my fellow creationists/ID-ists, the foremost obstacle to Message Theory is their devaluing or misunderstanding of testability. So let me pause to underscore this for my readers: If you do not value testability highly, then leave now, or you will only waste your time, and mine. Let me put it stronger: Anyone (creationist, ID-ist, or evolutionist for that matter) who cheapens testability is a danger to science, and moreover, they miss many opportunities to advance creation/ID as superior science.

Let me put my claim stronger still: Message Theory is testable science, and macro-evolutionary theory (as practiced by its modern proponents) is not. I employ testability – the same tool evolutionists use in all their court cases – to turn the tables on evolutionists.

After handling some comments, I will next discuss Message Theory proper.

– Walter ReMine

The Biotic Message – the book

Comments
Patrick: Took a look. Last I checked, each aa is coded for with 3 nucleotide bases, and in turn each base has 4 states, so in effect we are looking at up to six bits or so per aa. [Actually, there will be various slight mods on AA constraints and observations, but that is good enough for rough work.] I see no material fault with your work. But that is not the real problem. The real issue is that we are facing a situation where people developed a pre-info age theory, which threw out an unexpected bridge to info theory, in 1948 - 53. And, once we did the studies on DNA and proteins, we see that we are dealing with very sophisticated info systems. And, info systems we know come from intelligence. But that "cannot" be permitted under the dominant paradigm of evo mat. So -- with all due respect to those to whom this does not apply -- no end of delaying, foot dragging and even temper tantrum tactics on the obvious conclusion of the matter. GEM of TKI

kairosfocus
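For readers who want to check the bits-per-amino-acid figure in the comment above, here is a rough sketch of the arithmetic in Python (the variable names are illustrative only, not from the comment):

```python
import math

# Each codon is 3 nucleotide bases; each base has 4 possible states (A, C, G, T/U).
states_per_base = 4
bases_per_codon = 3

bits_per_base = math.log2(states_per_base)        # 2 bits per base
bits_per_codon = bases_per_codon * bits_per_base  # up to ~6 bits per amino acid

print(bits_per_base, bits_per_codon)  # 2.0 6.0
```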
February 23, 2009 at 01:06 PM PDT
kf, I'd have to agree that 1000 is more practical, since even relatively "simple" biological systems exceed that and it prevents any "gotcha moments" where Darwinists may attempt to trumpet aloud any special exception that might be found. BTW, did you find any errors in my own explanation in #103 and #137? I believe I'm correct but it'd be nice to be double-checked by an expert so I don't go around repeating errors.

Patrick
February 23, 2009 at 12:39 PM PDT
Rob: Pardon, but I must insist: the context is that you are discussing a concept that is broader than any one person, and which, as I note, has been used in several phrasings. Over the years I have been using FSCI as a DESCRIPTIVE term for what Orgel, Yockey, Wickens etc were getting at. I emphasised "functionally specific/ specified complex info" and recently find I like the alternate "function-specifying complex info" -- esp in contexts of algorithms or structures that are precisely organised or shaped to function, like 747s and arrow heads. In ALL cases a subset of CSI is intended, just that the function in question is what gives the de facto, observable specification. (Ef it ain't wuk, it ain't wot we does want . . . [Cf the philosophical concept that, e.g. "food" is functional stuff.]) GEM of TKI

kairosfocus
February 23, 2009 at 12:37 PM PDT
Patrick: 1,000 bits is a better "practical" limit, as the search window set by our observed cosmos is 10^150 states. So, since 1 k bits has a config space that is 10 times the square of 10^150, a cosmic scope search could not sample more than about 1 in 10^151 of the space. That comfortably captures odds worse than the 1 in 10^150 of Dembski's UPB. GEM of TKI

kairosfocus
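As a rough numeric sketch of the figures in this comment (a 1,000-bit configuration space versus a search of about 10^150 states), assuming nothing beyond the numbers quoted above:

```python
from math import log10

config_space_log10 = 1000 * log10(2)   # log10(2^1000), roughly 301, i.e. ~10^301 configs
search_states_log10 = 150              # the ~10^150-state search window cited above

# Fraction of the space such a search could sample, expressed as a power of ten:
sampled_fraction_log10 = search_states_log10 - config_space_log10
print(round(config_space_log10, 1))      # ~301.0
print(round(sampled_fraction_log10, 1))  # ~-151.0, i.e. about 1 part in 10^151
```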
February 23, 2009 at 12:27 PM PDT
kairosfocus:
Kindly look at the glossary on FSCI.
Kindly look at my comment where I say, "According to your [jerry's] understanding". I'm not talking about the FAQ's characterization, I'm talking about jerry's.

R0b
February 23, 2009 at 12:25 PM PDT
Rob: Kindly look at the glossary on FSCI. You will see that it is usually used in several expanded and sufficiently comparable senses that "specifying" and "specified" make no effective difference. FSCI is a subset of CSI in any case, as the issue is that the specification is tied to observed functionality. And indeed, both concepts arise from the same context: observing the functionality of the nanomachines in living cells. CSI went more general; FSCI sticks to the OBSERVED functionality focus for specifying the complex organisation in question. So, DNA exhibits FSCI, and so does a string of 143 ASCII characters giving a message in English. So does an arrowhead or a Jumbo Jet -- the design spec relative to their functions. GEM of TKI

kairosfocus
February 23, 2009 at 12:21 PM PDT
Sorry for being so late to reply. This thread has run its course but I thought I'd answer these 2 questions quickly. But I feel that kairosfocus did a better job of covering the topic in-depth, anyway. So if you guys don't understand the concept at this point I don't know what else to add. Khan #104, In the other link I gave I noted that the "ice fish carr[ies] a partially deleted copy of alpha1 and lack the beta globin gene altogether. These deletions are inextricably linked to its lower blood viscosity…" IOW, a destructive mutation that gives a benefit in this limited environment. The number of repeats apparently required for this "functionality" is 4 repeats or 96 informational bits. AFAIK additional repeats are unnecessary duplications. As I mentioned, tying function to biological information is the hard part, so I may be wrong on this and this example might require more than 100. No big deal either way. Not to mention, I suppose it could be argued that a degenerative, and repetitive, change like this should not even count as FCSI, although I'd leave that determination to the experts. I personally believe there will be found special exceptions where 500+ informational bits can be exceeded by non-foresighted processes, and ID theory will need to account for them, but that's just my opinion. GSV #124, Machine code is binary, thus one bit. The biological code is a quaternary code, thus 2 bits. I was just explaining the overall concept and using an easy example. I didn't bother pulling up any sequence data, but in short, informational bits = (length of functional sequence) X 2. For the ice fish example I was assuming that the 3 amino acids were encoded via ~12 nucleotides (if I'm wrong in this assumption please correct me). So 12 X 2 = 24 informational bits. Then 24 X 4 repeats = 96 informational bits. As I said, easy. Although as I've mentioned, this is just an estimate of the true biological information content, since we're still trying to figure out exactly how everything is encoded. So I'm sure there are plenty of caveats my quick example does not take into account. For example, how does one account for frameshifting and an encoding scheme where the same information is reused for multiple different applications? Also, the biological examples I gave on this thread were based upon generalizations. So the accuracy could be questioned, but I highly doubt the numbers are going to change dramatically. And in general I prefer to deal in straight informational bits instead of probabilities (1 in 10^150 corresponds to 500 informational bits) when it comes to measuring complexity in biology, since these are information-based replicators, not pebbles on a beach where probabilities would be more appropriate. Here are 2 other examples where I ran the numbers: here and here.

Patrick
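A minimal sketch of the informational-bits estimate used in this comment (2 bits per nucleotide, applied to the ice fish numbers quoted above); the function name is illustrative, not from the comment:

```python
def informational_bits(sequence_length_nt, bits_per_nucleotide=2):
    """Rough estimate: functional sequence length in nucleotides times 2 bits."""
    return sequence_length_nt * bits_per_nucleotide

per_repeat = informational_bits(12)  # ~12 nucleotides per repeat -> 24 bits
total = per_repeat * 4               # 4 repeats -> 96 bits
print(per_repeat, total)             # 24 96
```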
February 23, 2009 at 11:24 AM PDT
jerry:
“Is it possible for information to specify something but not be specified by something else?” I am not aware of any.
The reason I asked is this: According to your understanding, CSI is, by definition, specified by something, and FSCI, by definition, specifies something, although it is not necessarily specified by anything. Under those definitions, FSCI is not defined such that it's a subset of CSI.

R0b
February 23, 2009 at 10:31 AM PDT
All: At this stage, I suspect that we are seeing a Panda's Thumbster or Talk Origins [or ilk] attempt to mischaracterise FSCI in order to then -- without proper warrant -- claim that it is a confused and useless concept; brushing it aside rhetorically. But in fact, it is at root a simple descriptive phrase, one that is pretty much self defining if anything: 1 --> Some things require/use information to function, and that info is specified by the functionality it achieves. 2 --> That information is sometimes fairly complex, and when that happens, it requires a fair quantity of storage; which can be measured in bits at work, i.e. bits that are functional. 3 --> To illustrate, think here of a CD which is empty -- 700 or so MBytes of bits that in that context are set up to provide storage -- i.e. formatting and precise organisation. Then load some files, of reasonable size. 4 --> That will be complex and functionally specific in some externally recognisable context, requiring hardware, algorithms, programs, programming/storage languages and associated onward target applications to read and put it to work, i.e. info storage systems require FURTHER FSCI to work. 5 --> Look at the DNA-ribosome-enzyme etc info storage and processing system in cells: DNA stores, ribosomes etc read and translate, creating amino acid chains that then fold, agglomerate and are transported to use-sites, where they may for instance self-assemble into a functioning flagellum. (Three weeks back, Feb 3, there was a video hosted here at UD on that onward self-assembly, which is in part based on the precise structures and capacities of the assembled proteins.) 6 --> Then, observe back to the OOL researchers in the 1970's to 80's, to see that this has been recognised for a generation at least as applying to life in the cell, leading them to naturally form the concepts -- NB definitions try to give precise borders to concepts, i.e. concepts are based on abstracting key commonalities of examples and are logically/ epistemologically prior to precising definitions -- CSI and FSCI:
Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity.6 [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.] [TBO summarise in TMLO ch 8, 1984:] Yockey7 and Wickens5 develop the same distinction, that "order" is a statistical concept referring to regularity such as could characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, "organization" refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts. Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity. In short, the redundant order of crystals cannot give rise to specified complexity of the kind or magnitude found in biological organization; attempts to relate the two have little future. [I add: save in obfuscatory rhetoric]
7 --> So, by 1984, the CONCEPTS for FSCI and CSI were identified and exemplified with "this is that" and "this other is NOT that" cases, by leading -- and non-ID -- OOL researchers. 8 --> So, the initial generation of ID thinkers and researchers, starting with Thaxton et al, built on EXISTING concepts that were known to be relevant to the context of OOL and the functioning of the cell based on information rich organisation. 9 --> In particular, we may observe that FSCI and CSI are actually fairly familiar to those who have had to design, develop, debug or troubleshoot information-based technological systems. [Mystery solved on why such a high proportion of ID thinkers and workers come from fields that use CSI and FSCI based systems, making us familiar with the concepts and their most credible causes. Let's just note that biologists as such are usually not familiar at design and development level with such complex info-based systems.] 10 --> And so, the inference that, where we see such systems, design is a known cause and therefore a serious candidate for best explanation, is a very obvious one. But, how does one make such a distinction on a reasonable and objectively warranted basis? 11 --> Already in TMLO, there is a hint, for Thaxton et al address not only classical thermodynamics but statistical thermodynamics in trying to work out the likelihood of forming proteins and/or DNA on a planetary scale, thus the equilibrium concentration in a hypothetical [and very generous] pre-biotic soup. For, they bring in Brillouin Information. 12 --> This brings up the info school of thermodynamics, and the astonishing parallel between thermodynamic entropy and the H-metric of average info per symbol in info theory. For, as Harry Robertson summarises in his Statistical Thermophysics:
. . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability should be seen as, in part, an index of ignorance] . . . . [deriving] S({pi}) = - C [SUM over i] pi*ln pi [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp - beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .[pp.3 - 6] . . . . S, called the information entropy, . . . correspond[s] to the thermodynamic entropy, with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context [p. 7] . . . . Jayne's [summary rebuttal to a typical objection] is ". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [p. 36.]
13 --> In the 1990's Drs Dembski and Behe enter the picture, the latter focusing on the origins of complex, multi-part organisation, and the former on the associated information. 14 --> In effect, first, it becomes very hard to get to complex multi-part functionality without intentional creation of parts and/or deliberate adaptation of existing parts to interface and work together in a new whole. (And there is as a rule a core of parts that are necessary if a function is to work at all.) 15 --> This is a major -- and (rhetoric notwithstanding) unsurmounted -- hurdle for RV + NS based schemes of thought on origination of body-plan level and micro-functional life systems. 16 --> Dembski's CSI models helped us to quantify the CSI, providing a metric. The universal probability bound puts up a conservative threshold where logically possible organisations become so improbable that they are unlikely to have formed on the scope of our observed cosmos by chance. The explanatory filter -- especially when focused on ASPECTS of the entity under investigation -- uses a reasonable extension of statistical inference to infer to the best explanation across the three long since known causal factors: chance, necessity, intelligence. 17 --> In that general context, FSCI at one level is a simple way to look at the bottom line: if an OBSERVED function uses 500 - 1,000 bits or more of storage capacity, it is beyond reasonable doubt a case of being beyond the credible reach of chance on the gamut of our observed cosmos, so we may confidently infer to design as its best explanation. (That is, once we refuse to censor out the possibility that design can give rise to systems; i.e. we refuse to go with Lewontinian a priori materialism as the NAS has now explicitly imposed for "science" in the US. This silences material facts and factors before they can speak, and subverts science from being an empirically based unfettered exploration of the truth about our world.) 18 --> And now, as pointed out at 133 above, several professional workers are giving a more formal approach to FSCI, and have now published a table of 35 measured values of FSC for proteins and related molecules. ____________ So, we can see for ourselves the true state of the balance on the merits, and it is plainly not in favour of the obfuscators. GEM of TKI

kairosfocus
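For readers following the thermodynamics tangent, here is a small numeric illustration of the information-entropy formula quoted from Robertson in the comment above, S({pi}) = -C * SUM pi ln pi (a rough sketch with C = 1; the example distributions are illustrative only):

```python
from math import log

def information_entropy(probs, C=1.0):
    # S({p_i}) = -C * sum_i p_i ln p_i, skipping zero-probability outcomes
    return -C * sum(p * log(p) for p in probs if p > 0)

print(information_entropy([1.0]))       # 0.0 -- one certain outcome, no missing information
print(information_entropy([0.25] * 4))  # ~1.386 (= ln 4) -- four equally likely outcomes, maximal uncertainty
```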
February 21, 2009 at 12:09 AM PDT
JayM: re 128: FSCI is not a rigorous measurement that can be applied to biological systems, not least because it assumes creation ex nihilo. It does not take into consideration the known evolutionary mechanisms that build on previous success. 1 --> Could you kindly specifically document this claim? 2 --> For instance, kindly explain how the TA and Durston et al papers fail to address biological contexts, and fail to produce valid FSCI values; including how such a presumably gross error escaped the attention of the peer reviewers. 3 --> Also, please point out just how the FSCI concept -- that we have empirically observed information that functions and so is functionally specific, and that is complex as it takes up significant storage [and is not simply compressible or easily discoverable to a random walk search] -- ASSUMES Creation Ex nihilo? (FYI, a view of Creation that God used a big bang at 13.7 BYA and guided OOL and macroevolution thereafter qualifies as creation ex nihilo -- i.e. it is a view that God creates the physical cosmos, and that matter (in whatever form) is not eternal. [Contrast, say, the fairly common circa C1 - 3 AD Gk concept of the Demiurge forming recalcitrant but eternally existing matter to crudely reflect the eternal forms; leading to a messed-up physical world, so that the body becomes the prison of the soul and salvation is the business of acquiring secret knowledge so we can escape being re-imprisoned. (Try Simon Magus and his First Thought, Helen the former slave and lady of the night.)]) 4 --> Now, in general, once coded info of significant complexity is used, MOST configs are non-functional. E.g. words of 1,000 bits length or equivalent will specify a space of 10^301 configs, so that only a tiny fraction can be functional. Evolutionary mechanisms, as proposed, deal with differential success of functional configs. But the first challenge is getting to an island of function in a pre-biotic context. So, how does a measure of the config-space challenge that starts BEFORE evolutionary mechanisms may apply fail to account for such? 5 --> Similarly, post OOL, we are looking at body-plan level transformations for macroevo. These credibly require increments in DNA -- which is a functional, complex, digital data string -- of order 10's to 100's of millions of bits, many dozens of times over. How, then, does pointing out that functional islands in such astonishingly large config spaces will be very hard to find fail to address the capacity of evolutionary mechanisms -- mechanisms [RV + NS etc, so we have differential reproductive success] that focus on incremental improvements WITHIN islands of function? GEM of TKI

kairosfocus
February 20, 2009 at 04:20 PM PDT
bFast: JayM has for several days been putting up remarks on FSCI not being a well defined or measurable concept. He has been answered several times in several contexts [e.g. cf WAC and glossary on CSI, FSCI and measures, also excerpts from peer reviewed papers as linked at 102 above, as well as the one you picked up at 118 or so]. But he still keeps on saying effectively the same thing, dismissing all answers and onward links as though they are meaningless. That includes dismissing a 2007 Durston et al peer reviewed paper that publishes a table of 35 values of FSC in Fits, as well as a wider discussion of the OSC vs RSC vs FSC concept and its application by Trevors and Abel. To give one instance from the table:
Flu PB2, with 608 amino acid residues [aa], has 1,692 sequences with 2,628 bits in the null state, so FSC is 2,416 Fits, and FSC density is 4.0 Fits/aa.
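As a rough consistency check on the quoted numbers (a sketch only; the 20-amino-acid null state and the 2,416 Fits figure are taken from the quote above):

```python
from math import log2

residues = 608                          # amino acid residues in Flu PB2, per the quote
null_state_bits = residues * log2(20)   # ~2627.7 bits if each site could be any of 20 amino acids
fsc_fits = 2416                         # FSC value quoted from the Durston et al. table
fsc_density = fsc_fits / residues       # ~3.97 Fits per amino acid, i.e. roughly 4.0

print(round(null_state_bits), round(fsc_density, 2))  # 2628 3.97
```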
Not to mention, the core FSCI concept is quite simple and obvious [i.e. it is a description of a common fact in today's information age], e.g. we have situations where information functions in systems and can be measured in bits; so that when the bit length gets beyond say 1,000, even if the observed universe were regarded as a search for the FSCI, it could not sample more than 1 in 10^150 of the config space. (That gives teeth to your 1 in 10^150 remarks just above.) Sad to say, JayM's behaviour over the past few days does not come across like a serious dialogue based on addressing of empirically based facts, towards understanding and truth. Let us hope that we see a serious engagement from him over the next day or so, in light of the above and the linked. GEM of TKI

kairosfocus
February 20, 2009 at 03:50 PM PDT
jayM:
First, what is this FCSI of which you speak? Do you have a rigorous mathematical definition for it? Have you shown that the rigorous definition is applicable to biological systems? Does it reflect known evolutionary mechanisms?
Let me wade into this one. FCSI is complex information (having too much information to reasonably have occurred by chance {chance: 1 in 10^150}) that specifies (is a map used to make) something that functions. Now, the definition of FCSI does not ipso facto establish cause. It has been established that intelligence (human) can produce FCSI (technical drawings of machinery, for example). The neo-Darwinists argue that evolution can also. For them to make their case, they must first show that an evolvable situation can naturally occur that requires less than complex information (such as a reproducing molecule set that contains less than 1 in 10^150). Second, they must show a reasonable and statistically supportable path from this simple reproducer to modern complexity in 4 billion years. Can they do that? They certainly have not yet done it -- far from it.

bFast
February 20, 2009 at 02:50 PM PDT
Hi Mr. ReMine. Just wanted to say that I would be interested in learning more about Message Theory. Have a good weekend everyone.

Platonist
February 20, 2009 at 12:47 PM PDT
"Non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. So non-watermarked, noncoding DNA has function and specifies something, just like watermarked DNA. Therefore, it has FSCI. What’s wrong with that logic?" Because it is not logical. The connection is mediated by an intelligent person and does not automatically specify something else as DNA does with a protein. The connection would disappear with out the intelligent intermediary who is the one actually making the connection. You could use the same argument with a rock you found in the woods or at a crime scene that it could be used to build a house or be a murder weapon. Come on, don't you see what you are doing. You must. You appear desperate to find a gotcha and not directing you energies to understanding the issues. If the FSCI is not a valid argument find an alternative and how it arose and not use an example that takes an intelligence to make the connection. If there were an example, after all these thousands of years someone would have noticed it and made a big deal of it. At best the DNA in your example points to itself and does not beget another entity with a function.jerry
February 20, 2009 at 12:35 PM PDT
Attn: Moderators: I see that my posts are still subject to moderation delays. I would like to request that you allow this one through, as a courtesy to the people with whom I'm conversing. I am bowing out of all threads in this forum due to the delays related to moderation. It is difficult enough to keep up with all the threads and respond to everyone who has taken the time to participate in the discussions without the added overhead of delays and posts that simply do not appear. I appreciate the time and effort you have all contributed, even when we don't agree. If you wish to continue the conversation in an unmoderated forum, please suggest one and I'll join you there. JJ

JayM
February 20, 2009 at 11:23 AM PDT
kairosfocus @123
The nature and demonstrated status of FSCI is not an assumption, it is a fact of routine experience and observation; as just pointed out.
I'm afraid that you didn't point anything out; you simply continued to make the same baseless assertions I've been challenging here. FSCI is not a rigorous measurement that can be applied to biological systems, not least because it assumes creation ex nihilo. It does not take into consideration the known evolutionary mechanisms that build on previous success. FSCI, as you described it, boils down to "Gee, it's pretty unlikely that this protein or gene came together all at once, therefore some intelligence must be behind it." Of course it's unlikely to have come together all at once; that's why no biologist suggests that. Nowhere in your prodigious amount of text have you addressed the core issues I raised: 1) What is the rigorous definition of CSI (or whatever other measure you wish to use)? Note that this must allow anyone with the requisite mathematical skills to compute the same value in the same units from the same object. 2) Demonstrate that the measurement is applicable to biological systems. Assumptions of uniform distribution or creation ex nihilo are an indication that the measurement is not applicable. MET mechanisms must be accounted for. 3) Demonstrate that this measurement uniquely identifies intelligence. The argument that "We see CSI in human artifacts and we see CSI in biological systems so biological systems must require intelligent input" is begging the question. Neither you nor any other CSI/FSCI proponent here has addressed any one of these issues, yet you continue to repeat your baseless assertions. That is a significant reason why mainstream scientists do not take ID seriously -- we make it too easy to ignore us. JJ

JayM
February 20, 2009 at 11:15 AM PDT
jerry:
“You realize that non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. Does that mean that all DNA, including non-coding, has FSCI?” You should be able to answer this yourself.
Okay, I will, by rephrasing the question as a statement: Non-watermarked DNA is routinely used for identification in forensics, just as Venter's watermarks are used for identification. So non-watermarked, noncoding DNA has function and specifies something, just like watermarked DNA. Therefore, it has FSCI. What's wrong with that logic?

R0b
February 20, 2009 at 09:44 AM PDT
Allen: Ergo, if “intelligent design” exists, it exists because of the actions of an “intelligent agent”. Bingo!
Allen: The same is the case for natural selection. Natural selection doesn’t do anything. It’s an outcome, not a “process”. To be specific, natural selection is an outcome of three separate, but related processes: 1) variation, 2) inheritance, and 3) reproduction. Given these three processes, the outcome in an environment with limited resources is: 4) unequal, non-random survival and reproduction. This outcome is what evolutionary biologists mean by the term “natural selection”.
How can NS be non-random if it depends on random inputs? Variation is random. What gets inherited is random. And fecundity can only be judged after the fact. Also added to ID would be some type of artificial selection. Now this AS could be part of the built-in programming. This programming allows parts to be kept even though they do not yet provide any advantage. And this allows for construction to occur and keep occurring until the final product is put into play.

Joseph
February 20, 2009 at 04:27 AM PDT
Just so that I am clear- Specified complexity/ CSI as it relates to biology equates to biological function as stated by Wm Dembski in "No Free Lunch":
Biological specification always refers to function. An organism is a functional system comprising many functional subsystems. In virtue of their function, these systems embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the same sense required by the complexity-specification criterion (see sections 1.3 and 2.5). The specification of organisms can be cashed out in any number of ways. Arno Wouters cashes it out globally in terms of the viability of whole organisms. Michael Behe cashes it out in terms of minimal function of biochemical systems.- Wm. Dembski page 148 of NFL
In the paper "The origin of biological information and the higher taxonomic categories", Stephen C. Meyer wrote:
Dembski (2002) has used the term “complex specified information” (CSI) as a synonym for “specified complexity” to help distinguish functional biological information from mere Shannon information--that is, specified complexity from mere complexity. This review will use this term as well.
In order to be a candidate for natural selection a system must have minimal function: the ability to accomplish a task in physically realistic circumstances.- M. Behe page 45 of “Darwin’s Black Box”
He goes on to say:
Irreducibly complex systems are nasty roadblocks for Darwinian evolution; the need for minimal function greatly exacerbates the dilemma. – page 46
IC- A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, non-arbitrarily individuated parts such that each part in the set is indispensable to maintaining the system’s basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. Page 285 NFL
Numerous and Diverse Parts If the irreducible core of an IC system consists of one or only a few parts, there may be no insuperable obstacle to the Darwinian mechanism explaining how that system arose in one fell swoop. But as the number of indispensable well-fitted, mutually interacting, non-arbitrarily individuated parts increases in number & diversity, there is no possibility of the Darwinian mechanism achieving that system in one fell swoop. Page 287
Minimal Complexity and Function Given an IC system with numerous & diverse parts in its core, the Darwinian mechanism must produce it gradually. But if the system needs to operate at a certain minimal level of function before it can be of any use to the organism & if to achieve that level of function it requires a certain minimal level of complexity already possessed by the irreducible core, the Darwinian mechanism has no functional intermediates to exploit. Page 287
Joseph
February 20, 2009 at 04:20 AM PDT
To Patrick "I’ll copy over my English word explanation that should makes things easy to understand." Thanks you for the reply but can you show me the math instead of the words please? I am a computer programmer with a mathematics degree, words are not my strong point, numbers are. -- I was gone for a couple days, and much conversation has taken place, so I will embed my response here for future readers: Machine code is binary, thus one bit. The biological code is a quaternary code, thus 2 bits. I was just explaining the overall concept and using an easy example. I didn't bother pulling up any sequence data, but in short informational bits = (length of functional sequence) X 2 For the ice fish example I was assuming that the 3 amino acids were encoded via ~12 nucleotides. So 12 X 2 = 24 informational bits. Then 24 X 4 repeats = 96 informational bits. As I said, easy. Also, the biological example I gave on this thread were based upon generalizations. So the accuracy could be questioned but I highly doubt the numbers are going to change dramatically. And in general I prefer to deal in straight informational bits instead of probabilities (1 in 10^150 corresponds to 500 informational bits). Here's 2 other examples where I ran the numbers: here and here - PatrickGSV
February 20, 2009 at 01:07 AM PDT
JayM: Also re yr: If we assume, for the sake of argument, that FCSI is rigorously defined and that it has been demonstrated to exist in biological systems, you can no longer say "we assume it is not a natural process." There is no valid, rational, scientific reason for making that assumption. From the perspective of methodological naturalism, the operating philosophy of modern science, confirming the existence of FCSI in a biological system would more strongly suggest that it is not a unique product of intelligence than that some intelligence created biological systems. 1 --> The nature and demonstrated status of FSCI is not an assumption, it is a fact of routine experience and observation; as just pointed out. Rhetoric flying determinedly in the teeth of the evidence does not turn a fact into an assumption. 2 --> We have many observed cases of origin of FSCI -- empirical data is the foundation of inductive, scientific reasoning -- and, in all of these, it is the product of observed intelligence in action. (If you dispute this, simply produce a credibly observed counterexample, e.g. 143+ ASCII characters in contextually responsive English, credibly produced by chance + necessity, e.g. zener diode noise digitised, evened off and spewed across a disk's surface. [If it's good enough to run lotteries, it's good enough for me as a random source.]) 3 --> This is multiplied by the needle-in-a-haystack search challenge that hypothetical chance + necessity mechanisms face, before they can get to the beaches of islands of function, once we are in excess of about 1,000 functional bits; e.g. natural selection "responds" to differences in degree of FUNCTION. [Onlookers, note how this keeps on getting ducked -- a well known tactic of the rhetor -- pass by as if it does not exist, what does not fit your case. Including Behe's observed edge of evolution based on more reproductive events per year -- across the better part of a century -- than are credibly true of all mammalia for its entire existence on earth.] 4 --> We also see here the injection of Lewontinian-NAS a priori materialism in the name of "modern" science. In case you have not been watching in recent weeks, let us again cite the former, noting that as at 2005 - 2008, the NAS has plainly made it "official" dogma:
We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door. [1997, NY review of books]
5 --> In case you didn't get the memo that this is now official dogma, courtesy the US NAS acting as friendly local magisterium, let me cite:
In science, explanations must be based on naturally occurring phenomena. Natural causes are, in principle, reproducible and therefore can be checked independently by others. If explanations are based on purported forces that are outside of nature, scientists have no way of either confirming or disproving those explanations. Any scientific explanation has to be testable — there must be possible observational consequences that could support the idea but also ones that could refute it. Unless a proposed explanation is framed in a way that some observational evidence could potentially count against it, that explanation cannot be subjected to scientific testing. [Science, Evolution and Creationism, 2008, p. 10]
6 --> Translating: (i) explanations must either be in terms of chance + mechanical deterministic forces or else reduce on origin to the spontaneous action of such, and (ii) the only possible contrast to "natural" is "supernatural" -- strictly verboten! 7 --> But a simple read of Newton's 1688 General Scholium to his famous Principia [the point of departure work for true modern science] will show that he GROUNDS modern science on a theistic worldview, including the vision that Pantokrator has set an ordering law for the realm of nature that he intelligently created and sustains, which we may study through natural philosophy; giving rise to reliable knowledge of the world, i.e. science. [Lewontin is simply grossly wrong when he went on to assert that a world in which miracles are possible is one in which nature would be chaotic.] 8 --> In fact, Newton goes on to infer that the project of what is called natural theology [cf. e.g. Rom 1:19 - 20 etc] is either integral to or a reasonable extension from Natural Philosophy:
This most beautiful system of the sun, planets, and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being. And if the fixed stars are the centres of other like systems, these, being formed by the like wise counsel, must be all subject to the dominion of One; especially since the light of the fixed stars is of the same nature with the light of the sun, and from every system light passes into all the other systems: and lest the systems of the fixed stars should, by their gravity, fall on each other mutually, he hath placed those systems at immense distances one from another [i.e. grounds the uniformity principle of science on God's universal dominion; hence, LAWS of nature]. . . . We know [God] only by his most wise and excellent contrivances of things, and final cause [i.e from his designs]: we admire him for his perfections; but we reverence and adore him on account of his dominion: for we adore him as his servants; and a god without dominion, providence, and final causes, is nothing else but Fate and Nature. Blind metaphysical necessity, which is certainly the same always and every where, could produce no variety of things. [i.e necessity does not produce contingency] All that diversity of natural things which we find suited to different times and places could arise from nothing but the ideas and will of a Being necessarily existing . . . And thus much concerning God; to discourse of whom from the appearances of things, does certainly belong to Natural Philosophy.
8 --> Of course, some would object (in the teeth of massive history) that this founding father of modern science is not representative of "modern" science. To which, the proper rejoinder is that the materialism presented to us in the name of science is not modern science either -- most of its chief views, attitudes, agendas, conclusions and difficulties were more than anticipated in Lucretius' philosophical poem on the nature of things something like 2,000 years ago. As Wiki summarises:
The poem opens with a magnificent invocation to Venus, whom he addresses as an allegorical representation of the reproductive power, after which the business of the piece commences by an enunciation of the great proposition on the nature and being of the gods, which leads to a grand invective against the gigantic monster religion, and a thrilling picture of the horrors which attends its tyrannous sway. Then follows a lengthened elucidation of the axiom that nothing can be produced from nothing, and that nothing can be reduced to nothing (Nil fieri ex nihilo, in nihilum nil posse reverti); which is succeeded by a definition of the Ultimate Atoms, infinite in number, which, together with Void Space (Inane), infinite in extent, constitute the universe . . . . The problem that arises from an entirely deterministic and materialistic account of reality is free will. Lucretius maintains that the free will is possible through the random tendency for atoms to swerve (Latin: clinamen).
9 --> It is a simple point to observe that randomness is no more rational than raw, Sir Francis Crick style reductionistic determinism. Reppert's summary on the problem of trying to get to a credible mind from chance + necessity alone is apt:
. . . let us suppose that brain state A, which is token identical to the thought that all men are mortal, and brain state B, which is token identical to the thought that Socrates is a man, together cause the belief that Socrates is mortal. It isn’t enough for rational inference that these events be those beliefs, it is also necessary that the causal transaction be in virtue of the content of those thoughts . . . [But] if naturalism is true, then the propositional content is irrelevant to the causal transaction that produces the conclusion, and [so] we do not have a case of rational inference. In rational inference, as Lewis puts it, one thought causes another thought not by being, but by being seen to be, the ground for it. But causal transactions in the brain occur in virtue of the brain’s being in a particular type of state that is relevant to physical causal transactions.
10 --> But, it is notorious that we rely -- for excellent reason -- on our ability to at least some of the time think rationally [and Plantinga has shown how NS, by rewarding behaviour not belief, has a hard time supporting accuracy of especially abstract beliefs, as many different contradictory beliefs are behaviourally equivalent without being true]. So, materialism is both factually challenged and self-referentially incoherent. [A challenge that is glossed over in ever so much of the confident presentation of materialism as the chief foundation stone of that epitome of rationality, science.] 11 --> And that fact-challenged status comes right out in JM's plainly question-begging assertion that if FSCI is found in a biosystem, then that somehow in effect "proves" that it had to come about by in effect evolutionary materialistic processes. 12 --> For, we already see that there is a massive probabilistic challenge to get to functional complex info of the magnitude found in DNA by chance + necessity, starting with any empirically credible pre-biotic environment. Then, we also routinely observe such FSCI being produced by intelligent agents. So on inference to best explanation, design is a far better candidate for explaining DNA than C + N in some imaginary pre-biotic soup. But, the NAS acting as magisterium will have none of this unfettered inference to best explanation nonsense! And therein lieth the REAL issue. GEM of TKI

kairosfocus
February 20, 2009 at 12:26 AM PDT
JayM: Re 118: First, what is this FCSI of which you speak? Do you have a rigorous mathematical definition for it? Have you shown that the rigorous definition is applicable to biological systems? Does it reflect known evolutionary mechanisms? Perhaps you are not monitoring the Thesaurus thread anymore, but from 106 - 108, 112, and especially 172 - 173 on Thursday Feb 19, your point was answered. This, complete with links to and citations from the peer-reviewed literature. In that literature, there is a published table of 35 values of functional sequence complexity in fits, i.e. functional bits [one of the relevant quantitative metrics]. And, that is after the matter is already addressed in the weak argument correctives and the associated glossary, with a link to the relevant paper. Besides, FSCI -- "functionally specified complex information" -- is not a mysterious quantity requiring special definition; instead, the phrase is a simple and accurate DESCRIPTION of something that is very familiar, as near as your hard disk drive's files, which have in them X bits or bytes [8-bit clumps] of functional data. Indeed, when you post a typical comment here, you are using in excess of 1,000 7-bit ASCII characters, at 7 bits per character. It is no fault of modern digital technology that it so happens that DNA has in it 4-state digitally coded data strings that specify the protein sequences that will fold, agglomerate and function in life systems. In short, you seem to be trying to make a rhetorical mountain out of an easily flattened out mole-hill. And, when it comes to the wider concept, CSI, I find the Dembski mathematical models coherent and relevant, though challenging to apply to DNA or the like. You will see in the WACs and the glossary a brief discussion of his metric for CSI, including a calculated value for a hand of 13 cards; a simple case raised in this blog by Mark Frank. If Dr Dembski's metric were incoherent or irrelevant to the real world, it would not have been possible to make such a calculation. Similarly, much heavy weather has been made by Darwinist advocates of the "dubious" practice of using flat distributions of probabilities across what in statistical thermodynamics are called microstates. Of course, that is simply the commonplace default Laplacian indifference criterion at work, in a context where we often have no real reason to move away from such a default. (E.g. in DNA chaining, we have no reason to see the side chains as strongly blocking successions between A, G, C, T monomers; and in proteins the constraints are not decisive on chaining -- the real constraints are post-chaining, i.e. on folding especially.) But in fact, from e.g. Bradley's June 2003 Cytochrome-C value for ICSI [and note whose work he is building on; this has been in App 1 of my always linked for quite some time now . . . ] and from the simple fact of the H-metric, i.e. a standard info theory eqn for ENTROPY, i.e. avg info per symbol, H = - SUM pi * log2 pi, we see that we can address non-equiprobable cases. [BTW, this is also connected to the thermodynamic version of entropy, which is why the name. Cf discussion in my always linked, with onward links. Robertson's Statistical Thermophysics is a useful read.] Bradley's value (after first dealing with an equiprobable distribution model, and then factoring in the observed non-even distribution of aa's in the protein) is:
ICSI = log2 (4.35 x 10^74) = 248 bits [with] Wo/W1 = 1.85 x 10^137 / 4.26 x 10^62 = 4.35 x 10^74 [this "easily" converts from statistical weights of macrostates to a probability estimate . . . ] [Where he also cites that] Two recent experimental studies on other proteins have found the same incredibly low probabilities for accidental formation of a functional protein that Yockey found 1 in 10^75 (Strait and Dewey, 1996) and 1 in 10^65 (Bowie, Reidhaar-Olson, Lim and Sauer, 1990).
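A quick arithmetic check on the quoted Cytochrome-C figure (the W0 and W1 values are Bradley's, taken from the quote above; the code is only an illustrative verification):

```python
from math import log2

W0 = 1.85e137   # statistical weight of the macrostate (from the quote)
W1 = 4.26e62    # statistical weight of the functional state (from the quote)

ratio = W0 / W1          # ~4.34e74
icsi_bits = log2(ratio)  # ~247.9, i.e. roughly 248 bits

print(f"{ratio:.3g}", round(icsi_bits, 1))
```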
In short, we see here highlighted the key difference between rhetoric and scientific dialogue towards truth. Rhetoric seeks to persuade of a given conclusion; science at its best is seeking the truth about our world, in light of the facts, wherever they may point. GEM of TKI

kairosfocus
February 19, 2009 at 11:28 PM PDT
Allen_MacNeill:
Ergo, ID isn’t even in complete opposition to the concept of evolution by natural selection. ID supporters simply disagree with evolutionary biologists on the source of the variations which provide the “raw material” for the demographic “sorting and preservation” that produces the outcome we refer to as “natural selection”.
Yes! That is correct, and very clearly stated. For many of us natural selection is not the problem, random (non-foresighted) variation is the problem. Allen_MacNeill:
So, what “intelligent agents” are proposed to explain the origin of “intelligently designed” variations...
Certainly one proposed "intelligent agent" is this character known as God. The God that has been proposed would seem to have some of the essential criteria: all knowing, ever-present, timeless. That said, one certainly encounters some fundamental differences between the common view of "God" and the nature of a designer of nature. If designer of nature, then said designer is an experimenter (weren't there about 100 phyla generated during the Cambrian explosion, but only about 20 have survived?) Said designer of nature uses violence and aggressiveness to his own end. Others, of course, have proposed other "designers".
, and via what mechanism(s) do such agents operate?
I actually propose that the designer has influenced individual mutational events to pull off nature. I personally suspect that the designer's playground is the quanta. That said, I do not agree with Ken Miller, who also suggests that God dances in the quanta, in that I believe that God's activity is detectable, that there is good evidence for design, for foresight.

bFast
February 19, 2009 at 07:59 PM PDT
"Intelligence" doesn't do anything. Intelligent agents do things. That is, intelligent agents do things that require "intelligence", such as making choices, specifying means, directing processes toward outcomes (and compensating for deviations produced by objects and processes in the environment. Programs don't write themselves; they are written by "intelligent agents". Ergo, if "intelligent design" exists, it exists because of the actions of an "intelligent agent". The same is the case for natural selection. Natural selection doesn't do anything. It's an outcome, not a "process". To be specific, natural selection is an outcome of three separate, but related processes: 1) variation, 2) inheritance, and 3) reproduction. Given these three processes, the outcome in an environment with limited resources is: 4) unequal, non-random survival and reproduction. This outcome is what evolutionary biologists mean by the term "natural selection". Ergo, natural selection cannot be both a "creative process" by which biological entities and processes come into being and an outcome of such a process. On the contrary, processes 1 through 3 listed above are the means by which biological entities and processes come into being, and #4 is what we perceive as the outcome: change in the characteristics present in a population of organisms over time (i.e. evolution). [1] This means that if "intelligent design" happens, it must happen by means of the actions of an intelligent agent in one or more of the processes listed above. I think that most people who post and comment at this website would agree that it probably operates as part of #1 (variation). This was Asa Gray's belief upon reading Darwin's Origin of Species. Ergo, ID isn't even in complete opposition to the concept of evolution by natural selection. ID supporters simply disagree with evolutionary biologists on the source of the variations which provide the "raw material" for the demographic "sorting and preservation" that produces the outcome we refer to as "natural selection". So, what "intelligent agents" are proposed to explain the origin of "intelligently designed" variations, and via what mechanism(s) do such agents operate? [1] Note that this "change in the characteristics present in a population of organisms over time" may be either gradual or episodic (the fossil record inclines toward the latter conclusion).Allen_MacNeill
February 19, 2009 at 07:36 PM PDT
"You realize that non-watermarked DNA is routinely used for identification in forensics, just as Venter’s watermarks are used for identification. Does that mean that all DNA, including non-coding, has FSCI?" You should be able to answer this yourself. The answer is more than likely no. There could be lots of examples such as a retro virus. Much of the so called junk DNA is now thought to specify a function even if they do not know what that function is. This is based on the fact that most of the genome is transcribed. But it is likely that some or a lot of it may be just what it was called, junk, and may not specify anything. As research findings accumulate, there may be a different conclusion. So for DNA, some is definitely FSCI, some other is likely and some is probably not and the proportions will probably change over time as biologists figure out more about genomes.jerry
February 19, 2009 at 06:21 PM PDT
jerry @117
But since there is no known example of any natural process that specifies or leads to anything with FCSI,
First, what is this FCSI of which you speak? Do you have a rigorous mathematical definition for it? Have you shown that the rigorous definition is applicable to biological systems? Does it reflect known evolutionary mechanisms? I thought not.
we assume it is not a natural process. It is always a possibility but the only know specifiers of FCSI is intelligence.
You can't even show that. Further, you are again assuming your conclusion. If we assume, for the sake of argument, that FCSI is rigorously defined and that it has been demonstrated to exist in biological systems, you can no longer say "we assume it is not a natural process." There is no valid, rational, scientific reason for making that assumption. From the perspective of methodological naturalism, the operating philosophy of modern science, confirming the existence of FCSI in a biological system would more strongly suggest that it is not a unique product of intelligence than that some intelligence created biological systems. Proof by repeated assertion is unconvincing. JJ

JayM
February 19, 2009 at 06:09 PM PDT
"Is it possible for information to specify something but not be specified by something else?" I am not aware of any. That is the issue under debate. In some cases we do not know what is doing the specifying but we with reason assume an intelligence. In the case of language, it is a person who is doing the speaking or writing but may be unknown as in a cave drawing. In the case of computer code it is a person doing the specifying. For DNA, we do not know what specified it But since there is no known example of any natural process that specifies or leads to anything with FCSI, we assume it is not a natural process. It is always a possibility but the only know specifiers of FCSI is intelligence.jerry
February 19, 2009 at 05:35 PM PDT
Mark Frank (50):
Presumably this is the lizard you are talking about. It did evolve some interesting new features in about 30 generations. But it was hardly saltation. A cecal valve is not an “entirely new digestive system” and there is no reason to believe it appeared fully formed in one generation. It is an example of a relatively significant change in the phenotype in just a few generations.
Dear Mark: Please think before answering. What does this last sentence mean: "It is an example of a relatively significant change in the phenotype in ... just a few generations."? This is an observation, devoid of thought. Where did the information come from for all these changes? Did all this "new" information come in just 36 generations (or was it 30, or 20, or 10, or 5 -- since no one was looking)? There is no way on earth that all this "new" information could come about so quickly. Therefore, it is safe to assume the information was already present. This moves us, then, into the realm of "evo-devo", and major genetic networks being turned on and off (in PZ Myers' take on all this, he points out that behavior had changed, and that skull size had changed to allow larger bite size, etc. That is, coordinated changes). Well, "evo-devo" and "gradualism" cannot coexist. And Darwinism, if it represents ANYTHING AT ALL, represents "gradualism". So, we now have two problems represented by this lizard: the death of Darwinism, and the formulation of "new" information, since Darwinism = Modern Synthesis is now dead. Notice I've thought about the implications of what we know about this "new" phenotype and how it happened. You have not.

PaV
February 19, 2009 at 05:05 PM PDT
kairosfocus:
Re: Is it possible for information to specify something but not be specified by something else? Read here, on lucky noise. What is in principle possible...
You're saying that it's logically possible but not probable? I think we're talking past each other. I'm curious to see jerry's answer.

R0b
February 19, 2009 at 03:24 PM PDT
kairosfocus:
There is a history there on the terminology. It starts with noted origin of Life researcher, Orgel, in 1973:
Living organisms are distinguished by their specified complexity.
I would love to see someone make a case that Orgel meant the same thing by "complexity" that Dembski does.
As to the use of DNA sequences in a context of recognising their uniqueness — but not necessarily having identified function — that is simply high tech fingerprinting.
So do Venter's watermarks have function?

R0b
February 19, 2009 at 03:19 PM PDT