Home » Biology, Design inference, Genetics, ID Foundations, Intelligent Design » ID Foundations, 15(a): A Testable ID Hypothesis — Front-Loading, part A (a guest-post by Genomicus)

ID Foundations, 15(a): A Testable ID Hypothesis — Front-Loading, part A (a guest-post by Genomicus)

(Series on Front-loading continues, here)

As we continue the ID Foundations series, it will be necessary to reflect on a fairly wide range of topics, more than any one person can cover. So, when the opportunity came up to put Front-Loading on the table from a knowledgeable advocate of it, Genomicus, I asked him if he would be so kind as to submit  such a post.

He graciously agreed, and so, please find the below for our initial reflections; with parts B and C (and maybe, more? please, please, sir . . . :lol: ) to follow shortly, DV:

____________________

>> Critics of intelligent design (ID) often argue that ID does not offer any testable biological hypotheses. Indeed, often times ID proponents seem to be content with simply attacking Darwinian theory, while not offering a testable hypothesis of their own. There’s no problem with pointing out flaws in a given theory, of course, but I think it’s time that ID proponents (myself included) begin to seriously develop a robust, testable design hypothesis in biology, and devote much of their energy to doing so. To quote from Intelligent design: The next decade, an Uncommon Descent article:

“As for the next decade, with luck, we are reaching the point where it’s safe to test design hypotheses, in the sense that many might fail and a few succeed. That’s the usual way with any endeavour in science, of course.”

I more than agree with that.

And so, in the spirit of developing a robust ID hypothesis in biology, I’ll be discussing the idea of front-loading, an inherently ID hypothesis.

What is the front-loading hypothesis? As far as I know, Mike Gene first proposed the front-loading hypothesis, and formally presented it in his book, The Design Matrix. On page 147 of The Design Matrix, we find a succinct definition of front-loading:

Front-loading is the investment of a significant amount of information at the initial stage of evolution (the first life forms) whereby this information shapes and constrains subsequent evolution through its dissipation. This is not to say that every aspect of evolution is pre-programmed and determined. It merely means that life was built to evolve with tendencies as a consequence of carefully chosen initial states in combination with the way evolution works.”

In short, this ID hypothesis proposes that the earth was, at some point in its history, seeded with unicellular organisms that had the necessary genomic information to shape future evolution. Necessarily, this genomic information was designed into their genomes. Also note that under this hypothesis, the genetic code was efficient from the start, since it was, again, intelligently designed by some mind or minds. Further, the proof-reading machinery of the cell, the transcription machinery, etc., would have been present with the genetic code at the dawn of life because the first life forms on earth would have been far more complex than the simple proto-life forms envisioned by opponents of ID. To quote from The Design Matrix again, front-loading is using evolution to carry out design objectives.”

To be sure, the front-loading hypothesis is not consistent with the non-teleological view that intelligence was never involved with the origin of the genetic code, or the origin of the molecular machinery in prokaryotes, etc.

This is a tantalizing hypothesis, and if positive evidence was advanced in its favor, then this would be positive evidence that teleology has played a role in the history of life on earth. The question of “what was front-loaded” is an interesting one, and is a good research question. Suffice it to say that multicellular life, vertebrates, plants, and animals were probably front-loaded.

What front-loading is not

Front-loading does not propose that the initial life forms contained a massive amount of genes/genomic information. It does not propose that the first cells carried every single gene that we find throughout life. Nor does it propose that every single aspect of evolution was front-loaded. There is also the common misconception that front-loading entails the first cells carrying around genes that are turned off, and then suddenly they get turned on. This could be a feasible mechanism in isolated cases, but on the whole it suffers from the problem that genes that are turned off are likely to accumulate mutations that simply destroy the original gene sequence. This problem could be countered, I suppose, by overlapping genes (although overlapping genes aren’t nearly as pervasive in prokaryotes as in eukaryotes), and this is one area of the front-loading hypothesis that can be researched. Nevertheless, on the whole, front-loading most likely was not carried out by simply turning genes on and off.

Testable predictions of the front-loading hypothesis

The cool thing about the ID hypothesis of front-loading is that it’s testable in a very real sense, meaning we can actually do some bioinformatics analyses to test its predictions. What are some of the predictions it makes? Let’s consider a couple of them, outlined below.

  1. Firstly, the front-loading hypothesis predicts that important genes in multicellular life forms will share deep homology with genes in prokaryotes. However, one might object that Darwinian evolution also predicts this. However, from a front-loading perspective, we can go a step further and predict that genes that really aren’t that important to multicellular life forms (but are found in them nevertheless) will generally not share as extensive homology with prokaryote genes.
  2. With regards to the next prediction I will discuss, we will go very molecular, so hang on tightly. In eukaryotes, there are certain proteins that are extremely important. For example, tubulin is an important component of cilia; actin plays a major role in the cytoskeleton and is also found in sarcomeres (along with myosin), a major structure in muscle cells; and the list could go on. How could such proteins be front-loaded? Of course, with some of these proteins they could be designed into the initial life forms, but some of them are specific to eukaryotes, and for a reason: they don’t function that well in a prokaryotic context. For these proteins, how would a designer front-load them? Let’s say X is the protein we want to front-load. How do we go about doing this? Well, firstly, we can design a protein, Y, that has a very similar fold to X, the future protein we want to front-load. Thus, a protein with similar properties to X can be designed into the initial life forms. But what is preventing random mutations from basically destroying the sequence identity of Y, over time, such that the original fold/sequence identity of Y is lost? To counter this, Y can also be given a very important function so that its sequence identity will be well conserved.

Thus, we can make this prediction from a front-loading perspective: proteins that are very important to eukaryotes, and specific to them, will share deep homology (either structurally or in sequence similarity) with prokaryotic proteins, and importantly, that these prokaryotic proteins will be more conserved in sequence identity than the average prokaryotic protein.

Darwinian evolution only predicts the first part of that: it doesn’t predict that part that is in bold text. This is a testable prediction made exclusively by the front-loading hypothesis.

  1. The front-loading hypothesis also predicts that the earliest life forms on earth were quite complex, complete with ATP synthases, sophisticated proof-reading machinery, and the like. Figure:
Figure: The front-loading hypothesis predicts that the genetic code in the first life forms was as optimized as it is today. It thus predicts that we will not find any less optimized genetic code at the root of the tree of life. Image from CUNY as linked, per fair use. [WP will not pass a link in a caption, KF]

Thus, we can see that the front-loading hypothesis is indeed testable, and so the claim that ID offers no testable hypotheses is simply not true.

Research Questions

Another neat thing about the front-loading hypothesis is that there are a number of research questions we can ask with regards to the front-loading hypothesis.

I have already mentioned the question of “what was front-loaded,” but we can go deeper than that. Below are some research questions generated by the front-loading hypothesis: research questions we can investigate to further understand biological reality. In another essay, I’ll be exploring these rather

interesting research questions (for example: was the bacterial flagellum front-loaded or designed at the dawn of life? Or: how might the cilium have been front-loaded? And so on).

Conclusion

We’re getting to the point where we can begin developing a rigorous design hypothesis in biology, and where we can make testable predictions about the world of life based on an ID model. In the first stages of formulating this ID hypothesis, we need a lot of imagination and folks who can think outside the box. And so let’s start proposing ID hypotheses that can be tested, and prove that ID does indeed offer testable hypotheses in biology.

About me

Over the years, I have become quite interested in the discussion over biological origins, and I think there is “something solid” behind the idea that teleology has played a role in the history of life on earth. When I’m not doing multiple sequence alignments, I’m thinking about ID and writing articles on the subject, which can be found on my website, The Genome’s Tale.

I am grateful to UD member kairosfocus for providing me with this opportunity to make a guest post on UD. Many thanks to kairosfocus.

Also see The Design Matrix, by Mike Gene.>>

_____________________

The above is of course quite interesting, and presents a particular hypothesis for design of life. Others are of course possible, but if we take front loading in (a) restricted ["island of function"] and (b) general [universal common descent] senses,  we see that one may accept a and b, accept a but not b (or, b but not a!), or reject both a and b. So, it is a useful, flexible, testable hypothesis that underscores how “evolution” as such is not the opposite of design.

Let the unfettered observational evidence decide what is true!

The issue design theory takes  is with a priori, Lewontinian evolutionary materialism dressed up in the holy lab coat and improperly inserted into the definition and methods of science under the label “methodological naturalism,” not with even the universal common descent of life forms from one or a cluster of unicellular ancestral forms. For instance, well-known ID researcher prof Michael Behe, accepts universal common descent.

I would add, that design theory has in it a great many other testable, and indeed well tested hypotheses, such as that:

1] irreducibly complex systems constrained by Mengue’s criteria C1 – C5, will be hard or impossible to come about by chance variation and blind natural selection,

2] complex specified information, especially functionally specific complex information,  as can be expressed in the log-reduced form:

Chi_500 = Ip*S – 500,
bits beyond the Solar System threshold of complexity

. . . is an empirically reliable sign of origin by intelligently directed choice contingency, aka design,

3] The per aspect explanatory filter [as an explicit expansion and detailing of the classic "scientific method"] will reliably allow us to assign causes for observed aspects of phenomena, objects or processes, across chance, law-like mechanical necessity and design:

The per aspect design inference explanatory filter

4] That cost of search compounded by search for a search etc, leads to a “no free informational lunch” consequence (once we are at a reasonable threshold of complexity, tied to the universal or restricted plausibility bounds as described by Abel).

5] That physicodynamically inert prescriptive information joined to implementing machinery etc, and sources of energy, materials and components, imposes a cybernetic cut or chasm not bridgeable by blind chance and necessity.

6] Etc, etc.

So, we are in a position to have a pretty useful onward discussion, thanks (again) to Genomicus. END

(Series on front-loading continues here)

  • Delicious
  • Facebook
  • Reddit
  • StumbleUpon
  • Twitter
  • RSS Feed

129 Responses to ID Foundations, 15(a): A Testable ID Hypothesis — Front-Loading, part A (a guest-post by Genomicus)

  1. ALL: Let me express, again, my appreciation to Genomicus, for being willing to submit a guest post for UD, on the Front-Loading Hypothesis. Hopefully, this will be rapidly followed by parts B and C. KF

  2. The front-loading hypothesis also predicts that the earliest life forms on earth were quite complex, complete with ATP synthases, sophisticated proof-reading machinery, and the like.

    I’m curious, how would one test this, without resorting to a time machine?

  3. Congratulations, Genomicus, on grasping the frontloading nettle :) It has always seemed to me to be the one potentially testable hypothesis for ID.

    As I understand it, your hypothesis could be summarised as:

    What was designed was an organism, ancestral to all life, and from which all live evolved by Darwinian mechanisms, but which had characteristics such that what did in fact evolve was unlikely not to have evolved?

    And you say this can be tested thus:

    Firstly, the front-loading hypothesis predicts that important genes in multicellular life forms will share deep homology with genes in prokaryotes. However, one might object that Darwinian evolution also predicts this. However, from a front-loading perspective, we can go a step further and predict that genes that really aren’t that important to multicellular life forms (but are found in them nevertheless) will generally not share as extensive homology with prokaryote genes.

    Yes, as you say, Darwinian evolution also predicts this. So the devil is in the detail:

    If front-loading is true, “genes that really aren’t that important to multicellular life forms (but are found in them nevertheless) will generally not share as extensive homology with prokaryote genes.”

    Can you unpack this? How do we identify those “genes that really aren’t that important to multicellular life forms”? How unimportant is “not that important?” And “as extensive a homology with prokaryote genes” as what? What is the comparison here?

    With regards to the next prediction I will discuss, we will go very molecular, so hang on tightly. In eukaryotes, there are certain proteins that are extremely important. For example, tubulin is an important component of cilia; actin plays a major role in the cytoskeleton and is also found in sarcomeres (along with myosin), a major structure in muscle cells; and the list could go on. How could such proteins be front-loaded? Of course, with some of these proteins they could be designed into the initial life forms, but some of them are specific to eukaryotes, and for a reason: they don’t function that well in a prokaryotic context. For these proteins, how would a designer front-load them? Let’s say X is the protein we want to front-load. How do we go about doing this? Well, firstly, we can design a protein, Y, that has a very similar fold to X, the future protein we want to front-load. Thus, a protein with similar properties to X can be designed into the initial life forms. But what is preventing random mutations from basically destroying the sequence identity of Y, over time, such that the original fold/sequence identity of Y is lost? To counter this, Y can also be given a very important function so that its sequence identity will be well conserved.

    Are you saying that if frontloading is true, the sequences that code for proteins that are essential for eukaryotes will also be found in prokaryotes, but with slight difference that mean that it codes for a different protein but one important to prokaryotes?

    In what sense would this prediction distinguish front-loading from a Darwinian scenario?

    Your version of “front-loading” seems to be the argument that at some point, an Irreducibly Complex organism initiated Darwinian evolution, and that it was inserted into the terrestrial environment with sufficient complexity that subsequent evolution really had to do what it did in fact do, right?

    Or do you think that side-loading would also have occurred? (i.e. more nudges as things went along, to keep them on track, or initiate new variance)?

  4. Observation:

    Living organisms

    Question(s)

    How did living organisms come to be (on this planet)? (Are living organisms the result of intentional design, purpose-less stochastic processes or perhaps even alien colonization?)

    Prediction:

    If living organisms were the result of intentional design then I would expect to see that living organisms are (and contain subsystems that are) irreducibly complex and/ or contain complex specified information. IOW I would expect to see an intricacy that is more than just a sum of chemical reactions (endothermic or exothermic).

    Further I would expect to see command & control- a hierarchy of command & control would be likely.

    Test:

    Try to deduce the minimal functionality that a living organism requires. Try to determine if that minimal functionality is irreducibly complex and/or contains complex specified information. Also check to see if any subsystems are irreducibly complex and/ or contain complex specified information.

    Potential falsification:

    Observe that living organisms arise from non-living matter via a mixture of commonly-found-in-nature chemicals. Observe that while some systems “appear” to be irreducibly complex it can be demonstrated that they can indeed arise via purely stochastic processes such as culled genetic accidents. Also demonstrate that the apparent command & control can also be explained by endothermic and/or exothermic reactions.

    Confirmation:

    Living organisms are irreducibly complex and contain irreducibly complex subsystems. The information required to build and maintain a single-celled organism is both complex and specified.

    Command & control is observed in single-celled organisms- the bacterial flagellum not only has to be configured correctly, indicating command & control over the assembly process, but it also has to function, indicating command & control over functionality.

    Conclusion (scientific inference)

    Both the universe and living organisms are the result of intention design.

    Any future research can either confirm or refute this premise, which, for the biological side, was summed up in Darwinism, Design and Public Education page 92:

    1. High information content (or specified complexity) and irreducible complexity constitute strong indicators or hallmarks of (past) intelligent design.
    2. Biological systems have a high information content (or specified complexity) and utilize subsystems that manifest irreducible complexity.
    3. Naturalistic mechanisms or undirected causes do not suffice to explain the origin of information (specified complexity) or irreducible complexity.
    4. Therefore, intelligent design constitutes the best explanations for the origin of information and irreducible complexity in biological systems.

    (see also Science asks 3 basic questions)

  5. Joe,
    Given that you originally made all that up in OCTOBER 31, 2007 and it convinced nobody then, what do you think will change by repeating it 5 years later?

    If living organisms were the result of intentional design then I would expect to see that living organisms are (and contain subsystems that are) irreducibly complex and/ or contain complex specified information.

    http://intelligentreasoning.bl.....chive.html

    What is your prediction if living organisms are *not* the product of intentional design? What would you expect to see then? Half-cat half dog perhaps? Can you rewrite it for the opposite case?

  6. Joe,
    You should read the Genome’s tale blog. Some if it is written with you in mind:

    These blogs represent what ID as a whole could be if the bulk of its proponents had the interest in developing ID as a rigorous biological hypothesis, instead of talking how the fossil record disproves Darwinian evolution.

    https://thegenomestale.wordpress.com/2011/12/05/is-intelligent-design-dead/

  7. Peter,

    Do you have a point? Can you produce a testable hypothesis for your position or are you just happy being belligerent?

    YOU should be presenting a testable hypothesis if living organisms are not designed, duh.

  8. 1- ID has to attack darwinism and neo-darwinism for the reasons provided- see Newton’s First Rule

    2- The theory of evolution does not have a rigorous biological hypothesis

  9. Joe,

    Do you have a point?

    My point is that 5 years ago you were already satisfied with the question and the “answer” you provided yourself.

    You’ve already decided that the universe is designed.

    So what’s left? Where else can you go? Why bother with the charade of pretending that evolution has to be falsified before ID can take over when you already know that the universe is designed?

    It seems to me, Joe, that what you are really doing is evangelizing. It’s not sufficient for you to KNOW the truth about life and the universe, you won’t be happy until others also believe in the same way.

    So, preacher man, preach away. Just don’t kid yourself that you are doing science!

  10. IOW I would expect to see an intricacy that is more than just a sum of chemical reactions

    Are there any examples of biological intricacies that are “more than just a sum of chemical reactions”? Or do you mean intricacies that must have originated through more than just chemical reactions?

  11. Peter,

    I have not already decided the universe is designed. Also ID is not anti-evolution.

    Ya see Peter the evidence points to a designed universe. So don’t blame me for that and don’t blame me because your lame position doesn’t have anything.

    Science? You have no idea what that is.

    Dude you have Petered out and obviously have nothing left.

  12. So Peter also refuses to provide a testable hypothjesis for the anti-ID position. Typical.

    Nothing says evos are intellectual cowards more than avoiding the issues.

  13. 13

    1- And when ID kills Darwinism it’ll have to stand on it’s own. What’s the ETA on that? Will you get round to investigating the designer then?

    2-No? Perhaps you could tell me what ID’s rigorous biological hypothesis is then?

    How would you expect life to look if it evolved Joe?

  14. Peter:

    1- And when ID kills Darwinism it’ll have to stand on it’s own. What’s the ETA on that? Will you get round to investigating the designer then?

    It stands on its own just fine, thanks. That you are incapable of understanding the evidence doesn’t mean anything to me.

    2-No?

    Nope and people like you prove it every day.

    And again ID is not anti-evolution. But then again you don’t appear to know much of anything about either.

  15. 15

    Joe:

    I have not already decided the universe is designed.

    Joe:

    Both the universe and living organisms are the result of intention design.

    Just scroll up a little Joe to see what you yourself just wrote!

    So Peter also refuses to provide a testable hypothjesis for the anti-ID position.

    Don’t need to. That’s not how science works.

    I can’t prove there’s no teapot in orbit around Jupiter either!

  16. So Peter also refuses to provide a testable hypothjesis for the anti-ID position.

    Don’t need to. That’s not how science works.

    Thanks for admitting that you are an intellectual coward.

  17. 17

    Joe,

    It stands on its own just fine, thanks.

    Sure it does. That’s why you constantly attack evolution!

    That you are incapable of understanding the evidence doesn’t mean anything to me.

    Present your evidence in the only venue that matters! Hint, that’s not on a blog!

    Nope and people like you prove it every day.

    Did you miss the part where I asked you if you could tell me what ID’s rigorous biological hypothesis is?

    What a surprise! But then again, “things are complex, therefore design” is hardly rigorous so I don’t particularly blame you.

  18. Peter quote-mines:

    I have not already decided the universe is designed.

    Joe:

    Both the universe and living organisms are the result of intention design.

    The EVIDENCE Peter. man you are dense.

  19. Peter:

    Sure it does. That’s why you constantly attack evolution!

    Liar {Just spotted this}

    Present your evidence in the only venue that matters! Hint, that’s not on a blog!

    But your position doesn’t have any evidence in peer-review.

    And no, not until YOU provide a rigorous biological hypothesis for your position, that way I know what you will accept and you cannot back-peddle and flail away.

    So ante up or shut up.

  20. 20

    Joe,

    So Peter also refuses to provide a testable hypothjesis for the anti-ID position.

    There is no “anti-ID” position because ID does not have a position on anything!

    Or rather, there are as many positions in ID as there are ID supporters.

    There are even some ID supporters that believe evolution can in fact do everything claimed for it, but with a little push here and there. People like, ohhh, I don’t know, William Dembski! Have you read the interview just posted here?

    How can there be an anti-ID position when you can’t even say when the design event(s) happened? Was it just once, a long time ago or is it all the time everywhere?

    C’mon Joe. Say something specific!

    Thanks for admitting that you are an intellectual coward.

    This from the guy who when asked to support his claims calls people a liar! {Please avoid personalities. This holds for both of you.}

  21. Peter:

    There is no “anti-ID” position because ID does not have a position on anything!

    {Snip — better to say “false”}

    And what specifics does your position have, Peter?

    Something happened long ago and things kept happening and here we are!

    And I have supported my ID Peter. OTOH all you have is ignorant belligerence.

    Go figure…

    _______

    Joe: Please, let us keep tone in check. KF

  22. Joe,

    And what specifics does your position have, Peter?

    My position is able to talk, with specifics, about the evolution of increased complexity in a molecular machine.

    Your position, having decided that such “machines” are designed does not need to do more then that – there is in fact nothing more to be done under an ID worldview as “it was designed” is sufficient.

    Reintroducing a single historical mutation from each paralogue lineage into the resurrected ancestral proteins is sufficient to recapitulate their asymmetric degeneration and trigger the requirement for the more elaborate three-component ring. Our experiments show that increased complexity in an essential molecular machine evolved because of simple, high-probability evolutionary processes, without the apparent evolution of novel functions. They point to a plausible mechanism for the evolution of complexity in other multi-paralogue protein complexes.

    http://www.nature.com/nature/j.....10724.html

    Something happened long ago and things kept happening and here we are!

    No, Joe, in fact that’s your position.

    “Let there be light!”

    And I have supported my ID Peter. OTOH all you have is ignorant belligerence.

    I’ve been quite restrained Joe given that you’ve called me a liar etc multiple times already. Perhaps it’s past your bedtime and you are a bit grouchy!

  23. Here’s a design inference for you.

    Genomicus = Mike Gene.

    Why? Well, like so many IDers I just “have a gut feeling”. I can’t provide any evidence (yet) but I just “know” it to be true! {Snide misrepresentation, cf here on and the UD FAQs in the resources tab.}
    __________

    PG, you — I speak here in my persona as thread owner — are hereby notified to cease and desist from snide misrepresentations of design theory and design thinkers, as notified below. In addition, you are verging on attempted outing tactics above, which are irrelevant to the substance Genomicus raised above. Kindly respond on the substantial merits, not personalities. You will not be warned again. KF

  24. I’m pretty sure he’s not Mike Gene, Mike seems to accept that flagellum evolved, whereas this guy still seems to have a problem with it.

    http://designmatrix.wordpress......flagellum/

  25. Genomicus,
    Was the designer of the initial stage of evolution able to predict the future? How else would the designer know what mutations would turn out to be successful in response to the constant environmental change which was to occur after the initial design event?

  26. Peter:

    My position is able to talk, with specifics, about the evolution of increased complexity in a molecular machine.

    You are a joke as ID is not anti-evolution and your position cannot explain the origin of molecular machines.

    Your position, having decided that such “machines” are designed does not need to do more then that – there is in fact nothing more to be done under an ID worldview as “it was designed” is sufficient.

    That is what an ignorant person would say. However reality says there is much more to do because there are many unanswered questions.

    Oh and I have called you a liar because you post lies. It appears that is all you have- lies, bald assertions and false accusations…

  27. How is that a requirement? What about “built-in responses to ebvironmental cues” as Dr Spetner wrote back in 1997?

  28. Joe:

    Please watch tone.

    There is indeed a point where one may properly assert willful falsehood in disregard of truthfulness, but that should come at a point where all doubt has been removed by persistence in falsity in the teeth of specific correction, not merely disagreement or misunderstanding leading to inadvertent misrepresentations. .

    I suggest a three strikes and you are out rule, with evidence to support so strong a CONCLUSION.

    Otherwise you will only stir up back-forth accusations. Of course, PG has crossed the threshold where I only speak OF him, in correction. Earlier on, MF crossed that line, and the infamous sock-puppet, MG. As to the owner of the hate and slander site . . .

    KF

  29. OK, my apologies.

    So how many lies, false accusations and bald assertions am I supposed to endure?

    I have jars full of them and even went out ann rented a mini-storage warehouse to keep the rest in.

  30. Onlookers,

    when I earlier used a short form too close to Mike Gene, Geno corrected me.

    I have reason to believe Geno is not Mike Gene.

    Also, who he is, even to a handle, is irrelevant to how what he has to say stands on merits of fact and reasoning.

    KF

  31. NOTICE: In addition, PG has chosen to misrepresent the design inference. This is a disciplinary warning, as his behaviour is verging on being trollish. If he is serious, he should now take time to say work through the introductory summary here. The FAQs in the resources tab top of this and every UD page will also be helpful. If PG does not fix tone and substance, disciplinary steps will be taken, escalating from the level already taken. KF

  32. How is that different from evolution?

  33. Dr Liddle, you seem to be unwilling to accept that a host of design theory propositions have predictive effects and are accepted as reliable signs precisely because they pass empirical tests and ground inductive warrant. Please think again, and I refer you here on. GEM of TKI

  34. How is evolution different from a duck or a rabbit?
    -
    -
    -

    (is that better KF?)

  35. Joe, I again suggest the same rule Paul used, with each such individual: three strikes and you are out. Warn once, twice, and the third time . . . KF

  36. P: “Evolution” is a particularly elastic and vague term. the distinction between front-loading and evolutionary materialism, especially a priori evo mat, should be quite plain. And Geno provided specific points of differential predictions. KF

  37. Petrushka,

    I have already agreed with you that a designer can and has used evolution-> ie change over time, in order to get a required result. We see it with antennas and with imitated devices (that is a program reproduced (functionally) what a human designed).

    So using a GA one could start with a random polypeptide and have a target in mind- not a specific sequence but a resulting protein that performs the pre-specified function(s).

    Then once you get all the proteins you then use them as a resource for your GA to build the required protein machinery.

    So yes, in that sense “evolution” is a wonderful and powerful tool. But in that sense “evolution” is nothing like Darwin envisioned.

    That said what Dr Spetner is referring to means that the deck is stacked in favor of certain outcomes.

  38. I’m not so much “unwilling”, kf, but completely unpersuaded. I respect the effort you have put into making your case, but we will simply have to agree to differ.

    I would agree that some ID propositions make predictions, but of those that are supported by evidence, they do not seem to differ from the what is compatible with Darwinian evolution.

  39. Re PG, for record:

    I observe an objection to inferring design as cause, on empirically tested, reliable sign, that this does not tell us in effect whodunit, when and how.

    The proper first response is, to have good reason to infer that tweredun is more than enough to trigger a scientific revolution, now in progress. I note on using Chi_500 = Ip*S – 500, a couple of days back, on geoglyphs, that the inference was independent of attempts to date or to identify method or parties involved.

    But, by introducing a nodes-arcs framework, we quantified the intuition that a circle 100 m across was most unlikely to be chance and/or necessity.

    Had we come across such a ditch on Mars — as opposed to a caldera or an impact crater [notice, comparable chance-necessity phenomena that would be distinguishable on empirical signs], we would have been entitled to make the same inference.

    Which, would have been revolutionary.

    KF

  40. 40

    Joe,
    I don’t understand.

    You wrote both statements.

    Both statements cannot be true.

    Where have I quotemined?

  41. 41

    Joe,

    So using a GA one could start with a random polypeptide and have a target in mind- not a specific sequence but a resulting protein that performs the pre-specified function(s).

    Who is this “one” you speak of? The designer?

    It seems to me that a statement like “the designer could start with a random polypeptide” is a statement about the designer and as such is off limits until sufficient is known about the design in order to be able to draw inferences about the designer.

    You yourself admit this by use of the world “could”. You don’t know one way or the other. Lots of things “could” have happened. And you yourself have said that ID is not ready to talk about the designer.

    Sure, when other people talk about plausible, perfectly possible physical pathways from one state to another that’s a “just so story” but when you appear to profess direct knowledge about what the designer is capable of then that’s science.

    So yes, in that sense “evolution” is a wonderful and powerful tool.

    Congratulations Joe. You are now a Thestic Evolutionist.

  42. 42

    the distinction between front-loading and evolutionary materialism, especially a priori evo mat, should be quite plain.

    No, it’s not plain at all.

    Front-loading is a physical thing, if it happens. Physical entities are edited, apparently by other physical entities (Joe’s space aliens?).

    Therefore materialism is not a constraint. Unless, of course, you know better KF?

  43. I’ll be replying to a number of comments shortly. For the record, no, I’m not Mike Gene. I’m not half as smart as he is, and not every proponent of front-loading is Mike Gene.

    Thanks to kairo again!

  44. Peter,

    Just because you can twist things that doesn’t make it so.

    It seems to me that a statement like “the designer could start with a random polypeptide” is a statement about the designer and as such is off limits until sufficient is known about the design in order to be able to draw inferences about the designer.

    What is that from, “A Moron’s Guide to Science” by Peter Griffin?

    You yourself admit this by use of the world “could”. You don’t know one way or the other. Lots of things “could” have happened.

    Well that seems to get by in mainstream evolutionary biology. So what’s your point?

    And you yourself have said that ID is not ready to talk about the designer.

    Fortunately I am not bound by ID and am allowed, even according ID, to do as I wish wrt the designer(s).

    I can only go by the knowledge I have of designers, and seeing I was in a design-centric industry for 3+ decades I have some experiences to draw from.

    Ya see, Peter, that is all part of it. As YOU keep saying “What has blah, blah , blah” “how did , blibiddy, blibiddy, blah”- so it seems strange that when I offer up explanations of what I have discovered so far, perhaps you should just take it or stop asking.

    Sure, when other people talk about plausible, perfectly possible physical pathways from one state to another that’s a “just so story” but when you appear to profess direct knowledge about what the designer is capable of then that’s science.

    When that happens I will listen.


    So yes, in that sense “evolution” is a wonderful and powerful tool.

    Congratulations Joe. You are now a Thestic Evolutionist.

    Then so is Dawkins and every person who has ever written a genetic/ evolutionary algorithm (ie that solves a specific problem).

    Or perhaps you are just flailing away as usual?

  45. Well, Peter, I didn’t decide, the EVIDENCE did. And if your lame position ever produces any positive EVIDENCE I will be forced to consider it and let the EVIDENCE decide.

    Ya see, Peter, this-

    Both the universe and living organisms are the result of intention design.

    -was based on the all the EVIDENCE gathered to date.

    And seeing that you are to frightened to actually ante up with testable hypotheses for your position you really don’t have anything worth reading.

  46. Thanks for your comment, Dr. Liddle.

    As I understand it, your hypothesis could be summarised as:

    What was designed was an organism, ancestral to all life, and from which all live evolved by Darwinian mechanisms, but which had characteristics such that what did in fact evolve was unlikely not to have evolved?

    That’s sort of close, but not quite close enough to what the front-loading hypothesis proposes. Firstly, the front-loading hypothesis doesn’t propose that a single cell was designed. Instead, a population of designed cells were seeded on earth (and were probably able to communicate with each other in some way). These cells contained the genes necessary for the origin of multicellular life, for example, and the genes belonging to components of molecular machines such that molecular machines to be front-loaded.

    Can you unpack this? How do we identify those “genes that really aren’t that important to multicellular life forms”? How unimportant is “not that important?” And “as extensive a homology with prokaryote genes” as what? What is the comparison here?

    Genes important in development in all multiceullular life, wherein deletion of them results in death or a similar fate would be considered “important genes” for multicellular life forms. On the other hand, if genes aren’t really that important for the existence of multicellular life, then the FLH doesn’t predict them to share as deep homology with prokaryotic genes compared with genes important to multicellular life.

    Are you saying that if frontloading is true, the sequences that code for proteins that are essential for eukaryotes will also be found in prokaryotes, but with slight difference that mean that it codes for a different protein but one important to prokaryotes?

    In what sense would this prediction distinguish front-loading from a Darwinian scenario?

    Not quite. If front-loading is correct, then we’d expect that important proteins in eukaryotes will share deep homology with prokaryotic proteins, either in sequence similarity or similar tertiary structure. However, again, non-teleological evolution predicts this, so we go a step further: we also predict that such prokaryotic homologs will be well conserved, among themselves, in sequence identity, such that it would be hard for their basic 3D shape to be destroyed by random mutations, genetic drift, etc. Darwinian evolution doesn’t predict this at all. In fact, when it comes to molecular machines, one could even say that Darwinian evolution expects homologs of components of these molecular machines to be not be very well conserved in sequence identity, since this would make it easier for them to be co-opted into a molecular machine.

    It is my position that there were no “nudges” or “side-loading,” though of course, this position could change with new data.

  47. lastyearon:

    Was the designer of the initial stage of evolution able to predict the future? How else would the designer know what mutations would turn out to be successful in response to the constant environmental change which was to occur after the initial design event?

    Remember, the front-loading hypothesis does not propose that every aspect of life was front-loaded. Major events in the history of life on earth were, in my view, front-loaded, but not the little stuff. That said, you don’t need to be able to predict the future. Consider the type III secretion system. It shares homology with the bacterial flagellum. Perhaps the type III secretion system was front-loaded so that prokaryotes would be able to form key relationships with eukaryotes? How hard would that be? Not very hard, since apparently, despite the vastly different type of environments, in Buchnera, an export system similar to the TTSS has evolved independently.

  48. Seriously? Disciplinary action? Really? LOL. What you going to do? Six lashes? KF, this is just a blog, and an entertaining one at that. Nobody is really taking much notice of this blog, let alone seriously. Don’t be such a pompous twit.

  49. Re PG:

    Onlookers, notice the a priori materialism coming out of the woodwork?

    PG wants to use this to continue to a priori impose a worldview on science, locking out possibilities inconvenient to his agendas. The first problem with that is, of course, that such materialism is a non-starter, being fatally self referentially incoherent, thus undermining rationality not just the possibility for science. Let us clip Haldane (again) as a summary on that:

    “It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms.” [["When I am dead," in Possible Worlds: And Other Essays [1927], Chatto and Windus: London, 1932, reprint, p.209.]

    Secondly, if science is taken ideological captive like that, it has no credibility as an unfettered search for the truth about what really happens or happened, it is just politics and myth-making dressed up in the holy lab coat — how dare you question us, etc. Which is precisely what is causing much discredit and controversy in climate science and policy debates.

    (The take-away lesson from that is that computer simulations are in a make-believe world, not the real one, so one has to validate carefully at every step. This is not irrelevant to the attempts to play games with genetic algorithms and kin as claimed empirical evidence for body plan level macro-evolution.)

    So, my initial concern is simple: for good reason, I do not trust ideologues, especially those caught up in self-referentially incoherent schemes of thought with dangerous history behind them.

    Next, Genomicus plainly has put on the table a cluster of points of contrast between the usual materialistic view of common descent and the front-loading, evolution by design view. Just as, I added to that, a list of points where we can see directly observable signs that — as we design thinkers argue — point, on strong inductive grounds, to design as most credible cause.

    So, a reasonable person would expect that the discussion would centre on that.

    A glance at the thread will show that the objectors have done anything but that. Which is absolutely telling on the actual balance on the merits.

    When one side wants to address evidence and merits on fact and logic, and the other wants to play the red herring led away to ad hominem laced strawmen to be set alight with little sparks of snide rhetoric games, that is telling us something.

    The third concern is the underlying subtext of snide contempt.

    PG and ilk seem to believe or assume that if you do not go along with their a priori materialist ideologisation of science, you “must” be ignorant, stupid, insane or wicked. In some cases that come from PG’s apparent circle, I have been falsely accused of all sorts of things, and have had my family subtly threatened mafioso style. That sort of bully-boy, bigoted attitude is sickeningly telling. We need not elaborate on the amorality of the underlying worldview and the nihilism of “can I get away with it” and/or “might and manipulation make ‘right’ . . .” that it encourages adherents to substitute for morals. (Let’s just link here and here on.)

    Lurking underneath all of this, is of course an outright LIE.

    A lie that — in the teeth of being corrected any number of times on the record — is being propagated as a main talking point by the school of objectors to design theory clustered on NCSE and the likes of Ms Barbara Forrest as aided and abetted by the ACLU et al, so also in the teeth of what PG and ilk know, or should know.

    Namely, the “ID is creationism in a cheap tuxedo” smear, here alluded to by PG with patent trollish intent to poison the discussion.

    But if a big enough lie is pounded in drumbeat fashion, loud enough, repetitively enough and long enough, a lot of people who were inclined to believe something like that will swallow it. As over 100 million ghosts from the century just past will remind us.

    (And, anticipating and correcting another fever swamp drumbeat lie of recent days, no, this is not laying “all the world’s ills” at the feet of Darwinists and their fellow travellers. It is identifying a problem that has haunted the century just past with horrific consequences, in the context where the root problem of humanity is that we are all finite, fallible, morally fallen, and too often ill-willed.)

    Instead, let me take this as an occasion to expose the deceptive and toxically polarising rhetorical agenda at work.

    Anyone who cares to know — cf here on in the UD Weak Argument Correctives — can easily confirm that Creationism starts from a context of appealing to a claimed revelation of the actual deep past of origins, by the Creator. On the strength of that confidence that there is in hand, a record of what happened (subject, of course to debates on interpretation), science is then done on the basis that we know a truth that accurate science will support.

    The design inference is just the opposite to that, it begins from a basic question: are there signs in our world of experience that on testing reliably point to design as cause of a given aspect of an object or phenomenon, as distinct from blind chance and/or necessity? (Cf the definition of ID at UD, here from the resources tab.)

    On inspection and testing the answer is yes, there are many such (cf OP for an initial list).

    Since also as designers ourselves we know design is possible and indeed actual in our world, we then hold that we must be open to the possibility of design in the world of nature around us. Notoriously, even a Dawkins says that biology studies complicated things that strongly appear designed, though of course on his materialist a prioris, he assumes that he can explain that appearance as illusory.

    But in fact, it is fairly easy to show that the accessible atomic resources of our solar system or the observed cosmos are simply inadequate to create more than about 500 or 1,000 bits of complex, specific information, whether directly coded and stored, or implied by specific, functionally constrained organisation. And, those thresholds are generous.

    Dembski was plainly well-warranted in NFL, when he said:

    p. 148: “The great myth of contemporary evolutionary biology is that the information needed to explain complex biological structures can be purchased without intelligence. . . . . Eigen and his colleagues must have something else in mind besides information simpliciter when they describe the origin of information as the central problem of biology.

    I submit that what they have in mind is specified complexity, or what equivalently we have been calling in this Chapter Complex Specified information or CSI . . . .

    Biological specification always refers to function . . . In virtue of their function [[a living organism's subsystems] embody patterns that are objectively given and can be identified independently of the systems that embody them. Hence these systems are specified in the sense required by the complexity-specificity criterion . . . the specification can be cashed out in any number of ways [[through observing the requisites of functional organisation within the cell, or in organs and tissues or at the level of the organism as a whole] . . .”

    p. 144: [[Specified complexity can be defined:] “. . . since a universal probability bound of 1 [[chance] in 10^150 corresponds to a universal complexity bound of 500 bits of information, [[the cluster] (T, E) constitutes CSI because T [[ effectively the target hot zone in the much wider field of possibilities W] subsumes E [[ effectively the observed event from that field], T is detachable [--> separately and simply describable without in effect listing E] from E, and and T measures at least 500 bits of information . . . ”

    In short, we have a strong, empirically and analytically well warranted sign that points to design, which sign is also the underlying one in almost everything else in design inference. Namely complex, typically functionally specified, information beyond a threshold of 500 or sometimes 1,000 bits. As posts in this thread join with the whole Internet, the wider ICT industry, the groaning shelves of our libraries and the technological world we live in to jointly testify, the empirically credible source of such FSCI is: design. And, we have good needle in haystack and infinite monkeys at keyboards grounds to see that it is not credible that such should emerge by blind chance and/or mechanical necessity, an inference from such FSCI to design as sign is well warranted.

    (For those being led by some snide comments to wonder why I sometimes use FSCO/I, that is to bring up the context that functionally specific complex organisation implies and is associated with information that guides its wiring plan so to speak. Similarly, GPuccio uses dFSCI to underscore the role of digitally coded FSCI. All of these are functionally oriented manifestations of complex, specified information or specified complexity implying such information. But if your intent is to object rather than to understand, even if one may in the end disagree, one will often make mountains out of mole-hills. Sorry to have to be so direct.)

    What all of this means is that we have a credible, scientifically and epistemologically legitimate way to use the logic of induction on well tested sign to infer to design as cause under certain circumstances. Namely, on observing FSCO/I or associates of that. This, without resorting to metaphysical a prioris, or appealing to records that would by their very nature not be acceptable as credible, to many.

    I add, that we can look as well at the finely tuned elegant physics of our observed cosmos that looks set up to make a world in which C-chemistry, aqueous medium cell based life is possible. In short, our observed cosmos also betrays signs that point to design. And indeed, we may extend the front-loading thesis to the apparent design of the cosmos that could very well program life into the design, not just make the chemical basis for life. In other worlds I am asking here, whether our very cosmos could be programmed in its physics and derived chemistry, to create life as suitable habitats emerge in light of a finely balanced design. And, i note that the degree of isolation of the operating point we sit on in the field of theoretically possible universes implies a need to have in effect a cosmos bakery that sets up such a search, if we want to appeal to a multiverse, i.e. the fie tuning is just being postponed one level. In short, it looks like there is already front loading at one level, to set up a world in which the sort of life we observe is possible. So, we have a job opening for a front loader in chief, whatever happens with biology thereafter.

    (That is, theistic ID thinkers — there are many varieties of ID thinkers! — simply do not have any hard commitment to have to find design in life forms. Cosmological ID, as the old joke says, was more than enough to have the astrophsysicists, astronomers and general physicists streaming out of their labs and seminar rooms at lunch time to go over and listen to Sir Fred Hoyle’s meditation in the campus chapel on the Monkeyer with Physics in chief, then line up to get baptised into the first church of God, Big Bang.)

    In short, ID is fundamentally a way to look at the actual evidence of an information-rich complex, highly organised world with fresh eyes and think, what is it trying to tell us.

    In the case of the world of life as we see it on earth, we have digital code based algorithmic embedded systems that work with impressive molecular nanotech to implement self-replicating machines that interact with their environment to acquire raw materials and energy, etc.

    A short look at the log reduced chi expression:

    Chi_500 = Ip*S – 500, bits beyond . . .

    will rapidly rule: design, beyond the reasonable reach of the Planck-time quantum state atomic resources of our solar system.

    That is at least a viable inference, and it is open to the direct test of showing that on the contrary, despite the calculations, we can observationally SHOW that somehow, chance plus necessity on the gamut of our solar system can and does spontaneously throw up such FSCO/I under plausible conditions without undue intervention or direction of intelligent experimenters or program coders.

    Despite many confident assurances or assumptions or assertions to the contrary, such a demonstration simply does not exist. In every claimed case we have seen, for year after year, something different is substituted — typically without realising it: behind the sciences design. (The latest guise of this is the exchange on islands vs continents of function. FSCO/I by virtue of the requirement of a properly wired network of correctly matched components, inherently locks the zone T down to being a very small fraction of W, the set of possibilities.)

    So, the bottom line is much as ID thinker Philip Johnson described in retort to Lewontin’s notorious a priori materialism 1997 NYRB article:

    For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them “materialists employing science.” And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) “give the appearance of having been designed for a purpose.”

    . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]

    The above, sadly, reveals that, nearly fifteen years later, the matter still stands where Lewontin left it in that article:

    . . It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated . . . [["Billions and billions of demons," NYRB, Jan 1997.]

    That is sad indeed, and sadly revealing.

    PG et al need to wake up and realise that what they have been doing cannot stand the cold light of Day.

    GEM of TKI

  50. Welcome, Geno . . . the discussion is developing.

  51. Re PG: Again we see refusal to accept inductive inference on well warranted signs [also cf here] that we know per investigation, credibly point to design as cause. KF

  52. Great, let’s hear your further remarks!

  53. W: You evidently do not realise how rapidly uncivil activity derails serious discussion. I will only mention, the habitual resort of darwinist objectors to name-calling, outing tactics, slander and worse. In my case, my family have been threatened. That is serious enough to verge on criminal conduct, I am afraid. And, despite your dismissal the blog is known to be quite serious, and obviously is taken to be sufficiently serious that it is being treated as a threat; thank you. KF

  54. Cf here on in context what we mean by just so stories.

  55. Best I can tell this blog is visited mostly by amateurs. Even then many do not use their real names. KF does not, so why should he/she be taken as a credible source? I have no idea who this person is. It may be serious to those who participate, but in the bigger world of science, this blog is of minor significance. Hence the calls for disciplinary action seem so absurd to an outsider. Yes, there is uncivility, but that is the nature of the Web, not really “Darwinism” ( whatever that is).

  56. Dr Liddle,

    Thank you for the onward comment.

    On the contrary to your views, however, there is some objective warrant on the matter (start here on), and it is quite plain that absent a priori materialism, dawinism has no empirically adequate mechanisms to account for, say FSCO/I. In short the pivotal matter is the a priori materialism that entails that something like darwinism “must” be so, so to the eye of materialist faith — or at any rate, evo mat influenced faith — the sketchiest illustrations become perceived proofs of the “fact” of darwinian – like macroevolution.

    I call that the ideological captivity of science, and an undermining of objectivity and unfettered truth-seeking as principal values and goals of science. But to those inside the a priori materialist system, it will seem that they have “discovered” the golden master-key that unlocks the door to truth.

    The marxists of my youth were just as convinced about their marxism and touted what they viewed as supportive evidence, even as it was on the verge of collapse. Do you need for me to talk about the confidence of the Freudians?

    Darwin’s is just the last of the Victorian bastions of scientism to fall, that’s all.

    KF

  57. Woodford,

    Sometimes I relax online by indulging in fora for my lifelong addiction — err, hobby, light tackle surf fishing. (Well, confession is good for the soul, even when the Spanish mackerels are running and I am typing not spinning.)

    Even in such venues, disciplinary action has to be taken in defence of basic civility, so broken down is our civilisation.

    You obviously have not taken under serious reckoning the thought police tactics, bully-boyism and general want of broughtupcy resorted to by too many advocates of darwinism. But, people have had careers unjustly broken on account of this behaviour. That is why anonymity is important.

    (People have literally had to disguise themselves, this is that bad. No prizes for why some bully boys want to trash my name publicly as much as possible. And yes, if some of my students were to carry on like that, something like the old fashioned washing out of a foul mouth or even the old six of the best with good Guyana cane, would have been under consideration at either of my [top flight] high schools. At minimum, in my classes, I would have some volunteers for clean-up duty. And, worse than this has happened.)

    Now, I see you seeking to denigrate issues on merits of fact and logic by trying to dismiss the venue or the source.

    Sorry, that is a classic fallacy, ad hominem.

    If you wish to overturn design theory, simply show good observationally based evidence that functionally specific, complex organisation and/or associated information, FSCO/I or the like is on a repeatably observed basis, produced by blind chance plus necessity.

    In fact, neither you nor anyone else has shown that, much less shown actually decisive evidence for microbe to man macroevolution per the darwinian blind watchmaker thesis for such evo. That holds for the peer reviewed literature, and it holds for the net and elsewhere.

    But, we can show good warrant for the claim that FSCO/I is routinely and only observed as the product of intelligence, with a needle in haystack analysis to back it up as to why. (Start here on in context, for a 101. this post is an example, as was yours.)

    In short, FSCO/I is a reliable sign of intelligent design.

    Your problem, or at least that of your perceived side, is that this applies directly to the world of life which is full of such FSCO/I. Which tosses over the neatly piled evolutionary materialist applecart. Which in turn is heavily invested in by elites all across our civilisation.

    Hence the great contention and the sort of unhinged rage and no-broughtupcy spoiled brat misbehaviour that we routinely see from fans of Darwinism [up to and including professors!], and have to control for.

    G’day

    GEM of TKI

  58. Can you direct me to a reasonably concise operational definition of FSCO/I, kf? I get these different metrics confused!

    And/or (preferably and!) an example of it?

  59. No, KF, it’s not an ad hominem attack. Questioning and considering sources is a key part of critical thinking. You are just one of numerous anonymous sources vying for my attention. Your ideas may have merit, but too often your writing is simply unreadable. Try writing for the layperson. Good straightforward writing is the sign of a superior intellect. And why should I have to work so hard for an anonymous source? If it was Dembski I would because he has the creds and publishing history. I know nothing about yours. If your writing style was accessible it might be different, but again, if I want to read 19th century English I’d rather read Dickens.

    But seriously you seem excessively paranoid. You may want to do some inward reflection on that. Or perhaps you really should spend more time fishing.

  60. W: Sorry, no source is better than his or her facts and logic. The proper appeal for/against is not to the person, but to the facts and logic, especially when we deal with a relatively controversial topic. Blind following of an authority is a recipe for trouble under those circumstances. I learned that lesson, day 1 going to a then Marxism-dominated uni. KF

  61. Dr Liddle:

    “Functionally specific, complex organisation and/or associated information” is a description, whose meaning is pretty well already stated. let’s take it from the top, starting with the Wicken quote of 1979 that was there in the linked, sect D as recently reorganised:

    ‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and common- place initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [[“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65.

    Going back to the term as elaborated:

    1: Information — that which can be expressed as a message, via selection from alternatives, and can make a difference as signal or meaning etc.

    2: organisation: an arrangement of components in accordance with a pattern (“wiring diagram”), usually to carry out a task or goal

    3: complex: Having a lot of possibilities, from which the actual one is presented. Or, elaborating, requiring sufficiently many components with sufficiently many configurations that accessible search resources for blind search will predictably be swamped by the scale of the field of possibilities. (If it takes at least 500 yes/no questions to specify the config, then it is complex enough to swamp the solar system’s atomic resources, and if 1000 or more, the observed cosmos; conservatively estimated.)

    4: Specific: only certain quite restricted sets of possible configs from the field of possible configs will do to achieve the required function. (If we think in terms of strings of ASCII characters, 72 or 143 characters account for 3*10^150 possibilities, or 1*10^301 possibilities, of which only a relatively few will work as meaningful or contextually responsive text in this thread.)

    5: Function: do the components, arranged in a particular pattern, carry out a job?

    In that context, we may then take Dembski’s Chi metric and log reduce it, to give a fairly simple metric:

    Chi_500 = Ip*S – 500, bits beyond the solar system threshold.

    Here, Ip is an information metric, on info carrying capacity in bits. 500 is a threshold of conservatively sufficient complexity to be beyond the resources of the solar system’s 10^57 atoms. S is a dummy variable that defaults to 0, and is only 1 if we have objective warrant for concluding specificity. In other words, if any arbitrary config would do, S = 0. And of course, unless we have high contingency, we do not have the possibility to carry information.

    I have noticed an attempt to sneer on acronyms, but the matter is actually simple. CSI is complex, specified info, the parent category. FSCI is the derivative, where the specification is on function. Where the info is digitally coded, we have dFSCI, and where we may have implied info based on specific functional organisation, we have FSCO/I.

    FSCO/I is common in our world, e.g text for posts in this thread, the code on the computers that allows the blog to work, or the computer you are reading it on to work, the text in the books sitting next to your PC, the organisation of the components on the motherboard of the PC, where that is astonishingly specific, only a little going wrong and kaput. The engine in your car is riddled with FSCO/I.

    Given the constraints described above, it is no surprise that in the cases we see directly where it comes from FSCO/I is created by design. It is in fact, a reliable and commonly used sign of design. Nor is this new, 100 years ago astronomers were debating if there was a pattern of canals on Mars, and were inferring that only intelligence would explain the sort of network they thought they were seeing. Unfortunately, they were debating the patterns they drew based on subjective perceptions, so the designs in question were their own.

    But, a few days ago, we had a discussion here on geoglyphs in Amazonia, where e.g. we had circles. This implied that we had a civilisation in Amazonia in the past, and by using the FSCO/I concept and the Chi_500 metric, we could see why. Had we found just one such geoglyph on Mars or the moon, it would have been revolutionary.

    The problem for evolutionary materialists, of course, is that the living cell is chock full of FSCO/I, in a context where they are committed to the idea that life arose by chance and necessity without design. As has been repeatedly pointed out here at UD and elsewhere, the Planck Time quantum state resources simply are not there for that to be plausible on the gamut of our observed cosmos. The ONLY observed cosmos, whatever those who speculate on multiverses may wish to suggest. (And, once we bring to bear the fine tuning issue, cosmology, even multiverse cosmology, points to intelligence as the best explanation for the cosmos we inhabit.)

    So, there is very good reason indeed to accept that FSCO/I is a reliable sign of design. And, to infer that cell based life is designed. This, in a cosmos that presents good reason to infer that it is designed for C-chemistry, cell based life.

    I trust this helps.

    GEM of TKI

  62. How would you distinguish between functions that were front-loaded and those that were not (e.g. that evolved)?

    (BTW, I’d appreciate it if my comments didn’t have to spend over 36 hours in moderation: I haven’t even called someone a liar yet)

  63. @kf,

    Your elaboration is a good example of the NON-operational nature of the terms you use.

    “Going back to the term as elaborated:

    1: Information — that which can be expressed as a message, via selection from alternatives, and can make a difference as signal or meaning etc.”

    This is non-operational. First, “that which can be expressed” is trivially tautologous: anything that is ‘expressed’ is by definition a ‘message’. A message is ‘that which is expressed’. “Via selection from alternatives” is non-operational, as there no “non-phase-spaces” from which a message can be selected and expressed. Expression is also by definition selective from some (perhaps unspecified) set of options.

    “can make a difference as a signal or meaning, etc.” is just as vacuous as the other preceding phrases you used. ANY expression can be a signal or meaningful. Meaning is extrinsic to the phenomena — the meaning of the words you are reading are not located in the shapes of the letters, but in electrical patterns in your brain that are associated with those visual patterns (for example). Anyone who has studied information theory and computing understands that signal value and semantic cargo can be attached to ANYTHING, ON DEMAND.

    If I take 10 digits from pi starting at some random offset — I picked “24574″ — I get this string of digits:

    1745712559

    Is this set of a digits a “signal”? Does it have meaning? It does now! And if I take that as a secret for the private key in a X509v3 key pair, I can “on demand” invest that string with significance; perhaps this is the key you need to decrypt an important message from me.

    Reading through the rest of your points, they are just as vacuous and non-operational as your #1. They are only operational insofar as they support someone feeding their subjective intuitions about telic intent (or not). It’s not an objective, operational measure of anything. How would #5 be assessed in any operational sense — “does it carry out a job”? That’s begging the question, right there: if it’s a “job” in the anthropomorphic sense (the object of a design from an intelligent agent), then again, by trivial tautology, you have your answer.

    Or, to restate your #5: is it the product of intelligent design? Well, yes, if it is qualified upfront as the product of design from intelligence, then we can safely conclude the phenomena is the product of design from intelligence. True by definition.

    As soon as “job” DOESN’T get constrained as an exercise of presuming one’s consequent (you are suggesting we can decide if X is the product of design by asking first if it is the product of design), it becomes completely empty, meaningless.

    That is, the “job”, then of the shadow on sidewalk behind me is to signal the shape of my silhouette given my orientation to the sunlight against the sidewalk. That is what the shadow is doing, the signal conveyed and the accomplishment of the “design”.

    But that’s just physics, and that’s what puts paid to the problem with claims like yours. In physics, EVERYTHING is information. Physical is information, and anything that happens, signals something, and reflects in some way (or many ways) the “design” of the system.

    I can (and will) provide examples that show clearly the emptiness of the terms you’re offering as “operational” here, as you’d like. There’s nothing operational here. You can’t write code against any of this to analyze phenomena, can’t do the math to classify, measure, contrast, judge. All you’ve got are means for subjective qualitative assessments, and you don’t need pseudo-techno-mumbo-jumbo to enable that kind of analysis, if that’s going to be your standard for judging.

  64. Well, it doesn’t really, kf (btw, I think eigenstate posted a response into the wrong thread).

    Do you have a link to an actual worked example?

  65. @kf, (and in case it’s not clear, this is in response to 8.2.1.2.3),

    Adding a quick look at #2:

    “2: organisation: an arrangement of components in accordance with a pattern (“wiring diagram”), usually to carry out a task or goal”

    Again, a tautology, which makes this inert for your aims. ANY “arrangement of components” is by definition a “pattern”. Choose any pattern, any configuration of components you like, and repeat it over and over, a million times. That result is now necessarily compressible based on the base pattern.

    A pattern is just that which is used as a pattern, which means this elaboration does not elaborate toward an operating model for FSCO/I.

    Why do trees have the tree ring patterns they do in their trunks. Is that “organization”? Sure, per your “wiring diagram” criteria. The dark and light rings are a function of the inputs from the external environment.

    What is the goal? That’s a self-confusing question to ask in that context. If I have a goal (this is MY goal, not the TREE’s goal!) of estimating the age of the tree, then this pattern of rings is useful in meaningful toward that goal. For MY goal, the pattern is meaningful toward such an estimate.

    But, theistic teleo-centristic excesses notwithstanding, the tree is not designed to tell me how old it is. There is no “design” for that. It’s just physics, being physics, and my making inferences and judgments based on models informed by physical phenomenon.

    It won’t do to say “no, I mean ‘wiring diagram’, as in the kind that only intelligence can design”. That is, again, an exercise in assuming the consequent, a beg to the core question. If one cannot classify and distinguish “jobs” and “goals” and “designs” outside of any notion of intelligence, than one has nothing to work from, nothing to exclude by objective means so that one might arrive at intelligence as the verdict.

    Scientists make design inferences all the time. Arrowheads get judged as such, tools fashioned by determined design of intelligent beings, and it’s not controversial as a process. But that is because that process recognizes the very thing ID demands be omitted — the matching of the phenomenon to an agent that is plausibly placed “at the scene of the crime” so to speak. There isn’t inherent “designedness” in the arrowhead from our forensic perspective. It’s a function of matching a putative design with a plausible designer (ancient human fashioning an arrowhead, for example). That same arrowhead, which we reasonably judge as “designed” from 100,000 years ago would perhaps be judged a “non-designed stone piece with the general appearance of an arrowhead” if it was dug up from the pre-Cambrian layer.

    Everything is a pattern. And natural abounds with patterns that are law-based and complex products of natural processes. Indeed, even a bit pattern like that which stores the data for rasterizing this post on your computer screen qualifies, in my view. It’s all natural patterns, and it’s a profound insight into nature to understand that the kind of distinctions and classifications you are striving for are conspicuously absent.

    If you think that’s incorrect, then the test for that to show me wrong would be in providing some rigor around the test for “pattern” as you have it, here. It’s easy to throw out these terms if you don’t have to provide operational, practical methods for applying them. When you do — or at least when I do — it fails badly, and the empty natural of the terms you use here is made manifest.

  66. @kf, in re: 8.2.1.2.3,

    “3: complex: Having a lot of possibilities, from which the actual one is presented. Or, elaborating, requiring sufficiently many components with sufficiently many configurations that accessible search resources for blind search will predictably be swamped by the scale of the field of possibilities. (If it takes at least 500 yes/no questions to specify the config, then it is complex enough to swamp the solar system’s atomic resources, and if 1000 or more, the observed cosmos; conservatively estimated.)”

    This can’t help your metric, because excluding configurations less than 1,000 bits (or 500 bits, or whatever threshold you choose) won’t narrow your candidates down in any helpful way. “Swamping the solar system’s atomic resources” is a non-factor here, as a blind search is not a dispositive factor per the test.

    If you take a standard 52 card deck of cards, and you want to look at the configuration of a deck, you need 6 bits (2^6 =64) to store the state of each card, you need 51*6 = 306 bits to describe the state of a single deck (you only need 51 slots if we are being nitpicky, because if you know the first 51 cards, you can deduce the 52nd).

    So, all you need to do is take two decks of playing cards from your game drawer and shuffle them and, — mirabile dictu! — you have “swamped the atomic resources of the solar system”. So what? That doesn’t even rise to being a red herring in terms of the metric you are going for. A random shuffle of just a handful of resources produces non-designed complexity that far exceeds those limits in trivial fashion.

    It’s difficult to find ANY phenomenon that is not the specific end product of an incalculable progression of probabilistic selections drawn from an innumerable set of options from each phase space. Literally any configuration you could name — any thing at all — is too improbable to happen. (Try the math on the odds of your parents meeting as they did to produce you, and then multiplying by the odds of your grandparents all meeting as they did to produce your parents, etc., and the folly of this kind of thinking is clear).

    A blind search is not relevant, because “search” itself is not endemic to the process. Complex configurations are ubuiqitous, and in fact we cannot help but find them, everywhere.

    Operationally, then, this criterion doesn’t help – can’t help your metric. If the metric works at all, it must work just as well by discarding this test.

  67. @kf, per 8.2.1.2.3,

    “4: Specific: only certain quite restricted sets of possible configs from the field of possible configs will do to achieve the required function. (If we think in terms of strings of ASCII characters, 72 or 143 characters account for 3*10^150 possibilities, or 1*10^301 possibilities, of which only a relatively few will work as meaningful or contextually responsive text in this thread.)”

    Yes, but so what? Back to the tree rings. How many possible configurations of rings will correspond to the actual history of the tree? More than one, given the fuzzy tolerances of measuring rings. But very, very few — very “specific” in your usage — against the phase space of possible ring configurations.

    And that should point out the problem. Specificity doesn’t help you operationally at all. What helps is the “meaningful” part, and that’s the part that stumps you and remains intractable in terms of an operational framework for identifying and measuring FCSI/O.

    Why does a falling raindrop take the specific shape it does at any given point in its fall? It’s shape is tightly specified by the governing physics — gravity, aerodynamics, temperature, etc. And this is the configuration taken to achieve “its required function” — to fall to the ground?

    Wait, what? That’s it’s function?? Rain drops don’t have “requirements”, you say? That’s precisely the problem. By describing this in the prejudicial terms you use — “required function”, you have again defeated the aim of your metric, by assuming the consequent.

    Yet another tautological trivium. If it “doesn’t have a required function”, it doesn’t qualify? Well, sure, TRUE BY DEFINITION. The whole goal here is to determine IF there is a “required function”, and what precisely is entailed by the term “required function”, if there is one. On the merits, the raindrop’s “required function” is to “resolve to its physical constraints”. Once it hits the ground, it’s “required function” is to “fill the puddle it fell in per the constraints of gravity and the shape of the ground”.

    Operationally, then, “specific” adds nothing. It’s only helpful for your metric if you presume you know the answer to the question before you ask. If there is a “required function” in an anthropormorphic sense that can be determined ahead of time, then FSCI/O is irrelevant/redundant; you already have made your design judgment prior to invoking FSCI/O. Operationally, determining specificity won’t advance anything, even if you CAN provide some quantitative measure of the narrowness of a given configuration with a given phase space.

  68. @kf, per 8.2.1.2.3,

    “The problem for evolutionary materialists, of course, is that the living cell is chock full of FSCO/I, in a context where they are committed to the idea that life arose by chance and necessity without design. As has been repeatedly pointed out here at UD and elsewhere, the Planck Time quantum state resources simply are not there for that to be plausible on the gamut of our observed cosmos. The ONLY observed cosmos, whatever those who speculate on multiverses may wish to suggest. (And, once we bring to bear the fine tuning issue, cosmology, even multiverse cosmology, points to intelligence as the best explanation for the cosmos we inhabit.)”

    It’s not a problem, because there’s no basis for excluding “law-and-chance” phenomenon from FSCO/I affirmations, meaning FSCO/I, as you’ve (non-)defined it, doesn’t discriminate between “intelligently designed” and “naturally designed”.

    Pick up the two card decks you just shuffled.

    1. Is it “INFORMATION”? Yes, it can be expressed as a message. kf, here is a description of a 104 card (2x 52 card decks) random shuffle I just did:

    Ten of Diamonds
    Five of Diamonds
    Queen of Clubs
    Seven of Diamonds
    Jack of Spades
    King of Diamonds
    Ten of Spades
    King of Hearts
    Eight of Spades
    King of Spades
    Nine of Clubs
    Four of Diamonds
    Four of Spades
    King of Clubs
    Three of Hearts
    Eight of Diamonds
    Ten of Spades
    King of Clubs
    Six of Clubs
    Two of Diamonds
    Ace of Clubs
    Two of Clubs
    Queen of Diamonds
    Eight of Clubs
    Six of Diamonds
    Nine of Spades
    Five of Spades
    Ace of Clubs
    Nine of Hearts
    King of Spades
    Three of Clubs
    King of Diamonds
    Eight of Hearts
    Four of Diamonds
    Six of Hearts
    Three of Diamonds
    Three of Spades
    Ten of Diamonds
    Three of Clubs
    Three of Hearts
    Queen of Diamonds
    Queen of Hearts
    Ten of Clubs
    Two of Spades
    Five of Clubs
    Ten of Clubs
    Queen of Hearts
    Ace of Diamonds
    Four of Clubs
    Eight of Diamonds
    Jack of Hearts
    Four of Clubs
    Five of Diamonds
    Two of Hearts
    Seven of Spades
    King of Hearts
    Four of Hearts
    Seven of Clubs
    Ten of Hearts
    Seven of Hearts
    Seven of Diamonds
    Jack of Clubs
    Five of Hearts
    Six of Diamonds
    Five of Clubs
    Seven of Spades
    Ace of Diamonds
    Queen of Spades
    Ace of Hearts
    Six of Spades
    Seven of Hearts
    Ace of Hearts
    Jack of Diamonds
    Four of Hearts
    Three of Diamonds
    Nine of Diamonds
    Jack of Clubs
    Nine of Hearts
    Six of Clubs
    Five of Hearts
    Two of Spades
    Six of Spades
    Three of Spades
    Two of Hearts
    Queen of Spades
    Jack of Hearts
    Five of Spades
    Queen of Clubs
    Two of Diamonds
    Eight of Clubs
    Jack of Diamonds
    Nine of Spades
    Two of Clubs
    Nine of Diamonds
    Nine of Clubs
    Ten of Hearts
    Eight of Hearts
    Eight of Spades
    Ace of Spades
    Four of Spades
    Six of Hearts
    Ace of Spades
    Seven of Clubs
    Jack of Spades

    That takes a lot more than 300 bits just to make it human-readable, but there is a configuration expressed to you, so that you can analyze the pattern.

    2. Is there ORGANIZATION: Yes. It’s the product of a discrete combinatory process, and if you are going to analyze the deck, or play blackjack with another online friend (it won’t work too well with me as I know the “hidden cards” in the deck as you deal), you’ve got your input all ready to go. You can’t deal from a deck that has no configuration of cards, and now you have one.

    3. Is the configuration of the deck COMPLEX, per your criterion? Yes. It’s a 600+ bit configuration, minimally, meaning just that handful of cards “swamps the atomic resources of the solar system”.

    4. Is it SPECIFIC? Yes, it is a single, discrete configuration out of a phase space of 104! = 10^166 available configurations. This configuration is as constricted as the choices get.

    5. Does this configuration carry out a JOB? Sure, a job is in the eye of the employer (and this is the key failure of your elaboration). You are now all set to statistically analyze the deck, or to deal from this deck as the dealer for an online friend who wants to play BlackJack with you. Or add your own use here — it’s a perfectly good and astonishingly strong key for securing your encrypted data (if I hadn’t posted it here, anyway).

    So, as far as I can tell, a random shuffling of two decks of cards qualifies as a “designed” configuration, per your own criteria. The only complaint I can anticipate here is that you don’t approve of the “jobs” assigned to this configuration. If that’s the case, then I will happily rest my case on this point, because as soon as you are reduced to arguing about the telic intent of the phenomenon as a pre-condition for your metric, your metric becomes useless, a meaningless distraction. You’ve already decided what your metric hoped to address in order for you to even establish the parameters for your metric.

  69. Geoglyph case as a rough calc. If you looked in the already linked, you would see three cases from Durston (biological cases building on his information estimates using the way dna codes work out, not just the near flat relationship that random chaining would), and information metrics are quite straightforward in many cases.

  70. ES:

    Kindly stop talking nonsense, I am using “Information” in ways that are commonplace in engineering, have you ever calculated the information value of a string of digits?

    It can be done indirectly or directly.

    The description of info above is a simple conceptual one, in the very reasonable context that it is understood that we routinely measure and use this quantity.

    A simple glance at the expression I have used all along in context will show that we take the standard Shannon-Hartley metric and confine our attention to cases amenable to functional specification. In other words, a practical workaround to the oddity that a random string will have the highest Shanon metric, average info per symbol.

    As in, have you ever looked at a file for a program, or a word processor document, or a CAD drawing etc?

    All of them are functionally specific, and are routinely measured in bits. The 500-bit limit is simply saying that you will not at all be likely to get to that particular outcome on the gamut of the solar system resources as you could not blindly sample enough of the space to make a difference.

    So, just stop the rubbish rhetoric games.

    Which as I recall, you have tried before.

    So, just cut the nonsense, now.

    YOU KNOW BETTER THAN YOU HAVE JUST WRITTEN, OR ELSE YOU DON’T KNOW ENOUGH TO HAVE GAINED ADMISSION TO A COURSE IN T/COMMS AS I WOULD HAVE TAUGHT. (I suggest you go to the link through my handle, LH col and read section A.)

    If you cannot do better than this, please leave this thread.

    Good day.

    GEM of TKI

  71. More rubbish.

  72. Still more rubbish. Did you even pause to read the linked from Dembski.

    Do you understand what a phase space cut down to a state space, W, is? Do you then understand that a zone T can be identified in it as specified on some requirement, here function in a context on arrangement of components as is typical of say a motherboard, and cases E from T can be observed? Do you understand that by the nature of function based on multiple matched parts, T will be much smaller than W?

    If you don’t kindly read the nanobots thought exercise, the appendix 1 my always lined. read the whole appendix, to see the context of my thinking.

  73. ES, you are only making yourself look like one objecting for the sake of objecting, as one who either does not know enough to comment seriously, or as — worse — someone wanting to pull the wool over the eyes of those who do not.

  74. Stop playing silly rhetorical games, and get serious. I suggest you get a copy of good old Taub and Schilling, and read the relevant sections. That should be a useful first primer.

  75. Dr Liddle:

    I am astonished that you would enmesh yourself in something like the game ES is evidently trying.

    Please, do better than this next time.

    KF

  76. F/N: Onlookers, just for record, let me clip from the section a my always linked, so we can clarify what info is about as a start to cut the rhetorical tangle ES tried to weave above.

    I apologise in advance for chunking like this (especially to Geno), but since the above is bound to be turned into silly talking point games in the hate sites and the like, let the record be plain that such rubbish is being done in the teeth of quite evident facts and well established reasoning to the contrary [and ES, you also need to work your way through App 1 the always linked], things that ES either knew about or should have known about, after all, they are two clicks away from EVERY post I have ever placed at UD, for I forget how many years now:

    _________________

    >> . . . let us now consider in a little more detail a situation where an apparent message is received. What does that mean? What does it imply about the origin of the message . . . or, is it just noise that “got lucky”?

    If an apparent message is received, it means that something is working as an intelligible — i.e. functional — signal for the receiver. In effect, there is a standard way to make and send and recognise and use messages in some observable entity [e.g. a radio, a computer network, etc.], and there is now also some observed event, some variation in a physical parameter, that corresponds to it. [For instance, on this web page as displayed on your monitor, we have a pattern of dots of light and dark and colours on a computer screen, which correspond, more or less, to those of text in English.]

    Information theory, as Fig A.1 [my preferred adaptation of the Shannon diagram, that I have used for 25 years in one form or another] illustrates, then observes that if we have a receiver, we credibly have first had a transmitter, and a channel through which the apparent message has come; a meaningful message that corresponds to certain codes or standard patterns of communication and/or intelligent action. [Here, for instance, through HTTP and TCP/IP, the original text for this web page has been passed from the server on which it is stored, across the Internet, to your machine, as a pattern of binary digits in packets. Your computer then received the bits through its modem, decoded the digits, and proceeded to display the resulting text on your screen as a complex, functional coded pattern of dots of light and colour. At each stage, integrated, goal-directed intelligent action is deeply involved, deriving from intelligent agents -- engineers and computer programmers. We here consider of course digital signals, but in principle anything can be reduced to such signals, so this does not affect the generality of our thoughts.]

    Now, it is of course entirely possible, that the apparent message is “nothing but” a lucky burst of noise that somehow got through the Internet and reached your machine. That is, it is logically and physically possible [i.e. neither logic nor physics forbids it!] that every apparent message you have ever got across the Internet — including not just web pages but also even emails you have received — is nothing but chance and luck: there is no intelligent source that actually sent such a message as you have received; all is just lucky noise . . . .

    In short, none of us actually lives or can consistently live as though s/he seriously believes that: absent absolute proof to the contrary, we must believe that all is noise. [To see the force of this, consider an example posed by Richard Taylor. You are sitting in a railway carriage and seeing stones you believe to have been randomly arranged, spelling out: "WELCOME TO WALES." Would you believe the apparent message? Why or why not?]

    Q: Why then do we believe in intelligent sources behind the web pages and email messages that we receive, etc., since we cannot ultimately absolutely prove that such is the case?

    ANS: Because we believe the odds of such “lucky noise” happening by chance are so small, that we intuitively simply ignore it. That is, we all recognise that if an apparent message is contingent [it did not have to be as it is, or even to be at all], is functional within the context of communication, and is sufficiently complex that it is highly unlikely to have happened by chance, then it is much better to accept the explanation that it is what it appears to be — a message originating in an intelligent [though perhaps not wise!] source — than to revert to “chance” as the default assumption. Technically, we compare how close the received signal is to legitimate messages, and then decide that it is likely to be the “closest” such message. (All of this can be quantified, but this intuitive level discussion is enough for our purposes.)

    In short, we all intuitively and even routinely accept that: Functionally Specified, Complex Information, FSCI, is a signature of messages originating in intelligent sources.

    Thus, if we then try to dismiss the study of such inferences to design as “unscientific,” when they may cut across our worldview preferences, we are plainly being grossly inconsistent.

    Further to this, the common attempt to pre-empt the issue through the attempted secularist redefinition of science as in effect “what can be explained on the premise of evolutionary materialism – i.e. primordial matter-energy joined to cosmological- + chemical- + biological macro- + sociocultural- evolution, AKA ‘methodological naturalism’ ” [ISCID def'n: here] is itself yet another begging of the linked worldview level questions.

    For in fact, the issue in the communication situation once an apparent message is in hand is: inference to (a) intelligent — as opposed to supernatural — agency [signal] vs. (b) chance-process [noise]. Moreover, at least since Cicero, we have recognised that the presence of functionally specified complexity in such an apparent message helps us make that decision. (Cf. also Meyer’s closely related discussion of the demarcation problem here.)

    More broadly the decision faced once we see an apparent message, is first to decide its source across a trichotomy: (1) chance; (2) natural regularity rooted in mechanical necessity (or as Monod put it in his famous 1970 book, echoing Plato, simply: “necessity”); (3) intelligent agency. These are the three commonly observed causal forces/factors in our world of experience and observation. [Cf. abstract of a recent technical, peer-reviewed, scientific discussion here. Also, cf. Plato's remark in his The Laws, Bk X, excerpted below.]

    Each of these forces stands at the same basic level as an explanation or cause, and so the proper question is to rule in/out relevant factors at work, not to decide before the fact that one or the other is not admissible as a “real” explanation . . . .

    This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious — as some are tempted to imagine or assert. [More details . . .]

    Then also, in certain highly important communication situations, the next issue after detecting agency as best causal explanation, is whether the detected signal comes from (4) a trusted source, or (5) a malicious interloper, or is a matter of (6) unintentional cross-talk. (Consequently, intelligence agencies have a significant and very practical interest in the underlying scientific questions of inference to agency then identification of the agent — a potential (and arguably, probably actual) major application of the theory of the inference to design.)

    Next, to identify which of the three is most important/ the best explanation in a given case, it is useful to extend the principles of statistical hypothesis testing through Fisherian elimination to create the Explanatory Filter: [cf diagram in OP] . . . .

    The second major step is to refine our thoughts, through discussing the communication theory definition of and its approach to measuring information. A good place to begin this is with British Communication theory expert F. R Connor, who gives us an excellent “definition by discussion” of what information is:

    From a human point of view the word ‘communication’ conveys the idea of one person talking or writing to another in words or messages . . . through the use of words derived from an alphabet [NB: he here means, a "vocabulary" of possible signals]. Not all words are used all the time and this implies that there is a minimum number which could enable communication to be possible. In order to communicate, it is necessary to transfer information to another person, or more objectively, between men or machines.

    This naturally leads to the definition of the word ‘information’, and from a communication point of view it does not have its usual everyday meaning. Information is not what is actually in a message but what could constitute a message. The word could implies a statistical definition in that it involves some selection of the various possible messages. The important quantity is not the actual information content of the message but rather its possible information content.

    This is the quantitative definition of information and so it is measured in terms of the number of selections that could be made. Hartley was the first to suggest a logarithmic unit . . . and this is given in terms of a message probability. [p. 79, Signals, Edward Arnold. 1972. Bold emphasis added. Apart from the justly classical status of Connor's series, his classic work dating from before the ID controversy arose is deliberately cited, to give us an indisputably objective benchmark.]

    To quantify the above definition of what is perhaps best descriptively termed information-carrying capacity, but has long been simply termed information (in the “Shannon sense” – never mind his disclaimers . . .), let us consider a source that emits symbols from a vocabulary: s1,s2, s3, . . . sn, with probabilities p1, p2, p3, . . . pn. That is, in a “typical” long string of symbols, of size M [say this web page], the average number that are some sj, J, will be such that the ratio J/M –> pj, and in the limit attains equality. We term pj the a priori — before the fact — probability of symbol sj. Then, when a receiver detects sj, the question arises as to whether this was sent. [That is, the mixing in of noise means that received messages are prone to misidentification.] If on average, sj will be detected correctly a fraction, dj of the time, the a posteriori — after the fact — probability of sj is by a similar calculation, dj. So, we now define the information content of symbol sj as, in effect how much it surprises us on average when it shows up in our receiver:

    I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1

    This immediately means that the question of receiving information arises AFTER an apparent symbol sj has been detected and decoded. That is, the issue of information inherently implies an inference to having received an intentional signal in the face of the possibility that noise could be present. Second, logs are used in the definition of I, as they give an additive property: for, the amount of information in independent signals, si + sj, using the above definition, is such that:

    I total = Ii + Ij . . . Eqn 2

    For example, assume that dj for the moment is 1, i.e. we have a noiseless channel so what is transmitted is just what is received. Then, the information in sj is:

    I = log [1/pj] = – log pj . . . Eqn 3

    This case illustrates the additive property as well, assuming that symbols si and sj are independent. That means that the probability of receiving both messages is the product of the probability of the individual messages (pi *pj); so:

    Itot = log1/(pi *pj) = [-log pi] + [-log pj] = Ii + Ij . . . Eqn 4

    So if there are two symbols, say 1 and 0, and each has probability 0.5, then for each, I is – log [1/2], on a base of 2, which is 1 bit. (If the symbols were not equiprobable, the less probable binary digit-state would convey more than, and the more probable, less than, one bit of information. Moving over to English text, we can easily see that E is as a rule far more probable than X, and that Q is most often followed by U. So, X conveys more information than E, and U conveys very little, though it is useful as redundancy, which gives us a chance to catch errors and fix them: if we see “wueen” it is most likely to have been “queen.”)

    Further to this, we may average the information per symbol in the communication system thusly (giving in termns of -H to make the additive relationships clearer):

    - H = p1 log p1 + p2 log p2 + . . . + pn log pn

    or, H = – SUM [pi log pi] . . . Eqn 5

    H, the average information per symbol transmitted [usually, measured as: bits/symbol], is often termed the Entropy; first, historically, because it resembles one of the expressions for entropy in statistical thermodynamics. As Connor notes: “it is often referred to as the entropy of the source.” [p.81, emphasis added.] Also, while this is a somewhat controversial view in Physics, as is briefly discussed in Appendix 1below, there is in fact an informational interpretation of thermodynamics that shows that informational and thermodynamic entropy can be linked conceptually as well as in mere mathematical form. Though somewhat controversial even in quite recent years, this is becoming more broadly accepted in physics and information theory, as Wikipedia now discusses [as at April 2011] in its article on Informational Entropy (aka Shannon Information, cf also here):

    At an everyday practical level the links between information entropy and thermodynamic entropy are not close. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. And, as the numerical smallness of Boltzmann’s constant kB indicates, the changes in S / kB for even minute amounts of substances in chemical and physical processes represent amounts of entropy which are so large as to be right off the scale compared to anything seen in data compression or signal processing.

    But, at a multidisciplinary level, connections can be made between thermodynamic and informational entropy, although it took many years in the development of the theories of statistical mechanics and information theory to make the relationship fully apparent. In fact, in the view of Jaynes (1957), thermodynamics should be seen as an application of Shannon’s information theory: the thermodynamic entropy is interpreted as being an estimate of the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics. For example, adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states that it could be in, thus making any complete state description longer. (See article: maximum entropy thermodynamics.[Also,another article remarks: >>in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more" . . . in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes/no questions that need to be answered in order to fully specify the microstate, given that we know the macrostate.>>]) Maxwell’s demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total entropy does not decrease (which resolves the paradox).

    Summarising Harry Robertson’s Statistical Thermophysics (Prentice-Hall International, 1993) — excerpting desperately and adding emphases and explanatory comments, we can see, perhaps, that this should not be so surprising after all. (In effect, since we do not possess detailed knowledge of the states of the vary large number of microscopic particles of thermal systems [typically ~ 10^20 to 10^26; a mole of substance containing ~ 6.023*10^23 particles; i.e. the Avogadro Number], we can only view them in terms of those gross averages we term thermodynamic variables [pressure, temperature, etc], and so we cannot take advantage of knowledge of such individual particle states that would give us a richer harvest of work, etc.)

    For, as he astutely observes on pp. vii – viii:

    . . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . .

    And, in more details, (pp. 3 – 6, 7, 36, cf Appendix 1 below for a more detailed development of thermodynamics issues and their tie-in with the inference to design; also see recent ArXiv papers by Duncan and Samura here and here):

    . . . It has long been recognized that the assignment of probabilities to a set represents information, and that some probability sets represent more information than others . . . if one of the probabilities say p2 is unity and therefore the others are zero, then we know that the outcome of the experiment . . . will give [event] y2. Thus we have complete information . . . if we have no basis . . . for believing that event yi is more or less likely than any other [we] have the least possible information about the outcome of the experiment . . . . A remarkably simple and clear analysis by Shannon [1948] has provided us with a quantitative measure of the uncertainty, or missing pertinent information, inherent in a set of probabilities [NB: i.e. a probability different from 1 or 0 should be seen as, in part, an index of ignorance] . . . .

    H({pi}) = – C [SUM over i] pi*ln pi, [. . . "my" Eqn 6]

    [where [SUM over i] pi = 1, and we can define also parameters alpha and beta such that: (1) pi = e^-[alpha + beta*yi]; (2) exp [alpha] = [SUM over i](exp – beta*yi) = Z [Z being in effect the partition function across microstates, the "Holy Grail" of statistical thermodynamics]. . . .

    [H], called the information entropy, . . . correspond[s] to the thermodynamic entropy [i.e. s, where also it was shown by Boltzmann that s = k ln w], with C = k, the Boltzmann constant, and yi an energy level, usually ei, while [BETA] becomes 1/kT, with T the thermodynamic temperature . . . A thermodynamic system is characterized by a microscopic structure that is not observed in detail . . . We attempt to develop a theoretical description of the macroscopic properties in terms of its underlying microscopic properties, which are not precisely known. We attempt to assign probabilities to the various microscopic states . . . based on a few . . . macroscopic observations that can be related to averages of microscopic parameters. Evidently the problem that we attempt to solve in statistical thermophysics is exactly the one just treated in terms of information theory. It should not be surprising, then, that the uncertainty of information theory becomes a thermodynamic variable when used in proper context . . . .

    Jayne’s [summary rebuttal to a typical objection] is “. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.” . . . . [pp. 3 - 6, 7, 36; replacing Robertson's use of S for Informational Entropy with the more standard H.]

    As is discussed briefly in Appendix 1, Thaxton, Bradley and Olsen [TBO], following Brillouin et al, in the 1984 foundational work for the modern Design Theory, The Mystery of Life’s Origins [TMLO], exploit this information-entropy link, through the idea of moving from a random to a known microscopic configuration in the creation of the bio-functional polymers of life, and then — again following Brillouin — identify a quantitative information metric for the information of polymer molecules. For, in moving from a random to a functional molecule, we have in effect an objective, observable increment in information about the molecule. This leads to energy constraints, thence to a calculable concentration of such molecules in suggested, generously “plausible” primordial “soups.” In effect, so unfavourable is the resulting thermodynamic balance, that the concentrations of the individual functional molecules in such a prebiotic soup are arguably so small as to be negligibly different from zero on a planet-wide scale.

    By many orders of magnitude, we don’t get to even one molecule each of the required polymers per planet, much less bringing them together in the required proximity for them to work together as the molecular machinery of life. The linked chapter gives the details. More modern analyses [e.g. Trevors and Abel, here and here], however, tend to speak directly in terms of information and probabilities rather than the more arcane world of classical and statistical thermodynamics, so let us now return to that focus; in particular addressing information in its functional sense, as the third step in this preliminary analysis.

    As the third major step, we now turn to information technology, communication systems and computers, which provides a vital clarifying side-light from another view on how complex, specified information functions in information processing systems:

    [In the context of computers] information is data — i.e. digital representations of raw events, facts, numbers and letters, values of variables, etc. — that have been put together in ways suitable for storing in special data structures [strings of characters, lists, tables, "trees" etc], and for processing and output in ways that are useful [i.e. functional]. . . . Information is distinguished from [a] data: raw events, signals, states etc represented digitally, and [b] knowledge: information that has been so verified that we can reasonably be warranted, in believing it to be true. [GEM, UWI FD12A Sci Med and Tech in Society Tutorial Note 7a, Nov 2005.]

    That is, we have now made a step beyond mere capacity to carry or convey information, to the function fulfilled by meaningful — intelligible, difference making — strings of symbols. In effect, we here introduce into the concept, “information,” the meaningfulness, functionality (and indeed, perhaps even purposefulness) of messages — the fact that they make a difference to the operation and/or structure of systems using such messages, thus to outcomes; thence, to relative or absolute success or failure of information-using systems in given environments.

    And, such outcome-affecting functionality is of course the underlying reason/explanation for the use of information in systems. [Cf. the recent peer-reviewed, scientific discussions here, and here by Abel and Trevors, in the context of the molecular nanotechnology of life.] Let us note as well that since in general analogue signals can be digitised [i.e. by some form of analogue-digital conversion], the discussion thus far is quite general in force.

    So, taking these three main points together, we can now see how information is conceptually and quantitatively defined, how it can be measured in bits, and how it is used in information processing systems; i.e., how it becomes functional. In short, we can now understand that:

    Functionally Specific, Complex Information [FSCI] is a characteristic of complicated messages that function in systems to help them practically solve problems faced by the systems in their environments. Also, in cases where we directly and independently know the source of such FSCI (and its accompanying functional organisation) it is, as a general rule, created by purposeful, organising intelligent agents. So, on empirical observation based induction, FSCI is a reliable sign of such design, e.g. the text of this web page, and billions of others all across the Internet. (Those who object to this, therefore face the burden of showing empirically that such FSCI does in fact — on observation — arise from blind chance and/or mechanical necessity without intelligent direction, selection, intervention or purpose.)

    Indeed, this FSCI perspective lies at the foundation of information theory:

    (i) recognising signals as intentionally constructed messages transmitted in the face of the possibility of noise,
    (ii) where also, intelligently constructed signals have characteristics of purposeful specificity, controlled complexity and system- relevant functionality based on meaningful rules that distinguish them from meaningless noise;
    (iii) further noticing that signals exist in functioning generation- transfer and/or storage- destination systems that
    (iv) embrace co-ordinated transmitters, channels, receivers, sources and sinks.

    That this is broadly recognised as true, can be seen from a surprising source, Dawkins, who is reported to have said in his The Blind Watchmaker (1987), p. 8:

    Hitting upon the lucky number that opens the bank’s safe [NB: cf. here the case in Brown's The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. [NB: originally, this imagery is due to Sir Fred Hoyle, who used it to argue that life on earth bears characteristics that strongly suggest design. His suggestion: panspermia -- i.e. life drifted here, or else was planted here.] Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [Emphases and parenthetical note added, in tribute to the late Sir Fred Hoyle. (NB: This case also shows that we need not see boxes labelled "encoders/decoders" or "transmitters/receivers" and "channels" etc. for the model in Fig. 1 above to be applicable; i.e. the model is abstract rather than concrete: the critical issue is functional, complex information, not electronics.)]

    Here, we see how the significance of FSCI naturally appears in the context of considering the physically and logically possible but vastly improbable creation of a jumbo jet by chance. Instantly, we see that mere random chance acting in a context of blind natural forces is a most unlikely explanation, even though the statistical behaviour of matter under random forces cannot rule it strictly out. But it is so plainly vastly improbable, that, having seen the message — a flyable jumbo jet — we then make a fairly easy and highly confident inference to its most likely origin: i.e. it is an intelligently designed artifact. For, the a posteriori probability of its having originated by chance is obviously minimal — which we can intuitively recognise, and can in principle quantify.

    FSCI is also an observable, measurable quantity; contrary to what is imagined, implied or asserted by many objectors. This may be most easily seen by using a quantity we are familiar with: functionally specific bits [FS bits], such as those that define the information on the screen you are most likely using to read this note:

    1 –> These bits are functional, i.e. presenting a sceenful of (more or less) readable and coherent text.

    2 –> They are specific, i.e. the screen conforms to a page of coherent text in English in a web browser window; defining a relatively small target/island of function by comparison with the number of arbitrarily possible bit configurations of the screen.

    3 –> They are contingent, i.e your screen can show diverse patterns, some of which are functional, some of which — e.g. a screen broken up into “snow” — would not (usually) be.

    4 –> They are quantitative: a screen of such text at 800 * 600 pixels resolution, each of bit depth 24 [8 each for R, G, B] has in its image 480,000 pixels, with 11,520,000 hard-working, functionally specific bits.

    5 –> This is of course well beyond a “glorified common-sense” 500 – 1,000 bit rule of thumb complexity threshold at which contextually and functionally specific information is sufficiently complex that the explanatory filter would confidently rule such a screenful of text “designed,” given that — since there are at most that many quantum states of the atoms in it — no search on the gamut of our observed cosmos can exceed 10^150 steps:>>

    ________________

    That should be enough to show why what ES is trying to do is ill-informed rubbish.

    Good day

    GEM of TKI

  77. “Kindly stop talking nonsense, I am using “Information” in ways that are commonplace in engineering, have you ever calculated the information value of a string of digits?

    It can be done indirectly or directly.”

    I am a software developer, by way of making a living. Moreover, my work over the last two decades, has orbited around communications, network security, cryptography and cryptanalysis, and higher level systems that incorporate both human-directed and evolutionary algorithms toward commercially valuable ends (sophisticated heuristics for network intrusion detection or transaction fraud across payment networks, for example).

    So work professionally with rigorous metrics and models for digital information — Shannon, Kolmogorov, etc., the frameworks for information theory that ARE operative.

    “The description of info above is a simple conceptual one, in the very reasonable context that it is understood that we routinely measure and use this quantity.”

    It’s conceptually simple, perhaps, but its non-operative, non-functional. If you can’t do math with it or measurement, it’s not an operative metric. I can (and will) do detailed math for you on data sets if you want to manage Shannon info and entropy over a channel, or measure algorithmic complexity of a string (again, with a quick point to the problematic-for-ID-crowd fact that the random string is theoretical maximum in complexity for any string).

    You, cannot, in return do math with your “indirect” methods. If I’m wrong, I will happily correct myself. But you will have to actually do some math on a real example to demonstrate this.

    “A simple glance at the expression I have used all along in context will show that we take the standard Shannon-Hartley metric and confine our attention to cases amenable to functional specification. In other words, a practical workaround to the oddity that a random string will have the highest Shanon metric, average info per symbol.”

    It’s not an oddity, that’s the key insight into the nature of information. Information is the reduction of uncertainty, so a random string is by definition maximally revealing as each character is supplied; if we had any way to guess (and per non random strings, like English strings, we can make statistical bets that pay off better than against a random string: the letter “a” follows the letter “p” much more often in English than the letter “k” follows the letter “p”, so upon seeing an “p”, we have a higher statistical expectation of an “a” being the next symbol and a lower statistical explanation of “k” being the next letter… thus an “a” in an English sentence reduces our uncertainty less than an “a” in a random string == more information in the random string).

    The whole trick is “amenablizing” any of this to your functional specification. The creation of a geoglyph in a circle-like shape is NOT amenable to a series of yes/no questions, 300 or more (or less). That’s not how information theory works, it’s voodoo handwaving that is instantly recognizable as such to those of use who work with this stuff for a living.

    Again, if I’m wrong, you have a very nice opportunity to show that I’m wrong — just show your math and be detailed. You and I both know you can’t though, because it’s not an operative concept that can be applied, as a basis for your apologetics. Show me wrong, if I am.

    “As in, have you ever looked at a file for a program, or a word processor document, or a CAD drawing etc?”

    Uh yeah. If you want to have someone here (a third party) challenge us both to writing a program that implements compression, info security and swizzling/unswizzling, I’m ready to go. Just have someone choose a novel task so neither of us can grab anything from Google or off the shelf, and we can judge who knows what’s what in this space. I’ll provide all my source code and compiled binaries for major platforms.

    “All of them are functionally specific, and are routinely measured in bits. The 500-bit limit is simply saying that you will not at all be likely to get to that particular outcome on the gamut of the solar system resources as you could not blindly sample enough of the space to make a difference”

    Blind sampling, as I said, doesn’t even rise to “red herring” status here, and if you had a basic grasp of the information aspects of this question, you’d realize how discrediting this kind of claim is to your case. None of the decisions or targets we are looking at are the products of blind sampling. You’re confusing stochastic variation (e.g. random walk) with blind, stand-alone samples, and omitting cumulative hill-climbs and descents based on feedback loops derived from iteration and environmental constraints that narrow the search space down to tiny fractions of the logical phase space.

    In short, you haven’t even got the beginning of a point here, and are deeply, deeply confused.

    “So, just stop the rubbish rhetoric games.

    Which as I recall, you have tried before.

    So, just cut the nonsense, now.

    YOU KNOW BETTER THAN YOU HAVE JUST WRITTEN, OR ELSE YOU DON’T KNOW ENOUGH TO HAVE GAINED ADMISSION TO A COURSE IN T/COMMS AS I WOULD HAVE TAUGHT. (I suggest you go to the link through my handle, LH col and read section A.)”

    Well, let’s write some code, and show all our math, and see who’s doing the hand-waving, here! Really, if you’re even a little bit grounded in what you say, this should be an easy and decisive win for you, and one you can point to, triumphantly, for a long time. Just commit to writing the code and doing the math, and showing your work. Get your ID friends who actually may have some computing skills to help you out, I don’t care. It’s not a matter of faking the computing knowledge, it’s a conceptual problem you’re stuck with about what information is, how it is created, measured and manipulated, and how it is applied to phenomenon we observe. All the coding help you can solicit won’t fix that problem for you.

    In any case, let’s let code and math do the talking, here, whaddya say? Up to show your work?

    I am.

  78. Geno:

    Let me again apologise for having to take up space on your guest post on front loading to deal with a poisonous distraction.

    KF

  79. @kf,

    None of that helps your claim at all. If i talk all the spades and clubs out of the two decks of cards I discussed above, I’ve necessarily constrained the phase space of my now-smaller deck. Two decks now have 52 instances, and just 26 discrete symbols that make up the phase space.

    But that doesn’t improve your problem with the “blind search” mistake. A natural design (impersonal processes, law + chance + resources + time) doesn’t work that way, so you won’t rule that out if you wonder if a “tornado in a junkyard might construct a 747″.

    I read your link, and couldn’t make sense it of it, other than to see (I think) that you’ve just left off the key dynamics of a natural (impersonal) design — feedback loops, massive iterations with variation and cumulative merits and demerits for new features toward (or away from) an optimized configuration for that environment.

    Perhaps you could highlight in that article where you address that, or better yet, point me at someone else you suppose is expert on this subject who may be able to express themselves more clearly in information engineering terms?

  80. 80

    eig:
    “5. Does this configuration carry out a JOB? Sure, a job is in the eye of the employer (and this is the key failure of your elaboration). You are now all set to statistically analyze the deck, or to deal from this deck as the dealer for an online friend who wants to play BlackJack with you. Or add your own use here — it’s a perfectly good and astonishingly strong key for securing your encrypted data (if I hadn’t posted it here, anyway).”

    I see both points here. In your 2-deck random string of cards, the specific assignment or job would need to be assigned. And if we understand that that specific string is no more or less likely to occur than some other string, meaning the string’s specific order was not dictated by physics, (the order being the specific arrangement of card values), then if your specific string is to execute some function, then your specific string must be set as a variable by you, or some other intelligence. This is what we find in nature with the a,t,g,c strings. So you both, in my view, are right: The probability of hitting a specific a,t,g,c string beyond 500 bits by chance is outside the solar system resources as KF says, and as you say, if we are drawing the bulls-eye around the arrow after the shot, then probability is not important. But regardless of all of this, either way, something has to set the specific arrangement of card values or a,t,g,c molecules as a variable that will execute some function every time that specific arrangement comes up. KF would call this the “islands of functions”. How can a non-intelligent agent set a variable?

  81. What are you talking about, kf? tbh I’m not even sure what comment of mine you are referring to. But, whichever it is, I assure you I am not playing “games”.

    Can you clarify?

  82. I think there’s a difference between “appeal to authority” and a careful consideration of a person’s qualifications, track record and overall credentials. Again, given the enormous amount of voices competing for attention, we need a way of filtering those voices that we should pay less attention to. So whereas a person’s qualifications may not preclude them from having something important to say – it’s even more reason for their communications to be compelling, crisp and articulate. I see it every day in the business place – if a person cannot present themselves well, both verbally and through presentation materials, it is harder for them to be heard (and they usually only get one shot).

    Like it or not, communication style matters and I think explains why people like Christopher Hitchens was so popular (whatever you think about his ideas) – he had an arresting way of communicating that was not only cerebral but also highly accessible and compelling. He stood out, not just because of his message, but the way it was delivered.

  83. OK, thanks, kf, but I don’t see that the geoglyph case wouldn’t work as well for a snowflake.

    I’ll take a look at your Durston link

  84. Actually I don’t see a reference to Durston. Can you give the link again?

    Thanks.

  85. Genomicus, thanks for your presentation. Very interesting ideas.

  86. That’s not a problem at all kairo. I’ll be only responding to comments relevant to front-loading, and that’ll give me time to cook up part B of this series. Again, don’t worry about taking up space in order to discuss misconceptions in information theory.

  87. One prediction of all design-centric venues is that when agencies interact with nature they tend to leave traces of their involvement behind. IDists have defined what such traces would look like:

    The criteria for inferring design in biology is, as Michael J. Behe, Professor of Biochemistry at Leheigh University, puts it in his book Darwin ‘ s Black Box: “Our ability to be confident of the design of the cilium or intracellular transport rests on the same principles to be confident of the design of anything: the ordering of separate components to achieve an identifiable function that depends sharply on the components.”

    He goes on to say:
    ” Might there be some as-yet-undiscovered natural process that would explain biochemical complexity? No one would be foolish enough to categorically deny the possibility. Nonetheless, we can say that if there is such a process, no one has a clue how it would work. Further, it would go against all human experience, like postulating that a natural process might explain computers.”

    Then we have-

    Irreducible Complexity:

    IC- A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, non-arbitrarily individuated parts such that each part in the set is indispensable to maintaining the system’s basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system. Page 285 NFL

    Numerous and Diverse Parts If the irreducible core of an IC system consists of one or only a few parts, there may be no insuperable obstacle to the Darwinian mechanism explaining how that system arose in one fell swoop. But as the number of indispensable well-fitted, mutually interacting,, non-arbitrarily individuated parts increases in number & diversity, there is no possibility of the Darwinian mechanism achieving that system in one fell swoop. Page 287

    Minimal Complexity and Function Given an IC system with numerous & diverse parts in its core, the Darwinian mechanism must produce it gradually. But if the system needs to operate at a certain minimal level of function before it can be of any use to the organism & if to achieve that level of function it requires a certain minimal level of complexity already possessed by the irreducible core, the Darwinian mechanism has no functional intermediates to exploit. Page 287

    Dr Behe responds to IC criticisms:

    One last charge must be met: Orr maintains that the theory of intelligent design is not falsifiable. He’s wrong. To falsify design theory a scientist need only experimentally demonstrate that a bacterial flagellum, or any other comparably complex system, could arise by natural selection. If that happened I would conclude that neither flagella nor any system of similar or lesser complexity had to have been designed. In short, biochemical design would be neatly disproved.- Dr Behe in 1997

    So if what we are investigating fits any of the descriptions above then we have more than enough to check into the possibilty that is was designed.

  88. ES: You are now backing up derailment with chucking badness. I am perfectly willing to discuss your concerns on a relevant thread, where they are reasonable.

    Kindly go there now.

    KF

    Genomicus,

    Kudos for making what looks like an earnest offering on testability here. I got so caught up in KairosFocus’ nonsense [{emphasis added, KF} on his “metrics”,

    {Rude. You seem to assume that you know all that is relevant, having failed to show serious evidence of familiarity with something as close to hand as a 235 k byte Word Document with meaningful text in English. This is a disciplinary warning, for cause}

    and am just now getting around to a couple questions on your post.

    Maybe it’s just one question.

    You said:

    “Firstly, the front-loading hypothesis predicts that important genes in multicellular life forms will share deep homology with genes in prokaryotes. However, one might object that Darwinian evolution also predicts this. However, from a front-loading perspective, we can go a step further and predict that genes that really aren’t that important to multicellular life forms (but are found in them nevertheless) will generally not share as extensive homology with prokaryote genes.”

    How is this not just a tautology, a self-fulfilling prediction? My reaction to this is, “Of course, the genes we deep most deeply anchored in the hierarchy are BY DEFINITION the “most important”. That’s what “most important” *means* in terms of the hierarchy.

    It seems to me you will need to define what “most important” means as an independent criteria, something we can establish as important or not before and without knowledge of the hierarchy that developed. If you can do that, I would agree that you’ve broken out of offering a mere tautology here, and have something substantial to look at. But having thought about this a bit today, I can think of what would ground your “most important” rating per your predictions APART from where those genes have been observed to occur in the species taxonomy.

    Evolutionary theory doesn’t make this prediction as a prediction, but accepts and emphasizes the tautology. Evolution predicted nested hierarchies, for reasons apart from having merely observed such patterns in the record, which is why those observations are compelling: the independently confirm a prediction of evolutionary theory regarding the production of nested hierarchies.

    But the “most important genes” are not pre-identified. That is a function of the material processes (environmental factors driving selection, reproduction with varitation). But from an evolutionary biology point of view, noting that the “most important genes” are also the genes in “root” and “large span” positions of the hierarchy, is to offer nothing more than reiteration, restatement. We don’t predict which genes are “most important” ahead of time; we can only observe how various configurations fared across the tests of the environment, and deem those “most important” retrospectively.

    If we could magically know the fine points of the environment, all physical law, and the starting configurations of all the extant alleles and their host organisms at some point, we might hazard some predictions, as we suppose we might be able to “simulate” or otherwise roughly anticipate what will happen as the clock runs forward. But we have no such magic oracle to reveal that to us, so we are left with understand the tautology, and observing what was enduring, successful, “important” as “all star genes”, in retrospect.

    This is similar to the frequent misunderstanding about natural selection, or more precisely, the “survival of the fittest” tautology. Yes, it’s not a prediction (in the novel, scientific sense) that the “fittest will survive”. It’s definitional. Those which survive are by definition the most fit. That’s not a problem for evolutionary theory, it’s just a stumbling block to understanding because those who reject and don’t understand the theory suppose there’s a problem with that being a tautology, as opposed to an entailment of the theory.

    But evolutionary theory doesn’t make such predictions. It seems you are offering a tautology here, as a novel prediction.

    Why should I not simply dismiss this as trivially self-fulfilling, regardless of any empirical tests? The test for me would be an independent criteria for what genes are “all stars” as opposed to losers. As I have it from you here, it’s trivial as a prediction. You and I will simply and certainly agree that the genes that “carry big loads” in the hierarchy are the “most important ones”. But that doesn’t rise to the demands of a scientific prediction. That’s just an assessment that can and will be made after the fact for any emergent hierarchy.

    Do you have some independent means of establishing what makes and “important gene” important, apart from simply looking at the established hierarchy and picking the ones in the “important places”?

    Thanks for your feedback on this.

  89. @junkdnaforlife,

    That was the point I was raising to KF about calculating the probability *he* would have been born. The odds are astronomically, vanishingly against his being born, all the probabilistic factors that have to line up just so to reify KF. If you look back and say “that was the target”, it should not have, statistically could not have (per tolerances here!) happened.

    And yet, there he is, enlightening us day in and day out.

    The resolution is, of course, that KF didn’t have to happen. The probabilities might have broken another way. But *something* unfolds, which is just as improbable in isolation.

    So when you wonder what set the particular mappings for the four bases, my answer is the same as your answer to “how did KF get here?” It’s fantastically improbable as a standalone case, but that is the essence of painting the target around the arrow after it landed and being amazed that you got such an improbable bullseye.

    Something was going to happen. In a lottery, SOMEONE is going to get the winning ticket, no matter how long the odds for that ticket.

    This winds up with the risk of getting wrapped around the axle of the anthropic principle. If the deck had been shuffled a little different — you can take that to mean as a matter of biology, chemistry, or physical constants — we wouldn’t be here to talk about it. Life would not have developed as it has if the deck had been shuffled differently. Maybe life didn’t happen at all. Maybe the deck is such that a eukaryote is as far as life could progress in terms of complexity, even after hundreds of trillions of years.

    None of that, though, is any more grounds for supposing we are “created of God”, than thinking there’s something supernatural involved in a random shuffling of a 104-card deck resulting in the order that it does, or that it’s a miracle that *someone* one the lottery (maybe you!).

    KF, it seems, is hung up on the “tornado in a junkyard” mistake, and supposes such improbabilities warn him away from the adaptive and optimized landscape search methods that nature actuals deploys, and which produce vastly improbable configurations, when viewed as “tornadoes in a junkyard”, but aren’t problematic at all as the result of persistent, cumulative processes with positive feedback loops.

  90. @KF,

    Let’s just get something simple and non-handwaving on the record here, regarding FSCI as an “observable, measurable” quantity. You say I am just imagining that you are blowing smoke. Here’s your chance to show that I am.

    Take your geoglyph example from the other thread. What is the calculation for FSCI for that phenomenon? That seems like a relevant, current and straightforward example we can make some progress on and test your claims and mine.

    Even if you aren’t at the Amazon location to take whatever local measurements you suppose you need, it’d be fine if you just made the values up. The exact values of the measurements aren’t important here. What’s at issue is the METHOD for acquiring the measurements and integrating them into your overall FSCI metric.

    If you succeed, this would be an excellent “go-to” item for your sidebar, or maybe a Wikipedia entry on the subject.

    That would be cool to see!

  91. eigenstate:

    I have not followed the whole discussion, and I cannot answer about KF or enyone else being born, but if you are taking again the deck of cards and lottery arguments to state that there is no meaning in the concept of CSI or dFSCI, you are simply a fool. It’s night now, but we can discuss the point at your ease tomorrow, if you want.

  92. eigenstate:

    If you have time, you can find a detailed discussion I had recently about the subject with Peter Griffin (still open, I believe):

    http://www.uncommondescent.com.....ent-415489

    post 21 and following.

  93. @gpuccio,

    I did read a bit of that exchange previously, but will go back and review, thanks. From memory, though, I recall you saying that dFCSI was a empirical but “qualitative”. Maybe I’m not recalling that correctly and will happily correct the record if that’s the case.

    If I’m correct in that recollection, though, you’re comparing anvils and oranges, here. Help yourself to all the qualitative assessments you like, empirical or no. I can’t be bothered. That’s just not even remotely interesting as a means of building rigorous models. You’re welcome to it, but if you can’t provide objective measurement, the most you’re going to get from me is a shrug (and that’s not necessarily a bad thing either way).

    KF is making different, pretentious noises about his metric as measurable, objectively. That’s both exceedingly bold and important as a claim, and conspicuously lacking in substantiation from KF, namely in the form of any real application of the metric via those claimed methods of objective measurement.

    If you want to file an amicus curiae on his behalf, be my guest. I’d get farther and the discussion will progress better, even if we disagree, if you were to take up the other side. But if you want to protest about qualitative insights about design-seeming properties, I just don’t want you to waste your time. That is, at best, a he-said/he-said clash of intuitions and claims that can’t be investigated or adjudicated at all. There’s no possible positive outcome from that, even if we disagree (and disagreeing is often a quite satsifactory outcome).

    On the meaning of the configuration, the crux of where I’m going here is that KF’s FSCO/I, as he’s related it, is vacuous as a metric, and adds nothing to what it depends on crucially and externally — the design inference as to what is a “function” or “job” or “requirement” in an anthropocentric sense.

    That is, KF’s criteria add absolutely nothing to what he requires as a beginning, the finding whether some phenomenon or configuration is operating as part of a design, fulfilling some role or achieving some telic end.

    Which is just a beg to the basic question of the provenance of the configuration. If you think that configuration X is necessarily the fulfillment of some telic end, then you don’t need to calculate FSCI/O; you already have your answer, as you began with it. You’ve just affirmed your consequent. That’s your prerogative (er, KF’s or anybody’s), but it doesn’t say anything about the central question of determining design in a rational, rigorous way.

    If, on the other hand, you begin from an agnostic standpoint, uncommitted to either position — designed by intelligence or designed by impersonal natural processes — FSCI/O can’t help you. It’s useless, because it only coheres once you’ve identified the telic endpoints of the system or subsystem you are looking at.

    It’s only possibly value as a post facto bit of rhetorical tooling, then.

    That’s why the use of a deck of cards is important, as an example. We need to be able to being with something that is putatively naturally (i.e. non-intelligently) designed as a configuration, and apply KF’s metric to get to a judgment on its intelligent-designedness.

    But we are stumped, stopped, if we don’t have a presupposition about the provenance of the function. So we can’t use if for anything we haven’t already concluded is designed. And like I said, if we’ve already concluded a subsystem is designed, we have no need to even bother with FSCI/O.

    Which is just to note that KF’s “elaboration” is nothing more than an elaborate beg to the question. That is why I press for actual applications and actual measurements and calculations. That is what will discredit the idea that he’s just begging the question, and the conspicuous absence of that, and indeed an entrenched reluctance to provide that, supports my claim that it’s just question begging at work here.

  94. Dr Liddle:

    Sorry, that will not wash.

    You plainly enmeshed yourself in what is obviously a derail attempt on a thread that is SUPPOSED to be about the front-loading hypothesis.

    You provided what seemed to be the innocent aside that I accommodated by giving a simplified explanation, making a pause in a fairly busy day for that. ES then dropped the other shoe, and you neatly referred to his comments as though nothing was out of order.

    If this were a thread on something more closely related or a theme that was not significant and largely new to many of UD’s readership, and a guest post by someone who is somewhat of a critic of UD and the debates/tone that too often crop up here, I would be far more tolerant.

    But right now, this looks uncommonly like dumping garbage on your neighbour’s newly mown lawn, when he is entertaining guests. (And, remember, I just had to call Joe and PG to order. Is this any way to treat a guest? Can you understand how I feel like the parent having to correct a child who decides to act up in front of guests? [That noise you hear is foot tapping and old Mr Leathers being limbered up to be applied to the seat of learning with vigour. Six of the best is about right . . . ])

    I hope you understand why I went livid on seeing such, repeated.

    The Caribbean word for this is: broughtupcy.

    As in, the plain want of it. And, no less than FOUR UD regulars have been implicated.

    Surely, we can do better than this!

    Let me just say to you in no uncertain terms, I am using “information” in ways that are closely related to common technical uses, as common as referring to computer application document files.

    In so doing, I am building on the underlying statistical and classical thermodynamics and standard work in info theory as I have known about for over thirty years; notice my reference to Connor and Taub-Schilling. I am also building on Dembski’s expression, using a log reduction that ties his work to the atomic level quantum resources of our solar system (or the observed cosmos) in a way that in effect builds on the sort of needle in the haystack calculation used by Borel from many decades ago, in light of Dembski’s bound and Abel’s bound. This reflects into sampling theory and the reasonable results to be expected from proportionally small blind samples of very large populations.

    If you and ES really want to thrash this matter out, not just derail the thread — what this looks like now, then I suggest you go to the post no 11 in the ID foundations series, which is still open for discussion. You may be well advised to take under reckoning several earlier posts in the series, to provide background. (That is part of why I went livid yesterday afternoon on seeing ES’s stunt. If he were serious, at any time for the better part of a year, there have been open threads that could be used.)

    Good day, madam.

    GEM of TKI

  95. eigenstate:

    While I am sure that KF says the same things as I say, I will use my terminology and definitions.

    First of all, my concept of dFSCI is not qualitative: it is a metric. If you read the posts I have linked, you will see it.

    You say:

    “It’s useless, because it only coheres once you’ve identified the telic endpoints of the system or subsystem you are looking at.

    It’s only possibly value as a post facto bit of rhetorical tooling, then.”

    That’s the basic point. You are wrong here.

    Defining explicitly a function that can be objectively neasured does generate a functional subset in the set of possible outcomes. As the function objectively exists, you cannot say that we have invented if “post hoc”.

    I will refer to the function of an enzyme that amazingly accelerates a biochemical reaction, that otherwise could never happen or would be extremely slow.

    That function is objective. We need a conscious observer to recognize it and define it (because the concept itself of function is a cosncious concept). So, I am not saying that there is not a subjective aspect in the function. There is, always. What I am saying is that the function, once recognized and defined consciously, can be objectively observed and measured ny any conscious observer, for instance in a lab.

    So, I can objectively measure if an AA sequence has the function of accelerating that reaction or not.

    So, I can compute the probability of such a sequence, with such a property, emerging in a purely random system.

    That is all that dFSCI is.

    Let’s go to your examples of the lottery and the deck of cards.

    The example of the lottery is simply stupid (please, don’t take offense). The reason is the following: in a lottery, a certain number of tickets is printed, let’s say 10000, and one of them is extracted. The “probability” of a ticket winning the lottery is 1: it is a necessity relationship.

    But a protein og, say, 120 AAs, has a search space of 20^120 sequences. To think that all the “tickets” have been printed would be the same as saying that those 20^120 sequences have been really generated, and one of them is selected (wins the lottery). But, as that number is by far greater than the number of atoms in the universe (and of many other things), that means that we have a scenario where 10000 tickets are printed, each with a rnadom numbet between 1 and 20^120, and one random number between 1 and 20^120 is extracted. How probable is then that someone “wins the lottery”?

    So, if we are done with the stupid exanple of the lottery (again, please don’t tale offense), let’s go to the deck of cards.

    You have taken the time to give a specifi example of a shuffling. I appreciate that.

    So, I will simply state that your sequence has no dFSCI, because it is not functionally specified, and that therefore we cannot infer design for it.

    You say:

    Is it SPECIFIC? Yes, it is a single, discrete configuration out of a phase space of 104! = 10^166 available configurations. This configuration is as constricted as the choices get.

    Well, specific does not mean functionally specified. Each sequence is specific. If used as a pre-specification, each sequence is a good specification. But that has nothing to do with functional specification, that can be used “post hoc”.

    You say:

    Does this configuration carry out a JOB? Sure, a job is in the eye of the employer (and this is the key failure of your elaboration). You are now all set to statistically analyze the deck, or to deal from this deck as the dealer for an online friend who wants to play BlackJack with you. Or add your own use here — it’s a perfectly good and astonishingly strong key for securing your encrypted data (if I hadn’t posted it here, anyway).

    That is not a functional specification. Or, if it is, is a very wide one.

    I will be more clear. According to my definition od dFSCI, the first step is that a conscious observer must recognize and define a function in the digital sequence, and specify a way to objectively measure it. Any function will do, because dFSCI will be measured for thet function. IOWs, dFSCI is the complexity necessary to implement the function, not the complexity of the object. That is a very important point.

    So, trying to interpret your thought, I could define the following fucntions for your sequence:

    a) Any sequence of that length that can be statistically analyzed

    b) (I don’t know, you say: I don’t understand well your second point)

    c) Any sequence of that length that can be good for encrypting data.

    While I wait for a clarification about b) (or for other possible definitions of function for your sequence), I will notice that both a) and c) have practically no dFSCI, because any sequence would satisfy a) (the functional space is the same as the search space, and the probability is 1), ans all random sequences, for all I know of crypting data, would satisfy c) at least as well as your sequence (the functional space is almost as big as the search space, the probability is almost one).

    I hope that is clear.
    ____

    GP, thanks for trying to help clear up a distraction. I have redirected discussion elsewhere. If someone actually who lives in a world pervaded by ICTs and other functional technologies thinks that to differentiate function from non function on observation [and of course various measures of performance] on a case by case basis is question-begging, there is but little hope for a reasonable discussion. KF

  96. ES:

    Pardon, but I am going to be fairly direct, as it is plain that nothing else will get your attention.

    You are acting like the child who insists on acting up in front of guests, and will not take a warning glance as a hint to be better behaved. (Note, please, my remarks to Dr Liddle just now. You are on even stronger notice than she is.)

    If you further try to derail this thread, I will take disciplinary action, never mind that this is already an embarrassment in front of Genomicus, a guest here at UD. There is no excuse for your misbehaviour, even a three year old would get the hint that he is out of line.

    If you seriously want to take up the matter you have raised, there is an open thread that is relevant, as notified to Dr Liddle, here.

    As to your rudely and snidely disrespectful remark about handwaving, that speaks to basic manners problems on your part — and that swishing noise you hear is old Mr Leathers limbering up, and to a want of basic familiarity with the background for information theory, on the charitable interpretation. (On the suspicious interpretation, it is an attempt to mislead the naive onlooker who would trust you to know what you are talking about, that the more or less standard background for info theory and its extension on glorified common sense to speak of application program files that do work, or computer motherboards hooked up on a wiring diagram, or a house built in accordance with a blueprint, or a geoglyph in Amazonia, etc, is irrelevant.)

    So, which is this, inexplicable ignorance of sixty years and more of development in a major field of technology as accessible as your PC multiplied by massive social insensitivity to occasion, or outright disrespect and manipulative malice intent on thread derailing?

    (And, do you see how you are forcing me to have to deal with you in front of a guest here at UD? I hope you are thoroughly ashamed. Any more stunts like that and Mr Leathers will be applied to the seat of learning, with extreme prejudice; for cause. In case you don’t get the message, for excellent reason — did you even take a moment to look at the criticisms of UD on tone and combative focus at Genomicus’ blog? — I am LIVID. [Geno, forgive us, but this is the sort of stunt we routinely have to deal with.])

    I suggest, ES, that you go off and familiarise yourself with some basic background on info theory and similar background from serious sources, then go to the linked thread if you genuinely want to deal with a matter o the merits. And BTW, Shannon’s original 1948 paper is still a good early stop-off on this. I just did a web search and see it is surprisingly hard to get a good simple free online 101 on info theory for the non mathematically sophisticated; to my astonishment the section A of my always linked note clipped from above is by comparison a fairly useful first intro. I like this intro at the next level here, this is similar, this is nice and short while introducing notation, this is a short book in effect, this is a longer one, and I suggest the Marks lecture on evo informatics here as a useful contextualisation. Qualitative outline here. I note as well Perry Marshall’s related exchange here, to save going over long since adequately answered talking points, such as asserting that DNA in the context of genes is not coded information expressed in a string of 4-state per position G/C/A/T monomers. The one good thing is, I found the Jaynes 1957 paper online, now added to my vault, no cloud without a silver lining.

    If you are genuinely puzzled on practical heuristics, I suggest a look at the geoglyphs example already linked. This genetic discussion may help on the basic ideas, but of course the issues Durston et al raised in 2007 are not delved on.

    (As to your suggestion about Wiki, on this sort of topic, it is hopelessly compromised and given over to snide hit pieces, where corrections are swarmed down and censored out; only serious threats of legal action seem to make a difference. In short, your side of this issue has a BIG problem.)

    Let me just say this, for the benefits of onlookers who may not be able to spot what is really going on:

    Dear Onlooker,

    First, pardon the need to deal with such an interruption for a thread that was supposed to be a guest post where an important facet of design thought is explored.

    As you see above, someone is trying to challenge the basic concepts of information, specificity, functional organisation as being information rich, etc. This, in a way that is rudely distractive and disrespectful, not least to you, the interested onlooker.

    (Sorry, we too often have to deal with that here at UD, a reflection of the longstanding problems of where evolutionary materialism tends to go in society, as identified as long ago as 360 BC, by Plato in the Laws, Bk X. And, if necessary, sorry, we will have to take someone to the woodshed for an overdue educational meeting with old Mr Leathers. Not to worry, six of the best, vigorously applied, is painful but not harmful, indeed, for those who will heed it, it is even beneficial. Sorry, though, to have to deal with a scene triggered by a stunt.)

    Now, let us clarify, based on a perhaps unfamiliar expression:

    Chi_500 = Ip*S – 500, bits beyond the solar system threshold.

    Let’s unpack.

    First, I am using and measuring information in ways that are direct extensions of how we commonly speak of a Word Document of size 235 k bytes, where k is of course 2 ^ 10 = 1024, and a byte is eight bits; that takes care of I.

    The dummy variable S is a simple way to bring to bear the difference between a real document and a similarly sized file full of gibberish that does nothing: if we have no reason to objectively hold that the piece of information in view is specific, S = 0, but if we see function that depends on a fairly specific configuration or cluster of configurations, S = 1. S would be 1 for the text of this post, which is a string of ASCII characters taking 7 informational bits per character (we ignore for a moment the parity check bit that does no real functional information-bearing work, other than indicating that an error may be present.)

    The 500 bits threshold is s simple way of finding a conservative threshold for complex enough that it is unlikely that so many functional bits would emerge by lucky coin tosses or the equivalent, using he entire atomic resources of our solar system, 10^57 atoms, our effective universe for chemical level interactions, and for things that build on chemical level interactions.

    Yes, whether a coin is heads or tails is one bit of information. We could thus arrange coins to spell out this entire post using the famous ASCII code.

    And if you saw a string of 500 coins with the opening 72 characters of the post on them, you would properly conclude that that is best explained on someone taking time to do that, not tossing the coins up and lo and behold the proverbial golden calf emerges from the fire.

    (That stunt and excuse did not go over too well with Moses, and people have known on the record ever since c 1446 BC, that the notion that such complex functionally specific organisation and information emerged by chance does not pass the smell test. If you don’t like a Biblical reference, from 50 BC, Cicero has a reference that says much the same, using text as his example, not functional organisation but directly strung information in digital code. And yes, alphabetical writing is a digital code, one you have been using since you were 4 years old.)

    The real problem is that such FSCO like the golden calf, and such coded text — FSCI and even dFSCI — like a stanza from the Annals of Ennius, are present everywhere in the living cell. Aaron’s successors, however, are still trying to tell us they tossed the stuff in the fire and lo and behold, see what came out by wonderful chance.

    Surely, you can spot the problem including that the very capacity of the cell to self-replicate that is often trotted out at this point, is based on the organised use of coded functional information, forming what is called a von Neumann kinematic self-replicator.

    In short, Paley had it pegged in 1806 [cf context of the just linked], when in the ch II of his book that is almost never discussed when he is strawmannified and kicked over, he observed that the ability of a watch to self-replicate would be an additional reason to infer to masterful design.

    So, pardon while we deal with a little problem, and let us get back on track with front loading.

    Apologies again for having to deal with a stunt and a scene.

    KF

    I trust, we can now return this thread to its proper focus.

    G’morning,

    GEM of TKI

  97. Geno:

    Thank you for being so graciously understanding.

    I am however going to redirect discussion of info issues to a thread where that would be relevant, so this thread can be focussed on front loading, which is worth discussing for its own value.

    After all, those who want to learn about front-loading have a reasonable expectation that they should not have to wade through an off-topic, where there is a reasonable alternative.

    Thanks again.

    KF

  98. NOTICE: As thread owner, in the interest of a discussion of an important topic that will be new to many at UD, I have requested that discussion of a tangential and unrelated matter be taken to a still open thread where it is relevant. It is expected that discussion there will avoid the sort of curt dismissal as “nonsense” or “handwaving” or “question-begging” that I just had to correct above. In short, if you think measuring info that is functional on a measure that is proposed is nonsense, SHOW it, do not resort to curt dismissals. That is a warning that if the redirected to thread is derailed into personalities, action will be taken there too. KF

  99. kf, I apologise for my part in this derail. However, I would point out that my first post in this thread was absolutely on topic, and that all I have done is respond to other responses. Yes, we should all be disciplined, I guess, but it is the nature of internet conversations that they lead off in new directions. It’s potentially an advantage of nested formats (as we now have here) that this results in less disruption, but I think my conclusion (to make yet another tangential point!) is that unless the nesting is really clear on the page (as it is with Scoop sites, for instance, where only the title of each post is shown unless you reveal the rest of the post) I don’t really think it works. I’m going to go back to linear format for my own site, I think. But that means either making derailing a hanging offence, or simply moving off-topic posts to a more appropriate thread.

    Be that as it may, I’m happy to accept derailing as a hanging offence here, and will try to sit on my thumbs when I see an off-topic post I am itching to respond to. But I have to say, sitting on my thumbs is not my forte!

    I’ll take your advice, and respond on the other thread.

    But please do not assume that all derails are deliberate sabotage attempts. This one most certainly wasn’t. I do, as I suspect many do, given the format of this site, and the fact that interesting threads are not bumped upwards but are rapidly relegated to back pages by the flood of new posts, tend to respond to items on the “new comments” list, not always noting which thread they are comments to.

    So apologies for that, but please note plea of mitigation.

    Peace.

    Lizzie

  100. Why don’t you actually move the posts to that thread, kf, if you have that ability? Then we have all the stuff in the same place.

    Genomicus, responding to your response to me shortly! Apologies for delay!

  101. I agree that it would be nice to see KF’s proposed method and metric worked through and used as a standard ‘textbook’ example, but I also agree that it is a topic for a different thread so I’m sure KF will be more than willing to provide a worked through example as a new topic for discussion.

  102. Okay, pardon my own livid response, which is in a context that should be plain. KF

  103. FWIW, I have from the first few days of nesting requested a chrono sequence view option. I’ve been told, significant coding, no time soon. Every alternative has problems, I think. We will have to wait; it should be possible, as posts are time stamped. KF

  104. Joe: Thanks for a good response on the tangential matter, as you will see, I am redirecting discussion of the tangential issue elsewhere. KF

  105. Okay, pardon my own livid response, which is in a context that should be plain. KF

    No problem :)

  106. Thanks for your comment, Dr. Liddle.

    As I understand it, your hypothesis could be summarised as:

    What was designed was an organism, ancestral to all life, and from which all live evolved by Darwinian mechanisms, but which had characteristics such that what did in fact evolve was unlikely not to have evolved?

    That’s sort of close, but not quite close enough to what the front-loading hypothesis proposes. Firstly, the front-loading hypothesis doesn’t propose that a single cell was designed. Instead, a population of designed cells were seeded on earth (and were probably able to communicate with each other in some way). These cells contained the genes necessary for the origin of multicellular life, for example, and the genes belonging to components of molecular machines such that molecular machines to be front-loaded.

    OK, thanks for the clarification.

    Can you unpack this? How do we identify those “genes that really aren’t that important to multicellular life forms”? How unimportant is “not that important?” And “as extensive a homology with prokaryote genes” as what? What is the comparison here?

    Genes important in development in all multiceullular life, wherein deletion of them results in death or a similar fate would be considered “important genes” for multicellular life forms.

    This is somewhat problematic. Deleting the keystone of an arch will cause the arch to fall down. That doesn’t mean that the keystone was always necessary to keep up the arch, merely that it became so when other elements were removed. This is the problem with Behe’s IC concept of course – undoing something isn’t the same as doing it!

    On the other hand, if genes aren’t really that important for the existence of multicellular life, then the FLH doesn’t predict them to share as deep homology with prokaryotic genes compared with genes important to multicellular life.

    OK.

    Are you saying that if frontloading is true, the sequences that code for proteins that are essential for eukaryotes will also be found in prokaryotes, but with slight difference that mean that it codes for a different protein but one important to prokaryotes?

    In what sense would this prediction distinguish front-loading from a Darwinian scenario?

    Not quite. If front-loading is correct, then we’d expect that important proteins in eukaryotes will share deep homology with prokaryotic proteins, either in sequence similarity or similar tertiary structure. However, again, non-teleological evolution predicts this, so we go a step further: we also predict that such prokaryotic homologs will be well conserved, among themselves, in sequence identity, such that it would be hard for their basic 3D shape to be destroyed by random mutations, genetic drift, etc.

    OK. So essentially, you are saying that frontloading will product highly similar, but also highly conserved, sequences in the two different groups that nonetheless code for very different proteins? To be different from a Darwinian prediction, you would also specify, I think, that, homologous as the two sequences are, that there are no non-lethal pathways between the two, right?

    If so, this is sort of interesting, because it’s a variant on the IC story, but predictive.

    However, I think it suffers from exactly the same problem, which is that, a posteriori, we cannot establish what a lethal (or even seriously disadvantageous) intermediate step is. This is because the selection coefficient of a sequence is not simply a property of the sequence, but of a sequence within its environment, which includes the genetic environment.

    So it seems to me this remains as untestable as IC.

    Darwinian evolution doesn’t predict this at all. In fact, when it comes to molecular machines, one could even say that Darwinian evolution expects homologs of components of these molecular machines to be not be very well conserved in sequence identity, since this would make it easier for them to be co-opted into a molecular machine.

    Well, no. A sequence can be highly conserved AND co-opted for a function that subsumes the earlier function.

    For example, a sequence that is highly conserved because it serves to increases the mobility of a unicellular organism by raising it above the boundary layer (to pinch Nick Matske’s scenario for the flagellum) could be co-opted as a flagella, because the old function is subsumed by the new.

    And it can also be highly conserved, then copied, and one of the copies coopted for an unrelated function. It can also be highly conserved in one environment, then not be in a new environment, leaving it available for co-option. For instance the vitamin C gene is highly conserved in most environments, but this didn’t stop it being lost from a branch of primates, who, presumably, lived in an environment with plenty of vitamin C in the diet.

    This is my point about the importance of modelling changing environment as well as changing sequence. Which raises a whole host of unknowables. Known unknowns.

    It is my position that there were no “nudges” or “side-loading,” though of course, this position could change with new data.

    OK. A more elegant position, I think :) Not unlike Darwin’s, as it happens. Or the one he expressed at the end of “Origin”.

    Nice to talk to you! I’ve been hoping someone would do this for ages :)

    Lizzie

  107. {SNIP, you were warned. But, since I am feeling somewhat less livid, I will put the comment next thread and reply there. ES, take this as a warning that any further side tracking will be dealt with quite seriously. KF.}

  108. @gpuccio,

    That’s the basic point. You are wrong here.

    Defining explicitly a function that can be objectively neasured does generate a functional subset in the set of possible outcomes. As the function objectively exists, you cannot say that we have invented if “post hoc”.

    I will refer to the function of an enzyme that amazingly accelerates a biochemical reaction, that otherwise could never happen or would be extremely slow.

    That function is objective. We need a conscious observer to recognize it and define it (because the concept itself of function is a cosncious concept). So, I am not saying that there is not a subjective aspect in the function. There is, always. What I am saying is that the function, once recognized and defined consciously, can be objectively observed and measured ny any conscious observer, for instance in a lab.

    It’s the “conscious observer to recognize it and define it” part that is the big problem, here. The reaction acceleration is not a problem — I have no issues with identifying such a reaction as an objectively observable physical process.

    But the metric fails to be an objective metric if it depends on “conscious recognition”. If you think about why “conscious recognition” is required here by you, it should be evident that such “non-algorithmic steps” are needed because it defies objective formalization. Or to put a more fine point on it, it enables us to put our own subjective spin on it, and not just as a qualitative assessment around the edges, but as a central predicate for the numbers you may apply. That;’s why I say this is question-begging in its essence; unless one BEGINS with prior recognition (“conscious observer” recognizing function), the metric doesn’t get off the ground.

    If you BEGIN with such conscious recognition, the game’s over, and FSCI, or whatever acronym you want to use to apply to this idea, won’t tell you anything you haven’t already previously concluded about the designedness (or not) of any given phenomenon.

    So, I can compute the probability of such a sequence, with such a property, emerging in a purely random system.

    There is no such thing as a “purely random system”. “System” implies structure, constraint, rule, and process. But that’s not just being pedantic on casual speaking on your part, it’s the core problem here. The AA sequence is not thought to be emergent in a random way.

    There’s a fundamental difference between one-time “tornado in a junkyard” sampling of a large symbol set from a huge phase space, and the progressive sampling of that same large symbol set as the result of a cumulative iteration that incorporates positive and negative feedback loops in its iteration.

    So the probability of the sequence is NOT a matter of 1 shot out of n where n is vast. If, in my card deck example, we keep after each shuffle, the highest two cards we find out of the 104 (per poker rules, say), and set them aside as “fixed” and continue to shuffle the remaining cards, and repeat, we very quickly arrive at very powerful and rare (versus the 104 card phase space) deck after just a few iterations.

    That’s brutally quick as an “iterative cycle”, but it should convey the point, and the problem with “tornado in a junkyard” type probability assignments.

    Let’s go to your examples of the lottery and the deck of cards.

    The example of the lottery is simply stupid (please, don’t take offense). The reason is the following: in a lottery, a certain number of tickets is printed, let’s say 10000, and one of them is extracted. The “probability” of a ticket winning the lottery is 1: it is a necessity relationship.

    But a protein og, say, 120 AAs, has a search space of 20^120 sequences. To think that all the “tickets” have been printed would be the same as saying that those 20^120 sequences have been really generated, and one of them is selected (wins the lottery). But, as that number is by far greater than the number of atoms in the universe (and of many other things), that means that we have a scenario where 10000 tickets are printed, each with a rnadom numbet between 1 and 20^120, and one random number between 1 and 20^120 is extracted. How probable is then that someone “wins the lottery”?

    No one I’ve ever read on this supposes that all the possible permutations have been generated, nor that they need be generated for the theory to hold. Note the phase space for for the double deck of 104 cards – there are 10^166 possible sequences there, more combinations than you amino acid sequences.

    The question is not a math question, wondering how likely 1 chance in 20^120 is, that’s evident in the expression of the question. The question is the “recipe” for coming to an AA sequence that achieves something we deem “functional”. If you have a a cumulative filter at work – environmental conditions which narrow the practical combinations in favorable ways, stereochemical affinities that “unflatten” the phase space so that some permutations are orders of magnitude more likely to occur, including permutations that contribute to the functional configuration we are looking at, then the “1 in 20^120″ concern just doesn’t apply. It’s not an actual dynamic in the physical environment if that’s the case.

    Or, cumulative iterative processes with feedback loops completely change probability calculations. That is why scientists laugh at the absurd suggestion that these processes are like expecting a tornado in a junkyard to produce a 747. Your “100000 in 20^120″ depends on this same kind of simplistic view of the physical dynamic.

    So, I will simply state that your sequence has no dFSCI, because it is not functionally specified, and that therefore we cannot infer design for it.

    You say:

    Is it SPECIFIC? Yes, it is a single, discrete configuration out of a phase space of 104! = 10^166 available configurations. This configuration is as constricted as the choices get.

    Well, specific does not mean functionally specified. Each sequence is specific. If used as a pre-specification, each sequence is a good specification. But that has nothing to do with functional specification, that can be used “post hoc”.

    This is, again, where the question-begging obtains. If you are going to assert that it is only “functionally specified” if it’s the product of intelligent choices or a will toward some conscious goal, then (d)FSCI *is* a ruse as a metric, not a metric toward investigating design, but a means of attaching post-hoc numbers to a pre-determined design verdict.

    Which just demands a formalism around “functionally specific”? That seems to be the key to what you are saying. Can you point me to some symbolic calculus that will provide some objective measurement of a candidate phenomenon’s “functional specifity”? If you cannot, and I think you cannot, else you’d have provided that in lieu of the requirement of a conscious observer who “recognizes” functional specificity, then I think my case is made that you are simply begging the question of design in all of this, and (d)FSCI is irrelevant to the question, and only a means for discussing what you’ve already determined to be designed by other (intuitive) means.

    That is not a functional specification. Or, if it is, is a very wide one.

    I will be more clear. According to my definition od dFSCI, the first step is that a conscious observer must recognize and define a function in the digital sequence, and specify a way to objectively measure it. Any function will do, because dFSCI will be measured for thet function. IOWs, dFSCI is the complexity necessary to implement the function, not the complexity of the object. That is a very important point.

    This renders dFSCI completely impotent on the question of design, then! That requirement — that a “conscious observer must recognize and define a function in the digital sequence” — you’ve already past the point where dFSCI is possibly useful for investigation. Never mind that the requirement is a non-starter from a methodological standpoint – “recognize” and “defined” and “function” are not objectively defined here (consider what you’d have to do to define “function” in formal terms that could be algorithmically evaluated!), even if that were not a problem, it’s too late.

    dFSCI, per what you are saying here, cannot be anything more than a semi-technical framework for discussing already-determined design decisions. And even then, you have a “Sophie’s Choice” so to speak in terms of how you define “function”. Either you make it general and consistent, in which case it doesn’t rule out natural, impersonal design processes (i.e. mechanisms materialist theories support), or you define ‘functional’ in a subjective and self-serving way, gerrymandering the term in such a way as to admit those patterns that you suppose (for other reasons) are intelligently design, and to exclude those (for other reasons) which you suppose are not.

    So, trying to interpret your thought, I could define the following fucntions for your sequence:

    a) Any sequence of that length that can be statistically analyzed

    b) (I don’t know, you say: I don’t understand well your second point)

    c) Any sequence of that length that can be good for encrypting data.

    While I wait for a clarification about b) (or for other possible definitions of function for your sequence), I will notice that both a) and c) have practically no dFSCI, because any sequence would satisfy a) (the functional space is the same as the search space, and the probability is 1), ans all random sequences, for all I know of crypting data, would satisfy c) at least as well as your sequence (the functional space is almost as big as the search space, the probability is almost one).

    I hope that is clear.

    I think you are close to getting my point. A random sequence is highly function, just as a random sequence. It’s as information rich as a sequence can be, by definition of “random” and “information”, which means, for any function which requires information density — encryption security, say — any random string of significant length is highly functional, optimally functional. If my goal is to secure access to my account and my password is limited to 32 characters, a random string I generate for the full 32 characters is the most the most efficient design possible.

    Sometimes the the design goal IS random or stochastic input. Not just for unguessability but for creativity. I feed randomized data sets into my genetic algorithms and neural networks because that is the best intelligent design for the system — that is what yields the optimal creativity and diversity in navigating a search landscape. Anything I would provide as “hand made coaching” is sub-optimal as an input for such a system; if I’m going to “hand craft” inputs, I’m better off matching that with hand-crafted processes that share some knowledge of the non-random aspects of that input.

    When you say “That is not a functional specification. Or, if it is, is a very wide one.” I think that signals the core problem. It’s only a “wide” specification as a matter of special pleading. It’s not “wide” in an algorithmic, objective way. If you think it is, I’d be interested to see the algorithm that supports that conclusion.

    Which is just to say you are, in my view, smuggling external (and spurious) design criteria into your view of “function” here. This explains why you do not offer an algorithm for determining function — not measuring it but IDENTIFYING it. If you were to try to do so, to add some rigor to the concept, I believe you would have to confront the arbitrary measures you deploy and require for (d)FSCI. If I’m wrong, providing that algorithm would be a big breakthrough for ID, and science in general.

  109. Hey there,

    This is somewhat problematic. Deleting the keystone of an arch will cause the arch to fall down. That doesn’t mean that the keystone was always necessary to keep up the arch, merely that it became so when other elements were removed. This is the problem with Behe’s IC concept of course – undoing something isn’t the same as doing it!

    Deletion of a gene isn’t the only way to detect if a gene is of great importance to multicellular life forms. Firstly, if we delete a gene from many genomes belonging to many different multicellular life forms, and the results were practically the same, it would at least make us suspicious that it plays a key role in multicellular life. Further, if the gene is well conserved in sequence identity across multicellular taxa, this would strengthen the position. So, if we analyze a gene based on various criteria, and all these criteria support the suspicion that this gene is important, then there’d be good reason for thinking that it is.

    OK. So essentially, you are saying that frontloading will product highly similar, but also highly conserved, sequences in the two different groups that nonetheless code for very different proteins? To be different from a Darwinian prediction, you would also specify, I think, that, homologous as the two sequences are, that there are no non-lethal pathways between the two, right?

    Okay, let me try to make this particular prediction a bit clearer. Let’s use an actual example, tubulin for example. Tubulin is almost certainly a very important protein in eukaryotes. Tubulin is not found in prokaryotes, but it shares structural and some sequence homology with a prokaryotic protein called FtsZ. The front-loading hypothesis predicts that if we take a bunch of FtsZ sequences and align them, a higher degree of sequence similarity will be observed among these FtsZ sequences than if we did the same with the average prokaryotic protein. Darwinian evolution makes no such prediction. Under a Darwinian framework, there’s no reason at all for FtsZ to be more well conserved in sequence identity than the average prokaryotic protein.

    Regarding the latter part of your response, I think it’s a bit of an unrelated topic – whether it’s easy for highly conserved proteins to be co-opted. To clarify what’s going on here, basically I’m just saying that it’d be easier (in the sense that it wouldn’t be as likely to disrupt cellular function) to co-opt a loosely conserved protein than a highly conserved sequence.

  110. psst, gpuccio and eigenstate:

    We are supposed to be here.

    Get your butts out of here if you want to avoid a leathering :)

  111. Hello,

    How is this not just a tautology, a self-fulfilling prediction? My reaction to this is, “Of course, the genes we deep most deeply anchored in the hierarchy are BY DEFINITION the “most important”. That’s what “most important” *means* in terms of the hierarchy.

    Well, actually, to tell if a gene is important to eukaryotes we don’t need to look at its entire phylogeny. There are many ways to see if a gene is important: deleting the gene, checking its degree of sequence conservation across taxa, identifying its role in the cell, etc.
    Hope this clarifies things!

  112. @kf,

    As to your rudely and snidely disrespectful remark about handwaving, that speaks to basic manners problems on your part — and that swishing noise you hear is old Mr Leathers limbering up, and to a want of basic familiarity with the background for information theory, on the charitable interpretation. (On the suspicious interpretation, it is an attempt to mislead the naive onlooker who would trust you to know what you are talking about, that the more or less standard background for info theory and its extension on glorified common sense to speak of application program files that do work, or computer motherboards hooked up on a wiring diagram, or a house built in accordance with a blueprint, or a geoglyph in Amazonia, etc, is irrelevant.)

    I don’t know why you insist on such cryptic language, but as far as I can parse this, you are referring to, uh, getting ready to *spank* me with a leather something-or-other? “Mr. Leathers” doesn’t register for me, except for a maker of protective clothing for motorcycle riding. That doesn’t seem to fit in with what you are saying, so my best guess you are making some kind of spanking-with-leather reference?

    That is quite a peculiar way to respond if that’s the case.

    How about we keep this at a grown up level and talk about symbols, phase spaces, specifications, functions and objective metrics as means of detecting design or not.

    Handwaving, by the way, is certainly not something to be proud of for the hand-waver as a tactic in debate. But it’s not the least bit rude or bad manners to point handwaving out, anymore than it is rude to point out that you have not answered key and relevant questions put to you. It makes you uncomfortable perhaps (it does me when it’s put to me that I’m engaging in handwaving), but that’s the price of handwaving for whoever engages in it. Either you can reduce this to applied math we can look at and test for precision and coherence, or you cannot. You post lots and lots about the importance and power of your metric, but you can’t or don’t apply it in a way a fair observer or critic can see and test.

    That’s handwaving. There’s nothing rude in pointing it — it (hopefully) drives the discussion toward substance and constructive discussion: applied math and substance supplied for your ambitious and pervasive claims, in this case.

  113. But “checking its degree of sequence conservation across taxa” IS going to indicate “deep homology” isn’t it?

    It seems to me that your dependent and independent variables are confounded :)

  114. Hi Elizabeth,

    I get that KF said he wanted to move the sub-thread discussion over there, but he’s not done so. Why not, I don’t know, maybe it’s just he hasn’t gotten around to it, yet.

    I certainly don’t have the ability to move things between threads on this blog.

    If he wants to complain because I haven’t been able to move this cluster of posts to a different thread, well, more impossible demands are made of the ID critics here, I guess. But I can’t do what I can’t do.

  115. Yeah, I know.

    But I did (badly) copy your post into that thread in one of mine to at least show willing :)

    I think we are both innocent. Let’s blame WordPress.

  116. No, because you’re not checking its degree of sequence conservation across all phyla, just across metazoan or eukaryotic or vertebrate phyla.

  117. eigenstate (and Elizabeth):

    Let’s do it this way: I am going to answer you in the other thread, and we can go on from that.

  118. 118

    eigenstate:

    “So when you wonder what set the particular mappings for the four bases”

    More of what I was getting at pertains to the particular specification of the sequences. To re-cap (paraphrase) what you are saying:

    If we walk up to a roulette wheel and look at the last 10 spins posted on the board:

    {2,4,10,00,17,23,21,13,29,12}

    that we shouldn’t be surprised to find that specific sequence because any 10 spin roulette sequence is equally probable, and we just happened to stumble upon this particular one. Absolutely agree. However if we then decide to use that specific sequence to represent some other thing:

    var 2,4,10,00,17,23,21,13,29,12 = x

    var atg,gtc,aat,tta = protein(a)

    then setting that specific 10 spin roulette sequence (or specific a,t,g,c string) as a variable would seem to require an intelligence. So it is not so much the finding of that specific sequence that matters, the sequence itself can be perfectly accounted for by physics, rather it is the assignment or setting of that specific sequence as a variable that will execute some function that seems unusual. [This group of sequences cut from the matrix and assigned values that execute some function are what KF refers to as the "islands of function"]. My issue is that of all the possible a,t,g,c string combinations, only a certain few strings execute a function, and the rest do nothing. This would IMO indicate a fundamental intelligence.

    —-

    So much of the analogies I here, (liz and the snowlflakes or the tree rings) do not apply unless:

    var snowflake.pattern.1 = worms mate

    var snowflake.pattern.2 = worms fight

    var tree.rings.pattern.1 = birds dry hump

    If these specific snowflake/tree-ring patterns do not execute some function, then the “stumbling upon the roulette wheel” analogy simply applies.

  119. @Genomicus,

    Thanks for the reply. I think it does clarify things somewhat, but that clarity just confirms the problem the non-predictiveness (in the scientific sense) of you putative prediction. Having to check ANY of the tree as means fo FORMULATING your prediction is problematic, and reduces your proposed prediction to a tautology.

    Deleting the gene, and checking conservation across taxa both commit precisely the error I mentioned with regard to evolutionary “predictions” regarding “survival of the fittest”. Like those who look at the truism of “survival of the fittest”, and suppose that is a prediction of evolutionary theory rather than a tautology (admittedly it’s somewhat nuanced because that tautology *is* entailed by the theory, it’s just an entailed definition, given the processes the theory proposed.), your first two ways are commit the error I was pointing to above.

    If you have to go LOOK at the tree and say “Yep, those are the important ones that got front loaded” — even a small part of the tree, you’ve invalidated that as a prediction. If you have to delete genes to see what happens in term of differential survival, you’re out of luck as well.

    On your third case, I can see some daylight, conceptually. If you are able to identify functions and roles for particular genes and arrive at an INDEPENDENT basis for what these genes are “important”, and those other genes are not, then you have created the proper predicate for the prediction you are hoping to establish. If you can identify, say, some group of “most important genes”, and from their particular stereochemistry or other features establish why your filter identifies them as important and “overarching” for some future phylogeny, then, happily, when you set that aside as an entailed product of your theory, you have a prediction which can actually be tested by then looking at the actually observed hierarchy.

    So if that way is what you meant, then a) it’s odd, that you offered two other “ways” in your list which appear to be valid responses in your view, but which render your prediction void as a prediction, and b) you have an enormous, but interesting and well grounded challenge ahead of you in producing the objective filter that establishes gene importance WITHOUT looking at the test evidence (the observed hierarchy).

    Just to make sure that’s clearly, pedagogically, consider a computer simulation of key dynamics in evolution. Without depending on any input from the observed hierarchies in the evidential record, we can implement an algorithm that provides for reproduction with variation, speciation, fitness testing and accumulation of beneficial traits.

    That will produce nested hierarchies. It’s the deterministic consequence of the dynamics involved (implemented in software, here). That non-dependence on hierarchy data, as a basis for establishing nested hiearchies, and demonstrating WHY those nested hierarchies occur, is why the evolutionary biologist’s prediction of nested hierarchies is an actual prediction not a tautology.

    We don’t need to implement a program in software to get there, it can be done in maths, but it’s a clear way to see the entailments of the theory come to production.

    Similarly, then, what program could you write that identified the “importance of genes” when run across available candidates? It would need to make its judgments independent of a retrospective look at the observed hierarchy.

    Is this something your proposed prediction can support? I’d be impressed if so.

  120. Hello,

    I’m not sure why you think that checking a given gene’s sequence conservation across metazoan lineages would reduce my front-loading prediction to a tautology. Please elaborate. Thanks.

  121. As I described above, because it’s circular, of course. If you are using the sequence conservation as the BASIS for your prediction (in identifying what genes are “important”) as well as the TEST of that SAME prediction, you are simply giving us a definition, not a prediction in the scientific sense of the term.

    Here’s an example from outside of genetics that illustrates this point. If I develop a theory regarding who are the best baseball hitters, and I predict that the best baseball hitters will have a) high batting averages (BA), high on-base percentages (OBP), and lots of RBIs, I haven’t made a prediction as part of my theory, but rather just offered a trivial tautology.

    High batting average, and lots of RBIs,etc. are how we DEFINE good hitting and measure it. So it’s not a prediction, scientifically, but a truism — it cannot be false; by definition, the best hitters will be the ones who have the best hitting stats.

    Back to your prediction, if you are going to “predict” that the most important genes will be most conserved, you’ve committed the same error. It’s not a prediction, but just a definition restated. Most conserved is what we MEAN when we say “most important”.

    Unless you can provide an independent basis for predicting which genes will be more conserved (and thus “more important”), you’ve done nothing more than the baseball theorist who predicts the best battes will have the best stats.

    Now, if I, as a baseball theorist, were to predict that batters who have blue eyes and are taller than 6’2″ would be the best batters, which is to say they would have the best stats, then I would have a substantial prediction. Whether it is entailed by my model isn’t established yet, but assuming for the moment it is, this is a non-tautologous prediction. It may fail as a prediction against empirical tests, but it is structurally and epistemically sound.

    Can you see the difference between the two baseball predictions?

    Back to your case, checking a given gene’s sequence conservation is just trafficking in tautology. Your mention earlier about “identifying its role in the cell” might be analogous to the valid baseball prediction (tall, blue eyed batters perform best because the combination of that height and blue irises produces maximal tracking, targeting and power in hitting a baseball, etc.).

    I haven’t seen any expansion on that angle from you, but am open to it (it seems like hard work to supply, so I’m not really expecting such a treatise). But as you have it with checking gene conservation, your criterion for forming your prediction is the same as your means of testing that same prediction. It’s guaranteed to be “true”, any time, for any case, for the same reason “the best hitters will have the best stats” is an always-true prediction.

  122. F/N: The redirected discussion in the thread IF Founds 11, is here on.

    An examination of what FSCO/I means in the case of the ribosome is here.

    Now, let us continue to discuss front-loading.
    G’ mornin’

    KF

  123. As I described above, because it’s circular, of course. If you are using the sequence conservation as the BASIS for your prediction (in identifying what genes are “important”) as well as the TEST of that SAME prediction, you are simply giving us a definition, not a prediction in the scientific sense of the term.

    Well, actually, I’m not sure you quite understand exactly what this prediction entails. To test if a gene is of importance in eukaryotic lineages, for example, we can check its degree of sequence conservation in terms of sequence identity across eukaryotic taxa. We’re limiting our analysis strictly to eukaryotic taxa, not prokaryotic taxa. If, then we find that a gene is important in eukaryotic taxa through this method, from a front-loading perspective we would predict that this gene will share deep homology with prokaryotic genes. This isn’t tautological in any way. From the basis of pure logic, there’s no reason why a highly conserved eukaryotic gene should share deep homology with prokaryotic genes any more than a not-so-highly-conserved eukaryotic gene. Thus,I’m afraid I’m not seeing any tautology here.

  124. But if it “shares deep homology” with prokaryotic genes, then, by definition, it is highly conserved in both groups, suggesting not only common descent, but that one group is ancestral to the other.

    Why would this suggest “front-loading” rather than Darwinian evolution?

  125. Let me rapidly rephrase that last post (hit submit too early!)

    “But if it “shares deep homology” with prokaryotic genes, then, by definition, it is highly conserved in both groups, suggesting not only common descent, but that one group evolved from the other following a small change in the sequence that changed its function.

    Why would this suggest “front-loading” rather than Darwinian evolution?”

  126. Dr. Liddle:

    For some reason, I don’t find any reply button below your comments, so I’m replying here.

    “But if it “shares deep homology” with prokaryotic genes, then, by definition, it is highly conserved in both groups, suggesting not only common descent, but that one group evolved from the other following a small change in the sequence that changed its function.

    Why would this suggest “front-loading” rather than Darwinian evolution?”

    I think there’s a bit of a misunderstanding going on here. The prediction I proposed goes like this: you find a gene in all eukaryotes, and by comparing its sequences across various eukaryotic taxa, you find that it’s probably very important to eukaryotes. On the other hand, you find a gene in all eukaryotic taxa, but it doesn’t seem to be all that important. Front-loading predicts that the former gene is far more probable to share deep homology with prokaryotic genes than the latter gene.

  127. Geno:

    Beyond a certain level of nesting [about four sub-dots in], the “reply” option vanishes. Just one more of the “it’s not a bug it’s a feature” points. We need that chrono timeline view option!

    Anyway, let’s be thankful, WP has many good points!

    KF

  128. Genomicus said: “The prediction I proposed goes like this: you find a gene in all eukaryotes, and by comparing its sequences across various eukaryotic taxa, you find that it’s probably very important to eukaryotes. On the other hand, you find a gene in all eukaryotic taxa, but it doesn’t seem to be all that important. Front-loading predicts that the former gene is far more probable to share deep homology with prokaryotic genes than the latter gene.”

    So, what you are saying here is the following: A gene that is highly homologous across eukaryotic taxa is more likely to also be highly homologous in prokaryotic taxa than a gene that is not highly homologous among eukaryotes.
    That would be a pretty straightforward prediction of ANY theory that assumes common descent. I don’t understand why you think that only frontloading would make this prediction?

  129. Genomicus continues the discussion here. Accordingly, I will close off comments here. KF