Uncommon Descent Serving The Intelligent Design Community

“Competence” in the Field of Evolutionary Biology


Thomas Cudworth in his post here referenced “…being competent in the field of evolutionary biology.”

My question is, What does it mean to be “competent” in the field of evolutionary biology?

It seems to me that it would mean providing hard empirical evidence that the mechanism of random variation/mutation and natural selection, which is known to exist (e.g., bacterial antibiotic resistance), can be extrapolated to explain the highly functionally integrated information-processing machinery of the cell — at a very minimum! This empirical demonstration should be a prerequisite before we even begin to entertain speculation about how this mechanism produced body plans and the human brain.

Yet, the theoretically most “highly competent” evolutionary biologists never even attempt to address this requirement. They just wave their hands, make up increasingly bizarre, mathematically absurd, unsubstantiated stories out of whole cloth (like co-option), declare that the solution has been found, and that anyone who questions them is a religious fanatic.

This is the antithesis of legitimate scientific investigation.

My standard of competence in the field of evolutionary biology is exemplified by Michael Behe, who has actually empirically investigated the limitations of the creative powers of the Darwinian mechanism. The conclusion is clear: it can do some stuff, but not much of any ultimate significance, and it cannot possibly be extrapolated to explain what Darwinists expect us to accept on blind faith, in defiance of all reason and evidence.

Comments
Elizabeth is fine :) Not many people do, but I do rather like it.
Elizabeth Liddle
July 15, 2011 at 09:40 AM PDT
Elizabeth: May I call you Elizabeth? I like that better... Well, I am happy to know more about you. As for me, I am a medical doctor in Italy, with a great interest in statistics, biology, genetics, practical philosophy, and many other things. ID has been a great passion for me, and has opened very important vistas on reality. It has also given me the opportunity to deepen my understanding of molecular biology and genetics, even if they are not strictly my field. All my interventions here are completely on a "non-authority" basis. I don't believe in authority in science, but competence is certainly a very high value. And, luckily, nobody has ever tried to call me "Dr."! :)
gpuccio
July 15, 2011 at 09:38 AM PDT
It's been pointed out to me that as people keep addressing me as "Dr Liddle" I should probably explain my academic background. I'm somewhat embarrassed to be constantly so addressed, and only ever sign myself "Lizzie". I use my real name (sans salutation) as my login name, and I am not anonymous. Early on, I was addressed as Ms Liddle, and I somewhat facetiously explained that I was (IIRC) usually known as Mrs, or Lizzie, and occasionally Dr! So can I here and now invite everyone to drop the salutation - I am more than happy with Lizzie. But, as it is probably clear to most people that I do have a PhD (as I'm sure many others do here), let me give a brief academic CV: My first degree was in Music with Education (major, Music), followed by a postgraduate certificate in music education (PGCE). I taught high school music for a bit, and also did a lot of performing (in the field of "early music"). I then did a second bachelor's degree in architecture, followed by some years working freelance in various architectural practices, and continuing to work as a professional musician, also traveling to Basel to study viola da gamba with Jordi Savall! That was all fun. Then I finished my architecture training (including a masters in Urban Design) and moved to Vancouver, Canada, and had a much longed-for surprise baby (on my last egg!). My "miracle baby". Then I tried to get a job in architecture in Vancouver in the middle of a property slump :(. I managed to get lots of music gigs though, which was good. But I came to the conclusion that being a touring musician wasn't really compatible with motherhood (too much when I had work; too little when I didn't) and thought I'd go back to school, pursue my lifelong interest in educational psychology, do a masters, maybe a PhD in that, and practice as an Ed Psych. I did most of that Masters (at UBC), then moved back to the UK, somewhat unexpectedly, and embarked on a PhD in cognitive psychology/neuroscience, in a motor control lab. Since completing my PhD I have worked as a researcher in the field of neuroscience, mostly in brain imaging, in connection with research into mental disorders. I am not a biologist, although of course my work is biological, nor a geneticist, although I work with genetic data. I'm not a programmer, though I program, and I'm not a computational modeller, though I have written, and published, computational models! I've always been fascinated by biology, and evolution, but have no more than an enthusiast's lay understanding, although as it impinges on my field (as it does on every biological field!) I read especially voraciously in that area. My area of research is learning (that's a lifelong theme) and timing (also lifelong), specifically the learning of timing. Learning models are, in effect, evolutionary algorithms (or, most of them are), and so computational aspects of evolution are something that I am familiar with. I also use, and write, classifier algorithms, which are learning algorithms that are essentially Darwinian. But that's it. I don't wish to give the impression I have any special authority endowed by my PhD, except perhaps the authority that comes from the skill, shared by many here, of using the scientific method - operationalising hypotheses, casting them as statistically testable predictions, figuring out how to measure variables, handling data, etc. I have also, as some people know, written some children's books - not many. One is about Heaven ("Pip and the Edge of Heaven"); the other two are in German, and were part of a commissioned series on "questions children ask". They are fiction-with-a-purpose, if you like. I also do some musical composing. I think that's it. Hope that has cleared things up. And please call me Lizzie :) Cheers, Lizzie
Elizabeth Liddle
July 15, 2011 at 01:29 AM PDT
Sorry, lost my bookmark for this one. Will be back, but probably not till the weekend.
Elizabeth Liddle
July 14, 2011 at 05:04 PM PDT
Elizabeth Liddle, where are you??
PaV
July 14, 2011 at 01:37 PM PDT
Elizabeth: First of all, two more papers that will help in our debate: The Evolutionary History of Protein Domains Viewed by Species Phylogeny http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0008378 and Sequence space and the ongoing expansion of the protein universe http://www.nature.com/nature/journal/v465/n7300/full/nature09105.html (this one is not free, but there is a summary of it here: http://www.lucasbrouwers.nl/blog/2010/05/the-big-bang-of-the-protein-universe/). Well, I would like to sum up my personal scenario, based in part on these sources. Let's begin with a clarification about FUCA and LUCA. Here I have to make specific epistemological choices. a) LUCA: Is the hypothesis of a last universal common ancestor a just so story? No, it is not. It is a reasonable inference derived from what we can observe in the proteome today (that is, from facts). I am convinced that the analysis of protein homologies can give great insight into their evolutionary links. So, if one believes in common descent (and I do), it is perfectly reasonable to isolate those proteins that are credibly very ancient, and common to all known living beings. Those proteins and protein families number in the hundreds and hundreds, and include many fundamental and complex functional systems (for instance, those of DNA replication, transcription, protein synthesis, and so on). If the observed homologies tell us that those families share a common ancestor, and have not arisen separately during the more recent evolutionary nodes, then it is reasonable to assume that some progenitor to all modern lines of living beings must have existed. That is LUCA. We can not only assume that it existed, but also make some inference on its nature and time of existence. As far as I can understand, there is no reason to think that LUCA was anything much different from a prokaryote, some bacterium or archaeon. It is very likely that it existed in very old times, probably not too long after the conditions on earth became compatible with life. I suppose that an exact timing of LUCA, anyway, is still controversial. b) FUCA: Is the hypothesis of one or more simpler progenitors of LUCA a just so story? Yes, it is. There is nothing based on facts that supports that theory. Let's say that LUCA could well have been FUCA, and there is nothing against that simple possibility, except the resistance to accepting that life, at its emergence, may have already been very complex. Therefore, the existence of simpler precursors to LUCA is driven by ideology, not facts. You may have understood, at this point, that I am not a big fan of any of the existing OOL theories, from the primordial soup to the RNA world. I believe that all of them are at present just so stories. They can well be pursued, but there is no reason to accept them as credible scientific theories. Having stated this certainly unpopular belief, I would like to go on a little. From the above paper about protein domain evolution, we can derive the reasonable hypothesis, which I had already quoted without giving a specific reference, that more than half the basic protein information was already present in LUCA. You can see in Table 1 of the paper that, according to the authors' analysis, 1984 domains were already represented at the beginning of cellular life, out of 3464. But it is also true that the remaining ones emerged afterwards, most of them at the node of bacterial emergence (144) and eukaryote emergence (492), and gradually less at later points, down to about 10 after the mammals node.
That is interesting, isn't it? A few considerations. a) If functional protein domains were so common and numerous, how is it that all the powerful evolutionary search has found only a few thousand of them? And more than half of them at the stage of LUCA, very near to OOL? And yet, very important functional modifications have happened after that, not least the emergence of humans. But, for some reason, those modifications were realized with only minor new discoveries of basic functional protein structures. I would say, one of two: either not many really useful protein structures remain to be found (my favourite), or for some reason life as we know it does not need the rest of what could be found. You choose. Let's go to the big bang theory of protein evolution, and to neutral mutations. This is how I see things. New protein structures (folds, superfamilies, domains, according to how you choose to define them) appear suddenly at specific points of evolution. Probably, when they are needed for what is being engineered (OK, this is an ID interpretation). A new fold, let's say, appears in some place inside the "functional island" of the sequences that can express that function. After that, what happens? The sequence "evolves". But what does that mean? In many cases, the function does not evolve at all: it remains the same, and the folding remains more or less the same (with minor adaptations). But the sequence changes. The more distant species are in time, the more the sequence of the same fold, with the same function, changes. In some cases, it changes so much that it becomes almost unrecognizable (less than 10 - 30% homology). Why? I am not sure, but the most likely explanation is: neutral mutations and negative selection. Plus a rugged landscape of function, which makes some mutations possible only if other, compensatory mutations have happened before, and therefore slows down the process. That's what neutral mutations do: they change the sequence, and allow the proteins to "diverge" inside a functional island, where negative selection preserves the function. But when mutations reach the "border" of the functional island, and function is compromised, then mutations are no longer neutral, and they are usually eliminated. So, the original functional structures emerge with all their functionality (are engineered, we would say in ID). But neutral mutations and negative selection allow divergence without significant loss of function. I am obviously aware that, in many other cases, functional divergence also happens within a protein superfamily. That is usually obtained by relatively minor tweaking at the active site, while the general fold is often maintained. That case is analyzed (IMO quite well) in the second Axe paper I quoted. Are these variations, usually of a few AAs, microevolution or macroevolution? Are they realized by a purely Darwinian mechanism? I don't know, but I have my doubts. I can anyway say that they are certainly much nearer the reach of Darwinian theory than the emergence of basic protein domains. Well, I think that's enough for today. I have probably been too long. And here (in Italy) it's time to rest.
gpuccio
July 13, 2011 at 01:54 PM PDT
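The divergence-within-an-island scenario gpuccio sketches above can be illustrated with a small toy simulation. The Python sketch below is purely illustrative and assumes a hypothetical "functional island" defined by an arbitrary set of conserved positions: mutations at unconstrained sites are accepted as neutral, while mutations at conserved sites are removed by negative selection. No real biochemistry is modelled; it only shows overall identity decaying while the constrained core stays fixed.

```python
import random

# Toy illustration (not a biological model): the "functional island" is the set
# of sequences that keep a fixed subset of conserved sites unchanged.  Mutations
# at non-conserved sites are accepted (neutral drift); mutations hitting a
# conserved site are rejected (purifying / negative selection).
AAS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(1)

length = 100
conserved = set(random.sample(range(length), 30))   # hypothetical 30% conserved core
ancestor = [random.choice(AAS) for _ in range(length)]
seq = list(ancestor)

def identity(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

for _ in range(20000):                 # many generations of single point mutations
    pos = random.randrange(length)
    if pos not in conserved:           # neutral change: accepted
        seq[pos] = random.choice(AAS)
    # else: deleterious, eliminated by negative selection (sequence unchanged)

print("overall identity to ancestor: %.0f%%" % (100 * identity(seq, ancestor)))
print("identity at conserved sites: 100% by construction")
```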
Dr Liddle: The known replicators joined to metabolic cells -- the relevant case for life as we observe it and can reconstruct it on the conventional timeline to 3.2 - 3.8 BYA -- are based on vNSRs linked to metabolic systems. You may be able to construct something else, but the evidence is that when we look at such self-replication as an additional facility, we are looking at irreducibly complex, symbol-based representational systems. Things that are well beyond the 1000-bit threshold for the cosmos as a whole. Where in fact, on a case by case basis, a solar-system-scaled site is the credible unit of study. So, the issue here is to get to the first functional body plan, without which the issue of differential reproductive success on different metabolism-oriented function in an environment does not even obtain. Multiply this by the known architecture of embryological development from an initial cell, and you run head on into the islands of function issue, given the easily perturbed embryological development processes. This issue is, in my considered opinion, the decisive one. GEM of TKI
kairosfocus
July 13, 2011 at 09:34 AM PDT
kf: I don't think that "life-forms reproduce" is past its sell-by date at all! I do understand that you consider it refuted, but I certainly do not! I've given several examples of self-replicators that are far simpler than your von Neumann self-replicator, and you have simply rejected them. The assertion that the minimum entity capable of Darwinian evolution is a von Neumann self-replicator is simply that - an assertion. But I'm happy to leave you with your fine-tuning argument - I readily concede that life is unlikely in a universe lacking carbon-weight elements. I don't think the fine-tuning argument actually works very well, but it's not what I'm arguing here, or what any "Darwinian" argues. Darwinian evolution rests on (at least) two premises: that physics and chemistry as we know it existed; that self-replicators, capable of replicating with variance in the efficiency with which they self-replicate, existed. But given those two premises, Darwinian evolution seems well up to explaining what we observe in the living world. And if you want to argue that the simplest self-replicator capable of Darwinian evolution is a von Neumann self-replicator, be my guest :) I'm not yet persuaded :)
Elizabeth Liddle
July 13, 2011 at 06:25 AM PDT
PS: And before someone trots out the long-past-its-sell-by-date "life forms reproduce" objection, remember, this is what is needed for the kind of "cell based, metabolising, von Neumann self-replication as additional capacity" entity involved. This is evidence of design of cell-based life, which goes on top of the evidence of design of a cosmos that is fine-tuned and so set up to facilitate C-chemistry cell-based life, and which in turn makes design the best candidate to explain the additional FSCI in body plans.
kairosfocus
July 13, 2011 at 05:53 AM PDT
GP: The issue of islands of function starts with proteins but goes beyond there. No one in his right mind would imagine that a tornado in a junkyard will form a 747 jumbo jet. But it will not form a single instrument on its panel either, not even a basic D'Arsonval moving-coil galvanometer. For the same basic reason. And in fact I would be astonished to see such a tornado winding the 50-turn fine wire coil, putting in the leads and inserting the spindle and jewels at the ends correctly. As to matching that to the magnets and iron core etc. to get the meter to work, much less needle, spring and scale . . . Don't even think about boxing it properly! The monkeys-at-keyboards exercise backed up by scans of literature has been tried; the result to date is summed up by Wiki, in effect searching a space of 10^50, not 10^150:
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[21] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
This has been brought up many times for months. GEM of TKI
kairosfocus
July 13, 2011 at 05:28 AM PDT
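To check the scale of the numbers in the excerpt above, the short Python sketch below computes the expected number of uniformly random typing attempts needed to reproduce a specific string of k characters. The 27-symbol alphabet is an assumption made for illustration; the simulators quoted use larger character sets.

```python
# The expected number of attempts to hit one specific k-character string by
# uniform random typing is alphabet_size ** k, so the matched length grows only
# logarithmically with the number of trials -- 16-24 characters is the practical
# ceiling reported above, while a whole play is hopeless.
ALPHABET = 27   # assumed alphabet size (26 letters plus space)

for k in (16, 19, 24, 100):
    attempts = ALPHABET ** k
    print(f"{k:3d} characters: ~{attempts:.2e} expected attempts")
```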
F/N: Search on the gamut of our observed cosmos has the upper bound of 10^150 Planck-time quantum states [PTQS] for the atomic matter of the observed cosmos. About 10^80 atoms, 10^25 s to effective heat death, 10^45 states per second, roughly. Where the FASTEST CHEM RXNS TAKE 10^30 PTQS. Think of one PTQS as the cosmic clock tick. 500 bits [2^500 ~ 3*10^150] is at that level and is beyond practical search; the number of states is beyond the reach of chemistry to find needles in haystacks, noting that statistics tell us random samples strongly tend to be typical, not atypical. 1,000 bits is so far beyond the cosmos as to be ridiculous: you have just squared a 500-bit space. And yet 125 bytes, or 143 7-bit ASCII characters, is a trivial quantum of control code for anything of consequence. Life forms start out at about 100,000 bits. Again, so far beyond the ridiculous that this is now blatant to the point of self-evidence. New body plans are going to require 10 - 100 million bits, per observations. This is off the chart. The ONLY empirically warranted way to get FSCI on that scale is intelligence. The inference to design on the genome -- we have not yet touched epigenetic factors -- is obvious. Save to those indoctrinated into a C20 update on a C19 theory that has held institutional dominance and is ruthlessly abusing power to keep itself going long past when it should have gone out to pasture. I'd say some time by the 1960s, when it was clear what DNA meant and when we saw what it took to make computers and do cybernetics and controls -- things that are a lot less sophisticated than a living, metabolising, self-replicating cell. As in "rocket science." No wonder Wernher von Braun was a Creationist! Yup, I know: those ignorant, stupid, insane or wicked fundies! GEM of TKI
kairosfocus
July 13, 2011 at 05:18 AM PDT
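The figures in the comment above can be multiplied out directly. The sketch below uses the comment's own round numbers (10^80 atoms, 10^45 state transitions per second, 10^25 seconds) and compares the result with 2^500 and 2^1000; the inputs are taken from the comment, not independently sourced.

```python
import math

atoms   = 10**80    # atoms in the observed cosmos (figure used in the comment)
rate    = 10**45    # Planck-time quantum states per second (as above)
seconds = 10**25    # time to effective heat death (as above)

total_states = atoms * rate * seconds
print(f"upper bound on states: 10^{math.log10(total_states):.0f}")   # 10^150
print(f"2^500  ≈ 10^{500 * math.log10(2):.1f}")                      # ~10^150.5
print(f"2^1000 ≈ 10^{1000 * math.log10(2):.1f}")                     # ~10^301
print(f"1000 bits ≈ {1000 / 7:.0f} seven-bit ASCII characters")      # ~143
```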
Elizabeth: About functional islands. It's controversial. The truth is we still don't know for certain. Darwinists obviously hope, and try to demonstrate (not always fairly), that there are many, many of them, that each one is very, very big, and that finding them by a random walk is, if not easy, perfectly possible. I will be brief, and list some arguments that, IMO, point to the opposite view. Many of these points are in some way discussed in the first of the two Axe papers I pointed to. 1) Human protein engineering, with all its intelligent search, has not yet been able (to my knowledge) to produce a single new functional protein fold, unrelated to those in the proteome, which has a really useful function. Least of all a selectable one. The last SCOP release mentions in the known proteome 1195 folds, 1962 superfamilies, 3902 families. Designed proteins, both by top-down and bottom-up methods, are only a few, and as far as I can understand they are neither functional nor selectable (I have not checked the most recent, so I could be wrong about that). I am not saying that intelligent engineering cannot design proteins: it certainly can, and will do that. But the very fact that, with all our understanding of biochemistry and with all our computational resources, this is still an extremely difficult task is scarcely in favor of common functional proteins, everywhere to be found. 2) Recent research aimed at transforming one fold into another (mentioned by Axe) shows that this is a very difficult task, even with short proteins, and even following a natural algorithm that guides from one known form to another known form. 3) The main research paper cited by Darwinists to affirm that functional proteins are present in a random repertoire is the Szostak paper, which is flawed and does not in any way justify that conclusion, as I have tried to show many times here and elsewhere. 4) I would suggest the following interesting paper: Experimental Rugged Fitness Landscape in Protein Sequence Space http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0000096 While suggested to me by a Darwinist, it has become one of my favourites. It is about optimization, not about the finding of an original functional structure, but still it shows how a random repertoire is a very poor tool for a serious optimization, even in the presence of strong natural selection. I would like to emphasize the following part of the discussion: "The question remains regarding how large a population is required to reach the fitness of the wild-type phage. The relative fitness of the wild-type phage, or rather the native D2 domain, is almost equivalent to the global peak of the fitness landscape. By extrapolation, we estimated that adaptive walking requires a library size of 10^70 with 35 substitutions to reach comparable fitness." That is really interesting. Another interesting point about optimization, again from that paper, is that suboptimal peaks of function in a rugged landscape are indeed obstacles to the final optimization, because they become dead ends where the random walk stops, and not steps to further optimization. 5) I will present my last argument in my next post, because it is about the role of neutral mutations, and it deserves a detailed discussion. That's all, for now.
gpuccio
July 13, 2011 at 05:13 AM PDT
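The "dead ends" point in item 4 above can be illustrated with a toy rugged landscape. The sketch below is a generic NK-style model with random fitness tables, not the phage landscape of the cited paper; it only shows that greedy adaptive walks, which accept a single mutation only when fitness increases, routinely stall on different suboptimal local peaks.

```python
import random

# Toy NK-like rugged landscape (N binary sites, each interacting with K
# neighbours).  Greedy single-mutation hill climbs from random start points
# typically stop on different local peaks rather than the global optimum.
random.seed(0)
N, K = 20, 3
tables = [{} for _ in range(N)]

def contribution(i, genome):
    key = tuple(genome[(i + j) % N] for j in range(K + 1))
    if key not in tables[i]:
        tables[i][key] = random.random()   # random epistatic contribution
    return tables[i][key]

def fitness(genome):
    return sum(contribution(i, genome) for i in range(N)) / N

peaks = []
for _ in range(20):                        # 20 independent adaptive walks
    g = [random.randint(0, 1) for _ in range(N)]
    improved = True
    while improved:
        improved = False
        for i in range(N):
            g2 = g[:]
            g2[i] ^= 1
            if fitness(g2) > fitness(g):   # accept only improving mutations
                g, improved = g2, True
    peaks.append(round(fitness(g), 3))

print("local peaks reached by 20 greedy walks:", sorted(set(peaks)))
```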
GP: Well said, as usual. A claim that there are infinitely many functionally specific complex patterns of relevant elements is in effect a claim to an infinite multiverse supercosmos where everything, no matter how improbable, can and does happen. There is ZERO clear observational evidence for such. This is unfounded metaphysical speculation designed to save the phenomena for a preferred worldview. In short, worldview speculation. So, we are entitled to challenge it on phil terms, such as comparative difficulties with other credible alternatives on the table and ALL the evidence in their favour. Such as this. And, if that sort of multiverse speculation is the real alternative to a design inference on the significance of FSCI in light of the known cosmos, that tells us that science is not the root issue. The empirical evidence we do have supports that FSCI etc. are excellent and reliable signs of intelligent cause. To counter this, due to a priori commitment to a dubious materialistic worldview, resort is made to metaphysical speculation hiding in a lab coat. And then saying: we are science, so you cannot challenge our metaphysics on comparative difficulties. Nor can you suggest to students that there are other ways to understand science, its methods and its findings. That is telling. Science held ideological captive. By the materialists. The new establishment. GEM of TKI
kairosfocus
July 13, 2011 at 05:01 AM PDT
computerist (#60): Again, I don't understand.
gpuccio
July 13, 2011 at 04:44 AM PDT
computerist: "While we cannot measure Natural Selection explicitly, NS can actually be deduced from a set of possible configurations." No. Absolutely not. Natural selection can intervene only if a new step gives a reproductive advantage. Show the selectable steps, show why they are selectable, and then and only then can you hypothesize a naturally selectable path. "One objection to Durston's equation was that it didn't take into account NS, and that the equation only takes into account a target and the probability of RM alone producing the target in one try." That is absolutely true, but it is no objection at all. Durston is examining protein families for which no path based on natural selection has ever been even vaguely proposed. They are protein families which have no sequence relationship with others. So, their functional complexity must be accounted for entirely, unless and until true scientific hypotheses about a selectable path are made. We are all ready and eager to evaluate Darwinian paths based on NS, if and when they are explicitly and realistically proposed. We are just waiting. Science is done on facts and explicit, verifiable hypotheses, not on just so stories and ideology. "However, this is not the case as Durston's equation takes into account the number of possible configurations." Durston's equation takes into account the number of functional configurations in relation to the number of total configurations. It has nothing to do with natural selection. Nothing at all. "...the probability of RM alone producing the target in one try." Again, the error I outlined in my previous post. Not in one try. By a random walk. "The number of possible configurations can be restated as the number of steps taken before a locked function, which is in essence NS." No. If the steps are intelligently selected, it is intelligent selection. If the steps are intelligently guided, it is direct input of information. It is NS only if you can demonstrate that each step expanded in the population because it conferred a reproductive advantage. "320 Gen to reach a target is equivalent to 320 possible configurations before a FCSI effect is achieved." The Weasel is an obvious example of intelligent selection based on knowledge of the target. Even Dawkins should have understood that by now. No NS is involved. "The only objection I can foresee from Darwinists is that there are infinite possible FCSIs, and therefore any combination in a biological context would likely produce FSCI. FCSI doesn't really exist and is an illusion (and the universe is inside a magic 8 ball)." I hope you understand what you are saying here. I don't.
gpuccio
July 13, 2011 at 04:42 AM PDT
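For reference, here is a minimal Weasel-style sketch. Note where knowledge of the target enters: the scoring function compares every candidate to a known target phrase, which is precisely gpuccio's point about intelligent selection. The parameters (100 offspring, 5% per-character mutation rate) are illustrative choices, not Dawkins' original settings.

```python
import random

# Minimal Weasel-style sketch.  The "fitness" function literally measures
# distance to a known target phrase, so selection here is informed by the
# target rather than by any reproductive advantage.
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
random.seed(42)

def score(s):
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while score(parent) < len(TARGET):
    offspring = ["".join(c if random.random() > 0.05 else random.choice(CHARS)
                         for c in parent) for _ in range(100)]
    parent = max(offspring, key=score)     # keep the child closest to the target
    generation += 1

print("matched the target in", generation, "generations")
```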
WilliamRoache: Wow, so many answers to give, and so little time to do it! "What mechanism are you proposing is used to search it? Does the search have to find all 462 bits at once?" Design, which allows an input of active information. That can take different forms (direct input as guided mutation, targeted random mutations plus intelligent selection, and so on). The 462 bits need not be found at once, but they must be in some way "recognized" and fixed; otherwise we are again in the scenario of a random walk, which is incompatible with a realistic search. "And perhaps those isolated islands when zoomed in upon reveal themselves to be capable of sustaining life as we know it?" I am not sure I understand what you mean. If I understand well, that is obviously true. "If you are looking for single 462-bit sequences by random searching (monkey on keyboard, tornado in a junkyard) then of course they will be rare. Very rare." I am looking for functional sequences of high complexity. They are rare. And it's impossible to find them by a random walk. "But nobody except ID proponents claims that such sequences arose in a single step with no simpler precursors." Again, the point is not, and never has been, that they must arise "in a single step". They can arise in as many steps as you like. But, if each step is not simple and naturally selectable, the random walk remains a random walk, and the probabilities remain the same. Darwinists like to create ambiguity about the temporal context, affirming that IDists believe that the variation must happen "all at once". That is not true. The probabilities for a specific 400-bit variation are ridiculously low, whether we try a single variation of 400 bits or many successive variations of one bit. If natural selection cannot expand each single step, then we are always in the random walk scenario, and the probabilities are practically nil. "So what's your claim? That the protein families in question were intelligently designed? All proteins? Could some have evolved? How do you tell the difference?" It's simple. All protein families which have a functional complexity higher than some conventional threshold, and cannot be deconstructed into an explicit path of naturally selectable precursors, are probably designed. A reasonable biological threshold for a random step can be set, IMO, at 100 bits (about 25 coordinated AAs), even if empirical thresholds are certainly much lower. Simpler variations could in principle be in the range of random variation, but probably any single non-deconstructible step of more than 5 - 10 AAs is designed. Remember that all protein superfamilies are unrelated at the sequence level, all 2000 of them.
gpuccio
July 13, 2011 at 04:26 AM PDT
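The bit thresholds gpuccio uses above translate into amino-acid positions via log2(20) ≈ 4.32 bits per unconstrained residue, so 100 bits corresponds to roughly 23 fully specified residues (close to the "about 25" quoted) and 462 bits to a space of roughly 10^139. A quick conversion sketch:

```python
import math

BITS_PER_AA = math.log2(20)          # ≈ 4.32 bits for an unconstrained residue

def bits_to_aa(bits):
    return bits / BITS_PER_AA        # equivalent number of fully specified AAs

def bits_to_exponent(bits):
    return bits * math.log10(2)      # size of the space as a power of ten

for bits in (100, 400, 462, 500):
    print(f"{bits:4d} bits ≈ {bits_to_aa(bits):5.1f} specified AAs "
          f"≈ a space of 10^{bits_to_exponent(bits):.1f}")
```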
Elizabeth Liddle @ 11:
You may have meant: 600 genes are necessary for mitosis. I don’t know if this is the case: it may be, now. That would not mean it had always been that way, nor does it mean that there is no allelic variation in those genes.
"I don't know if this is the case: it may be, now." Are you seriously suggesting that other, simpler means of mitosis existed? How is this more than hand-waving? This is an argument from ignorance. " . . . nor does it mean that there is no allelic variation in those genes." In most proteins, from what I've read, there are conserved portions, and non-conserved portions. How do we differentiate? By looking at polymorphisms across a population or across related species. Since the vital portion of the protein/enzyme is, per definition, 'conserved', then you can have all kinds of varied alleles and yet this does not affect function (per definition). So, it would seem that if we assume that, let's say, 30% of a gene/allele is vital/conserved, then some mechanism must explain how that 30% came about. The number of "alleles", to my mind, is simply immaterial. That is, wherein proteins function, they do not vary; wherein they vary, they do not affect function. In support of my point, let me not that in the paper I mentioned in my last post, they compared critical sequences across related populations and found NO polymorphisms whatever in the regions critical to function. In fact, isn't that really the true role of what we call NS? That is, putative "purifying selection".PaV
July 12, 2011 at 08:25 PM PDT
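The conserved/variable distinction PaV describes is normally read off an alignment by per-site variability. The sketch below does this for a tiny, entirely hypothetical alignment, scoring each column's Shannon entropy: zero-entropy columns are the "conserved" sites, and the remaining columns carry the polymorphisms.

```python
from collections import Counter
from math import log2

# Hypothetical toy alignment of one protein region across related sequences.
# Columns where every sequence shows the same residue (zero entropy) are the
# "conserved" positions; variable columns carry polymorphisms.
alignment = [
    "MKTAYIAKQR",
    "MKSAYIAKQR",
    "MKTAYLAKQR",
    "MKTAYIAKHR",
]

for i in range(len(alignment[0])):
    column = [seq[i] for seq in alignment]
    counts = Counter(column)
    entropy = -sum((n / len(column)) * log2(n / len(column)) for n in counts.values())
    status = "conserved" if entropy == 0 else "variable"
    print(f"site {i + 1:2d}: {''.join(column)}  H = {entropy:.2f} bits  ({status})")
```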
Elizabeth Liddle @ 35:
Firstly, you are assuming an a priori “target” – a specific trait, that involves two independent amino acid changes, and that will confer a reproductive advantage.
There is a paper recently out wherein, for a beneficial novelty to occur, a minimum of thirteen mutations were needed. The authors seemed to suggest that only five mutations might suffice. Not to quibble, let's assume that only five mutations are needed. Now they mention that if they make each of the five mutational changes individually, the increase in fitness is not that great. The beneficial effect occurs only when all five mutations are in place 'simultaneously'. Now, in a population of half a million individuals, let's say that there are, at time t=0, 500 individuals with one of these needed mutations. Since the increase in fitness is mild, then for this mutation to become fixed in the population, 4N of such mutations are needed. So, 1,999,500 more such mutations are needed. If, per generation, 10^-6 mutations occur per individual (using Kimura's number), then every two years, another needed mutation at just the right spot will occur. So, to arrive at the needed two million mutations (rough count), 4 million years would be needed. Now what about the second, and the third, (and so forth) mutations? It's possible that the genome that 'fixes' will have one of these other mutations. What is the possibility? Is it one in half a million? Is it one in four? Most likely it is one in four, since at any other locus on the genome, that locus, too, will have changed (theoretically) two million times; but the chance of the proper nucleotide base being in place is always just one in four. Using this probability, since the genome with the first needed mutation has become 'fixed', there should be 1/4 x 0.5 x 10^6 possibilities for the second needed mutation already at hand. This equals 125,000. So, for the second mutation to become fixed, we need 4N mutations at that particular locus. At 10^-6 mutations per locus per generation, and with the need for 1,875,000 second mutations, we'll need 3.75 million years. This will be the same for the third, fourth and fifth mutations. Total time for all 5 needed mutations: 19 million years. But what if all 13 mutations are needed, as found in nature? Then roughly 53 million years are needed to bring this novelty about through 'random drift'. This is some relatively simple change to ONE protein... and roughly 53 million years (at the very minimum, 19 million years) are needed. Excuse me, Elizabeth, but this is not impressive in the least. How does a functioning genome come into existence under this kind of scenario? To the rational mind, it's absolutely inconceivable!
PaV
July 12, 2011 at 07:46 PM PDT
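PaV's back-of-envelope arithmetic above can be re-run directly. The sketch below uses his own assumptions (population of 5×10^5, 500 initial carriers, 4N copies needed for fixation, 10^-6 mutations per locus per individual per generation, one generation per year); with those inputs it gives about 19 million years for five mutations and about 49 million for thirteen, in the same ballpark as the "roughly 53 million" quoted.

```python
# Re-running the back-of-envelope arithmetic from the comment above, with the
# figures assumed there (they are the commenter's assumptions, not established
# population-genetic values).
N = 500_000                      # population size
initial_carriers = 500           # individuals carrying the first mutation at t = 0
mu = 1e-6                        # mutations per locus, per individual, per generation
new_per_generation = N * mu      # ≈ 0.5 new copies of the needed mutation per year

copies_needed_first = 4 * N - initial_carriers            # 1,999,500
years_first = copies_needed_first / new_per_generation    # ≈ 4.0 million years

copies_needed_next = 4 * N - N // 4                       # 125,000 assumed already present
years_next = copies_needed_next / new_per_generation      # ≈ 3.75 million years

for k in (5, 13):
    total = years_first + (k - 1) * years_next
    print(f"{k} required mutations -> about {total / 1e6:.0f} million years")
```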
Elizabeth Liddle:
So even if we assume (probably a fair assumption) that many advantageous traits arise from gene-gene interactions (you need a specific combo to get the advantage), the opportunities for those combinations to occur is actually quite substantial. Of course the probability of any one specific combination, out of all the alleles, occurring, may be very low, but that is not the relevant probability. The relevant probability is the probability of some advantageous combination occurring in some individual at some point.
Your basic argument, or so it seems to me, is that 'statistically', any genome is just a matter of time. One would have to assume that a corollary to this argument is that the more complex the genomic structure, the longer the time needed to 'find' (stochastically) the 'target' (which, of course, to Darwinists, is not known ahead of time---but that doesn't matter here.) Then, per this corollary, the fossil record should provide us with two things: (1) there ought to be a tremendous gap in time between the arrival of the first eukaryotes (the true LUCA) and the discovery (stochastically) of Phyla; that is, body-plans; (2) after a long period of time, newer body-plans should also be 'discovered'. We find neither of these to be true. Darwin, having had his religious sensibilities shaken and reckoning the Earth to have existed for eons upon eons, felt that what we call the Cambrian Explosion must have, by force of his 'theory', been preceded by eons upon eons of intermediate forms. But the Big Bang has burst the bubble of envisioning the Earth to be quasi-eternal, and the intermediate forms have not been discovered. And quite to the contrary, an intricate Mollusc eye has been found right in the middle of the Cambrian. So, help me if you can, but frankly I don't see any way of interpreting neutral drift so as to explain what the fossil record provides. Again, as in my previous post, this is a real-life example. It's not hypothetical.
PaV
July 12, 2011 at 06:58 PM PDT
Elizabeth Liddle @ 35:
Firstly, you are assuming an a priori “target” – a specific trait, that involves two independent amino acid changes, and that will confer a reproductive advantage. However, this would be the Texas Sharp Shooter fallacy! There may be many many proteins that would confer the same or similar advantageous trait, and, indeed, there may be (and are) many advantageous traits.
I suspect you have not read Michael Behe's book, The Edge of Evolution. There he deals with a real-life---as opposed to a purely hypothetical---scenario in which the P. falciparum (malarial parasite), which reproduces asexually, is in a life-and-death struggle to overcome the effects of chloroquine. In any individual infected with the parasite, at peak infection 10^12 replicates will have been produced (the mutational equivalent of a population of one million organisms maintaining its population size for one million years). This represents a tremendous amount of mutation: the same genome replicating itself generation after generation, with some constant number of mutations introduced every time the genome is reproduced. And yet, with all of the proteins contained in its genome, with all of the replications, with all of the mutations, not a SINGLE one is capable of fighting off the effects of a rather simple chemical (because of 'neutral drift' there should be so many candidates, right?). Then, after twenty years of millions of people annually becoming infected with different versions of malarial genomes, finally a solution, a real, live solution, is found. It consists, basically, of two amino acid substitutions. Now, with this great level of selective pressure at play, and when, in the end, only two a.a. residues need to be changed, why is it that roughly 10^20 replications are needed for the solution, if, as you say, there are many, many proteins able to "confer the same or similar advantageous trait"? Again, this isn't theoretical, it's real. Very real.
PaV
July 12, 2011 at 06:42 PM PDT
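A rough way to see why the joint requirement is so costly: if one specific substitution arises in about 1 in 10^10 replications (an assumed round figure for illustration, not a measured malaria mutation rate), then two substitutions that must both be present before either one helps cost roughly the square of that, on the order of the 10^20 replications cited.

```python
# Rough illustrative arithmetic only; the single-substitution rate is an
# assumed round number, not a measured value.
single_rate = 1e-10              # chance of one specific substitution per replication

one_change  = 1 / single_rate            # ~1e10 replications for one substitution
two_changes = 1 / single_rate ** 2       # ~1e20 if both must appear together
per_infection = 1e12                     # parasites at peak infection (from the comment)

print(f"one substitution:   ~{one_change:.0e} replications")
print(f"two together:       ~{two_changes:.0e} replications")
print(f"infections implied: ~{two_changes / per_infection:.0e}")
```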
Just to add to 59: in my opinion, slight modifications to Durston's equation could render a much more realistic result. In particular, instead of using "dispersed" possible configurations, we could use an approximate starting configuration and a relative random offset. In order to model NS more precisely, a dependency condition between subsequent configurations should be introduced. Or we could be very generous (as Durston has been) with the number of possible configurations.
computerist
July 12, 2011 at 04:59 PM PDT
gpuccio said:
that is how many sequences can approximately express that biochemical function versus the total search space.
This can also be stated in another, more relevant way, I believe. I have not followed the entire thread, so please forgive me if I'm repeating what's already been mentioned. While we cannot measure Natural Selection explicitly, NS can actually be deduced from a set of possible configurations. One objection to Durston's equation was that it didn't take into account NS, and that the equation only takes into account a target and the probability of RM alone producing the target in one try. However, this is not the case, as Durston's equation takes into account the number of possible configurations. The number of possible configurations can be restated as the number of steps taken before a locked function, which is in essence NS. Going back to Weasel, for example: 320 Gen to reach a target is equivalent to 320 possible configurations before a FCSI effect is achieved. So it does, in fact, take into account NS. The only objection I can foresee from Darwinists is that there are infinite possible FCSIs, and therefore any combination in a biological context would likely produce FSCI; FCSI doesn't really exist and is an illusion (and the universe is inside a magic 8 ball).
computerist
July 12, 2011 at 04:14 PM PDT
WR:
But nobody except ID proponents claims that such sequences arose in a single step with no simpler precursors.
Is that what Durston, Abel, Axe, etc. claim? I missed that part.
Mung
July 12, 2011 at 03:53 PM PDT
Mung:
My grandmother was not a single celled organism that reproduced by cloning.
But then, you already knew that, didn't you. Elizabeth Liddle:
Well, “allele shuffling” isn’t the only mechanism of mutation, and indeed, wouldn’t have been relevant to our LUCA which can’t have been a sexually-reproducing organism.
So what was the point of bringing in my grandmother? Elizabeth Liddle:
And our LUCA was almost certainly more complex than its predecessors, so asking why “1000 superfamilies were already present in LUCA” is a bit like asking why a runner half way round the track has “already” completed half a lap!
And if drift is a phenomenon associated with sexual reproduction, how is it going to help you? How does neutral theory help?
Mung
July 12, 2011 at 03:50 PM PDT
gpuccio,
But 462 bits still means a search space for the function of about 10^139. We are in the order of magnitude, more or less, of Dembski’s UPB. No kidding here.
That is indeed a large search space. What mechanism are you proposing is used to search it? Does the search have to find all 462 bits at once?
I will try to give you my reasons for my firm belief that functional configurations are rare and isolated islands in the ocean of possible sequences.
And perhaps those isolated islands, when zoomed in upon, reveal themselves to be capable of sustaining life as we know it? If you are looking for single 462-bit sequences by random searching (monkey on keyboard, tornado in a junkyard) then of course they will be rare. Very rare. But nobody except ID proponents claims that such sequences arose in a single step with no simpler precursors.
We are in the order of magnitude, more or less, of Dembski’s UPB. No kidding here.
So what's your claim? That the protein families in question were intelligently designed? All proteins? Could some have evolved? How do you tell the difference?
WilliamRoache
July 12, 2011 at 03:30 PM PDT
Elizabeth, My grandmother was not a single-celled organism that reproduced by cloning. Was the point of my post @45 just entirely lost?
Mung
July 12, 2011 at 03:12 PM PDT
Yes, it's late here too (Nottingham, UK), so time to chuck the cat off my knee and maybe look at those papers in bed :) See you tomorrow. Lizzie
Elizabeth Liddle
July 12, 2011 at 02:42 PM PDT
Elizabeth: Well, it's rather late here, and I think I will go on tomorrow, if I can. My next, big argument will be about how common functional configurations are in the search space. It is a very controversial subject, and an obviously crucial one. I will try to give you my reasons for my firm belief that functional configurations are rare and isolated islands in the ocean of possible sequences. So, until tomorrow (if my work allows). Have a good night, whatever time it is where you live.
gpuccio
July 12, 2011 at 02:10 PM PDT
Elizabeth: First of all, a methodological premise. In the discussion, I will have to label some of the ideas you express as "just so stories" or "fairy tales". I don't do that with any intention of being derogatory to you, but rather with a specific epistemological meaning. We are indeed discussing scientific theories, and any theory can be admitted and discussed. But a scientific theory must have at least a minimum of empirical justification to be really interesting. Otherwise, for me, it is only a "just so story". That said, I would like to briefly comment on the Durston paper, and why I consider it so important. 1) The Durston method to compute functional complexity in protein families. If you read the paper carefully, you will see that it shows a really ingenious way to approximate the true value of functional complexity in a protein family, applying a variation of Shannon's H to the data in the known proteome. We can discuss the details if you want, but for the moment I would like to mention that it is the only simple way to account for the size of the target space for a function, that is, how many sequences can approximately express that biochemical function versus the total search space. Now, look at the table with the results and take, for instance, Ribosomal S2, a not too big protein of 197 AAs. While the total search space is 851 bits, the functional complexity of the family is "only" 462 bits. So, as you can see, in ID we do take into account that the target space for one function is big, and not limited to one or a few sequences. But 462 bits still means a search space for the function of about 10^139. We are in the order of magnitude, more or less, of Dembski's UPB. No kidding here. The table shows the results for 35 protein families. Most of them are above 300 bits of functional complexity. That's something. And the important point is also that functional complexity does not strictly correlate with protein length: some proteins are more functionally dense than others. So, we are really measuring an important property of the protein family here.
gpuccio
July 12, 2011 at 02:05 PM PDT
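A sketch of the kind of calculation being described (not Durston et al.'s own code): at each aligned site, functional information is taken as the ground-state uncertainty, log2(20) for an unconstrained residue, minus the Shannon uncertainty actually observed within the functional family, and the per-site values are summed. The four-sequence alignment below is a made-up toy; the last line reproduces the 851-bit "total search space" figure for a 197-AA protein.

```python
from collections import Counter
from math import log2

# Durston-style "functional bits" idea on a tiny hypothetical alignment:
# per site, fits = ground-state uncertainty - observed uncertainty in the family.
family = [
    "MKVLAT",
    "MKVLGT",
    "MRVLAT",
    "MKVIAT",
]

GROUND = log2(20)                      # ≈ 4.32 bits per unconstrained site

def site_entropy(column):
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * log2(n / total) for n in counts.values())

fits = sum(GROUND - site_entropy([seq[i] for seq in family])
           for i in range(len(family[0])))

print(f"functional bits (toy alignment): {fits:.1f}")
print(f"total search space for 197 AAs: {197 * GROUND:.0f} bits")   # ~851, as quoted above
```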
gpuccio:
Elizabeth:
Anyway, nice to talk to you. It’s good when people return the ball Your serve.
Me too! I enjoy talking to you, because you don’t elude the arguments, understand them, and sincerely try to give your answers.
*blushes* Thanks :) I'm enjoying it here, and I appreciate the tolerance I've been met with!
That’s very fine, and it’s more than I usually can expect from many of my interlocutors here. Moreover, it’s the first time that I can debate with a convinced neutralist, and it’s fun.
*adds yet another label to her luggage*
So, be sure that I respect you and your position, that I believe you entertain in full good faith. Unfortunately, I disagree with many of them and therefore, in a spirit of friendship, I will try to explain why.
Cool.
You raise many different points which deserve detailed discussion. Time is limited, so I will start in a brief way, and we can deepen any point that you find interesting, or simply don’t agree with. I sincerely enjoy intellectual confrontation, provided that it conveys true reciprocal clarification, and not only stereotyped antagonism.
Me too :)
So, let's start. First of all, the papers I quoted. The Durston paper is the one you have already found. The two Axe papers, instead, are the following: The Case Against a Darwinian Origin of Protein Folds http://bio-complexity.org/ojs/.....O-C.2010.1 and The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway http://bio-complexity.org/ojs/.....O-C.2011.1 More in the next post
Thanks. I will get hold of those and read them.
Elizabeth Liddle
July 12, 2011 at 02:01 PM PDT