Uncommon Descent Serving The Intelligent Design Community

Cambrian shrimp’s heart more complex than modern one


We’ll let BioScience Technology tell it:

“This is only the second case of the description of a cardiovascular system in a Cambrian arthropod, the first one being that of the inch-long Marrella from Burgess Shale,” emailed Diego Garcia-Bellido of the University of Adelaide, who co-discovered that first arthropod while at the University of Cambridge. Garcia-Bellido was not involved in the new study. “This new finding of a cardiovascular system in a larger animal (Fuxianhuia is about two to three times as large, thus more detail), together with a fantastically preserved, and very complex, nervous system, unknown in Marrella, and the gut, make it probably the most complete arthropod internal anatomy known in the fossil record.”

The main conclusion drawn, said Garcia-Bellido: “The level of complexity of the Fuxianhuia was extremely high, considering that we are studying some of the oldest animals on Earth.”

This leaves how much time for neo-Darwinian evolution (natural selection acting on random mutation)?

It’s a good thing that most Americans doubt Darwin.

As the facts roll in relentlessly, to believe in Darwinism is to believe in magic.

Indeed, even the language assumes that character, as when scientists (scientists!) say things like,

“Evolution is telling us these genes are really important for survival,” adds Winston Bellott, a research scientist in the Page lab and lead author of the Nature paper. “They’ve been selected and purified over time.”

They believe in a wizard and his name is Darwin.

Follow UD News at Twitter!

Hat tip: Philip Cunningham

Comments
rhampton7: Thank you for your patience too. I think you sum up our differences fairly. You see, I don't agree with those specific statements by Behe (indeed, I have many times criticized that specific part of TEOE, which is IMO too similar to a "theistic evolution" position, a position which I fully reject). You are perfectly right that if we witnessed in a lab a new function being generated in a living being, "it would do so by physical processes like mutations". I have never thought that design requires any violation of physical laws. But it is not true that the simple inference that we are observing a guided process would be "metaphysical", whatever that means. Not at all. No more than the inference, if we see a person writing a poem, or if we just find a poem, that the physical process is being guided, or has been guided, by a conscious being. ID is exactly about that: the inference of design intervention by conscious intelligent beings, in this world, with its physical laws. Is that metaphysical? I leave it to you (the word really means little to me). If it is metaphysical for the poem, it is metaphysical for biological information too. But I will not try to explain the poem by the physical processes which are "proximally responsible" for its writing alone.
gpuccio
April 30, 2014 at 10:22 PM PDT
Thank you for being patient and responding to my questions. I think I understand your position better. Thinking things through last night, I think I see the obstacle that prevents us from ultimately agreeing. Behe described the scenario of an "uberphysicist" selecting a universe with one specific history (an act of design) without requiring interference.
After the first decisive moment the carefully chosen universe undergoes “natural development by laws implanted in it.” In that universe, life evolves by common descent and a long series of mutations, but many aren’t random. There are myriad Powerball–winning events, but they aren’t due to chance. They were foreseen, and chosen from all the possible universes.
The point you are making about evidence for design still holds true, but natural processes (mutations, etc.) are proximally responsible for "new" information to any observer within the universe, although ultimate causation is still attributable to God. So if we were lucky enough to be in the lab with all kinds of sensors running and happened to witness a new function being "created", it would do so by physical processes like mutations. And this is the point I am making. Objectively, the scientific description of the mechanics involved is entirely correct; not so the metaphysical assumption that it is unguided.
rhampton7
April 30, 2014 at 05:00 PM PDT
rhampton7: Errata corrige: The first probability should be 0.5132988, not 0.4734378. The second probability should be 0.7185076, not 0.3616739. My mistake. Nothing changes in the reasoning.
gpuccio
April 30, 2014 at 12:54 AM PDT
rhampton7: No. Random variation is mostly neutral, as is well known. So, your 100 mutations will be mostly neutral, if they are really random. Neutral mutations can happen in functional sequences. How many of them can happen in a functional sequence depends on the type of functional sequence. Some proteins, like ATP synthase, seem to tolerate only minimal neutral variation through the ages. Other proteins are much more robust to neutral variation. We don't know how tolerant of neutral variation the different forms of functional non-coding DNA are. The remaining random mutations are usually deleterious, and many of them are eliminated by negative (purifying) selection. That's why they mostly disappear, and are not fixed, not even by drift. What about positive mutations? In a random system, they are exceedingly rare, almost non existent. And they are always simple. The only observed ones are those cases of 1 or 2 amino acid variations which confer some advantage under extreme pressure in the known cases of microevolution (antibiotic resistance, and similar). All other forms of naturally selectable positive mutations exist only in one place: the rich imagination of darwinists. But then, what about the observed positive variation? What about new proteins that appear, non-coding sequences which become functional, new body plans and regulatory networks in new species, and so on? The answer is simple: that is designed variation. It is not random variation, neither neutral nor deleterious nor positive random variation. It is the obvious product of design. However, to answer your last question:
To make things simple, let's work with 100 mutations per generation, 51 of which are functional (be it as one 51 bit functional unit or many smaller units in aggregate). How does this stack up to your assessment of probabilities – natural origin or intelligent intervention? In any case, it would seem that functional mutations of at least 2 bits are “a dime a dozen” so to speak. And again, because these mutations accumulate, I should think it not unreasonable that one or more mutations could (and do) combine into a new and unique functional unit.
No. As already said, in your lot of 100 mutations there would be no functional mutations; they would probably be 99 neutral and 1 deleterious. But let's suppose, for the sake of argument, that there is 1 functional 2 bit mutation in 10^12 mutations in, say, 100 years (a much more realistic, and still very generous, scenario). I think that your argument is: then in one million years I will have 1000 2 bit functional mutations which could "combine into a new and unique functional unit", let's say of 2000 bits of functional complexity. Absolutely not! That shows that you have not understood the nature of functional complexity. Those 1000 functional mutations, each of 2 bits complexity, are mutations which confer advantage in 1000 different ways and contexts. They are not related to one another. Even if each single mutation is functional in some way, the sequence of 1000 mutations has the same probability of being functional, in the sense of generating a new complex sequence, as any other sequence of 1000 varied bits. IOWs, functionality is relative to the function. It does not simply "accumulate". Obviously not! Let's say that you have 1000 English words. Let's say that, with some algorithm, and using some intelligent information (like an English dictionary), you introduce random one letter mutations in each word until you get another correct English word. That's all your algorithm can do. And it is the equivalent of getting 1000 "functional" simple mutations in a genome. Not easy, but possible. What are the chances that those 1000 mutated words form a Shakespeare poem? Would those 1000 mutations represent 4000 bits of new functional information? (I am assuming here 4 bits for each letter, which is a slight underestimate.) Absolutely not! A Shakespeare poem, made with those 1000 words, would represent new complex functional information. You will never get it from your algorithm.
2000 bits of functional information means one single sequence with one specific function which cannot exist unless there are at least 2000 specific bits in that sequence, which can be only in that configuration. It does not mean 1000 single functional variations of 2 bits, for 1000 different functions. I will also make the calculation for you, to show the difference. Let's say, to simplify, that in a system we have one bit mutations, and a probability of 1 in 10^6 that a single mutation (one attempt) generates an advantage. To have 100 positive mutations in that system, how many attempts (mutations) do we need? Applying the binomial distribution, the answer is: a) with 10^8 attempts, you have a probability of 0.4734378 of getting at least that number of positive individual mutations. Perfectly reasonable. Now, let's go to the "combination". What is the probability, in that system, of getting a single exact sequence of just 100 one bit mutations which has a specific new function, for which those exact 100 mutations are necessary? Now, the probability of the event is 1 in 2^100, that is 1 in 1.26765e+30. Again, we apply the binomial distribution: b) Now, the probability of getting that event is so low, with 10^8 attempts, that my software (R) gives 0 as an answer. If we increase the number of attempts, you need 10^30 attempts (mutations) to have a probability of 0.3616739 of getting at least one positive outcome. Not so reasonable any more. Can you see the difference?
gpuccio
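The first binomial figure in a) can be checked in a few lines of Python: with n = 10^8 attempts and p = 10^-6 per attempt, the expected count is 100 and the binomial is essentially Poisson with mean 100. This is a minimal stdlib sketch (an errata posted in this thread corrects the quoted value to 0.5132988, which this approximation reproduces):

```python
import math

# Calculation (a): n = 10^8 attempts, p = 10^-6 success per attempt.
# n*p = 100, so the binomial is essentially Poisson(100):
# P(at least 100 successes) = 1 - sum_{k=0}^{99} e^-100 * 100^k / k!
lam = 10**8 * 1e-6  # expected number of positive mutations = 100.0
cdf_99 = sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
             for k in range(100))
p_at_least_100 = 1.0 - cdf_99
print(round(p_at_least_100, 4))  # ≈ 0.5133
```

The log-space terms (`-lam + k*log(lam) - lgamma(k+1)`) avoid overflow in `100**k` and `k!` for k near 100.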
April 30, 2014 at 12:31 AM PDT
I am saying that no function with more than 500 bits of functional information can ever originate without a design intervention. Believe it or not, that is the truth. No example falsifying that has ever been observed.
I do believe you, but my questions have to do with much smaller functional elements (2 bit changes). To put this in context, it was reported that:
Every time human DNA is passed from one generation to the next it accumulates 100–200 new mutations, according to a DNA-sequencing analysis of the Y chromosome.
Now, if we assume that most of the bits in DNA are not junk, then most of these mutations ought to be functional. Otherwise the amount of junk would accumulate along with the number of mutations. To make things simple, let's work with 100 mutations per generation, 51 of which are functional (be it as one 51 bit functional unit or many smaller units in aggregate). How does this stack up to your assessment of probabilities - natural origin or intelligent intervention? In any case, it would seem that functional mutations of at least 2 bits are "a dime a dozen" so to speak. And again, because these mutations accumulate, I should think it not unreasonable that one or more mutations could (and do) combine into a new and unique functional unit.
rhampton7
April 29, 2014 at 04:28 PM PDT
rhampton7:
Do you mean to say that the difference between species or subspecies is 1, perhaps 2, amino acids?
No, my statement has nothing to do with comparing the genomes. It refers to observed cases of microevolution, like antibiotic resistance. We don't know the mechanisms of speciation. In most cases, speciation is certainly macroevolution.
Also, are you saying that there can be no more than a total of 500 new bits of information (to have formed without intelligent intervention) in all the genomes of all the creatures that ever lived? Surely that can’t be right.
I am saying that no function with more than 500 bits of functional information can ever originate without a design intervention. Believe it or not, that is the truth. No example falsifying that has ever been observed.
And I presume the 4, 8, 16 represent individual organisms. For example, we would expect to find a new four bit function in only 1 of 16 humans, tigers, bacteria, etc.
No, you are confused. The numbers represent the ratio between the number of sequences exhibiting the defined function and the number of possible sequences of the same length (the target space/search space ratio). That is the probability of finding the functional sequence by a random search/walk in one attempt. For example, the probability of getting the subunit beta of ATP synthase with its constrained 334 amino acids is, as I have said, 1 in 2^1443. IOWs, you should perform a number of attempts at generating new random sequences of the order of that number (2^1443) if you want to have realistic probabilities of finding that functional sequence.
gpuccio
April 29, 2014 at 03:54 PM PDT
Incidentally, I just found this: The tiger genome and comparative analysis with lion and snow leopard genomes, Nature Communications 4 (September 2013)
rhampton7
April 29, 2014 at 03:34 PM PDT
Almost all cases of microevolution observed are 1 amino acid
Do you mean to say that the difference between species or subspecies is 1, perhaps 2, amino acids? If we were to compare the genomes between tigers and lions, for example, would we really only find a difference of a few amino acids? Also, are you saying that there can be no more than a total of 500 new bits of information (to have formed without intelligent intervention) in all the genomes of all the creatures that ever lived? Surely that can't be right. As for the table, I really do hope you make an attempt. I take it that your probabilities for bits work by factors of two (you made a comment about 4 bits being a 1 in 16 event):
2 bits ... 1 in 4
3 bits ... 1 in 8
4 bits ... 1 in 16
5 bits ... 1 in 32
6 bits ... 1 in 64
And I presume the 4, 8, 16 represent individual organisms. For example, we would expect to find a new four bit function in only 1 of 16 humans, tigers, bacteria, etc.
rhampton7
April 29, 2014 at 03:13 PM PDT
rhampton7: By the way, 22 bits is not "dubious". It is an empirical threshold (again, very generous) for what is really observed. Behe sets the edge of evolution at 2-3 coordinated amino acid mutations. Axe is a little more generous. I have taken 5 amino acids (22 bits). Almost all cases of microevolution observed are 1 amino acid. A few may be 2 (see Behe). Some functional tweaking in an existing active site could in principle happen with a 3 or 4 AA variation by chance (but that is not really proved). Nothing observed suggests anything more complex.
gpuccio
April 29, 2014 at 02:37 PM PDT
rhampton7: So, as you can see from the previous post, I have grossly calculated the total number of possible mutations (states) in a maximal bacterial system on our planet in 4 billion years. The result? 10^42. And that is very generous. It means 140 bits. Those are the maximum biological probabilistic resources of our planet. That's why I have suggested 150 bits as a reasonable biological probability bound for our planet. That means that any protein with more than that level of functional information has no real chance of emerging by random variation in 4 billion years on our earth. Now, let's go to another example I have recently given: ATP synthase. That is a very old and very important complex protein which is in all living cells, and appears in LUCA, more or less at the origin of life. Its subunit beta is about 500 AAs long. Now, let's compare the sequence in E. coli (a prokaryote) and in humans. Very distant species, reasonably separated by a very old divergence. a) ATP synthase subunit beta, E. coli: 460 amino acids b) ATP synthase subunit beta, human: 529 amino acids Alignment (BLASTP): Score 660 bits (1703), Expect 0.0, Identities 334/466 (72%), Positives 382/466 (81%). Now, let's consider only the identities for the moment. 334 amino acids are identical. Absolutely conserved. Now, each completely conserved AA corresponds to 4.32 bits of functional information. Therefore, the minimal functional information in this molecule is 1443 bits. 1443. Without considering the functional information in the AAs which are only "similar". For one subunit of one fundamental, very old molecule. 500 bits of functional information is Dembski's UPB. That is the number of different quantum states which took place in our universe from the big bang to now. Here, we have 1443 bits at least for one subunit of one protein. How many universes do you think we need to get that sequence in a random way? About 10^283 universes. At least. Isn't that funny?
gpuccio
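The bit arithmetic here is easy to reproduce: 4.32 bits per conserved position is log2 of the 20 possible amino acids, and the "universes" count is the excess over the 500 bit bound expressed in powers of ten. A minimal Python check (it lands on the order of the 10^283 quoted above):

```python
import math

# Each fully conserved position can be any of 20 amino acids, so a
# conserved position carries log2(20) bits of sequence information.
bits_per_aa = math.log2(20)              # ≈ 4.32
conserved = 334                          # identical positions in the alignment
total_bits = conserved * bits_per_aa     # ≈ 1443.5 bits
# Excess over the 500 bit universal probability bound, in powers of ten:
universes_log10 = (total_bits - 500) * math.log10(2)
print(round(bits_per_aa, 2), int(total_bits), round(universes_log10))
```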
April 29, 2014 at 02:31 PM PDT
rhampton7: I copy here an example from a recent post of mine in another thread. I was answering a question by VJ Torley which was in part similar to yours. "There is no problem about the simultaneous or sequential appearance of new proteins. Just follow me a little bit. a) “Simultaneous” obviously does not mean “in one attempt”. What we have to consider is the whole system, which is made of:
a1) a population size (a number of replicators)
a2) a mean replication time
a3) a time span (the time available for the new “species”, or whatever, to appear), IOWs for the transition from A (the precursor) to B (the new thing)
a4) a mutation rate
a5) the number of new proteins that characterizes the new state (B) versus A
a6) the probability for each new protein to arise in a random system, in one attempt
b) Given those numbers, we can make a few easy computations. c) I will assume an extremely generous model.
c1) Our population is the whole prokaryotic population on our planet. I will estimate it at 5*10^30 individuals (I have found that on the internet)
c2) I assume a mean replication time of one division every 30 minutes
c3) I assume a time span of 4 billion years (2.1*10^15 minutes)
c4) I assume a mutation rate of 0.003 mutations per genome per generation (from the internet, again)
c5) I assume that B is characterized, versus A, by 3 new proteins, completely unrelated at sequence level to all the proteins in A, and unrelated to one another
c6) I assume the same functional complexity for each of the 3 proteins, of 357 bits (Fits), which is the median value for the 35 protein families evaluated in Durston’s paper.
Multiplying the population (c1) by the number of generations in the time span (c3 divided by c2) and by the mutation rate (c4), we get the total number of possible mutations in our system in the time span of 4 billion years. The result, with those numbers, is 1.0512*10^42. That is an upper threshold for the total number of individual new states that can be reached in our system in the time span (if each mutation gives a new state). OK?
Now, each of our 3 functional proteins has a probability of 1 in 2^357 of being found in one attempt (one new state tested). That is 1 in 3.4*10^108. Now, using the binomial distribution, it is easy to compute the probability of having 3 successful results in 1.0512*10^42 attempts, when the probability of success in one attempt is 1 in 3.4*10^108. The result is: 5.568067e-264. That is the probability of finding our 3 new functional proteins in our system, in all the time span, with all the reproductions and mutations possible in that system. Obviously, I am considering the system as random, with a uniform probability distribution. I am not considering any intervention of NS, in any sense, so this is a computation for the powers of neutral variation. As already said, genetic drift is irrelevant in that reasoning, because we are already considering all the possible states that can be reached in the system. I am fully available to discuss any aspect of this model." More in next post.
gpuccio
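The resource count in the quoted model (population, times generations in the time span, times mutation rate) can be reproduced directly; a minimal Python check of that one multiplication:

```python
# Arithmetic check of the model's probabilistic resources (c1-c4 above):
population = 5e30          # c1: prokaryotic population of the planet
minutes = 2.1e15           # c3: 4 billion years expressed in minutes
generation_minutes = 30    # c2: one division every 30 minutes
mutation_rate = 0.003      # c4: mutations per genome per generation
generations = minutes / generation_minutes     # 7e13 generations
total_mutations = population * generations * mutation_rate
print(f"{total_mutations:.2e}")  # ≈ 1.05e+42, the ~10^42 figure above
```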
April 29, 2014 at 02:06 PM PDT
rhampton7: If a function has 500 bits of functional information, then you need those 500 bits for the function to exist. "250 individual 2 bit mutations" are the same as 500 bits. Obviously, you need the correct 250 mutations in the same individual to generate the 500 bit function. The probability is the same. It is not important whether the mutations happen in one day or in one million years; the only important facts are the probability of the outcome and the probabilistic resources (how many states are tested by the system). I don't understand your problem. A single 2 bit mutation which generates a naturally selectable function can be expanded and fixed. So, if you are saying that 250 individual mutations, each naturally selectable and selected, can happen sequentially, you are right. The simple problem is that no 500 bit function can be deconstructed into 250 2 bit steps, each of them naturally selectable. Indeed, no selectable precursors are known for protein superfamilies. That's why my reasoning is about the potentialities of a random system, which are nil. The contribution of NS is nil too, and has never been demonstrated. And never will be. Single mutations can sometimes be functional, but they do not lead to a functional complex sequence. IOWs, by mutating single bits you can sometimes get some simple advantage (very rarely), but you will never get new software with new procedures and complex new code. In the same way, by mutating single letters in a shopping list you will never get a Shakespeare poem. More in next post.
gpuccio
April 29, 2014 at 02:03 PM PDT
I had an idea and I wonder if you would indulge me. Do you think you could produce a table showing the number of bits in a function and the expected time to develop and/or number of incidences to occur within a 13 billion year span? Something like:
Bits per Function | Time to Appear (yrs OR generations) | Events per Universe
--------------------------------------------------------------------
2 ... 1 year (1 generation) ... 13,000 million events
3 ... 10 yrs (10 gen) ......... 1,300 million events
4 ... 100 yrs (100 gen) ....... 130 million events
(numbers are not representative of actual values - for display only)
Does this make sense?
rhampton7
April 29, 2014 at 01:35 PM PDT
Not sure what you meant in a). I understand the argument for one feature that requires 500 bits of information, but not for 250 individual 2 bit mutations. For example, I presume there are more than 500 bits of information that differentiate lions from tigers, or that differentiate the Siberian from the Malayan subspecies of tiger. Yet I would think that ID theory does not object to a natural origin in either case. If I understand b) correctly, then an existing 2 bit function that mutated to become a new 4 bit function would count as 4 new bits, not 2 old and 2 new. So the proper identification and documentation of any new functionality, going forward, would require comparing the genome of a parent and child, and then its child, and so forth. Tracking the code changes seems easy enough (given the technical resources), but what about functionality? If DNA is mostly not junk (that is, mostly functional), then most of those mutations would be functional, yes? As for c), the 22 bit threshold seems dubious, as I would think chemistry alone could generate molecules as complex.
rhampton7
April 29, 2014 at 11:25 AM PDT
rhampton7: The answers: a) A 500 bit threshold means that a specific function needs at least 500 bits of sequence information to be implemented and to work. "250 individual 2 bit mutations" would have to be the exact 250 individual 2 bit mutations. There is a 1 in 2^500 probability of getting that exact result in a random system, and that is beyond the probabilistic resources of the whole universe. So yes, it is a problem. b) New and old do not refer to the individual bits, but to the whole function. The new function must be there to be fixed and therefore to become a permanent part of the genome. To be there, it needs at least those exact 500 bits of information, which cannot arise in a random system because the functional space is too small compared to the search space. Your bits can be as new or as old as you like, but if you do not have the exact 500 bits (new or old), the new function is simply not there. c) Yes, my point is that many proteins exceed the 500 bit threshold (the universal probability bound for the whole universe), and therefore could not naturally originate without intelligent intervention. And most proteins (30 out of 35 protein families in Durston's paper) exceed the 150 bit threshold (my proposed biological probability bound for our planet), and therefore could not naturally originate on our planet without intelligent intervention. And practically all proteins exceed the 22 bit threshold (an empirical biological probability bound derived from Axe's work), and therefore could not reasonably originate naturally without intelligent intervention. You choose the statement you like most. d) You ask: "Therefore proteins are the direct result of an intelligent agent. Is that a fair assessment?" Yes. To be more precise, the functional information in them is "the direct result" of the design intervention "of an intelligent agent".
gpuccio
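The three thresholds in c) can be converted from bits to order-of-magnitude probabilities with one line each (the probability of one specific outcome is 2^-bits); a small Python sketch:

```python
import math

# The thresholds discussed in the thread, as powers of ten:
# probability of one specific outcome = 2^-bits = 10^-(bits * log10(2)).
thresholds = {bits: bits * math.log10(2) for bits in (500, 150, 22)}
for bits, exp10 in thresholds.items():
    print(f"{bits} bits -> about 1 in 10^{exp10:.1f}")
```

The 150 bit line comes out near 10^45, matching the "one out of 10^45" equivalence used elsewhere in the thread.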
April 28, 2014 at 10:37 PM PDT
gpuccio:
The “entire sequence arising at once” objection is a classical example of how darwinists don’t understand ID.
I think it's a classic example of how they don't understand biology!
Mung
April 28, 2014 at 05:49 PM PDT
gpuccio, A few thoughts come to mind. First, a 500 bit threshold could be met by having 250 individual 2 bit mutations. I presume it would take several generations to accumulate those 250 mutations, but that doesn't seem to be a problem unless some portion of them are no longer considered 'new' (as I alluded to before). And that brings me to my second thought: when does a new bit of information become old? Third, amino acids have been found in space, and have formed complex dipeptides within the simulated conditions of space. But I understood your point to be that proteins (many, most, all?) exceed the 500 bit threshold and therefore could not naturally originate without intelligent intervention. Therefore proteins are the direct result of an intelligent agent. Is that a fair assessment?
rhampton7
April 28, 2014 at 12:22 PM PDT
rhampton7 (if you are still there): I think a practical example is the best way to explain what functional complexity is. Let's take ATP synthase, a very old, very important molecule. The foundation of cell metabolism, in a sense (it stores energy in the form of ATP). Let's take subunit alpha of that molecule in humans. It's 553 amino acids long. And it is only a subunit of the whole molecule. Well, that is a molecule that has been around for billions of years, probably from times very near to OOL. If we BLAST it against the genome of bacteria, what do we find? I have done it for E. coli. The result is really amazing. The two molecules, human and bacterial, share 290 identical amino acids! The whole result is 57% identities, 72% positives. Therefore, even if we just stick to those 290 amino acids which are identical, what can we say? However you want to look at it, one conclusion is inevitable: those 290 AAs are really necessary for the protein function. They must be what they are. No neutral mutations, with or without drift, have been able to change them in billions of years. Do you know what the probability of finding a specific sequence of 290 AAs, without any possible change, is? About 1 in 10^377. Any comments, anyone?
gpuccio
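The 1 in 10^377 figure follows from assuming 20 equiprobable amino acids at each of the 290 positions; a quick Python check of the exponent:

```python
import math

# Probability of hitting one specific 290 amino acid sequence at random,
# with 20 equiprobable amino acids per position: 1 in 20^290.
positions = 290
log10_states = positions * math.log10(20)
print(round(log10_states, 1))  # ≈ 377.3, i.e. roughly 1 in 10^377
```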
April 27, 2014 at 10:18 AM PDT
AVS:
everything I say is stupid
Not everything. But a lot of it.
Sayonara!
Ciao, e buona fortuna.
gpuccio
April 27, 2014 at 12:38 AM PDT
Alright ladies, it's been fun once again, but this is just too addicting. I can't wait to read about you guys and all the work you're doing to falsify evolution. You guys have so much evidence backing up your claims that it should be any day now, right? Oh wait, no, that's my side of the argument that has all the evidence. Oh well, maybe next time, guys. Enjoy "teaching" the scientifically illiterate about "evolution"; unfortunately for you, the only thing that does is bolster your ranks with more and more of the scientifically illiterate. Sayonara! <3
AVS
April 26, 2014 at 10:30 PM PDT
Yes poochy, let's hear it: everything I say is stupid, stupid, stupid darwinist propaganda and everything you say about biology is true. "only a few changes will give reproductive advantage" Really? That's all I needed to hear. You're clueless.
AVS
April 26, 2014 at 09:56 PM PDT
For some reason nobody seems to understand ID except you guys. Maybe it's because it doesn't make any sense at all. You guys love pulling out these astronomical numbers and saying "it's impossible!" with absolutely no regard for the actual biology behind what you are talking about.
AVS
April 26, 2014 at 09:50 PM PDT
AVS: You are repeating two other objections which are standard stupid darwinist propaganda. a) Evolution has no direction. That is partly true, mostly false. Neutral drift has no direction. And it can do nothing to generate function. Positive NS, if and to the extent that it exists, has one definite direction: reproductive success. b) Evolution (NS) can find "any possible function". Your: "Do you not realize how broad the spectrum is of possible sequences that could provide a reproductive advantage?". Stupid and false. In a complex system, only a few changes will give a reproductive advantage, and they are mostly complex changes, which add complex functionality to an existing very complex system. Simple variations in a complex system are almost always neutral or deleterious (as is well known by neutralists). The few known cases of microevolution are the only examples of simple variations which confer some reproductive advantage, and they almost always happen with some loss of the original complex function, and under extreme environmental selection (see the case of simple antibiotic resistance).
gpuccio
April 26, 2014 at 9:50 PM PDT
AVS:
You are assuming that the entire sequence arose at once, basically ignoring the entire biological aspect of the situation.
No, I am not, as Mung kindly already explained. As you can see if you really read my posts, I am always referring to all the probabilistic resources of a system, and therefore to multiple attempts in the time span. The "entire sequence arising at once" objection is a classical example of how Darwinists don't understand ID.
gpuccio
April 26, 2014 at 9:43 PM PDT
Mung, do you not see how terrible your argument is? "In fact, evolution does have a specific direction, towards those forms that leave more offspring" This is one of the grossest oversimplifications I have ever seen. "It searches for those specific sequences that provide a reproductive advantage." Do you not realize how broad the spectrum is of possible sequences that could provide a reproductive advantage? And yet in your example you are looking for a single specific sequence. You gloss over the biology because you don't know anything about it.
AVS
April 26, 2014 at 9:41 PM PDT
Mung:
Sorry mate. Got to call you on this one. It is applying a log that allows them to be summed!
OK, that's correct. I just meant that by summing the logs you are not really summing the real numbers, but multiplying them.
How many questions would you need to ask to find out the state of a system of four coins which can each exist in one of two states, with each state being equiprobable? Given that there are sixteen possible states of such a system, does it not require sixteen questions?
I am not sure I understand what you mean. I said that the probability of finding a functional state which is one out of 16, in one random attempt, is 1 in 16. That means that if you randomly generate one of the 16 configurations and "ask" whether it is the functional one (for example, by testing the function in some way), you have a 1 in 16 probability of success in one attempt. I think we are saying the same thing. If you are lucky enough, you can find the right configuration in one attempt. But if the configuration is one out of 10^45 (about 150 bits of functional information), then all the biological attempts available on our planet will never suffice. And with 500 bits you have no chance even with all the material attempts available in our universe.
gpuccio
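The arithmetic in this exchange (16 configurations ↔ 4 bits, 10^45 configurations ↔ roughly 150 bits, probability of success over many attempts) is easy to check. A minimal Python sketch, where the function names and the exponential approximation for large attempt counts are my own illustration rather than anything from the thread:

```python
import math

def functional_bits(p):
    """Functional information, in bits, of a target whose probability
    of being hit in a single random attempt is p: -log2(p)."""
    return -math.log2(p)

def p_at_least_one_success(p, attempts):
    """Probability of at least one hit in `attempts` independent trials.
    The exact form 1 - (1 - p)**attempts underflows in floating point
    when p is tiny, so use the equivalent 1 - exp(-attempts * p),
    which is accurate for very small p."""
    return -math.expm1(-attempts * p)

# Four coins: one functional configuration out of 2^4 = 16.
print(functional_bits(1 / 16))                 # 4.0 (bits)

# 150 bits of functional information: one target among ~1.4 * 10^45 states.
p150 = 2.0 ** -150

# Even granting a generous 10^40 random attempts, success stays negligible:
print(p_at_least_one_success(p150, 10 ** 40))  # ≈ 7e-06
```

The second result illustrates the point being argued: at 150 bits, multiplying attempts by planetary-scale probabilistic resources still leaves the chance of success far below any practical threshold.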
April 26, 2014 at 9:38 PM PDT
AVS:
Again, your example completely ignores the biological aspect of the situation. In the evolution of a coding sequence it is much more complex.
And you're just being obtuse. Or maybe you just don't know what you're talking about. You're arguing that "in the evolution of a coding sequence it is much more complex," by which you can only mean that it's much more improbable. How does that help your case?
A specific sequence doesn’t necessarily need to arise from scratch and evolution does not “search” for specific sequences as it does not have a specific direction.
No one is claiming that the sequences just arise from scratch. That's a straw man. And from the premise that "evolution" does not have a specific direction it does not follow that "search" is inappropriate. In fact, evolution does have a specific direction, towards those forms that leave more offspring. So what is wrong with describing evolution as a search for those forms which leave more offspring? And further, contrary to your misguided claims, evolution does search for specific sequences. It searches for those specific sequences that provide a reproductive advantage. No reproductive advantage, no evolution. No specific sequence, no reproductive advantage. No specific sequence, no evolution.
Mung
April 26, 2014 at 9:22 PM PDT
Of note: Virus-inspired DNA nanodevices https://vimeo.com/91950046
bornagain77
April 26, 2014 at 8:34 PM PDT
AVS, Darwinists claim that unguided processes can generate new proteins fairly easily, but they/you have no evidence that unguided processes can do what you claim for them. All the evidence we have tells us that unguided Darwinian processes are grossly inadequate for the claim being made. Asserting, with hand-waving and name-calling, that proteins can be created without Intelligence is not science. It is dogmatism. Moreover, where Darwinian evolution has failed to back up its claim for the generation of new proteins, it has been demonstrated that Intelligence can create proteins:
Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator - Fazale Rana Excerpt of Review: 'Another interesting section of Creating Life in the Lab is one on artificial enzymes. Biological enzymes catalyze chemical reactions, often increasing the spontaneous reaction rate by a billion times or more. Scientists have set out to produce artificial enzymes that catalyze chemical reactions not used in biological organisms. Comparing the structure of biological enzymes, scientists used super-computers to calculate the sequences of amino acids in their enzymes that might catalyze the reaction they were interested in. After testing dozens of candidates, the best ones were chosen and subjected to "in vitro evolution," which increased the reaction rate up to 200-fold. Despite all this "intelligent design," the artificial enzymes were 10,000 to 1,000,000,000 times less efficient than their biological counterparts. Dr. Rana asks the question, "is it reasonable to think that undirected evolutionary processes routinely accomplished this task?" http://www.amazon.com/gp/product/0801072093
Dr. Fuz Rana, at the 41:30 minute mark of the following video, speaks on the tremendous effort that went into building the preceding protein:
Science - Fuz Rana - Unbelievable? Conference 2013 - video http://www.youtube.com/watch?v=-u34VJ8J5_c&list=PLS5E_VeVNzAstcmbIlygiEFir3tQtlWxx&index=8
Also of note:
Computer-designed proteins programmed to disarm variety of flu viruses - June 1, 2012 Excerpt: The research efforts, akin to docking a space station but on a molecular level, are made possible by computers that can describe the landscapes of forces involved on the submicroscopic scale... These maps were used to reprogram the design to achieve a more precise interaction between the inhibitor protein and the virus molecule. It also enabled the scientists, they said, "to leapfrog over bottlenecks" to improve the activity of the binder. http://phys.org/news/2012-06-computer-designed-proteins-variety-flu-viruses.html
Of related note to the fact that Darwinists have ZERO empirical evidence of Darwinian processes EVER producing a molecular machine, here are some examples showing that intelligence can produce such machines:
(Man-Made) DNA nanorobot – video https://vimeo.com/36880067
Whether Lab or Cell, (If it's a molecular machine) It's Design - podcast http://intelligentdesign.podomatic.com/entry/2013-01-25T15_53_41-08_00
Also of note, Dr. James Tour, who, in my honest opinion, currently builds the most sophisticated man-made molecular machines in the world...
Science & Faith — Dr. James Tour – video (At the two minute mark of the following video, you can see a nano-car that was built by Dr. James Tour’s team) https://www.youtube.com/watch?v=pR4QhNFTtyw
...will buy lunch for anyone who can explain to him exactly how Darwinian evolution works: "I build molecules for a living, I can't begin to tell you how difficult that job is. I stand in awe of God because of what he has done through his creation. Only a rookie who knows nothing about science would say science takes away from faith. If you really study science, it will bring you closer to God." – James Tour, one of the leading nano-tech engineers in the world (Strobel, Lee (2000), The Case For Faith, p. 111)
Top Ten Most Cited Chemist in the World Knows That Evolution Doesn’t Work – James Tour, Phd. – video https://www.youtube.com/watch?v=JB7t2_Ph-ck
bornagain77
April 26, 2014 at 8:27 PM PDT