Uncommon Descent Serving The Intelligent Design Community

Dover all over


From Evolution News & Views:

Following Kitzmiller v. Dover, an Excellent Decade for Intelligent Design

Tomorrow marks the tenth anniversary of the opening of arguments in the Kitzmiller v. Dover case, which resulted in the most absurdly hyped court decision in memory. In 2005, did an obscure Federal judge in Dover, Pennsylvania, at last settle the ultimate scientific question that has fascinated mankind for millennia?

Of course not. The decision by Judge John Jones established nothing about intelligent design — far from being the “death knell” sometimes claimed by Darwin defenders.

A number of post-Dover achievements are listed, including

– Lots of pro-ID peer-reviewed scientific papers published.

– Experimental peer-reviewed research showing the unevolvability of new proteins.

– Theoretical peer-reviewed papers taking down alleged computer simulations of evolution, showing that intelligent design is needed to produce new information. Much more.

With the ten-year anniversary of Dover upcoming, expect Darwin’s followers to be too busy with hype to notice that the ground is subtly shifting.

Ironically, Dover was a major help in making it all possible.

Darwin’s followers are more apt to believe their own storytelling than reality. The reality was that people who wanted design taught in schools were a major hassle and distraction in the years leading up to Dover.

Much theoretical and research work needed to be done. But theorists and researchers were overshadowed by well-meaning people with ideas about what the school system needed—resulting in some amazing Darwinblog rants and opinionating by concerned bimbettes from Talk TV.

It would be useless to ask if the latter had read any book by an ID theorist. Most likely, Bimbette had not read any book since graduating from the journalism program. A characteristic of the type is that they “believe in evolution,” but know almost nothing about it and see no need.

Dover, thankfully, got the crowd out of people's laptop cases and lab coat pockets, and that was, in my opinion, one of the reasons the decade was fruitful.

Darwin followers continued to claim that the Discovery Institute wanted ID taught in schools. As someone with a ringside seat, I knew that wasn’t true; its involvement in Dover was more or less forced by events.

The “teach the controversy” approach the institute did advocate was taken to be a plot to advance ID in the schools. It was actually an attempt to teach evidence-based thinking, as opposed to the Darwin lobby’s metaphysical claims.

But fortunately, the pants-in-knot street theatre Darwin's faithful created over the issue was an unexpected help. It tended to focus much of the hysteria on something other than the main work of the ID community.

Here’s to another decade of fruitful work for the ID community and creative profanity from the Darwinblogs! Oh yes, and pontificating about what God would or wouldn’t do from the Christian Darwinists. At least we will all have our priorities straight.

Follow UD News at Twitter!

Comments
JoeCoder, you made the claim that 'A lot of them probably don’t, even though we don’t know for sure'. That is a claim for which you have no empirical evidence and against which I provided empirical evidence. Your 'Darwinian' hunch does not count as empirical evidence. It is up to you to cite actual empirical evidence to counter the evidence I provided. Let me save you time: there is no actual empirical evidence for your 'Darwinian' hunch that 'random', as opposed to 'directed', mutations are 'completely neutral'!
bornagain77
October 9, 2015 at 12:08 PM PDT
@ba77 Right, I already mentioned alternate reading frames and controlling transcription speed above. I think the alt-frames are problematic for Darwinian evolution in particular. But what evidence is there that ALL synonymous codons require a specific nucleotide for optimal function? A lot of them probably don't, even though we don't know for sure. I don't think that's evidence against design any more than the fact that all the characters of this comment use only 7 out of 8 bits is.
JoeCoder
October 9, 2015 at 11:49 AM PDT
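JoeCoder's closing analogy, that every character of a plain-text comment uses only 7 of its 8 bits, is easy to check directly. A minimal sketch in Python; the sample string and function name are illustrative only:

```python
# Every plain ASCII character has a code point below 128, so the high
# bit of each byte is always 0: the 8th bit carries no information.
def uses_only_7_bits(text: str) -> bool:
    return all(ord(ch) < 128 for ch in text)

sample = "A lot of them probably don't, even though we don't know for sure."
print(uses_only_7_bits(sample))  # True for plain ASCII text
```

Any character outside ASCII (an accented letter, say) would flip the result to False, since its code point needs the 8th bit or more.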
The evidence for the detrimental nature of mutations in humans is overwhelming: scientists have already catalogued over 100,000 mutational disorders.
Inside the Human Genome: A Case for Non-Intelligent Design - Pg. 57 By John C. Avise Excerpt: "Another compilation of gene lesions responsible for inherited diseases is the web-based Human Gene Mutation Database (HGMD). Recent versions of HGMD describe more than 75,000 different disease causing mutations identified to date in Homo-sapiens."
I went to the mutation database website cited by John Avise and found:
Mutation total (as of June 27, 2015) - 166,768 http://www.hgmd.cf.ac.uk/ac/
Despite what Dr. Avise may believe, that is certainly not good from the evolutionary standpoint!
Critic ignores reality of Genetic Entropy - Dr John Sanford - 7 March 2013 Excerpt: Where are the beneficial mutations in man? It is very well documented that there are thousands of deleterious Mendelian mutations accumulating in the human gene pool, even though there is strong selection against such mutations. Yet such easily recognized deleterious mutations are just the tip of the iceberg. The vast majority of deleterious mutations will not display any clear phenotype at all. There is a very high rate of visible birth defects, all of which appear deleterious. Again, this is just the tip of the iceberg. Why are no beneficial birth anomalies being seen? This is not just a matter of identifying positive changes. If there are so many beneficial mutations happening in the human population, selection should very effectively amplify them. They should be popping up virtually everywhere. They should be much more common than genetic pathologies. Where are they? European adult lactose tolerance appears to be due to a broken lactase promoter [see Can’t drink milk? You’re ‘normal’! Ed.]. African resistance to malaria is due to a broken hemoglobin protein [see Sickle-cell disease. Also, immunity of an estimated 20% of western Europeans to HIV infection is due to a broken chemokine receptor—see CCR5-delta32: a very beneficial mutation. Ed.] Beneficials happen, but generally they are loss-of-function mutations, and even then they are very rare! http://creation.com/genetic-entropy Human Genome in Meltdown - January 11, 2013 Excerpt: According to a study published Jan. 10 in Nature by geneticists from 4 universities including Harvard, “Analysis of 6,515 exomes reveals the recent origin of most human protein-coding variants.”,,,: "We estimate that approximately 73% of all protein-coding SNVs [single-nucleotide variants] and approximately 86% of SNVs predicted to be deleterious arose in the past 5,000 -10,000 years. 
The average age of deleterious SNVs varied significantly across molecular pathways, and disease genes contained a significantly higher proportion of recently arisen deleterious SNVs than other genes.",,, As for advantageous mutations, they provided NO examples,,, http://crev.info/2013/01/human-genome-in-meltdown/ Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez 1, Robert J. Marks II 2, Jorge Fernandez 3 and John C. Sanford 4 - May 2013 Excerpt: It is almost universally acknowledged that beneficial mutations are rare compared to deleterious mutations [1–10].,, It appears that beneficial mutations may be too rare to actually allow the accurate measurement of how rare they are [11]. 1. Kibota T, Lynch M (1996) Estimate of the genomic mutation rate deleterious to overall fitness in E. coli . Nature 381:694–696. 2. Charlesworth B, Charlesworth D (1998) Some evolutionary consequences of deleterious mutations. Genetica 103: 3–19. 3. Elena S, et al (1998) Distribution of fitness effects caused by random insertion mutations in Escherichia coli. Genetica 102/103: 349–358. 4. Gerrish P, Lenski R N (1998) The fate of competing beneficial mutations in an asexual population. Genetica 102/103:127–144. 5. Crow J (2000) The origins, patterns, and implications of human spontaneous mutation. Nature Reviews 1:40–47. 6. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501. 7. Imhof M, Schlotterer C (2001) Fitness effects of advantageous mutations in evolving Escherichia coli populations. Proc Natl Acad Sci USA 98:1113–1117. 8. Orr H (2003) The distribution of fitness effects among beneficial mutations. Genetics 163: 1519–1526. 9. Keightley P, Lynch M (2003) Toward a realistic model of mutations affecting fitness. Evolution 57:683–685. 10. Barrett R, et al (2006) The distribution of beneficial mutation effects under strong selection. 
Genetics 174:2071–2079. 11. Bataillon T (2000) Estimation of spontaneous genome-wide mutation rate parameters: whither beneficial mutations? Heredity 84:497–501. http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006
As to my 'deletion of functionless genes' comment, I wrongly extrapolated from studies on bacteria, and I agree with you that selection cannot see that well in multicellular creatures. But that only adds to Dr. Sanford's argument for Genetic Entropy in humans.
bornagain77
October 9, 2015 at 11:40 AM PDT
as to: "What about four-fold degeneracy sites? Some of them may have other purposes (in an alternate reading frame, or affecting transcription speed) but others should be able to mutate free of consequence." And your empirical evidence for 'should be able to mutate free of consequence' is exactly what, other than the hidden Darwinian presupposition in your argument that it must be able to mutate free of consequence?
Synonymous (“Silent”) Mutations in Health, Disease, and Personalized Medicine: Review - 2012 Excerpt: The CBER authors compiled a list of synonymous mutations that are linked to almost fifty diseases, including diabetes, a blood clotting disorder called hemophilia B, cervical cancer, and cystic fibrosis. http://www.fda.gov/BiologicsBloodVaccines/ScienceResearch/ucm271385.htm Synonymous Codons: Another Gene Expression Regulation Mechanism - September 2010 Excerpt: There are 64 possible triplet codons in the DNA code, but only 20 amino acids they produce. As one can see, some amino acids can be coded by up to six “synonyms” of triplet codons: e.g., the codes AGA, AGG, CGA, CGC, CGG, and CGU will all yield arginine when translated by the ribosome. If the same amino acid results, what difference could the synonymous codons make? The researchers found that alternate spellings might affect the timing of translation in the ribosome tunnel, and slight delays could influence how the polypeptide begins its folding. This, in turn, might affect what chemical tags get put onto the polypeptide in the post-translational process. In the case of actin, the protein that forms transport highways for muscle and other things, the researchers found that synonymous codons produced very different functional roles for the “isoform” proteins that resulted in non-muscle cells,,, In their conclusion, they repeated, “Whatever the exact mechanism, the discovery of Zhang et al. that synonymous codon changes can so profoundly change the role of a protein adds a new level of complexity to how we interpret the genetic code.”,,, http://www.creationsafaris.com/crev201009.htm#20100919a 'Snooze Button' On Biological Clocks Improves Cell Adaptability - Feb. 17, 2013 Excerpt: Like many written languages, the genetic code is filled with synonyms: differently spelled "words" that have the same or very similar meanings. 
For a long time, biologists thought that these synonyms, called synonymous codons, were in fact interchangeable. Recently, they have realized that this is not the case and that differences in synonymous codon usage have a significant impact on cellular processes,, http://www.sciencedaily.com/releases/2013/02/130217134246.htm A hidden genetic code: Researchers identify key differences in seemingly synonymous parts of the structure - January 21, 2013 Excerpt: (In the Genetic Code) there are 64 possible ways to combine four bases into groups of three, called codons, the translation process uses only 20 amino acids. To account for the difference, multiple codons translate to the same amino acid. Leucine, for example, can be encoded in six ways. Scientists, however, have long speculated whether those seemingly synonymous codons truly produced the same amino acids, or whether they represented a second, hidden genetic code. Harvard researchers have deciphered that second code,,, Under some stressful conditions, the researchers found, certain sequences manufacture proteins efficiently, while others—which are ostensibly identical—produce almost none. "It's really quite remarkable, because it's a very simple mechanism," Subramaniam said. "Many researchers have tried to determine whether using different codons affects protein levels, but no one had thought that maybe you need to look at it under the right conditions to see this.",,, While the system helps cells to make certain proteins efficiently under stressful conditions, it also acts as a biological failsafe, allowing the near-complete shutdown in the production of other proteins as a way to preserve limited resources. 
http://phys.org/news/2013-01-hidden-genetic-code-key-differences.html Design In DNA – Alternative Splicing, Duons, and Dual coding genes – video (5:05 minute mark) http://www.youtube.com/watch?v=Bm67oXKtH3s#t=305 Codes Within Codes: How Dual-Use Codons Challenge Statistical Methods for Inferring Natural Selection - Casey Luskin - December 20, 2013 Excerpt: In fact, one commentator observed that on the same analysis, codons may have more than two uses: "By this logic one could coin the term "trion" by pointing out that histone binding is also independently affected by A-C-T-G letter frequencies within protein-coding stretches of DNA." But this isn't the first time that scientists have discovered multiple codes in biology. Earlier this year I discussed research that found an analog code in the DNA that helps regulate gene expression, in addition to the digital code that encodes primary protein sequence. In other cases, multiple proteins are encoded by the same gene! And then of course there's the splicing code, which helps control how RNAs transcribed from genes are spliced together in different ways to construct different proteins (see here and here). It boggles the mind to think about how such "codes within codes" could evolve by random mutation and natural selection. But now it turns out that evidence of different functions for synonymous codons could threaten many standard methods used to infer selection in the first place,,, http://www.evolutionnews.org/2013/12/codes_within_co080381.html Sounds of silence: synonymous nucleotides as a key to biological regulation and complexity. - Jan 2013 Excerpt: Synonymous positions of the coding regions have a higher level of hybridization potential relative to non-synonymous positions, and are multifunctional in their regulatory and structural roles. http://www.ncbi.nlm.nih.gov/pubmed/23293005 Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation George Montañez 1, Robert J. 
Marks II 2, Jorge Fernandez 3 and John C. Sanford 4 - published online May 2013 Excerpt: In the last decade, we have discovered still another aspect of the multi- dimensional genome. We now know that DNA sequences are typically “ poly-functional” [38]. Trifanov previously had described at least 12 genetic codes that any given nucleotide can contribute to [39,40], and showed that a given base-pair can contribute to multiple overlapping codes simultaneously. The first evidence of overlapping protein-coding sequences in viruses caused quite a stir, but since then it has become recognized as typical. According to Kapronov et al., “it is not unusual that a single base-pair can be part of an intricate network of multiple isoforms of overlapping sense and antisense transcripts, the majority of which are unannotated” [41]. The ENCODE project [42] has confirmed that this phenomenon is ubiquitous in higher genomes, wherein a given DNA sequence routinely encodes multiple overlapping messages, meaning that a single nucleotide can contribute to two or more genetic codes. Most recently, Itzkovitz et al. analyzed protein coding regions of 700 species, and showed that virtually all forms of life have extensive overlapping information in their genomes [43]. 38. Sanford J (2008) Genetic Entropy and the Mystery of the Genome. FMS Publications, NY. Pages 131–142. 39. Trifonov EN (1989) Multiple codes of nucleotide sequences. Bull of Mathematical Biology 51:417–432. 40. Trifanov EN (1997) Genetic sequences as products of compression by inclusive superposition of many codes. Mol Biol 31:647–654. 41. Kapranov P, et al (2005) Examples of complex architecture of the human transcriptome revealed by RACE and high density tiling arrays. Genome Res 15:987–997. 42. Birney E, et al (2007) Encode Project Consortium: Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. Nature 447:799–816. 43. 
Itzkovitz S, Hodis E, Sega E (2010) Overlapping codes within protein-coding sequences. Genome Res. 20:1582–1589. http://www.worldscientific.com/doi/pdf/10.1142/9789814508728_0006
bornagain77
October 9, 2015 at 11:34 AM PDT
@BA77 Sanford writes:
a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding, and nucleosome building. If a nucleotide carries absolutely no (useful) information, it is, by definition, slightly deleterious, as it slows cell replication and wastes energy.
What about four-fold degeneracy sites? Some of them may have other purposes (in an alternate reading frame, or affecting transcription speed), but others should be able to mutate free of consequence. But deleting them causes a nasty frameshift. I respect John Sanford; he knows all this already, and problems only arise when I pedantically take a hyper-literal reading of his statement. But no more than what you are doing with me : ) Above I wrote:
The answer is that most mutations are either neutral or very slightly deleterious; which of the two depends on how much of the genome you think is functional.
I actually think the genome is mostly functional and therefore most are slightly deleterious. Don't think I'm arguing for mostly junk DNA.
in that not yet deleted gene
Why do you think disabled genes are usually deleted? Selection isn't strong enough to care whether a 3-billion-base genome has an extra 1,000 bases here or there. And only very rarely would a deletion target the exact start and end of a gene.
JoeCoder
October 9, 2015 at 10:18 AM PDT
Andre- Those with sickle-cell trait (only one copy of the mutated gene) survive just fine and have some immunity to malaria. It is only when the individual has both copies of the mutated gene that the disease sickle-cell anemia results. That's how Darwinian evolution "works": break something, hope it isn't fatal, and hope it helps the organism survive. If it helps the organism survive, it has a chance to be passed on. Darwin's "theory" of de-evolution is born. :cool:
Virgil Cain
October 9, 2015 at 09:53 AM PDT
Andre -- can I suggest you learn something about sickle cell before you rant about it?
wd400
October 9, 2015 at 09:48 AM PDT
How exactly is dying beneficial to the organism, WD400? How? Lastly, ever consider that the reason the mosquitoes stay away from sickle cell sufferers is because they have a mechanism that detects there are issues with the food source?
Andre
October 9, 2015 at 09:05 AM PDT
JoeCoder, "Any mutation in a gene that has already been knocked out should be completely neutral" So you are depending on the previous detrimental effect of the loss of an entire gene to argue that a mutation in that not yet deleted gene may be 'completely' neutral? And Andre and I are supposed to take this argument for completely neutral mutations seriously why, exactly? I would suggest that you perhaps soften your stance and say that mutations can be 'nearly neutral' instead of completely neutral.
"Moreover, there is strong theoretical reasons for believing there is no truly neutral nucleotide positions. By its very existence, a nucleotide position takes up space, affects spacing between other sites, and affects such things as regional nucleotide composition, DNA folding, and nucleosome building. If a nucleotide carries absolutely no (useful) information, it is, by definition, slightly deleterious, as it slows cell replication and wastes energy.,, Therefore, there is no way to change any given site without some biological effect, no matter how subtle." - John Sanford - Genetic Entropy and The Mystery of The Genome - pg. 21 - Inventor of the 'Gene Gun' Mutations: Enemies of Evolution with Geneticist Dr John Sanford - Genesis Unleashed (4:10 minute mark) https://youtu.be/MfCETJ_PI1s?t=250
bornagain77
October 9, 2015 at 09:01 AM PDT
@BA77 Any mutation in a gene that has already been knocked out should be completely neutral, and we all have many broken genes. I said that MOST mutations are neutral or slightly deleterious, but most mutations are not in developmental gene regulatory networks. And even among the mutations that are, are none at four-fold degeneracy sites?
JoeCoder
October 9, 2015 at 08:29 AM PDT
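The "four-fold degeneracy sites" under discussion can be enumerated directly from the standard genetic code: a third codon position is four-fold degenerate when all four bases at that position encode the same amino acid. A minimal sketch; the table is the standard code (NCBI translation table 1), encoded in the conventional T/C/A/G nested ordering:

```python
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ...
# (nested loops over T, C, A, G); '*' marks stop codons.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W"
         "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR"
         "VVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

# A codon family is four-fold degenerate when all four choices of
# third base yield the same amino acid.
fourfold = sorted({prefix for prefix in
                   (a + b for a in BASES for b in BASES)
                   if len({CODON_TABLE[prefix + c] for c in BASES}) == 1})
print(fourfold)       # the 8 four-fold families
print(len(fourfold))  # 8
```

Eight of the sixteen two-base prefixes (e.g. GC- for alanine, GG- for glycine) are four-fold degenerate, which is exactly the class of sites the commenters are arguing over.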
As for do mutations happen? Yes, once the integrity check systems, repair systems and PCD mechanisms fail, mutations do happen, and it's pretty much lethal to the organism every time, Sickle Cell being a good example of that, cancer also.
We each have many mutations, so this is just not true. The great majority of sickle cell disease is not caused by new mutations, but comes from standing variation in our population. The sickle allele is common because it's beneficial to have one copy of it in malaria-endemic areas.
wd400
October 9, 2015 at 08:20 AM PDT
JoeCoder, actually mutations that happen in developmental gene regulatory networks are 'always catastrophically bad' (Stephen Meyer). Moreover, the idea that mutations can be completely neutral is false (John Sanford).
bornagain77
October 9, 2015 at 08:19 AM PDT
@Andre wrote:
mutations do happen, and it's pretty much lethal to the organism every time
Do you agree that humans get something around 60 to 160 mutations per generation? There is no biologist, creationist, ID or otherwise, who would dispute that number. If every mutation is "pretty much lethal", why aren't we all dead 100 times over? (to reappropriate Kondrashov's famous question) The answer is that most mutations are either neutral or very slightly deleterious; which of the two depends on how much of the genome you think is functional.
JoeCoder
October 9, 2015 at 07:27 AM PDT
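The Kondrashov-style arithmetic behind JoeCoder's question can be made explicit. If each of m new mutations were independently lethal with probability p, survival per generation would be (1 - p)^m. A back-of-envelope sketch; the figure of 100 mutations is a round number from the 60-160 range quoted in the thread, and the p values are illustrative, not measured:

```python
# If humans acquire ~m new mutations per generation and each one were
# lethal with probability p, per-generation survival is (1 - p) ** m.
def survival_probability(m: int, p: float) -> float:
    return (1.0 - p) ** m

m = 100  # round figure within the 60-160 range quoted in the thread
for p in (0.5, 0.1, 0.01):
    print(f"p = {p}: survival = {survival_probability(m, p):.3g}")
```

Even if only 1 in 100 new mutations were lethal, survival per generation would be about 0.99^100, roughly 37 percent, and the population would collapse within a few generations; hence the inference that the great bulk of new mutations must be at most very slightly deleterious.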
Andre, this article may interest you:
Nobel Prize 2015: What the chemistry winners taught us about the fragility of human life - Julia Belluz - October 7, 2015, Excerpt: Early this morning we learned that the 2015 Nobel Prize in Chemistry went to Tomas Lindahl of the Francis Crick Institute, Paul Modrich of Duke University, and Aziz Sancar of University of North Carolina Chapel Hill. They won for a simple reason: Their scientific discoveries revealed the surprising ways in which our DNA is at once extremely fragile and super resilient.,,, As late as the 1960s and '70s, these building blocks of life were believed to be exceptionally stable. How else could DNA be passed down from generation to generation? Scientists surmised that human evolution must have selected for sturdy molecules. After all, if our gene molecules were fragile, no complex organism could possibly survive, right? Around that time, however, Lindahl began to question the conventional wisdom, asking: "How stable is DNA, really?" As a postdoc student at Princeton and later at the Karolinska Institutet in Stockholm, he carried out a series of experiments showing that DNA molecules, when isolated outside of the cell, actually degraded pretty quickly. Lindahl's research suggested that DNA can actually sustain quite a bit of damage — but somehow manage to thrive and repair itself. "[DNA] turned out to be photosensitive, temperature sensitive, and all-sorts-of-other-stuff sensitive, and that meant that living cells (1) must have mechanisms to repair DNA damage and (2) must spend a substantial amount of time and energy on them," explained chemist Derek Lowe in a fantastic blog post on the awards.,,, the Nobel Prize Committee said. "It is constantly subjected to assaults from the environment, yet it remains surprisingly intact." The big question, then, was how DNA gets repaired. Lindahl arrived at part of the answer here: He identified a bacterial enzyme that removes damaged cells from DNA. 
Later on, he also discovered a cellular process — called "base excision repair" — that essentially continuously repairs damaged DNA using a similar enzyme. Lindahl's co-winner, Aziz Sancar, later built on this work, mapping the mechanism that cells use to repair the most common type of assault — UV damage — a technique called "nucleotide excision repair." Basically, our cells can cut out sections of DNA that are damaged by UV light and replace them with new DNA. Meanwhile, Paul Modrich discovered yet another repair mechanism: Cells can correct replication errors through a process called "mismatch repair." The upshot of these discoveries is that cells are constantly working to repair DNA damage. "Every day, [these processes] fix thousands of occurrences of DNA damage caused by the sun, cigarette smoke or other genotoxic substances; they continuously counteract spontaneous alterations to DNA and, for each cell division, mismatch repair corrects some thousand mismatches," the Nobel Committee described. "Our genome would collapse without these repair mechanisms." These discoveries were important in themselves: They completely changed how the scientific community understood the fundamentals of cell biology and DNA. http://www.vox.com/2015/10/7/9470913/nobel-prize-2015-what-the-chemistry-winners-taught-us-about-the
bornagain77
October 9, 2015 at 04:13 AM PDT
WD400 You know all that useless junk you Darwinians love selling? Would you believe that these non-coding regions are quite possibly the built-in responses to changes in an environment? As for do mutations happen? Yes, once the integrity check systems, repair systems and PCD mechanisms fail, mutations do happen, and it's pretty much lethal to the organism every time, Sickle Cell being a good example of that, cancer also. Mutations are bad! They kill you!
Andre
October 8, 2015 at 11:02 PM PDT
Mung -- I know how to distinguish drift from selection. I've never diagnosed the source of your confusion on that topic, and don't suppose I will.
wd400
October 8, 2015 at 06:43 PM PDT
wd400, have you learned the difference between genetic drift and neutral evolution yet? Or how to distinguish drift from selection?
Mung
October 8, 2015 at 06:39 PM PDT
I understand that's the way D&S are doing it: 1/u for the first mutation A and 1/sqrt(u) for the second mutation B. Because they reason that any time A appears, it might linger in the population for a while, and during that time B may appear in one of those carrying A. But I think if the 1/sqrt(u) component were correct, we could restructure all of our brute-force password-cracking algorithms as an evolutionary search and make them exponentially faster. Nobody does this and we all still use passwords, so I'm inclined to think the 1/sqrt(u) term is incorrect. Or maybe there's some other difference I'm missing?
JoeCoder
October 8, 2015 at 03:29 PM PDT
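The two scalings being debated differ enormously at the mutation rate used in this thread. A rough arithmetic sketch: it only compares the magnitudes of a naive 1/u * 1/u expectation against the 1/u * 1/sqrt(u) scaling JoeCoder attributes to Durrett and Schmidt, and is not the full D&S model (which also involves population size and drift):

```python
import math

u = 1e-10  # per-site per-replication mutation rate used in the thread

# Naive scale if both mutations had to arise independently:
independent = (1 / u) * (1 / u)        # 1/u^2
# The scaling JoeCoder describes from D&S: 1/u for the first hit,
# 1/sqrt(u) for the second, because the first can linger by drift.
linger = (1 / u) * (1 / math.sqrt(u))  # 1/u^1.5

print(f"1/u^2   = {independent:.1e}")           # 1.0e+20
print(f"1/u^1.5 = {linger:.1e}")                # 1.0e+15
print(f"ratio   = {independent / linger:.1e}")  # 1.0e+05
```

At u = 1e-10 the two expectations differ by a factor of 1/sqrt(u) = 1e5, which is exactly the gap the commenters are arguing over.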
You seem(?) to be calculating the probability of finding the double mutation in a single chain. The point D&S make is that you have to calculate the probability that a cell getting the "B" mutation descends from one that carried the "A" mutation.
wd400
October 8, 2015 at 03:16 PM PDT
@wd400 I don't think scaling is the only thing they got wrong. I'm also skeptical of the sqrt(u2) in their theorem 1, which itself is the root of our difference in calculations. Let me explain why I find it problematic:
1. The malarial genome is 23 megabases.
2. It therefore has 23 million squared possible 2-nucleotide permutations. That's 5e14, so far so good.
3. But the mutation rate you used in the D&S equation above is 10^-10, or about 1 mutation every 435 replications.
4. 5e14 * 435 is 2e17.
So it should take 2e17 malaria to search every possible 2-nucleotide permutation, or 1e17 to search half of them. In computer science we know it's impossible to find a value faster than you can look for it. That assumption is critical in all our security systems; otherwise a password of 8 digits could be found in far less than an average of 10^8 / 2 searches. So I think D&S's sqrt(u2) component must be incorrect. But I have not dug through the Iwasa et al. paper cited by D&S to try to figure out why. Thank you for your help in evaluating this so far. P.S. I think 1e17 is still reconcilable with Behe and Tim White's suggestion that it takes 1e20 malaria to f
JoeCoder
October 8, 2015 at 01:56 PM PDT
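The four numbered steps in JoeCoder's comment are plain arithmetic and can be checked directly. A minimal sketch; the genome size and mutation rate are the figures quoted in the thread, not independently sourced:

```python
genome = 23_000_000  # malarial genome, ~23 megabases (per the comment)
u = 1e-10            # per-site mutation rate used in the thread

pairs = genome ** 2                     # possible 2-nucleotide permutations
mutations_per_replication = genome * u  # expected new mutations per copy
replications_per_mutation = 1 / mutations_per_replication

print(f"{pairs:.2e} site pairs")  # ~5.29e+14
print(f"1 mutation every {replications_per_mutation:.0f} replications")  # ~435
print(f"{pairs * replications_per_mutation:.1e} replications "
      f"to visit each pair once")  # ~2.3e+17
```

The three printed values reproduce the comment's steps 2 through 4: roughly 5e14 pairs, one mutation per ~435 replications, and ~2e17 replications to cover the search space once.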
Andre, in 38 you really sound like you are denying that mutations can happen. Are you serious?
wd400
October 8, 2015 at 01:42 PM PDT
Hi JoeC, I think the bit that D&S got wrong with regard to Dembski was how to specify u -- the nucleotide rate against the particular nucleotide mutation required for the amino acid change. But for a given rate, D&S show how to calculate the waiting time for 2 mutations. So I don't think scaling and re-scaling is going to get you back to the inverse square?
wd400
October 8, 2015 at 01:37 PM PDT
But before we even discuss all the sophisticated systems, the reasonable observer should pause for a minute and consider the following... How can any unguided process, without any help, build its own guided process to prevent any unguided processes from happening in the first place? Luck? Chance?
Andre
October 8, 2015 at 01:23 PM PDT
JoeCoder An integrity checker will have a signature file of what the data should be like. Any change to the data and the integrity check fails. DNA integrity checks are not simple checksums. They are highly sophisticated; worse, there are multiple integrity checks, which means the system even has built-in redundancy. If this is the case spanning back 550 million years, Darwinian evolution is impotent, powerless and unable to do anything, because anything it attempts, random or non-random, first has to pass these multiple checks. If changes do get through for whatever reason, it means the integrity check system failed. If the integrity checks fail, the system attempts repairs... How does it know what to repair? Well, consider that there are additional information or signatures in the system. And when repair fails, what happens next? Yes, you got it: the system goes into self-destruct when all checks and all repairs have failed. Darwinian evolution can't do jack, because not only does the system not tolerate it, but when the system faults and is not repairable it shuts down indefinitely.
Andre
October 8, 2015 at 01:11 PM PDT
@Andre Because any lineage that trashed them didn't live to tell about it? Thus they are conserved.
JoeCoder
October 8, 2015 at 12:58 PM PDT
JoeCoder, here is the problem: all the integrity-check systems, all the repair mechanisms, and all the PCD (programmed cell death) systems are evolutionarily conserved. This presents a major hurdle for Darwinian evolution... Want to guess why?
Andre
October 8, 2015 at 12:51 PM PDT
"@BA77 I'm not sure why error correction is a paradox for evolution?" You mean besides the obvious paradox of presuming that random errors built an extremely sophisticated, multiply overlapping system of random-error correction? Yeah, no paradox in that Darwinian presumption at all! :) A bit off topic, JoeC, but the following paper may interest you in regard to 'directed' mutations:
Duality in the human genome - Nov. 28, 2014 Excerpt: The results show that most genes can occur in many different forms within a population: On average, about 250 different forms of each gene exist. The researchers found around four million different gene forms just in the 400 or so genomes they analysed. This figure is certain to increase as more human genomes are examined. More than 85 percent of all genes have no predominant form which occurs in more than half of all individuals. This enormous diversity means that over half of all genes in an individual, around 9,000 of 17,500, occur uniquely in that one person - and are therefore individual in the truest sense of the word. The gene, as we imagined it, exists only in exceptional cases. "We need to fundamentally rethink the view of genes that every schoolchild has learned since Gregor Mendel's time." ... According to the researchers, mutations of genes are not randomly distributed between the parental chromosomes. They found that 60 percent of mutations affect the same chromosome set and 40 percent both sets. Scientists refer to these as cis and trans mutations, respectively. Evidently, an organism must have more cis mutations, where the second gene form remains intact. "It's amazing how precisely the 60:40 ratio is maintained. It occurs in the genome of every individual - almost like a magic formula," says Hoehe. http://medicalxpress.com/news/2014-11-duality-human-genome.html
bornagain77
October 8, 2015 at 12:51 PM PDT
@wd400 Above I said:
"So if we take your 5e14 above and multiply it by 30"
But I think you did a calculation for a mutation rate of 1e-10 and not 1e-8, so scratch that part.
JoeCoder
October 8, 2015 at 12:32 PM PDT
@BA77 I'm not sure why error correction is a paradox for evolution? Shouldn't there be a sweet spot: a mutation rate high enough to allow for evolution, but not so high that it drives species extinct? However, I agree with you and Sanford's genetic-entropy thesis that we, and likely most other higher animals, are well above that safe point. I also agree that the distribution of mutations can be very non-random. I'm assuming randomness because (1) these things would be too hard to calculate otherwise, and (2) I want to show how difficult evolution is even under generous assumptions.
JoeCoder
October 8, 2015 at 12:27 PM PDT
@wd400 You're correct that Durrett and Schmidt made u1 and u2 differ by a factor of 30 (a 10-nucleotide binding site times 3 ways to mutate each letter); I had not noticed that before. However, I did some googling and found an interesting debate between Behe and Durrett and Schmidt. In the paper you linked above, Durrett and Schmidt said Behe's estimated waiting time for two mutations was off by "5 million times". Behe wrote a response saying their estimate was 30 times too generous, because they assumed any mutation in the site would change the protein, rather than only codon-altering mutations. In a reply to Behe, Durrett and Schmidt agreed that "Behe is right on this point." So if we take your 5e14 above and multiply it by 30, we get 1.5e16. Durrett and Schmidt use a mutation rate of 1e-8 per nucleotide per generation for humans, and the inverse square of that is 1e16. So I think their calculations do indeed show that the odds are the inverse square of the mutation rate. Behe was wrong in The Edge of Evolution not to consider the difference in mutation rates between humans and malaria, which I think accounts for the remainder of the gap between Durrett and Schmidt's and Behe's calculated waiting times for two mutations in humans.
JoeCoder
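The arithmetic in this comment is easy to check directly. The sketch below uses only the figures quoted above (the 5e14 waiting time, the factor of 30, and the 1e-8 per-nucleotide rate); it adds no new data:

```python
# Figures quoted in the comment; variable names are my own.
waiting_time = 5e14   # wd400's waiting-time figure for two mutations
behe_factor = 30      # Behe's correction factor
mu = 1e-8             # Durrett & Schmidt's per-nucleotide, per-generation rate

corrected = waiting_time * behe_factor   # 5e14 * 30
inverse_square = 1 / mu**2               # (1e-8)^-2, up to float rounding

print(corrected)                 # prints: 1.5e+16
print(f"{inverse_square:.3g}")   # prints: 1e+16
```

So the corrected waiting time (1.5e16) and the inverse square of the mutation rate (1e16) agree to within a small factor, which is the comparison the comment is making.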
October 8, 2015 at 12:09 PM PDT