
“The Unbearable Lightness of Chimp-Human Genome Similarity” by Rick Sternberg


Walter ReMine once said to me that the supposed 99.5% identity between chimps and humans is like taking two books, creating an alphabetical listing of all the unique words in each book, and then comparing the resulting lists. It would be easy to use those lists to argue, "See, the books are 99.5% identical!"
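To make the analogy concrete, here is a toy sketch (mine, not ReMine's; the two "books" are stand-in strings): a unique-word-list comparison can report perfect identity for two texts that read quite differently.

```python
# Toy illustration of the word-list analogy: two different "books"
# can share all of their unique words.
book_a = "the cat sat on the mat and the dog ran to the cat"
book_b = "the dog sat on the cat and the mat ran to the dog"

words_a = sorted(set(book_a.split()))  # alphabetical list of unique words
words_b = sorted(set(book_b.split()))

shared = set(words_a) & set(words_b)
total = set(words_a) | set(words_b)
print(f"word-list identity: {len(shared) / len(total):.1%}")  # 100.0%
# The texts themselves differ; the comparison method, not the material
# compared, produced the high similarity score.
```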

Another ID proponent, David Pogge, argued that the sequence comparisons are like comparing driving directions: two sets of directions can be 99% similar, but a few differences can lead to radically different destinations.

With this in mind, here is Rick Sternberg’s Guy Walks Into a Bar and Thinks He’s a Chimpanzee: The Unbearable Lightness of Chimp-Human Genome Similarity.

Comments
Sal writes:

derwood wrote: "If our population has a variation in genome size of some tens of millions of BPs, chances are pretty good that similar populations will, too." If so, this is devastating evidence against natural selection's ability to manage large numbers of nucleotides, and hence it shows almost total non-involvement by natural selection in the large numbers of base differences between chimps and humans.

Complete non sequitur. Why would natural selection have to 'manage' large numbers of nucleotides? It appears that you want to argue that all nucleotide sites must be under the control of natural selection, and that if there is large-scale nucleotide variation within and among species, then this is a problem. How do you propose that this was set up by your preferred Intelligent Agent? Are there computer programs designed to function with variable amounts of code, yet still be the same program? Must be.

So your counterclaim is a double-edged sword against Darwinian evolution as being responsible for the chimp-human divergence. Whether the regions are functional or not is beside the point.

Not at all. You seem to be arguing a very strict selectionist position here, yet I know from reading your posts for some time that you should at the very least understand that this is not a realistic position. The functionality part is in fact devastating to your position, since it is clear that any two humans can differ by tens of millions of bases, yet both are still human and neither will exhibit any particular phenotypic deficits. Thus, the argument that all DNA is 'fully functional' seems to have been devastated just by investigating human genome variation.

The differences are there, they are substantial, and natural selection can't be argued as the driving force.

The driving force for what? Controlling how much variation occurs? I can't see what you think your argument actually is here, other than tossing something out to counter what the facts show. Why on earth would natural selection HAVE to 'manage' genome size? You are not making sense.

Your claim: "If our population has a variation in genome size of some tens of millions of BPs, chances are pretty good that similar populations will, too." unwittingly underscores selection's irrelevance to the supposed chimp-human divergence.

Actually, my point was that YOUR claim regarding the number of "fixed" differences between species is in error for at least two reasons: you presented it as occurring only in one species, hoping to make the genetics more difficult to explain; and because there is relatively large-scale variation WITHIN species, the 'fixed' differences are likely to be somewhat smaller than you indicated. Beyond that, as I've already explained, since the raw nucleotide differences are due not only to small-scale mutations but also to larger indels and duplication events, lumping ALL nucleotide differences together in one big pot and declaring THAT number to be what must be explained by population genetics is simply absurd. It is sort of like looking at the 4,000 or so American deaths in Iraq and insisting that each one represents a unique battle.

derwood: "If that were so, again, it should be made clear that any two humans differ by millions of nucleotides."

But the proper way to do the comparison is to create a representative genome for humans where there are large-scale monomorphic regions. Do the same for chimps, and then the human-human difference will be zero, the chimp-chimp difference will be zero, and the chimp-human differences will be at least 5%.

Yes, when you count indels and the like, as should have been obvious by now. You are still talking raw nucleotide difference. Let us look at it this way. Say we have a locus from two species. This locus starts out 1000 bp in length for both. Over time, species A accumulates 5 unique substitutions. Species B accumulates 4 unique substitutions and experiences one insertion event of 100 bp in length. If we compare the sequences from a raw-nucleotide-difference point of view, we see that they differ by 109 bp, or roughly 10%. If we compare them from a mutational point of view, we see that they differ by 10 such events, or less than 1%. Sure, the raw nucleotide difference has certain applications and relevance, but it is true that the raw nucleotide difference will be larger than the number of mutation events required to produce it.

This sort of comparison would be no less legitimate than the 99.5% identity claimed by making comparisons of only the 1.5% of the regions between humans and chimps that are similar. The 99.5% identity figure is misleading.

The legitimacy of the number depends on what is being measured and in what context. It is misleading to claim that all individual nucleotide differences - the raw nucleotide divergence - have greater import when discussing descent than does the number of mutations required to explain the difference.

I already said that I don't mean to say that chimps are so far away from humans as to be placed in different orders. Even the creationist community of Linnaeus and friends would consider the chimp-human relationship to be close relative to chimps vs. plants or humans vs. plants.

It is good that you recognize this. Do you also recognize that using the raw nucleotide difference for ALL interspecies comparisons, as you want to do with the human-chimp issue, would make ALL related species more divergent (by measurement only)? Whatever would the Baraminologists do when they realize that all turtles may NOT be in the same holobaramin using these new criteria? Their ark, it seems, suddenly has to get larger.

derwood
May 21, 2009, 06:58 AM PDT
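A minimal sketch of the arithmetic in derwood's locus example above (the 1000 bp locus, the substitution counts, and the 100 bp insertion are taken from the comment; the code just redoes the division):

```python
# Raw nucleotide difference vs. number of mutational events for the
# hypothetical locus: 1000 bp, species A gets 5 substitutions, species B
# gets 4 substitutions plus one 100 bp insertion.
locus_len = 1000
subs_a, subs_b = 5, 4
insertion_len = 100

aligned_len = locus_len + insertion_len      # 1100 bp alignment
raw_diff = subs_a + subs_b + insertion_len   # 109 differing positions
events = subs_a + subs_b + 1                 # 10 mutational events

print(f"raw nucleotide difference: {raw_diff / aligned_len:.1%}")  # ~9.9%
print(f"mutational-event measure:  {events / aligned_len:.1%}")    # ~0.9%
# Counting every inserted base inflates the difference roughly tenfold
# relative to counting the mutation events that produced it.
```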
If that were so, again, it should be made clear that any two humans differ by millions of nucleotides.
But the proper way to do the comparison is to create a representative genome for humans where there are large-scale monomorphic regions. Do the same for chimps, and then the human-human difference will be zero, the chimp-chimp difference will be zero, and the chimp-human differences will be at least 5%. This sort of comparison would be no less legitimate than the 99.5% identity claimed by making comparisons of only the 1.5% of the regions between humans and chimps that are similar. The 99.5% identity figure is misleading. I already said that I don't mean to say that chimps are so far away from humans as to be placed in different orders. Even the creationist community of Linnaeus and friends would consider the chimp-human relationship to be close relative to chimps vs. plants or humans vs. plants.

scordova
May 20, 2009, 05:04 PM PDT
derwood wrote: "If our population has a variation in genome size of some tens of millions of BPs, chances are pretty good that similar populations will, too."

If so, this is devastating evidence against natural selection's ability to manage large numbers of nucleotides, and hence it shows almost total non-involvement by natural selection in the large numbers of base differences between chimps and humans. :-) So your counterclaim is a double-edged sword against Darwinian evolution as being responsible for the chimp-human divergence. Whether the regions are functional or not is beside the point. The differences are there, they are substantial, and natural selection can't be argued as the driving force. Your claim:

"If our population has a variation in genome size of some tens of millions of BPs, chances are pretty good that similar populations will, too."

unwittingly underscores selection's irrelevance to the supposed chimp-human divergence.

scordova
May 20, 2009, 04:50 PM PDT
Mr Cordova, The point I was trying to make about polyconstraint goes back to Dr Sanford's original quote "polyfunctional implies polyconstrained". The clear implication I see in Sanford's text is that a string of polyfunctional DNA is less able to vary because of these multiple constraints. My point about backup is to show how backup undercuts this concern. Any function that is provided for in two separate places can then begin to vary in one of those places. I think Dr Sanford's argument on "poly" is that overlapping function would be hard for a human to design, and is therefore evidence of superhuman design. However, we do design this way when trying to optimize certain parameters, such as code size. The unexplored alternative hypothesis is whether evolutionary methods also result in overlapping functional descriptions, and under what optimization pressures this happens.

Nakashima
May 20, 2009, 10:56 AM PDT
Mr Joseph,

And does anyone know how many mutations it would take to go from an ape-like organism to a fully developed human?

This article estimates only a few hundred mutated genes.

Nakashima
May 20, 2009, 10:38 AM PDT
Mr Joseph,

IOW do all the other muscles in that area also have to "shrink" such that the diminished masseter can then work?

That is the point of gene regulatory networks. Vary a gene near the beginning of the chain, and all the downstream development adapts. There is not one gene per muscle, bone, neuron, and blood vessel. You can find out more by looking up FGF and BMP, for example.

Nakashima
May 20, 2009, 10:18 AM PDT
Sal writes:

Thank you for that comment. That was very informative. I was not aware of issues with guinea pigs.

No problem - but remember, I am not certain that it is in guinea pigs. Dave W. will probably know.

derwood
May 20, 2009, 09:19 AM PDT
Green writes:

derwood writes: "Luskin's point was a red herring. The chromosomal fusion has NEVER been presented as evidence for a speciation event; it has always been presented, as far as I know, to explain why we have differing karyotypes."

But derwood, nowhere did Luskin claim that it caused the speciation event. In fact, I think in a recent podcast he affirmed that it *didn't*.

Right - which is why it is a red herring. Since evolutionists have never, as far as I know, posited the fusion as an explanation for speciation, going to lengths to claim that it did NOT cause speciation is at the very best irrelevant, since we've never said anything else. At worst, which is what I conclude, Luskin's goal is to argue against human/chimp ancestry by claiming that the fusion event cannot account for human evolution; therefore, talking about the fusion event is a distraction.

Likewise, nor did I say that it caused the speciation event. We have evidence today that fusion events DON'T cause speciation events (e.g. different BREEDS of horse, and different RACES of mice).

Right. Like I said, we've never claimed, as far as I know, that it did. The discussion of the fusion is to explain WHY we have fewer chromosomes than chimps do, for this has in the past been used as an argument against evolution.

This is good evidence for the idea that chromosomal fusion events happen fairly commonly (without adverse effects) *within* species. Thus it is perfectly plausible that 2 human chromosomes fused to give human chromosome 2, rather than 2 chimp chromosomes fusing to give human chromosome 2.

I'm not so sure about all that, but again, the karyotype issue has been used as an argument against evolution. It is good to see that Luskin is now arguing the evolution position, according to you.

derwood
May 20, 2009, 09:17 AM PDT
Sal:

Thank you for your comment; however, I would argue that the figure of size is relevant in that it emphasizes that the 99% identity isn't what it may appear to most.

So, you are arguing semantics, essentially? I am unconcerned with how 'most' would interpret something. If most see no problem with drawing conclusions about what they do not fully grasp, so be it, but I don't think that we should dismiss the reality of sequence analyses because 'most' misinterpret them.

The 5% difference (of extra base pairs) is just tossed out. If we toss out differences, of course the supposed identity numbers will be misleadingly inflated.

You seem to have 'tossed out' the explanation. The 'extra bases' are largely the product of a much smaller number of mutational events. It is the mutational event that is relevant, not the exact number of bases. If that were so, again, it should be made clear that any two humans differ by millions of nucleotides.

We really don't know that those extra base pairs are junk after all, do we.

I don't recall implying that they are. Although, since intraspecies differences can run into the millions, it would seem that if they were NOT 'junk', or at least not 'required', there would be major phenotypic consequences.

And even if they are junk, we still have the nasty problem of how 177,500,000 base pairs got fixed into a population.

How do you know they are fixed? If our population has a variation in genome size of some tens of millions of BPs, chances are pretty good that similar populations will, too.

How does 177,500,000 base pairs of junk get fixed in a population? How did the human population get nearly 50% of its genome fixed via duplication events?

First, since we are comparing the genomes of different species with similar generation times (more or less), it is safe to assume that roughly half the difference is found in each species, not all change in one. Second, there is no guarantee that ALL of the differences are fixed, as should be pretty obvious.

I wouldn't be too quick to presume random drift could be a good explanation for such a large amount of base pairs.

I wouldn't be too quick to presume that because we do not have all answers to all questions this very day, the better answer is to lay it all at the doorstep of one's preferred Intelligent Designer.

derwood
May 20, 2009, 09:10 AM PDT
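As background to the fixation question batted around above, here is a minimal sketch of the standard neutral-drift expectation (Kimura's result that the neutral substitution rate equals the neutral mutation rate, independent of population size; the mutation rate, split time, and generation time below are illustrative assumptions, not measurements):

```python
# Expected neutral sequence divergence between two lineages under drift.
# Under neutrality, substitutions accumulate at the mutation rate mu in
# each lineage, so the expected per-site divergence after t generations
# of separation is d = 2 * mu * t.
mu = 1.1e-8          # neutral mutations per site per generation (assumed)
years = 6e6          # assumed split time in years
gen_time = 20        # assumed years per generation

t = years / gen_time
d = 2 * mu * t
genome_sites = 3.2e9

print(f"expected per-site divergence: {d:.2%}")                     # ~0.66%
print(f"expected fixed point differences: {d * genome_sites:.2e}")  # ~2.1e7
# This covers point substitutions only; indels and duplications, which
# drive the larger raw base-count differences discussed in this thread,
# fix as discrete events rather than base by base.
```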
Adel (98), Are you trying to say that you are a knuckle-walker? :)

Joseph
May 20, 2009, 07:25 AM PDT
One thing I should add: the chimp-human similarity issue has some independence from the discussions about function at polyconstrained regions. If a nucleotide position differs between a chimp and a human, this change still has to be accounted for. For example, if there is an indel in humans which is not in chimps, we still have to account for how the indel was fixed into the human population, independent of whether the indel is functional or not. If the indel is not subject to selection, it is probably just as much a problem to explain how it overtook the population without any selection pressure, especially in populations that are not well-stirred due to geographic distribution. So even granting (only for the sake of argument) that the non-coding regions are non-functional, the differences in those regions still need to be accounted for when trying to model the evolution after the supposed chimp-human divergence.

scordova
May 20, 2009, 06:55 AM PDT
What is the evidence that demonstrates an ape-like organism with diminished masseter muscles can even eat?
I'll have to chew on that.

Adel DiBagno
May 20, 2009, 06:52 AM PDT
On the subject of polyconstraint, there seems to be an assumption that no other part of the genome is a backup to any other, in any of its functions.
I don't think that is the case. Something can be polyconstrained and also serve as a backup. In the software engineering world (with which you are familiar), functional modules are polyconstrained. Changing a module (say, a print routine) may or may not have a radical outcome on all the system parts that depend on it. We might make changes to the module (say, for maintenance purposes) while the functional results stay the same. The fact that we can make these changes does not mean the module is not polyconstrained. There are acceptable variations, but it is still polyconstrained. Being polyconstrained limits the range of variation; it does not imply that no variation can exist. The problem is determining which variations actually compromise function, and this is not so easy, for the reasons I outlined here: Airplane Magnetos Contingency Designs and Reasons ID Will Prevail.

In deeply redundant systems (such as spaceships) it is customary for copies of software to be located in numerous places. This provides backup and robustness. So we can have copies of polyconstrained entities. Polyconstraint does not imply there cannot be redundancy. Further, the fact that something is a copy or a backup that is rarely if ever used cannot be an argument that it is not really functioning. A spare tire, or a spare magneto in an airplane, that is never used still serves an important function. Trying to use immediate selective advantage as a criterion for determining whether something is functioning is a questionable philosophical approach to characterizing integrated systems. I put forward some of my objections to using immediate selective advantage as a metric for measuring functionality here: Airplane Magnetos Contingency Designs and Reasons ID Will Prevail and Survival of the Sickest.

I believe there are desirable variations and undesirable variations. We do not yet know which are which. That was the philosophical issue with "Survival of the Sickest". We might argue sickle-cell anemia is an acceptable variation since in some contexts it lends survival advantage, but I find this viewpoint problematic. This leads to a discussion of a good point which you made:
We were talking recently here on UD about a gene (an opsin gene? I forget) where there were over one hundred variants in the human population thus far sampled. This gene is not ‘polyconstrained’. There are a large number of acceptable variations and substitutions - variations possible because they do not significantly affect the electrosurface of the protein.
Polyconstraint does not imply everything is prevented from having some variation. The way Sanford argues for detecting polyconstraint is the manner in which the same stretch of DNA can be used for different functions (such as in alternative splicing). That is the main idea in the way Sanford describes polyconstraints. Which variations are and are not consistent with the original design intent is an open question. But if systems biologists are attempting to characterize systems in terms of their intended functions, these philosophical questions will come up again. I can say sickle-cell anemia is not an intended function, but on what grounds can this be argued? The question remains open, but it is an important one.

scordova
May 20, 2009, 06:16 AM PDT
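Here is a small sketch of the software analogy above (the function names are invented for the example, not drawn from Sanford): a routine used by several callers is constrained by all of them at once, yet internal changes that honor the shared contract remain acceptable variation.

```python
# Sketch of the polyconstraint analogy: format_report() is "polyfunctional"
# (two subsystems depend on it), so its output contract is constrained by
# both callers at once -- yet its internals can still vary acceptably.

def format_report(values):
    # Internal detail: either implementation below is an acceptable
    # variation, because both honor the same contract
    # (a comma-separated string of the values).
    return ",".join(str(v) for v in values)
    # return ",".join(map(str, values))  # acceptable variant

def billing_summary(amounts):
    return "BILL:" + format_report(amounts)   # caller 1 relies on the contract

def inventory_summary(counts):
    return "STOCK:" + format_report(counts)   # caller 2 relies on the contract

print(billing_summary([1, 2, 3]))   # BILL:1,2,3
print(inventory_summary([7, 8]))    # STOCK:7,8
# A change that breaks the contract (say, switching to ";" separators)
# breaks BOTH callers at once: that is the "poly" in polyconstrained.
```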
Do we even know what makes a chimp a chimp and a human a human? No.

Joseph
May 20, 2009, 05:01 AM PDT
Nakashima, What is the evidence that demonstrates an ape-like organism with diminished masseter muscles can even eat? IOW do all the other muscles in that area also have to "shrink" such that the diminished masseter can then work? Where is the data? And does anyone know how many mutations it would take to go from an ape-like organism to a fully developed human?

Joseph
May 20, 2009, 05:00 AM PDT
Mr Cordova, Yes, I addressed that possibility in the next sentence after the one you quoted. The Sanford quote you provided begins and ends with the protein-coding polyfunctional possibilities, and discusses other possibilities in between. On the subject of polyconstraint, there seems to be an assumption that no other part of the genome is a backup to any other, in any of its functions. We were talking recently here on UD about a gene (an opsin gene? I forget) where there were over one hundred variants in the human population thus far sampled. This gene is not 'polyconstrained'. There are a large number of acceptable variations and substitutions - variations possible because they do not significantly affect the electrosurface of the protein.

Nakashima
May 19, 2009, 05:45 PM PDT
I assumed it meant that one sequence of DNA could (and did) produce more than one protein transcript.
Providing sequences that describe proteins is not the only function of DNA. There is more to DNA than providing a sequence that describes coding. Visit JunkDNA.com for the other functions of DNA.

scordova
May 19, 2009, 05:08 PM PDT
Mr Biped, Good luck on your paper, gambatte!

I originally thought that your phrase "physically inert" meant something like "chemically inert", and that is how I have been answering your first question. But now I see that just as your questions 2 and 3 use very idiosyncratic language to describe the idea of independent trials, your question 1 is also using terms in a very special way. I have never heard anyone say 'existence is inert' before.

Newell and Simon deal with physical symbol systems, and address the idea that data should be "inert" - a computer should be able to store any arbitrary pattern of 1s and 0s without setting itself on fire, for example. (A counterexample would be a computer that represented 1 with a heavy disk, and 0 with a light one. The computer might actually tip over in some configurations of memory! This representation would not be inert.) In this Newell & Simon sense, the code table of DNA is mostly inert, but not completely, as we saw with guanine quadruplexes.

I've read the paper cited by Khan previously. It seems to argue that our code is somewhat mediocre, and therefore quite possibly a frozen accident. The paper also brings up in passing another example of how the genetic code is perhaps not completely inert, if there is a chemical relationship between arginine and its codon. So I would give you a qualified "yes" as an answer to Q1. The symbols are physical, and (for the most part) inert.

Nakashima
May 19, 2009, 10:32 AM PDT
Khan, you and Nakashima should do a stage act. One is demanding that the code is a physical issue, while the other pushes a paper that demonstrates how non-physical the system is.

- - - - -

Nakashima, a red plastic ball does not violate any physical laws. But the existence of a red plastic ball is physically inert. Certainly, the elements that make up the ball are all acting in accordance with the physical laws that govern the matter in this Universe, but there is nothing in those laws that says "dye yourself red and form a sphere". (That required something else :) )

Khan, give it a rest. I have a research paper due in 72 hours, so I am going to leave it to you to figure out what elements of the issue Novo-Koonin-Wolf left out of their paper. (Hint: it is the reason for the questions I asked) (Second hint: re-read their own conclusions)

Upright BiPed
May 19, 2009, 09:28 AM PDT
Mr Borne, What does polyfunctional mean to you? From the quote of Dr Sanford's book, I assumed it meant that one sequence of DNA could (and did) produce more than one protein transcript. If you object to specifying proteins, let's just say transcriptome. The argument remains that if large sections of DNA actually were read in multiple ways all the time, then the transcriptome would be 2x, 3x, up to 12x larger than a reading of a single strand in a single direction in a single reading frame. That would be evidence that large parts of our DNA genome are polyfunctional. I don't know of such evidence. If the genome were polyfunctional to a great degree, we wouldn't see gene duplications and frameshift mutations revealing new proteins. Those proteins would already be being produced by the original gene under multiple reading frames.

Nakashima
May 19, 2009, 09:28 AM PDT
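For readers following the reading-frame arithmetic in this exchange, here is a minimal sketch enumerating the six standard frames of a DNA sequence (three offsets on each strand; counts of twelve, as in the polyfunctionality discussion, assume further modes of reading beyond these):

```python
# Enumerate the six standard reading frames of a DNA sequence:
# three frame offsets on the forward strand, three on the reverse complement.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reading_frames(seq):
    rev_comp = seq.translate(COMPLEMENT)[::-1]
    frames = []
    for strand in (seq, rev_comp):
        for offset in range(3):
            codons = [strand[i:i + 3] for i in range(offset, len(strand) - 2, 3)]
            frames.append(codons)
    return frames

for i, frame in enumerate(reading_frames("ATGGCCTAA"), start=1):
    print(i, frame)
# The same bases yield six different codon sequences; the dispute above is
# over how often the cell actually uses more than one of them.
```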
Nakashima:
This is similar to being excited about 'polyfunctional' DNA. Sure, a stretch of DNA could be read in any of 12 different ways, but that doesn’t mean that DNA is actually read more than one way over most of its length. If it were, the proteome would be 12 times larger than predicted by a simple reading of the genome, which would be very obvious and very big news. It isn’t.
Your conclusion on that is a non sequitur. It does not follow logically, any more than saying an iPhone should be 12 times bigger because it is polyfunctional. Polyfunctionality has, I think, been demonstrated more than once. Polyfunctional implies polyconstrained as well. Think the implications of that over.

Borne
May 19, 2009, 09:04 AM PDT
Mr Cordova, I think you are too willing to ascribe significance to the entire non-coding genome. It's obvious that some of it is important. For example, HAR1 and HAR2 are in non-coding regions. But there are also the remnants of ERVs scattered around; I find it hard to believe these are "functional" now, even if they were functional (from the point of view of the virus) in the past. This is similar to being excited about "polyfunctional" DNA. Sure, a stretch of DNA could be read in any of 12 different ways, but that doesn't mean that DNA is actually read more than one way over most of its length. If it were, the proteome would be 12 times larger than predicted by a simple reading of the genome, which would be very obvious and very big news. It isn't.

Nakashima
May 19, 2009, 08:45 AM PDT
So it would be interesting if this kind of study could be re-done today to see if there is still a “paradox” at all.
Even though the number of genes is smaller, the number of nucleotides we have to consider is no longer 1.5% of the genome but may now be 100% of the genome, since we are now realizing the non-coding sequences are significant. Even mis-spellings in the telomeric repeats "TTAGGG" will degrade the knotting properties. You are correct to cite the issue of spontaneous abortion, but we know that most mutational changes are only slightly deleterious, not fatal enough to cause spontaneous abortion. I do agree it would be a good idea to revisit the U-Paradox, but I think the paradox has gotten worse, not better. But I think we can both agree more research is in order.

scordova
May 19, 2009, 08:13 AM PDT
Mr Cordova, Thanks for the links on the Nachman paper. I googled it and read it. Looking over his methods and assumptions, it looks like some things in the paper could use updating, now that we have complete sequences for humans and chimps. He assumes 70,000 genes for humans, which we know is wrong. Two points which I found interesting: he shows that much more mutation is going on in males than in females. As I have read elsewhere, we males are the disposable guinea pigs, testing solutions which will be stored (if useful) in the female genome. Also, his 40 children really means 40 zygotes, 40 pregnancies begun. We don't know the prevalence of spontaneous abortion in our primitive ancestors. Even now the rate can be as high as 33%. So it would be interesting if this kind of study could be re-done today, to see if there is still a "paradox" at all.

Nakashima
May 19, 2009, 07:45 AM PDT
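For readers who want the arithmetic behind the "paradox" under discussion, here is a back-of-the-envelope sketch (the per-site rate and genome size follow the commonly cited Nachman & Crowell figures; U = 3 deleterious mutations per generation is an assumption, and 2*e^U is the standard mutation-load formula for the required reproductive excess):

```python
import math

# Back-of-the-envelope mutation-load arithmetic behind the "U-Paradox".
mu_per_site = 2.5e-8   # mutations per site per generation (estimate)
diploid_sites = 7e9    # roughly 2 x 3.5e9 bp
new_mutations = mu_per_site * diploid_sites
print(f"new mutations per zygote: {new_mutations:.0f}")  # ~175

U = 3.0  # assumed deleterious mutations per diploid genome per generation
# At mutation-selection balance with multiplicative fitness, mean fitness
# is exp(-U), so each female must produce about 2 * exp(U) zygotes for the
# population to hold steady (the factor of 2 covers both sexes).
required_zygotes = 2 * math.exp(U)
print(f"zygotes needed per female: {required_zygotes:.0f}")  # ~40
```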
Nakashima-san, Thank you for your insight. Yes, I was very impressed with Ohno's insight into the number of genes. As a side note, while discussing the dis-similarities between chimps and humans, I don't want to give the impression that I think chimps are as dis-similar from humans as butterflies are. The creationist Linnaeus was correct to group humans along with primates because of their similarities. Primates are certainly more similar to each other than they are to plants! However, if we wish to make an accounting of how much evolution had to take place, we ought to examine the number of differences and the number of separate events that need to be accounted for. I think the differences are far more than 177,000,000 base pairs. If we have to account for the non-coding regions as well as the 3-D architectures and the epigenetic features, I think the number would be truly enormous. This is not to say that it is improper to group chimps and humans together, but rather to emphasize the difficulty of evolving creatures even in the same order, namely primates.

scordova
May 19, 2009, 07:14 AM PDT
Mr Cordova, Clearly, Ohno-sensei's paper came to teach people something they did not expect. Prior to this paper you might have seen estimates such as those he shoots down at the beginning - millions of genes, based on the assumption that all of DNA coded for a gene. It is interesting to see that the actual number of genes is within an order of magnitude of his estimate. Also, at the same time as he coins the term "junk DNA" in the title, he finds many uses for non-coding DNA in the text! It's a pity people only remember the catchy phrase.

Nakashima
May 19, 2009, 07:00 AM PDT
For the reader's benefit, I describe the U-Paradox here: Other problems for Human Evolution, Nachman's U-Paradox. One solution to the U-Paradox was to invoke "junkDNA", but that "solution" is no longer feasible, since we now know junkDNA isn't junk.

scordova
May 19, 2009, 06:44 AM PDT
Mr Joseph,

We have data showing why the jaw muscles in humans are not as robust as chimps. But what does that mean? Was a chimp born with diminished jaw muscles? And then that chimp was the father/mother of humans?

I would say ape instead of chimp, but otherwise correct. Let's say this ape has taken to feeding on softer fruits, and so does not need such strong jaws (which help grind tougher plant matter). Its muscles are now over-developed relative to its diet. Energy spent during development on strong muscles is available (in a less well-muscled lineage) for other survival activities. It is really no different than cave fishes not developing eyes. What we see as a loss is a survival advantage. Nature doesn't editorialize.

Nakashima
May 19, 2009, 06:43 AM PDT
I think if you go back and read the literature, you will see that scientists expected all of DNA to be significant

I was referencing considerations by Ohno-sensei: http://www.junkdna.com/ohno.html

Ohno coined the term "junk DNA". The considerations were from the U-Paradox. He doesn't mention the U-Paradox explicitly, but you can see elements of it in his calculations in the link.

scordova
May 19, 2009, 06:35 AM PDT
why prefer one choice over another, and what are the values for all species-species comparisons with this method, not just human-chimp?

Part of the problem is we don't yet have the ability to measure some of these differences. For example, to actually analyze the knots we need Nuclear Magnetic Resonance imaging. This is not cheap, and it is not easy compared to DNA sequencing. It's relatively easy to do gene comparisons, but how do we correctly analyze the non-coding regions? The reason gene detection is easy is that we can often look for a DNA sequence and then test the organism to see if a particular protein is expressed. That is relatively "easy". We can test whether an organism expresses insulin, but determining whether it expresses it properly, and how that relates to the non-coding regions, is not so easy. Some forms of diabetes are suspected to originate in the non-coding regions. These are not easy inferences to make. I don't think we have even scratched the surface in terms of all the real complexities at play, in order to even begin to think we can make valid comparisons beyond superficial similarities like genes and the proteins they code for.

scordova
May 19, 2009, 06:30 AM PDT
