
Quality, Quantity and Intelligent Design


In all things there are two different kinds of characteristics: quality and quantity. While quantity is relatively easy to define, quality is difficult to define or specify. Consider an apple. It is easy to grasp the difference between one apple, two apples, three apples… only an integer changes, representing the number of apples. By contrast, it becomes hard to define in detail what an apple is, what its essential properties and intrinsic attributes are ─ in a single word ─ what its quality is, which distinguishes it from anything else. This is all the more true the more complex the thing investigated is, and the richer it is in information and organization.

Often the quality of a thing is related to its shape. Shape, form, position and topology imply qualitative attributes, which cannot in principle be reduced to numbers, let alone to a single number. This is why engineering, which deals with complex things, describes them using numbers, yes, but also words, symbols, drawings, charts, diagrams, pictures, etc. Consider a simple example: a spring. What distinguishes a spring from a straight steel wire and gives the spring the properties and functionalities the latter lacks? Aside from the fact that the material itself must have specific qualities, the helical shape has a major role in making a spring. One could object that this shape can be described by an equation with three variables related to the three dimensions of space. True, but an equation is not pure quantity; it is not a single number.
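For instance, a circular helix of radius r and pitch 2πc has the textbook parametrization below (a generic illustration, not the data of any particular spring):

```latex
x(t) = r\cos t, \qquad y(t) = r\sin t, \qquad z(t) = c\,t, \qquad t \in \mathbb{R}
```

Compact as it is, this is a symbolic, structural description: two constants and three coordinate functions, not a single number.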

Sometimes the quality of a thing is related to its dynamic behaviour. Processes, events, evolutions, transitions, movements and state sequences are qualitative things. They entail time, which is even more qualitative than space. This is why dynamic systems are usually more complex than static ones and, consequently, more difficult to model and describe. Consider the example of a watch. Its dynamics transcends quantity and cannot be reduced to a number.

In some advanced cases the quality of a thing is related to the levels of abstraction it involves. Symbols, codes, words, languages and instructions are qualitative things. They entail meanings, which have a high qualitative rank. This is why information processing systems are among the most complex dynamic systems. Consider the example of a computer. It is made of matter, but this matter is the support of an information processing that transcends matter. A computer transcends quantity, while being able to compute quantity.

Have you noticed how many measures of complexity have been invented in system theory and in the complex systems field? All experts admit that no measure is perfect. A given complexity measure is good for one family of systems but bad for another. Again, this is due precisely to the strict relation between true complexity and quality.
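As a minimal sketch of this disagreement between measures (Python, purely illustrative: order-0 Shannon entropy and zlib-compressed size stand in for the many proposed metrics), consider two strings over the same alphabet:

```python
import math
import random
import zlib

def shannon_entropy(s):
    """Order-0 Shannon entropy of a string, in bits per symbol."""
    return -sum(p * math.log2(p)
                for p in (s.count(c) / len(s) for c in set(s)))

periodic = "ab" * 500                                          # rigidly ordered
random.seed(0)
scrambled = "".join(random.choice("ab") for _ in range(1000))  # patternless

# Symbol-frequency entropy rates the two strings as (nearly) identical...
print(shannon_entropy(periodic), shannon_entropy(scrambled))   # ~1.0 and ~1.0

# ...while compressed size (a crude stand-in for Kolmogorov complexity)
# separates them sharply: the periodic string shrinks, the random one doesn't.
print(len(zlib.compress(periodic.encode())),
      len(zlib.compress(scrambled.encode())))
```

Each measure is informative for one family of strings and misleading for another, which is exactly the point.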

Quality and quantity are incomparable and irreducible. Total quantification, the reduction of a thing to a single number, is a chimera. Total quantification would suit us well, because numbers are what we find easiest to deal with (not by chance, computers are good at crunching numbers but bad at grasping meanings). Unfortunately, perfectly reducing quality to quantity is impossible in principle. As a direct consequence, perfect measures of complexity don't exist. Expressed as a single number, what is the information content, the degree of organization, of a spring, of a watch, of a computer? Intuitively we feel they are in increasing order of complexity, but by how much?

What does all this have to do with intelligent design?

A lot, and it explains why so many discussions, here at UD as elsewhere, arise about the concepts of CSI, FSCI, functional information, complexity and organization. We can consider a design as a thing containing "much" quality (where this "much" cannot, so to speak, be properly quantified). Roughly speaking, in the ID concept of CSI (complex specified information), the "complex" is somewhat related to quantity (it is a probability), while the "specification" is somehow related to quality (it relates to the functional description of the system). It is easy to see here how quality, when we expel it through the front door, comes back through the back door. We would like a purely quantitative measure of complexity; nevertheless CSI seems to contain a qualitative part. In fact we can, yes, count the bits/bytes of the system description, but this number will never perfectly represent the deep meaning of the description, what the system is. We may have two fully different systems with the same description length or even the same CSI.
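A toy illustration of that last sentence (Python; the sentence and the shuffle are arbitrary choices of mine): two strings with the same description length and the same symbol statistics, only one of which carries a readable specification:

```python
import random

message = "the quick brown fox jumps over the lazy dog"

# Shuffle the same characters into nonsense (seed fixed for reproducibility).
chars = list(message)
random.seed(1)
random.shuffle(chars)
scrambled = "".join(chars)

print(len(message) == len(scrambled))        # True: equal description length
print(sorted(message) == sorted(scrambled))  # True: equal symbol counts, hence
                                             # equal frequency-based bit counts
print(scrambled)                             # but no meaning survives
```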

What is the CSI of a mousetrap (Behe’s intuitive example of irreducible complexity)? What is the amount of information contained inside a mousetrap? The shapes of its parts matter, and positional constraints on them are necessary for the mousetrap to work. Besides, the mousetrap implies a mechanism causing a short but effective dynamic event when it catches the mouse. So consider how difficult even this little example of design is to quantify, to measure by means of a single number. Now imagine systems far more complex and organized than a mousetrap!

That being said, I do not mean that the metrics of complexity and the measures of information proposed so far are useless. They can give an idea of what a system is, a hint about how much more complex it is than another, and how difficult it is to design and produce. Often, especially for specific types of systems that are not extremely complex, the measures of complexity are particularly apt to characterize them, to provide an approximate metric.

Of course what is said here doesn’t at all undermine ID. Quite the contrary: in a sense it reinforces ID, because it further dignifies the role of design as an eminent container of quality. What matters to me here is the question of principle, i.e. something we should be aware of conceptually, while in practice we can certainly use quantitative methods and tools if they serve to get results. After all, even defective tools can be useful.

Comments
Optimus, thank you for your very kind words. I wish we had a chance for more of these kinds of exchanges, and I appreciate Chance, gpuccio, kf and others for hanging in there through one long thread, and our (semi) hijacking of this one as well. If the front page format of UD were changed to put the news stuff on a sidebar, as some of us have long suggested, then perhaps the substantive threads would last longer. Oh well, occasionally it works out. Thanks again.
Eric Anderson
May 29, 2013 at 07:52 PM PDT
Thanks Optimus, it's nice to get a little credit for being wrong. :P
Chance Ratcliff
May 29, 2013 at 04:37 PM PDT
@Eric Anderson I really enjoyed your comments at 31 & 40. 31 especially is worth headlining IMO. This OP and the subsequent comments are very important in elucidating the precise meaning of "specified/specification" as it pertains to the metric of CSI. Thinking back to the Mathgrrl guest posts that challenged ID folks to measure CSI, I remember that he had real difficulty giving comprehensible examples of specification. The scenarios he provided always seemed a little off. Anyway, a tip of the hat to EA, Chance, & gpuccio - your exchange shows UD at its finest, promoting constructive interchange.
Optimus
May 29, 2013 at 03:58 PM PDT
Eric @40, thanks much. You are correct that I have confused/conflated specification with description, and your comments and gpuccio's have been quite helpful in helping me sort this out. I was operating with a blind spot. I still need to let all the comments sink in, but I can see now that my block space model exposes two levels of description, and not specification in the CSI sense. The programming language part of the model provides an opportunity for dramatically simplifying the description of an object, but it is not a specification except by a looser and inapplicable definition of the word. Nothing in the program specification necessitates a description of function or purpose. Thanks again for your patience and attention to my example.
Chance Ratcliff
May 29, 2013 at 03:35 PM PDT
Chance, apologies for the delay. I've taken a look this morning at your #29. I believe you are still conflating complexity with specification. Indeed, in your last paragraph you move from one to the other as though they were the same thing within the same paragraph.
"At the very least such a hypothetical system, if the programming language were well defined, could give us an indication of how simple or complex certain objects are, such as mousetraps and watches, and provide a method for comparison between specification complexity. It might even be possible to prove a minimal possible specification for any given object, but I’m not entirely sure."
You start out saying that your hypothetical system could provide an indication of "how simple or complex certain objects are." This is true. Your example of the blocks deals with complexity. But then you go on to talk about this complexity as though it were a "possible specification for any given object." I think this is again a terminology issue. In all of your examples you've really been describing and discussing complexity. But then you talk about describing the complexity as a "specification." Again, a description of an object is not its specification -- at least not in the ID sense of CSI. A specification in terms of CSI has to do with function, meaning, purpose.

Perhaps pursuing your watch example in the block space will be instructive. As you point out, we can describe the watch by "specifying" points within the block space. But what we have really done is identify all the three-dimensional points within the search space -- we have provided a description. This identification/description of what exists in the search space is complexity, just the same as when we have a string of amino acids that make up a functional protein. But the specification of the watch -- the specification in terms of what we mean in the CSI context -- is a mechanical apparatus to keep time.

To flesh out the example, if we took all the parts of the watch and randomly assigned them to various locations within your three-dimensional block space, we would also have a complex arrangement. Then if we threw in a couple of extra gears and springs, we would have more complexity. But the jumbled assemblage of parts would not be a functional watch. It would not be a specification for purposes of CSI. So although English certainly permits us to say that, in the process of describing an object, we are "specifying" the parts of the object, we need to keep in mind that this is not the kind of specification we mean when discussing CSI.

Perhaps thinking of it this way as we pursue the watch example will be helpful: A watch exists as a real physical object in a three-dimensional space. Its specification for CSI purposes is that it is a functional timekeeping device. Its description can be provided in words, in drawings, or in a series of x, y, z coordinates. So we have:

Real object: Watch
Specification: Functional timekeeping device
Description: Series of x, y, z coordinates

Now, here is the part that gets a bit interesting. In creating the description of the watch, we have produced information and now have another real object that exists, namely a set of numerical coordinates. This real object too has a specification and a description.

Real object: Series of x, y, z coordinates
Specification: To create a three-dimensional representation of a watch
Description: We can describe our coordinates either by just listing them, or perhaps running a compression algorithm, or describing how many Shannon bits, etc., the series of numbers has or, ultimately, turning it into a series of binary 1's and 0's.

(We could go through the same process as above if we used a word description of the watch or drawings instead of x, y, z coordinates. Indeed, it would add one more step in the middle.)

What we see is that there is a hierarchy of objects and their descriptions back to the most basic. Ultimately, all descriptions can probably fall back to a series of binary 1's and 0's -- as long as we understand the specification, meaning the context, function and purpose of each description along the way.
If we understand the specification (meaning the function/purpose) at each step, we can then use the description (meaning the identification of complexity within the particular space) to produce the next level step up the chain.
Eric Anderson
May 29, 2013 at 01:30 PM PDT
gpuccio @36,
"I hope that is clear."
Thanks. I'll consider carefully what you wrote and get back to you with any questions or clarifications. Chance
Chance Ratcliff
May 28, 2013 at 11:40 AM PDT
ForJah, as to: "Is he correct that life is a spectrum?" Does he believe in front-loading? For all the information for all the colors of a spectrum is contained within white light, and no new information is added. Thus if he is comparing life to a spectrum, then to stay true to his comparison he must also believe that the first life had all the information for the 'spectrum' of life to follow: http://chipl.edublogs.org/files/2010/11/Prisma-lightSpectrum-goethe-qwtxvd.gif http://chipl.edublogs.org/files/2010/11/Reverse-light-spectrum-yy6cje.jpg Isaac Newton http://chipl.edublogs.org/2010/11/25/isaac-newton/
bornagain77
May 28, 2013 at 09:02 AM PDT
Good comments from all, thank you. I add a note about the mousetrap, which could help in understanding how sometimes great quality is hidden in small details. For example, consider the end of the spring that is *below* the hammer and, precisely because of that, powers the hammer. If this spring end is *above* the hammer, the mousetrap doesn't work. The quantitative difference (number of characters) in the description is zero. By contrast, the qualitative difference is large, because it is the difference between functionality and uselessness.
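A tiny sketch of this point (Python; the one-line descriptions are hypothetical stand-ins for a mousetrap spec sheet): the two descriptions differ by one equal-length word, so every character count agrees they are quantitatively identical, while functionally they are opposites:

```python
# Hypothetical one-line descriptions of the two spring placements.
working = "the spring's free end rests below the hammer"   # trap fires
broken  = "the spring's free end rests above the hammer"   # trap does nothing

print(len(working), len(broken))     # same character count
print(len(working) - len(broken))    # 0: the quantitative difference is zero
```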
niwrad
May 28, 2013 at 07:00 AM PDT
Chance #29: I would like to comment on your model, also considering Eric's very good arguments at #31. As I have already argued, there is only one way to make the discourse about functional specification simple and clear. The following points must always be remembered:

a) The functional specification is not of the object, but of the function described and defined for that object.

b) The (digital) functional complexity (for each functional specification) is obtained from the ratio of the numerosity of the functional space to the numerosity of the search space (expressed in bits, it is the negative base-2 logarithm of that ratio).

c) The functional complexity is an expression of the minimal complexity necessary to provide the defined function, and of the probability of getting that function in a random system.

d) The functional complexity can be categorized as present or absent by some appropriate (for the system) cutoff of functional performance.

Your model is obviously digital, both in the "generating" system (the programming language, the printing machine) and in the output (the blocks). So, it very much resembles our general model of protein synthesis. The basic search space has already been indicated by you (for specific conditions) as the number of possible combinations.

You say: "For even other objects, such as those resembling random selections, the specification would be intractably large." No. For random sequences, the complexity is maximum (the same as the search space), but there is no functional specification, so there is no functional complexity.

Now, let's imagine we have a program sequence (or the corresponding block) which is truly random. Let's say the search space is 300 bits. Now, if we define a sequence as "any possible sequence of that length", that could be considered a very broad kind of functional specification: for example, we could have a security system where any sequence of that length is a good key. In that case, our random sequence does have the defined function. The complexity of the sequence is 300 bits, but the functional complexity of the sequence for that very generic function is 0.

Instead, if our 300-bit sequence has to produce only blocks which perform some specific function at some defined level, things change. Let's say that only 2^10 blocks of all possible blocks of that length can perform the defined function at the defined level. Then the functional complexity tied to that function in that system is 290 bits (corresponding to a probability of 1 in 2^290 of getting that function in a random system).

So, the total complexity of both the random sequence and the functional sequence is the same, 300 bits (that is the search space). But the functional complexity (for the defined function) of the random sequence is 0 (the function is not present), while for the second sequence it is 290 bits. If we had defined a cutoff of 150 functional bits for that system (my "biological" threshold of functional complexity), we could say that the second sequence exhibits dFSCI. I hope that is clear.
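A minimal sketch of gpuccio's arithmetic (Python; the 300-bit search space, the 2^10 functional blocks and the 150-bit cutoff are the numbers from his comment, while the function name is mine):

```python
import math

def functional_complexity_bits(search_space_bits, functional_count):
    """Functional complexity as the negative log2 of the ratio between
    the functional space and the search space."""
    return search_space_bits - math.log2(functional_count)

SEARCH_BITS = 300   # search space of 2^300 possible sequences
CUTOFF_BITS = 150   # gpuccio's "biological" threshold for dFSCI

# "Any sequence of that length works" -> all 2^300 sequences are functional,
# so the functional complexity collapses to 0 bits.
print(functional_complexity_bits(SEARCH_BITS, 2**300))   # 0.0

# Only 2^10 sequences perform the defined function at the defined level
# -> 290 bits of functional complexity, well above the 150-bit cutoff.
fc = functional_complexity_bits(SEARCH_BITS, 2**10)
print(fc, fc > CUTOFF_BITS)                              # 290.0 True
```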
gpuccio
May 28, 2013 at 05:18 AM PDT
gpuccio, what was with the insult? I think I believe in common descent also... it's the 'via an undirected process' part I deny. Of course that isn't really answering my question. Thank you born for helping me, but it still doesn't make it any clearer what he meant by spectrum, or was I wrong when I asked him for a quantifiable estimate of the morphological changes? And was Berlinski... since he is the one who says it's necessary.
ForJah
May 27, 2013 at 08:24 PM PDT
Corrected link: my #29
Chance Ratcliff
May 27, 2013 at 07:09 PM PDT
Eric, if you have any thoughts about my #29 with regard to specification vs description I'd appreciate hearing them. Perhaps I'm introducing a similar ambiguity there. Thanks again. I take your point about the difference between the string's simplest description and the contents of the message. This all comes from my trying to wrap my head around Shannon's notion of information. Since I don't know anybody outside of this board who likes to explore such things, you all end up being my victims. ;)
Chance Ratcliff
May 27, 2013 at 07:07 PM PDT
Eric @31, thanks much. I think I understand your distinction, that the content of a message is not the same as its description. Point taken, and I'll consider it carefully. I appreciate you taking the time.
Chance Ratcliff
May 27, 2013 at 06:57 PM PDT
Chance @23: Thanks for your comments. I think we had a good discussion and I appreciate your thoughts. I wonder, however, if there may be a definitional issue at work. You wrote:
"One of the points I tried to get across was that maximum uncertainty in a string, or total randomness, precludes a simple description (specification)."
If I may, I suspect your interchangeable use of those last two words gets to the heart of the matter. This may sound a bit strange at first, but a description of a string is not its specification. (At least not in the sense we are talking about for "complex specified information.") Let me see if I can describe what I mean. Take the string:

010101110110100101101100011011000010000001111001011011110111010100100000011011010110000101110010011100100111100100100000011011010110010100111111

Now we can describe this string by simply repeating it. We can also run some compression algorithms to see if there is a simpler description. We can even turn to our good friend Shannon, who will calculate the "entropy" and tell us that we have 144 bits. All of that relates to the description of the string. And all of it relates to the string's complexity, not its specification.

However, if we are told or we discover that the string was sent by a young man to his geek girlfriend and we start to analyze the string, we soon find that it is a binary representation of the ASCII characters "Will you marry me?" That message, that meaning, that function -- that is the specification.

Now suppose the young man had instead sent the following string:

01010111011010010110110001101100001000000111010100100000011011010110000101110010011100100111100100100000011011010110010100111111

Again, we can describe the string, run algorithms, recur to our friend Shannon who will tell us that we are dealing with only 128 bits in this case, etc. In other words the complexity is less. But if we again are able to discover the underlying message, we find that it says: "Will u marry me?" The message, its meaning, its function, its specification is virtually identical. And we find this specification due to our knowledge, our experience, the clues, by observing the function of the string, and so on, not because we have run a calculation.

One string might have more entropy or be more complex than the other. One might even be more "random" than the other. We could compare the Shannon calculations and say that the second string requires only 89% as many bits as the first string to describe. But we would never say that the second string has 89% as much of a specification. The string either has a specification or it doesn't. We either recognize it or we don't. But the "specification" that you are trying to calculate is really a description, meaning it is the complexity side of CSI, not the specification side.
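A quick sketch of the decoding step (Python; the bit string is Eric's first example, regrouped into 8-bit ASCII characters): the 144 bits are the description side, and nothing in the arithmetic reveals the specification, which only appears when we read the result as English:

```python
bits = ("01010111" "01101001" "01101100" "01101100" "00100000"   # "Will "
        "01111001" "01101111" "01110101" "00100000" "01101101"   # "you m"
        "01100001" "01110010" "01110010" "01111001" "00100000"   # "arry "
        "01101101" "01100101" "00111111")                        # "me?"

print(len(bits))    # 144: the Shannon bit count, i.e. the complexity side

# Decode each 8-bit group as an ASCII character.
message = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(message)      # "Will you marry me?" -- the specification side
```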
Eric Anderson
May 27, 2013 at 06:45 PM PDT
Great post!
Optimus
May 27, 2013 at 05:47 PM PDT
From the OP,
"What is the CSI of a mousetrap (Behe’s intuitive example of irreducible complexity)? What is the amount of information contained inside a mousetrap? The shapes of its parts matter and positional constraints about them are necessary so that the mousetrap work. Besides, the mousetrap implies a mechanism causing a short but effective dynamic event, when it catches the mouse. So consider how just this little example of design is difficult to quantify, how it is difficult to measure it by means of a single number. Go figure systems much more complex and organized than a mousetrap!"
I want to reiterate some thoughts I've expressed on other threads in the past. I'm not sure they address the above issues directly, but at least to me, it's interesting to consider. Imagine three theoretical objects:

1) A three-dimensional block comprised of 1024 smaller blocks per dimension, for a total of 1024^3 little blocks. Each block can be composed of a specific material among many; for example's sake, let's say we have 64 materials, including "none". This gives us a discrete configuration space of 64^(1024^3), or 6*2^30 bits (if I simplified correctly). This search space is vast. I'll refer to this hence as "block space." The resolution of this object is variable, so we're not strictly limited to 64 materials or 1024 dimensions.

2) A printing machine capable of outputting any object in the block space defined above. This is a glorified 3D printer.

3) A programming language for specifying objects to be output by the block space printer.

With all of the above, we can write a program that will output a vast variety of objects, from simple to complex. One thing to note right away is that the discrete nature of the hypothetical cube means we can specify objects in lexicographical fashion. This suggests there is a number or set of numbers for every single configuration in the block space. In other words, there exists a discrete number for every possible output. Note that I'm not using this as an argument in favor of the quantification of specification; but in the context of block space, such a number would seem to exist -- but only in the context of a fixed block space.

Now it's easy to imagine that for some objects, such as simple shapes, our program would be relatively simple compared to the complexity of the output in block space. For other objects, such as watches, the program would be much more complex. For even other objects, such as those resembling random selections, the specification would be intractably large.

At the very least such a hypothetical system, if the programming language were well defined, could give us an indication of how simple or complex certain objects are, such as mousetraps and watches, and provide a method for comparison between specification complexity. It might even be possible to prove a minimal possible specification for any given object, but I'm not entirely sure.

/$0.02
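For what it's worth, the size figure in point 1 checks out (Python, just re-deriving the number from the stated parameters):

```python
import math

BLOCKS_PER_SIDE = 1024   # little blocks per dimension
MATERIALS = 64           # possible materials per block, including "none"

cells = BLOCKS_PER_SIDE ** 3          # 1024^3 = 2^30 little blocks
bits = cells * math.log2(MATERIALS)   # log2(64^(1024^3))

print(bits == 6 * 2**30)              # True: exactly 6*2^30 bits
print(bits / 8 / 2**30)               # 0.75 GiB just to list one configuration
```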
Chance Ratcliff
May 27, 2013 at 05:16 PM PDT
Yet RNA transcripts are proving even more uncooperative towards similarity alignment than genes currently are:
Phylogeny: Rewriting evolution - Tiny molecules called microRNAs are tearing apart traditional ideas about the animal family tree. - Elie Dolgin - 27 June 2012 Excerpt: “I've looked at thousands of microRNA genes, and I can't find a single example that would support the traditional tree,” he says. "...they give a totally different tree from what everyone else wants.” (Phylogeny: Rewriting evolution, Nature 486,460–462, 28 June 2012) (molecular palaeobiologist - Kevin Peterson) Mark Springer, (a molecular phylogeneticist working in DNA states),,, “There have to be other explanations,” he says. Peterson and his team are now going back to mammalian genomes to investigate why DNA and microRNAs give such different evolutionary trajectories. “What we know at this stage is that we do have a very serious incongruence,” says Davide Pisani, a phylogeneticist at the National University of Ireland in Maynooth, who is collaborating on the project. “It looks like either the mammal microRNAs evolved in a totally different way or the traditional topology is wrong. http://www.nature.com/news/phylogeny-rewriting-evolution-1.10885
As a footnote, we haven't even mapped the entire genome yet:
Ten years on, still much to be learned from human genome map - April 12, 2013 Excerpt:,,,"What's more, about 10 percent of the human genome still hasn't been sequenced and can't be sequenced by existing technology, Green added. "There are parts of the genome we didn't know existed back when the genome was completed," he said.,,, http://medicalxpress.com/news/2013-04-ten-years-human-genome.html
Not a good day to be a Darwinist, ForJah, as if there ever were :) Verse and music:
John 1:1 In the beginning was the Word, and the Word was with God, and the Word was God. Lecrae Live at Passion 2013 http://www.youtube.com/watch?v=gu59YLVTfV0
bornagain77
May 27, 2013 at 04:41 PM PDT
Moreover, as if that was not devastating enough to the 99% similarity myth, orphan genes are now being found in each new genome that is sequenced:
Genes from nowhere: Orphans with a surprising story - 16 January 2013 - Helen Pilcher Excerpt: When biologists began sequencing genomes they discovered up to a third of genes in each species seemed to have no parents or family of any kind. Nevertheless, some of these "orphan genes" are high achievers (are just as essential as 'old' genes),,, But where do they come from? With no obvious ancestry, it was as if these genes appeared out of nowhere, but that couldn't be true. Everyone assumed that as we learned more, we would discover what had happened to their families. But we haven't-quite the opposite, in fact.,,, The upshot is that the chances of random mutations turning a bit of junk DNA into a new gene seem infinitesmally small. As the French biologist Francois Jacob wrote 35 years ago, "the probability that a functional protein would appear de novo by random association of amino acids is practically zero".,,, Orphan genes have since been found in every genome sequenced to date, from mosquito to man, roundworm to rat, and their numbers are still growing. http://ccsb.dfci.harvard.edu/web/export/sites/default/ccsb/publications/papers/2013/All_alone_-_Helen_Pilcher_New_Scientist_Jan_2013.pdf Orphan Genes (And the peer reviewed 'non-answer' from Darwinists) - video http://www.youtube.com/watch?v=1Zz6vio_LhY Widespread ORFan Genes Challenge Common Descent – Paul Nelson – video with references http://www.vimeo.com/17135166 Estimating the size of the bacterial pan-genome - Pascal Lapierre and J. Peter Gogarten - 2008 Excerpt: We have found greater than 139 000 rare (ORFan) gene families scattered throughout the bacterial genomes included in this study. The finding that the fitted exponential function approaches a plateau indicates an open pan-genome (i.e. the bacterial protein universe is of infinite size); a finding supported through extrapolation using a Kezdy-Swinbourne plot (Figure S3). This does not exclude the possibility that, with many more sampled genomes, the number of novel genes per additional genome might ultimately decline; however, our analyses and those presented in Ref. [11] do not provide any indication for such a decline and confirm earlier observations that many new protein families with few members remain to be discovered. http://www.paulyu.org/wp-content/uploads/2010/02/Estimating-the-size-of-the-bacterial-pan-genome.pdf The Dictionary of Life | Origins with Dr. Paul A. Nelson - video http://www.youtube.com/watch?feature=player_detailpage&v=zJaetK9gvCo#t=760s The essential genome of a bacterium - 2011 Figure (C): Venn diagram of overlap between Caulobacter and E. coli ORFs (outer circles) as well as their subsets of essential ORFs (inner circles). Less than 38% of essential Caulobacter ORFs are conserved and essential in E. coli. Only essential Caulobacter ORFs present in the STING database were considered, leading to a small disparity in the total number of essential Caulobacter ORFs. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3202797/pdf/msb201158.pdf Proteins and Genes, Singletons and Species - Branko Kozuli? PhD. Biochemistry Excerpt: Horizontal gene transfer is common in prokaryotes but rare in eukaryotes [89-94], so HGT cannot account for (ORFan) singletons in eukaryotic genomes, including the human genome and the genomes of other mammals.,,, The trend towards higher numbers of (ORFan) singletons per genome seems to coincide with a higher proportion of the eukaryotic genomes sequenced. 
In other words, eukaryotes generally contain a larger number of singletons than eubacteria and archaea.,,, That hypothesis - that evolution strives to preserve a protein domain once it stumbles upon it contradicts the power law distribution of domains. The distribution graphs clearly show that unique domains are the most abundant of all domain groups [21, 66, 67, 70, 72, 79, 82, 86, 94, 95], contrary to their expected rarity.,,, Evolutionary biologists of earlier generations have not anticipated [164, 165] the challenge that (ORFan) singletons pose to contemporary biologists. By discovering millions of unique genes biologists have run into brick walls similar to those hit by physicists with the discovery of quantum phenomena. The predominant viewpoint in biology has become untenable: we are witnessing a scientific revolution of unprecedented proportions. http://vixra.org/pdf/1105.0025v1.pdf
No ForJah, I'm not nearly as inclined to accept the genetic evidence for common ancestry as I once was. And considering that the recent ENCODE study is calling for a redefinition of the concept of 'gene', I see no hope of ever changing my mind in the future:
Landscape of transcription in human cells – Sept. 6, 2012 Excerpt: Here we report evidence that three-quarters of the human genome is capable of being transcribed, as well as observations about the range and levels of expression, localization, processing fates, regulatory regions and modifications of almost all currently annotated and thousands of previously unannotated RNAs. These observations, taken together, prompt a redefinition of the concept of a gene. http://www.nature.com/nature/journal/v489/n7414/full/nature11233.html Demise of the Gene - September 19, 2012 Excerpt: Although the gene has conventionally been viewed as the fundamental unit of genomic organization, on the basis of ENCODE data it is now compellingly argued that this unit is not the gene but rather the (RNA) transcript (Washietl et al. 2007; Djebali et al. 2012a). On this view, genes represent a higher-order framework around which individual transcripts coalesce, creating a poly-functional entity that assumes different forms under different cellular states, guided by differential utilization of regulatory DNA. (What does our genome encode? John A. Stamatoyannopoulos Genome Res. 2012 22: 1602-1611.) http://www.evolutionnews.org/2012/09/demise_of_the_g064371.html
bornagain77
May 27, 2013 at 04:41 PM PDT
and even as low as 49%
Do Human and Chimpanzee DNA Indicate an Evolutionary Relationship? Excerpt: the authors found that only 48.6% of the whole human genome matched chimpanzee nucleotide sequences. [Only 4.8% of the human Y chromosome could be matched to chimpanzee sequences.] http://www.apologeticspress.org/articles/2070
moreover the gene count is now known to be basically similar across species, even at the most 'primitive' level:
More Questions for Evolutionists - August 2010 Excerpt: First of all, we have 65% of the gene number of humans in little old sponges—an organism that appears as far back as 635 million years ago, about as old as you can get [except for bacteria]. This kind of demolishes Darwin’s argument about what he called the pre-Silurian (pre-Cambrian). 635 mya predates both the Cambrian AND the Edicarian, which comes before the Cambrian (i.e., the pre-Cambrian) IOW, out of nowhere, 18,000 animal genes. Darwinian gradualism is dealt a death blow here (unless you’re a ‘true believer”!). Here’s a quote: “It means there was an elaborate machinery in place that already had some function. What I want to know now is what were all these genes doing prior to the advent of sponge.” (Charles Marshall, director of the University of California Museum of Paleontology in Berkeley.) I want to know, too! https://uncommondescent.com/intelligent-design/more-questions-for-evolutionists/
and even Zebrafish
Family Ties: Completion of Zebrafish Reference Genome Yields Strong Comparisons With Human Genome - Apr. 17, 2013 Excerpt: Researchers demonstrate today that 70 per cent of protein-coding human genes are related to genes found in the zebrafish,,, http://www.sciencedaily.com/releases/2013/04/130417131725.htm
and Kangaroos even had a surprise
First Decoded Marsupial Genome Reveals "Junk DNA" Surprise - 2007 Excerpt: In particular, the study highlights the genetic differences between marsupials such as opossums and kangaroos and placental mammals like humans, mice, and dogs. ,,, The researchers were surprised to find that placental and marsupial mammals have largely the same set of genes for making proteins. Instead, much of the difference lies in the controls that turn genes on and off. http://news.nationalgeographic.com/news/2007/05/070510-opossum-dna.html
Yet what accounts for such drastic differences in the species if the gene count is basically the same across species? Alternative splicing does. But alternative splicing is found to be species specific:
Evolution by Splicing – Comparing gene transcripts from different species reveals surprising splicing diversity. – Ruth Williams – December 20, 2012 Excerpt: A major question in vertebrate evolutionary biology is “how do physical and behavioral differences arise if we have a very similar set of genes to that of the mouse, chicken, or frog?”,,, A commonly discussed mechanism was variable levels of gene expression, but both Blencowe and Chris Burge,,, found that gene expression is relatively conserved among species. On the other hand, the papers show that most alternative splicing events differ widely between even closely related species. “The alternative splicing patterns are very different even between humans and chimpanzees,” said Blencowe.,,, http://www.the-scientist.com/?articles.view%2FarticleNo%2F33782%2Ftitle%2FEvolution-by-Splicing%2F The mouse is not enough - February 2011 Excerpt: Richard Behringer, who studies mammalian embryogenesis at the MD Anderson Cancer Center in Texas said, “There is no ‘correct’ system. Each species is unique and uses its own tailored mechanisms to achieve development. By only studying one species (eg, the mouse), naive scientists believe that it represents all mammals.” http://www.the-scientist.com/news/display/57986/
This finding is far more devastating than most people realize. The reason why finding very different alternative splicing codes between closely related species is devastating to (bottom up) neo-Darwinian evolution is partly seen by understanding 'Shannon Channel Capacity':
“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible” Donald E. Johnson – Bioinformatics: The Information in Life Shannon Information - Channel Capacity - Perry Marshall - video http://www.metacafe.com/watch/5457552/
But perhaps the best way to understand why this is so devastating to (bottom up) neo-Darwinian evolution is best understood by taking a look at 'ontogenetic depth'
Darwin or Design? - Paul Nelson at Saddleback Church - Nov. 2012 - ontogenetic depth (excellent update) - video Text from one of the Saddleback slides: 1. Animal body plans are built in each generation by a stepwise process, from the fertilized egg to the many cells of the adult. The earliest stages in this process determine what follows. 2. Thus, to change -- that is, to evolve -- any body plan, mutations expressed early in development must occur, be viable, and be stably transmitted to offspring. 3. But such early-acting mutations of global effect are those least likely to be tolerated by the embryo. Losses of structures are the only exception to this otherwise universal generalization about animal development and evolution. Many species will tolerate phenotypic losses if their local (environmental) circumstances are favorable. Hence island or cave fauna often lose (for instance) wings or eyes. http://www.saddleback.com/mc/m/7ece8/
bornagain77
May 27, 2013 at 04:40 PM PDT
Well ForJah, as you have probably guessed, unlike gpuccio, Dr. Behe, Dr. Torley, and others who support ID, I'm not nearly as impressed for the evidence for common descent as these others are.,,, Another area of evidence for common descent that has fallen completely apart is the genetic similarity evidence.,, First, it is found that the genetic similarity one derives is highly subjective to 'various methodological factors'
Guy Walks Into a Bar and Thinks He's a Chimpanzee: The Unbearable Lightness of Chimp-Human Genome Similarity - 2009 Excerpt: One can seriously call into question the statement that human and chimp genomes are 99% identical. For one thing, it has been noted in the literature that the exact degree of identity between the two genomes is as yet unknown (Cohen, J., 2007. Relative differences: The myth of 1% Science 316: 1836.). ,,, In short, the figure of identity that one wants to use is dependent on various methodological factors. http://www.evolutionnews.org/2009/05/guy_walks_into_a_bar_and_think.html
Even ignoring the subjective bias of 'various methodological factors' that Darwinists introduce into these similarity studies, the first inkling, at least for me, that something was terribly amiss with the oft quoted 99% similarity figure was this,,,
Humans and chimps have 95 percent DNA compatibility, not 98.5 percent, research shows - 2002 Excerpt: Genetic studies for decades have estimated that humans and chimpanzees possess genomes that are about 98.5 percent similar. In other words, of the three billion base pairs along the DNA helix, nearly 99 of every 100 would be exactly identical. However, new work by one of the co-developers of the method used to analyze genetic similarities between species says the figure should be revised downward to 95 percent. http://www.caltech.edu/content/humans-and-chimps-have-95-percent-dna-compatibility-not-985-percent-research-shows
and then this,,,
Chimps are not like humans - May 2004 Excerpt: the International Chimpanzee Chromosome 22 Consortium reports that 83% of chimpanzee chromosome 22 proteins are different from their human counterparts,,, The results reported this week showed that "83% of the genes have changed between the human and the chimpanzee—only 17% are identical—so that means that the impression that comes from the 1.2% [sequence] difference is [misleading]. In the case of protein structures, it has a big effect," Sakaki said. http://cmbi.bjmu.edu.cn/news/0405/119.htm
this had caught my eye in 2008,,,
Chimpanzee? 10-10-2008 - Dr Richard Buggs - research geneticist at the University of Florida ...Therefore the total similarity of the genomes could be below 70%. http://www.idnet.com.au/files/pdf/Chimpanzee.pdf
And then this caught my eye in 2011:
Study Reports a Whopping "23% of Our Genome" Contradicts Standard Human-Ape Evolutionary Phylogeny - Casey Luskin - June 2011 Excerpt: For about 23% of our genome, we share no immediate genetic ancestry with our closest living relative, the chimpanzee. This encompasses genes and exons to the same extent as intergenic regions. We conclude that about 1/3 of our genes started to evolve as human-specific lineages before the differentiation of human, chimps, and gorillas took place. (of note; 1/3 of our genes is equal to about 7000 genes that we do not share with chimpanzees) http://www.evolutionnews.org/2011/06/study_reports_a_whopping_23_of047041.html
In late 2011 Jeffrey P. Tomkins, using an extremely conservative approach, reached the figure of 87% similarity:
Genome-Wide DNA Alignment Similarity (Identity) for 40,000 Chimpanzee DNA Sequences Queried against the Human Genome is 86–89% - Jeffrey P. Tomkins - December 28, 2011 Excerpt: A common claim that is propagated through obfuscated research publications and popular evolutionary science authors is that the DNA of chimpanzees or chimps (Pan troglodytes) and humans (Homo sapiens) is about 98–99% similar. A major problem with nearly all past human-chimp comparative DNA studies is that data often goes through several levels of pre-screening, filtering and selection before being aligned, summarized, and discussed. Non-alignable regions are typically omitted and gaps in alignments are often discarded or obfuscated. In an upcoming paper, Tomkins and Bergman (2012) discuss most of the key human-chimp DNA similarity research papers on a case-by-case basis and show that the inclusion of discarded data (when provided) actually suggests a DNA similarity for humans and chimps not greater than 80–87% and quite possibly even less. http://www.answersingenesis.org/articles/arj/v4/n1/blastin Genomic monkey business - similarity re-evaluated using omitted data - by Jeffrey Tomkins and Jerry Bergman Excerpt: A review of the common claim that the human and chimpanzee (chimp) genomes are nearly identical was found to be highly questionable solely by an analysis of the methodology and data outlined in an assortment of key research publications.,,, Based on the analysis of data provided in various publications, including the often cited 2005 chimpanzee genome report, it is safe to conclude that human–chimp genome similarity is not more than ~87% identical, and possibly not higher than 81%. These revised estimates are based on relevant data omitted from the final similarity estimates typically presented.,,, Finally, a very recent large-scale human–chimp genome comparison research report spectacularly confirms the data presented in this report. The human–chimp common ancestor paradigm is clearly based more on myth and propaganda than fact. http://creation.com/human-chimp-dna-similarity-re-evaluated
Then earlier this year when better data had come in, and still using an extremely conservative approach, Tomkins reached the figure of 70% similarity:
Comprehensive Analysis of Chimpanzee and Human Chromosomes Reveals Average DNA Similarity of 70% - by Jeffrey P. Tomkins - February 20, 2013 Excerpt: For the chimp autosomes, the amount of optimally aligned DNA sequence provided similarities between 66 and 76%, depending on the chromosome. In general, the smaller and more gene-dense the chromosomes, the higher the DNA similarity—although there were several notable exceptions defying this trend. Only 69% of the chimpanzee X chromosome was similar to human and only 43% of the Y chromosome. Genome-wide, only 70% of the chimpanzee DNA was similar to human under the most optimal sequence-slice conditions. While, chimpanzees and humans share many localized protein-coding regions of high similarity, the overall extreme discontinuity between the two genomes defies evolutionary timescales and dogmatic presuppositions about a common ancestor. http://www.answersingenesis.org/articles/arj/v6/n1/human-chimp-chromosome
Though outliers, I've even found studies for percent similarity figures as low as 62%,,
A simple statistical test for the alleged “99% genetic identity” between humans and chimps - September 2010 Excerpt: The results obtained are statistically valid. The same test was previously run on a sampling of 1,000 random 30-base patterns and the percentages obtained were almost identical with those obtained in the final test, with 10,000 random 30-base patterns. When human and chimp genomes are compared, the X chromosome is the one showing the highest degree of 30BPM similarity (72.37%), while the Y chromosome shows the lowest degree of 30BPM similarity (30.29%). On average the overall 30BPM similarity, when all chromosomes are taken into consideration, is approximately 62%. https://uncommondescent.com/intelligent-design/a-simple-statistical-test-for-the-alleged-99-genetic-identity-between-humans-and-chimps/
bornagain77
May 27, 2013 at 04:39 PM PDT
Great OP. I can't really find anything to disagree with. There is definitely a qualitative difference between qualitative and quantitative. ;)
Chance Ratcliff
May 27, 2013 at 03:39 PM PDT
Eric @19,
"Reminiscent of a discussion we had just a few days ago about whether specification can be precisely quantified. Complexity, sure. Specification, not so much."
To be clear, in that other discussion I was not trying to develop a precise quantification of specificity, but rather the indication of a potential specification based upon the level of uncertainty in a string. Maximum uncertainty in a string would preclude a specification that is more concise than the string itself; whereas low uncertainty strings would be more amenable to concise specification. I'm not really trying to reboot that discussion on this thread, I just want to illustrate the difference between quantifying specificity in the way you seem to be suggesting, and indicating the potential for a simple specification for a given string.

One of the points I tried to get across was that maximum uncertainty in a string, or total randomness, precludes a simple description (specification). For example, the binary string:

11110011110000100100011000011010111001111111101110

is 50 characters and totally random, so the Kolmogorov complexity is no simpler than the string itself:

output "11110011110000100100011000011010111001111111101110"

The above constitutes a specification for the random string, which is no less complex than the string itself. On the other hand, a low uncertainty string is more amenable to specification. For example, the string:

00000000000000000000000000000000000000000000000000

is also 50 characters, but can be specified this way:

"output '0' 50 times"

which is only 19 characters, and more than 2:1 compression. So we find that low uncertainty strings (by probabilistic analysis) can be effectively compressed, whereas random (maximally uncertain) strings cannot. Random strings are not amenable to simple specifications; but as the uncertainty decreases, the possibility of a simple specification goes up. I don't know how to prove this, but I think it's intuitively clear. For example, the string:

10000000000000000000000000000000000000000000000000

can be described as:

"output '1'; output '0' 49 times"

which is 31 characters, shy of a 2:1 compression ratio.

So I was trying to establish that as uncertainty drops in a string, on probabilistic analysis, the string becomes more amenable to a simple specification (which is similar to compression). As uncertainty gets low, and as string length increases, the ability to specify the string with far fewer characters than the string itself becomes more certain. And I would go on to say that as uncertainty decreases, we become less certain that the string could have been randomly generated, and more certain that the string was generated either algorithmically or directly by intelligent agency; in other words, we become more certain that it could have been specified in some way.

It should also be noted that complexity can be measured for both a specification and for the output string it describes. As the ratio of the minimal description of a string to the string itself gets low, we can be both less certain that the string was randomly generated, and more certain that it may have been specified. There is apparently an inverse proportional relationship here.

Do you still think this confuses or conflates specification and complexity? Thanks in advance.
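A rough empirical check of that intuition (Python; zlib's DEFLATE is only a stand-in for Kolmogorov complexity, and on 50-character inputs its fixed header overhead blurs the ratios, so the absolute byte counts matter less than their ordering):

```python
import zlib

random_str  = "11110011110000100100011000011010111001111111101110"  # Chance's
uniform_str = "0" * 50                                               # examples
almost_str  = "1" + "0" * 49

for label, s in [("random", random_str),
                 ("uniform", uniform_str),
                 ("almost uniform", almost_str)]:
    compressed = zlib.compress(s.encode(), 9)
    # Low-uncertainty strings compress well; the random one barely shrinks.
    print(f"{label:>14}: {len(s)} chars -> {len(compressed)} bytes")
```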
Chance Ratcliff
May 27, 2013 at 03:10 PM PDT
I was reminded of the following quote of Max Planck just now, while watching a YouTube video-clip: 'There is no matter as such. All matter originates and exists only by virtue of a force which brings the particle of an atom to vibration and holds this most minute solar system of the atom together. We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter.'

And the question occurred to me: What would our dirt-worshipping naturalist chums make of Planck's concluding sentence, beginning, without equivocation, with the words, 'We must assume...'? As for Einstein, so for Planck: it was clearly self-evident that the whole of creation into which atoms are configured must be the work of a super-intelligent mind: a creation intelligently designed by a mind of an incomprehensibly subtle and infinite capacity, and a power to match it. Not a self-designed and self-created universe! Such a lack of imagination... And to think, now they're both icons of the history of science!

Unfortunately, instead of the binding, hegemonic force of the quantum-physics paradigm stymieing our chums, they see it as a 'Get Out Of Jail Free' card. Any untestable postulation they might come up with, no matter how gratuitously fanciful and/or infantile, in order to wriggle out of a sane, minimally deistic world-view, they will conflate with the proven and testable paradoxes of quantum mechanics, as 'mysterious', COUNTER-INTUITIVE, etc.

If I were a physicist it would drive me nuts to hear other physicists, or scientists in any other field, speaking dismissively of the mysterious nature of quantum mechanics, as if it's something that just has to be tolerated but not taken very seriously. You know: 'Life's like that. Throws a bit of a wobbly every now and again. Bit of a standing joke, really... not something for serious scientists to bother about, epistemologically - not like abiogenesis, for instance,' would seem to be the subtext, however subliminally. It's never been news that the world is insane, has it?
Axel
May 27, 2013 at 03:04 PM PDT
ForJah: You and/or your friend are really champions of senselessness. And evasion. For your information, I, like many others here, believe in common descent. For your information, ask your friend to comment on what I said about protein sequence space. Bye.
gpuccio
May 27, 2013 at 02:32 PM PDT
ForJah as to:
"without even accounting for fossil evidence"
It may surprise you to learn that the supposed fossil evidence for human evolution does not exist save in the fertile imagination of those who have been misled by, and believe in, the Darwinian dogma:
“We have all seen the canonical parade of apes, each one becoming more human. We know that, as a depiction of evolution, this line-up is tosh (i.e. nonsense). Yet we cling to it. Ideas of what human evolution ought to have been like still colour our debates.” Henry Gee, editor of Nature (478, 6 October 2011, page 34, doi:10.1038/478034a), Paleoanthropologist Exposes Shoddiness of “Early Man” Research - Feb. 6, 2013 Excerpt: The unilineal depiction of human evolution popularized by the familiar iconography of an evolutionary ‘march to modern man’ has been proven wrong for more than 60 years. However, the cartoon continues to provide a popular straw man for scientists, writers and editors alike. ,,, archaic species concepts and an inadequate fossil record continue to obscure the origins of our genus. http://crev.info/2013/02/paleoanthropologist-exposes-shoddiness/ When we consider the remote past, before the origin of the actual species Homo sapiens, we are faced with a fragmentary and disconnected fossil record. Despite the excited and optimistic claims that have been made by some paleontologists, no fossil hominid species can be established as our direct ancestor. Richard Lewontin - "Human Diversity", pg.163 (Scientific American Library, 1995) - Harvard Zoologist Evolution of the Genus Homo - Annual Review of Earth and Planetary Sciences - Tattersall, Schwartz, May 2009 Excerpt: "Definition of the genus Homo is almost as fraught as the definition of Homo sapiens. We look at the evidence for “early Homo,” finding little morphological basis for extending our genus to any of the 2.5–1.6-myr-old fossil forms assigned to “early Homo” or Homo habilis/rudolfensis." http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.earth.031208.100202 Man is indeed as unique, as different from all other animals, as had been traditionally claimed by theologians and philosophers. Evolutionist Ernst Mayr (What Evolution Is. 2001) Human Origins and the Fossil Record: What Does the Evidence Say? - Casey Luskin - July 2012 Excerpt: Indeed, far from supplying "a nice clean example" of "gradualistic evolutionary change," the record reveals a dramatic discontinuity between ape-like and human-like fossils. Human-like fossils appear abruptly in the record, without clear evolutionary precursors, making the case for human evolution based on fossils highly speculative. http://www.evolutionnews.org/2012/07/human_origins_a_1061771.html "A number of hominid crania are known from sites in eastern and southern Africa in the 400- to 200-thousand-year range, but none of them looks like a close antecedent of the anatomically distinctive Homo sapiens…Even allowing for the poor record we have of our close extinct kin, Homo sapiens appears as distinctive and unprecedented…there is certainly no evidence to support the notion that we gradually became who we inherently are over an extended period, in either the physical or the intellectual sense." Dr. Ian Tattersall: - paleoanthropologist - emeritus curator of the American Museum of Natural History - (Masters of the Planet, 2012) Read Your References Carefully: Paul McBride's Prized Citation on Skull-Sizes Supports My Thesis, Not His - Casey Luskin - August 31, 2012 Excerpt of Conclusion: This has been a long article, but I hope it is instructive in showing how evolutionists deal with the fossil hominin evidence. 
As we've seen, multiple authorities recognize that our genus Homo appears in the fossil record abruptly with a complex suite of characteristics never-before-seen in any hominin. And that suite of characteristics has remained remarkably constant from the time Homo appears until the present day with you, me, and the rest of modern humanity. The one possible exception to this is brain size, where there are some skulls of intermediate cranial capacity, and there is some increase over time. But even there, when Homo appears, it does so with an abrupt increase in skull-size. ,,, The complex suite of traits associated with our genus Homo appears abruptly, and is distinctly different from the australopithecines which were supposedly our ancestors. There are no transitional fossils linking us to that group.,,, http://www.evolutionnews.org/2012/08/read_your_refer_1063841.html Double Standards and a Single Variable - Casey Luskin - August 2012 Excerpt: (arguments) revolving around a single variable (brain size) which he claims (wrongly) shows smooth, gradual evolution. Even if this variable did evolve smoothly, I provide an extensive discussion in my chapter of why that would not demonstrate that humans share a common ancestor with apes. McBride fails to engage my discussion of the evolution of brain size, ignoring my arguments why skulls of "intermediate" size demonstrate very little. And as we'll see in a further article, the authorities he relies upon to claim that the evolution of cranial capacities displays a "lack of discontinuity" in fact argue that there is great discontinuity -- including "punctuational changes" and "saltation" -- in the hominin fossil record as it pertains to skull size. http://www.evolutionnews.org/2012/08/part_1_double_s063661.html
As to the supposed 'skull evidence', in which Darwinists line up skulls to give the false impression of a progression from apes to man, I would like to point out this little-known fact:
Are Brains Shrinking to Make Us Smarter? - February 2011
Excerpt: Human brains have shrunk over the past 30,000 years.
http://www.physorg.com/news/2011-02-brains-smarter.html

If Modern Humans Are So Smart, Why Are Our Brains Shrinking? - January 20, 2011
Excerpt: John Hawks is in the middle of explaining his research on human evolution when he drops a bombshell. Running down a list of changes that have occurred in our skeleton and skull since the Stone Age, the University of Wisconsin anthropologist nonchalantly adds, "And it's also clear the brain has been shrinking." "Shrinking?" I ask. "I thought it was getting larger." The whole ascent-of-man thing. … He rattles off some dismaying numbers: Over the past 20,000 years, the average volume of the human male brain has decreased from 1,500 cubic centimeters to 1,350 cc, losing a chunk the size of a tennis ball. The female brain has shrunk by about the same proportion. "I'd call that major downsizing in an evolutionary eyeblink," he says. "This happened in China, Europe, Africa—everywhere we look."
http://discovermagazine.com/2010/sep/25-modern-humans-smart-why-brain-shrinking
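As a quick check of what those quoted figures amount to, here is a minimal arithmetic sketch in Python. The cranial volumes and the 20,000-year span come straight from the excerpt above; everything else is purely illustrative.

# Arithmetic check on the brain-shrinkage figures quoted above
# (1,500 cc -> 1,350 cc over roughly 20,000 years, per Hawks).

start_cc = 1500.0   # average male cranial capacity ~20,000 years ago, per the quote
end_cc = 1350.0     # average male cranial capacity today, per the quote
years = 20000.0     # time span given in the quote

loss_cc = start_cc - end_cc
percent_loss = 100.0 * loss_cc / start_cc
rate_per_millennium = loss_cc / (years / 1000.0)

print(f"Total loss: {loss_cc:.0f} cc ({percent_loss:.1f}% of the original volume)")
print(f"Average rate: {rate_per_millennium:.1f} cc per thousand years")
# Output: Total loss: 150 cc (10.0% of the original volume)
#         Average rate: 7.5 cc per thousand years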
After looking through the 'fossil evidence' for human evolution, it seems to me that the most suggestive thing evolutionists have for proving 'the fact that humans evolved from apes', as they adamantly claim, is the infamous cartoon drawing that shows an ape slowly evolving into a man. Yet we find 'artistic license' to be rampant in these ape-men reconstructions:
Paleoanthropology
Excerpt: In regards to the pictures of the supposed ancestors of man featured in science journals and the news media, Boyce Rensberger wrote in the journal Science the following regarding their highly speculative nature: "Unfortunately, the vast majority of artist's conceptions are based more on imagination than on evidence. But a handful of expert natural-history artists begin with the fossil bones of a hominid and work from there… Much of the reconstruction, however, is guesswork. Bones say nothing about the fleshy parts of the nose, lips, or ears (or eyes). Artists must create something between an ape and a human being; the older the specimen is said to be, the more apelike they make it… Hairiness is a matter of pure conjecture."
http://conservapedia.com/Evolution#Paleoanthropology

"National Geographic magazine commissioned four artists to reconstruct a female figure from casts of seven fossil bones thought to be from the same species as skull 1470. One artist drew a creature whose forehead is missing and whose jaws look vaguely like those of a beaked dinosaur. Another artist drew a rather good-looking modern African-American woman with unusually long arms. A third drew a somewhat scrawny female with arms like a gorilla and a face like a Hollywood werewolf. And a fourth drew a figure covered with body hair and climbing a tree, with beady eyes that glare out from under a heavy, gorilla-like brow."
"Behind the Scenes," National Geographic 197 (March, 2000): 140
One can see this 'artistic license' for human evolution being played out on the following site:
10 Transitional Ancestors of Human Evolution by Tyler G., March 18, 2013 http://listverse.com/2013/03/18/10-transitional-ancestors-of-human-evolution/
Please note, on the preceding site, how the sclera (the white of the eye), a uniquely human characteristic, was introduced very early in the artists' reconstructions to make the fossils appear far more human than they actually were, even though the artists have no clue whatsoever what the eye colors of these supposed transitional fossils actually were.
"alleged restoration of ancient types of man have very little, if any, scientific value and are likely only to mislead the public" Earnest A. Hooton - physical anthropologist - Harvard University
bornagain77
May 27, 2013 at 01:13 PM PDT
Reminiscent of a discussion we had just a few days ago about whether specification can be precisely quantified. Complexity, sure. Specification, not so much.
Eric Anderson
May 27, 2013 at 12:51 PM PDT
Some more ERVs that don't fit the naturalistic evolutionary assumption of common descent:
PTERV1 in chimpanzee, African great apes and Old World monkeys but not in humans and Asian apes (orangutan, siamang, and gibbon).
http://www.sciencedaily.com/releases/2005/03/050328174826.htm

Conservation and loss of the ERV3 open reading frame in primates.
http://www.ncbi.nlm.nih.gov/pubmed/15081124
ERV3 sequences were amplified by PCR from genomic DNA of great ape and Old World primates but not from New World primates or gorilla, suggesting an integration event more than 30 million years ago with a subsequent loss in one species.

Many more cases of 'anomalous' ERVs:
https://uncommondescent.com/intelligent-design/life-project-architecture/#comment-449621
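To make the logic behind calling these patterns 'anomalous' concrete: under a single germline insertion with no subsequent loss, the set of species carrying a given ERV should exactly match a clade of the accepted primate tree. Below is a minimal sketch of that consistency check in Python; the simplified species lists and clade structure are my own illustrative assumptions, not taken from the cited papers.

# Toy consistency check: does an ERV presence/absence pattern fit a single
# insertion (no losses) on a simplified standard primate tree?

# Clades of the simplified tree, each written as a frozenset of species.
CLADES = [
    frozenset({"human"}), frozenset({"chimp"}), frozenset({"gorilla"}),
    frozenset({"orangutan"}), frozenset({"gibbon"}), frozenset({"macaque"}),
    frozenset({"human", "chimp"}),
    frozenset({"human", "chimp", "gorilla"}),
    frozenset({"human", "chimp", "gorilla", "orangutan"}),
    frozenset({"human", "chimp", "gorilla", "orangutan", "gibbon"}),
    frozenset({"human", "chimp", "gorilla", "orangutan", "gibbon", "macaque"}),
]

def fits_single_insertion(carriers: set) -> bool:
    """True if the carrier set matches some clade (one insertion, no losses)."""
    return frozenset(carriers) in CLADES

# PTERV1, per the quoted report: present in chimp, gorilla and Old World
# monkeys (macaque stands in for them here), absent in human, orangutan, gibbon.
pterv1 = {"chimp", "gorilla", "macaque"}
print(fits_single_insertion(pterv1))  # False -> needs multiple insertions or losses

# By contrast, an ERV shared by human + chimp + gorilla does match a clade.
print(fits_single_insertion({"human", "chimp", "gorilla"}))  # True

Of course, a False result by itself only shows that the simplest one-insertion, no-loss scenario fails; the quoted papers discuss independent insertions and lineage-specific losses as the competing explanations.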
bornagain77
May 27, 2013 at 12:40 PM PDT
ForJah, as to:

"Chimps and humans share at least 16 ERV matches, which basically seals the deal right from the get-go without even accounting for fossil evidence, observed mutations and speciations etc."
I would wait before I signed the bottom line of that deal if I were you:
Endogenous Retroviruses (ERVs) - video http://www.youtube.com/watch?v=TIz0UOfTVa8&feature=player_detailpage#t=156s
Further notes:
The definitive response on ERVs and Creation, with Dr. Jean Lightner
http://www.youtube.com/watch?v=feHYEgzaGkY

Refutation Of Endogenous Retrovirus - ERVs - Richard Sternberg, PhD Evolutionary Biology - video
http://www.metacafe.com/watch/4094119

Sternberg, R. v. & J. A. Shapiro (2005). How repeated retroelements format genome function. Cytogenet. Genome Res. 110: 108-116.

Endogenous retroviruses regulate periimplantation placental growth and differentiation - 2006
http://www.pnas.org/content/103/39/14390.abstract

Retrovirus in the Human Genome Is Active in Pluripotent Stem Cells - Jan. 23, 2013
Excerpt: "What we've observed is that a group of endogenous retroviruses called HERV-H is extremely busy in human embryonic stem cells," said Jeremy Luban, MD, the David L. Freelander Memorial Professor in HIV/AIDS Research, professor of molecular medicine and lead author of the study. "In fact, HERV-H is one of the most abundantly expressed genes in pluripotent stem cells and it isn't found in any other cell types."
http://www.sciencedaily.com/releases/2013/01/130123133930.htm

Transposable Elements Reveal a Stem Cell Specific Class of Long Noncoding RNAs - Nov. 26, 2012
Excerpt: The study published by Rinn and Kelley finds a striking affinity for a class of hopping genes known as endogenous retroviruses, or ERVs, to land in lincRNAs. The study finds that ERVs are not only enriched in lincRNAs, but also often sit at the start of the gene in an orientation to promote transcription. Perhaps more intriguingly, lincRNAs containing an ERV family known as HERVH correlated with expression in stem cells relative to dozens of other tested tissues and cells. According to Rinn, "This strongly suggests that ERV transposition in the genome may have given rise to stem cell-specific lincRNAs. The observation that HERVHs landed at the start of dozens of lincRNAs was almost chilling; that this appears to impart a stem cell-specific expression pattern was simply stunning!"
http://www.sciencedaily.com/releases/2012/11/121125192838.htm

Retroviruses and Common Descent: And Why I Don't Buy It - September 2011
Excerpt: If it is the case, as has been suggested by some, that these HERVs are an integral part of the functional genome, then one might expect to discover species-specific commonality and discontinuity. And this is indeed the case.
https://uncommondescent.com/evolution/retroviruses-and-common-descent-and-why-i-dont-buy-it/
bornagain77
May 27, 2013 at 12:40 PM PDT
Gpuccio: Here is what the dude said in response to what I asked, which I wrote about in box 1 on this page. He gave me a lettered sequence that changed colors from red to blue; here is the link: http://i.imgur.com/xWpvw.jpg

Now I happen to think, and I believe this dude would agree, that the example provided does make for a good analogy; it actually shows the exact opposite of his point. But he goes on to explain what the analogy is SUPPOSED to show, keeping in mind this is in reference to morphological changes:

"The whole idea of the red-blue paragraph is to help you understand that species exist on a spectrum, not as separate boxes walled off from one another. Purple is purposefully not defined. For the analogy, purple would be the point in a species' evolution where hybridization would occur. If you had a species, and had say generation 1 to generation 100, measured by millions, it would be like this: Generations 1-50 are all close enough genetically to interbreed, thus they are considered the same species. Generation 51 is where the genetic differences have begun to have an effect on interbreeding. If a generation 51 were to interbreed with a generation 1, there is a high likelihood of infertile offspring, though a generation 51 can still successfully interbreed with, say, a generation 30. However, a generation 80 would have more trouble creating viable offspring with a generation 30 than with a generation 50 specimen. Eventually, if you took a generation 100 specimen and tried to breed it with a generation 1 specimen, they would not be able to produce offspring whatsoever, thus there is a speciation event between the two, even though both generation 1 and generation 100 can interbreed, albeit with some difficulty, with generation 50.

This can be a tricky thing to understand for many people, even many who accept evolution, because how we define species is flawed; it works for most practical uses, but makes the concept of evolution and speciation harder to understand, because we are trying to put solid boundaries on something that has no solid boundary, but exists on a blurred spectrum.

As far as humans coming from ape-like ancestors, I suggest you investigate ERVs. Much like how science can determine whether a person is or is not your direct descendant through paternity tests, which look for specific genetic markers in your DNA, ERVs allow us to apply the same principle to species, and see who shares a relatively recent common ancestor. Chimps and humans share at least 16 ERV matches, which basically seals the deal right from the get-go, without even accounting for fossil evidence, observed mutations and speciations, etc."
ForJah
May 27, 2013 at 11:23 AM PDT
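To make the 'generational spectrum' explanation quoted in ForJah's comment above concrete, here is a toy numerical sketch in Python of the interbreeding model the commenter describes. The linear falloff and the 60-generation span are arbitrary illustrative choices made so the quoted cases come out as described; they are not measured biology.

# Toy model of the red-to-blue "generational spectrum": offspring viability
# falls off linearly with generational distance, reaching zero (a speciation
# event, in the quoted analogy) at `span` generations apart.

def viability(gen_a: int, gen_b: int, span: int = 60) -> float:
    """Fraction of viable offspring for parents drawn from two generations:
    1.0 when the generations coincide, 0.0 once they are `span` apart."""
    return max(0.0, 1.0 - abs(gen_a - gen_b) / span)

# The cases described in the quoted explanation:
print(viability(1, 30))    # ~0.52 -> same species, interbreeding unproblematic
print(viability(1, 51))    # ~0.17 -> high likelihood of infertile offspring
print(viability(30, 80))   # ~0.17 -> generation 80 struggles with generation 30
print(viability(50, 100))  # ~0.17 -> possible "albeit with some difficulty"
print(viability(1, 100))   # 0.0  -> full speciation between the endpoints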
niwrad (and rorydaulton): I would definitely say that human cognition (the only form of cognition we know of) is never pure quantity. The concept of quantity itself is a qualitative concept.

niwrad, you say: "It is easy to grasp what is the difference between one apple, two apples, three apples… only an integer number changes, representing the amount of apples." True. But I would add that the concept of integers, of natural numbers, is one of the most difficult qualitative concepts to define and understand. All cognition is based on conscious representations and experiences that defy any quantitative, or simply "objective", definition. I always cite meaning and purpose, but quantity, and all fundamental mathematical abstractions, including the principles of deduction, could well be more detailed examples.

niwrad, you say: "computers are good at crunching numbers but bad at grasping meanings". True. But I will be more radical. Computers are very good at crunching numbers without even understanding what numbers are. And they definitely cannot grasp any meaning, not even the most simple. Meaning is an experience of consciousness, and computers are not conscious.

I would also like to comment on the concept of CSI, especially in the form of dFSCI, which I usually use. In no way is it a "defective tool". It is a perfect tool. I will be more specific. A tool is only as good as the things it does. A tool is an empirical instrument. The purpose (!) of dFSCI is to allow a positive design inference, and dFSCI does that perfectly (100% specificity). So, it is a perfect tool, as far as our empirical experience confirms.

The role of dFSCI is to measure the complexity necessary to provide a function. The function is qualitative, and it is not measured. It can be assessed categorically, however, by means of a (qualitative) setting that gives a quantitative result: for example, a lab setting that measures a biochemical activity, which is categorized by a pre-specified threshold. So, the qualitative function is assessed as present or absent by a quantitative method (and a method is always in essence qualitative, even if it provides quantitative results).

The complexity tied to the function is certainly measured quantitatively, and in a digital context it is always possible, in principle, to do that. In most cases, we will have only approximations (a very qualitative concept), because of empirical difficulties, as happens in all empirical sciences. But that is not the point. The functional complexity can always be exactly defined and measured in principle. Even the method to measure digital functional complexity, however, is based on many abstract and subtle qualitative concepts: search space, functional space, probability distributions, models, and so on.
gpuccio
May 27, 2013 at 10:43 AM PDT
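For readers unfamiliar with the measurement gpuccio describes above, here is a minimal sketch in Python of the kind of calculation involved: the qualitative function is assessed only as present or absent, and the complexity is taken as the negative log2 of the ratio of functional sequences to the total search space. The -log2 ratio form and the 150-bit design-inference cutoff are assumptions drawn from how dFSCI is usually presented, not from this comment itself, and the sequence counts are hypothetical.

# Sketch of a digital functional complexity calculation: -log2 of the
# fraction of the search space that passes a pre-specified functional test.

import math

def dfsci_bits(functional_count: int, alphabet_size: int, length: int) -> float:
    """Functional complexity in bits for a digital string of `length` symbols
    over an alphabet of `alphabet_size`, of which `functional_count` distinct
    sequences pass the (qualitative, pre-specified) functional threshold."""
    search_space = alphabet_size ** length
    return -math.log2(functional_count / search_space)

# Hypothetical example: a 100-residue protein (20-letter alphabet) where
# 10^40 distinct sequences would pass the activity threshold.
bits = dfsci_bits(functional_count=10**40, alphabet_size=20, length=100)
print(f"{bits:.1f} bits")  # ~299.3 bits
print("design inference" if bits > 150 else "no inference")  # design inference

Note that the measured quantity depends entirely on the prior, qualitative choices (the functional definition, the threshold, the estimate of the functional space), which is exactly the point made in the comment above.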