Uncommon Descent: Serving the Intelligent Design Community

How to become an IDer in two weeks


This opportunity is dedicated to Darwinists, evolutionists, and any ID denier who sincerely desires to become a convinced IDer but has so far failed to reach that goal. It is a great opportunity that, unfortunately, Darwin himself could not have had in his time (you will understand why at the end).

First off, don't worry: the method is entirely free of charge. There are no books by ID theorists to buy, no lectures or courses required, and no need to travel to meetings or seminars. You can stay quietly where you are now, in front of your computer screen.

Some analyses have shown that the difficulty of understanding ID and its concepts (CSI, IC, etc.) lies in the following habits.
(1) Usually people simply look at complex systems (where CSI, IC and the rest reside) in a passive manner, without any active stance. Evolutionary biologists, for example, look at biological realities but do not try to construct them (yes, I know genetic engineering attempts something of the sort, but one cannot properly say that it starts from nothing). To look is too easy.
(2) Reading ID books or attending lectures can certainly help, but here too the participation is passive; there is no warranty that real understanding is achieved in the end. To read and to listen is too easy.
(3) Discussing ID/evolution issues with friends, colleagues and debaters can help, but it is often counterproductive: each party stays in his position, even more convinced than before, since discussion may invigorate one's wrong convictions. To discuss is too easy.
(4) Studying complex systems (both artificial and natural) can help, but again, to study them is one thing and to construct them is another. To study is too easy.
(5) Writing documents, articles, peer-reviewed papers and whatever else about complex systems can help (it is certainly more demanding than reading, listening or discussing), but there always remains the possibility that one continues to believe that such systems can evolve after all. To write is too easy.

At this point you wonder: no reading, no writing, no discussion, no study, no analysis of systems; what, then, is the method? In a word, the method is based on design. Yes, the principle of the method is that to really understand design one must personally design; the activities above are not enough. (I know you are disappointed.)

Of course, a good example of design is engineering in all its specialties. Unfortunately, almost all fields of engineering are inaccessible to laymen, for many reasons. The good news is that one field is theoretically and practically open (at least at a basic level) to almost everyone, or at least to scientific-minded people such as most ID deniers are: computer science. Our suggested (patent-pending) method of becoming an IDer is therefore based on computer programming. Developing programs offers ID refuters several advantages for learning ID.

(1) Computer programming is an activity in which, unlike literature, philosophy, journalism and so on, a severe control overarches the whole design cycle. In programming, errors matter; even minor ones are never condoned. This is good discipline for the student, who is always forced to correct his errors. If you write a book filled with errors, no worry: it will be published all the same. If you write a program with one error, nothing works. This is the difference between storytelling and programming. Usually there are at least two kinds of control or filter: at compile time and at run time. A program works only if it passes both filters. It is extremely instructive to hunt for the causes of a failure or of wrong behavior in your program. In programming you will always face this hard reality: you are the only source of all the functional bits.
(2) Computer programs don't arise by unguided evolution. They contain CSI, and only intelligence can create CSI. If software were generable by means of randomness and machines, software houses wouldn't need to pay legions of expensive programmers. When you are programming, you see your intelligence directly at work. Other programmers may help you, but no unintelligent thing can do the job for you.
(3) Developing programs is a good exercise for learning about CSI, IC, nested functional hierarchies, sub-functions, structures, dependencies among parts, meta-information, libraries, etc. For example, to test whether a particular sub-module is IC at the functional level, you can simply delete some of its instructions from the program sources and see what happens. You can get a rough measure of the CSI of your program by looking at the size of its source (or of its binary executable): the more instructions you write, the more CSI, and the more personal gratification.
(4) Computer programming is an information-processing job. It is therefore particularly apt for understanding what happens in biological cells, where information is processed and instructions are run by the molecular machinery. Cells don't work according to storytelling; they work according to programming.
(5) Last but not least, you can even simulate random mutations. You can insert a random error into the source and see whether the variation is functionally beneficial (as neo-Darwinian theory hopes). If you want better randomness, you can ask a (non-programmer) friend to introduce a blind change into your code. The analogy between human software and biological code is good, so directly testing random mutations in the former can give you an idea of their results in the latter. (A sketch of such a mutation test follows this list.)
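To make points (1), (3) and (5) concrete, here is a minimal sketch in Perl, the language recommended below. The file names are only examples, and a serious test harness would also have to compare the mutant's output against the original, not just check exit codes:

    #!/usr/bin/perl
    # mutate.pl -- apply one random single-character "mutation" to a Perl
    # source file, then see whether the mutant passes the two filters:
    # compilation and execution.  (For the crude CSI measure of point (3),
    # just look at the file size, e.g. with "wc -c".)
    use strict;
    use warnings;

    my $target = shift // 'target.pl';        # script to mutate (example name)
    open my $in, '<', $target or die "Cannot read $target: $!";
    my $source = do { local $/; <$in> };      # slurp the whole file
    close $in;

    # Overwrite one random position with a random printable character.
    my $pos = int rand length $source;
    substr($source, $pos, 1) = chr(33 + int rand 94);

    open my $out, '>', 'mutant.pl' or die "Cannot write mutant.pl: $!";
    print $out $source;
    close $out;

    # Filter 1: compile time.  Filter 2: run time.
    if (system('perl', '-c', 'mutant.pl') != 0) {
        print "The mutation is fatal at compile time.\n";
    } elsif (system('perl', 'mutant.pl') != 0) {
        print "The mutation is fatal at run time.\n";
    } else {
        print "The mutant survived; check by hand whether it works better.\n";
    }

Deleting a whole line instead of changing one character turns the same script into the knockout test of point (3).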

What is necessary to start? A computer, of course, but you have one already. The second step is the choice of a programming language. In the history of informatics hundreds of programming languages have been developed, all with pros and cons. My personal advice nowadays is to adopt Perl or PHP. Incidentally, Perl is particularly useful in genomic research; someone even claims that Perl “saved the human genome project” (see here).

Perl and PHP are modern high-level languages that run on every operating system and are relatively easy to learn yet very powerful. You can freely download their interpreters and online manuals from their websites. Read the initial chapters of the manual (not the entire manual). After the installation on your computer you can start by writing your first “Hello world” program. If you persist steadily every day, after two weeks you will likely have developed a functioning program complex enough to experiment with variations on. In any case, your awareness that, where bits are involved and instructions have to be run by processors, randomness is only destructive will have increased. Besides, you will see with your own eyes that in informatics not a single functional bit comes free: each must come from the intelligence of a programmer. As a consequence, given the analogy between informatics and biology, you will eventually pass from the “chance and necessity” unguided-evolution side to the ID side. Congratulations! You are welcome!
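For reference, the traditional first program looks like this in Perl:

    #!/usr/bin/perl
    # hello.pl -- the traditional first program
    use strict;
    use warnings;

    print "Hello, world!\n";

Save it as hello.pl and run it with: perl hello.pl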

Comments
On thermodynamics and its relation to ID/evolution, I just posted a new article today. Please post your comments about it over there. Thank you.
niwrad
October 13, 2009, 06:09 AM PDT
Nakashima: Do you deny that the organization and complexity of the surface of the Earth is driven by radiation from the Sun? No. Neither do I deny that automobiles are driven by fuel. Shall we reason that the energy stored in petroleum caused the organization of metal and other materials into automobiles? I maintain that unintelligent self-organization is an optimistic fantasy. But this hypothesis - "we know that a local increase in organization can be traded for a larger increase in entropy throughout the larger system" - why not test it six ways from Sunday and see what sort of organization results? Take an example of the Sun/Earth where massive increases in entropy in the Sun drive modest amounts of complexity and organization here at the surface of the Earth. Modest complexity being DNA, human intellect, millions of life forms, Shakespeare, etc. What would be an example of immodest complexity?
ScottAndrews
October 13, 2009, 05:51 AM PDT
Mr ScottAndrews, re the other 7 planets: whatever it is, it is. Do you deny that the organization and complexity of the surface of the Earth is driven by radiation from the Sun? Think about Miller-Urey type experiments. A small yield of amino acids for large energy inputs. Some fraction of the energy helped build the amino acids; most got radiated away as waste heat.
Nakashima
October 10, 2009, 07:51 PM PDT
Take an example of the Sun/Earth where massive increases in entropy in the Sun drive modest amounts of complexity and organization here at the surface of the Earth. The sun shines on eight other planets in this solar system. Where is their increase in organization? If the hypothesis is that increases in entropy drive complexity and organization, how would we test that? It sounds like something we should be able to repeat.
ScottAndrews
October 10, 2009, 11:01 AM PDT
Mr Niwrad, Without intelligence all things go unavoidably towards disorder, that is exactly in the opposite direction of organization. I fear you have returned to an assertion that we know is not true. As a general point, we know that a local increase in organization can be traded for a larger increase in entropy throughout the larger system. Take an example of the Sun/Earth where massive increases in entropy in the Sun drive modest amounts of complexity and organization here at the surface of the Earth. Or take individual chemical reactions that create complex results but also heat, or water, or some other high-entropy product. The total entropy of the reaction products may have increased, but the increase is unevenly distributed.
Nakashima
October 10, 2009, 10:53 AM PDT
Nakashima #36 Maybe I must clear up a misunderstanding. I agree perfectly with you that probability matrices exist in science and are used to model many phenomena. For example, in mathematics there is the theory of Markov chains: stochastic processes based on probabilistic transition matrices (the passage from one state to the next is not deterministic, because it depends on a probability). So I don't deny probability matrices (or arrays) at all. I simply deny that the genetic code is a probability vector. In the translation process, the passage from an RNA codon to the codified amino acid depends not on a probability but on a fixed rule, as the sketch just below illustrates.
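For instance, in Perl (my recommended language) the contrast looks like this. The two codon entries are from the standard genetic code; the little weather chain is only an arbitrary toy example:

    #!/usr/bin/perl
    # A code is a fixed lookup: the same input always yields the same
    # output.  A Markov chain is a probabilistic transition: the next
    # state is drawn from a distribution.
    use strict;
    use warnings;

    # Fixed rule (two entries from the standard genetic code).
    my %code = (UUU => 'Phe', AUG => 'Met');
    print "AUG -> $code{AUG} (every time)\n";

    # Probabilistic rule (toy transition matrix, for illustration only).
    my %next = (sunny => [ [ 0.8, 'sunny' ], [ 0.2, 'rainy' ] ]);
    my $r = rand;
    my $state;
    for my $entry (@{ $next{sunny} }) {
        $r -= $entry->[0];
        if ($r < 0) { $state = $entry->[1]; last; }
    }
    $state //= 'rainy';   # guard against floating-point round-off
    print "sunny -> $state (this run; rerun and it may differ)\n";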
I think if you accept that physical processes can change aspects of the code, you are open to evolution of the code from a simpler state to a more complex state. This strikes me as a frontloading/TE kind of position, as you say design time vs run time. Abiogenesis is design time and evolution is run time.
If with "more complex" state you mean "more organized" then the passage to more organization always implies the intervention of intelligence (because organization implies CSI). To try returning to the issue of my post, when you write software you are organizing data, processes and events. There is no other way to organize things than to apply intelligence. Without intelligence all things go unavoidably towards disorder, that is exactly in the opposite direction of organization. You say "abiogenesis is design time and evolution is run time". Here I could again agree with you, if you concede that evolution has only a passive role, insofar it develops only the possibilities that at design time were inserted in the systems. No doubt ultra complex systems (as the biological ones) have large potentialities of variation. In the informatics terms we could say that the biological software is highly configurable and parametrical. Moreover many configuration changes are triggerable by environmental events. All these aspects can be grouped under the name "evolution". No one denies that organisms change. What ID denies is that changes implying increase in organization might arise thank to randomness and laws only.niwrad
October 10, 2009, 09:30 AM PDT
Mr Niwrad, Thank you very much for your kind words. Here is an example of a code where you might use probabilities as entries. In cryptography, it is important to disguise the letter (and bigram and trigram) frequencies of a substitution cipher as much as possible, since these frequencies are the easiest path to solving the cipher. Let us say that E is 10 times more frequent than Q. I construct a cipher where E is probabilistically mapped to 10 byte codes, but Q is mapped to only one. In this way I disguise the identities of E and Q. I bring this up only to point out that a matrix with probabilities may have some uses in different computer applications. Related to the subject of evolution, there are also Estimation of Distribution Algorithms, in which an array or matrix is populated with probabilities which are updated as the algorithm is run. EDAs can be a very compact way of representing an entire population. For example, an EDA WEASEL (everyone's favorite example) would start with an array 28 entries long, and each entry would be a set of 27 probabilities, one for each possible character. In the first generation, each character in each space has the same probability, 1/27. When the algorithm has worked for a while, the probabilities have changed such that in slot 1 the probability of M is now higher than all the other letters, etc. (Yes, I am skipping how you change the probabilities based on fitness!) I think if you accept that physical processes can change aspects of the code, you are open to evolution of the code from a simpler state to a more complex state. This strikes me as a frontloading/TE kind of position, as you say design time vs run time. Abiogenesis is design time and evolution is run time. I'm not going to push you too hard on that now.
Nakashima
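A minimal Perl sketch of the representation Nakashima describes: 28 slots, each a set of 27 probabilities initialized to 1/27, with one candidate string sampled from the model (the update-from-fitness step is omitted, as in the comment itself):

    #!/usr/bin/perl
    # EDA WEASEL representation: 28 slots, each holding a probability
    # for every character (A-Z plus space), all starting at 1/27.
    # We sample one candidate string from the model.
    use strict;
    use warnings;

    my @alphabet = ('A' .. 'Z', ' ');
    my @model = map { +{ map { ($_ => 1 / 27) } @alphabet } } 1 .. 28;

    my $candidate = '';
    for my $slot (@model) {
        my $r = rand;                    # roulette-wheel sampling
        my $picked = $alphabet[-1];      # fallback for float round-off
        for my $ch (@alphabet) {
            if (($r -= $slot->{$ch}) < 0) { $picked = $ch; last; }
        }
        $candidate .= $picked;
    }
    print "$candidate\n";                # pure noise at generation 0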
October 8, 2009, 09:00 PM PDT
Nakashima #31 Thank you for your always challenging and interesting remarks. You are a person the ID movement would be glad to have on board (and, by the way, I don't consider it impossible that in the future you will be entirely on our side).
I think you are wrong about whether a rule matrix can be populated with probabilities instead of certainties.
Here I am not sure I understand your point. I said that rule matrices (which specify codes) must be populated with constant values, not probabilities. My post focused on the usefulness of computer programming for getting an idea of what happens inside biological systems. When a programmer must specify a code, usually he fills an array or hash with constants, or else inserts them into a file or library module. Here probabilities don't matter, only certainties. Maybe you mean the probability of a code arising at random; you know that for me this probability is null.
We can also observe that in practice the implementation of the genetic code by the machinery of the cell is not perfect. Protein assembly errors do happen.
Abstract models are nearly perfect per se, but when they are implemented in matter, entropy intervenes and errors happen. No wonder: it is everyone's everyday experience.
On the subject of mapping only 15 amino acids, I’m asking you to imagine a form of life that only needs 15 amino acids. So a map that only contains 15 is fine. For example, suppose that there used to be only one ancestral map, which mapped 22 different amino acids, but our cells are a reduction of that ancestral map, while mitochondria are another reduction.
I don't know what the technical consequences of a 15- or 22-amino-acid life would be; I suspect the repercussions would be many and deep. However, it seems clear to me that these considerations are framed within an ID perspective.
I’m not sure what mechanism you think exists that prevents a physical, material system which can be related to one abstraction from changing such that it is now related to another abstract system. The system doesn’t “know” it is related to an abstraction.
The system doesn’t "know" it is related to an abstraction but the designer of the system must know, otherwise he couldn’t design it!
My laptop has a CPU that is related to the abstraction of a Turing machine. It can be destroyed by a cosmic ray, nonetheless. [...] Being related to an abstraction, even a powerful abstraction, doesn’t offer any protection from the slings and arrows of outrageous fortune.
Of course. The relation to an abstract model is a must at the design level but at run time it doesn’t protect against entropy and all its harmful effects.
Therefore, I don’t see how you can maintain that all the genetic codes we see today in the world _must_ have been independently designed to be exactly what they are today. And if these maps _can_ change, there is the opening for evolution.
Any engineering variation in an ultra-complex information-processing system has to be designed, except for the changes that the system is able to make by itself . . . according to possibilities that were frontloaded into the system from the very beginning by the designer!
niwrad
October 8, 2009, 12:48 PM PDT
Cam: Keep in mind also that complexity must be paired with specificity to indicate design.
SpitfireIXA
October 7, 2009, 06:29 PM PDT
cam:
Are you suggesting that Microsoft Windows could be simplified and still maintain all of its features, including compatibility with previous versions?
Yes. Same with the EU and the Tax Code.
And while I agree that the US Tax Code could be simpler, I think you’ll have trouble defending your argument that it came about by design.
Okay, I hope you're joking. I'd admit that there'd be humor in it if you were. Therefore, parsimony does not indicate design better than complexity.
SpitfireIXA
October 7, 2009, 06:28 PM PDT
SpitfireIXA, #29
cam @27 Would you agree that parsimony is a better indicator of design than complexity? Absolutely not, unless you wish to say that Microsoft Windows, the European Union regulatory body and the US Tax Code were clearly not designed. Since they were, parsimony appears empirically to be irrelevant to the definition of design.
Are you suggesting that Microsoft Windows could be simplified and still maintain all of its features, including compatibility with previous versions? If so, I suggest you offer your services to Mr. Ballmer. I would also like to see you simplify the EU regulatory body while retaining its functions within all of the independent countries. And while I agree that the US Tax Code could be simpler, I think you'll have trouble defending your argument that it came about by design. If you're looking for an example of cumulative selection outside of the biological sciences, this would be a good one.
camanintx
October 7, 2009, 06:15 PM PDT
Mr Niwrad, Thank you for answering clearly. I appreciate your definiteness when so many equivocate. I think you are wrong about whether a rule matrix can be populated with probabilities instead of certainties. In the abstract, it is easy to imagine and analyze the behavior of such an object, inquire whether the probabilistic nature can be distinguished from noise in the channel, and make many other interesting observations. We can also observe that in practice the implementation of the genetic code by the machinery of the cell is not perfect. Protein assembly errors do happen. The map is not the territory. Google "aaRS isoleucine valine error" and look at p. 36 of the book that Google brings up in Google Books, Translation Mechanisms by Jacques Lapointe. On the subject of mapping only 15 amino acids, I'm asking you to imagine a form of life that only needs 15 amino acids. So a map that only contains 15 is fine. For example, suppose that there used to be only one ancestral map, which mapped 22 different amino acids, but our cells are a reduction of that ancestral map, while mitochondria are another reduction. I'm not sure what mechanism you think exists that prevents a physical, material system which can be related to one abstraction from changing such that it is now related to another abstract system. The system doesn't "know" it is related to an abstraction. My laptop has a CPU that is related to the abstraction of a Turing machine. It can be destroyed by a cosmic ray, nonetheless. A field-programmable gate array can be arranged to be related to the abstraction of a Turing machine, and still reprogrammed. Being related to an abstraction, even a powerful abstraction, doesn't offer any protection from the slings and arrows of outrageous fortune. Therefore, I don't see how you can maintain that all the genetic codes we see today in the world _must_ have been independently designed to be exactly what they are today. And if these maps _can_ change, there is the opening for evolution.
Nakashima
October 7, 2009, 03:16 PM PDT
Nakashima #22 Codes (such as ASCII and the genetic one) are abstract functions mapping a set of abstract symbols into another set of abstract symbols. Random mutations, by definition, apply to matter; they don't apply to abstractions. Therefore there is no need to compute probabilities, because here we face a problem of impossibility. To answer your question: in the case of codes the design inference is a matter of principle. Rules and laws (and codes are matrices of them) are fixed by definition. They could not work if they were not fixed while the system that uses them is running. In other words, an information-processing system using codes could not work properly if those codes changed at run time.
We know variants of the genetic code exist. Were they each an independent act of design?
My answer is yes, for the reasons I gave above. Of course these variants must be specified before the systems start working.
Do you deny that there is even the possibility that there is another system, self consistent, close to the current system in terms of mutation and other forms of variation, which is simpler, more error prone, slower, but still works?
A genetic code must somehow map all 20 amino acids. Obviously it cannot be simpler than that; for example, it cannot map only 15 amino acids.
niwrad
October 7, 2009, 12:30 PM PDT
cam @27
Would you agree that parsimony is a better indicator of design than complexity?
Absolutely not, unless you wish to say that Microsoft Windows, the European Union regulatory body and the US Tax Code were clearly not designed. Since they were, parsimony appears empirically to be irrelevant to the definition of design.
SpitfireIXA
October 7, 2009, 09:50 AM PDT
Upright BiPed, #25
You then say “Your assumption about what I base my proof on is the real joke.” I based my comment on your own words: “Since you obviously don’t consider life a “naturally forming system”, I seriously doubt you will accept my answer, but I’ll try anyway.”
You asked for an example, not proof.
camanintx
October 7, 2009, 07:00 AM PDT
Upright BiPed, #25
Originally I answered this comment appropriately by saying that you are making judgments about the designing agent that are both unwarranted and without empirical support.
Would you agree that parsimony is a better indicator of design than complexity?
camanintx
October 7, 2009, 06:57 AM PDT
I hear ya Nak... A purely undirected material rise to Life not only has no empirical inferences behind it, but is fraught with a staggering number of go/no-go problems, both conceptually and chemically - but hey! what if it weren't* (*a remark made in the true spirit of institutionalized assumptions contrived to support personal worldviews and political conveniences, which have nothing to do with science)
Upright BiPed
October 6, 2009, 07:42 PM PDT
Caman, “Since three different bases combined in sets of three provide adequate information to define only 20 amino acids, wouldn’t the introduction of a fourth base indicate a lack of parsimony, a common feature of intelligent design?” Originally I answered this comment appropriately by saying that you are making judgments about the designing agent that are both unwarranted and without empirical support. Now I can see that you are stuck in a world where one gene equals one protein and all that other stuff is junk DNA left over from building proteins. It’s as if you see it all explained if you can get to proteins arising from amino acid chains arising from nucleic acid sequencing. Perhaps you could more easily recognize the issues presented if you allowed yourself to expand your view to include a fuller range of systems, subsystems, and computational routines at work by virtue of the cellular processing of encoded information. James Shapiro (no friend of ID) stated:
Genomes are hierarchically organized as systems assembled from DNA modules, which themselves generally constitute systems at lower levels. Each genome is formatted and integrated by sequence elements that do not code for proteins. These formatting elements constitute codons in multiple genetic codes for distinct functions such as transcription, replication, DNA compaction and genome distribution to daughter cells. Consequently, the genome has computational system architecture. (Proceedings of the 4th International Conference on Biological Physics).
He went on to talk about the range of “universals” in biophysics (and how they have changed over time), as well as about the theories themselves.
…the conventional theory that evolution occurs by a random walk through adaptive space and produces a virtually endless series of sui generis inventions. One alternative to this conventional view is that there exist design principles and procedures that are used repeatedly in evolution (in other words, evolution occurs as an engineering process).
He then lists some post-1953 (DNA) universals, which included: Cellular Computation and decision-making, Surveillance (sensitivity and signal transduction), Biochemical and Genetic Regulation, Error Detection (repair, checkpoints), Complexity and Connectivity (reliability, precision, robustness), and what he called built-in natural genetic engineering mechanisms for genome change. He then states:
“Some of these Universals have been known for a long time, but two of them developed out of the molecular biology revolution in the second half of the 20th Century. The first post-1953 Universal, the idea that cells compute and make decisions, is not new. But its widespread acceptance has only recently emerged from studies of biological regulation and the identification of countless molecules, multimolecular complexes, and signaling systems which provide detailed control over the operation of virtually every aspect of cellular function (1, 2). On a very short time-scale, this computational capacity allows cells to respond appropriately to internal and external signals, to adjust to changing conditions, to detect and correct misfunctions, and to coordinate the millions of biochemical events involved in metabolism, growth, morphogenesis, cell division and multicellular development. The second post-1953 Universal, the recognition that the vast majority of genetic change results from the action of cellular biochemical systems that act on DNA, is far less widely known, and its significance is not appreciated outside a small group of specialists (3, 4). The discovery of built-in natural genetic engineering mechanisms dates back to Barbara McClintock’s pioneering cytogenetic studies of the late 1940s and early 1950s (5). However, the ubiquity of internal systems for genome change only became apparent through molecular studies in bacteria in the 1960s and, with recombinant DNA technology, in eukaryotes in the 1970s and 1980s (6-8). In terms of a 21st Century view of evolution, the major importance of natural genetic engineering is that this capability removes the process of genome restructuring from the stochastic realm of physical-chemical insults to DNA and replication accidents. Instead, cellular systems for DNA change place the genetic basis for long-term evolutionary adaptation in the context of cell biology, where it is subject to cellular control regimes and their computational capabilities (9-11).”
And if I may call your attention to his next comment:
“The genome is the long-term storage medium for each species (much like a computer hard disk) and consists of the total information content of the DNA molecules in the cells of that species. Although most genomics researchers focus on the "coding" regions of the genome that determine the proteins a species can synthesize, genomes are built up of protein-coding and other classes of DNA sequences that are combinatorially formatted to carry out the multiple tasks necessary for overall genome function (Table II). While textbooks call the triplet code for amino acids in proteins "the genetic code," there are in fact many genetic codes for the different aspects of genome coding, packaging, replication, distribution, repair and evolution. “
I hope this will help you understand that bioscience has moved well past how proteins are formed from triplets. Not only has that encoded symbol-system remained unexplained by purely materialistic causes, but so have the others that have come to light since the “genetic code” was cracked. - - - - - - - - You then say “Your assumption about what I base my proof on is the real joke.” I based my comment on your own words: “Since you obviously don’t consider life a “naturally forming system”, I seriously doubt you will accept my answer, but I’ll try anyway.” - - - - - - - - “Since one of the unique characteristics of “life” is the ability to pass along information to later generations, asking for an example of a non-living system which produces such data is a ridiculous question.” You have become confused. I didn’t ask for a system that provides for heredity; I asked a different question altogether: Can you name a “naturally forming system” that creates a function by means of transcribing encoded information? This would be an example of how inanimate material causes can create a symbol system that leads to a function. You cannot name one, because there isn’t one. - - - - - - - - - - You then quote me as saying that no one has ever produced “empirical evidence that inanimate matter can organize itself into a function by means of symbolically-encoded instructions” and you follow by saying that my “knowledge on the subject is lacking”. Your proof of such is a link to a paper about Squirm3, an artificial chemistry program which the paper’s authors admit has no analog in nature. “The system we shall describe developed from attempts to realise a representation for artificial creatures that was both computationally efficient and flexible in shape. Mass-spring models were rejected because of the potentially high computational cost of simulating the physics of multi-body systems. Traditional cellular automata were not used because they do not easily permit creatures to interact with each other, and it is felt that these rich interactions are essential for driving creative forces in evolution. From a purely systems design point of view artificial chemistries seem to offer the right mix of flexibility of form and a rich spectrum of possibilities regarding construction.” Are you kidding me? Did you even read the paper? C’mon, be serious.
Upright BiPed
October 6, 2009, 06:15 PM PDT
Cam @21 Squirm3, and other carefully designed computer simulations, do not produce self-organizing machinery that accomplishes functions. They produce self-organizing patterns only, and only if the rules of the computer simulation allow it. Squirm3's method of "evolution" is to impose artificial rules on its population to induce change. Basically, it's the WEASEL program for the MIT crowd. Since you apparently did not know that, I would say that your knowledge on the subject is lacking. Therefore, Upright BiPed's claim @20 stands unrefuted. For the non-bloviated version of Squirm3's description, check out The Squirm3 Site
SpitfireIXA
October 6, 2009, 05:51 PM PDT
Mr BiPed, In order to hold my hypothesis, I assume that most or all of the things you mention, and more besides, can vary, and that many of the possible variances in one part do not cause the whole system to fail, but do cause it to work better or worse at reproducing itself. I assume these variants existed in a world of limited resources, so that systems which worked better captured more resources for themselves, and forced slower and more error prone variants out of existence. I assume these variances have been induced in earlier systems by natural means such as the presence or absence of a chemical, pressure changes, photons of heat or higher energies, or radiation. I assume that the earliest systems were the simplest, the slowest, the most error prone, and involved direct templating of amino acids on RNA such that adjacent RNA triples were involved in a reaction that helped bind the AAs together as a protein. As such, these early systems did not involve any machinery other than RNA and amino acids formed by the environment, in environments where such reactions were chemically favorable. I assume that this kind of templating could exploit AAs from broad classes, any hydrophilic AA, for example. While I feel comfortable with these assumptions, based on my reading of (some of) the literature, I'm very aware that experiment can falsify much of what I've assumed. For example, codon capture hypotheses can be tested. Imagine eliminating an AA from the code table, and replacing every instance of that AA with a closely related one. How much of the machinery of the cell would still work? How much, just of the machinery of the genetic code itself? Imagine a code with two AAs equiprobably coded for, two kinds of tRNA, etc. Does the cell still work?
Nakashima
October 6, 2009, 04:39 PM PDT
Mr Niwrad, Yes, we have had previous discussions, and I am now trying extremely hard to spell your name correctly! With respect to the ASCII code, I will hold back my historical knowledge and admit ignorance of whether it is designed. I'm even ignorant of the space this one event is drawn from. How would I detect design? What computational procedure do you have that yields T or F with probability significantly different from .5, and which you can also apply to the genetic code? A function or code is not a thing generable by chance and necessity because it is an abstract concept, it is a set of rules. The matrix of rules called the “genetic code” is invariant with respect to the underlying molecular events and processes. It could not be invariant if it were not abstract. Perhaps it would help to realize that it is not invariant. The entries in the code table are probabilities, not certainties. The genetic code, taken as a system, works most of the time. Do you deny that there is even the possibility that there is another system, self-consistent, close to the current system in terms of mutation and other forms of variation, which is simpler, more error prone, slower, but still works? Do you deny that this slower system could ever mutate along an available pathway to become the current system? We know variants of the genetic code exist. Were they each an independent act of design?
Nakashima
October 6, 2009, 04:03 PM PDT
Upright BiPed, #20
“Since there are only 20 known amino acids, using 64 combinations to express them appears overly complex for a designed system.” This is a personal assumption made about the agent of design. It has no empirical foundation beyond your personal belief, and it means zilch toward the two issues that are in need of an explanation. You cannot say (with a straight face) “using x and y seems odd to me, therefore, unguided chance can coordinate a symbolic code after all!” It’s a non-starter.
Since three different bases combined in sets of three provide adequate information to define only 20 amino acids, wouldn't the introduction of a fourth base indicate a lack of parsimony, a common feature of intelligent design?
Pointing to Life as your proof is a joke. Do you not find any logical fallacy in assuming that Life was the result of material chance, then pointing to Life as a proof that material chance can produce Life?
Your assumption about what I base my proof on is the real joke. Since one of the unique characteristics of "life" is the ability to pass along information to later generations, asking for an example of a non-living system which produces such data is a ridiculous question.
We know that no person has ever produced the slightest bit of empirical evidence that inanimate matter can organize itself into a function by means of symbolically-encoded instructions.
Your knowledge on the subject is lacking. Evolvable Self-Replicating Molecules in an Artificial Chemistry
camanintx
October 6, 2009, 02:27 PM PDT
Caman, “Since you obviously don’t consider life a “naturally forming system”” Whether or not I think life is a naturally forming system is subject to an intractable debate as to what “natural” means; more importantly, it is completely beside the point (both to me and to the design inference). Life is a system, on that there is no doubt. Life uses symbolic information, and on that there is no doubt. What is to be explained is the information and the symbol system in which it is instantiated into matter. “Since there are only 20 known amino acids, using 64 combinations to express them appears overly complex for a designed system.” This is a personal assumption made about the agent of design. It has no empirical foundation beyond your personal belief, and it means zilch toward the two issues that are in need of an explanation. You cannot say (with a straight face) “using x and y seems odd to me, therefore, unguided chance can coordinate a symbolic code after all!” It's a non-starter. - - - - - - - So, in the end, you simply punted on the question I asked. What is to be demonstrated by you is that unguided chance acting on inanimate matter can produce the effect we observe: cellular function being created by means of the transcription of symbolic information. You cannot name a single instance where unguided chance acting on inanimate matter has produced such an observed effect - yet it is you who claims that such capabilities are within the properties of unguided chance acting on inanimate matter. Pointing to Life as your proof is a joke. Do you not find any logical fallacy in assuming that Life was the result of material chance, then pointing to Life as a proof that material chance can produce Life? (give me a break…) Yet on the other hand we have “agency” as a cause that operates within the natural world. So with that, we can form the basis of a question: We know that no person has ever produced the slightest bit of empirical evidence that inanimate matter can organize itself into a function by means of symbolically-encoded instructions. We also know that the very concept of inanimate matter coordinating such a system runs directly opposite to our uniform experience with such systems. We also know that in every instance in which we find inanimate matter organized into a function by means of symbolic information, we also find that an agent is the cause behind that result. So: Effect A has never been described as the result of Cause X. Cause X has never been described as creating Effect A. Effect A has regularly been described as a result of Cause Y. Cause Y has regularly been described as creating Effect A. You argue that Cause X is the obvious (and apparently only) choice - even though Cause X has absolutely no empirical inferences whatsoever leading to it, while Cause Y has all empirical inferences leading to it. I ask you again: why? Since your conclusion has seemingly little to do with the inferences coming from the actual observations, please be specific.
Upright BiPed
October 6, 2009, 11:44 AM PDT
Nakashima, “Let’s think specifically about RNA and protein.” Sure. Would you like to discuss the formose reaction, or deamination, or the half-life of cytosine, or the formation of ribose, or cross-contamination, or polyribonucleotide length, or unfulfilled protein functions, or a self-replicating membrane, or the ex nihilo synthesis of the full gamut of mRNA and tRNA constituent parts, or the transition from ribozyme-based protein synthesis to protein-based protein synthesis, or the building blocks of RNA in the pre-biotic environment, or the appearance of functional RNA oligonucleotides, or the appearance of RNA replicase, or the templates used to produce proteins, or the mysterious reverse transcription of information? Perhaps it would be easier if you simply listed the full range of necessary assumptions you must make in order to hold your hypothesis.
Upright BiPed
October 6, 2009, 11:37 AM PDT
Upright BiPed, #15
caman, Can you name a “naturally forming system” that creates a function by means of transcribing encoded information? This is not a rhetorical question.
Since you obviously don't consider life a “naturally forming system”, I seriously doubt you will accept my answer, but I'll try anyway. As you pointed out, DNA codes for each amino acid using six bits of data. Since there are only 20 known amino acids, using 64 combinations to express them appears overly complex for a designed system. When you realize that each amino acid consists of three parts (an amine group, a carboxylic acid group and one of the twenty R-groups), using three bases to identify each starts to make sense. Since amino acids are known to form naturally under a very wide range of conditions, shouldn't you then expect any “naturally forming system” based on them to use a 6-bit code to pass along genetic information?
camanintx
October 6, 2009, 10:50 AM PDT
Nakashima #13 I answer you with a question: confronted with the ASCII code, what would you say: is it designed or not? The ASCII code is a function that maps 8-bit sets onto the characters and symbols used to write on computers. The genetic code is something like that. It is a function that maps the 64 6-bit sets (each composed of three RNA symbols) onto the amino acids. Since there are only 20 different amino acids, some different 6-bit sets map to the same amino acid; in mathematical terms, therefore, the genetic code is a surjective function. This surjectiveness is the cause of the redundancy of the genetic code (illustrated in the sketch below). The choice of 20 amino acids, the choice of 4 different RNA nucleotides (A, U, G, C), and the choice of grouping 3 nucleotides (one "codon" = 6 bits) to code one amino acid are all design choices made to obtain a certain redundancy. A function or code is not a thing generable by chance and necessity, because it is an abstract concept: a set of rules. The matrix of rules called the "genetic code" is invariant with respect to the underlying molecular events and processes. It could not be invariant if it were not abstract. You and I have already discussed this: rules and laws are abstract things that overarch matter and as such cannot be reduced to matter. Only design states rules and laws. Chance and necessity simply obey laws; they never create laws.
niwrad
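In Perl the surjectivity is easy to see. A sketch with only a handful of entries from the standard table (the full table, of course, has all 64 codons):

    #!/usr/bin/perl
    # Redundancy of the genetic code as a surjective map: several codons,
    # one amino acid.  Only a few entries of the standard table are shown.
    use strict;
    use warnings;

    my %code = (
        UUU => 'Phe', UUC => 'Phe',
        UUA => 'Leu', UUG => 'Leu',
        GGU => 'Gly', GGC => 'Gly', GGA => 'Gly', GGG => 'Gly',
        AUG => 'Met',                # also the start codon
        UGG => 'Trp',                # one of the few single-codon amino acids
    );

    # Group the codons by the amino acid they map to.
    my %synonyms;
    push @{ $synonyms{ $code{$_} } }, $_ for sort keys %code;

    for my $aa (sort keys %synonyms) {
        print "$aa <- ", join(', ', @{ $synonyms{$aa} }), "\n";
    }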
October 6, 2009, 08:11 AM PDT
Mr BiPed, Let's think specifically about RNA and protein. Could aaRSs evolve? Could the genetic code be simpler and still be useful? Could a code as simple as "some codon triples = any hydrophobic AA, all other triples = any hydrophilic AA" be more useful than no code, or a random code? Could the ribosome be simpler, or have a greater error rate, and still be more useful than no ribosome at all? These aren't rhetorical questions.
Nakashima
October 6, 2009, 07:28 AM PDT
caman, Can you name a "naturally forming system" that creates a function by means of transcribing encoded information? This is not a rhetorical question.
Upright BiPed
October 6, 2009, 06:11 AM PDT
niwrad, #12
Organisms' self-reproduction implies they are more complex and advanced than computers; if computers are designed, then with greater reason organisms are.
You appear to believe that complexity is an indicator of design when evidence suggests exactly the opposite. Compare any naturally forming system to an equivalent designed system and the design will almost invariably be simpler.
camanintx
October 6, 2009, 06:00 AM PDT
Mr niwrad, "an indubitably designed feature (the redundancy of the genetic code)" Are you saying that aaRS did not evolve? What, exactly, about the genetic code was the design and intervention of an intelligent agent in a world otherwise incapable of reaching this state (i.e., the probability of its reaching this state is less than 1/(2^150))?
Nakashima
October 6, 2009, 05:07 AM PDT