
Comments on Kathryn Applegate’s May Posts on BioLogos

Since I am a cell biologist and immunologist by training, it is with great interest that I read Kathryn Applegate’s May BioLogos posts drawing parallels between adaptive immunity and evolution. In the first essay she claims that antibody “production requires randomness at multiple levels” and that God may use random processes to create “life over long periods of time.” In the second post Dr. Applegate goes on to suggest that evolution uses “the same kinds of mechanisms, except the mutations occur in germ cells…”

These are interesting hypotheses, but I am not convinced that the elegant processes whereby B cells differentiate and germ cells are formed actually support the conclusions drawn. Good science depends on accurately distinguishing between data, interpretation of data, extrapolation from data, and even speculation; in these posts this has not been adequately accomplished. In fact, even the science is faulty in places. To explain, the data show that B cells manufacture over 10^15 different antibodies using fewer than a couple hundred gene segments. They accomplish this feat by rearrangement and excision of DNA sequences—these occur in a highly regulated fashion that has been extensively described in the literature. These facts have been established by interpretation of vast amounts of data.

However, I would like to suggest that the claims that 1) B cell differentiation is random, 2) it is a model for the way that God created life, 3) evolution “works” by B-cell-like mutations in the germ cell line, and 4) germ cell formation is in any way analogous to antibody formation are based on a one-sided explanation of the science and much speculation. Dr. Applegate states that God could have done it this way; I do not dispute this. After all, if He is God, it is logical that He can do whatever He wants.

Unfortunately, in science the question is not what God could do, but what actually happens/happened or, in essence, what He did do. In fact, unless we cross the line from interpretation to speculation, much further research is required before any conclusions about the parallels between the B cell picture, germ cell formation, and macroevolution can be drawn.

Let’s look at each of the above in turn, assessing the scientific merit of the claims. Is the process whereby stem cells become fully functioning, specific antibody-producing B cells random? No. In fact, it is anything but random and much too complex and intricate to describe here. Lippincott’s Immunology and indeed Applegate herself describe the process as elegant. To elucidate this question, instead of glossing over the process and emphasizing only the randomness, it would be beneficial to examine the sequence of events in at least a little more detail.

Starting with the basics, the prime function of a B cell is to make immunoglobulins (antibodies); the base structure of an antibody consists of two identical heavy and two identical light chains of protein (See http://8e.devbio.com/article.php?id=31 for a nice image). These chains are encoded in DNA, which is much like an instruction book for the cell. The DNA is located in the nucleus of a cell. When some of the information in the DNA is needed, that part of the DNA is transcribed into messenger RNA (mRNA), which carries the information into the cytoplasm of a cell. There, complex machines translate the information in the mRNA into specific proteins—antibodies or immunoglobulins are a specific type of protein. A human being’s DNA instruction book is divided into 23 chromosomes or volumes (more about this later).

All DNA coding for the immunoglobulin heavy chain is found on chromosome 14: this chromosome has DNA encoding several types of immunoglobulin C (constant) regions, 27 types of D (diversity) regions, 6 types of J (joining) regions, and 65 (not 51 as Dr. Applegate claims) V (variable) regions. The finished immunoglobulin heavy chain will then consist of three or four C, one D, one J, and one V region. And, as the article stresses, which D, J, and V regions are used to make the protein is random. However, it is important to realize that the process whereby this is accomplished is extremely controlled and accurate—in other words, the mixing-up is deliberate. First the DNA between a randomly-selected D region and a randomly-selected J region is deleted, joining these two together. Then the process is repeated to join a V region to the D region, making a VDJ gene. The steps to put these DNA segments together are orchestrated by protein machines so that they are carried out precisely and sequentially. In addition, as the cell successfully completes each step, it puts signals on its surface to alert the nearby cells where it is in the process. Finally, the VDJ DNA and DNA from the C region (δ and μ, to be specific) are transcribed into one long RNA, the intervening sequences removed, and the RNA processed to maturity.

After translation of this mRNA into protein so that a particular type of immunoglobulin (IgM) can be found in the cytoplasm, the cell begins light chain construction—signaling the fact on its surface by display of at least two markers (IL-7 receptor and CD19). So, the cell is not just carefully constructing antibody in a time and place-controlled manner, it is also communicating where it is in the process to the surrounding milieu. (As the cells progress through the stages until they become fully functional B cells, immunologists give them different names, like pro B cells, pre B cells, and more—I will just call them all B cells to avoid confusion.)

Light chains consist of one of two C (constant) regions, a V (variable) region, and a J (joining) region. The DNA sequences for light chains are found on chromosomes 2 and 22. There are approximately 100 V variants, 4 to 6 J variants, and two types of C region (κ or λ). Again, construction of a light chain is highly regulated. First, the DNA for one of the V and one of the J regions is spliced together and the intervening DNA discarded. Next, this DNA and the DNA encoding one of the C regions are transcribed into one long RNA molecule. The cell edits out the intervening RNA and a mature mRNA molecule consisting of VJC is formed. This mRNA is translated into protein and the light chain is formed.
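To put rough numbers on the combinatorics just described, one can simply multiply the region counts given above. This is only an illustrative sketch: junctional additions, regions read in reverse, and the pairing of heavy with light chains multiply these figures much further, toward the 10^15 cited earlier.

```python
# Illustrative combinatorics of V(D)J recombination, using the region
# counts in the text: 65 V, 27 D, 6 J for heavy chains; ~100 V, ~5 J,
# 2 C for light chains. (Junctional diversity and other mechanisms
# push the real total far higher.)

heavy_chains = 65 * 27 * 6                # one V, one D, one J each
light_chains = 100 * 5 * 2                # one V, one J, one C each
antibodies = heavy_chains * light_chains  # every heavy/light pairing

print(heavy_chains)  # 10530
print(light_chains)  # 1000
print(antibodies)    # 10530000
```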

Note that the above paragraph says that a V region and a J region are spliced together, but this is not as simple as it sounds. For example, one might ask how the B cell “knows” where one “V” section ends and another “V” starts and what exact part of the DNA should be spliced to the “J” section. After all, the DNA code has only four letters (A, C, T, G) and one part of DNA must look very similar to another. This is regulated by special RSS sequences that can be found at the ends of each V and each J region. A group of protein machines (enzymes) called recombinases, the RAG proteins, recognize the RSS sequences at the ends of each region and cut the DNA in those places. The ends are deliberately modified by an enzyme, terminal deoxynucleotidyl transferase (TdT), which adds random “letters,” and the DNA is spliced back together by still another type of enzyme called a ligase. That way most heavy chains only have three or four C’s, one D, one J, and one V. Ingeniously, sometimes the regions are put in backwards and sometimes more than one region is utilized, further increasing the diversity of possible antibodies generated. In addition, some special regions of the DNA can and do undergo point changes—again to increase the diversity. However, note that this is only random in the sequence generated—it is very specific as to the particular places and the exact times of occurrence.

Next, the antibody-displaying B cells are tested in various ways, such as whether they recognize self. Those that are found unsuitable commit cellular suicide, another highly-regulated and precise process called apoptosis that has the goal of not spilling cellular “guts” and making a mess! The rest of the B cells are allowed to leave the bone marrow for the next round of maturation—also an exceedingly controlled and information-rich process. (Dr. Applegate’s claim that many cells die because they have a frameshift mutation is inaccurate.) Amazingly, this tremendously simplified explanation of the process of antibody formation only outlines what occurs before the B cells encounter antigen (a “foreign” substance)! My extrapolation from this information is that the process looks deliberately engineered for generating a diversity of antibodies rather than something that could be used as evidence for the mechanism of evolution because it displays the “power of randomness”.

This brings us to the question of whether “the generation of antibody diversity” is an example of how “God creates and sustains life”. Dr. Applegate asserts that because God uses this supposedly “blind system,” He uses the same process in evolution, so that mutations and natural selection give rise to the diversity of life. This is akin to saying that the careful assembly of the parts of a manuscript (putting a table of contents at the beginning, the introduction next, the body in the middle, and the references at the end, making sure that only one version of each is included, analogous to the B cell generation process) is the same as altering the manuscript by introducing typos. Having just released a book (http://www.freetothink.us), I can tell you that this is definitely not the case!

Diversity of antibodies generated by B cells is due to deliberate, cell-engineered changes in the DNA sequence, not random mutations. In fact, I have never before heard the process whereby functional antibodies are formed (before they encounter antigen) described as mutation. And it is well known that the appearance of functionality as a result of a mistake-mutation is extremely rare. Of course, after encountering antigen the hypervariable regions of the antibody DNA do undergo somatic hypermutation, but again this occurs in particular places and is controlled by enzymes. Using some of the examples Dr. Applegate cites, loss or gain of an entire somatic chromosome in a human results in miscarriage or, in the case of an extra copy of our smallest chromosome, 21, in Down syndrome. Duplication of the entire genome occurs mainly in plants and does not appear to confer a significant advantage. The literature reveals that insertions, duplications, point mutations, and inversions cause a myriad of genetic diseases. To assert that B cell production of novel antibodies is a picture of the way God creates and sustains life through random mutation and natural selection is speculation, and perhaps even invalid extrapolation from the science.

Now, let’s consider whether the generation of diversity in the germ cells (sperm and egg) is at all analogous to what occurs in the B cell. Meiosis, which is the process whereby germ cells are formed, is like a highly-regulated dance performed for the purpose of mixing up the genetic material (See second image at http://avonapbio.pbworks.com/Chapter-13). In order to understand this process, one must first know that every cell of a human has two copies of each of their 23 chromosomes: one copy comes from their mom and one from their dad, making a total of 46. Put simply, genes are subsections of chromosomes, like sentences are subsections of books. Each of the 23 chromosomes codes for completely different genes, but the copies from mom and dad vary only in the specific versions of those genes. Take a silly example: say chromosome 2 codes for nose shape; then the copy of chromosome 2 from mom (I will call it 2m) might code for a hooked nose and the one from dad (2d) for a snub nose. Chromosome 2 contains a gene that codes for nose shape, but the particular shape is dependent on the specific version of the gene.

A baby is a result of the fusion of a sperm and an egg, but of course the baby, being human, also has 46 chromosomes in each cell. Therefore, the germ cells that fuse during conception can only have 23 chromosomes each. Meiosis is the process whereby the 46 chromosomes in the progenitor cells are pared down to 23 in the germ cell. Whether each chromosome (1-23) is the version from mom (m) or the version from dad (d) is random, but the process has to be highly regulated because each germ cell must have only one copy of each chromosome—duplication or deletion of an entire somatic chromosome results in miscarriage. (One can get away with more or fewer than two sex chromosomes.) The variability in the genetic nature of the germ cells is further enhanced by a process that occurs during meiosis where two chromosome 1’s or two chromosome 2’s, etc., can swap tips. The resultant chromosome is an amalgam of the originals—but again note that 2 can only swap with 2 and 3 with 3, etc. Other swaps (e.g. between 2 and 5) cause genetic disease. Again, the process is highly regulated and there are many mechanisms in place to minimize the chances of occurrence of a mutation.
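The arithmetic of independent assortment alone is worth sketching: even before any tip-swapping, the random choice of the m or d copy for each of the 23 chromosomes yields millions of possible germ cells.

```python
# Independent assortment: each of the 23 chromosome pairs resolves,
# independently, to either the maternal (m) or paternal (d) copy,
# so there are 2^23 possible germ-cell combinations per parent
# (crossing over multiplies this enormously).

n_pairs = 23
combinations = 2 ** n_pairs
print(combinations)  # 8388608
```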

So, how is this all accomplished? Well, the process is nothing like the way B cell diversity is generated. First, all the chromosomes in the progenitor cell are copied; now the cell has four copies of each chromosome, making 92 total (the chromosomes are now given a different name, but we will skip that for the sake of clarity). In our example above, the cell now contains two 1m and two 1d, two 2m and two 2d, etc., up to two 23m and two 23d. 1m and 1d are referred to as homologues and 1m and 1m are called sisters. Next, in a step that takes most of the time given to meiosis, the sisters and homologues are organized into groups of four: all the 1’s are stuck together, all the 2’s, all the 3’s, etc. This is when the tips of homologues can cross over and be swapped, resulting in chromosomes with, perhaps, a body that is 2m and tips that are 2d. The scientific name for this process is synapsis and chiasmata formation, and numerous papers have been published describing the complex machinery needed.

The tetrads are then attached to cellular filaments or microtubules, lined up along the middle of the cell, and pulled apart so that the homologues progress to different poles of the cell and the sisters to the same pole. However, which homologue goes to which side is random. Next, the cell divides, so each new cell has 46 chromosomes’ worth of DNA but only 23 chromosomes’ worth of information (remember that the sisters, which are identical except for maybe their swapped tips, go to the same pole). But the two daughter cells have different versions of each chromosome. These cells then divide again, going through a similar dance, and the germ cells, each of which has 23 chromosomes, are formed. It is possible that mutations occur during this process and these are passed on to the offspring, but note that there is NO deliberate excision and splicing of genetic material, no production of antibodies, in fact virtually no similarity between this process and that of B cell antibody production. In fact, the only similarity between these processes is that both are marvels of precise engineering and nanotechnology.

Now, it may be that the mechanism of evolution is random mutation followed by natural selection; this is a theory accepted by many scientists. But, it is vital to remember that scientific facts are not decided by popularity, but by data and interpretation. So, in order to evaluate the theory, let’s stick with the science, not with speculation and insufficiently substantiated conclusions based on an inadequate evaluation of the processes being discussed.


12 Responses to Comments on Kathryn Applegate’s May Posts on BioLogos

  1. It is great to see you, Dr. Crocker! Great post.

    With respect to randomness, the pattern of a shotgun blast has some statistical randomness associated with it.

    If someone were to go fire a shotgun blast into the window of a neighbor’s car, he would not have a good case in court that the process was random and undirected, therefore the shooter is innocent!

    In fact, the stochastic pellet pattern of the shotgun is arguably a design feature.

    Likewise, merely because there may be a stochastic component in a process hardly qualifies it as blind!

    Royal Truman posted on this issue, and it is nice to see it revisited with some new twists. See:
    Unsuitability of B Cell Maturation as an analogy to Darwinism

  2. Great to have you posting here at UD, Dr. Crocker. Very informative.

    I particularly liked your conclusion after the first section:

    My extrapolation from this information is that the process looks deliberately engineered for generating a diversity of antibodies rather than something that could be used as evidence for the mechanism of evolution because it displays the “power of randomness”.

    I imagine that if you were playing some handheld electronic poker game, that the individual games would somehow be ‘randomly’ generated. Should we then conclude that the electronic device is the result of stochastic processes?

    The Darwinist’s way out of this logic is, of course, to insist that there is a very big difference between life and electronic machines in that the machines cannot self-replicate. And so it is. However, then it becomes a game of probabilities.

    For example, if a person were placed before a firing squad and given one hour to place 50 dice, with the face up of each of the fifty dice matching the sequence of fifty dice that the Sergeant at Arms lined up at random, I’d say that person was out of luck. Why? Because there are 6^50 different possibilities, and there is simply not enough time (one hour) to even come remotely close, via random sampling, to the sequence of the Sergeant at Arms.

    Michael Behe wonderfully points this out in his “Edge of Evolution.”

    It is a wonder that scientists can’t see that ‘randomness’ is a non-starter. Invoking NS is no help, since NS is no more than the firing squad. If the placing of dice is repeated by an infinite number of persons, one after the other, all would fail; and all would die. Assuming a person could assemble a sequence of 50 dice in about a minute’s time, roughly speaking, it would take 10^38 individuals to succeed in an hour’s time. Here, the 10^38 would correspond to a large ‘population size’, whereas the random sequences are equivalent to mutation rates. Bottom line, the “probabilistic resources” simply aren’t there. So, who are we kidding?
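    The dice arithmetic above can be checked directly. A small sketch (the exact head count depends on the assumed attempt rate, here one full attempt per minute):

```python
import math

# Search space for the 50-dice sequence: 6^50 equally likely outcomes.
# At one attempt per minute, each person gets 60 attempts in an hour.
sequences = 6 ** 50
attempts_per_person = 60
people_needed = sequences // attempts_per_person

print(f"search space ~ 10^{math.log10(sequences):.1f}")      # ~ 10^38.9
print(f"people needed ~ 10^{math.log10(people_needed):.1f}")
```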

    Finally, if someone wants to assert that there are perhaps some ‘laws of nature’ that find, let us say, ‘shortcuts’ to the right combinations/permutations, then let’s just admit that this is no more than invoking an ‘invisible cause’. How is this any different from invoking an “invisible” Intelligent Designer? And the invocation of unseen laws is a worse assumption since–as in the case of a handheld electronic poker game–assuming design is the more logical inference.

    It all makes you wonder why scientists find it so hard to see the “writing on the wall”.

  3. The relevant question is whether variation in either process is “random” with respect to fitness.

  4. Dr. Applegate was also highly selective in the analogies she used to portray evolution. For example, as cells age there is increasing somatic cell mutation and malformation. Why did she not choose this as an example of REAL evolution?

    To focus on the B cells in the way she did and ignore the much more extensive examples of undesirable mutations in the aging somatic cells seems like cherry picking.

    One could also argue that the fact somatic cells slowly age and decay and otherwise mutate in harmful ways inside an organism is an example that natural selection does NOT really work so well!

  5. Thanks for a very informative post, Dr. Crocker, as to how the immune system does not reflect ‘real’ life. I especially liked this part:

    ‘It is possible that mutations occur during this process and these are passed on to the offspring, but note that there is NO deliberate excision and splicing of genetic material, no production of antibodies, in fact virtually no similarity between this process (Meiosis) and that of B cell antibody production. In fact, the only similarity between these processes is that both are marvels of precise engineering and nanotechnology.’

    This article will definitely be referenced for future use.

    I’ve always found it interesting that evolutionists (Darwinian or Theistic) will always try to defend the non-existent integrity of evolution by referencing clearly designed processes like the immune system or computer algorithms: processes that are clearly designed to solve the very limited scope of ‘hill climbing’ problems. But I guess it is not that surprising when we realize this is really all they have to work with to put a good face on a hopeless situation. But even in this very limited scope of bottom-up ‘hill climbing’, the gain in information is a far cry from the multi-tiered levels of top-down information they must explain:

    further note:

    Here is a clear example, for Theistic Evolutionists, that reflects exactly why it is easier for the ‘Designer’ to design systems top down than to do so incrementally from the bottom up:

    Poly-Functional Complexity equals Poly-Constrained Complexity
    Excerpt: The primary problem that poly-functional complexity presents for neo-Darwinism or even theistic evolution is this:

    To put it plainly, the finding of a severely poly-functional/poly-constrained genome by the ENCODE study has pushed the odds of what was already astronomically impossible to what can only be termed fantastically astronomically impossible. To illustrate the monumental brick wall any evolutionary scenario (no matter what “fitness landscape”) must face when I say genomes are poly-constrained to random mutations by poly-functionality, I will use a puzzle:

    If we were to actually get a proper “beneficial mutation” in a polyfunctional genome of, say, 500 interdependent genes, then instead of the infamous “Methinks it is like a weasel” single element of functional information that Darwinists pretend they are facing in any evolutionary search (with their falsified genetic reductionism scenario, I might add), we would actually be encountering something more akin to this illustration found on page 141 of Genetic Entropy by Dr. Sanford.

    S A T O R
    A R E P O
    T E N E T
    O P E R A
    R O T A S

    Which translates to:
    THE SOWER NAMED AREPO HOLDS THE WORKING OF THE WHEELS.

    This ancient puzzle, which dates back to 79 AD, reads the same four different ways. Thus, if we change (mutate) any letter, we may get a new meaning for a single reading read any one way, as in Dawkins’ weasel program, but we will consistently destroy the other 3 readings of the message with the new mutation.

    This is what is meant when it is said a poly-functional genome is poly-constrained to any random mutations.
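    The poly-constraint point can be illustrated mechanically. As a small sketch (hypothetical code, not from the comment), the following verifies that the square reads the same four ways and that one off-diagonal letter change breaks the other readings at once:

```python
# The SATOR square: the matrix of letters is symmetric (columns equal
# rows) and the concatenated text is a palindrome, so it reads the same
# left-to-right, top-to-bottom, and in both reversed directions.
rows = ["SATOR", "AREPO", "TENET", "OPERA", "ROTAS"]
cols = ["".join(r[i] for r in rows) for i in range(5)]
text = "".join(rows)

assert cols == rows        # column reading matches row reading
assert text == text[::-1]  # reversed reading matches forward reading

# A single off-diagonal point "mutation" breaks the other readings.
mutated = [list(r) for r in rows]
mutated[0][1] = "X"        # SATOR -> SXTOR
mrows = ["".join(r) for r in mutated]
mcols = ["".join(r[i] for r in mutated) for i in range(5)]
mtext = "".join(mrows)

assert mcols != mrows and mtext != mtext[::-1]
print("one mutation broke the other readings")
```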

    The puzzle I listed is only poly-functional to 4 elements/25 letters of interdependent complexity; the minimal genome is poly-constrained to approximately 500 elements (genes) at a minimum approximation of polyfunctionality. For Darwinists to continue to believe in random mutations to generate the staggering level of complexity we find in life, in a bottom-up manner, is absurd in the highest order! As for Theistic evolutionists, all I ask is: do you think that it would be easier for God to incrementally change the genome of an organism, maintaining functionality all the time, in a bottom-up manner, or do you think it would be easier for Him to design each kind of organism in a top-down manner?

    Notes:

    Simplest Microbes More Complex than Thought – Dec. 2009
    Excerpt: PhysOrg reported that in a species of Mycoplasma, “The bacteria appeared to be assembled in a far more complex way than had been thought. Many molecules were found to have multiple functions: for instance, some enzymes could catalyze unrelated reactions, and some proteins were involved in multiple protein complexes.”
    http://www.creationsafaris.com.....#20091229a

    First-Ever Blueprint of ‘Minimal Cell’ Is More Complex Than Expected – Nov. 2009
    Excerpt: A network of research groups approached the bacterium at three different levels. One team of scientists described M. pneumoniae’s transcriptome, identifying all the RNA molecules, or transcripts, produced from its DNA, under various environmental conditions. Another defined all the metabolic reactions that occurred in it, collectively known as its metabolome, under the same conditions. A third team identified every multi-protein complex the bacterium produced, thus characterising its proteome organisation.
    “At all three levels, we found M. pneumoniae was more complex than we expected,”
    http://www.sciencedaily.com/re.....173027.htm

    Scientists Map All Mammalian Gene Interactions – August 2010
    Excerpt: Mammals, including humans, have roughly 20,000 different genes. They found a network of more than 7 million interactions encompassing essentially every one of the genes in the mammalian genome.
    http://www.sciencedaily.com/re.....142044.htm

    http://docs.google.com/Doc?doc.....Zmd2emZncQ

    Mutations are the ‘bottom rung of the ladder’ as far as the ‘higher levels of the layered information’ of the cell are concerned:

    Stephen Meyer on Craig Venter, Complexity Of The Cell & Layered Information
    http://www.metacafe.com/watch/4798685

  6. one more related note:

    Three Subsets of Sequence Complexity and Their Relevance to Biopolymeric Information – David L. Abel and Jack T. Trevors – Theoretical Biology & Medical Modelling, Vol. 2, 11 August 2005, page 8
    “No man-made program comes close to the technical brilliance of even Mycoplasmal genetic algorithms. Mycoplasmas are the simplest known organism with the smallest known genome, to date. How was its genome and other living organisms’ genomes programmed?”
    http://www.biomedcentral.com/c.....2-2-29.pdf

  7. If a process like this were elucidated apart from the detail that it occurred within a cell, no one would naturally consider the idea that it could occur by chance.

    If a programming solution like this were presented in a paper, slightly disguised as the artifice of human talent, prestigious research organizations and software engineering corporations would beat a path to the author’s door in hopes of securing that talent exclusively.

    The only good news I can see in the current state of denial in the mainstream of science over even the tentative consideration of such observations is this: the possessor of this talent is not for sale.

  8. Dr. Crocker:

    Great post.

    I have argued many times here that the two procedures by which immunological diversity and specificity are attained (the building of the primary antibody repertoire before exposure to antigens, and antibody maturation after the primary response) are indeed wonderful examples of highly complex designed procedures utilizing a targeted random search exactly where it can really be useful. They are a model of biological design, and have nothing to do with germ cell maturation or with the random recombination of alleles at meiosis.

    It is interesting that, IMO, the two procedures use the random component in two completely different ways, and for two completely different reasons.

    The building of the primary repertoire is targeted at realizing a low-affinity coverage of the vast (but not huge) epitope space. Epitopes are relatively simple, and that makes their combinatorics accessible to a well-designed random engineering of variation in the primary antibody repertoire, even if at the expense of specificity. So, it is as though the designer has created a procedure which guarantees, given limited genetic information in the form of a limited number of genes, the optimal diversity and coverage of possible epitopes for the primary response, a sort of minimal first line of defense for all, or at least most, possible occurrences. That is completely reasonable, and efficient, because obviously nobody knows in advance what antigens will be met by a specific living being, and covering the whole epitope space with high-affinity primary antibodies would be impossible. The primary response instead requires, and definitely achieves, sensitivity.

    That is where antibody maturation comes in. After a specific epitope has been encountered, that is the moment to develop specificity. And that is done, again, partly through a random search.

    But here things are completely different. Here the system knows, in a sense, what the searched-for result is (in the form of the antigen, processed and stored in the antigen presenting cells), but does not know exactly how to achieve that result (exactly which modifications of the existing low-affinity antibody will increase affinity). So, a really amazing designed random search begins. Again, targeted random variation is achieved in a very controlled context, but this time results are strictly evaluated in terms of function (the degree of affinity to the antigen), and only advantageous modifications are selected and promoted. In other words, what we have here is an amazing association of targeted random variation and intelligent functional selection: biological engineering at its best.

    How can anyone think that such a beautifully ordered and engineered process can be other than strictly designed?

    An important point is that in both procedures a random search is used to achieve something which, although amazing, is in the range of what a random search can realistically achieve: the thorough exploration of a big, but not huge, search space. The designer obviously knows combinatorics, and the intrinsic limits of a physical random search, unlike darwinists :)

    It is interesting to observe that there is at least one other model where a completely different choice has been made by the designer in engineering a complex basic repertoire of proteins with great diversity: I am referring to the system of olfactory receptors, where surprisingly about 1000 different genes are used to implement a very complex system of biochemical interaction with outer reality.

    I just mention it here because it is an interesting example of how different solutions and strategies have been employed by the designer to solve apparently similar problems, which probably imply different difficulties and different constraints which at present we can only vaguely analyze.

    That’s what I mean when I say that an open minded analysis of the design strategies we can objectively observe and understand is the best way to get information about the designer and the design process.

  9. gpuccio,

    I just mention it here because it is an interesting example of how different solutions and strategies have been employed by the designer to solve apparently similar problems, which probably imply different difficulties and different constraints which at present we can only vaguely analyze.

    If god is the designer he ain’t got no problems to solve; he already knows the answer, he already knows what he did, and just do what he’s got to do.

    (A God that is all-knowing (especially one supposedly outside of time) can’t help but know his own future actions. God can do no more than gyre and gimble in the wabe, and he has no freedom to do otherwise. (Steve Zara))

  10. Cabal:

    Old and dogmatic argument.

    First, I never assume that God is the designer. I have always been loyal to this premise. But I can consider the possibility, from a scientific point of view.

    Second, it has always been my position here, documented in many posts, that even if God is the designer, evidence points to a designer who works in a context, and is limited by that context. I have always believed that this is not at all incompatible with the hypothesis that the designer is God.

    But if your specific faith makes that incompatible for you, you are entitled to that.

  11. Second, it has always been my position here, documented in many posts, that even if God is the designer, evidence points to a designer who works in a context, and is limited by that context.

    Are you departing from the usual ID disclaimers against assigning attributes to the designer?

  12. Petrushka:

    Absolutely. But I don’t think that such a point has ever been a “usual ID disclaimer”.

    The correct point is that knowing details about the designer is not necessary for the design inference. On that I agree. I absolutely agree.

    But I have never thought, or said, that, after having inferred design, it is not possible to reason scientifically about the designer.

    Quite the opposite. And you should well know.
