
Who are the (multiple) designers? James Shapiro offers some compelling answers

Is there only one Designer of life, or are there multiple designers? Here is James Shapiro’s take: Bacteria are small but not stupid:
Cognition, natural genetic engineering, and sociobacteriology

Bacteria as natural genetic engineers….

This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings.

In the case of engineered products, we might often think of designers (plural) versus a designer (singular). It may be that some Ultimate Intelligence created the universe and (by way of extension) engineers. But even for those of us who accept that there is an Ultimate Intelligence, it is not customary to say that God made automobiles and airplanes and genetically engineered food.

Can we find proximal sources of intelligent design of life without appealing directly to the Ultimate Intelligence? Even though I personally believe God was the Ultimate Creator of the universe and hence even the creator of the Wright Brothers, I generally still identify airplanes as the proximal intelligent design of the Wright Brothers. A similar issue may arise in identifying the Designer or designers of life on Earth.

Whether out of sincerity or parody, Richard Hoppe of Panda’s Thumb suggests his own ID theory: Multiple Designers Theory (MDT). MDT is certainly true of a grand undertaking such as the design of a spaceship. But what about life on Earth? Some have proposed an alien civilization as the source of life on Earth (Crick, Orgel, Hoyle, Klyce, and others). Now Shapiro enters the fray. Below is the full abstract. I encourage reading the entire paper and visiting James Shapiro’s website here.

ABSTRACT: 40 years experience as a bacterial geneticist have taught me that bacteria possess many cognitive, computational and evolutionary capabilities unimaginable in the first six decades of the 20th Century. Analysis of cellular processes such as metabolism, regulation of protein synthesis, and DNA repair established that bacteria continually monitor their external and internal environments and compute functional outputs based on information provided by their sensory apparatus. Studies of genetic recombination, lysogeny, antibiotic resistance and my own work on transposable elements revealed multiple widespread bacterial systems for mobilizing and engineering DNA molecules. Examination of colony development and organization led me to appreciate how extensive multicellular collaboration is among the majority of bacterial species. Contemporary research in many laboratories on cell-cell signaling, symbiosis and pathogenesis show that bacteria utilize sophisticated mechanisms for intercellular communication and even have the ability to commandeer the basic cell biology of “higher” plants and animals to meet their own needs. This remarkable series of observations requires us to revise basic ideas about biological information processing and recognize that even the smallest cells are sentient beings.

Whether bacteria have conscious minds is a curious issue, but I have often said that, at the very least, bacteria evidence weak AI. Is AI (Artificial Intelligence) still intelligence? Yes, in a manner of speaking. In the book No Free Lunch by Bill Dembski, there is a statement that could be considered inclusive of weak AI, provided that weak AI functions as a surrogate of real intelligence (RI):

the designer or some surrogate

I personally subscribe to a quasi multiple-designer hypothesis, with other sources of intelligence (both AI and RI) playing a minor role to the Ultimate Designer (but I emphasize that is a personal view). Whether bacteria are conscious beings is something we may not ever know, but I think the idea that bacteria have weak AI is very defensible, and hence in a sense bacteria are among the designers of life today.

Finally, humans are partial designers of life today as well (via genetic engineering). This is so undeniable that even Dawkins was forced to admit it at the end of his Salon interview with Gordy Slack, “The Atheist”:

I think it well may be that we’re living in a time when evolution is suddenly starting to become intelligently designed.


19 Responses to Who are the (multiple) designers? James Shapiro offers some compelling answers

  1. “Finally, humans are partial designers of life today as well (via genetic engineering).”

    I’ll do you one better: humanity has been ‘designing’ its own life since it attained what can reasonably be called sentience: by purposefully selecting and raising offspring, by helping to choose our offspring’s mates (arranged marriage, etc.), by breeding animals (domesticated animals and different breeds of dogs were around well in advance of Darwin), and so on.

    I’d be tempted to believe in an ultimate designer as well, in a personal view. Perhaps it is the case that the original and ultimate designer started a purposeful process that was amazingly aggressive and diverse, intending to result in beings in his image – ones that don’t just follow preprogrammed patterns, but designed on their own.

    Which would make for an interesting philosophical fight. Either everything is designed (Possible ID proposal), or nothing is (Dennett and Dawkins’ essential proposal, reasonably interpreted.)

  2. Wow.

  3. I think there may be basic trouble with more than one ultimate designer.

    Before the dawn of time, “it” was declared that more than one ultimate designer would invite competition.

    Competition would invite possible nasty consequences, so there would be none permitted between the postulated possibility of more than one, and the declared necessity of only one. There could only be one ultimate designer, and that designer would produce designs greater than which no other designs could be conceived.

    From that timeless moment, attempts by mere designs to make ultimate designers have failed, do fail, and will fail. At each attempt the “ancient” declaration repeats itself as it did in the beginning, as it does in the present, and as it will in the future — unto the ages of ages, some have said.

    And yet, designs reflect their designer. If they acknowledge the ultimacy of their designer they are offered intellectual freedom. If not, they design another intellectual straitjacket.

    Shapiro says…

    “The take-home lesson of more than half a century of molecular microbiology is to recognize that bacterial information processing is far more powerful than human technology.”

    One small step for each, one giant leap for all!

  4. I’ve wondered recently if intelligence is structurally like a hologram, which is almost the opposite of intelligence being an emergent property.

    In other words, instead of intelligence emerging from a certain amount of accumulation of particularly structured non-intelligent matter, matter has some intelligence all the way down to its most fundamental level.

    Some theoretical physicists seem to be confirming this now, even on one of the blogs aligned with PZ Myers (though they’d dismiss my admittedly pedestrian interpretation of the work with a collective hand wave).

  5. Now this is ID that I can get behind! I have always seen the universe as a product of design. I have also always been a naturalist in the sense that appeals to magic have always been deeply dissatisfying. Evolutionary Theory has always seemed to be the best, if woefully incomplete, notion of evolution. I could not justify replacing ET’s incompleteness with appeals to magic. I still cannot get behind IC. Dr. Dembski’s work is more interesting, but using so many arbitrary variables, albeit necessarily, makes for an unconvincing case in my book.

    This is the kind of discovery that could create a significant paradigm shift in biology. Purely materialistic views of nature have been dead or dying for a long time, and yes, biology is behind the curve in that regard. This, I think, is somewhat understandable, though, as biologists have not yet had concrete reasons to break with reductionism: reductionism has not yet run its full course in biology and hit the wall it has hit in other disciplines. I think we are very near that shift now, though.

    I think many people are perfectly open to existing mechanisms of evolution being revised and entirely new mechanisms being found through future discoveries. I think the ID movement comes up against entrenched resistance from ordinary people (non-rabid atheists) when it is perceived that ID is attempting to challenge the basic narrative of evolution.

  6. Once again, I would call people’s attention to the work of Rupert Sheldrake, whose holistic, field-centered views of life seem to be confirmed by more evidence every day…

  7. I presume that science is happy to open up consideration that microorganisms actually have some “intelligence” and that they actively twiddle with their own DNA. This is an interesting concept. However, why is it that science can consider such an angle, but must soundly reject the idea that a designer(s) was involved prior to the microorganism developing this ability?

  8. Shapiro is thinking outside the box. I like it.

    In general, anytime you’ve got millions, billions, or trillions of dynamic elements together and able to interact, even when they are simple, computational systems can be formed. Your computers are basically just a very large collection of on/off switches (over a trillion these days if you count each bit of dynamic storage on a disk drive) that operate in cascaded patterns like dominoes falling (except these dominoes can right themselves as well as fall).
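    The “cascaded switches” picture can be made concrete with a toy sketch (illustrative code, not from the comment): a single simple switch-like operation, NAND, composed into a one-bit half adder.

```python
# Toy illustration: one simple "switch" operation (NAND) cascaded into
# a half adder -- simple elements composing into a computational system.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry) for one-bit inputs, built only from NAND gates."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    carry = nand(n1, n1)                # AND built from two NANDs
    return s, carry

# Exhaustive check over all one-bit inputs.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert a + b == s + 2 * c
```

    Every Boolean function can be built this way, which is the sense in which very large collections of simple interacting elements form computational systems.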

    An individual bacterium is far more complex than the individual logic gates that make up computers, and when billions and trillions of them get together in a colony and communicate by chemical and mechanical means, the possibilities get pretty interesting. I applaud Shapiro for getting outside the box to think about those possibilities.

  9. H’mm:

    Following up the JS link, I ran across this excerpt:

    . . . My own view is that we are witnessing a major paradigm shift in the life sciences in the sense that Kuhn (1962) described that process. Matter, the focus of classical molecular biology, is giving way to information as the essential feature used to understand how living systems work. Informatics rather than mechanics is now the key to explaining cell biology and cell activities. Bacteria are full participants in this paradigm shift, and the recognition of sophisticated information processing capacities in prokaryotic cells represents another step away from the anthropocentric view of the universe that dominated pre-scientific thinking . . .

    A few points come to mind, noting that JS is observing, based on decades of investigation, a sophisticated algorithmic information system in action in bacteria, and more specifically in colonies of bacteria. (Of course, as CS noted, we should distinguish his scientific observations from his more speculative ideas on, say, the innate intelligence of bacteria.)

    On points:

    1] The 300,000–500,000 baseline for DNA base-pair counts is three orders of magnitude beyond Dembski’s 500 bits, which already give configuration spaces of scale ~ 10^150.

    2] Algorithms in action obviously reflect functionally specified, contingent and complex information.

    3] The odds of such information coming to be by chance + necessity only within the gamut of the observed cosmos is negligibly different from zero.

    4] In further point of fact, on observing a similarly functional and complex — but far less sophisticated — information entity such as this post, we routinely infer to agency as its cause as we experientially know the cause of such CSI. We do not treat such as “lucky noise.” [We treat it as meaningful . . .]
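    The arithmetic in point 1 can be checked directly. The sketch below is illustrative only (not from the comment) and assumes 2 bits of information-carrying capacity per DNA base pair (4 possible bases):

```python
import math

# 500 bits of capacity span a configuration space of 2^500 states.
print(f"2^500 ~ 10^{500 * math.log10(2):.1f}")  # ~10^150.5

# A 300,000-500,000 base-pair genome, at 2 bits per base pair, carries
# roughly three orders of magnitude more capacity than the 500-bit bound.
for base_pairs in (300_000, 500_000):
    bits = 2 * base_pairs
    print(f"{base_pairs:,} bp -> {bits:,} bits, "
          f"{math.log10(bits / 500):.1f} orders of magnitude beyond 500 bits")
```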

    So, it seems to me that a major explanatory gap is opening up in the evolutionary materialist paradigm.

    (Resort to an unobserved, speculative quasi-infinite cosmos as a whole to swamp the odds is of course a move over into metaphysics, and should not be allowed to prevail without first explicitly addressing other alternative worldview scenarios. E.g. it is misleading to keep the label “science” on such materialistic speculations.)

    So, a major rethink is indeed in order.

    Cheerio

    GEM of TKI

  10. Re 9 (point 4)

    “…routinely infer to agency as its cause as we experientially know the cause of such CSI. We do not treat such as “lucky noise.” [We treat it as meaningful . . .]”

    Maybe things aren’t always that simple.

    For example, in the process of mathematical optimization, progress can be achieved by inferring to agency, which then defers to “lucky noise” methods like simulated annealing or genetic algorithms or neural networks, etc.

    “Tunneling” through hyperspace with “lucky noise” can arrive at a “best solution” far faster than deterministic methods could achieve in ordinary time.
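    As an illustration of such “lucky noise” methods, here is a minimal simulated-annealing sketch (illustrative only, all names hypothetical): random uphill moves, accepted with a temperature-dependent probability, let the search escape local minima where purely deterministic descent would stall.

```python
import math
import random

def bumpy(x: float) -> float:
    # A 1-D objective with many local minima; global minimum at x = 0.
    return x * x + 10 * (1 - math.cos(2 * math.pi * x))

def anneal(x0: float, steps: int = 20_000, t0: float = 10.0) -> float:
    """Minimize bumpy() by simulated annealing from starting point x0."""
    random.seed(1)
    x, t = x0, t0
    for k in range(steps):
        cand = x + random.gauss(0, 0.5)
        delta = bumpy(cand) - bumpy(x)
        # Accept uphill moves with probability exp(-delta/t): the "noise"
        # that lets the search tunnel out of local minima.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
        t = t0 * (1 - k / steps) + 1e-3  # cool slowly toward greedy search
    return x

best = anneal(x0=4.3)
```

    With the noise term removed (accepting only downhill moves), the same search typically stalls in whatever local minimum is nearest the start.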

    Finally, agency determines a solution is best, but has relaxed itself to get that far.

    Which all seems to infer a design greater than which can be defined by precluding chance and necessity. I need some comments on this!

  11. Dawkins:

    I think it well may be that we’re living in a time when evolution is suddenly starting to become intelligently designed.

    Now if we could just get him to see that it may have happened before humans existed. Ah, well.

  12. Hi Eebrom

    RE: in the process of mathematical optimization, progress can be achieved by inferring to agency, which then defers to “lucky noise” methods like simulated annealing or genetic algorithms or neural networks, etc

    Perhaps, I was too brief in the point above. (Though I do speak to this in the always linked . . .)

    I am adverting to the root causal factors, chance, natural regularities, agency. As just linked, I observe of a case in point:

    . . . heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

    That is, the issue I raised is not whether or not agents can use chance and natural regularities, to achieve their ends. [Indeed, given the issue of tolerance in engineering, most notably in electronics, we have got to routinely work with chance as well as the forces, materials and phenomena of nature!]

    Rather, the issue is whether certain contingent [so, not "necessity" alone], functionally specified, complex and often fine-tuned empirical situations are credibly achievable by chance + necessity alone.

    This is of course the core thesis of the evolutionary materialists.

    In that context, I have raised the point that we routinely recognise agency in contexts of FSCI in communicative situations, where the lucky noise thesis faces similar odds relative to say the Dembski-type bound. In that context, we routinely infer to design/agency, not “lucky noise” to explain complex, functionally specific digital strings etc.

    Thus, there is a selective hyperskepticism issue that obtains in cases where we then wish to reject such inferences as “unscientific” when we encounter them in contexts that challenge the evo mat thesis and worldview. [DNA and the associated molecular nanotechnology and evident algorithms of life are a capital case in point.]

    I am of course quite aware of quantum mechanical tunnelling, whereby what would be classically an insurmountable barrier is in fact accessible to escape at quantum scale, on a probabilistic basis. Oddly, in total internal reflection, there is a tunnelling effect of light, so that if one brings up another matching optical surface close enough, one can subvert the TIR and get partial or, in a direct close-up, essentially complete transmission. (Think about why, if we imagine an appropriately slanted internal surface within a block of optical glass, there is no TIR there, but if we actually break apart at the line in question, suddenly it appears. Then imagine slowly bringing back together the matching surfaces just broken apart. This is of course of some relevance to, say, creating long fibre optic transmission lines.)

    Your “final” point seems to be on the issue of satisficing vs optimisation. Quite often a “good enough to go” solution is effective, and the effort to optimise relative to a target function and associated cost/constraints is not worth it. (There is also the case where unless one knows the target function, one cannot properly assess optimality . . .) Further to this, attempting to optimise every sub-factor often frustrates achieving the overall goal, the problem of sub-optimisation; that is, there are such things as trade-offs, and a compromise solution that is good enough will often out-perform attempted micro-managed optimisation across all aspects of a solution.

    Further to this, in some situations one factor dominates the design: in furniture, for example, among rigidity, comfort and aesthetics, being strong enough is often what is needed, so a solution that is merely good enough on the other factors may in fact be targeted at meeting the overriding issue.

    I think we can see a lot of parallels to this in the biological world, e.g. our bony structures and bodily architecture are “good enough to go” rather than perfect relative to any and all conceivable constraints and ideals.

    That is why I think the debate over, say, the panda’s “thumb” is so utterly artificial and irrelevant.

    Jesus’ phrase for that sort of reasoning was: straining out a gnat while swallowing a camel. The camel here, being the obvious evidence of massive FSCI and the implication that chance + necessity alone could not credibly go there in the gamut of the observed cosmos. The gnat of course is the dysteleological claim that cases of claimed imperfection prove lack of design.

    The architecture of the eye is another capital case in point, where there are different factors at work and the overall design is obviously effective and complex beyond the reasonable reach of chance + necessity acting alone in the same gamut, but because one can find a detail that may be puzzling, that becomes the focus for saying that since the blood vessels and/or nerve connexions of the retina or whatever are puzzling on a thesis of design, then the eye cannot be designed, as that is “stupid.” But in fact obviously such factors do not prevent us from seeing well enough, and may have surprising and quite reasonable explanations. [Cf also Behe's remarks here and the linked remarks in Denton here.]

    I have seen a similar argument on the claim that since the start codons for DNA are multifunctional, then the algorithm-based, computer language-using nanotechnology of the cell cannot be designed. (Turns out that in some cases stop codons are also multifunctional, and that there is a disambiguation algorithm/ procedure, which is partly not understood. Good enough to work astonishingly well, and complex beyond the credible reach of chance + necessity, again.)

    In short, one uses a detail issue to dismiss an obvious and massively evident case. But, that works by creating a convenient distraction rather than by addressing the issue in the main on the merits.

    Can we name this the gnat and camel fallacy, or something like that?

    Okay, trust this helps

    GEM of TKI

  13. Hi Eebrom

    RE: in the process of mathematical optimization, progress can be achieved by inferring to agency, which then defers to “lucky noise” methods like simulated annealing or genetic algorithms or neural networks, etc

    Perhaps, I was too brief in the point above. (Though I do speak to this in the always linked . . .)

    I am adverting to the root causal factors, chance, natural regularities, agency. As just linked, I observe of a case in point:

    . . . heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes!

    That is, the issue I raised is not whether or not agents can use chance and natural regularities, to achieve their ends. [Indeed, given the issue of tolerance in engineering, most notably in electronics, we have got to routinely work with chance as well as the forces, materials and phenomena of nature!]

    Rather, the issue is whether certain contingent [so, not "necessity" alone], functionally specified, complex and often fine-tuned empirical situations are credibly achievable by chance + necessity alone.

    This is of course the core thesis of the evolutionary materialists.

    In that context, I have raised the point that we routinely recognise agency in contexts of FSCI in communicative situations, where the lucky noise thesis faces similar odds relative to say the Dembski-type bound. In that context, we routinely infer to design/agency, not “lucky noise” to explain complex, functionally specific digital strings etc.

    Thus, there is a selective hyperskepticism issue that obtains in cases where we then wish to reject such inferences as “unscientific” when we encounter them in contexts that challenge the evo mat thesis and worldview. [DNA and the associated molecular nanotechnology and evident algorithms of life are a capital case in point.]

    I am of course quite aware of quantum mechanical tunnelling, whereby what would be classically an insurmountable barrier is in fact accessible to escape at quantum scale, on a probabilistic basis. Oddly, in total internal reflection, there is a tunnelling effect of light, so that if one brings up another matching optical surface close enough, one can subvert the TIR and get partial or, in a direct close-up, essentially complete transmission. (Think about why, if we imagine an appropriately slanted internal surface within a block of optical glass, there is no TIR there, but if we actually break apart at the line in question, suddenly it appears. Then imagine slowly bringing back together the matching surfaces just broken apart. This is of course of some relevance to, say, creating long fibre optic transmission lines.)

    Okay, pause . . .

    GEM of TKI

  14. Eebrom:

    Continuing . . .

    Your “final” point seems to be on the issue of satisficing vs optimisation: agency determines a solution is best, but has relaxed itself to get that far. Which all seems to infer a design greater than which can be defined by precluding chance and necessity.

    Quite often a “good enough to go” solution is effective, and the effort to optimise relative to a target function and associated cost/constraints is not worth it. (There is also the case where unless one knows the target function, one cannot properly assess optimality . . .) Further to this, attempting to optimise every sub-factor often frustrates achieving the overall goal, the problem of sub optimisation; that is, there are such things as trade-offs, and a compromise solution that is good enough will often out-perform attempted micro-managed optimisation across all aspects of a solution.

    In that context, as already commented on, we see that agents can use both natural regularities [forced materials, phenomena of nature] and chance [e.g. the tolerance issue, also the use of targeted random-walk searches such as Genetic Algorithms] as part of their problem-solving or design or functional process. (This is only relevant to design in the context that chance plus natural regularities cannot credibly reach to the solution on the relevant probabilistic resources available. The Dembski bound shows that this level can come up surprisingly quickly: 500 bits or the equivalent in information-carrying capacity makes a configuration space ~ 10^150. DNA and protein spaces are of course far beyond this bound. Thus, the issue is that “lucky noise” cannot credibly account for the functionally specified, complex information in certain important empirical cases, as discussed in my always linked. The detection of these cases through CSI-linked filters is then a relevant issue, and Dembski’s solution is to extend the work of Fisher on elimination of sufficiently improbable chance-based null hypotheses. The point being, that the issue then becomes: are we in reality dodging behind a cloud of selective hyperskepticism, as we routinely apply such filters in many contexts, intuitively and quantitatively? Are we then simply begging worldview-level questions?)
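    A “targeted random-walk search” of the kind mentioned can be sketched minimally. This toy genetic algorithm (illustrative only, all names hypothetical) evolves a bit string toward a simple objective via selection, crossover, and mutation:

```python
import random

def ga_max_ones(n_bits=32, pop_size=40, gens=200, p_mut=0.02, seed=7):
    """Toy genetic algorithm maximizing the number of 1-bits in a string."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit list = its count of 1s
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = ga_max_ones()
```

    Note that the search succeeds precisely because the fitness function supplies the target information that guides the random walk, which is the point at issue in the surrounding discussion.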

    Further to this, in some situations one factor dominates the design: in furniture, for example, among rigidity, comfort and aesthetics, being strong enough is often what is needed, so a solution that is merely good enough on the other factors may in fact be targeted at meeting the overriding issue.

    I think we can see a lot of parallels to this in the biological world, e.g. our bony structures and bodily architecture are “good enough to go” rather than perfect relative to any and all conceivable constraints and ideals.

    That is why I think the debate over, say, the panda’s “thumb” is so utterly artificial and irrelevant.

    Jesus’ phrase for that sort of reasoning was: straining out a gnat while swallowing a camel. The camel here, being the obvious evidence of massive FSCI and the implication that chance + necessity alone could not credibly go there in the gamut of the observed cosmos. The gnat of course is the dysteleological claim that cases of claimed imperfection prove lack of design.

    The architecture of the eye is another capital case in point, where there are different factors at work and the overall design is obviously effective and complex beyond the reasonable reach of chance + necessity acting alone in the same gamut, but because one can find a detail that may be puzzling, that becomes the focus for saying that since the blood vessels and/or nerve connexions of the retina or whatever are puzzling on a thesis of design, then the eye cannot be designed, as that is “stupid.” But in fact obviously such factors do not prevent us from seeing well enough, and may have surprising and quite reasonable explanations. [Cf also Behe's remarks here and the linked remarks in Denton here.]

    I have seen a similar argument on the claim that since the start codons for DNA are multifunctional, then the algorithm-based, computer language-using nanotechnology of the cell cannot be designed. (Turns out that in some cases stop codons are also multifunctional, and that there is a disambiguation algorithm/ procedure, which is partly not understood. Good enough to work astonishingly well, and complex beyond the credible reach of chance + necessity, again.)

    In short, one uses a detail issue to dismiss an obvious and massively evident case. But, that works by creating a convenient distraction rather than by addressing the issue in the main on the merits.

    Can we name this the gnat and camel fallacy, or something like that?

    Okay, trust this helps

    GEM of TKI

  15. Oops:

    1] The forces, materials and phenomena of nature . . .

    2] Witt not Behe

    GEM of TKI

  16. re: 12-15

    Thanks, kairosfocus. Your comments, almost like an attractor!

    The point of confusion was that I was supposing the possibility of design greater than which could be defined by precluding chance and necessity. If I understand it correctly, that doesn’t make sense because design can use (do) anything it wants; mere chance and necessity can’t. Because of the limitations of chance and necessity, something else is required.

    What is odd is that science has trouble admitting that something else is required, especially if there could be any implication of a scary Designer — rather than a rather permissive Mother Earth. Too bad concepts like Laplace’s black hole don’t implicate design so readily.

  17. Behe is winning half the battle.

    Evolution cannot happen through “random” mutations.

    James Shapiro actually says “non-random” twice, rather than “stochastic,” with reference to DNA re-engineering in the bacteria; and of course, intelligent processes are non-random.

    http://shapiro.bsd.uchicago.ed.....eeting.pdf
    Bacteria are small but not stupid:
    Cognition, natural genetic engineering, and sociobacteriology
    (p.10)

  18. [...] Various physicists, on scientific grounds alone, postulate there is an ultimate MIND (God, if you wish to call him that) that exists: The Quantum Enigma of Consciousness and the Identity of the Designer. But even supposing God exists, it does not mandate that He is the proximal intelligent designer, as I pointed out here: Who are the multiple intelligent designers? Shapiro offers some compelling answers. [...]

  19. [...] Various physicists, on scientific grounds alone, postulate there is an ultimate MIND (God, if you wish to call him that) that exists: The Quantum Enigma of Consciousness and the Identity of the Designer. But even supposing God exists, it does not mandate that He is the proximal intelligent designer, as I pointed out here: Who are the multiple intelligent designers? Shapiro offers some compelling answers. [...]
