Uncommon Descent | Serving The Intelligent Design Community

Reductionist Predictions Always Fail


 Rod Dreher writes:

Time and time again, an experimental gadget gets introduced — it doesn’t matter if it’s a supercollider or a gene chip or an fMRI machine — and we’re told it will allow us to glimpse the underlying logic of everything. But the tool always disappoints, doesn’t it? We soon realize that those pretty pictures are incomplete and that we can’t reduce our complex subject to a few colorful spots. So here’s a pitch: Scientists should learn to expect this cycle — to anticipate that the universe is always more networked and complicated than reductionist approaches can reveal.

…Karl Popper, the great philosopher of science, once divided the world into two categories: clocks and clouds. Clocks are neat, orderly systems that can be solved through reduction; clouds are an epistemic mess, “highly irregular, disorderly, and more or less unpredictable.” The mistake of modern science is to pretend that everything is a clock, which is why we get seduced again and again by the false promises of brain scanners and gene sequencers. We want to believe we will understand nature if we find the exact right tool to cut its joints. But that approach is doomed to failure. We live in a universe not of clocks but of clouds.

Comments
Petrushka, why can't you look them up? Something tells me that if you can't even expend the energy to look up the proper references, you would not be persuaded even if I presented them to you personally. I'll give you a clue where many references are, though: click on my handle.

bornagain77, June 23, 2010 at 8:04 PM PDT
I don't watch videos. Give me a link to a textbook or journal paper.

Petrushka, June 23, 2010 at 7:39 PM PDT
Petrushka, you state: "Biologists have never believed a specific sequence is likely to occur, or that any historical sequence would recur (or occur in reverse)." But Petrushka, if you had watched the video I listed, that is exactly the point Dr. Rana makes: even though it is clearly not supposed to happen, it does. Thus either your presupposition is wrong (which it is not), or the neo-Darwinian framework is falsified by another line of evidence. Speaking of falsification of the neo-Darwinian framework, I kind of like the falsification of the entire genetic-reductionism scenario pointed out by Dr. Meyer in this video: Stephen Meyer - Complexity Of The Cell - Layered Information - http://www.metacafe.com/watch/4798685 So please tell me: what is your mechanism for change, now that genetic reductionism is falsified?

bornagain77, June 23, 2010 at 6:16 PM PDT
As all 15 AAs are different in the beginning from the final target, what we have here is a random search in the search space of all possible combinations of those 15 AAs. That space is 20^15.
The calculation is completely irrelevant. Protein A is not changing into B. It's simply changing.

Petrushka, June 23, 2010 at 5:11 PM PDT
Convergent evolution does not involve the repeating of a sequence of changes. Biologists have never believed a specific sequence is likely to occur, or that any historical sequence would recur (or occur in reverse).

Petrushka, June 23, 2010 at 5:08 PM PDT
Petrushka, you state: "No sequence of mutations will repeat." Then you don't believe in "convergent evolution," but in historical contingency? Well, congratulations, that's the correct stance. But the bad news is that it refutes neo-Darwinism. See the 2:30 mark of the following video: Lenski's Citrate E-Coli - Disproof of "Convergent" Evolution - Fazale Rana - http://www.metacafe.com/watch/4564682

bornagain77, June 23, 2010 at 3:31 PM PDT
So, Petrushka, do you at least adhere to Dollo's law? Well, I have bad news for you on that front as well:

Dollo's law and the death and resurrection of genes. ABSTRACT: Dollo's law, the concept that evolution is not substantively reversible, implies that the degradation of genetic information is sufficiently fast that genes or developmental pathways released from selective pressure will rapidly become nonfunctional. Using empirical data to assess the rate of loss of coding information in genes for proteins with varying degrees of tolerance to mutational change, we show that, in fact, there is a significant probability over evolutionary time scales of 0.5-6 million years for successful reactivation of silenced genes or "lost" developmental programs. Conversely, the reactivation of long (>10 million years) unexpressed genes and dormant developmental pathways is not possible unless function is maintained by other selective constraints. http://www.pnas.org/content/91/25/12283.full.pdf+html

Dollo's law was further verified at the molecular level here: Dollo's law, the symmetry of time, and the edge of evolution - Michael Behe. Excerpt: "We predict that future investigations, like ours, will support a molecular version of Dollo's law." Dr. Behe comments on the finding of the study: "The old, organismal, time-asymmetric Dollo's law supposedly blocked off just the past to Darwinian processes, for arbitrary reasons. A Dollo's law in the molecular sense of Bridgham et al (2009), however, is time-symmetric. A time-symmetric law will substantially block both the past and the future." http://www.evolutionnews.org/2009/10/dollos_law_the_symmetry_of_tim.html

bornagain77, June 23, 2010 at 2:38 PM PDT
gpuccio -
Freelurker: You say: If one makes a bad-design argument against ID (which I don’t), IDists, including Behe, will legitimately tell you that nobody knows the purposes the purported designer had in mind. I don’t agree. Bad design arguments are bad arguments essentially for one reason: bad design is still design.
As I said, I don't make the bad-design argument. I brought it up only because it is in those kinds of discussions that one can see IDists saying (legitimately) that nobody knows the purposes the purported designer had in mind. In nature, IDists are trying to detect purposefulness without detecting purposes, i.e., they are trying to detect "free-floating purposefulness."

Freelurker_, June 23, 2010 at 2:34 PM PDT
Petrushka, you built a strawman argument. It was taken away. You then switched gears and built another. Again, it was removed. Now you wave your sword in the air and repeat them both as if the preceding never occurred. Clearly, you are not interested in evidence, and you've apparently given up on honesty as well.

Upright BiPed, June 23, 2010 at 2:18 PM PDT
Petrushka: Are we speaking the same language? You say that you haven't retreated a bit from the "simultaneity" argument, and to prove your point you go on with a series of "arguments" that never mention the simultaneity issue and have nothing to do with it? Anyway, I am tired... Good night!

gpuccio, June 23, 2010 at 2:14 PM PDT
I am anyway satisfied that you have apparently retreated from the vain “simultaneity” argument.
I haven't retreated a bit. The probability calculations presented by ID advocates are based on a number of bogus assumptions:

1. There is no incremental path from a state of not having a complex structure to a state where the structure exists.
2. Evolution has goals; structures are specified.
3. Every step in the accumulation of change leading to a structure must involve an increment in fitness and progress toward the function.

None of these assumptions are part of biology. They are subsets of an overall assumption that what is was destined to be. The first assumption simply isn't science: calculating probabilities after something has happened makes no sense, and you can't calculate the probabilities of a sequence unless you know the sequence. The second and third assumptions are also divorced from reality. No one in biology assumes that the evolution of a flagellum is inevitable, and certainly not by the route taken historically. No sequence of mutations will repeat (Dollo's law). It is possible that there are many routes to a flagellum; no one knows. But we do know that there are dozens of partial flagella and many variations involving some, but not all, of the proteins found in the E. coli flagellum. At any rate, calculating probabilities without knowing the history and the landscape is nonsense.

Petrushka, June 23, 2010 at 1:59 PM PDT
Upright BiPed: Thanks anyway... :)

gpuccio, June 23, 2010 at 1:37 PM PDT
Ah... GP beat me to it. (Stands to reason.)

Upright BiPed, June 23, 2010 at 1:33 PM PDT
Petrushka, you stated very plainly:
"Biologist do not assume that structures came together in a single event, so the mathematics of improbability is irrelevant."
GP then has gone out of his way to explain that a single simultaneous event has nothing whatsoever to do with it. Having been relieved of this strawman complaint, you now switch gears without ever acknowledging your mistaken argument. You have now moved your complaint-making apparatus to "intentionality" and a "goal."
"The probability calculations are irrelevant because they assume that changes are “leading up” to something, or anticipating being part of a larger structure... ...no pre-specified series of changes is likely to occur... ...No serious biologist thinks that a series of changes leading to a new function was inevitable... ...Not if you assume that there is a goal being searched for, but no one in biology thinks that... ...No serious person thinks that (b)ecause a function exists, it was destined. "
YET, absolutely nowhere in GP's argument does he mention a prespecified goal or intentionality. He simply follows the logic that one protein must have accumulated some changes in order to become another protein. It's so crazy it might be logical.

Your attempted argument is so transparent it's astounding that you take it so seriously. Really.

Upright BiPed, June 23, 2010 at 1:32 PM PDT
Petrushka: As usual, you change arguments when you don't know what to say. First you bring up the problem of simultaneity; then, after I have shown that it is a false problem, instead of admitting it, you shift to the usual "the probability calculations are irrelevant," or "there is no target," or you simply try to affirm "the likelihood that there are nearly infinite combinations that are viable." I have already dealt with all of that elsewhere, and I will not do it again now. I am anyway satisfied that you have apparently retreated from the vain "simultaneity" argument. But I am sure I will read it again from you in another thread...

gpuccio, June 23, 2010 at 1:27 PM PDT
I think you are completely ignoring the likelihood that there are nearly infinite combinations that are viable, but which never get explored. We know that there are vast oceans of possibilities, for the simple reason that most of the seven billion humans are genetically unique. Change is not necessarily death, nor is it necessarily dramatic.

Petrushka, June 23, 2010 at 1:01 PM PDT
The problem is: can those 15 (in our example) coordinated mutations be found by the probabilistic resource in time t?
Not if you assume that there is a goal being searched for, but no one in biology thinks that. No serious person thinks that because a function exists, it was destined. I suppose there could hypothetically be some instances where biochemistry dictates a sequence of change, but that would be lawful behavior, not an improbable event.

Petrushka, June 23, 2010 at 12:58 PM PDT
Simultaneity needs not be assumed. If you are not convinced of that, please specify why.
For the simple reason that we know that alleles can have useful functions unrelated to their function as part of a larger structure or function. We also know that alleles can persist in a population when their effect is neutral.

The probability calculations are irrelevant because they assume that changes are "leading up" to something, or anticipating being part of a larger structure. This isn't what biologists assume or observe. The current understanding of Dollo's law is that no pre-specified series of changes is likely to occur; that's pretty much a restatement of Behe's claim. No serious biologist thinks that a series of changes leading to a new function was inevitable or destined. And in cases where similar structures have evolved through different routes, this pretty much demonstrates that functionality is not a matter of islands separated by unbridgeable seas.

Petrushka, June 23, 2010 at 12:53 PM PDT
Petrushka: No, you don't understand. I'll try to explain.

Let's say, just to have a model, that in the course of evolution a new protein B comes from an existing protein A in a time t. To make things simpler, and to stay within a common evolutionary scenario, let's say that B comes from an inactive duplicate of A, call it A', so that we can ignore the problem of the loss of function of A because of mutations (I think that's the best Darwinist scenario we can imagine). Now, in our scenario, B differs from A' by at least 15 AAs: IOW, at least 15 AAs must change, and all must be present at the same time in B in the new form, so that the function of B appears. We also assume that, as soon as the new function appears, it undergoes NS: IOW, the single clone where the mutations have taken place is expanded and fixed. But not before that.

So, again for simplicity, we assume that A' changes by single, random, independent mutations, stepwise. As all 15 AAs differ at the beginning from the final target, what we have here is a random search in the search space of all possible combinations of those 15 AAs. That space is 20^15. Each time a mutation happens, one of those 20^15 possibilities is explored. Obviously, as the mutations are independent, each new mutation can also change a previous "favourable" mutation. Anyway, the fact remains that each new mutation is a new "trial." So, the probability of getting to B must be evaluated taking into account:

a) the search space (20^15)
b) the probabilistic resources (the number of possible trials in the time t)

Obviously, if the target space is bigger than 1 (if more than one sequence will have the B function, which is usually the case), then we have to take that into account (calculate the ratio between target space and search space, and then refer it to the probabilistic resources).

As you can see, nowhere in this model (which is a correct ID model to compute the probabilistic credibility of any protein transition) is it necessary to think that the 15 mutations have to happen simultaneously. It is obviously more sensible to assume that they happen stepwise. The problem is: can those 15 (in our example) coordinated mutations be found by the probabilistic resources in time t? IOW, is a random search credible? Has it the power to determine this particular transition in the historical time and in the biological context we are assuming? Or is the result completely out of range of a random search?

And please note: even if we in the end infer design, the mutations can just the same have happened in a stepwise, guided modality. Design needs not be implemented "simultaneously." Guided mutation or intelligent selection in a stepwise modality remains, IMO, the best scenario for biological design. So, simultaneity is a false problem. Is that clear?

gpuccio, June 23, 2010 at 12:43 PM PDT
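To put rough numbers on the model gpuccio sketches above, here is a minimal Python sketch of the calculation as he describes it. Only the 20^15 search space comes from the thread; the target-space size and the number of trials are assumed, illustrative figures, and the variable names are mine.

import math

# Sketch of the random-search calculation described above. Only the
# 20**15 search space is from the thread; target size and trial count
# are assumed, illustrative inputs.
search_space = 20 ** 15    # all combinations of the 15 AA positions
target_space = 10 ** 6     # assumed count of sequences with B's function
trials = 10 ** 12          # assumed probabilistic resources in time t

p_single = target_space / search_space   # chance of a hit per trial
# P(at least one hit in `trials` independent tries), computed stably:
p_hit = -math.expm1(trials * math.log1p(-p_single))

print(f"search space: {search_space:.3e}")
print(f"per-trial probability: {p_single:.3e}")
print(f"P(success within trials): {p_hit:.3e}")

Whether those assumed inputs are anywhere near the right ones is, of course, exactly what the rest of the thread disputes.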
and all of them must be present at the same time to give the new function, then the probability computation is the same.
I'm not even sure what that means. Of course they must be "present" at the same time, but they do not need to occur at the same time, nor, at the time they occur, do they have to be in anticipation of some future combination.

The simple logic of evolution is that mutations and selection are observed, even two-step accumulations of mutations. No law of physics or chemistry is violated. No simultaneous two- or three-step mutation has ever been observed, whether as the result of designer intervention or in anticipation of need. Nor has the instantaneous creation of any organism been observed. So what you have is ongoing research under the assumption that small changes accumulate, versus the assumption that larger changes (which have never been observed) occur under the influence of an unnamed and undescribed agent, at unspecified times, using unspecified methods, for unspecified reasons. In the absence of an actual observed history, you are merely asserting that the accumulative history didn't happen, without providing an alternative.

Petrushka, June 23, 2010 at 12:11 PM PDT
Petrushka: I have already pointed out to you that there is no need for the mutations to happen simultaneously. This is a strange idea that you seem to stick to, and a completely wrong one. If the mutations are independent, not individually selectable (the intermediates have no special increase in function), and all of them must be present at the same time to give the new function, then the probability computation is the same. It doesn't matter if they happen stepwise or all at the same time. Simultaneity need not be assumed. If you are not convinced of that, please specify why.

gpuccio, June 23, 2010 at 11:35 AM PDT
He is telling you how to detect that the parts came together purposefully rather than as a result of regularity or of chance.
Actually, he's merely asserting that several favorable mutations happening at once is unlikely -- a subset of the argument that a complex structure is unlikely to assemble in one step. Behe says nothing at all about the actual probability of a flagellum evolving stepwise, because neither he nor anyone else knows the exact history of the flagellum. If you had a time machine and could prove that three or six simultaneous mutations occurred, you would have positive evidence for ID.

Petrushka, June 23, 2010 at 11:23 AM PDT
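For scale, a back-of-envelope sketch of the "simultaneous mutations" point Petrushka attributes to Behe: if k specific point mutations must all occur in a single replication, the probability falls off roughly as the k-th power of the per-site mutation rate. The rate below is an assumed, textbook-scale order of magnitude, not a figure from the thread.

# Assumed per-site, per-replication mutation rate (order of magnitude only).
mu = 1e-9

# Probability that k specific sites all mutate in the same replication,
# treating sites as independent: roughly mu**k.
for k in (1, 2, 3, 6):
    print(f"k = {k}: ~{mu ** k:.0e}")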
Freelurker: You say: "If one makes a bad-design argument against ID (which I don't), IDists, including Behe, will legitimately tell you that nobody knows the purposes the purported designer had in mind." I don't agree. Bad-design arguments are bad arguments essentially for one reason: bad design is still design. Design needs not be perfect to be design. Design needs not be optimal.

The bad-design arguments made by some Darwinists are in reality of the kind: "But if you believe that God, who in your opinion is perfect, designed living beings, how can you explain bad design?" That argument is not only bad, it is a religious, philosophical argument. It has no scientific value. And even as a religious-philosophical argument it is very bad, because, first of all, even a perfect God can operate in a context, and adjust to the existing context. Second, as you say, we cannot be sure we understand the whole scenario of God's intentions (or, for that matter, of any designer's intentions). But these, again, are philosophical aspects of a bad philosophical argument.

Design detection is a scientific issue. So, first let's affirm design where it is recognizable; then, and only then, we can try to understand whether the observed design is optimal, suboptimal, or simply gross, if and when the data allow that kind of analysis.

gpuccio, June 23, 2010 at 5:28 AM PDT
Freelurker: Thank you for your clarifications about your thought. You say: "Function is not, technically, the same thing as purpose. A function is actually a regularity; it's a mapping between system inputs and outputs. A function may fulfill a purpose (fulfill a requirement)." I am afraid here you are thinking as a mathematician. Let's think as engineers instead. Let's say that a function is a mapping which fulfills a purpose. That's the correct definition for ID. My personal definition of dFSCI, on which I base all my ID discourse, is based exactly on an explicit definition of function in that sense, and indeed needs a conscious observer to recognize and define function in each case. You can find my explicit definition of dFSCI here: https://uncommondescent.com/intelligent-design/signature-of-controversy-new-book-responds-to-stephen-meyers-critics/#comment-355968

You say: "The point is that, in ID, 'design' does not mean an arrangement of parts. This is most clear in Dembski's definition of design, which is 'the complement of regularity and chance.'" No, that's not correct. In ID, as in any other context, design means that something has been designed, IOW that it is the intentional and purposeful product of an intelligent conscious being. There is no doubt about that. That's what design means, nothing else. But ID is about recognizing design, when possible, from the properties of the designed thing (and not, as would be obvious, from a direct observation of the design process). So, as the distinctive trait of designed things is specification (which is the direct result of the conscious, purposeful process), the first thing we have to observe, to hypothesize design, is some form of specification in the designed object.

Now, here is where Darwinists get confused (or, sometimes, willfully equivocate). Dembski discusses various kinds of specification, and gives different definitions of it in different works. That's very fine for me, but not necessary for my discourse about biological ID. As I have said many times, the only restricted kind of specification we need in biological analysis is functional specification; please check my link for a specific definition of it.

The issue of "the complement of regularity and chance," instead, is a separate discourse. Once specification, of any valid kind, has been established, then we have to be sure that we are not observing what I call a "pseudo-specification": IOW, something which appears specified, but is not the product of design. That is not impossible, and not unlikely. One thing many people seem not to understand is that design can be simple. One can purposefully design a very simple thing, which has some simple function. That is designed, and if I can observe the process of design, I will know for certain that it is designed. But if I can only observe the product, and not the process, I will not be able to say that it is designed: the product, being simple, could be the result of random processes. So, that would be a false negative in ID detection: the thing is designed, but we cannot be sure of that. That's where complexity is necessary. Only complex specified things can be recognized with certainty as designed, because the complexity empirically rules out a random origin. Origin from necessity has to be ruled out separately (IOW, the observed complexity must not be compressible; we have to refer to the true Kolmogorov complexity).

That's what Dembski means when he says that design is "the complement of regularity and chance": after having ruled out both regularity and chance as causal models of the specified information we observe (in biology, of the functionally specified information we observe), design is the best inference (indeed, the only inference left).

And that's exactly the application to biology. In biology, we observe functionally specified digital information: the simplest case is the information for the primary sequence of a functional protein in a protein-coding gene. Neo-Darwinism affirms that such information can be explained as originating from previously existing information (for instance, another, different protein) through the process of Darwinian evolution: a process which has two causal moments, one purely random (RV), and the other purely necessary (NS). In the light of ID, that model can only work if the transitions between the times when NS can operate (selectable new information) can always be explained in terms of RV. IOW, it cannot work. No detailed Darwinist model exists, even a purely theoretical one, of how the different protein domains known to us could have originated that way. So, we are left with a lot of functionally specified information (all existing different protein domains) well beyond the reach of random variation, and with no model based on necessity which can explain it. So, design is, absolutely, the best inference.

gpuccio, June 23, 2010 at 5:08 AM PDT
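As gpuccio describes it here and in the linked comment, dFSCI boils down to measuring how small the functional target is relative to the search space, expressed in bits. A minimal Python sketch of that ratio-to-bits step, on one reading of his description (this is not his code, and the functional count below is a made-up input):

import math

def dfsci_bits(length, functional_count):
    """-log2(target space / search space) for amino-acid sequences:
    the search space is 20**length; the target space is the number of
    sequences of that length that perform the function."""
    search_space = 20 ** length
    return -math.log2(functional_count / search_space)

# The thread's 15-AA example, with an assumed functional count:
print(dfsci_bits(15, 10**6))   # ~44.9 bits under these assumptions

gpuccio's actual thresholds and target-space estimates are in his linked comment, not reproduced here.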
@gpuccio: The point is that, in ID, "design" does not mean an arrangement of parts. This is most clear in Dembski's definition of design, which is "the complement of regularity and chance." Behe's definition is, not surprisingly, not very different from Dembski's. To see this, notice that when Behe tells you how to detect "design" in the bacterial flagellum, he is not telling you how to detect the arrangement of the flagellum's parts. He is telling you how to detect that the parts came together purposefully rather than as a result of regularity or of chance.

The purpose of the flagellum (its function) is obviously to allow movement in space. Weren't you aware of that? And Behe discusses in detail the purpose of each of its parts (stator, rotor, junction, filament, etc.), exactly as we would do for a man-made machine, or for man-made software. I really can't understand what your problem is.

If one makes a bad-design argument against ID (which I don't), IDists, including Behe, will legitimately tell you that nobody knows the purposes the purported designer had in mind. This is what I'm emphasizing when I say that, in ID, "design" means free-floating purposefulness. Function is not, technically, the same thing as purpose. A function is actually a regularity; it's a mapping between system inputs and outputs. A function may fulfill a purpose (fulfill a requirement).

While your concept of "free-floating purposefulness" is certainly funny and bizarre, it means nothing.

But it's not my concept; it's what "design" means in ID. If you find it funny and bizarre, then it means you are thinking like an engineer.

Freelurker_, June 23, 2010 at 4:07 AM PDT
uoflcard: I have many problems too with the imaginative post by Allen MacNeill, but I really did not feel like commenting on it. As you have opened the debate, I will just say here that I don't understand the basis for the following statement: "The schematic diagrams of bacterial flagella, drawn like engineering designs, illustrate the 'average' arrangement of such structures. In any given bacterium, the actual structures only approximate this ideal structure. However, given large numbers of 'approximations' of the 'ideal' structures, biological processes proceed with fairly high efficiency."

Frankly, I don't understand in what sense the actual structures should "approximate" the ideal structure. Is that only a philosophical metaphor? Or is there any biological basis for that affirmation? Is the primary, or secondary, or tertiary, or quaternary structure of proteins different in individual flagella? Or is it only a vague boutade, like saying that each molecule of water is just an approximation of the ideal structure of water, but given the high number of molecules, we can drink water with good efficiency? And all this philosophical nonsense just because a transmission electron microscope image looks grainy?

gpuccio, June 22, 2010 at 5:43 AM PDT
Allen, I didn't see anyone respond to most of your comments at #6 and #13, but I have some serious issues with some of the things you said.
The point here is that the relative inefficiency of biological “machines” such as rubisco is almost always compensated for by “massive redundancy”. The schematic diagrams of bacterial flagella, drawn like engineering designs, illustrate the “average” arrangement of such structures. In any given bacterium, the actual structures only approximate this ideal structure. However, given large numbers of “approximations” of the “ideal” structures, biological processes proceed with fairly high efficiency.
I don't think you're making quite the momentous statement that you seem to believe you're making. So there is an ideal structure, which biological structures approximate. Well, if most aren't close enough to that ideal to be considered efficient individually, then the system/process as a cumulative whole will not be efficient. I'm not a molecular biologist, so I will not argue over the efficiency of particular machines, like rubisco. But I will say that if you have an efficient system of individual components, those individual components are also efficient. Multiplying inefficient parts will never produce an efficient whole. You can't have a system of engines that operate at 65% efficiency that add up to a whole system that operates at 90%. If you can, you have successfully turned thermodynamics on its head.

This model of biological efficiency — that irreducible randomicity is compensated for by massive redundancy — is, of course, the underlying organizing concept in evolution by natural selection. An irreducibly stochastic generator of phenotypic variation (the so-called "engines of variation"; see http://evolutionlist.blogspot......awman.html for a list) is coupled with a probabilistic "filter" that preserves and reproduces only those phenotypic variations that on the average result in continued function.

Now, are we talking about efficiency, or just brute function (i.e., survival)? You seem to treat them as one and the same, while they are quite different. Generating function out of a system of components that vary in efficiency and function is one thing, but generating high efficiency by summing a population of components that on average have lower efficiencies is impossible. The only possible conclusion when viewing an efficient system is that its components must on average be at least that efficient.

The roads we drive on, like the biological systems of which we are composed, are only approximations of what could be called "ideal designs". The dispute between evolutionary biologists (EBers) and ID supporters (IDers) is between EBers who see biological systems as being constructed and operated "from the bottom up", with irreducible random/stochastic variation woven in at all levels, and IDers who see biological systems as being designed "from the top down", with no genuine random/stochastic variation at all.

This seems to be a very elementary view of the ID argument. Arguing for ID does not require arguing for static, indistinctive biological machines. This feels like you're trying to claim facts (that there is variation at all levels of biological systems) as sole property of EBers. Is every Rolex exactly the same down to the molecule? Of course not. At the molecular level, the precise arrangement of atoms will vary wildly from one watch to the next. Does it follow, then, that the watches were not intelligently designed? From a functional perspective, some watches tell time slightly fast, some slightly slow; again, there is stochastic variation in everything we can observe in this universe, it seems, other than the laws of physics. So how stochastic variation is off limits to IDers, or how it transforms inefficient parts into efficient wholes, is beyond me. From #13:

As to the distinction between clocks and clouds, of course biological organisms are clouds. This, however, means that a great deal of the fine structure of biological organisms (like the fine structure of clouds) is the result of stochastic processes. That is, a cloud viewed as a single entity exhibits regular structure and function. This is why we can classify them as cirrus, cumulus, stratus, etc. However, this overall regularity is the result of the mass action of a very large number of very small particles, which viewed individually act as purely stochastic "Newtonian" particles. That is, although the cloud as a whole exhibits teleomatic changes over time (to use Ernst Mayr's word for purely physical processes with predictable cause-and-effect relationships), each individual particle moves and collides with others in essentially random patterns.

This seems to be an incredible stretch. Of course at the molecular level there is tremendous variation, as I just explained with Rolexes. But it is quite remarkable to extrapolate from the molecular variation of clouds and biological structures to their having equivalent ultimate forms. It is also completely irrelevant to the ID debate, as no one is arguing about the specific order of each molecule of biological structures. There is a tremendous, obvious distinction to be made between a cloud and a clock, namely functional, specified information. In biology, that information does not reside at the molecular level of the machines themselves but rather at a higher order of operation (just like the CSI of a Rolex does not reside in the particular arrangement of atoms in each gear and spring).
So, once again we find that biological processes exhibit predictable patterns of “behavior” (i.e. change over time), but these are grounded in stochastic processes that have irreducible random components.
Just as cells are "blobs of protoplasm". No, they are grounded at a higher order than the particular atomic arrangement of the particles. Highly complex, functional behavior is simply never grounded in randomness.
Ergo, the “complexity” of clouds (as compared with clocks) is due to their massively greater stochasticity, rather than greater organization at the level of fine structure. Therefore, it seems to me that asserting that biological systems, if they are more like clouds than clocks, are much closer to the evolutionary model of reality than the ID model. Clouds evolve (i.e. change over time) as the result of purely “natural” processes which do not require any “intelligence” or “design” at all, whereas clocks (at least the kind that are manufactured by humans) are designed for an intended purpose by intelligent agents.
Again, you attempt to claim the randomness of individual atoms of biological structures as sole property of EBers, on what grounds I am unsure. I am also lost as to the grounds for equating the atomic arrangement of biological machines with the evolution of the functional, specified, complex information of the genome, which is the heart of the ultimate debate here.

uoflcard, June 22, 2010 at 5:22 AM PDT
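uoflcard's engine arithmetic above is easy to check directly: if aggregate efficiency means total useful output over total input, then combining components in parallel yields the input-weighted average of their efficiencies, which can never exceed the best component. A toy Python sketch (the component counts and figures are arbitrary illustrations):

def aggregate_efficiency(efficiencies, inputs):
    """Total useful output / total input for components run in parallel."""
    useful = sum(e * x for e, x in zip(efficiencies, inputs))
    return useful / sum(inputs)

# 1000 engines at 65% efficiency still yield 65% overall, never 90%.
print(aggregate_efficiency([0.65] * 1000, [1.0] * 1000))   # 0.65
# Mixing a 65% and a 90% engine gives the weighted mean, 0.775.
print(aggregate_efficiency([0.65, 0.90], [1.0, 1.0]))      # 0.775

The weighted mean can never exceed the largest efficiency in the list, which is the thermodynamic point being made.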
Freelurker (#50): I don't understand what you are saying. You write: "In engineering, a design is an arrangement (a pattern) of parts. Performing an act of design is coming up with an arrangement of parts... As an example, when software engineers speak of an object-oriented design they are speaking of a type of arrangement of a software item's parts." OK, that's fine. And the same is true for design in biological entities.

"The above is simply not what IDists mean by design, according to their own definitions. Michael Behe defines design as 'the purposeful arrangement of parts.'" But what do you mean? Isn't that the same concept? Or are you suggesting that in object-oriented design the arrangement of parts is not "purposeful"? Parts are arranged to perform some specific function, both in human design and in biological design. What is the difference that you see?

"He says that he has detected such design in, for example, the bacterial flagellum." And so?

"But notice that he does not claim to be one of the people who discovered the arrangement of parts in the flagellum. He learned about that from scientists working in labs." And so? Behe is arguing that there is a functional arrangement of parts in the flagellum. What does it matter who discovered the facts which allow us to draw that conclusion? In case you don't know, in science facts aren't anybody's property.

"Also note that he does not claim to have discovered the purpose of the flagellum, or the purposes of its parts." Are you kidding? The purpose of the flagellum (its function) is obviously to allow movement in space. Weren't you aware of that? And Behe discusses in detail the purpose of each of its parts (stator, rotor, junction, filament, etc.), exactly as we would do for a man-made machine, or for man-made software. I really can't understand what your problem is.

"So what is this 'design' that Behe has detected? It's free-floating purposefulness." Absolutely not! It's the arrangement of functional parts to perform a function. While your concept of "free-floating purposefulness" is certainly funny and bizarre, it means nothing.

"To be sure, IDists claim to be able to infer purposefulness (i.e., design in the ID sense) from certain arrangements (patterns) of parts (i.e., design in the engineering sense), but you do not help your case when you confuse these two different concepts." What different concepts? I see no different concepts here. I believe you are only equivocating on the term "purposefulness," which you use instead of "function."

Now, let's be clear: in biological design, the specification is given by function. That's why I (like many others here) always refer to the subset of CSI which is called FSCI (functionally specified complex information). That point has been discussed in great detail many times. So, let's see:

1) In software, we can infer design because the arrangement of parts is functional (and, especially in object-oriented software, the parts are themselves functional arrangements of parts). And, obviously, the functionality in software is absolutely purposeful (unless you believe that software comes about by RV and NS). Design is purposeful by definition. The designer arranges parts to implement a function. Implementing that function is his purpose. Is that clear?

2) In biological information, it's exactly the same thing. In the flagellum, we can observe the function of the whole machine (movement), the contributing function of each part, and the functional arrangement of parts in the whole machine. Moreover, each part is made of objects (proteins), each of which is made by the functional arrangement of parts (amino acids) which allows the function of the whole protein. All of that is obviously purposeful in the same sense that software is purposeful. There is no difference. The concepts of CSI and of irreducible complexity are necessary for the design inference, both in software and in biological information, just to make the inference certain and to avoid false positives (random structures which could seem functional, but in reality have never been purposeful, because they originated by non-intelligent mechanisms). The concept is exactly the same for design detection in any structure, be it a man-made machine, man-made software, or biological information.

gpuccio, June 22, 2010 at 12:34 AM PDT
Freelurker - It is true that this is already done. That's part of the ID argument - we're merely pointing out that the *methodology* that produces good results in biology is the one that *assumes* that things in the cell were designed purposefully, and then engages to figure out that purpose. This is why evolutionists tie themselves up in knots - they know this to be true but try to explain it in some way which doesn't obviously show the truth of ID. However, in addition to what is already done implicitly in biology, I contend that doing it *explicitly* will lead to even more fruitfulness. I give an example of this with an extended conception of Irreducible Complexity here, and give other examples here. I'll leave you with a quote from Michael Ruse (Darwin and Design):
We treat organisms—the parts at least—as if they were manufactured, as if they were designed, and then try to work out their functions. End-directed thinking—teleological thinking—is appropriate in biology because, and only because, organisms seem as if they were manufactured, as if they had been created by an intelligence and put to work.
So, as Michael Ruse points out, ID - even if he wouldn't call it that - is an important principle for biology. The problem is that materialists simply refuse to see that this actually puts the burden of proof on them to show how something was not designed, despite the fact that our primary modes of investigation assume its design. Natural selection is invoked as a magic dust that removes traces of real design from apparent design without discussion, by assuming that the case is already closed.

johnnyb, June 21, 2010 at 8:31 PM PDT
scordova -
Interpretation of software is recognizing the design of the software. Johnnyb was highlighting the fact that one does not need to know the designer in order to recognize designs.
As I said earlier, IDist engineers are prone to equivocating between the way the term "design" is used in ID and the way it is used in engineering. I thank you for providing such a vivid example.

In engineering, a design is an arrangement (a pattern) of parts. Performing an act of design is coming up with an arrangement of parts. When we read a design specification or attend a design review for a system, we expect to learn about what its parts will be, how they will be arranged, and how they will interact to fulfill the system's purposes (its requirements). As an example, when software engineers speak of an object-oriented design they are speaking of a type of arrangement of a software item's parts.

The above is simply not what IDists mean by design, according to their own definitions. Michael Behe defines design as "the purposeful arrangement of parts." He says that he has detected such design in, for example, the bacterial flagellum. But notice that he does not claim to be one of the people who discovered the arrangement of parts in the flagellum. He learned about that from scientists working in labs. Also note that he does not claim to have discovered the purpose of the flagellum, or the purposes of its parts. So what is this "design" that Behe has detected? It's free-floating purposefulness. (Dembski calls it the "complement of regularity and chance.")

To be sure, IDists claim to be able to infer purposefulness (i.e., design in the ID sense) from certain arrangements (patterns) of parts (i.e., design in the engineering sense), but you do not help your case when you confuse these two different concepts. Saying that "interpretation of software is recognizing the design of the software" would be fine if by "recognizing the design" you meant what a software engineer would mean by it, that is, identifying how the parts are arranged and what they do. You wish to associate IDists with that activity, but that activity is not something that IDists do; IDists can't even begin to determine whether an arrangement of parts is attributable to intelligence until after the arrangement of parts has been determined. Determining whether or not something is per se purposeful, that is, determining whether it's the product of "the complement of regularity and chance," is actually foreign to the practice of engineering.

Freelurker_, June 21, 2010 at 6:32 PM PDT