
ID Foundations, 3: Irreducible Complexity as concept, as fact, as [macro-]evolution obstacle, and as a sign of design


[ID Found’ns Series, cf. also Bartlett here]

Irreducible complexity is probably the most strenuously objected-to foundation stone of Intelligent Design theory. So, let us first of all define it, by slightly modifying Dr Michael Behe’s original statement in his 1996 Darwin’s Black Box [DBB]:

What type of biological system could not be formed by “numerous successive, slight modifications?” Well, for starters, a system that is irreducibly complex. By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the [core] parts causes the system to effectively cease functioning. [DBB, p. 39, emphases and parenthesis added. Cf. expository remarks in comment 15 below.]

Behe proposed this definition in response to the following challenge by Darwin in Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case . . . . We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. [Origin, 6th edn, 1872, Ch VI: “Difficulties of the Theory.”]

In fact, there is a bit of question-begging by deck-stacking in Darwin’s statement: we are dealing with empirical matters, and one does not have a right to impose in effect outright logical/physical impossibility — “could not possibly have been formed” — as a criterion of test.

If one is making a positive scientific assertion that complex organs exist and were credibly formed by gradualistic, undirected change through chance mutations and differential reproductive success through natural selection and similar mechanisms, one has a duty to provide decisive positive evidence of that capacity. Behe’s onward claim is then quite relevant: for dozens of key cases, no credible macro-evolutionary pathway (especially no detailed biochemical and genetic pathway) has been empirically demonstrated and published in the relevant professional literature. That was true in 1996, and despite several attempts to dismiss key cases such as the bacterial flagellum [which is illustrated at the top of this blog page] or the relevant part of the blood clotting cascade [hint: picking the part of the cascade before the “fork,” a part Behe did not claim as the IC core, is a strawman fallacy], it arguably remains true today.

Now, we can immediately lay the issue of the fact of irreducible complexity as a real-world phenomenon to rest.

For, a situation where core, well-matched, and co-ordinated parts of a system are each necessary for and jointly sufficient to effect the relevant function is a commonplace fact of life, one familiar from all manner of engineered systems, such as the classic double-acting steam engine:

Fig. A: A double-acting steam engine (Courtesy Wikipedia)

Such a steam engine is made up of rather commonly available components: cylinders, tubes, rods, pipes, crankshafts, disks, fasteners, pins, wheels, drive-belts, valves etc. But, because a core set of well-matched parts has to be carefully organised according to a complex “wiring diagram,” the specific function of the double-acting  steam engine is not explained by the mere existence of the parts.

Nor can simply choosing and re-arranging similar parts from, say, a bicycle or an old-fashioned car or the like create a viable steam engine. Specific, mutually matching parts [usually matched to thousandths of an inch], in a very specific pattern of organisation, made of specific materials, have to be in place, and they have to be integrated into the right context [e.g. a boiler or other source providing steam at the right temperature and pressure] for it to work.

If one core part breaks down or is removed [e.g. piston, cylinder, valve, crankshaft], core function obviously ceases.

Irreducible complexity is not only a concept but a fact.

But, why is it said that irreducible complexity is a barrier to Darwinian-style [macro-]evolution and a credible sign of design in biological systems?

First, once we are past a reasonable threshold of complexity, irreducible complexity [IC] is a form of functionally specific complex organisation and implied information [FSCO/I]; i.e. it is a case of the specified complexity on which the design inference rests, and which is already a strong sign of design in its own right. (NB: Cf. the first two articles in the ID foundations series — here and here.)

Fig. B, giving an exploded view and a nodes-and-arcs “wiring diagram” view of how a complex, functionally specific entity is assembled, will help us see this:

Fig. B (i): An exploded view of a gear pump. (Courtesy, Wikipedia)

Fig. B(ii): A Piping  and Instrumentation Diagram, illustrating how nodes, interfaces and arcs are “wired” together in a functional mesh network (Source: Wikimedia, HT: Citizendia; also cf. here, on polygon mesh drawings.)

We may easily see from Fig. B (i) and (ii) how specific components — which may themselves be complex — sit at nodes in a network, and are wired together in a mesh that specifies interfaces and linkages. From this, a set of parts and wiring instructions can be created and reduced to a chain of contextual yes/no decisions. On the simple functionally specific bits metric, once that chain exceeds 1,000 decisions, we have an object so complex that the whole observed universe, serving as a search engine, could not credibly produce it spontaneously without intelligent guidance. And so, once we have to have several well-matched parts arranged in a specific “wiring diagram” pattern to achieve a function, it is almost trivial to run past 125 bytes [= 1,000 bits] of implied function-specifying information.
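By way of illustration only (a minimal sketch; the component names and decision counts below are hypothetical, not measurements of any real system), the tally works like this:

# Illustrative sketch: tally the implied function-specifying bits for a
# hypothetical "wiring diagram" reduced to contextual yes/no decisions,
# then compare the total to the 1,000-bit threshold discussed above.
# All component names and decision counts are made-up placeholders.

THRESHOLD_BITS = 1000  # = 125 bytes, the bound used in this series

# (component, yes/no decisions needed to select, orient and couple it)
components = [
    ("rotor", 220),
    ("stator", 200),
    ("drive_shaft", 150),
    ("bushing", 120),
    ("hook_joint", 180),
    ("filament", 250),
]

# Each contextual yes/no decision contributes one bit of implied
# function-specifying information.
total_bits = sum(decisions for _, decisions in components)

print(f"Implied functional information: {total_bits} bits (~{total_bits // 8} bytes)")
print("Exceeds the 1,000-bit threshold:", total_bits > THRESHOLD_BITS)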

Of the significance of such a view, J. S. Wicken observed in 1979:

‘Organized’ systems are to be carefully distinguished from ‘ordered’ systems. Neither kind of system is ‘random,’ but whereas ordered systems are generated according to simple algorithms [i.e. “simple” force laws acting on objects starting from arbitrary and commonplace initial conditions] and therefore lack complexity, organized systems must be assembled element by element according to an [originally . . . ] external ‘wiring diagram’ with a high information content . . . Organization, then, is functional complexity and carries information. It is non-random by design or by selection, rather than by the a priori necessity of crystallographic ‘order.’ [“The Generation of Complexity in Evolution: A Thermodynamic and Information-Theoretical Discussion,” Journal of Theoretical Biology, 77 (April 1979): p. 353, of pp. 349-65. (Emphases and notes added. NB: “originally” is added to highlight that for self-replicating systems, the blueprint can be built-in.)]

Indeed, the implication of that complex, information-rich functionally specific organisation is the source of Sir Fred Hoyle’s metaphor of comparing the idea of spontaneous assembly of such an entity to a tornado in a junkyard assembling a flyable 747 out of parts that are just lying around.

Similarly, if one were to do a Humpty Dumpty experiment — setting up a cluster of vials of sterile saline solution with nutrients, placing a bacterium in each, then pricking it so the contents of the cell leak out — it is not expected that in any case the parts would spontaneously re-assemble to yield a viable bacterial colony.

But also, IC is a barrier to the usual suggested counter-argument, co-option or exaptation based on a conveniently available cluster of existing or duplicated parts. For instance, Angus Menuge has noted that:

For a working [bacterial] flagellum to be built by exaptation, the five following conditions would all have to be met:

C1: Availability. Among the parts available for recruitment to form the flagellum, there would need to be ones capable of performing the highly specialized tasks of paddle, rotor, and motor, even though all of these items serve some other function or no function.

C2: Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.

C3: Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.

C4: Coordination. The parts must be coordinated in just the right way: even if all of the parts of a flagellum are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.

C5: Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if a paddle, rotor, and motor are put together in the right order, they also need to interface correctly.

(Agents Under Fire: Materialism and the Rationality of Science, pp. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

In short, the co-ordinated and functional organisation of a complex system  is itself a factor that needs credible explanation.

However, as Luskin notes for the iconic flagellum, “Those who purport to explain flagellar evolution almost always only address C1 and ignore C2-C5.” [ENV.]

And yet, unless all five factors are properly addressed, the matter has plainly not been adequately explained. Worse, the classic attempted rebuttal, the Type Three Secretory System [T3SS], is not only based on a subset of the genes for the flagellum [as part of its self-assembly, the flagellum must push components out of the cell], but functionally it works to help certain bacteria prey on eukaryote organisms. Thus, if anything, the T3SS is not only a component part that has to be integrated under C1 – 5, but is credibly derivative of the flagellum and an adaptation subsequent to the origin of eukaryotes. Also, it is just one of several components, and is arguably itself an IC system. (Cf Dembski here.)

Going beyond all of this, in the well-known 2005 Dover trial, ID lab researcher Scott Minnich testified (as reported at ENV) to a direct confirmation of the IC status of the flagellum:

Scott Minnich has properly tested for irreducible complexity through genetic knock-out experiments he performed in his own laboratory at the University of Idaho. He presented this evidence during the Dover trial, which showed that the bacterial flagellum is irreducibly complex with respect to its complement of thirty-five genes. As Minnich testified: “One mutation, one part knock out, it can’t swim. Put that single gene back in we restore motility. Same thing over here. We put, knock out one part, put a good copy of the gene back in, and they can swim. By definition the system is irreducibly complex. We’ve done that with all 35 components of the flagellum, and we get the same effect.” [Dover Trial, Day 20 PM Testimony, pp. 107-108. Unfortunately, Judge Jones simply ignored this fact reported by the researcher who did the work, in the open court room.]

That is, using “knockout” techniques, the genes for the 35 relevant flagellar proteins in a target bacterium were knocked out and then restored, one by one.

The pattern for each DNA-sequence: OUT — no function, BACK IN — function restored.

Thus, the flagellum is credibly empirically confirmed as irreducibly complex.

The “Knockout Studies” concept, a research technique that rests directly on the IC property of many features of organisms, needs some explanation.
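As a minimal sketch of the knock-out-and-restore test logic just described (the gene labels and the motility_assay stand-in below are hypothetical placeholders, not the actual experimental protocol):

# Sketch of the knock-out / restore test pattern described above.
# FLAGELLAR_GENES and motility_assay are placeholders standing in for the
# 35 real flagellar genes and the wet-lab motility assay; only the logic
# of the test is captured here, not the biology.

FLAGELLAR_GENES = [f"fli_{i:02d}" for i in range(35)]  # hypothetical labels

def motility_assay(genome):
    """Placeholder assay: 'swims' only if every core gene is present,
    mimicking the all-parts-required behaviour reported in the testimony."""
    return set(FLAGELLAR_GENES) <= set(genome)

def passes_knockout_test(genes):
    """For each gene: knock it out (function should be lost), then restore
    it (function should return). If that holds for every gene, the system
    passes the operational IC test used in the knockout studies."""
    full = set(genes)
    if not motility_assay(full):
        return False
    for g in genes:
        knocked_out = full - {g}
        if motility_assay(knocked_out):            # still swims: g is not part of the core
            return False
        if not motility_assay(knocked_out | {g}):  # restoring the gene should restore function
            return False
    return True

print(passes_knockout_test(FLAGELLAR_GENES))  # True for this toy model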

[Continues here]

Comments
AJG: Interesting. And there actually is some proper code out there! G
kairosfocus | February 1, 2011 at 12:04 PM
kairosfocus@94: Yes, there is a lot of messy code out there. I have written some of it; I am quite embarrassed to look at some of the stuff I have done, although I am not a full-time programmer.

Before I say anything else, let me say that code which is maintainable is inevitably well planned. It also normally has gone through a few design iterations before the problem domain is sufficiently understood for the design to be good. Which is another reason I find it hard to believe any unplanned system could be so efficient and have stood up over so many years.

I think the following are examples of excellent software:
- Qt 4
- Symfony 2.0
- YUI3

None of those are applications themselves. They are application frameworks, toolkits, etc., but the same problems apply. Maintenance programming is not the problem with software if it is well designed. By that I mean it has the following characteristics:
1. Layered
2. Modular
3. Encapsulated
4. Abstracted
5. Is DRY ("Don't repeat yourself")
6. Loosely coupled - normally anyway

I am sure there are more, but that comes to mind.

For example, say my application opens images in various places. Given that I am only supporting one OS or environment, I could just call the functions needed to perform the actions I want on the image inline. But that would lead to a maintenance nightmare, because I would be duplicating a lot of code and I would be violating my layering and encapsulation rules. So instead I create a class which contains methods that will perform the tasks I am going to be using over and over again.

Now suppose I want to support a number of platforms and any number of future ones. I must sacrifice some speed so that I can better maintain my code in future. So I will create a further level of abstraction. I create a standard interface, e.g. open, rotate, resize, save, etc., to my image as before. But when I use the open function I use a factory class to determine which libraries I am going to use to manipulate my image. The factory will check which platform is being used and what type of image it is, and return an object that implements my standard interface, specific to the platform. In this way I have future-proofed my image handling. I can add functionality, change backends, etc. to my image handling within the application without any other part of the application needing to know how it is being done.

I think security is the biggest issue in software development. You may be interested to read about a research operating system by Microsoft which moves operating systems into the 21st century. It's called Singularity. It has the following characteristics, which are quite revolutionary:
1. No shared memory. A program cannot access memory which does not belong to it.
2. No context switching. All programs run in the highest privilege space but are prevented from doing anything malicious because they all run in a sandbox - they can't do anything outside of their space. They can send a message to a message broker in order to communicate.
3. All programs are object code, i.e. they are not raw (compiled) instructions. They are checked beforehand to make sure they can't do anything malicious.
4. Programs cannot modify their own code.

Buffer overflows, bad drivers and things like that are not possible in this operating system. It certainly is highly fault tolerant. Read about it here: http://research.microsoft.com/pubs/69431/osr2007_rethinkingsoftwarestack.pdf The homepage is here: http://research.microsoft.com/en-us/projects/singularity/

P.S. Some code that is hard to maintain is there because there has been no incentive to create easily maintainable code. I mean, I get paid to develop x and y functionality. I then get paid to perform a and b maintenance activities. The effort involved to do a and b is inversely related to the effort for x and y. But the cost of a and b is not questioned, while the cost of x and y is questioned. So I shortcut x and y and defer the effort to a and b. So I suppose it is a management and incentive problem.
andrewjg | February 1, 2011 at 05:30 AM
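A minimal sketch, for readers who want to see the shape of the factory approach andrewjg describes above; the class and method names are hypothetical stand-ins for real platform image libraries hidden behind one stable interface:

# Minimal sketch of the factory approach described in the comment above.
# Class and method names are hypothetical; a real implementation would
# wrap actual platform image libraries behind the same interface.
import sys
from abc import ABC, abstractmethod

class ImageHandler(ABC):
    """The stable interface the rest of the application codes against."""
    @abstractmethod
    def open(self, path: str) -> None: ...
    @abstractmethod
    def rotate(self, degrees: float) -> None: ...
    @abstractmethod
    def save(self, path: str) -> None: ...

class WindowsImageHandler(ImageHandler):
    def open(self, path): print(f"[win32 backend] opening {path}")
    def rotate(self, degrees): print(f"[win32 backend] rotating {degrees} deg")
    def save(self, path): print(f"[win32 backend] saving {path}")

class PosixImageHandler(ImageHandler):
    def open(self, path): print(f"[posix backend] opening {path}")
    def rotate(self, degrees): print(f"[posix backend] rotating {degrees} deg")
    def save(self, path): print(f"[posix backend] saving {path}")

def image_handler_factory() -> ImageHandler:
    """Factory: picks a backend for the current platform; callers never
    see which library actually does the work."""
    if sys.platform.startswith("win"):
        return WindowsImageHandler()
    return PosixImageHandler()

handler = image_handler_factory()
handler.open("photo.png")
handler.rotate(90)
handler.save("photo_rotated.png")

The rest of the application codes only against ImageHandler; swapping or adding a backend touches only the factory, which is the maintainability point being made.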
LastYearOn, did you say natural processes explain humans? I sure would like to see your evidence (any evidence besides 'just so stories') for the 'natural' explanation for some of these characteristics of humans. The Amazing Human Body - Fearfully and Wonderfully Made - video: http://www.metacafe.com/watch/5289335/
bornagain77 | February 1, 2011 at 04:15 AM
lastyearon: "In this argument against the evolution of biological systems, Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn't mean that they aren't ultimately explainable naturally. Behe's argument is therefore circular."

This is really a pearl! So, you are saying that Behe's argument is circular because, as everybody knows, "natural processes do explain technology, by explaining humans". And, I suppose, "natural processes do explain technology, by explaining humans" because Behe's argument is circular, like probably all ID arguments. I think I have found another point we could add to those brilliantly listed by Haack (see the pertinent thread) to detect scientism: "Imagining circularity in others' arguments where it is not present, and supporting that statement by truly circular arguments".
gpuccio | February 1, 2011 at 02:01 AM
F/N: Re LYO @ 74: "Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans . . . Behe's argument is therefore circular. Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example."

Ironically, this is a case of making a circular assumption, then projecting it onto those whose argument one objects to.

If we go back to the first two articles in the ID foundations series -- here and here -- we will see that there is excellent reason to distinguish the credible capabilities of nature [here, chance + necessity] and art [i.e. intelligence . . . an OBSERVED entity in our world; we need not smuggle in assumptions about its nature or roots, just start with that basic fact, it is real]. Namely, as complex, functionally specific organised entities are in deeply isolated islands of function in large config spaces, undirected chance plus mechanical necessity will, on random walks from arbitrary initial points, scan so low a proportion of the configs that there is no good reason to expect them ever to land on such an island, on the gamut of the observed cosmos. This is the same basic reasoning as undergirds the second law of thermodynamics on why spontaneous trends go towards higher-probability, higher-entropy clusters of microstates: i.e. more and more random distributions of mass and energy at micro-levels. But we routinely see intelligence creating things that exhibit such FSCO/I. So, we have good -- observational and analytical -- reason to recognise such FSCO/I as a distinct characteristic of intelligence.

What LYO is trying to say is that once we ASSERT OR ASSUME that nature is a closed, evolutionary materialistic system, items of art such as computers "ultimately" trace to chance + necessity that somehow threw up humans. But we do not have a good right to such an assumption. Instead, we need to start from the world as we see it, where nature and art are distinct and distinguishable by their characteristic consequences and signs. When we do so, we see that the sign is evidence of the signified causal factor. On the strength of that, we then see that life shows FSCO/I and is credibly designed. Major body plans show FSCO/I and are credibly designed, and finally the cosmos as we see it shows that the physics to set it up is finely tuned, so that a complex pattern of factors sets up a cosmos that supports such C-chemistry intelligent life as we experience.

So, once we refuse to beg worldview-level questions at the outset of doing origins science, we see that a design view of origins is credible, and credibly a better explanation than blind chance + necessity in a strictly material world view. It should not be censored out or expelled, on pain of destructive ideologisation of science. Which, unfortunately, is precisely what is happening, as -- pardon the directness -- the materialistic establishment seems to value its agenda over open-mindedness.

GEM of TKI
kairosfocus | February 1, 2011 at 12:27 AM
lastyear @ 74: "There are many facts of nature that defy all logic and reason." I'm having a difficult time thinking of any. Maybe you could help me out. Share some of these facts with me. Thanks.
tgpeeler | January 31, 2011 at 03:29 PM
borne @ 73: "...or to regard anyone that does as the perfect fool." Indeed. The problem that I see over and over is that it is literally impossible to reason with fools. (How to reason with someone who rejects the epistemological authority of reason?) Particularly when they think they are the ones being rational. It would be hysterically funny if the consequences of foolishness were not eternal. But they are, thus it is infinitely tragic.
tgpeeler | January 31, 2011 at 03:11 PM
As in: are we exchanging one difficulty for another? I.e. is there a reasonable basis for the messy code that seems to be ever so common out there? G
kairosfocus | January 31, 2011 at 07:47 AM
AJG: Is there any really good complex software -- by that standard -- out there? What is it like to maintain it? Just curious . . . G
kairosfocus | January 31, 2011 at 06:23 AM
Charles@89: With regard to the maintenance issues you describe: those issues are only applicable to poorly designed software. Good software features a high degree of encapsulation, i.e. implementation details of the objects or modules making up the design are hidden and not relevant to the other parts of the software interacting with them. The interfaces are what is important. The biggest difficulty in software is getting the design or architecture correct given the assumptions, and often over time the assumptions change. For me, the fact that so much in life is shared at the cellular level is a strong indication of design. It speaks to an architecture in life which has been able to withstand and accommodate so much variety over so long a period.
andrewjg | January 29, 2011 at 10:11 AM
CJ: Okay, though of course the "evolution" as observed is most definitely intelligently -- not to be equated with "wisely" -- directed. The design view issue is: where do functionally specific complex organisation and associated information come from? A: As this case again illustrates, consistently, on observation: from intelligence, as the islands-of-function-in-large-config-spaces analysis indicates. And the software embrittlement challenge you highlighted shows that design does not necessarily have to be perfect to be recognised -- from its signs -- as design. It also highlights how robustness with adequate performance may be a sound solution, warts and all. [Office 97 still works adequately, thank you. So does Win XP. And OO is doing fine on far less space than MS Office.] GEM of TKI

PS: On loss of function, I am thinking not only of efficiency but of becoming crotchety, unstable and buggy. BTW, I once had a PC with a Win ME install that worked very well, and stably.
kairosfocus | January 29, 2011 at 09:31 AM
kairosfocus: The point of my intervention was simply to try to demonstrate that the complexity of a system evolving in a changing environment will tend to increase unless energy is invested to avoid it. I also wanted to show that an increase in the complexity of a system is not necessarily a good thing. I hope I've been able to explain this correctly through my last posts. I definitely don't think it will solve the debate on irreducible complexity and evolution, but I do think it's an aspect of the problem that should not be ignored. As a conclusion, I'd like to come back quickly to your point about loss-of-function. When a program is tested correctly, there is no reason for loss-of-function to happen, even if there is an increase in complexity. Most changes are usually transparent to the user (since I bought my last OS, there have been many patches, but the vast majority of them didn't change the way I interact with it). There is of course a loss in terms of efficiency as the program gets larger and more complex.
CharlesJ | January 29, 2011 at 09:01 AM
CJ: I see and take your focal point: maintenance, not bloat. (My opinions on the latter are no secret -- I really don't like the ribbon, and software with annoying features that cannot be turned off in any reasonably discoverable fashion. As for "personalisations" that end up mystifying anyone who needs to come in on short notice . . . )

I am not so sure, though, that thermodynamics is the right term, though I understand the issue of embrittlement due to implicit couplings and odd, unanticipated interactions. (I appreciate your point on deterioration due to embrittlement, thence eventual loss of function.) And I see your use of "natural" in the sense of an emergent trend that seems inexorable, once we face the inevitability of some errors and the implications of subtle interactions. I think, though, that that is distinct from: that which traces to spontaneous consequences of chance + mechanical necessity without need for directed contingency. Maybe we need to mark distinct senses of terms here.

Significant points . . . GEM of TKI
kairosfocus | January 29, 2011 at 07:52 AM
kairosfocus: I agree with you that the addition of new modules to a program is driven by human needs. But this is not what I mean when I say that the complexity of a program will increase over time. What I'm talking about is large programs (several hundred thousand lines of code) developed by large groups of programmers. There will always be flaws in those kinds of programs that will need to be addressed during maintenance. The problem is that when a change is made in a specific module, other changes have to be made in other parts of the program that are using this module. Over time, the situation tends to get worse, and even making what would seem to be a simple change requires hours and hours of programming to cope with side effects in other parts of the program. Eventually, the program will get so complex that it can't even be maintained anymore. The need to add new features to a program is driven by competition in the market, but the increase in complexity is simple thermodynamics.
CharlesJ | January 29, 2011 at 07:32 AM
CJ: The tendency to complexity is a reflection of the demands on programmers, driven in the end by the tech environment. The nature in question is HUMAN. GEM of TKI
kairosfocus | January 29, 2011 at 06:54 AM
Eugen: Pardon, missed your post in the rush; since you are a newbie, you will be in mod at first. Re 47 -- yup, even one component can have parts to it, or sections or shapes etc. that are functionally critical and information-rich, including stuff like the nonlinear behaviour of a bit of wire! (I have linked Behe's rebuttal on the mousetrap above now, too.) GEM of TKI

PS: IC, BTW, is not an "anti-evo" point but a pro-design point. Behe believes in common descent but identifies design as a key input to any significant evo. Even many modern YECs similarly hold with a rapid adaptation to fit niches, up to about the family level.
kairosfocus | January 29, 2011 at 06:52 AM
Joseph: They are definitely not doing this on purpose. No programmer is saying to himself: "I have to make this code as complex as possible." That's quite the opposite! There are many books aimed at teaching programmers to make code as simple as possible, to make maintenance much easier. "What natural tendency towards complexity?" It's entropy. Code won't start cleaning itself spontaneously; companies have to invest a lot of money and energy in order to keep code manageable.
CharlesJ | January 29, 2011 at 06:30 AM
Hi kairos. It seems that even one component per function could be "irreducible". Please check post 47.
Eugen | January 29, 2011 at 06:18 AM
Charles J:
But in the case of a large program, it’s happening against the will of the programmers.
And yet programmers are doing it.
It’s a natural tendency toward complexity that has to be fought against in order to keep the code manageable.
What natural tendency towards complexity?
Joseph | January 29, 2011 at 04:26 AM
LYO: I think you would do well to address the points Joseph has raised. Also, in so doing, please bear in mind that "evolution" in the sense of change across time of populations is not the same as Darwinian macro-evolutionary theory. There is also no observed evidence that empirically grounds the claim that accumulation of small changes due to chance variation and differential reproductive success is adequate to explain the origin of major body plans from one or a few unicellular common ancestors. And across time, on our observation, the known and credible source of the required FSCO/I is intelligence.

In the case of irreducibly complex systems, the issues C1 - 5 in the OP above point strongly to the need for foresighted creation of parts and sub-assemblies, and intelligent organisation of same, to achieve successful innovations. This covers not only the existence of body plans but the origin of life itself, the very first body plan, including the issue of the capacity for self-replication, a condition for any evolving by descent with modification.

Observe as well that the strong evidence is that adaptations, overwhelmingly, are by breaking the existing information in life forms, where it is adaptive to do so. There is no observational evidence of blind chance and mechanical necessity spontaneously creating novel functional bio-information of the order of 500 or more base pairs.

GEM of TKI
kairosfocus | January 28, 2011 at 10:08 PM
CJ: Interesting comment. Complex programs, in our observation, are invariably designed, using symbols and rules of meaning. That is, they are manifestations of language. Which is, again in our observation, a strong sign of agency, intelligence and art at work.

Going beyond, as systems are designed and developed, we are dealing with the intelligently directed "evolution" of technology. That is, we see here how an "evolution" that is happening right before our eyes is intelligently directed and is replete with signs of that intelligence, such as functionally specific, complex organisation and information, as well as, often, irreducible complexity [there are core parts that are each necessary for and jointly sufficient to achieve basic function of the whole].

What is our experience and observation of complex, language-based entities such as programs emerging by chance and mechanical necessity? Nil. What is the analysis that shows how likely that is? It tells us that in a very large sea of possible configs, islands of function will be deeply isolated and beyond the search resources of the observed cosmos on blind chance + necessity.

Now, you suggest that complex programs/applications "naturally" tend to become ever more complex. But actually, what you are seeing is that the expectations of customers and management get ever more stringent; including the pressure that the "new and improved" is what often drives the first surge of sales that may be the best hope for a profit. So, by the demands of human -- intelligent -- nature, competition creates a pressure for ever more features and performance. If one is satisfied with "good enough" one can get away with a lot: I still use my MS Office 97, and my more modern office suite is the Go OO fork of Open Office. It works quite well for most purposes, and keeps that ribbon interface at bay. Feature bloat is not the same as progress.

GEM of TKI
kairosfocus | January 28, 2011 at 09:58 PM
Joseph: But in the case of a large program, it's happening against the will of the programmers. It's a natural tendency toward complexity that has to be fought against in order to keep the code manageable.
CharlesJ | January 28, 2011 at 07:00 PM
CharlesJ:
What about large programs? With time, they become incredibly complex.
Not by chance, nor necessity, but by agency intervention. So the question should be: if every time we observe IC systems and know the cause, it has always been via agency involvement, can we infer agency involvement when we observe IC and don't (directly) know the cause, once we have eliminated chance and necessity?
Joseph | January 28, 2011 at 06:30 PM
What about large programs? With time, they become incredibly complex. What was once built using various independent modules becomes more and more complicated, with various parts becoming dependent on each other, even if it was not intended. Although every module should theoretically be independent, a time comes when it is harder and harder to make a simple change without having to recode many parts of the program. Until the program gets so complicated that it is unmanageable. Various strategies can be used to slow the process, but large programs will always tend to grow more complicated with time since they have to adapt to a changing environment (new OS, competition from other programs, client needs, etc...). So shouldn't the question be: is irreducible complexity even avoidable in a system evolving in a frequently changing environment?
CharlesJ | January 28, 2011 at 05:16 PM
PS: LYO, have you ever had to design a complex, multi-part functional hardware based system that had a lot of fiddly interfaces, and get it to work? What was the experience like?
kairosfocus | January 28, 2011 at 04:40 PM
lastyearon:
As for evidence, Irreducible Complexity is not evidence against evolution.
It is evidence for intelligent design. Ya see we are still stuck with the fact that there isn't any positive evidence that blind, undirected processes can construct functioning multi-part systems. lastyearon:
But IC does not rule out evolution.
IC strongly argues against the blind watchmaker and makes a positive case for intentional design. And there does come a point in which the IC system in question does rule out the blind watchmaker. lastyearon:
However natural processes do explain technology, by explaining humans.
If they did you would have a point. However, that is the big question. We know natural processes cannot explain nature: natural processes only exist in nature and therefore cannot account for its origins, which science has determined it had. lastyearon:
Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans.
Your problem is there isn't any evidence that nature can produce cells from scratch. There isn't any evidence that blind, undirected processes can do much of anything beyond breaking things - in biology anyway. Think of it this way: you still don't have any evidence that blind, undirected chemical processes can construct functional multi-part systems.
Joseph | January 28, 2011 at 04:35 PM
LYO: Do you ken what Borne is saying? GEM of TKI
kairosfocus | January 28, 2011 at 04:30 PM
Borne: Yup, only, the dependencies are in a 3-D mesh. G
kairosfocus | January 28, 2011 at 04:29 PM
Joseph:
Even Dr Behe said he wouldn’t categorically deny that those systems could have evolved by Darwinian processes, but it would defy all logic, reason and evidence.
There are many facts of nature that defy all logic and reason. As for evidence, Irreducible Complexity is not evidence against evolution. One may legitimately point to a lack of evidence for evolution. As in, 'wow that is a truly intricate and complex system. The evidence isn't strong enough for me to believe that this system evolved' And that certainly is debatable. It all depends on what you consider strong enough evidence. But IC does not rule out evolution. In Behe's quote, he states:
Further, it would go against all human experience, like postulating that a natural process might explain computers.
In this argument against the evolution of biological systems, Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects formed as the direct result of natural processes. And in important ways they are. But that doesn't mean that they aren't ultimately explainable naturally. Behe's argument is therefore circular. Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.
lastyearon | January 28, 2011 at 04:03 PM
I've read through some of this, so forgive me if I'm adding something already mentioned. A bit long, I'm afraid.

One of the key elements involved in any functional, multi-part system is the problem of combinatorial dependencies - i.e. parts that depend on parts that depend on still other parts - like we see in the flagellum. Moreover, we have in this code that depends on code that depends on code. As soon as we introduce dependencies, and more especially combinatorial dependencies (CD), we also introduce statistical mechanics. Engineers get this. The great majority of Darwinian biologists simply don't get it and thus bypass it as though it doesn't exist. Darwinists pretend CDs and statistical mechanics (SM) have no application in biology, or worse, they haven't got a ruddy clue what CD or SM is!

There are CDs in a flagellum. All component design specifications must meet specific physical criteria if any such motor is to work. We have to consider, for example:
- component sizes - must match with connected components
- component pliability
- component strength - capacity to resist external and internal forces applied to them, e.g. stress from torsion, shear, pressure, heat, etc.
- rotational forces and motility (e.g. revs per second), stiffness . . .
- energy requirements - to move parts
- component coupling - flexibility, rigidity
- component material - too soft = galling; too hard = fatigue and eventually cracking
- component clearance tolerances between parts

This is indeed a "goldilocks" situation. Components must be just right or it won't work. The probability of having all the components set to correct specs - allegedly by RMs + NS - is small indeed. And this is supposing that the component parts already exist; the laws of physics and chemistry alone do not guarantee such at all. Now add to this the algorithmic information (prescribed information - Abel, Trevors) needed to assemble the parts in the right order and you have an impossible situation in front of you. Nature, blind and without goals or purpose, is never going to assemble a flagellum - even supposing all the component protein parts already exist in the correct locus! The probability of this occurring by the laws of physics and chemistry + selection is ridiculously small. Let the Darwinists get over it and accept the obvious and properly inferred design.

Order is vital in this problem. So, the P of just getting the parts in the correct order is about 1/42! (given 42 protein parts, assuming the flagellum is made of such). Therefore, when Darwinists place their bets on the evolution lottery machine blindly accomplishing just one simple rotary engine by chance and necessity, it is truly a "tax for the stupid". "Yet," the Darwinist answers, "lottery tickets are still producing winners. Aha! Evolution could too!" Sorry, but this is a gross misunderstanding. If you had a single lottery wherein the gambler had to select the exact sequence of 42 numbers out of 42 numbers, it would be highly suspicious if anyone ever won. Yet evolution allegedly did this billions of times over since earth's life-supporting climate arrived! Incredible credulity is required to believe such nonsense. Around a 1 in 10^50th chance is hardly good news for Darwin. Breaking an encrypted key 42 bits long is no easy task even with intelligently conceived decrypting algorithms being employed on fast computers executing in gflops.

BUT! Nature isn't even trying to produce a functional anything - i.e. it isn't trying to win any lottery - it isn't "buying tickets"! It ain't ever tried to build a rotary engine or anything else under Darwinian theory. Amazing that Darwinists are so laughably gullible, not to mention pitifully ignorant, that they still push their idiotic theory as fact!

And all this is addressing a mere 42-part flagellum while assuming the parts already exist and are localized! So, pretend the parts are non-existent and have to be evolved themselves - all 42 or whatever it is exactly. Add localization. Then remember all the multitude of other "must exist first" components and then do the math. Even using wild approximations - conservative or liberal - you'll find astronomically low P values. You end up with virtually "impossible" written in huge letters all over Darwin's inane 'theory'. In light of this, IC can rightly be called Irreducible Dependencies. If this were indeed a lottery wherein you're betting everything you have, would you still bet for Darwin in the fat and pudgy weakling corner? Permit me to doubt it, or to regard anyone that does as the perfect fool.
Borne | January 28, 2011 at 03:58 PM
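As a quick arithmetic check on the ordering figure in Borne's comment above (a sketch that treats only the assembly order of 42 distinct, already-available parts, nothing else):

# Arithmetic check on the ordering sub-problem only: the number of possible
# assembly orders of 42 distinct parts, and the chance of hitting one
# specific order in a single blind try. Availability, localisation and
# interface matching are separate constraints not modelled here.
import math

parts = 42
orderings = math.factorial(parts)   # 42! possible assembly sequences
p_correct = 1 / orderings           # chance of one specific sequence per try

print(f"42! = {orderings:.3e} possible orderings")
print(f"P(correct order in one try) = {p_correct:.3e}")
print(f"Equivalent search space: about {math.log2(orderings):.0f} bits")

This bounds only the ordering sub-problem; availability, synchronization, localization and interface matching (C1 - C5 in the OP) are additional constraints on top of it.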
