
ID Foundations, 3: Irreducible Complexity as concept, as fact, as [macro-]evolution obstacle, and as a sign of design


[Continued from here]

In steps:

1: What it is — Wiki has a helpful summary:

>> A gene knockout (abbreviation: KO) is a genetic technique in which an organism is engineered to carry genes that have been made inoperative (have been “knocked out” of the organism). Also known as knockout organisms or simply knockouts, they are used in learning about a gene that has been sequenced, but which has an unknown or incompletely known function. Researchers draw inferences from the difference between the knockout organism and normal individuals. >>

a –> That is, the idea is that genes encode proteins that carry out functions (do jobs) in the cell, so if a gene is knocked out, its protein is lost and the function is lost with it

b –> So, the animal [mice are typical] with the knocked-out gene will show a gap in function relative to a normal, typical mouse.

c –> Notice, the heart of the technique is the functional-part concept: the protein does a job, so if it is lost, that job is blocked, and we may infer the function from the difference between the KO animal and the normal one, e.g. turn off a gene for hairiness or one that controls how fat an animal tends to be, etc.

d –> And, on logic: if we have a cluster of parts, each of which is necessary for function, and which all together are jointly sufficient for function, we have an irreducibly complex entity. (A toy sketch of this knockout logic follows point e below.)

e –> So, what Scott Minnich did is reasonable . . .
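Here, as a toy illustration of the logic in point d, is a minimal sketch; the gene names and the assay function are hypothetical stand-ins, not real data:

```python
# Toy model of the knockout (KO) logic in point d: a function "works"
# only if every member of a core set of parts is present.
# Gene names and the assay below are illustrative stand-ins.

CORE_GENES = {"geneA", "geneB", "geneC", "geneD"}   # hypothetical core parts

def assay(genes_present):
    """Stand-in for a functional assay: does the function appear?"""
    return CORE_GENES.issubset(genes_present)

def is_irreducibly_complex(parts, assay_fn):
    """Operational test: the full set is jointly sufficient,
    and knocking out any single part abolishes the function."""
    if not assay_fn(parts):                    # jointly sufficient?
        return False
    return all(not assay_fn(parts - {p})       # each part necessary?
               for p in parts)

print(is_irreducibly_complex(set(CORE_GENES), assay))   # True in this toy model
```

On this operational reading, a KO series in which every single-gene knockout loses the function, while the intact set retains it, is reporting exactly the each-necessary, jointly-sufficient pattern described in point d.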

2: How tweredun — here, the Cytokines & Cells Encyclopedia article is helpful on two typical techniques (and has a handy diagram):

>> (a) Use of insertion type vectors involves a single cross-over between genomic target sequences and homologous sequences at either end of the targeting vector. The neomycin resistance gene contained within the vector serves as a positive selectable marker.

(b) Gene targeting using replacement type vectors requires two cross-over events. The positive selection marker (neo) is retained while the negative selectable marker (HSV thymidine kinase) is lost. The advantage of this system is the fact that cells harboring randomly and unspecifically integrated gene constructs still carry the thymidine kinase gene. These cells can be eliminated selectively by using thymidine kinase as a selective marker . . . >>

f –> The idea here for (a) is that the target point in a gene is split, the homologous sequence is duplicated on either side, and a marker is pushed in between, breaking gene function and marking where the break was made so it can be observationally confirmed (a toy code illustration follows point i below):

g –> original DNA sequence:

– 1 2 3 4 5 6 7 8 9 10 –

h –> to — with “n e o” as marker:

– 1 2 3 4 5 6 7 — n e o – 2 3 4 5 6 7 8 9 10 –

i –> For (b), go look at the diagram in the linked article.
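A few lines of code can mimic the outcome sketched in steps g and h. This is only a toy string model, assuming the insertion vector carries a homologous copy of positions 2-7 plus the neo marker, so that a single crossover leaves that stretch duplicated around the marker:

```python
# Toy string model of an insertion-type knockout vector (steps g and h).
# "Positions" are the numbered bases of the original sequence above.

original = "1 2 3 4 5 6 7 8 9 10".split()

# Assume the vector carries the homologous stretch 2..7 plus "neo";
# a single crossover inside that stretch linearises the vector into
# the gene, so the homology ends up duplicated around the marker.
crossover = 7                                   # crossover after position 7
knockout = original[:crossover] + ["neo"] + original[1:]

print(" - ".join(original))   # 1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10
print(" - ".join(knockout))   # 1 - 2 - 3 - 4 - 5 - 6 - 7 - neo - 2 - 3 ... 10
```

The duplicated homology with the embedded marker is what makes the disruption easy to confirm: probing for the neo sequence at the target locus shows where the break was made.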

3: In praxis — Wiki (same article) is helpful again:

>> Knockout is accomplished through a combination of techniques, beginning in the test tube with a plasmid, a bacterial artificial chromosome or other DNA construct, and proceeding to cell culture. Individual cells are genetically transformed with the DNA construct. Often the goal is to create a transgenic animal that has the altered gene. If so, embryonic stem cells are genetically transformed and inserted into early embryos. Resulting animals with the genetic change in their germline cells can then often pass the gene knockout to future generations. >>

j –> In other words, a new constructed clone with the knockout is grown into a full animal from an early embryo.

k –> BTW, something like 15% [~ 1 in 6] of cloned KO mice die in embryonic stages, so the technique also reveals how risky mutations are for embryonic development. This is itself a challenge to claims that chance mutations played a big role in the origin of body plans.

l –> Next, a lab strain can be created by breeding KO animals — it being much cheaper and more reliable to reproduce the old-fashioned way. (There is actually a market in specific strains for particular types of research; e.g. Methuselah is a long-lived mouse strain.)

So, KO studies, an established research technique, are based on the reality of IC in biological organisms.

We already know that this is an informational barrier to evolvability, once we are beyond the FSCO/I-type threshold. Also, as Behe pointed out in later studies on patterns of observed evolution in the malaria parasite, there seems to be an empirical barrier at the double-mutation point, i.e. there is a credible edge of evolution. Of course, we may comfortably accept many exceptions to this barrier — once they are empirically demonstrated — and still not have reached the relevant level of challenge: the origin of complex organs, facilities or body plan features that show irreducible complexity. And, credibly, the role of the von Neumann-type self-replicator in the ability of cells to replicate themselves while also carrying out metabolism puts an IC barrier right at the origin of life — the root of Darwin’s tree of life — itself.
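To see roughly why a double-mutation requirement is such a barrier, here is a back-of-envelope sketch; the per-copy mutation probability is a generic ballpark assumption used purely for illustration, not a figure taken from Behe or from this post:

```python
# Back-of-envelope arithmetic for a "two specific mutations needed
# together" scenario. The rate below is an assumed ballpark figure.

p_point = 1e-8            # assumed chance of one specific point mutation per genome copy
p_double = p_point ** 2   # both specific mutations arising in the same copy

print(f"P(both mutations in one replication) ~ {p_double:.0e}")   # ~1e-16
print(f"Replications expected before one hit ~ {1 / p_double:.0e}")  # ~1e+16
# Vast microbial populations can pay that cost; populations of large,
# slow-reproducing animals generally cannot - which is the sense in
# which an empirical "edge" at the double-mutation level is claimed.
```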

In addition:

“There is now considerable evidence that genes alone do not control development. For example when an egg’s genes (DNA) are removed and replaced with genes (DNA) from another type of animal, development follows the pattern of the original egg until the embryo dies from lack of the right proteins. (The rare exceptions to this rule involve animals that could normally mate to produce hybrids.) The Jurassic Park approach of putting dinosaur DNA into ostrich eggs to produce a Tyrannosaurus rex makes exciting fiction but ignores scientific fact.” [The Design of Life – William Dembski, Jonathan Wells Pg. 50. Emphasis added.  HT: BA 77]

But, there is a fifth barrier involved, which can perhaps be best seen in the recently studied case of the origin of birds. To appreciate the point, let us provide a picture of a flight-type feather to show how it is structured and how it works in context in the wing:

Fig. C: The feather, showing the complex interlocking required for function (Courtesy, Wiki)

Fig. D: Flight feathers and the wing, which also requires support musculature and sophisticated control and co-ordination (Courtesy, Wiki CCA, L. Shyama)

As ENV currently reports:

In a peer-reviewed paper titled “Evidence of Design in Bird Feathers and Avian Respiration,” [US $ 50 paywall] in International Journal of Design & Nature and Ecodynamics, Leeds University professor Andy McIntosh argues that two systems vital to bird flight–feathers and the avian respiratory system–exhibit “irreducible complexity [defined as Behe does]”  . . . .

Regarding the structure of feathers, he argues that they require many features present in order to properly function and allow flight:

[I]t is not sufficient to simply have barbules to appear from the barbs but that opposing barbules must have opposite characteristics – that is, hooks on one side of the barb and ridges on the other so that adjacent barbs become attached by hooked barbules from one barb attaching themselves to ridged barbules from the next barb (Fig. 4). It may well be that as Yu et al. [18] suggested, a critical protein is indeed present in such living systems (birds) which have feathers in order to form feather branching, but that does not solve the arrangement issue concerning left-handed and right-handed barbules. It is that vital network of barbules which is necessarily a function of the encoded information (software) in the genes. Functional information is vital to such systems.

He further notes that many evolutionary authors “look for evidence that true feathers developed first in small non-flying dinosaurs before the advent of flight, possibly as a means of increasing insulation for the warm-blooded species that were emerging.” However, he finds that when it comes to fossil evidence for the evolution of feathers, “[n]one of the fossil evidence shows any evidence of such transitions.”

Regarding the avian respiratory system, McIntosh contends that a functional transition from a purported reptilian respiratory system to the avian design would lead to non-functional intermediate stages. He quotes John Ruben stating, “The earliest stages in the derivation of the avian abdominal air sac system from a diaphragm-ventilating ancestor would have necessitated selection for a diaphragmatic hernia in taxa transitional between theropods and birds. Such a debilitating condition would have immediately compromised the entire pulmonary ventilatory apparatus and seems unlikely to have been of any selective advantage.” With such unique constraints in mind, McIntosh argues that “even if one does take the fossil evidence as the record of development, the evidence is in fact much more consistent with an ab initio design position – that the breathing mechanism of birds is in fact the product of intelligent design.”

Indeed, the first of these examples is not a new one. The co-founder of evolutionary theory (and an early advocate of “Intelligent Evolution”), Alfred Russel Wallace, argued in The World of Life; A Manifestation of Creative Power, Directive Mind and Ultimate Purpose (Chapman and Hall, 1914 [orig. 1911]), here (36 MB):

. . . the bird’s wing seems to me to be, of all the mere mechanical organs of any living thing, that which most clearly implies the working out of a pre-conceived design in a new and apparently most complex | and difficult manner, yet so as to produce a marvellously successful result. The idea worked out was to reduce the jointed bony framework of the wings to a compact minimum of size and maximum of strength in proportion to the muscular power employed ; to enlarge the breastbone so as to give room for greatly increased power of pectoral muscles ; and to construct that part of the wing used in flight in such a manner as to combine great strength with extreme lightness and the most perfect flexibility. In order to produce this more perfect instrument for flight the plan of a continuous membrane, as in the flying reptiles (whose origin was probably contemporaneous with that of the earliest birds) and flying mammals, to be developed at a much later period, was rejected, and its place was taken by a series of broad overlapping oars or vanes, formed by a central rib of extreme strength, elasticity, and lightness, with a web on each side made up of myriads of parts or outgrowths so wonderfully attached and interlocked as to form a self-supporting, highly elastic structure of almost inconceivable delicacy, very easily pierced or ruptured by the impact of solid substances, yet able to sustain almost any amount of air-pressure without injury. [287 – 88] . . . .

A great deal has been written on the mechanics of a bird’s flight, as dependent on the form and curvature of the feathers and of the entire wing, the powerful muscular arrangements, and especially the perfection of the adjustment by which during the rapid down-stroke the combined feathers constitute a perfectly air-tight, exceedingly strong, yet highly elastic instrument for flight ; while the moment the upward motion begins the feathers all turn upon their axes so that the air passes between them with hardly any resistance, and when they again begin the down-stroke close up automatically as air-tight as before. Thus the effective down-strokes follow each other so rapidly that, together with the support given by the hinder portion of the wings and tail, the onward motion is kept up, and the strongest flying birds exhibit hardly any undulation in the course they are pursuing. But very little is said about the minute structure of the feathers themselves, which are what renders perfect flight in almost every change of conditions a possibility and an actually achieved result.

But there is a further difference between this instrument of flight and all others in nature. It is not, except during actual growth, a part of the living organism, but a mechanical | instrument which the organism has built up, and which then ceases to form an integral portion of it – is, in fact, dead matter. [290 – 1]

In short, Wallace sees in the complex, specific, functional mechanisms and organisation of organs and limbs such as the wings of a bird, marks of purposeful organisation and underlying intelligent direction, much as Paley had in his day. However, this is seen with a significant difference: to Wallace, the design comes about through evolutionary means, and is rooted in an underlying purpose of designed diversity in nature working through organising principles.

Intelligent design, using evolutionary means, in short.

_____________________

So, we may freely conclude: however we may debate mechanisms, the point remains that irreducible complexity is a clear obstacle to evolution by chance-based Darwinian-type processes of variation and selection by differential reproductive success, especially when the issue of the origin of complex organs and body plans is on the table. In short:

That challenge was unmet in 1859.

It was still unmet in 1911.

It remained unmet in 1996.

And, it is still unmet today.

For, irreducible complexity is a strong, empirically well-supported sign pointing to intelligent design. END

Comments
AJG: Interesting. And there actually is some proper code out there! G
kairosfocus
February 1, 2011, 12:04 PM PDT
kairosfocus@94: Yes. There is a lot of messy code out there. I have written some of it. Quite embarrassed to look at some of the stuff I have done, although I am not a full-time programmer.

Before I say anything else, let me say that code that is maintainable is inevitably well planned. It also normally has gone through a few design iterations before the problem domain is sufficiently understood for the design to be good. Which is another reason I find it hard to believe any unplanned system could be so efficient and have stood up over so many years.

I think the following are examples of excellent software:

- Qt 4
- Symfony 2.0
- YUI3

None of those are applications themselves. They are application frameworks, toolkits, etc., but the same problems apply. Maintenance programming is not the problem with software if it is well designed. By that I mean it has the following characteristics:

1. Layered
2. Modular
3. Encapsulated
4. Abstracted
5. Is DRY ("Don't repeat yourself")
6. Loosely coupled - normally, anyway

I am sure there are more, but that comes to mind.

For example, say my application opens images in various places. Given that I am only supporting one OS or environment, I could just call the functions needed to perform the actions I want on the image inline. But that would lead to a maintenance nightmare, because I would be duplicating a lot of code and I would be violating my layering and encapsulation rules. So instead I create a class which contains methods which will perform the tasks I am going to be using over and over again.

Now suppose I want to support a number of platforms and any number of future ones. I must sacrifice some speed so that I can better maintain my code in future. So I will create a further level of abstraction. I create a standard interface - e.g. open, rotate, resize, save, etc. - to my image as before. But when I use the open function I use a factory class to determine which libraries I am going to use to manipulate my image. The factory will check which platform is being used and what type of image it is, and return an object that implements my standard interface specific to the platform. In this way I have future-proofed my image handling. I can add functionality, change backends, etc. within the application without any other part of the application needing to know how it is being done.

I think security is the biggest issue in software development. You may be interested to read about a research operating system by Microsoft which moves operating systems into the 21st century. It's called Singularity. It has the following characteristics, which are quite revolutionary:

1. No shared memory. A program cannot access memory which does not belong to it.
2. No context switching. All programs run in the highest privilege space but are prevented from doing anything malicious because they all run in a sandbox - they can't do anything outside of their space. They can send a message to a message broker in order to communicate.
3. All programs are object code, i.e. they are not raw (compiled) instructions. They are checked beforehand to make sure they can't do anything malicious.
4. Programs cannot modify their own code.

Buffer overflows, bad drivers and things like that are not possible in this operating system. It certainly is highly fault tolerant. Read about it here: http://research.microsoft.com/pubs/69431/osr2007_rethinkingsoftwarestack.pdf The homepage is here: http://research.microsoft.com/en-us/projects/singularity/

P.S. Some code that is hard to maintain is there because there has been no incentive to create easily maintainable code. I mean, I get paid to develop x and y functionality. I then get paid to perform a and b maintenance activities. The effort involved to do a and b is inversely related to the effort for x and y. But the cost of a and b is not questioned, while the cost of x and y is questioned. So I shortcut x and y and defer the effort to a and b. So I suppose it is a management and incentive problem.
andrewjg
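The interface-plus-factory arrangement described just above might look something like this minimal Python sketch; the class names, backends and platform check are invented purely for illustration, not a real library API:

```python
# Minimal sketch of the pattern described above: callers code against a
# stable image interface, and a factory picks a platform-specific backend.
# All names here are invented for illustration.

from abc import ABC, abstractmethod
import sys

class ImageHandler(ABC):
    """The standard interface the rest of the application uses."""
    @abstractmethod
    def open(self, path): ...
    @abstractmethod
    def rotate(self, degrees): ...
    @abstractmethod
    def save(self, path): ...

class WindowsImageHandler(ImageHandler):
    def open(self, path): print(f"[win backend] opening {path}")
    def rotate(self, degrees): print(f"[win backend] rotating {degrees} degrees")
    def save(self, path): print(f"[win backend] saving {path}")

class PosixImageHandler(ImageHandler):
    def open(self, path): print(f"[posix backend] opening {path}")
    def rotate(self, degrees): print(f"[posix backend] rotating {degrees} degrees")
    def save(self, path): print(f"[posix backend] saving {path}")

def image_handler_factory() -> ImageHandler:
    """Factory: choose a backend for the current platform; callers only
    ever see the ImageHandler interface, so backends can be swapped freely."""
    if sys.platform.startswith("win"):
        return WindowsImageHandler()
    return PosixImageHandler()

img = image_handler_factory()
img.open("photo.png")
img.rotate(90)
img.save("photo_rotated.png")
```

The point being made shows up in the last four lines: the calling code never names a concrete backend, so adding or replacing one later does not ripple through the rest of the application.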
February 1, 2011, 05:30 AM PDT
LastYearOn, Did you say Natural Processes explain humans? I sure would like to see your evidence (any evidence besides 'just so stories') for the 'natural' explanation for some of these characteristics of humans. The Amazing Human Body - Fearfully and Wonderfully Made - video: http://www.metacafe.com/watch/5289335/
bornagain77
February 1, 2011, 04:15 AM PDT
lastyearon:

In this argument against the evolution of biological systems, Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn’t mean that they aren’t ultimately explainable naturally. Behe’s argument is therefore circular.

This is really a pearl! So, you are saying that Behe's argument is circular because, as everybody knows, "natural processes do explain technology, by explaining humans". And, I suppose, "natural processes do explain technology, by explaining humans" because Behe's argument is circular, like probably all ID arguments. I think I have found another point we could add to those brilliantly listed by Haack (see the pertinent thread) to detect scientism: "Imagining circularity in others' arguments where it is not present, and supporting that statement by truly circular arguments".
gpuccio
February 1, 2011, 02:01 AM PDT
F/N: Re, LYO @ 74:

Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans . . . Behe’s argument is therefore circular. Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.

Ironically, this is a case of making a circular assumption, then projecting it onto those whose argument one objects to. If we go back to the first two articles in the ID foundations series -- here and here -- we will see that there is excellent reason to distinguish the credible capabilities of nature [here, chance + necessity] and art [i.e. intelligence . . . an OBSERVED entity in our world; we need not smuggle in assumptions about its nature or roots, just start with that basic fact, it is real].

Namely, as complex, functionally specific organised entities are in deeply isolated islands of function in large config spaces, undirected chance plus mechanical necessity will, on random walks from arbitrary initial points, scan so low a proportion of the configs that there is no good reason to expect them to ever land on such an island, on the gamut of the observed cosmos. This is the same basic reasoning as undergirds the second law of thermodynamics on why spontaneous trends go towards higher probability, higher entropy clusters of microstates: i.e. more and more random distributions of mass and energy at micro-levels. But, we routinely see intelligence creating things that exhibit such FSCO/I. So, we have good -- observational and analytical -- reason to recognise such FSCO/I as a distinct characteristic of intelligence.

What LYO is trying to say is that once we ASSERT OR ASSUME that nature is a closed, evolutionary materialistic system, items of art such as computers "ultimately" trace to chance + necessity that somehow threw up humans. But we do not have a good right to such an assumption. Instead, we need to start from the world as we see it, where nature and art are distinct and distinguishable on their characteristic consequences and signs. When we do so, we see that the sign is evidence of the signified causal factor. On the strength of that, we then see that life shows FSCO/I and is credibly designed. Major body plans show FSCO/I and are credibly designed, and finally the cosmos as we see it shows that the physics to set it up is finely tuned, so that a complex pattern of factors sets up a cosmos that supports such C-Chemistry intelligent life as we experience.

So, once we refuse to beg worldview-level questions at the outset of doing origins science, we see that a design view of origins is credible, and credibly a better explanation than blind chance + necessity in a strictly material world view. It should not be censored out or expelled, on pain of destructive ideologisation of science. Which, unfortunately, is precisely what is happening, as -- pardon the directness -- the materialistic establishment seems to value their agenda over open-mindedness.

GEM of TKI
kairosfocus
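To put rough numbers on the "search resources of the observed cosmos" point, here is a back-of-envelope sketch. The resource figures (~10^80 atoms, ~10^45 state changes per second, ~10^17 seconds) and the 500-bit configuration space are assumed, generous upper bounds often used in these discussions, not anything established on this page:

```python
# Back-of-envelope version of the "islands of function" search argument.
# All figures are assumed upper bounds used for illustration only.

from math import log10

total_configs = 2 ** 500                              # a 500-bit config space
max_samples = (10 ** 80) * (10 ** 45) * (10 ** 17)    # ~10^142 possible "searches"

print(f"config space is about 10^{log10(total_configs):.0f} states")
print(f"upper bound on samples is about 10^{log10(max_samples):.0f}")
print(f"fraction searchable is about 10^{log10(max_samples / total_configs):.0f}")
# Even on these generous assumptions, the searchable fraction works out
# to only a few parts in a billion of the whole space.
```

Whatever one makes of the premises, the arithmetic itself is what the comment leans on: the sampling bound covers only a vanishing fraction of a 500-bit configuration space.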
February 1, 2011, 12:27 AM PDT
lastyear @ 74: "There are many facts of nature that defy all logic and reason." I'm having a difficult time thinking of any. Maybe you could help me out. Share some of these facts with me. Thanks.
tgpeeler
January 31, 2011, 03:29 PM PDT
borne @ 73 "...or to regard anyone that does as the perfect fool." Indeed. The problem that I see over and over is that it is literally impossible to reason with fools. (How to reason with someone who rejects the epistemological authority of reason?) Particularly when they think they are the ones being rational. It would be hysterically funny if the consequences of foolishness were not eternal. But they are, thus it is infinitely tragic.tgpeeler
January 31, 2011, 03:11 PM PDT
As in: are we exchanging one difficulty for another? I.e. is there a reasonable basis for the messy code that seems to be ever so common out there? G
kairosfocus
January 31, 2011, 07:47 AM PDT
AJG: Is there any really good complex software -- by that standard -- out there? What is it like to maintain it? Just curious . . . G
kairosfocus
January 31, 2011, 06:23 AM PDT
Charles@89: With regard to the maintenance issues you describe, those issues are only applicable to poorly designed software. Good software features a high degree of encapsulation, i.e. implementation details of the objects or modules making up the design are hidden and not relevant to other parts of the software interacting with it. The interfaces are what is important. The biggest difficulty in software is getting the design or architecture correct given the assumptions. And often over time the assumptions change. For me the fact that so much in life is shared at the cellular level is a strong indication of design. It speaks to an architecture in life which has been able to withstand and accommodate so much variety in life over so long a period.
andrewjg
January 29, 2011, 10:11 AM PDT
CJ: Okay, though of course the "evolution" as observed is most definitely intelligently -- not to be equated with "wisely" -- directed. The design view issue is, where do functionally specific complex organisation and associated information come from? A: As this case again illustrates, consistently, on observation: from intelligence, as the islands-of-function-in-large-config-spaces analysis points out. And, the software embrittlement challenge you highlighted shows that design does not necessarily have to be perfect to be recognised -- from its signs -- as design. It also highlights how robustness with adequate performance may be a sound solution, warts and all. [Office 97 still works adequately, thank you. So does Win XP. And, OO is doing fine on far less space than MS Office.] GEM of TKI

PS: On loss of function, I am thinking not only of efficiency but of becoming crotchety, unstable and buggy. BTW, I once had a PC with a Win ME install that worked very well, and stably.
kairosfocus
January 29, 2011, 09:31 AM PDT
kairosfocus: The point of my intervention was simply to try to demonstrate that the complexity of a system evolving in a changing environment will tend to increase unless energy is invested to avoid it. I also wanted to show that an increase in the complexity of a system is not necessarily a good thing. I hope I’ve been able to explain this correctly through my last posts. I definitely don’t think it will solve the debate on irreducible complexity and evolution, but I do think it’s an aspect of the problem that should not be ignored. As a conclusion, I’d like to come back quickly to your point about loss-of-function. When a program is tested correctly, there is no reason for loss-of-function to happen, even if there is an increase in complexity. Most changes are usually transparent to the user (since I bought my last OS, there have been many patches, but the vast majority of them didn’t change the way I interact with it). There is of course a loss in terms of efficiency as the program gets larger and more complex.
CharlesJ
January 29, 2011, 09:01 AM PDT
CJ: I see and take your focal point: maintenance not bloat. (My opinions on the latter are no secret -- I really don't like the ribbon and software with annoying features that cannot be turned off in any reasonably discoverable fashion. As for "personalisations" that end up mystifying anyone who needs to come in on short notice . . . ) I am not so sure, though, that thermodynamics is the right term, though I understand the issue of embrittlement due to implicit couplings and odd, unanticipated interactions. (I appreciate your point on deterioration due to embrittlement, thence eventual loss of function.) And I see your use of "natural" in the sense of an emergent trend that seems inexorable, once we face the inevitability of some errors and the implications of subtle interactions. I think though that that is distinct from: that which traces to spontaneous consequences of chance + mechanical necessity without need for directed contingency. maybe we need to mark distinct senses of terms here. Significant points . . . GEM of TKIkairosfocus
January 29, 2011, 07:52 AM PDT
kairosfocus: I agree with you that the addition of new modules to a program is driven by human needs. But this is not what I mean when I say that the complexity of a program will increase over time. What I’m talking about is large programs (several hundred thousand lines of code) developed by large groups of programmers. There will always be flaws in those kinds of programs that will need to be addressed during maintenance. The problem is that when a change is made in a specific module, other changes have to be made in other parts of the program that are using this module. Over time, the situation tends to get worse, and even making what would seem to be a simple change requires hours and hours of programming to cope with side effects in other parts of the program. Eventually, the program will get so complex that it can’t even be maintained anymore. The need to add new features to a program is driven by competition in the market, but the increase in complexity is simple thermodynamics.
CharlesJ
January 29, 2011, 07:32 AM PDT
CJ: The tendency to complexity is a reflection of the demands on programmers, driven in the end by the tech environment. The nature in question is HUMAN. GEM of TKI
kairosfocus
January 29, 2011, 06:54 AM PDT
Eugen: Pardon, missed your post in the rush, since you are a newbie, you will be in mod at first. Re 47 -- yup, even one component can have parts to it or sections or shapes etc that are functionally critical and information-rich, including stuff like the nonlinear behaviour of a bit of wire! (I have linked Behe's rebuttal on the mousetrap above now, too.) GEM of TKI PS: IC, BTW is not an "anti-evo" point but a pro-design point. Behe believes in common descent but identifies design as a key input to any significant evo. Even many modern YEC's similarly hold with a rapid adaptation to fit niches, up to about the family level.kairosfocus
January 29, 2011, 06:52 AM PDT
Joseph: They are definitely not doing this on purpose. No programmer is saying to himself: I have to make this code as complex as possible. That’s quite the opposite! There are many books aimed at teaching programmers to make code as simple as possible so that maintenance is much easier. What natural tendency towards complexity? It’s entropy. Code won’t start cleaning itself spontaneously; companies have to invest a lot of money and energy in order to keep code manageable.
CharlesJ
January 29, 2011, 06:30 AM PDT
Hi kairos. It seems that even one component per function could be "irreducible". Please check post 47.
Eugen
January 29, 2011, 06:18 AM PDT
Charles J:
But in the case of a large program, it’s happening against the will of the programmers.
And yet programmers are doing it.
It’s a natural tendency toward complexity that has to be fought against in order to keep the code manageable.
What natural tendency towards complexity?
Joseph
January 29, 2011, 04:26 AM PDT
LYO: I think you would do well to address the points Joseph has raised. Also, in so doing, please bear in mind that "evolution" in the sense of change across time of populations is not the same as Darwinian macro-evolutionary theory.

There is also no observed evidence that empirically grounds the claim that accumulation of small changes due to chance variation and differential reproductive success is adequate to explain the origin of major body plans from one or a few unicellular common ancestors. The required FSCO/I, across time, on our observation of the known and credible source of such information, is intelligence. In the case of irreducibly complex systems, the issues C1 - 5 in the OP above point strongly to the need for foresighted creation of parts, sub-assemblies, and intelligent organisation of same to achieve successful innovations. This not only covers the existence of body plans, but the origin of life itself, the very first body plan, including the issue of the capacity for self-replication, a condition for any evolving by descent with modification.

Observe as well that the strong evidence is that adaptations, overwhelmingly, are by breaking the existing information in life forms, where it is adaptive to do so. There is no observational evidence of the spontaneous origin, by blind chance and mechanical necessity, of novel functional bio-information of the order of 500 or more base pairs.

GEM of TKI
kairosfocus
January 28, 2011, 10:08 PM PDT
CJ: Interesting comment. Complex programs, in our observation, are invariably designed, using symbols and rules of meaning. That is, they are manifestations of language. Which is, again in our observation, a strong sign of agency, intelligence and art at work.

Going beyond, as systems are designed and developed, we are dealing with the intelligently directed "evolution" of technology. That is, we see here how an "evolution" that is happening right before our eyes is intelligently directed and is replete with signs of that intelligence, such as functionally specific, complex organisation and information, as well as, often, irreducible complexity [there are core parts that are each necessary for and jointly sufficient to achieve basic function of the whole].

What is our experience and observation of complex language-based entities such as programs emerging by chance and mechanical necessity? Nil. What is the analysis that shows how likely that is? It tells us that in a very large sea of possible configs, islands of function will be deeply isolated and beyond the search resources of the observed cosmos on blind chance + necessity.

Now, you suggest that complex programs/applications "naturally" tend to become ever more complex. But actually, what you are seeing is that the expectations of customers and management get ever more stringent; including the pressure that the "new and improved" is what often drives the first surge of sales that may be the best hope for a profit. So, by the demands of human -- intelligent -- nature, competition creates a pressure for ever more features and performance. If one is satisfied with "good enough" one can get away with a lot: I still use my MS Office 97, and my more modern office suite is the Go OO fork of Open Office. Works quite well for most purposes, and keeps that ribbon interface at bay. Feature bloat is not the same as progress.

GEM of TKI
kairosfocus
January 28, 2011, 09:58 PM PDT
Joseph: But in the case of a large program, it’s happening against the will of the programmers. It’s a natural tendency toward complexity that has to be fought against in order to keep the code manageable.
CharlesJ
January 28, 2011, 07:00 PM PDT
CharlesJ:
What about large programs? With time, they become incredibly complex.
Not by chance, nor necessity, but by agency intervention. So the question should be: if every time we observe IC systems and know the cause it has always been via agency involvement, can we infer agency involvement when we observe IC and don't (directly) know the cause, once we have eliminated chance and necessity?
Joseph
January 28, 2011, 06:30 PM PDT
What about large programs? With time, they become incredibly complex. What was once built using various independent modules becomes more and more complicated, with various parts becoming dependent on each other, even if that was not intended. Although every module should theoretically be independent, a time comes when it is harder and harder to make a simple change without having to recode many parts of the program. Until the program gets so complicated it's unmanageable. Various strategies can be used to slow the process, but large programs will always tend to grow more complicated with time since they have to adapt to a changing environment (new OS, competition from other programs, client needs, etc.). So shouldn't the question be: is irreducible complexity even avoidable in a system evolving in a frequently changing environment?
CharlesJ
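The ripple effect being described can be made concrete with a toy dependency graph; the module names and edges below are invented purely for illustration:

```python
# Toy model of change ripple in a program: if module X changes, every
# module that (transitively) depends on X may also need rework.
# The graph is invented for illustration only.

DEPENDS_ON = {                     # module -> modules it uses
    "ui":      {"reports", "auth"},
    "reports": {"db", "cache"},
    "auth":    {"db"},
    "cache":   {"db"},
    "db":      set(),
}

def affected_by(changed, deps):
    """Return all modules that directly or indirectly use `changed`."""
    hit, frontier = set(), {changed}
    while frontier:
        users = {m for m, uses in deps.items() if uses & frontier}
        frontier = users - hit
        hit |= users
    return hit

print(affected_by("db", DEPENDS_ON))   # {'reports', 'auth', 'cache', 'ui'} (order may vary)
```

In this toy graph a change to one low-level module can, in principle, touch every other module, which is the maintenance cost the comment is pointing at.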
January 28, 2011, 05:16 PM PDT
PS: LYO, have you ever had to design a complex, multi-part functional hardware-based system that had a lot of fiddly interfaces, and get it to work? What was the experience like?
kairosfocus
January 28, 2011, 04:40 PM PDT
lastyearon:
As for evidence, Irreducible Complexity is not evidence against evolution.
It is evidence for intelligent design. Ya see we are still stuck with the fact that there isn't any positive evidence that blind, undirected processes can construct functioning multi-part systems. lastyearon:
But IC does not rule out evolution.
IC strongly argues against the blind watchmaker and makes a positive case for intentional design. And there does come a point in which the IC system in question does rule out the blind watchmaker. lastyearon:
However natural processes do explain technology, by explaining humans.
If they did, you would have a point. However, that is the big question. We know natural processes cannot explain nature - natural processes only exist in nature and therefore cannot account for its origins, which science has determined it had. lastyearon:
Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans.
Your problem is there isn't any evidence that nature can produce cells from scratch. There isn't any evidence that blind, undirected processes can do much of anything beyond breaking things- in biology anyway. Think of it this way- you still don't have any evidence that blind, undirected chemical processes can construct functional multi-part systems.Joseph
January 28, 2011, 04:35 PM PDT
LYO: Do you ken what Borne is saying? GEM of TKI
kairosfocus
January 28, 2011, 04:30 PM PDT
Borne: Yup, only, the dependencies are in a 3-D mesh. G
kairosfocus
January 28, 2011, 04:29 PM PDT
Joseph:
Even Dr Behe said he wouldn’t categorically deny that those systems could have evolved by Darwinian processes, but it would defy all logic, reason and evidence.
There are many facts of nature that defy all logic and reason. As for evidence, Irreducible Complexity is not evidence against evolution. One may legitimately point to a lack of evidence for evolution. As in, 'wow that is a truly intricate and complex system. The evidence isn't strong enough for me to believe that this system evolved' And that certainly is debatable. It all depends on what you consider strong enough evidence. But IC does not rule out evolution. In Behe's quote, he states:
Further, it would go against all human experience, like postulating that a natural process might explain computers.
In this argument against the evolution of biological systems, Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn't mean that they aren't ultimately explainable naturally. Behe's argument is therefore circular. Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.lastyearon
January 28, 2011, 04:03 PM PDT
I've read through some of this, so forgive me if I'm adding something already mentioned. A bit long, I'm afraid.

One of the key elements involved in any functional, multi-part system is the problem of combinatorial dependencies - i.e. parts that depend on parts that depend on still other parts - like we see in the flagellum. Moreover, in this we have code that depends on code that depends on code. As soon as we introduce dependencies, and more especially combinatorial dependencies (CD), we also introduce statistical mechanics. Engineers get this. The great majority of Darwinian biologists simply don't get it and thus bypass it as though it doesn't exist. Darwinists pretend CDs and statistical mechanics (SM) have no application in biology, or worse, they haven't got a ruddy clue what CD or SM is!

There are CDs in a flagellum. All component design specifications must meet specific physical criteria if any such motor is to work. We have to consider, for example:

- component sizes - must match with connected components
- component pliability
- component strength
- capacity to resist external and internal forces applied to them - ex. stress from torsion, shear, pressure, heat, etc.
- rotational forces and motility (ex. revs per/s), stiffness ...
- energy requirements - to move parts
- component coupling - flexibility, rigidity
- component material - too soft = galling; too hard = fatigue & eventually cracking
- component clearance tolerances between parts

This is indeed a "goldie-locks" situation. Components must be just right or it won't work. The probability of having all the components set to correct specs - allegedly by RMs + NS - is small indeed. And this is supposing that the component parts already exist; but the laws of physics & chemistry alone do not guarantee such at all. Now add to this the algorithmic information (prescribed information - Abel, Trevors) needed to assemble the parts in the right order and you have an impossible situation in front of you. Nature, blind and without goals or purpose, is never going to assemble a flagellum - even supposing all the component protein parts already exist in the correct locus! The probability of this occurring by the laws of physics & chemistry + selection is ridiculously small. Let the Darwinists get over it and accept the obvious and properly inferred design.

Order is vital in this problem. So, the P of just getting the parts in the correct order is about 1/42! (given 42 protein parts, assuming the flagellum is made of such). Therefore, when Darwinists place their bets on the evolution lottery machine blindly accomplishing just one simple rotary engine by chance and necessity, it is truly a "tax for the stupid". "Yet", the Darwinist answers, "lottery tickets are still producing winners. Aha! Evolution could too!" Sorry, but this is a gross misunderstanding. If you had a single lottery wherein the gambler had to select the exact sequence of 42 numbers out of 42 numbers, it would be highly suspicious if anyone ever won. Yet evolution allegedly did this billions of times over since earth's life-supporting climate arrived! Incredible credulity is required to believe such nonsense. Around a 1 in 10^50 chance is hardly good news for Darwin. Breaking an encrypted key 42 bits long is no easy task even with intelligently conceived decrypting algorithms being employed on fast computers executing in gflops.

BUT! Nature isn't even trying to produce a functional anything - i.e. it isn't trying to win any lottery - it isn't "buying tickets"! It ain't ever tried to build a rotary engine or anything else under Darwinian theory. Amazing that Darwinists are so laughably gullible, not to mention pitifully ignorant, that they still push their idiotic theory as fact!

And all this is addressing a mere 42-part flagellum while assuming the parts already exist and are localized! So, pretend the parts are non-existent and have to be evolved themselves - all 42 or whatever it is exactly. Add localization. Then remember all the multitude of other "must exist first" components and then do the math. Even using wild approximations - conservative or liberal - you'll find astronomically low P values. You end up with virtually "impossible" written in huge letters all over Darwin's inane 'theory'.

In light of this, IC can rightly be called Irreducible Dependencies. If this were indeed a lottery wherein you're betting everything you have - would you still bet for Darwin in the fat and pudgy weakling corner? Permit me to doubt it, or to regard anyone that does as the perfect fool.
Borne
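For what it is worth, the order-of-magnitude arithmetic quoted above is easy to check; the 42-part count is the commenter's working assumption rather than an established figure for the flagellum:

```python
# Quick check of the combinatorial figure quoted above: the chance of
# blindly hitting one specific ordering of 42 distinct parts.
# The "42 parts" number is the commenter's assumption.

from math import factorial, log10, log2

orderings = factorial(42)
print(f"42! is about 10^{log10(orderings):.1f} (about 2^{log2(orderings):.0f})")
print(f"P(one specific order) is about 10^-{log10(orderings):.0f}")
```

This puts 42! at roughly 10^51, in line with the "around 1 in 10^50" figure quoted in the comment.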
January 28, 2011, 03:58 PM PDT
