Uncommon Descent Serving The Intelligent Design Community

ID and Catholic theology


Father Michal Heller, 72, a Polish priest-cosmologist and a onetime associate of Archbishop Karol Wojtyla, the future pope, was named March 12 as the winner of the Templeton Prize.

http://www.catholicnews.com/data/stories/cns/0801398.htm

In this recent interview came a critique of the intelligent design position as bad theology, akin to the Manichean heresy. Fr. Heller puts forth this rather strange argument as follows:

“They implicitly revive the old manicheistic error postulating the existence of two forces acting against each other: God and an inert matter; in this case, chance and intelligent design.”

Coming from a theologian, this is an astonishing summary of the Manichean heresy. Historically Manichæism is a form of dualism: that good and evil were equal and opposite forces, locked in an eternal struggle. In this distortion, the role of the all-powerful evil is replaced by chance? It is traditional Christian teaching that God forms (i.e. designs) creation. Does this make God the arch-rival of chanciness? It is difficult to see how the intelligent design perspective could possibly be contrary to Catholic teaching. For example, St. Thomas Aquinas speaks in his Summa of God explicitly as the great designer of the creation:

“… the “Spirit of God” Scripture usually means the Holy Ghost, Who is said to “move over the waters,” not, indeed, in bodily shape, but as the craftsman’s will may be said to move over the material to which he intends to give a form.”

The ID point of view is such a minimalist position it is amazing to see the charge of heresy– it simply does not have the philosophical meat necessary to begin to make this kind of theological accusation.

There are some points that Fr. Heller raises that are entirely consistent with an ID point of view:

“There is no opposition here. Within the all-comprising Mind of God what we call chance and random events is well composed into the symphony of creation.” But “God is also the God of chance events,” he said. “From what our point of view is, chance — from God’s point of view, is … his structuring of the universe.”

In this quote, he is basically saying that there is no such thing as fundamental chance, only apparent chance. The apparent noise is really a beautiful tapestry viewed from the wrong side. Of course, if there were discernible structure, then we could … well … discern it (this is the whole point of ID). The problem here is that Fr. Heller does not have a self-consistent position that one can argue with or agree with, as his next quote shows:

As an example, Father Heller said: “birth is a chance event, but people ascribe that to God. People have much better theology than adherents of intelligent design. The chance event is just a part of God’s plan.”

Now if I were picking from a list of random events to use as my illustration of chance acting in the world, childbirth would not be one of them. Does he mean the timing of birth, or the act of conception, or the forming of the child? If anything, this is an extremely well-choreographed event that has very little to do with chanciness of any flavor. Here he seems to reverse course, saying that chance is real (not just apparent), but that God intends to have it that way. Once again, the ID position can also be reconciled with the existence of fundamental chance, just not with fundamental chance as the only thing that exists in the universe.

In this interview, Fr. Heller does not seem to have a sophisticated view of how randomness can work together with intelligence; nor does he seem to have read any books by design advocates, for the arguments he makes are directed at nonexistent opponents. For a physicist/theologian giving an interview upon the reception of his Templeton award, the only physics/theology offered is internally confused and based on caricatures of the ID argument. My feeling is that if he actually read and considered the ID arguments, we might find a kindred spirit.

Comments
PS: Maybe I can make a point above clearer this way, by turning about and modifying the following:
As you have not proven the cosmos to be designed [cell-based biological life to have originated by chance + necessity in some form of a pre-biotic soup, and onward that such life by CV + NS has diversified into the range of observed body-plans], you are assuming your conclusion and running with it.
In short, if you [RF] hold that on principles of science, you may confidently reconstruct the past in accordance with the evolutionary materialist paradigm, then so can I [GEM] use the same principles to reconstruct the past based on the design paradigm – and with a better fit to ALL the data. [Unless, you also intend that you can write evolutionary materialism into the very definition of “science” which is not only question-begging -- as well as historically and philosophically unwarranted -- but boils down to censorship when the results of free scientific thinking cut across the desired materialistic outcomes!]kairosfocus
April 17, 2008 04:54 AM PDT
2] You ain't read it [Dawkins' Blind Watchmaker] . . . how dare you critique it On the direct and irrefutable contrary -- cf, e.g., what is now 252, point 2 -- I have read and now twice excerpted the relevant section of BW, viz, that on Weasel. It suffices to summarise my findings in the point Eric and I have made very clear: WEASEL is a clear instance of the sadly now very familiar illustration of rhetoric in the guise of education, the misleading icon of darwinism. In short, having no real reply on the substantial case, RF sets up a strawman, since I have pre-read and, on the strength of therefore finding BW seriously wanting, refused to waste time and money to buy it 20 years ago. What he conveniently insists on leaving out is that when I have needed to make specific reference to a specific point, Weasel, I have been able to find and excerpt the relevant data. Then, I have used it to show just how seriously amiss Weasel is [excerpting again . . .]:
1 –> It presented itself as a simplification of the million monkeys typing out Shakespeare example [Huxley wasn’t it, or someone like that], without drawing out the significant difference between a corpus of millions of words and less than a dozen: COMPLEXITY, including the complexity of the “simplest” unicellular life forms and the increment in complexity to get to the body plan divergence that the Cambrian fossils show us.
2 –> Also, the traditional monkeys example was notoriously in the context of creation of biologically FUNCTIONAL information by chance; showing that complex information could be produced by a random walk. [The traditional illustration — per its rhetorical purpose — never got around to the issue of the unlikelihood of success of random walks in vast search spaces, though . . .] And, e.g., Wiki’s dismissive reference to saltationism vs cumulative change does not address cogently the implications of the sort of credible scale of increments in information we are dealing with – e.g. unicellular to arthropod would require something like 100 mn+ base prs, or ~ 200 mn bits. [At ~ 4.75 bits per letter, then 7 letters per avg word, that is about 42 mn letters or 6 mn words, or at 600 words per page, about 10,000 pages. A good slice of Shakespeare’s corpus, I’d say!]
3 –> WEASEL proceeds to use an instance of artificial selection as a substitute for cumulative natural selection [just as Darwin did in Origin], producing sense out of nonsense, tada, like “magic.” [And, AS is of course DESIGN!]
4 –> Dawkins then infers — noting en passant but not highlighting the telling implications of the crucial difference — onward to NS: “In true natural selection, if a body has what it takes to survive, its genes automatically survive because they are inside it. So the genes that survive tend to be, automatically, those genes that confer on bodies the qualities that assist them to survive.” [BTW, Darwin did essentially the same thing in Origin when he used AS as evidence supportive of NS.]
5 –> Now, let’s ask: where do these novel genes come from?
6 –> D’uH: Genes, presumably — per NDT-type models — are the product of chance variations? BINGO!
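A minimal sketch of a Weasel-style run makes points 3 and 4 concrete. The target phrase is Dawkins' published example; the mutation rate, population size and generation cap here are illustrative assumptions, not his published settings:

```python
import random

# A minimal sketch of a Weasel-style program. The target phrase is Dawkins'
# published example; rate, pop_size and max_gens are illustrative choices.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Score against the FIXED, pre-specified target -- the "artificial
    # selection" step the excerpt objects to.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate):
    # Randomly change each character with probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def weasel(pop_size=100, rate=0.05, max_gens=10000, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while parent != TARGET and gens < max_gens:
        # Breed a generation of mutants; keep the one closest to the target.
        parent = max((mutate(parent, rate) for _ in range(pop_size)),
                     key=fitness)
        gens += 1
    return gens
```

Because every generation is culled by distance to a known, distant target, convergence is rapid; the contested question is precisely whether that models natural selection, which has no such target in view.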
Onlookers, simply compare what I excerpted and discussed above, what Eric B similarly discussed, and what RF is trying to do. 3] you appear to have a problem with the whole concept [of GA's]. What is it that you think that people are claiming for GA’s that you know they cannot do? This is again an utter and evidently willful or at minimum willfully negligent misrepresentation. Here is what I most recently said when this strawman was raised:
253, point 11] [RF,] 250 [now 244], the text you quote contains “The genetic algorithm solves optimization problems by mimicking the principles of biological evolution”. Biological evolution eh? How about that. GEM: Precisely: my point was, and is, that the GA approach is routinely [and usually ignorantly] misrepresented as mimicking the principles of biological evolution. For details of why cf Eric and myself above, onlookers.
That is, I am very aware of the fact that GA's work, and of how they work. Also, that this is said to be how biological evolution happened – BTW, ironically this could be read as saying that biological macro-level diversity owes itself -- per the true, observed and reliably known nature of GA's -- to intelligent design! But, that is not the intended meaning; the idea is that chance + necessity suffice to create “brilliant design” and GA's are held to mimic that. But to do that, the issue of first finding the shores of islands of functionality is dodged, and the further issue of the step-size to get to bio-function [comparable to the information content of a major literary work or corpus] is also dodged. This is detailed in the same post, which serious onlookers can easily enough scroll up to. RF again shows himself a devotee of the strawman stratagem. 4] As you have not proven the cosmos to be designed you are assuming your conclusion and running with it. Let's see: I have long since first pointed out that “proof” is not a proper category in science, but rather the making of inference to best, empirically anchored explanation, linking a discussion of same. What do we see in response: insistence on an irrelevant, misrepresentation-based, question-begging objection -- selective hyperskepticism, in short: i --> Scientific theories are not held to the criterion of logico-mathematical demonstration relative to universally acceptable premises, but that of inference to best current – and provisional – explanation. ii --> That is a commonplace of phil of sci; it is even held up as a virtue of science by the likes of Popper. iii --> So, we put forward the well-tested observation that, reliably, complex, functional organisation [as analysed by the EF or other similar formal and informal techniques] is a sign of intelligent action, as opposed to the other major source of high contingency outcomes, chance. Can this be overturned by reference to empirical data?
No, or RF would have eagerly and gladly done so long ago. [Cf how he tried to dismiss the difference between a random stone in the backyard and a Clovis point, which is actually used by archaeologists as a diagnostic for a particular culture in the Americas: i.e. if it shows up, you infer to that culture.] iv --> So, per the basic sci method, we have a well-tested, reliable hypothesis: organised, especially functional, complexity [as indicated by FSCI or the like] is a reliable sign of intelligent action. We have a right to use this principle, subject of course to a solid counter-instance. That is just how science works, based on the general -- as opposed to absolute -- concept that the world works in an orderly, reliable, intelligible way; a premise, BTW, that historically owes its origin to the influence of Judaeo-Christian theism in the founding era of science. v --> Now, we briefly mentioned, excerpted and linked [Section D of my always linked] on the complex, convergently fine-tuned organisation of the physics of the observed, life-facilitating cosmos we happen to inhabit. Per that discussion, we infer provisionally but with high confidence that the cosmos as we observe it manifests reliable signs of design. vi --> What is the objection: first, that you have not PROVED. But of course not; science, as an empirical-world exercise, is incapable of proof. That is an irrelevancy leading out to a misrepresentation of what HAS been said. vii --> Second, that the question is being begged. Not at all: per scientific method, a reliable principle is being used to infer to the best explanation of observed phenomena. That means that the burden of proof, properly, is on those who would overturn it. On pain of selective hyperskepticism -- inconsistently rejecting what does not sit well with what one wishes the world is like, while accepting similar cases that fit with what one hopes the world is like.
viii --> So, RF had a choice: EITHER [P] keep his rejection of cosmological design, but at the expense of admitting that he rejects the basic principles of scientific investigation, OR [Q] accept the scientific principles, recognise that the inference to design is based on a hitherto well-tested principle, but – tada – here is why the principle fails as a generalisation. ix --> That he insistently picks P shows strongly that he cannot fulfill what Q requires. That is, he is unable to overturn that organised complexity is a reliable sign of design, but does not wish to accept certain key cases [cosmology, DNA, body plan level biodiversity spring to mind; cf the always linked sections B - D], so he instead pretends that science is about “proof” as opposed to empirically based, well-warranted, provisional inference to best explanation. So, it is RF who is really begging the question here. Given the above all too painfully plain track record by RF, that is, sadly, no great surprise. GEM of TKIkairosfocus
April 17, 2008 03:51 AM PDT
Mr Fry (and onlookers): You have plainly not been expelled (maybe, for cause, put on moderation? or it could be the usual annoying bugs that show up here . . .); you have dropped out of the college of reasoned discourse, by insistent resort to quote-mining, misrepresentations and ad hominems. I will very mildly mitigate that by noting that a mechanical search on READAK -- at least in Firefox -- will not show up my remarks in 252 [which goes next to making a freebie ad for READAK], so part of the trouble may be that you are not reading carefully but then essay to rebut ill-informed snippets out of context. (BTW, evidently some deletions of posts, probably related to recent disciplinary action by DS, have thrown the numbers above into chaos.) Sadly, in either case, this is trollish behaviour, not serious civil-minded discourse. In any case, this pattern moves far afield from the core business of the thread, so let us again draw attention to that core business, from which RF and his ilk would insistently distract us: A: Fr Heller and ID, from what is now 236
[OP] Father Michal Heller, 72, a Polish priest-cosmologist . . . . In this recent interview [linked to his winning the Templeton Prize] [gave] a critique of the intelligent design position as bad theology, akin to the Manichean heresy . . . . “They implicitly revive the old manicheistic error postulating the existence of two forces acting against each other: God and an inert matter; in this case, chance and intelligent design.” . . . . “There is no opposition here. Within the all-comprising Mind of God what we call chance and random events is well composed into the symphony of creation.” . . . . “God is also the God of chance events,” he said. “From what our point of view is, chance — from God’s point of view, is … his structuring of the universe.
“Lord Ickenham” then links a summary of the Manichean heresy, from the classic edn of the Catholic Enc:
Manichæism is a religion founded by the Persian Mani in the latter half of the third century . . . As the theory of two eternal principles, good and evil, is predominant in this fusion of ideas and gives color to the whole, Manichæism is classified as a form of religious Dualism . . . . The key to Mani’s system is his cosmogony. Once this is known there is little else to learn. In this sense Mani was a true Gnostic, as he brought salvation by knowledge. Manichæism professed to be a religion of pure reason as opposed to Christian credulity; it professed to explain the origin, the composition, and the future of the universe . . . . Before the existence of heaven and earth and all that is therein, there were two Principles, the one Good the other Bad. The Good Principle dwells in the realm of light . . . . This Father of light together with the light-air and the light-earth, the former with five attributes parallel to his own, and the latter with the five limbs of Breath, Wind, Light, Water, and Fire constitute the Manichæan pleroma. This light world is of infinite extent in five directions and has only one limit, set to it below by the realm of Darkness, which is likewise infinite in all directions barring the one above, where it borders on the realm of light. Opposed to the Father of Grandeur is the King of Darkness. He is actually never called God, but otherwise, he and his kingdom down below are exactly parallel to the ruler and realm of the light above. The dark Pleroma is also triple, as it were firmament, air, and earth inverted . . . . These two powers might have lived eternally in peace, had not the Prince of Darkness decided to invade the realm of light. On the approach of the monarch of chaos the five aeons of light were seized with terror. This incarnation of evil called Satan or Ur-devil (Diabolos protos, Iblis Kadim, in Arabic sources), a monster half fish, half bird, yet with four feet and lion-headed, threw himself upward toward the confines of light.
In short, it seems that Heller is probably thinking of ID as a theological dualistic system that sees forces of order and organisation opposed to those of chaos [chance]. But in fact, the design inference is a scientific inference to best explanation within the observed cosmos, from SIGNS of Intelligence to its credibly known source, intelligent action:
i –> It is a commonplace observation, immemorial since the days of Plato, that causal factors commonly resolve into [1] natural regularities tracing to mechanical necessity, [2] chance (often showing itself in random behaviour), [3] intelligent action.
ii –> It is possible to see the three at work in a given situation. For instance, as discussed in the always linked, heavy objects fall under the natural regularity we call gravity. If the object now in question is a die, its uppermost face for practical purposes is a matter of chance, and the die may have been tossed as a part of a game, an intelligently designed process using intelligently designed objects in ways that take advantage of chance and natural regularities to achieve the purposes of agents.
iii –> Natural regularities are detectable by consistent, low-contingency patterns of events, i.e. we may use scientific approaches to infer to natural laws as their best explanation.
iv –> When by contrast we see high contingency, we know that chance and/or agency are the relevant predominant causal factors. That is, the particular configs observed may result from chance or from agent action: we may toss a six or we may set the die with 6 uppermost.
v –> Per the principle of large numbers, we observe that random/chance samples of a population of possible outcomes tend to reflect its predominant clusters of configurations. [This is in fact the foundation of the statistical form of the 2nd law of thermodynamics.
It is also the basis for Fisherian-style inference testing and experiment designs that use these principles, e.g. control expts and treatments studied using ANOVA etc.]
vi –> When therefore we see that these predominant clusters are non-functional, but the observed outcome is functionally specified and complex, we infer — routinely and reliably — not to chance but to intelligent agency.
vii –> This is generally non-controversial, but on matters tied to origins of life and the cosmos, there is a currently dominant, evolutionary materialist school of thought that strongly objects to what would otherwise be the obvious explanation for the organised complexity [OC] of cell-based life and the similar OC of the physics that underlies the cosmos that facilitates such life. Thus, through the injection of methodological naturalism into the understanding of science [= “knowledge,” etymologically], the question is too often begged.
viii –> This is a matter of science, not theology. The inference to design is a reasonable principle in science, not a theologically speculative, ill-founded heresy.
ix –> But, going beyond the province of science, as Fr Heller has, the issue brings up a very familiar and unquestionably foundational Christian theological context that challenges Fr Heller’s thinking:
Rom 1:19 . . . what may be known about God is plain to [men], because God has made it plain to them. 20 For since the creation of the world God’s invisible qualities–his eternal power and divine nature–have been clearly seen, being understood from what has been made, so that men are without excuse. RO 1:21 For although they knew God, they neither glorified him as God nor gave thanks to him, but their thinking became futile and their foolish hearts were darkened. 22 Although they claimed to be wise, they became fools 23 and exchanged the glory of the immortal God for images made to look like mortal man and birds and animals and reptiles [yesteryear, in temples, today, often in museums, magazines, textbooks and on TV] . . . .
x –> So, is the design of the world plain to those willing to follow the evidence where it leads instead of rejecting the evidence in favour of agenda-serving assumptions and stories? Paul says, yes. In so saying, he opens himself up to empirical test, and the implication of the design inference is that design is indeed intelligible and very evident in the world as we experience and observe it. That makes the theological inference to a Creator God as designer of the world a reasonable worldview alternative indeed. B: Onlookers: On exposing further select distractions and distortions: 1] 255: there’s “no need” to tell me the CSI you obtained for each item [stone vs Clovis point]? Let's roll the actual tape from point 7, what is now 253, which RF artfully excerpted just a tiny snippet of:
No need: basic common sense and a little observation will do nicely. Going beyond that, we may observe that functionally specified complex organisation is just as effective an index of the action of agency as is a measured value of the statistical weights of functional and non-functional subsets of the config space for an observed functional element. Random rocks don’t make good spear heads, which in turn tend to be pointed, symmetrical, adapted for hafting, and conform to styles, also showing signs of flint knapping. All of which were discussed and linked above. And, one can look at functionality vs non-functionality of rock and Clovis point. Then we can look at the pattern of the elements of shape vs, say, typically observed ones for similarly sized rocks, from balls to plates in shape. Even without explicit calculation it should be plain that there is a vast config space for such rocks, of which those shapes that correspond to Clovis points are a very small subset. So, we have observed functionality, and high contingency, then also being in an otherwise improbable state, apart from intelligent action. Not too hard to see, if one is willing, i.e. is docile before evident truth.
That is, I pointed out that the EF-CSI scheme is just one of several ways of formally or informally detecting design. I started with common sense [how we recognise spear-points in our backyard in the first instance], then moved up to complex organisation, then pointed to the way one could do the calc if one wanted, noting that in fact explicit calcs of config spaces are unnecessary, as the point is plain from looking at random rocks and Clovis points. [Take a largish sample of representative random rocks of appropriate size (i.e. using the principle that samples tend to look like the population), then profile their shapes. Take some representative Clovis points. Which is more sharply constrained as to shape and functionality? And of course, RF puts in my mouth the claim that I claimed to have made the explicit calculation. There was no need, as a glance at my backyard full of volcanic rocks suffices to remind me of the vast variety of flattened, oblong and rounded-off shapes natural rocks take, vs what spear points do.] So, the problem -- sadly but plainly -- is want of docility before plainly evident truth. [ . . . ]kairosfocus
April 17, 2008 03:41 AM PDT
KF, I asked you:
Presumably you determined this partly by comparing the values of Complex Specified Information for a rock and a Clovis point? What were the values you obtained for the CSI of each? Please don’t explain it further, or refer to your always linked. Just put the figure down (or whatever form it takes) for each, please.
You then said
No need: basic common sense and a little observation will do nicely.
So there's "no need" to tell me the CSI you obtained for each item? Basic common sense and observation tell us that the sun orbits the earth. The whole point of formalising such a method of design detection is that it is formalised! It's very convenient that the crux upon which your argument rests does not need anything other than basic common sense to prove its case. I'll ask again: what were the values of the CSI in the Clovis point and the rock? It's a calculation you claim to have done. So let's see the "working"; don't just give me "it's obvious".
You will note that I gave a context: “READAK-trained pre-read.”
The word READAK does not appear in this thread. Perhaps you imagined it.
In short, on the tangential point, I in fact surveyed and sampled Dawkins’ Blind Watchmaker in my old university bookshop when it came out, found it sadly wanting, and saved my money.
So you've not read it and yet think you can pontificate about what it purports to say. Wrong.
It is therefore not part of the 100 or so shelf feet of books that surround me as I speak.
Very impressive.
So, I hardly can be said to be in the position of dismissing what I have not read. I pre-read, found wanting, and moved on to what makes better substance.
Have you read any of Dawkins' work?
Worse, Eric B has repeatedly also addressed the Weasel example and the wider question of GA’s, only to meet the same tangential tendencies. I find it hard to believe that RF’s argument is serious, instead of a rhetorical game.
What is your problem with GA's in general KF? Yes, we know all about your thoughts on Weasel, but you appear to have a problem with the whole concept. What is it that you think that people are claiming for GA's that you know they cannot do? Please don't quote; in your own words if possible.
Thus, that we can see similar chance-based elements in a designed cosmos should not be surprising either.
As you have not proven the cosmos to be designed, you are assuming your conclusion and running with it. Again, this comment is not appearing. KF, I guess you win by default once more. I really can't be bothered to type if there's a good chance my comment will never appear. I guess I've been expelled!RichardFry
April 16, 2008 04:19 AM PDT
12] RF: Did the previous planes to the 747 (the ones that evolved into it) also self-assemble in junkyards? Oddly, RF here echoes the classic Berra's blunder: planes “evolved into” the 747 all right – by technological transformation through INTELLIGENT DESIGNS; designs that, step by step, with cases of revolutionary change in body plan, improved on previous ones. Macro-evolution by intelligent design, in short. On the “self-assemb[ly]” issue: misrepresentation, by now “as usual” – nowhere do I argue that planes self-assemble, but that it is logically and physically possible for a plane to be assembled by a tornado passing through a junkyard, but utterly improbable to the point where we routinely and reliably infer to agency as being responsible for the FSCI in a plane. Now, this point, which echoes Hoyle, is rooted in the underlying fact that the config space is too big and too populated with non-functional states. Oddly enough, Dawkins in the same book, BW, p. 8, agrees with me and with the late, great Sir Fred [member in good standing of The Noble Order of the Gadfly]:
Hitting upon the lucky number that opens the bank's safe [NB: cf. here the case in Brown's The Da Vinci Code] is the equivalent, in our analogy, of hurling scrap metal around at random and happening to assemble a Boeing 747. Of all the millions of unique and, with hindsight equally improbable, positions of the combination lock, only one opens the lock. Similarly, of all the millions of unique and, with hindsight equally improbable, arrangements of a heap of junk, only one (or very few) will fly. The uniqueness of the arrangement that flies, or that opens the safe, has nothing to do with hindsight. It is specified in advance. [BW, p.8]
Thus, when we see a jumbo jet, we routinely and reliably – and accurately – infer to agent action as the cause, not chance. 13] What’s your point? If you had a random target what would be the point? Pretending that something is unclear can only get you so far. Here is the context in 234 – and note onlookers how RF does not give that context, No prize for guessing why:
MATLAB, discussing its GA toolbox, comments: “The genetic algorithm solves optimization problems by mimicking the principles of biological evolution, repeatedly modifying a population of individual points using rules modeled on gene combinations in biological reproduction. Due to its random nature, the genetic algorithm improves your chances of finding a global solution.” In short, we see exactly what EB pointed out:
a –> In praxis — as opposed to how the general context is presented to the public [and the naive technical practitioner] — a finite and conveniently scaled [digital] search/performance space is set up, with a criterion of fitness [performance metric] towards optimisation [resource-constrained search for maximum desired performance relative to some objective function].
b –> Random initial points in a space known to be near to the desired performance are sampled [e.g. we do not start with random individual atoms floating dispersed in a fluid to search for a high-performance antenna, or of course — per my version of Hoyle’s well-known 747 scenario at point 6, app 1 the always linked (which is in fact such an “evolutionary search” — and with a simpler case than an organism) — to get to a flyable jet plane] and tested for fitness relative to the known desired performance objective [i.e. we have to have a criterion of performance to assess which is better and which worse].
c –> A process of iterative culling and “interbreeding” is used to try to find an optimum or at least good performance in a finite number of steps.
d –> This is of course targeted search through intelligently designed artificial selection that exploits active information to get search-process gains over a pure random walk.
e –> However, it is then presented as a model of how biological [including of course body-plan origination level macro-]evolution works [without adequately dealing with the search space and complexity issues, and the absence of intelligent direction in the usual model of such evolution].
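The steps a–e above can be sketched as a toy GA. The objective function, bounds and every parameter here are illustrative assumptions, not taken from any particular toolbox:

```python
import random

# A toy genetic algorithm following steps a-e: a bounded search space, a
# designer-supplied fitness metric, an initial population sampled from within
# that space, and iterative culling plus "interbreeding".

def fitness(x):
    # Step a: designer-chosen objective; maximum at x = 3.
    return -(x - 3.0) ** 2

def run_ga(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # Step b: initial points sampled from a space known to bracket the optimum.
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Step c: cull to the fitter half ...
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # ... then "interbreed" pairs of survivors (blend crossover) with a
        # little Gaussian mutation.
        children = [
            sum(rng.sample(survivors, 2)) / 2 + rng.gauss(0.0, 0.1)
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return max(pop, key=fitness)
```

Every ingredient that makes the search succeed (the fitness function, the bounded space, the breeding rules) is chosen in advance, which is the "active information" point in step d.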
14] give me an example of people putting out this “notion” [i.e. that GA's are said to mimic biological evolution] Just did, by excerpting 234. Indeed, when you thought it served your interests to do so, you cited the exact same text. The difference: I have pointed out just why GA's do not at all mimic the observed sort of NDT-based biological evolution [which is of course microevolution]. 15] , it’s 100% random v’s 100% intelligent design. strawman? I think so. Here is how I discussed the die example in the always linked section A:
A Tumbling Die: For instance, heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert.
Then again:
a hypothetical, dice-based information system: If one were so inclined, s/he could define a six-state code and use a digital string of dice to store or communicate a message by setting each die in turn to the required functional value for communicating the message. In principle, we could then develop information-processing and communication systems that use dice as the data-storage and transmission elements [say, using registers made from plastic troughs loaded with strings of dice set to particular values and "read" by scanning the pips]; rather like the underlying two-state [binary] digital code-strings used for this web page. So also, since 6^193 ~ 10^150, if a functional code-string using dice requires significantly more than 193 to 386 six-state elements [we can conveniently round this up to 200 - 400], it would be beyond the edge of chance as can be specified by the Dembski universal probability bound, UPB. [That is, the probabilistic resources of the observed universe would be most likely fruitlessly exhausted if a random-walk search starting from an arbitrary initial point in the configuration space were to be tasked to find an "island" of functionality: not all "lotteries" are winnable (and those that are, are designed to be winnable but profitable for their owners). So, if we were to then see a code-bearing, functionally meaningful string of say 500 dice, it would be most reasonable to infer that this string was arranged by an agent, rather than to assume it came about because someone tossed a box of dice and got really lucky! (Actually, this count is rather conservative, because the specification of the code, algorithms and required executing machinery are further -- rather large -- increments of organised, purposeful complexity.)]
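The arithmetic behind the 193-dice figure quoted above can be checked directly (a sketch; "UPB" here is simply the 10^150 bound as cited in the passage):

```python
import math

BITS_PER_DIE = math.log2(6)   # each six-state die stores log2(6) ~ 2.585 bits
UPB = 10 ** 150               # the universal probability bound, as cited above

# smallest string of dice whose configuration count 6^n exceeds 10^150
n = math.ceil(150 / math.log10(6))
print(n, round(BITS_PER_DIE, 3))   # → 193 2.585
```

This confirms the passage's figures: 192 dice fall just under the bound, 193 just over it, and doubling to 386 dice squares the space to roughly 10^300.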
Who is misrepresenting whom, on the explicit evidence here, Mr Fry? I can leave onlookers to judge that for themselves. However, the bottomline is clear: HAVING PLAINLY LONG SINCE LOST ON THE MERITS, EVO MAT ADVOCATES ARE NOW RESORTING TO RED HERRING DISTRACTORS AND STRAWMAN MISREPRESENTATIONS, plainly to disguise and distract attention from the unwelcome fact. In the large, through the sort of tactics exposed in Expelled and in the brouhaha just raised on alleged plagiarism and stealing of intellectual property. In the minor key, through the sort of distracting rhetoric we have again had to point out step by step in this thread. For shame, Mr Fry. Cho man, do betta dan dat. GEM of TKI
kairosfocus
April 16, 2008, 02:30 AM PDT
4] RF: Sure, science can never say 100% “this is the way it is”, it’s always a balance of probabilities. You have not shown that the balance has tipped in your favour. Onlookers, I took time to show in the always linked that RF refuses to address, just why there is good reason to infer to design when we see functionally specified, complex information and more broadly functionally specific organised complexity. This, I pointed to in the case of observed information, DNA and the like, the body plan level diversity of the observed life forms here on earth, and the underlying, convergent, multidimensionally fine tuned physics of the cosmos. RF has never actually addressed on the merits what he seeks to dismiss. At least, he is willing to accept that scientific findings are inherently provisional, so the acceptance of any given theory is in the end a matter of trust in what cannot be proved and is subject to correction. That correction is precisely what the design inference is providing in the context of the evolutionary materialist paradigm. 5] You have no definitive proof that the cosmos is designed and so are reduced to “may” yet then continue on as if you’ve established your case beyond reasonable doubt. As just noted, and as pointed out repeatedly above to another participant, “proof” is not a reasonable criterion in a scientific context; provisional, empirically anchored warrant per abductive inference to best explanation is. And that is what I have provided in the always linked, thus my use of “may.” In short, sadly, we see here selective hyperskepticism at work, leading to evidently closed minded objectionism. 6] obsession with chance-based searches? Karios, do you really think that it’s simply a choice between “intelligent design” and “random chance”? Again, a serious misrepresentation – aka strawman. On two levels.
First, I have repeatedly and consistently pointed out that from Plato on, it has been immemorial that chance, necessity and agency are three relevant causal factors. Mechanical necessity shows itself in natural regularities, i.e. low contingency: a heavy object – e.g. a die -- resting on a table does so in light of fundamentally gravitational and electrical forces leading to elastic deflections and equilibrium. Where contingency dominates -- e.g. which of the die's faces is uppermost -- that is either chance or intelligent action. So, in contexts where we study highly contingent phenomena [and information is precisely based on such high contingency to configure meaningful symbols to represent states of affairs etc], we use techniques that more or less reliably discriminate between agency and chance. One of those techniques, as Mr Bolinski showed in inferring that Premise Media used XVIVO's work as a source for their clips on the inner workings of a cell, is complex specified information. (So, why then do so many now so desperately want to resist a similar inference when we address the origin of DNA say as an information-bearing macromolecule . . . ?) So, at the first level, to address chance and agency as the two long-known alternatives to account for high contingency, is plainly not an “obsession.” At the second level, kindly avoid conflating chance and randomness. A chance situation is one that could just as easily have been something else, as opposed to a purposefully set state. Randomness is a property of certain mathematical and practical situations, but chance may come up, for instance, in sensitive dependence on initial conditions. [In the case of the die, it comes up in that the precise config of forces and initial conditions does lead deterministically to the outcome of which face is uppermost, but the sensitive dependence means that the outcome is for us incalculable, as we cannot specify conditions sufficiently accurately to determine the outcome consistently.
So, for practical purposes the uppermost face of a tossed fair die is chance and is random, due to the finest degrees of differences in the initial conditions.] 7] Presumably you determined this [the difference between a random rock and a clovis point] partly by comparing the values of Complex Specified Information for a rock and a Clovis point? What were the values you obtained for the CSI of each? No need: basic common sense and a little observation will do nicely. Going beyond that, we may observe that functionally specified complex organisation is just as effective an index of the action of agency as is a measured value of the statistical weights of functional and non-functional subsets of the config space for an observed functional element. Random rocks don't make good spear heads, which by contrast tend to be pointed, symmetrical, adapted for hafting, and conforming to styles, also showing signs of flint knapping. All of which were discussed and linked above. And, one can look at functionality vs non-functionality of rock and clovis point. Then we can look at the pattern of the elements of shape vs say typically observed ones for similarly sized rocks, from balls to plates in shape. Even without explicit calculation it should be plain that there is a vast config space for such rocks, of which those shapes that correspond to clovis points are a very small subset. So, we have observed functionality, and high contingency, then also being in an otherwise improbable state, apart from intelligent action. Not too hard to see, if one is willing, i.e. is docile before evident truth. 8] 249, Onlookers, please note the habit of using the “always linked” as a cover all for any point at all claimed without further substantiation. This is rich!
Having just complained that I am objecting to a work that I have only pre-read [and have cited the relevant parts of, cf supra], RF now objects that I always link a reasonably detailed summary case for the design inference, and refer him to it for substantiation of shorter remarks in this thread. [And, were I to take out the details in the blog thread, he would doubtless join Leo in objecting to prolixity. As Morris Cargill used to say: logic with a swivel – there is always an objection to be made.] RF, why not show us that you have looked at the relevant linked and have found the relevant case wanting on specific grounds? Like, my “obsession” with chance vs agency as the relevant causal factors for highly contingent outcomes? 9] If you’d read the book you would realise that it [WEASEL] was not supposed to be a “realistic model” of anything, let alone “the genome” (presumably human). If you'd cared to respond to what I actually said, and what Eric actually said, you would realise that we have cited the weasel case and given its rhetorical context, warranting our objections. It is a case of the now long traditional iconic substitution of artificial selection for natural selection, to persuade the unwary that NS is capable of vast informational innovation and creativity. 10] As you know all the factors affecting “bio-functionality” in the Cambrian I can but acquiesce to your knowledge, oh time-lord. Again, a strawman, this time putting words into my mouth that do not belong there. What I pointed out is that we see in the fossil record on the usual interpretation that in a window of some 10 MY, ~ 500 – 600 MYa, up to about 40 phyla and subphyla appear in the fossil record, requiring innovation of dozens of body plans with required diverse organisation, organs, tissues, cell types and of course proteins. This requires a vast increment in DNA.
Using Meyer's example of an arthropod, I indicated that for just one of these body plans, relative to the 1 mn or so DNA base prs in a reasonably simple unicellular organism, we have to account for upwards of about 100 mn base pairs of incremental DNA. That is in an obviously functional context, and it sets up config spaces of order ~ 1.36*10^60,205,999 states. To find the observed and potential functional subsets of any reasonable scale becomes maximally improbable on the gamut of the observed universe, much less the required window of time here on earth. But, 100 mn base prs is 200 mn bits, or about 29 mn 7-bit ASCII characters worth of information storage/representation capacity. At 7 characters per word in English, avg., that is about 4 mn words, or about 6 – 7,000 pp at 600 words per page; comparable to the corpus of a great writer or a major reference work. In short, we are right back at a million monkeys banging away at keyboards at random, trying to write a good slice of Shakespeare, before we can get to the bio-functionality required to account for the fossil evidence, then to climb from functionality to improved functionality and diversity within the general body plan. To not weary the reader even further, let us be more selective from now on: 11] 250, the text you quote contains “The genetic algorithm solves optimization problems by mimicking the principles of biological evolution”. Biological evolution eh? How about that. Precisely: my point was, and is, that the GA approach is routinely [and usually ignorantly] misrepresented as mimicking the principles of biological evolution. For details of why cf Eric and myself above, onlookers. [ . . . ]
kairosfocus
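The back-of-envelope figures in the comment above can be checked directly (a sketch; the 100 million base-pair increment is the comment's own assumed figure, and the 7-characters-per-word and 600-words-per-page conversions are its stated conventions):

```python
import math

BASE_PAIRS = 100_000_000               # assumed DNA increment, from the comment

bits = 2 * BASE_PAIRS                  # 2 bits per 4-state base pair -> 200 mn bits
ascii_chars = bits / 7                 # ~28.6 mn 7-bit ASCII characters ("about 29 mn")
words = ascii_chars / 7                # ~4.1 mn words at 7 characters per word
pages = words / 600                    # ~6,800 pages at 600 words per page

# configuration space 4^(10^8): its exponent when written as a power of 10
exponent = BASE_PAIRS * math.log10(4)  # ~60,205,999.1, i.e. ~1.35 * 10^60,205,999
```

The computed values match the comment's rounded figures, including the exponent 60,205,999 for the size of the configuration space.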
April 16, 2008, 02:20 AM PDT
Participants [and onlookers]: First of all, kindly cf 242 above and of course, the original post, to see the effect of one tangential distractor after another on a serious matter in the main. So, I ask: is anyone out there serious about discussing the matter in the main? In absence of such, we can now conclude that those who play rhetorical games -- as will be documented yet again below -- with distractors and misrepresentations thereby reveal their utter want of a serious case on the merits. We can further take it as a given that if the argument in the main [that, contra Heller, design is evident in the cosmos and in cell based life and that it is not Manichean heresy to see that] were easily overturned, it would have been, so RF's resort to one red herring after another, leading out to one strawman after another (then duly pummelled – at least, not soaked in oil of ad hominem and ignited to cloud and poison the atmosphere through polarisation and confusion), is indicative of the balance of the case on the merits of fact and logic. And, not to his advantage. Having noted that general point, we need to address the usual cluster of tangential red herrings, yet again, so that certain points may be made clear: 1] RF, 249: what on earth is “pre-reading”? Karios, how would you like it if I criticised the contents of a book, for example the Bible, and then it turned out that in fact I’ve never read it? You will note that I gave a context: “READAK-trained pre-read.” This is a fast survey of a written work that takes in key features: themes/theses, topic sentences, conclusions, summaries, key illustrations and examples etc, to get an overview of its substance in a very few minutes. [Go look up READAK; they are still in business, and are worth far, far, far more than every cent that one of their courses costs. This is one deeply satisfied and grateful client.]
In short, on the tangential point, I in fact surveyed and sampled Dawkins' Blind Watchmaker in my old university bookshop when it came out, found it sadly wanting, and saved my money. It is therefore not part of the 100 or so shelf feet of books that surround me as I speak. [Those are the books that passed the pre-read test and were worth the investment, including e.g. TBO's TMLO as I discuss in appendix 1 of the always linked. Sears-Salinger's Thermodynamics came in as a textbook and has stayed on as an old friend.] So, I hardly can be said to be in the position of dismissing what I have not read. I pre-read, found wanting, and moved on to what makes better substance. (And BW is hardly comparable in literary, historical or spiritual merit to the Bible, which notoriously many a skeptic critiques without taking time to so much as pre-read.) 2] Weasel, again: On the more direct response, onlookers, kindly compare what is actually being discussed: Dawkins' notorious WEASEL, which I took time in 227 to excerpt the actual discussion in BW, ch 3, from Wiki [i.e. I have actually not only read but presented the matter in this thread]. Namely:
I don’t know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence ‘Methinks it is like a weasel’, and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . We again use our computer monkey, but with a crucial difference in its program. It again begins by choosing a random sequence of 28 letters, just as before … it duplicates it repeatedly, but with a certain chance of random error – ‘mutation’ – in the copying. The computer examines the mutant nonsense phrases, the ‘progeny’ of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . The human eye has an active role to play in the story. It is the selecting agent. It surveys the litter of progeny and chooses one for breeding. …Our model, in other words, is strictly a model of artificial selection, not natural selection. The criterion for ’success’ is not the direct criterion of survival, as it is in true natural selection. In true natural selection, if a body has what it takes to survive, its genes automatically survive because they are inside it. 
So the genes that survive tend to be, automatically, those genes that confer on bodies the qualities that assist them to survive.
There, I addressed the deep challenges this notorious bit of rhetoric in the guise of popular education faces:
1 –> It presented itself as a simplification of the million monkeys typing out Shakespeare example [Huxley wasn’t it or someone like that], without drawing out the significant difference between a corpus of millions of words and less than a dozen: COMPLEXITY, including the complexity of the “simplest” unicellular life forms and the increment in complexity to get to the body plan divergence that the Cambrian fossils show us. 2 –> Also, the traditional monkeys example was notoriously in the context of creation of biologically FUNCTIONAL information by chance; purportedly showing that complex information could be produced by a random walk. [The traditional illustration — per its rhetorical purpose — never got around to the issue of the unlikelihood of success of random walks in vast search spaces, though . . .]. And, e.g. Wiki’s dismissive reference to saltationism vs cumulative change does not address cogently the implications of the sort of credible scale of increments in information we are dealing with – e.g. Unicellular to arthropod would require something like 100 mn+ base prs, or ~ 200 mn bits. [At ~ 4.75 bits per letter, then 7 letters per avg word, that is about 42 mn letters or 6 mn words, or at 600 words per page, about 10,000 pages. A good slice of Shakespeare’s corpus I’d say!] 3 –> WEASEL proceeds to use an instance of artificial selection as a substitute for cumulative natural selection [just as Darwin did in Origin], producing sense out of nonsense, tada, like “magic.” [And, AS is of course DESIGN!] 4 –> Dawkins then infers — noting en passant but not highlighting the telling implications of the crucial difference — onward to NS: In true natural selection, if a body has what it takes to survive, its genes automatically survive because they are inside it. So the genes that survive tend to be, automatically, those genes that confer on bodies the qualities that assist them to survive.
[BTW, Darwin did essentially the same thing in Origin when he used AS as evidence supportive to NS.] 5 –> Now, let’s ask: where do these novel genes come from? 6 –> D’uH: Genes, presumably — per NDT type models — are the product of chance variations? BINGO!
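For onlookers, the cumulative-selection procedure Dawkins describes is easy to reproduce. The sketch below is my own reconstruction (the population size, mutation rate and random seed are my guesses, not Dawkins' published parameters); it shows exactly where the "magic" lives: the fitness measure compares every child to the pre-specified target phrase.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "   # 26 capitals plus a space, as in BW ch. 3

def fitness(phrase):
    # the active information: every child is scored against the preset target
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.04):
    # copy with a small chance of random error -- "mutation" -- per letter
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

def weasel(pop_size=100, max_gens=2000, seed=42):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
    for gen in range(1, max_gens + 1):
        children = [mutate(parent) for _ in range(pop_size)]
        parent = max(children, key=fitness)  # artificial selection toward the target
        if parent == TARGET:
            return gen, parent
    return None, parent
```

In typical runs this converges in a few dozen generations; the speed comes entirely from the designer-supplied target and the target-comparing fitness function, not from the random mutations themselves.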
In short, a major red herring leading out to a strawman. Worse, Eric B has repeatedly also addressed the Weasel example and the wider question of GA's, only to meet the same tangential tendencies. I find it hard to believe that RF's argument is serious, instead of a rhetorical game. 3] 248, You have not shown that these “chance based features” [ie. Stemming from the role dice play in Monopoly] are in fact random. Nor have you shown that chance is inapplicable even in “designed contexts”. Now, first, RF takes the excuse that I have only pre-read BW to refuse to address the relevant contents of my always linked. Had he taken time, he would have seen that in section A, I speak to the use of dice as in a game:
heavy objects tend to fall under the natural regularity we call gravity. If the object is a die, the face that ends up on the top from the set {1, 2, 3, 4, 5, 6} is for practical purposes a matter of chance. But, if the die is cast as part of a game, the results are as much a product of agency as of natural regularity and chance. Indeed, the agents in question are taking advantage of natural regularities and chance to achieve their purposes! This concrete, familiar illustration should suffice to show that the three causal factors approach is not at all arbitrary or dubious -- as some are tempted to imagine or assert.
Now, a die has eight corners and twelve edges. In tumbling these features cause it to have sensitive dependence on initial conditions. Consequently, we see the physical basis for the commonplace observation that a reasonably fair die shows a pattern where each side turns up uppermost about 1 in 6 throws, on an effectively random basis. Indeed, that is why such dice are commonly used, even as one of the three basic examples in any initial course in probability and statistics, with the fair coin and the deck of cards. So, as long since noted by the undersigned, chance-based – even, effectively random -- elements can be a part of a designed context, per very familiar example. Thus, that we can see similar chance-based elements in a designed cosmos should not be surprising either. And, I cited rocks vs spearheads, thermally linked phenomena and more to illustrate in the observed cosmos. In short, this is plainly objection for the sake of objection, not serious dialogue at this point. Worse, there is an OUTRIGHT MISREPRESENTATION there too: I have NEVER set out to argue that “chance is inapplicable even in “designed contexts”.” Just the opposite, that designed contexts can embed chance elements, as illustrated by the game, Monopoly. [ . . . ]
kairosfocus
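The sensitive-dependence claim above is easy to illustrate numerically. The sketch below uses the chaotic logistic map as a stand-in for the die's tumbling dynamics (my choice of example, not the comment's): two deterministic trajectories started a trillionth apart soon become, for practical purposes, unrelated.

```python
# two deterministic trajectories of the chaotic logistic map x -> 4x(1 - x),
# started one part in 10^12 apart
a, b = 0.3, 0.3 + 1e-12
max_gap = 0.0
for _ in range(100):
    a = 4.0 * a * (1.0 - a)
    b = 4.0 * b * (1.0 - b)
    max_gap = max(max_gap, abs(a - b))
# the microscopic difference in initial conditions is amplified step by step
# until the two outcomes differ by an order-one amount: deterministic,
# yet for practical purposes incalculable -- "chance" in the sense above
```

The dynamics are fully lawlike, yet predicting the outcome would require specifying the initial condition to impossible precision, which is precisely the point made about the tossed die.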
April 16, 2008, 02:15 AM PDT
RichardFry (248): "I heard a story about a GA that was required to evolve a timer mechanism. It was provided with a mechanism to place components on a board. Rather than evolve a timer from the parts available to it, it evolved a radio receiver. Which picked up signals from a nearby computer as computers have timers in them to synchronise everything up, don't ya know? Seems to me that fits the bill for “insight and imagination to creatively configure elements” no?" In short, no. (But let me tell you a story sometime about a fish I caught... ;-) Sorry to rain on that story you have heard, but I will assure you that it was not as creative as the story makes it sound. You can safely count on the fact that the program did exactly what it was told to do. Whether people noticed that the outcome could be used in other ways might reflect on the observation of those people or some lucky accident. But I assure you that the program itself did not think "Hey, while I was working on this, I got an idea. What about this instead?" Evolutionary / genetic algorithms provide a way to search a large set of possible candidate solutions predefined by the programmer's model. They cannot innovate outside the defined box they are told to search. Think for a moment about the fact that everything they do is evaluated by the defined fitness function. Please see my response to austin_english at 256 and please review my post at 232 in regard to this point. Also, did you find either 245 or 246 helpful?
ericB
April 15, 2008, 07:54 PM PDT
austin_english (255) "Reproduction increases information".
Actually, simple reproduction itself doesn't do much to increase information. See the discussion above in post 184 about the string “THIS TEXT STRING” repeated a million times.
"In evolutionary computation the researcher makes up his own “laws of nature.” He sets the standards of success, but he doesn’t say how to succeed. The information on HOW comes from reproduction. And reproduction doesn’t “know” anything about the standards."
In a way, yes the programmer does say how. Each program must be written in such a way that every candidate solution must be able to be understood and evaluated by the associated fitness function. Thus, it is necessary that all production of candidate solutions is constrained so that it always produces only those kinds of solutions that the fitness function can understand and evaluate. Otherwise, the program would crash or give faulty results. So the programmer must define the solution space of possible candidates. This is typically too large to have even a computer consider every one in turn, but it is constrained in its nature. There is no opportunity for true novelty outside the predefined solution space of possible candidates. What the programmer doesn't know is exactly which possible candidates will be evaluated and of these which will score the best given the predefined fitness function. This makes this type of algorithm potentially very useful for searching a large solution space of possibilities. However, it never gives a solution other than one of those within the predefined constraints of the model. The computer is searching within a large box in a deterministic way, not being inspirationally creative in the way we would think of creativity, as in thinking outside the box. It is up to the programmer to creatively define the box they want the software to search.
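ericB's "predefined box" point can be made concrete with a toy sketch (the 16-bit encoding, the toy objective and the error message are illustrative assumptions of mine): the fitness function only understands candidates of the shape the programmer chose, so nothing outside that space can ever be evaluated, let alone selected.

```python
N_BITS = 16
SPACE_SIZE = 2 ** N_BITS   # the entire predefined solution space: 65,536 candidates

def fitness(candidate):
    # the evaluator defines the "box": only length-16 tuples of 0/1 are meaningful
    if not (isinstance(candidate, tuple) and len(candidate) == N_BITS
            and all(b in (0, 1) for b in candidate)):
        raise TypeError("candidate lies outside the predefined solution space")
    return sum(candidate)  # toy objective: count the 1s

# any search loop built on this fitness function, however long it runs,
# can only ever return one of the 65,536 admissible candidates
```

A "surprise" like the radio-receiver story would show up here as a TypeError, not as an evaluated solution: true novelty outside the encoding is undefined, exactly as the paragraph above argues.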
"If you believe that artificial “laws of nature” direct artificial evolution, then why can’t THE laws of nature direct natural evolution?"
Even those who doubt that neo-Darwinian processes can account for as much as is claimed for them do not claim that undirected processes are free of the laws of nature. So I don't think anyone is saying that natural evolution is not being influenced by the laws of nature. The question is more subtle. How far can that sort of process go in taking an existing creature and changing it into something quite different? Are there never any limits at all to change? If there are limits or constraints, what are they? What is the edge of evolution? To say that the laws of nature are in the game does not automatically mean that those laws are encouraging change. It could also mean that they sometimes inhibit or prevent change. (Indeed most of the time, the fossil record shows stasis for species -- they stay the same for long periods of time until they become extinct.) Within both computer algorithms and in biological change, there is also the issue of being trapped in a local optimal region (sometimes called a local maximum or a local minimum). Other possible solutions may be separated by impassable regions that divide the possibilities into islands of variation. Which of these effects the laws might have and when they have them is a question for research. Saying the laws participate doesn't by itself inform about the effect of that participation. You are asking good questions and thinking about these issues. I would recommend that you read Dr. Michael Behe's recent book The Edge of Evolution. In it he looks at both the abilities and the limitations of these processes, based on the best available empirical evidence.
ericB
April 15, 2008, 07:36 PM PDT
I'm going to do my best to say how my dad explained this to me. Reproduction increases information. You have to write more to describe all the critters after reproduction than before. The "laws of nature" (and some chance) decide which offspring reproduce. Not all reproduce, so there may be a decrease in information here. But what remains may say more about what works in nature than the information in the original parents. In evolutionary computation the researcher makes up his own "laws of nature." He sets the standards of success, but he doesn't say how to succeed. The information on HOW comes from reproduction. And reproduction doesn't "know" anything about the standards. If you believe that artificial "laws of nature" direct artificial evolution, then why can't THE laws of nature direct natural evolution?
austin_english
April 15, 2008, 04:39 PM PDT
DLH,
“On “Is anyone using this” please do your homework first. e.g. see Differential Evolution”
Why that? Please don't dump a source on us without saying what to get from it.
austin_english
April 15, 2008, 03:54 PM PDT
DLH
False interpretation. Note that “that makes sense in English” incorporates grammar or “law”.
Sorry, not clear to me what you refer to here. I'm not interpreting anything here. I'm pointing out that Kairosfocus' interpretation is 100% random 100% of the time.
That is the problem with evolutionary searches.
That it's not a blind search? Sorry, again here it's opaque to me.
On “Is anyone using this” please do your homework first. e.g. see Differential Evolution
Once more, not clear what you are getting at here. I'm not claiming that GA's are not used right now in industry. They are, of course. KF seems to be saying they are unable to solve a class of problem, a class the details of which he has so far not made clear to me, except to claim that the use of pre-specified information in setting the target somehow invalidates something.
RichardFry
April 15, 2008, 02:33 PM PDT
RichardFry
Again, it’s 100% random v’s 100% intelligent design. strawman? I think so.
False interpretation. Note that "that makes sense in English" incorporates grammar or "law".
If it was a blind search it might as well just start at the beginning and end at the end.
That is the problem with evolutionary searches. On "Is anyone using this" please do your homework first. e.g. see Differential Evolution
DLH
April 15, 2008, 02:14 PM PDT
For, first, GA’s are precisely NOT a blind search, but a set up intelligently designed search in a selected, constrained target zone with preset criteria of performance.
And? If it was a blind search it might as well just start at the beginning and end at the end. And in any case if your "selected, constrained target zone" is "physical reality" then does that not change anything for you?
putting out the notion that somehow an artificially constrained search across a finite and controllable domain that was selected for likelihood of success, is a good model of OOL and OO body plan level biodiversity by a claimed, unintelligent, non-foresighted, non-purposeful process.
Then give me an example of people putting out this "notion"
Then, when we see that through intelligence we have targetted the tiny fraction of possible configs of such a large set of elements that makes sense in English, we see the power of insightful, conceptual intelligence, over the million monkeys banging away at keyboards at random.
Again, it's 100% random v's 100% intelligent design. strawman? I think so.RichardFry
April 15, 2008, 12:35 PM PDT
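The "tiny fraction of possible configs that makes sense in English" point, and the million-monkeys image, can be put in numbers. The arithmetic below is my own, using the 27-symbol alphabet (capitals plus space) and a 28-character phrase as in the thread's Weasel example; the monkey count and typing rate are assumptions chosen only to give a sense of scale.

```python
from math import log10

alphabet_size = 27          # A-Z plus space
phrase_length = 28          # e.g. "METHINKS IT IS LIKE A WEASEL"

# Total number of distinct 28-character sequences over this alphabet.
configs = alphabet_size ** phrase_length
print(f"27^28 = {configs:.2e} distinct sequences")

# A million monkeys, each typing ten 28-character attempts per second
# (assumed figures), waiting to hit one specific phrase by pure chance.
monkeys = 1_000_000
attempts_per_second = 10 * monkeys
seconds = configs / attempts_per_second
print(f"expected wait: roughly {seconds / 3.2e7:.1e} years")
```

The space is about 1.2 * 10^40 sequences, so the expected wait dwarfs any realistic timescale; this is the gap between blind sampling and any selection-guided procedure that both sides are arguing over.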
Kairos, continued
a –> In praxis — as opposed to how the general context is presented to the public [and the naive technical practitioner]
a finite and conveniently scaled [digital] search/performance space is set up
I don't think the general public care as long as the stuff keeps coming out of the shops. Yes, and as I noted you can also set up an analogue search/performance space using physical components and breadboards. One page from one book does not define the entirety of a subject. And in any case the text you quote contains "The genetic algorithm solves optimization problems by mimicking the principles of biological evolution". Biological evolution, eh? How about that.
random initial points in a space known to be near to the desired performance are sampled [e.g. we do not start with random individual atoms floating dispersed in a fluid to search for a high-performance antenna
I doubt the origin of life, if achieved via non-intelligent-design methods, would have started that way either.
or of course — per my version of Hoyle’s well-known 747 scenario
Randomness again, KF? I know it's your trusty shield but it's getting a bit thin. Did the previous planes to the 747 (the ones that evolved into it) also self-assemble in junkyards?
a process of iterative culling and “interbreeding” is used to try to find an optimum or at least good performance in a finite number of steps
What, selection?
This is of course targetted search through intelligently designed artificial selection that exploits active information to get search process gains over a pure random walk.
What's your point? If you had a random target, what would be the point? So "maximise the gain on this antenna" is active information? Or is the physical behaviour of the system in question "active information"? Of course I'm aware of Dr Dembski's work on this, no need to link...
However, it is then presented as a model of how biological [including of course body-plan origination level macro-]evolution works [without adequately dealing with the search space and complexity issues, and the absence of intelligent direction in the usual model of such evolution].
I saw a child playing with a model plane. He said "look at my plane fly, my model plane". I simply had to correct him: his "plane" was simply not a smaller version of the real thing; it was being depicted as something it was not. KF, are you active in the research of the origination of body plans, then?
A book, of course, is openly intelligently designed, using a 128-state digital element as its base unit, the alphanumeric character [it is a lot more than 26 letters and a space!]. Authors are intelligent, and use insight, knowledge and imagination to construct intelligent communications. We do not write books by the million-monkeys-banging-away-at-keyboards method! For excellent reason.
You state the obvious in such eloquent ways. Congratulations. Again your comment with the monkeys indicates it's "random vs. intelligent design" here yet again. Yet you were quoting things about selection earlier.
GA’s by contrast, are too often presented as if they were models of evolution by chance variation and undirected, non-purposive natural selection
There must be hundreds of GA's out there. If "often" is true you will have no trouble giving me one, ten, two dozen examples of GA's misrepresented as if they were models of evolution by chance variation and undirected, non-purposive natural selection.
RichardFry
April 15, 2008, 12:27 PM PDT
Kairos @234
In particular, he has tried to use the fact that I pre-read instead of reading the Dawkins book to say in effect, that he does not need to read the substantiating information
What on earth is "pre-reading"? Kairos, how would you like it if I criticised the contents of a book, for example the Bible, and it then turned out that in fact I'd never read it? How would that strike you?
on a different point where he accused me of simply asserting without substance, and where I proceeded to point him to the fact that right from the beginning I had highlighted just where the details were to be found.
Please. Onlookers, please note the habit of using the "always linked" as a cover-all for any point at all claimed without further substantiation.
Thus, I showed, from Dawkins himself, that Weasel is of course a specifically targeted, Hamming-distance-based search in a constrained finite domain that deliberately cuts down from the scale of config space that a more realistic model of the genome would require.
Sigh. Please. If you'd read the book you would realise that it was not supposed to be a "realistic model" of anything, let alone "the genome" (presumably human).
Notice how I showed above that the Cambrian life revolution would require searching out bio-functionality through a digital config space comparable to creating the works of Shakespeare
As you know all the factors affecting "bio-functionality" in the Cambrian I can but acquiesce to your knowledge, oh time-lord.
it was used rhetorically (NOT “educationally”) to try to persuade the reader that chance variations plus natural selection can be used to create functionally specific complex information.
Both parts of that sentence cannot be true. If it was used rhetorically how could it have been, in your words an attempt at a realistic simulation?
To do that, it cut down the search space unrealistically, used a purposeful target and substituted artificial for natural selection.
Not only that, but as he did not simulate the interactions of all the atoms and electrons, that also cut down the search space unrealistically. As he also used a purposeful target and substituted artificial for natural selection (what's the difference, by the way?), that would also have cut down the search space unrealistically. Hardly worth the bother spending any time looking at, really, except as a toy example to teach people stuff.
In short, it set up a strawman – RF’s “toy example” is in fact a backhanded acknowledgement of this that refuses to face the implications.
I think the onlookers (some of them) will agree quite the opposite. It is you that has set this "toy example" up and destroyed it with thousands of words. Continued in next comment.
RichardFry
April 15, 2008, 12:00 PM PDT
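Since both sides appeal to Dawkins's Weasel, here is what a "targeted, Hamming-distance-based search" actually looks like in code. This is an editorial sketch, not Dawkins's original program; the brood size and mutation rate are assumed values. The target-aware scoring line is precisely the "active information" the thread is arguing about.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def weasel(copies=100, mut_rate=0.05, seed=0):
    """Cumulative selection toward a fixed target phrase."""
    rng = random.Random(seed)
    # Closeness = number of positions matching TARGET (28 - Hamming distance).
    closeness = lambda s: sum(a == b for a, b in zip(s, TARGET))
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        brood = [
            "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else ch
                    for ch in parent)
            for _ in range(copies)
        ]
        # Selection compares every copy against TARGET itself; the parent
        # is retained too, so closeness never regresses.
        parent = max(brood + [parent], key=closeness)
        generation += 1
    return generation

gens = weasel()
```

With these settings the target is typically reached in well under a thousand generations, versus the astronomically many attempts a pure random search would need; the entire difference is supplied by the target-aware fitness function.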
Kairos, I keep losing my place with you, so here's more: Kairos @223
I used the case of Monopoly to show that in a known designed context, one may have chance-based features and processes.
You have not shown that these "chance based features" are in fact random. Nor have you shown that chance is inapplicable even in "designed contexts".
Second, we observe that the cosmos, per fine-tuning, may be designed
It may also be made of fine wire and cheese. You just don't get it, do you? Sure, science can never say 100% "this is the way it is"; it's always a balance of probabilities. You have not shown that the balance has tipped in your favour. You have no definitive proof that the cosmos is designed and so are reduced to "may", yet you then continue on as if you've established your case beyond reasonable doubt.
Further to this, we have no good reason to infer that humans exhaust the set of possible or existing intelligent agents
Indeed, just the opposite obtains: intelligent agents use insight and imagination to creatively configure elements to achieve entities and processes that work to achieve goals.
I heard a story about a GA that was required to evolve a timer mechanism. It was provided with a mechanism to place components on a board. Rather than evolve a timer from the parts available to it, it evolved a radio receiver, which picked up signals from a nearby computer (computers have timers in them to synchronise everything, don't you know). Seems to me that fits the bill for "insight and imagination to creatively configure elements", no?
FSCI is a manifestation of that configuration, where relatively rare and specified configs show up in config spaces that are sufficiently vast that relevant probabilistic resources for chance-based searches would be exhausted.
What is this obsession with chance-based searches? Kairos, do you really think that it's simply a choice between "intelligent design" and "random chance"? Everything I read from you on the subject leads me to think so.
Next, we observe actual cases: a common, garden variety rock is credibly a product of chance + necessity, but say a Clovis point spearhead is not. And, the evident functionality, characteristic stylistic features and vast array of alternative configs for a rock show that the Clovis point exhibits FSCI and is designed. [In fact it is used as a diagnostic of certain ancient cultures in the Americas.]
Presumably you determined this partly by comparing the values of Complex Specified Information for a rock and a Clovis point? What were the values you obtained for the CSI of each? Please don't explain it further, or refer to your always linked; just put the figure down (or whatever form it takes) for each, please.
RichardFry
April 15, 2008, 11:41 AM PDT
Eric: Excellent again:
Now that biological life exists, groups of three nucleotides — a codon — can form a symbol corresponding to an amino acid in a functional protein. Biological life includes symbolic information . . . . The problem is, can a prebiotic universe make the transition to a universe with information-based biological life using only Blind Watchmaker processes, that is without the help of intelligent agency (i.e. a seeing watchmaker)? The answer is no . . . . What is the biological equivalent for “nearness” to symbolic information in a prebiotic universe? There is none. That is the problem. A mindless prebiotic universe has no access to any way to detect that one arrangement of matter is “closer” to being symbolic information than another. There is no gradual upward path to climb (i.e. the Climbing Mount Improbable). A prebiotic universe is both utterly blind to and supremely uninterested in the possibility of symbolic information. It has neither desire nor need for it. (Aristotle defined “Nothing” as that which rocks dream about.) . . . . Chance plus time plus the laws of physics and chemistry are insufficient to explain the origin of symbolic conventions and symbolically encoded information. Nature only defines the properties of the medium, not the message. To have symbolic conventions and a symbolic message, intelligent agency is required.
While of course it is logically possible that all of this originated by sheer lucky noise (as a 747 can be assembled, in principle, by a tornado passing through a junkyard . . .): 1] symbolic codes expressed in chemical monomer letters, 2] algorithms to implement same codes, 3] physical molecular nanomachines in proximity and arrangement to implement said algors and meaningful codes . . . this is counter to our experience and observation, and requires resort to quasi-infinities of quasi-infinities of sub-cosmi, each on the scale of our observed cosmos. The epicycles and deferents are multiplying without limit -- a sure sign of runaway ad-hocery. GEM of TKI
kairosfocus
April 15, 2008, 01:30 AM PDT
p.s. To RichardFry, here are two other ways to think about the fundamental difficulty.

1) Sometimes people think of the problem of information as though it were merely a difficult probability problem, as if it were just a matter of finding sequences of letters that match actual English. Viewed that way, it does seem like a hard problem, but only practically impossible, not strictly impossible. In principle, one can imagine being miraculously lucky and landing on a valid sequence. But that doesn't capture the whole of the real problem. It takes for granted that "English" (the symbolic convention by which the sequence is translated) exists and that there is someone or something that can do the translating. The real problem is that neither of these exist for free in the prebiotic world, and the prebiotic world has no means, reason, or motivation to provide them. Yet without them, none of the possible sequences of objects have symbolic meaning and the probability of finding one that has meaning is literally zero.

2) Suppose we knew all the laws and properties of paper and ink and could apply them. For now, ignore all issues of uncertainty, incompleteness, etc. With that knowledge, would it be possible in principle to derive the contents of a book? Or even to derive just the remainder of a book, given its first chapter? No. The properties of the medium do not define the information held by the medium, not even in principle. Chance plus time plus the laws of physics and chemistry are insufficient to explain the origin of symbolic conventions and symbolically encoded information. Nature only defines the properties of the medium, not the message. To have symbolic conventions and a symbolic message, intelligent agency is required.
ericB
April 14, 2008, 08:29 PM PDT
RichardFry (241): "... I'd like to know what "nearness" and "markers" mean in biological terms. Plus in biological terms, is there any meaning in the "markers"? Presumably so, but I'd like you to tell me what it is as I've no idea."
By "markers" I simply meant anything we might hope could become a symbol, but which is not yet a symbol. But even "marker" may be suggesting more than is justified. You could use "object" or anything else similar for something that is not at present a "symbol". Biologically, a nucleotide in a prebiotic universe might be such an object, if any existed. Now that biological life exists, groups of three nucleotides -- a codon -- can form a symbol corresponding to an amino acid in a functional protein. Biological life includes symbolic information. Is there any meaning in these objects or "markers"? No, none at all. That is why I didn't use the term "symbol". A symbol represents something other than itself according to an associated convention. A symbol has meaning.

The problem is, can a prebiotic universe make the transition to a universe with information-based biological life using only Blind Watchmaker processes, that is without the help of intelligent agency (i.e. a seeing watchmaker)? The answer is no.

By "nearness" I am just referring to the game of "hotter/colder" that all genetic/evolutionary algorithms play. The fitness function distinguishes all candidates based on a score. The better the score, the "nearer" that candidate solution is to the goal defined by the fitness function.

What is the biological equivalent for "nearness" to symbolic information in a prebiotic universe? There is none. That is the problem. A mindless prebiotic universe has no access to any way to detect that one arrangement of matter is "closer" to being symbolic information than another. There is no gradual upward path to climb (i.e. the Climbing Mount Improbable). A prebiotic universe is both utterly blind to and supremely uninterested in the possibility of symbolic information. It has neither desire nor need for it. (Aristotle defined "Nothing" as that which rocks dream about.)
In addition to the many merely chemical obstacles, this is why Blind Watchmaker processes could not ever create information-based biological life. Analogies to GAs are useless and not applicable. Intelligent agency is required.
RichardFry: "Are you here saying that you accept that WEASEL is simply a illustration for teaching purposes and so conclusions about other GA’s cannot (and should not) be drawn from it?"
I was never drawing conclusions about all GAs based merely on WEASEL. I do say that WEASEL does not even rise to the level of a legitimate example or illustration for biological contexts. Its presence in The Blind Watchmaker illegitimately attempts to attach credibility by violating the core premise of the book. A stage magician's magic trick is not even a "toy" example of supernatural magic. It does not serve to teach supernatural magic. It is just a trick and all it teaches is deception. Dawkins is mistaken if he thinks the problem can be patched over, and it would be a mistake for readers to just trust him on this one. BTW, Dembski's book No Free Lunch devotes chapter 4 to Evolutionary Algorithms. Dembski's thorough treatment does not depend on WEASEL. It addresses all evolutionary algorithms in general.
ericB
April 14, 2008, 07:19 PM PDT
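ericB's "hotter/colder" description of fitness functions, and his claim that a needle-in-a-haystack target offers "no gradual upward path", can both be shown with one toy hill-climber. This is an editorial sketch; bit strings stand in for candidates, and the two fitness functions are illustrative assumptions, not anyone's published model.

```python
import random

def climb(fitness, length=20, steps=2000, seed=1):
    """Single-parent hill climb over bit strings: flip one random bit
    and keep the flip only if the fitness function says it is no colder."""
    rng = random.Random(seed)
    s = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(steps):
        t = s[:]
        t[rng.randrange(length)] ^= 1
        if fitness(t) >= fitness(s):
            s = t
    return s

target = [1] * 20

# Graded fitness: counts matching bits, so every candidate has a defined
# "nearness" and the climber has a slope to follow.
graded = lambda s: sum(a == b for a, b in zip(s, target))

# Needle-in-a-haystack fitness: 1 at the target, 0 everywhere else.
# Away from the target it is flat, so the climber just random-walks.
needle = lambda s: 1 if s == target else 0
```

Under the graded fitness the climb reliably reaches the all-ones target within a couple of thousand steps; under the flat "needle" fitness it almost never does, because a fitness function that scores everything equally carries no information about direction.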
PS I forgot: 4^300,000 ~ 9.94*10^180,617. That is the sort of config space we have to find islands of functionality in to get to first life on an evo mat prebiotic scenario. Even if there were 10^1,500 such islands, each with 10^150 configs, that would be so maximally improbable to access through prebiotic chemistry on the scope of our observed cosmos that the FSCI leads to the comfortable inference to intelligent design of life -- and we have only factored in the DNA strands here, not the code nor the algorithm nor the associated implementing machinery. The only seriously mentioned evo mat alternative is a quasi-infinite array of sub-cosmi, to soak up the config space, and that plainly moves out of science into ad hoc speculative philosophical metaphysics. This is similar to the attempt to get around the fine-tuning of the cosmos by a similar quasi-infinite array. The result: a quasi-infinite array of life-facilitating sub-cosmi, each within the context of a quasi-infinite array of non-life-facilitating sub-cosmi. The epicycles and deferents are multiplying without limit in a vain attempt to save the phenomena for the modern equivalent to Ptolemy's system. Intelligent design of the observed cosmos and of cell-based life in it is plainly a far superior alternative to that!
kairosfocus
April 14, 2008, 04:17 AM PDT
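The headline figure in the postscript can be reproduced in a couple of lines (an editorial check of the arithmetic, nothing more):

```python
from math import log10

bases = 300_000                       # DNA strand length used in the comment
exponent = bases * log10(4)           # log10 of 4^300,000
mantissa = 10 ** (exponent % 1)       # leading digits
print(f"4^300,000 ~ {mantissa:.2f} * 10^{int(exponent)}")
```

This agrees with the 9.94*10^180,617 quoted in the comment; whether that number carries the argumentative weight claimed for it is, of course, exactly what the thread disputes.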
2] RF: You [GEM] obviously only have a superficial understanding of the subject [GA's] as shown by your preference to quote others rather than submit your own opinions on the matter.

Onlookers: I have cited reasonable authority, in order to show on the record that I am not misrepresenting the facts. As the above shows, the real problem is not that I do not understand what is going on in GA's, but that RF is unwilling to accept that there is a serious problem with trying to make out that GA's work as materialistic evolution is claimed to have worked to get to where the biosphere and fossil record are. Now, too, had I simply stated my opinions, which I ALSO did, RF would doubtless have been objecting that I am just giving ill-informed, one-sided opinion. Also, notice, onlookers, how this pattern of argument by RF moves ever further away from the issues in the main for the thread, along one red herring after another. That consistent pattern of the rhetoric of distraction and ad hominem is no accident. Now, on this particular tangent:

i --> I cited and stated that the facts are that GA's are intelligently designed, targeted searches in conveniently sized spaces, in zones reasonably known or suspected by the designers to be close to "good" performance, based on comparative performance and various randomising processes that try to keep hill climbing from just going up one local hill instead of finding a better hill to climb.

ii --> More specifically, on the point originally challenged, WEASEL is an even more targeted search, where the hill of peak performance is known from the outset and iterative partial solutions are propagated to the next iteration based on closeness to the target.

iii --> Thus, in the context of a book that advocates for the power of CV + NS, WEASEL [just as I originally objected and as Eric backs me up . . .] serves a rhetorically manipulative purpose, not an educational one.
iv --> And, by his own words as cited above from the discussion in Wiki, Mr Dawkins knew at the time that the example was off-target. [But then, artificial selection processes have been used as persuasive examples for NS ever since Darwin. Yet another misleading icon of evolution, in short.]

3] What do random keypresses have to do with the subject at hand? Where is selection here?

Had you paid close attention, you would have seen that I took time to show that the relevant increments in bio-functional information to get to body-plan level macro-evolution are fully comparable with having to first produce a good slice of, say, the Shakespearean corpus to get TO the functionality that we can then see competitive reproductive success selecting against. To even just get to the observed complexity of observed simplest life forms. You have to generate information before you can select for functionality, much less comparative functionality. For instance, we are looking at DNA strands of order 300 – 500 k or even 1 mn bases. 300 k bases is a config space of 4^300k ~ 9.94*10^180,617. And, that would have to be arrived at in a prebiotic soup that somehow manages to create the algors of life, a coding scheme, then the molecules in the appropriate protected configs to implement the algors of cellular life. To go thence to the dozens of Phyla in the Cambrian, we are looking at credibly increments of ~ 100 mn bases per major body plan. In short, we are looking at having to get huge increments in bio-information before we can select through differential survival and reproduction; which is in turn a diversity-reducing filter, through culling of the "unfit."

4] Eric, thank you for keeping close to the topic at hand.

In short, this is loaded: he here tries to imply that I have not – a falsehood. And of course the fact that his is a tangential issue relative to the thread in the main as led by RF is of course not discussed.
5] the toy example WEASEL appears now to be used to prove that evolution cannot generate new information, somehow, without it being "sneaked in". So, as WEASEL is being conflated with actual biological evolution I'd like to know what "nearness" and "markers" mean in biological terms.

First, Weasel ends up inadvertently showing that intelligent design is capable of moving to a functional target. And, it was set in the context of a book that set out to use it to promote the evolutionary materialist paradigm to the public. Second, as long since shown, it is a rhetorical bait and switch. Instead of showing how we move from one functional state arrived at by CV + NS, it is a case of incrementally moving to a target by small changes that are kept if they get closer to it.

In the case of the broader issue, bio-functionality is an observable phenomenon: the organism lives or dies. Within islands of functionality [which "live" in vast config spaces, the vast majority of which, e.g. per the existence of the simple stop codon in DNA, will be non-functional], one may indeed have differential reproductive success that may either preserve an existing population or – per observations – lead to small variations. But, the problem is the increments in information to get TO these islands, starting with OOL and the Cambrian phyla explosion.

So, on GA's: the selection of zones of known or suspected functionality, multiplied by definition of fitness functions by intelligent designers, multiplied by the use of symbolic systems of representation [the genetic element of the GA], we are looking at a very large increment of too often unacknowledged active information giving purpose and targeting to the search: search in this zone, and use the following intelligent technique [comparative performance, mutations, cross-breeding etc.] to get to a hopefully better solution as evaluated through the following equally intelligently designed objective function.

6] What about UTF-8 encoding then?
Yet another red herring. It is enough to use ASCII to make the point [as would good old EBCDIC]; UTF-8 is an extension thereto.

7] what is the chance of a book self-assembling wholesale, as a 747 could not do in a junkyard?

As a matter of fact, it is logically and physically possible – but maximally improbable – that a tornado in a junkyard near the Boeing factory would assemble a 747. Similarly, an explosion in a print factory could conceivably splash ink across paper to make a book. That is, we are looking at config spaces and microstates in such spaces. It is only because the predominant observable clusters of states are so overwhelming in relative statistical weight that we do not expect to see functionally specified complex information emerging by such processes. The odds against are similarly huge, which is precisely why we normally infer from FSCI to intelligent action as its best explanation – save when worldview-level evo mat commitments are at stake, it seems.

We could do an experiment, to try to create a page of text by spraying ink from a nozzle at a suitable distance from a page, with the number of dots of – let's say fusible toner – being enough to give the typical 5% cover on the page at say 300 dots per inch and a written area of 9" x 6.5" -- 58.5 sq in at 90,000 pixels per sq in; 5.265 mn pixels. 5% would be 263,250. Then let's say we have to do 200 pp. at 5% cover of pixels, making symbolic, grammatical and narrative sense. (These pp would have to embed a coherent continuous narrative in a recognisable language, using its symbols.) That such could happen by chance is of course logically and physically possible [nothing blocks it physically and it does not imply a logical contradiction], but the config space is such that the islands of functionality thus specified would be overwhelmed on the scale of our planet. If we see pages of a book, we routinely and reliably infer very properly to an intelligent author.
Indeed, we could extend this to an in-principle GA: move to a blank computer screen with the same constraints and impose the 5% cover of pixels at random, then do so across a population of initial dots on the computer equivalent to a sheet of paper, and do mutations and fitness-based, probability-enhanced cross-breeding to move towards a functional text. We could even start with an initial functional text and see if the process could transform it from one form to another. For that matter, we could do the same to try to create the engineering drawings for, say, the jet engine of the 747, or even the drawings for an antenna. Then, see how many iterations it takes to get the page of text or the 747 engine drawing or the high-performance antenna.

8] Are books not written a word at a time rather than coming into being at once?

Ever heard of a book -- or an essay -- plan?

9] Don't they evolve?

Indeed they do: by intelligent design, through drafts in the first instance, then editorial revisions, then further issuing of revised editions.

10] are you of the opinion that the config space for books is too large for them to actually come into existence?

I am of the observationally anchored opinion that books are the product of intelligent authors who use their smarts to target functional configs, not random walks that have to initially hit islands of functionality [target zones] before climbing hills through one hill-climbing algor or another to more or less good performances.

11] 241, Kairos, what's your point anyway? That GA's don't work? That they don't represent biological evolution? That they can't create new information? That they are no better than a random search?

GA's work; this – as onlookers can easily verify – I have never doubted or disputed. They do so, however, precisely by incorporating intelligently sourced active information. In so doing they may well hit surprising new configs in the said target zone -- based on the intelligent input from the beginning.
And, as Marks and Dembski showed in the papers I have linked twice now, IT IS THE INJECTION OF SUCH ACTIVE INFORMATION INTO GA'S THAT GIVES BETTER PERFORMANCE ON AVERAGE THAN RANDOM SEARCH. (So, again, this set of questions is a strawman.) GA's may well represent real-world biological evolution -- as opposed to the evolutionary materialist picture of such evolution. That is, if life in the various forms we see now and in the fossil record evolved through body-plan level macroevolution -- whether all the way from a last common ancestor or no -- then it credibly was intelligently directed. And, GA's are supportive evidence for that.

12] I don't believe you've ever actually said what the problem with you and GA's is.

Simply false, and yet another tangential misrepresentation and distraction. I have repeatedly pointed out that GA's and their kin -- contrary to the billing often put up by evo mat advocates -- are not illustrative of the proposed CV + NS mechanisms that are foundational to the NDT account of macro-evolutionary change. GEM of TKI
kairosfocus
April 14, 2008, 03:52 AM PDT
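The ink-spray arithmetic in point 7 of the comment above is internally consistent and easy to verify (an editorial snippet, using only the figures given in the comment):

```python
dpi = 300
pixels_per_sq_in = dpi * dpi          # 300 dpi in both directions = 90,000
area_sq_in = 9 * 6.5                  # written area of 9" x 6.5"
total_pixels = area_sq_in * pixels_per_sq_in
covered = total_pixels * 0.05         # the stated 5% toner coverage

print(area_sq_in)                     # 58.5
print(int(total_pixels))              # 5265000
print(round(covered))                 # 263250
```

So the comment's 58.5 sq in, 5.265 mn pixels, and 263,250 covered dots all follow from its stated inputs; the contested step is the inference drawn from them, not the multiplication.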
Eric and onlookers (Re RF): So very sadly, it is now increasingly evident that RF is either only superficially glancing at what others have to say, then dashing off to attack the resulting strawman [typically on yet another tangent to the issues for the thread in the main, from which he has consistently distracted], or else that he is a willful distorter of what others have to say, with intent to mislead onlookers. Let us therefore pause and put the focus back on track: A: Heller
[OP] Father Michal Heller, 72, a Polish priest-cosmologist . . . . In this recent interview [linked to his winning the Templeton Prize] [gave] a critique of the intelligent design position as bad theology, akin to the Manichean heresy . . . . "They implicitly revive the old manicheistic error postulating the existence of two forces acting against each other: God and an inert matter; in this case, chance and intelligent design." . . . . "There is no opposition here. Within the all-comprising Mind of God what we call chance and random events is well composed into the symphony of creation." . . . . "God is also the God of chance events," he said. "From what our point of view is, chance — from God's point of view, is … his structuring of the universe."
“Lord Ickenham” then links a summary of the Manichean heresy, from the classic edn of the Catholic Enc:
Manichæism is a religion founded by the Persian Mani in the latter half of the third century . . . As the theory of two eternal principles, good and evil, is predominant in this fusion of ideas and gives color to the whole, Manichæism is classified as a form of religious Dualism . . . . The key to Mani's system is his cosmogony. Once this is known there is little else to learn. In this sense Mani was a true Gnostic, as he brought salvation by knowledge. Manichæism professed to be a religion of pure reason as opposed to Christian credulity; it professed to explain the origin, the composition, and the future of the universe . . . . Before the existence of heaven and earth and all that is therein, there were two Principles, the one Good the other Bad. The Good Principle dwells in the realm of light . . . . This Father of light together with the light-air and the light-earth, the former with five attributes parallel to his own, and the latter with the five limbs of Breath, Wind, Light, Water, and Fire constitute the Manichæan pleroma. This light world is of infinite extent in five directions and has only one limit, set to it below by the realm of Darkness, which is likewise infinite in all directions barring the one above, where it borders on the realm of light. Opposed to the Father of Grandeur is the King of Darkness. He is actually never called God, but otherwise, he and his kingdom down below are exactly parallel to the ruler and realm of the light above. The dark Pleroma is also triple, as it were firmament, air, and earth inverted . . . . These two powers might have lived eternally in peace, had not the Prince of Darkness decided to invade the realm of light. On the approach of the monarch of chaos the five aeons of light were seized with terror. This incarnation of evil called Satan or Ur-devil (Diabolos protos, Iblis Kadim, in Arabic sources), a monster half fish, half bird, yet with four feet and lion-headed, threw himself upward toward the confines of light.
In short, it seems that Heller is probably thinking of ID as a theological dualistic system that sees forces of order and organisation opposed to those of chaos [chance]. But in fact, the design inference is a scientific inference to best explanation within the observed cosmos, from SIGNS of Intelligence to its credibly known source, intelligent action: i --> It is a commonplace observation, immemorial since the days of Plato, that causal factors commonly resolve into [1] natural regularities tracing to mechanical necessity, [2] chance (often showing itself in random behaviour), [3] intelligent action. ii --> It is possible to see the three at work in a given situation. For instance, as discussed in the always linked, heavy objects fall under the natural regularity we call gravity. If the object in question is a die, its uppermost face is, for practical purposes, a matter of chance; and the die may have been tossed as part of a game, an intelligently designed process using intelligently designed objects in ways that take advantage of chance and natural regularities to achieve the purposes of agents. iii --> Natural regularities are detectable by consistent, low-contingency patterns of events, i.e. we may use scientific approaches to infer to natural laws as their best explanation. iv --> When by contrast we see high contingency, we know that chance and/or agency are the relevant predominant causal factors. That is, the particular configs observed may result from chance or from agent action: we may toss a six, or we may set the die with 6 uppermost. v --> Per the law of large numbers, we observe that random/chance samples of a population of possible outcomes tend to reflect its predominant clusters of configurations. [This is in fact the foundation of the statistical form of the 2nd law of thermodynamics. 
It is also the basis for Fisherian-style inference testing and experiment designs that use these principles, e.g. control experiments and treatments studied using ANOVA etc.] vi --> When therefore we see that these predominant clusters are non-functional, but the observed outcome is functionally specified and complex, we infer -- routinely and reliably -- not to chance but to intelligent agency. vii --> This is generally non-controversial, but on matters tied to the origins of life and the cosmos, there is a currently dominant, evolutionary materialist school of thought that strongly objects to what would otherwise be the obvious explanation for the organised complexity [OC] of cell-based life and the similar OC of the physics that underlies the cosmos that facilitates such life. Thus, through the injection of methodological naturalism into the understanding of science [= “knowledge,” etymologically], the question is too often begged. viii --> This is a matter of science, not theology. The inference to design is a reasonable principle in science, not a theologically speculative, ill-founded heresy. ix --> But, going beyond the province of science, as Fr Heller has, the issue brings up a very familiar and unquestionably foundational Christian theological context that challenges Fr Heller's thinking:
Rom 1:19 . . . what may be known about God is plain to [men], because God has made it plain to them. 20 For since the creation of the world God's invisible qualities--his eternal power and divine nature--have been clearly seen, being understood from what has been made, so that men are without excuse. Rom 1:21 For although they knew God, they neither glorified him as God nor gave thanks to him, but their thinking became futile and their foolish hearts were darkened. 22 Although they claimed to be wise, they became fools 23 and exchanged the glory of the immortal God for images made to look like mortal man and birds and animals and reptiles [yesteryear, in temples; today, often in museums, magazines, textbooks and on TV] . . . . Rom 1:28 . . . since they did not think it worthwhile to retain the knowledge of God, he gave them over to a depraved mind, to do what ought not to be done. 29 They have become filled with every kind of wickedness, evil, greed and depravity. They are full of envy, murder, strife, deceit and malice. They are gossips, 30 slanderers, God-haters, insolent, arrogant and boastful; they invent ways of doing evil; they disobey their parents; 31 they are senseless, faithless, heartless, ruthless. 32 Although they know God's righteous decree that those who do such things deserve death, they not only continue to do these very things but also approve of those who practice them. Rom 2:6 God "will give to each person according to what he has done." 7 To those who by persistence in doing good seek glory, honor and immortality, he will give eternal life. 8 But for those who are self-seeking and who reject the truth and follow evil, there will be wrath and anger.
x --> So, is the design of the world plain to those willing to follow the evidence where it leads instead of rejecting the evidence in favour of agenda-serving assumptions and stories? Paul says yes. In so saying, he opens himself up to empirical test, and the implication of the design inference is that design is indeed intelligible and very evident in the world as we experience and observe it. That makes the theological inference to a Creator God as designer of the world a reasonable worldview alternative indeed. B: Dealing with RF's latest . . . 1] 240: you [GEM] continue to conflate randomness with the workings of GA’s it’s pointless to continue talking to you. I have pointed out, FYI, that random searches are incorporated as components in GA's, and that the GAs are targetted searches based on finite solution spaces which are sampled through a partly random process in an attempt to escape the local-maximum problem. However, this boils down to the point that GA's search within islands of known or suspected functionality -- target zones similar to the one we are just about to designate here for geothermal energy -- not the config spaces as a whole. As I discuss in my always linked, sections B and C [in light of the discussion of inference to design in section A], OOL would have had to start by getting to the monomers of life, then to the polymers of life and the configs that store and implement code-based algorithms. Similarly, to move from first life to body-plan level biodiversity, huge increments of functional information would have had to be generated before hill-climbing through natural selection etc. serving as culling [thus information-based, diversity-reducing!] mechanisms could get to work. You have to get to “Island Improbable” before you can try to climb “Mt Improbable,” while avoiding getting stuck on climbing just one of its foothills. 
And, since -- per the evo mat model -- we are not dealing with intelligently designed searches that start within target zones, we have to address getting to the initial functional configs through random walks not rewarded by mere relative closeness to functionality. Close[r] to functioning [per WEASEL] is not good enough to reward: you gotta minimally function before you can try to incrementally get better function. How long would WEASEL take if we imposed the criterion that each and every iteration must itself be a functional sentence, while it tries to get to the targetted phrase? [ . . . ]kairosfocus
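kairosfocus's closing question can be made concrete with a short sketch. Dawkins never published his WEASEL source code, so the following is a common reconstruction, not the original program; the population size, mutation rate, and seed are all assumed values for illustration. The point at issue is visible in `nearness`: the selector consults a stored copy of the target.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def nearness(s):
    # The contested ingredient: distance is measured against a stored target,
    # so even gibberish strings get a score the search can climb.
    return sum(a == b for a, b in zip(s, TARGET))

def weasel(pop=100, mut_rate=0.05, seed=0):
    rng = random.Random(seed)

    def mutate(s):
        return "".join(rng.choice(ALPHABET) if rng.random() < mut_rate else c
                       for c in s)

    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    gens = 0
    while parent != TARGET:
        gens += 1
        # Cumulative selection: keep the child closest to the target.
        parent = max((mutate(parent) for _ in range(pop)), key=nearness)
    return gens

print(weasel())  # typically converges in well under a thousand generations
```

By contrast, a single blind draw of 28 characters from a 27-symbol alphabet hits the target with probability 27^-28, about 1 in 10^40, which is why the stored-target slope does all the work.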
April 14, 2008 at 03:51 AM PDT
Eric
The case of meaningful text is actually a very good example of what is fundamentally misleading about supposed illustrations such as WEASEL or others like it.
On re-reading your comment it struck me that we might be closer than you think. Are you here saying that you accept that WEASEL is simply an illustration for teaching purposes, and so conclusions about other GA's cannot (and should not) be drawn from it? And Kairos, what's your point anyway? That GA's don't work? That they don't represent biological evolution? That they can't create new information? That they are no better than a random search? What? I don't believe you've ever actually said what your problem with GA's is.RichardFry
April 14, 2008 at 01:13 AM PDT
Kairos
we see the power of insightful, conceptual intelligence, over the million monkeys banging away at keyboards at random.
As you continue to conflate randomness with the workings of GA's, it's pointless to continue talking to you. You obviously have only a superficial understanding of the subject, as shown by your preference to quote others rather than submit your own opinions on the matter. What do random keypresses have to do with the subject at hand? Where is selection here? You are trying to lead the onlookers down a rabbit hole! Eric
Consequently, any blind process of shuffling markers without knowing a translation convention is inherently doomed. It cannot ever possibly assess “nearness” to meaning because those markers have no meaning.
Eric, thank you for keeping close to the topic at hand. If we put WEASEL to the side for a moment, what do you think in biological evolution the terms "nearness" and "markers" have as analogues? After all, the toy example WEASEL appears now to be used to prove that evolution cannot generate new information, somehow, without it being "sneaked in". So, as WEASEL is being conflated with actual biological evolution, I'd like to know what "nearness" and "markers" mean in biological terms. Plus, in biological terms, is there any meaning in the "markers"? Presumably so, but I'd like you to tell me what it is, as I've no idea. KairosFocus
PS: that the solution space for standard English text is based on a 128-state digital element [the 7-bit ASCII alphanumeric character] is a basis for constructing a vast config space when we write a book
What about UTF-8 encoding then? And KairosFocus, what is the chance of a book self-assembling wholesale, as a 747 could not do in a junkyard? Are books not written a word at a time rather than coming into being at once? Don't they evolve? Or are you of the opinion that the config space for books is too large for them to actually come into existence?RichardFry
April 14, 2008 at 01:09 AM PDT
Eric: Excellent again. I especially liked:
When we think of a blind process shuffling symbols, it is easy for us to forget that without an established translation convention, they would not be symbols and they would have no symbolic meaning at all. Consequently, any blind process of shuffling markers without knowing a translation convention is inherently doomed. It cannot ever possibly assess “nearness” to meaning because those markers have no meaning. That is why the dependence of life upon symbolic information is the doom for explaining life entirely through Blind Watchmaker processes, regardless of how broad that category is . . . . The whole point of the term “blind” [in the title of The Blind Watchmaker] is to deny any claim of using foresight. In this context, to provide a supposedly supporting “illustration” or “example” that relies on foresight programmed in by an intelligent designer, one who supplies the understanding of English language implicitly though the program itself does not understand English, this is a clear cut example of cheating. It is a deceptively misleading illustration that smuggles in the very sort of quality that Dawkins claims the Blind Watchmaker can manage without. It runs contrary to the core premise of the book.
GEM of TKI PS: that the solution space for standard English text is based on a 128-state digital element [the 7-bit ASCII alphanumeric character] is a basis for constructing a vast config space when we write a book, say of 10^6 characters [~ 9.32 *10^2,107,209 cells]. Then, when we see that through intelligence we have targetted the tiny fraction of possible configs of such a large set of elements that makes sense in English, we see the power of insightful, conceptual intelligence over the million monkeys banging away at keyboards at random.kairosfocus
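The figure quoted in the PS is easy to check with a few lines of Python: a 10^6-character string over a 128-state alphabet has 128^(10^6) configurations, which we can express in scientific notation via logarithms rather than computing the astronomically large integer directly.

```python
import math

chars = 10**6       # length of the hypothetical book, in characters
states = 128        # 7-bit ASCII: 2^7 possible values per character

log10_cells = chars * math.log10(states)   # log10 of 128^(10^6)
exponent = math.floor(log10_cells)
mantissa = 10 ** (log10_cells - exponent)

print(f"~{mantissa:.2f} * 10^{exponent:,}")
```

This prints an exponent of 2,107,209 with a mantissa of about 9.33, agreeing (to rounding) with the ~9.32 * 10^2,107,209 cells cited above.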
April 13, 2008 at 04:00 PM PDT
RichardFry (233): "Eric, you are obviously an expert in this field. I expect you see yourself as the person “pulling away the curtain”. I’ll leave you to it then rather than attempt to make you face facts." Please know that I am interested in your input. If there are facts that you feel I am not facing, I do want to better understand what you are specifically alluding to. Your contributions have value in the exchange from different perspectives. I'm listening. RichardFry (233): "However, all I would say in response to this [When genetic/evolutionary software is developed, the developers define in advance the model of the solution space they will explore.] is that when you write a book the “solution space” you explore is constructed from only 26 letters and a space. Presumably you would also count this as cheating." If I miss part of your point here, please correct me. I will try to address what I would count as "cheating". The case of meaningful text is actually a very good example of what is fundamentally misleading about supposed illustrations such as WEASEL or others like it. When we look at the example, we easily forget that as readers we are supplying the ability to translate. However, translation of symbols to their meaning is always via a non-essential convention. We can see that, for example, when we find words that mean different things in different languages, or when we consider that different languages express the same meaning differently, each by their own conventions. When we think of a blind process shuffling symbols, it is easy for us to forget that without an established translation convention, they would not be symbols and they would have no symbolic meaning at all. Consequently, any blind process of shuffling markers without knowing a translation convention is inherently doomed. It cannot ever possibly assess "nearness" to meaning because those markers have no meaning. 
That is why the dependence of life upon symbolic information is the doom for explaining life entirely through Blind Watchmaker processes, regardless of how broad that category is. Now if Dawkins were writing a book about understanding computer search strategies, then the WEASEL example might have legitimately served some purposes related to the topic of the book. However, I understand The Blind Watchmaker to be about Blind Watchmaker processes related to biology, showing us how these could potentially accomplish the design we thought required a seeing watchmaker. The whole point of the term "blind" is to deny any claim of using foresight. In this context, to provide a supposedly supporting "illustration" or "example" that relies on foresight programmed in by an intelligent designer, one who supplies the understanding of English language implicitly though the program itself does not understand English, this is a clear cut example of cheating. It is a deceptively misleading illustration that smuggles in the very sort of quality that Dawkins claims the Blind Watchmaker can manage without. It runs contrary to the core premise of the book.ericB
April 13, 2008 at 10:28 AM PDT
Is that when you write a book the “solution space” you explore is constructed from only 26 letters and a space. Presumably you would also count this as cheating. Richard, are you implying that books are written via random mutations and natural selection?tribune7
April 13, 2008 at 05:39 AM PDT
PPS: This is also useful, from the GA warehouse's tutorial:
Genetic algorithms are one of the best ways to solve a problem for which little is known. They are a very general algorithm and so will work well in any search space. All you need to know is what you need the solution to be able to do well [TRANS: optimisable objective -- i.e. target performance -- function], and a genetic algorithm will be able to create a high quality solution. Genetic algorithms use the principles of selection and evolution [NOT CV + NS!] to produce several solutions to a given problem. Genetic algorithms tend to thrive in an environment in which there is a very large set of candidate solutions and in which the search space is uneven and has many hills and valleys. True, genetic algorithms will do well in any environment, but they will be greatly outclassed by more situation specific algorithms in the simpler search spaces . . .
In short, intelligently designed, targetted, hill-climbing searches across a defined, feasible [finite and small enough] config space.kairosfocus
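A purely illustrative sketch of what such a targetted, hill-climbing search looks like in code (the bit-string space, the objective function, and every parameter below are assumptions for demonstration, not drawn from the tutorial quoted): the programmer fixes the config space and the objective in advance, and the GA merely climbs within that predefined space.

```python
import random

rng = random.Random(1)
BITS = 20  # every candidate lives in the predefined space [0, 2^20)

def fitness(x):
    # Programmer-chosen objective: rewards 1-bits, with extra "hills"
    # at multiples of four, giving an uneven landscape to climb.
    ones = bin(x).count("1")
    return ones + 3 * (ones % 4 == 0)

def ga(pop_size=30, gens=200, mut_rate=0.05):
    pop = [rng.getrandbits(BITS) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # selection: keep the fitter half
        children = []
        for p in parents:
            c = p
            for b in range(BITS):           # point mutation, bit by bit
                if rng.random() < mut_rate:
                    c ^= 1 << b
            children.append(c)
        pop = parents + children            # parents survive (elitism)
    return max(pop, key=fitness)

best = ga()
print(bin(best).count("1"))  # climbs toward the all-ones peak
```

Note that nothing outside the 2^20-element space can ever be produced: the search optimises a pre-specified function over a pre-specified space, which is the tutorial's point about situation-specific design.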
April 13, 2008 at 04:42 AM PDT
PS: Eric, this analysis is so good I want to scoop it out and highlight it: _____________ . . . [What the WEASEL example illustrates is that] the ability of computer software to accomplish a task need not represent or demonstrate how biology actually works — even if we attach words like “genetic” or “evolutionary” to the algorithms. Consequently, the fundamental problem is not that it is “simple” or “toy” but that it is “not like biology” regarding the key questions. One of the fundamental dissimilarities of computer software, including genetic/evolutionary algorithms, is that the authors can build in knowledge and a sense of target that real world biology would have no access to. In WEASEL, storing and comparing against a target is the obvious departure. In other software, the same unrepresentative advantage can be built in. When genetic/evolutionary software is developed, the developers define in advance the model of the solution space they will explore. This is necessary because they must anticipate and ensure that a) as new solutions are generated, they must fall within the defined solution space they have modeled, and b) the fitness function must be able to evaluate any solution so generated. Consequently, there cannot be any solution that is not within the predetermined solution space of the model, and the process is simply a way of searching that large solution space for solutions that optimize the predefined function, i.e. finding the peaks in the landscape defined by their chosen function. Undirected processes are inherently unable to invent symbolic representation and encode information into it. 
[Here we see that, per reliable (and even routine) observation, it is intelligent agents who originate and use symbolic codes to arrive at functionally specified, complex information] Dawkins’ non-example is inappropriate hand waving not only because the program stored a target string, but also more fundamentally because it could sense relative “nearness” to meaningful text even when the strings were still gibberish. It is the insight from the special information that gave it a slope that it could climb. [In short, active information from the programmer came in the back door of the algorithm; and the issue of required functionality at every stage of a system that has to live [function] and reproduce (at least potentially) while it evolves was dodged. In WEASEL, this is tantamount to specifying that unless the string at each stage is a viable sentence, it cannot go forward, regardless of nearness to the target . . . which would at once make WEASEL fail to get off the ground.] In software, developers can build that kind of inside knowledge into the fitness function. The developer knows what s/he is looking for and tailors accordingly. In that artificial world, the preservation of closeness to desired future function need not have any demonstrable significance in the intermediate steps. A future goal is being pursued. This is exactly what Dawkins’ non-example does. It does not preserve blindly according to present value considerations only. One gibberish is just as much gibberish as another. Rather, it uses secret knowledge to preserve progress beneficial to a future function, i.e. becoming a meaningful English sentence. In the real world, the Blind Watchmaker cannot do this. _________________ So, it looks like there are a few facts to face here, RF. [And BTW, what about the substantial matters above . . . ?]kairosfocus
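Eric's "present value considerations only" point can also be sketched. Here the selector is forbidden any stored target or nearness metric and may only ask whether a string contains words that are meaningful now; the tiny six-word lexicon and all parameters are stand-in assumptions for illustration, not anyone's actual model.

```python
import random

rng = random.Random(42)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
LEXICON = {"METHINKS", "IT", "IS", "LIKE", "A", "WEASEL"}  # toy word list

def present_value(s):
    # No stored target: only whole words that already mean something
    # score anything at all; all gibberish scores identically (zero).
    return sum(w in LEXICON for w in s.split())

def blind_search(steps=10_000, length=28):
    s = "".join(rng.choice(ALPHABET) for _ in range(length))
    best = present_value(s)
    for _ in range(steps):
        i = rng.randrange(length)
        t = s[:i] + rng.choice(ALPHABET) + s[i + 1:]
        if present_value(t) >= best:   # keep non-regressing mutants only
            s, best = t, present_value(t)
    return best

print(blind_search())  # stays far below the 6-word maximum
```

Because nearly every mutant leaves the score unchanged, the walk has no slope to climb, in contrast to the stored-target version, which manufactures a gradient even over gibberish.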
April 13, 2008 at 04:19 AM PDT