Uncommon Descent Serving The Intelligent Design Community

FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the log reduction of the chi metric carried out in the recent CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 ·ϕS(T)·P(T|H)]  . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, writing D2 for ϕS(T) and K2 for log2(D2), we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2) . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
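The arithmetic behind eqns n1 to n6a is easy to sanity-check numerically. A minimal Python sketch (the function name chi and the variable names are mine, purely for illustration):

```python
import math

# Point 1 above: 10^120 ~ 2^398, since 398 * log10(2) ~ 119.8
assert round(398 * math.log10(2), 1) == 119.8

# Config-space sizes quoted in eqn n6a, checked with exact integer arithmetic
assert len(str(2**1024)) == 309 and str(2**1024).startswith("1797")  # ~1.80*10^308
assert len(str(2**1000)) == 302 and str(2**1000).startswith("1071")  # ~1.07*10^301

def chi(ip_bits, threshold=500):
    """Reduced Chi metric (eqns n5, n6): functional bits beyond a threshold."""
    return ip_bits - threshold

print(chi(1285))   # 785 bits beyond the solar-system threshold
print(chi(140))    # -360: within the threshold, i.e. reachable by chance
```

The same function serves for Chi_1000 or Chi_1024 by passing a different threshold argument.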

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent . . . . One may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
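The three results in n7 can be reproduced directly from the quoted Durston values; a short Python sketch (the dictionary layout is mine):

```python
import math

# Durston functional bits (fits) from his Table 1, as quoted above,
# in the Dembski-style metric with the threshold set at 500 bits.
proteins = {"RecA": (242, 832), "SecY": (342, 688), "Corona S2": (445, 1285)}

for name, (aa_count, fits) in proteins.items():
    capacity = aa_count * math.log2(20)   # raw storage capacity, ~4.32 bits/AA
    print(f"{name}: Chi_500 = {fits - 500} bits beyond; "
          f"raw capacity ~ {capacity:.0f} bits")
```

In each case the functional (fits) value sits below the raw 4.32 bits/AA capacity, reflecting the redundancy the paragraph above describes.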

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits,  and we compare it to a threshold of a sufficiently large configuration space, we may infer that the instance of FSCI (or more broadly CSI)  is sufficiently isolated that the accessible search resources make it maximally unlikely that its best explanation is unintelligent cause by blind chance plus mechanical necessity. Instead, the best, and empirically massively supported causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something that is in a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond that threshold.

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is, on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window

e –> What the reduced Chi metric is telling us is that if, say, we had 140 functional bits [20 ASCII characters], we would be 360 bits short of the threshold, and in principle a random walk based search could find something like that. For, while the reduced chi metric is giving us a value, it also tells us when we are falling short, and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold

f –> So, the Chi_500 metric tells us instances of this could happen by chance and trial and error testing. Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d
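The waiting times in these clips follow directly from the combinatorics: with a 27-symbol alphabet, matching a k-character prefix takes on the order of 27^k attempts, which is why each added character multiplies the run time enormously. A small Python sketch (the target phrase and names are mine, purely illustrative):

```python
import random

random.seed(1)
ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # 27 symbols: 26 letters plus space

def trials_to_match(target, max_trials=1_000_000):
    """Draw random strings until one matches target; return the trial count."""
    for trial in range(1, max_trials + 1):
        if all(random.choice(ALPHABET) == ch for ch in target):
            return trial
    return None   # not found within the budget

for k in (1, 2, 3):
    t = trials_to_match("valentine"[:k])
    print(f"{k} chars matched after {t} trials (expected ~{27**k})")
```

Extrapolating the 27^k growth to 19 or 24 matched characters reproduces the billions-of-monkey-years scale of the quoted experiments; 72 or 143 characters (500 or 1,000 bits) is then hopeless on the same arithmetic.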

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of form

G = [(s – z1)(s- z2) . . . ]/[(s – p1)(s-p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold criterion; and when poles approach the threshold from the LH half-plane, the tendency shows up in the frequency response as detectable peakiness.
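The RHP-pole test is itself easy to mechanise. A small Python sketch for a second-order denominator (the helper names are mine; a real control library such as scipy.signal would handle the general case):

```python
import cmath

def quadratic_poles(a, b, c):
    """Poles of G(s) whose denominator is a*s^2 + b*s + c."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

def is_stable(poles):
    """Stable iff every pole lies strictly in the left half of the s-plane."""
    return all(p.real < 0 for p in poles)

# s^2 + 2s + 5 has poles at -1 +/- 2j: LHP, stable (lightly damped -> peaky)
print(is_stable(quadratic_poles(1, 2, 5)))    # True
# s^2 - 2s + 5 has poles at +1 +/- 2j: RHP poles, unstable
print(is_stable(quadratic_poles(1, -2, 5)))   # False
```

The verdict is a clean threshold decision on pole location, exactly the kind of yes/no cut the paragraph above describes.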

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

E_k = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them ” . . .  is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

Y = C + I + G + [X – M].

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*V_T = P_T*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and — shudder — judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that, e.g. look at the problem of reading a meniscus! (Try it, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
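Axe's figures can be checked by direct counting: over a 27-symbol alphabet (26 letters plus the space), the number of length-42 strings within k substitutions of the target phrase is the sum of C(42, j)*26^j for j up to k. A Python sketch (function and variable names are mine):

```python
from math import comb, log10

TARGET = "no biological information by natural means"
L = len(TARGET)        # 42 characters
A = 27                 # 26 letters plus the space

def p_within(k):
    """Fraction of all length-L strings lying within k substitutions of TARGET."""
    hits = sum(comb(L, j) * (A - 1) ** j for j in range(k + 1))
    return hits / A ** L

for k in (0, 4, 5):
    print(f"<= {k} mutations: about 1 in 10^{-log10(p_within(k)):.0f}")
```

The k = 4 case comes out near 10^-49.4, which Axe rounds to one in 10^50; k = 5 lands at about one in 10^47, matching the quoted passage.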

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, providing they are warranted on empirical and analytical grounds. As has been provided for the Chi metric, in reduced form.  END

Comments
>> I suggest you look back at fig 3 in the original post [judging a meniscus, a common enough scientific measurement task] and then come back to us on whether subjectivity and objectivity are opposites. >>

The beaker provides an objective tool for measuring a liquid, but the properties of the liquid and the way it interacts with the beaker influence the degree of accuracy when taking a measurement - use a taller, narrower beaker for more accuracy. Following a correct procedure will increase the accuracy, and the correct procedure is based on an empirical understanding of the liquid's behaviour. The volume of the liquid does not change if the observer changes, but an unskilled observer will take inaccurate measurements. What is the measurement system used to measure a facial likeness? Can a face have an actual likeness in the way a liquid has an actual volume?

>> The scope of the acceptable island would be searched by simply injecting noise. This will certainly be less than 10^150 configs. [Notice, the threshold set for possible islands of function is a very objective upper limit: the number of Planck-time quantum states for the atoms of our observed cosmos.] At the same time, the net list will beyond reasonable doubt exceed 125 bytes, or, 1,000 bits. That’s an isolation of better than 1 in 10^150 of the possible configs. And it is independent of the subjectivity of any given observer. >>

You make the claim without doing the math! People often see a likeness in natural phenomena; for example, the face on Mars was hailed as a sign of design, and likenesses of religious figures are frequently observed, and claimed as design. How do you measure any of these objectively - what metric do you propose that is independent of human perception?

>> In short, your snipping exercise made up, and knocked over, a strawman. >>

It is a point that goes to the heart of the issue - how do you objectively measure function?

>> PS: The allusion you just made is in very poor taste, and twists my remarks out of context very nastily. Please, do not do the like again. >>

Your remarks were snide, bordering on uncivil. I responded with a joke. Now please provide a mathematically rigorous way to measure function in a portrait.

DrBot
May 23, 2011, 04:11 AM PDT
F/N: Onlookers, I am astonished to see Dr Bot's follow-on to MG's talking point:

>> If you can’t measure function in a mathematically rigorous, objective way then any CSI calculation is subjective (and lacks mathematical rigour) >>
Pardon, but have you ever had to get the right car-part or your car will not start? We here subjectively observe an objective situation, one that can be recognised in a mathematical model by a threshold variable, FS = 1/0.

Similarly, while there are many contextually responsive possible answers to Dr Bot's argument -- notice, I here have composed a second answer that responds to his claim, which is different, but is likewise in contextually responsive English text coded using ASCII -- there is a sharp, observable, objective and quantifiable difference between the FSCI of text in English and random typing or endless repetition of a single letter. That is, there is no question of functional specificity being in all cases "merely" subjective and so not objective or measurable.

Likewise, we can see that Dr Bot's dodge to a red herring on the way a meniscus is read -- note the original actually comes from a pharmacology context, i.e. life and health are at stake in the routine use of the technique -- shows how objectivity and subjectivity are in fact inter-related, and how the subjective involvement can be quantified. (That is, we can assess when a volume is read correctly or incorrectly by inspecting a meniscus, just as we do much the same for how a tape measure is used, by tailor or by carpenter. If the measurements are wrong, the clothes or the furniture will not work right.)

So, function can be measured, it can be measured objectively -- think of metrics on the performance of software for a further instance -- and it can be measured quantitatively, with sufficient consistency to be relied on in serious contexts.

It seems clear too that Dr Bot needs to read 34 - 35 above, especially the part that discusses mathematical models and metrics. Let me again clip Wiki in that context (noting that the RION scales issue also needs to be followed up) as cited in point 10, this being a confession against interest:
A mathematical model is a description of a system using mathematical language. The process of developing a mathematical model is termed mathematical modelling . . . A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. The values of the variables can be practically anything; real or integer numbers, boolean values or strings, for example. The variables represent some properties of the system, for example, measured system outputs often in the form of signals, timing data, counters, and event occurrence (yes/no). The actual model is the set of functions that describe the relations between the different variables.
See why -- as someone who has worked with mathematical models for decades -- I am astonished to see the rhetorical pretence that valid models based on reasonable practice are now suddenly suspect of being not rigorous enough? Wiki's remarks on rigour are also worth clipping from the excerpt at point 16:
An attempted short definition of intellectual rigour might be that no suspicion of double standard be allowed: uniform principles should be applied. This is a test of consistency . . . . Mathematical rigour is often cited as a kind of gold standard for mathematical proof. It has a history traced back to Greek mathematics, in the work of Euclid. This refers to the axiomatic method . . . . Most mathematical arguments are presented as prototypes of formally rigorous proofs. The reason often cited for this is that completely rigorous proofs, which tend to be longer and more unwieldy, may obscure what is being demonstrated. Steps which are obvious to a human mind may have fairly long formal derivations from the axioms. Under this argument, there is a trade-off between rigour and comprehension. Some argue [obviously on the other side] that the use of formal languages to institute complete mathematical rigour might make theories which are commonly disputed or misinterpreted completely unambiguous by revealing flaws in reasoning.
In fact, the weight of practice is on the side that one formalises to the extent required to be clear, factually adequate and intelligible in steps taken.

The reduced Chi metric starts from the most commonly used mathematical metric for information, then addresses the specificity issue by confining it to zones of interest, T -- the specification. The log reduction of the equation Dembski proposed in 2005 then shows that the issue is degree of isolation in a config space. And, for our solar system -- the cosmos we practically live in -- the 500-bit threshold (a space of 10^150 configs) is more than enough. If you think that is not stringent enough, 1,000 bits swamps the search resources of the observed cosmos, as we saw in the case of the Lincoln statue just above. Which should have sufficed to show, on a specific indicative example, how such cases can be set within a most definite objective threshold.

If something is specific, on observed effects of injected randomness beyond a certain point, or on using a code or implementing an algorithm etc, then we have good reason to infer it is in an island of function. If something is so complex that the search resources of our solar system or the observed cosmos would be insufficient to have a random walk and trial and error algorithm credibly work -- the needle in the haystack problem -- then it is reasonable to infer that the FSCI in it has the directly known, routinely observed cause of such FSCI, intelligence, as its best explanation. That is, FSCI is a well tested and credible sign of intelligence.

The best answer to such is to find a counter-example. ev crashes in flames, along with a host of other suggested counter-examples ranging all the way out to the infamous Mars canals. Honest and serious tests on random text generation run up to a space of about 1 in 10^50 being searchable, which is similar to the limit suggested by Borel decades ago for the lab scale.
So, we have excellent reason to see that FSCI, in contexts where we are dealing with 500 to 1,000 bits at the lower end, is enough to make the inference to design on FSCI a best current explanation. Which is the degree of warrant -- notice, warrant is the relevant term, not "rigour" -- suitable for a scientific claim. At this point the burden of proof is actually in the hands of the objectors, and plainly they cannot meet it. So they are resorting to demanding that an empirical inference meet criteria that not even most mathematical arguments can. Which they know or should know. Selective hyperskepticism leading to reductio ad absurdum, again and again. GEM of TKI

kairosfocus
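The "sharp, observable, objective and quantifiable difference" between English text, random typing, and endless repetition of a single letter that this comment appeals to can be illustrated crudely with a compressor: the three cases separate cleanly by compressed size. A rough Python sketch (not a CSI metric, just an objectivity illustration; the sample strings are mine):

```python
import random
import zlib

random.seed(0)
N = 2000
english = ("functionally specific text in english is contextually responsive "
           "and uses its alphabet in highly constrained ways ") * 18
english = english[:N]
typing = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(N))
repeat = "a" * N

for label, s in [("english", english), ("random typing", typing), ("repeat", repeat)]:
    print(f"{label}: {len(zlib.compress(s.encode(), 9))} bytes compressed")
```

The ordering (repetition smallest, random typing largest, English in between) is stable for any reasonable samples; the repeated passage compresses extra well because of the repetition itself, but even unrepeated prose lands well below random typing.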
May 23, 2011, 04:08 AM PDT
Dr Bot: You are snipping and making up a strawman:
>> If you can’t measure function in a mathematically rigorous, objective way then any CSI calculation is subjective (and lacks mathematical rigour). >>
I suggest you look back at fig 3 in the original post [judging a meniscus, a common enough scientific measurement task] and then come back to us on whether subjectivity and objectivity are opposites. Then, you can look at the issue of forming a threshold of judgement when a statue's features lose recognisability. But more to the point, you know or should know from what was already said in 126 that:
>> The event, E, is a particular statue of Lincoln, say. The zone of interest or island of function, T, is the set of sufficiently acceptable realistic portraits. The related config space would be any configuration of a rock face. The nodes and arcs structure would reduce to a structured set of strings, a net list. This is very familiar from 3-d modelling (and BTW, Blender is an excellent free tool for this; you might want to start with Suzie). Tedious, but doable — in fact many 3-d models are hand carved then scanned as a 3-d mesh, then reduced — there is a data overload problem — and “skinned.” (The already linked has an onward link on this.) The scope of the acceptable island would be searched by simply injecting noise. This will certainly be less than 10^150 configs. [Notice, the threshold set for possible islands of function is a very objective upper limit: the number of Planck-time quantum states for the atoms of our observed cosmos.] At the same time, the net list will beyond reasonable doubt exceed 125 bytes, or, 1,000 bits. That’s an isolation of better than 1 in 10^150 of the possible configs. And it is independent of the subjectivity of any given observer. ["The engines are on fire, sir! WE'RE GOING DOWN . . . "] >>
In short, your snipping exercise made up, and knocked over, a strawman. GEM of TKI

PS: The allusion you just made is in very poor taste, and twists my remarks out of context very nastily. Please, do not do the like again.

kairosfocus
May 23, 2011, 03:31 AM PDT
KF: I've been away, hence my lack of response:

>> The event, E, is a particular statue of Lincoln, say. The zone of interest or island of function, T, is the set of sufficiently acceptable realistic portraits. >>

Sufficiently acceptable? What metric are you using, and how is it defined mathematically?

>> And it is independent of the subjectivity of any given observer. ... the function is sculptural resemblance. >>

? If the function is sculptural resemblance then it requires a person to judge the resemblance. This is subjective. The amount of CSI cannot be calculated precisely because it will depend on the observer; there is no metric to measure a facial likeness in absolute terms, as is possible with volume or force.

>> recognisability as a portrait of a specific individual is subjective but that is not as opposed to being objective. >>

? So it is subjective, but that doesn't mean it isn't objective? Quasi-objective, perhaps?

>> Now, we address the red herring led away to the strawman: why is “function” a question of MATHEMATICAL “rigour”? >>

If you can't measure function in a mathematically rigorous, objective way then any CSI calculation is subjective (and lacks mathematical rigour).

>> To see what I mean, is VOLUME of a liquid an objective thing? >>

The volume of a liquid can be measured to a degree of precision that depends on the measurement apparatus. Volume does not vary if the person doing the measurement is blind, or from China.

>> The objection is misdirected, and based on a conceptual error, probably one driven by insufficient experience with real world lab or field measurements. >>

I have plenty of practical experience, both in real world measurement and in the design of precise measurement equipment; more than you, I suspect. You should consider the fact that an objective measure of function may not be possible for 'a facial likeness', but that does not mean one cannot be found for something else. It may simply be that this particular example is not a good example of CSI, because of the inherent subjectivity in the way function has to be measured for a facial likeness.

>> PS: “We’re going downnnnn . . . !” >>

Not on me you're not ;)

DrBot
May 23, 2011, 03:08 AM PDT
F/N 2: Reminder, the rigour question is addressed most directly at 34 - 35 above. If MG is serious about her claim, she will respond to that, which has been drawn to her attention repeatedly, and has been ignored to date; at least once by a clever rhetorical tactic of talking about reading on from her comment at (was it?) 60 above, when she knew or should have known from links that the main response was in 34 - 35, and a rebuttal to a clip of her main argument was in 23 - 24. Let us see if MG will at length actually address a matter on the merits.

kairosfocus
May 23, 2011, 01:48 AM PDT
F/N: Let's clip out a bit of Mung's dissection from 126 and 182 in the CSI Newsflash thread: ____________ Mung, 126: >> So let’s take a closer look at Schneider’s Horse Race [the link is there in the original thread] page and do a little quote mining. A 25 bit site is more information than needed to find one site in all of E. coli (4.7 million base pairs). So it’s better to have fewer bits per site and more sites. How about 60 sites of 10 bits each? Tweak. We are sweating towards the first finishing line at 9000 generations … will it make it under 10,000? 1 mistake to go … nope. It took to about 12679 generations. Revise the parameters: Tweak. It’s having a hard time. Mistakes get down to about 61 and then go up again. Mutation rate is too high. Set it to 3 per generation. Tweak. Still having a hard time. Mistakes get down to about 50 and then go up again. Mutation rate is too high. Set it to 1 per generation. Tweak. 3 sites to go, 26,300 generations, Rsequence is now at 4.2 bits!! So we have 4.2 bits × 128 sites = 537 bits. We’ve beaten the so-called “Universal Probability Bound” in an afternoon using natural selection! And just a tad bit of intelligent intervention. Dembski’s so-called “Universal Probability Bound” was beaten in an afternoon using natural selection! And a completely blind, purposeless, unguided, non-teleological computer program! Does Schneider even understand the UPB? Does he think it means that an event that improbable can just simply never happen? Evj 1.25 limits me to genomes of 4096. But that makes a lot of empty space where mutations won’t help. So let’s make the site width as big as possible to capture the mutations. … no that takes too long to run. Make the site width back to 6 and max out the number of sites at 200. Tweak. The probability of obtaining an 871 bit pattern from random mutation (without selection of course) is 10-262, which beats Dembski’s protein calculation of 10-234 by 28 orders of magnitude. 
This was done in perhaps an hour of computation with around 100,000 generations. HUH? With or without selection? It took a little while to pick parameters that give enough information to beat the bound, and some time was wasted with mutation rates so high that the system could not evolve. But after that it was a piece of cake. You don’t say. MathGrrl @105 There is no target and nothing limits changes in the simulation. There aare both targets and limits.>> 182: >> Again, in Schneider’s own words: Repressors, polymerases, ribosomes and other macromolecules bind to specific nucleic acid sequences. They can find a binding site only if the sequence has a recognizable pattern. We define a measure of the information (Rsequence) in the sequence patterns at binding sites. The Information Content of Binding Sites on Nucleotide Sequences Recognizer a macromolecule which locates specific sites on nucleic acids. [includes repressors, activators, polymerases and ribosomes] We present here a method for evaluating the information content of sites recognized by one kind of macromolecule. No targets? These measurements show that there is a subtle connection between the pattern at binding sites and the size of the genome and number of sites. …the number of sites is approximately fixed by the physiological functions that have to be controlled by the recognizer. Then we need to specify a set of locations that a recognizer protein has to bind to. That fixes the number of sites, again as in nature. We need to code the recognizer into the genome so that it can co-evolve with the binding sites. Then we need to apply random mutations and selection for finding the sites and against finding non-sites. INTRODUCTION So earlier in this thread I accused MathGrrl of not having actually read the papers she cites. I think the case has sufficiently been made that that is in fact a real possibility. I suppose it’s also possible that she reads but doesn’t understand. 
MathGrrl, having dispensed with the question of targets in ev, can we now move on to the question of CSI in ev? >> _________________ The emphases, blocks and links are of course there in the original. The thread has much more.

kairosfocus
May 23, 2011, 01:39 AM PDT
CY: Thank you. MG needs to address some serious matters on the merits, instead of simply repeating long since cogently responded to talking points over and over again. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) On CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 35 above. Similarly, the talking points MG tends to use over and over, as though they have not been cogently answered, were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI Newsflash thread; which MG has persistently ignored. Mung's remarks and clips on MG's tactics, in 117 - 120 in the CSI Newsflash thread, in this light, are a telling corroboration. MG knows or should know better than she has acted. Sadly revealing. GEM of TKI

kairosfocus
May 23, 2011, 01:22 AM PDT
EZ: When I was able to get through to actual pages in Mr Camping's web site, it turns out that he is actually broad-brush writing off the church. So, it is on the face of it unfair to use his folly in a credibility kill attempt against the church, which is exactly what he was set up for -- observing, too, how the same media tip-toes ever so carefully around issues tied to Islam: but then mebbe the attitude is that enraged Muslims KILL, Christians at most will protest . . . But then, ever so much of the media have lost -- did they ever have? -- any sense of duty of care to truth, balance and fairness. You will see that I think the failures of Mr Camping and co were fundamentally those of organisational governance. For, if there were proper ac countability to stakeholders and to genuine expertise in a panel, the sort of blunders indulged in would not have happened. He needs to publicly apologise, including to the church and its leadership that he has broad-brush dismissed. Then he needs to get his message straightened out, equally publicly. I think you are right that we have seen folly like this before, but it is not a peculiarity of the religious, it is a mark of unaccountable autocracy with a mike, or of an unaccountable elite with a mike. Lord Acton was right: power tends to corrupt, absolute -- unaccountable -- power corrupts absolutely, great men are bad men. Including, for those who have imposed evolutionary materialist censorship on origins science, including trying to radically redefine science in ideological ways that fetter it from being able to freely seek the truth about our world in light of empirical evidence. And, as for even the BBC, I am afraid they, too, have slipped far from their former greatness. I have seen or heard far too many one sided accounts, party-line ideological promos and willful omissions from the BBC to trust it anymore. (The BBC's performance in response to the climategate revelations alone suffice to underscore the point. 
Failure to give us an accurate picture of the history of Islamic expansionism, eschatological Mahdism and its underlying ideology over 1400 years, during the ten years since 9/11, nails it hard home. To see what I mean, try out: what are the black flag armies and Khorasan about? What event's 318th anniversary was September 11, 2001 the eve of? And on the longer running ME dispute, what is the significance of January 1919, London, and the names Chaim and Faisal? In the context of both of these, what is a Gharqad tree and what is its Mahdist eschatological significance? What are hadiths? What is the historical allusion of the Islamist chant "Khaybar, Khaybar . . . " and how does this relate to, say, events of last May on a certain boat off the coast of Israel? [Without sound answers to such, we do not understand things that dominate our headlines, and BBC's leading voices know or should know better. As for BBC's vaunted appeals process, I have personal experience of its fox judging the fox failures. And, we could go on and on.]) Ah, well . . . GEM of TKI
kairosfocus
May 23, 2011 at 01:01 AM PDT
KF: I'm afraid Mr Camping was just the latest goofy religious leader that was paraded in front of the world for entertainment purposes. To be fair, he did start the publicity himself. He wanted the word to be spread, all over the world. Hundreds of people in Vietnam were waiting for the Rapture. As you pointed out, he was wrong before and you'd think he'd be a bit more humble about his personal interpretations of Holy Scripture when he was clearly in a very, very small minority. I think he really did believe he was correct and I suspect, if he's honest, that he is examining his precepts. I hope so anyway, for his own sake. The media . . . . sigh . . . it's not about informing and educating the public anymore. Or investigating important issues. It's about entertainment, more and more like facebook and youtube every day. I live in England and am so lucky having the BBC to hand. Sadly, I expect this kind of thing will happen again and the fear of being labelled a weirdo will stop some sincere and honest folks from speaking their mind.
ellazimm
May 22, 2011 at 10:33 PM PDT
Mung, KF, MG, MF, others, I have been quite the onlooker in these threads over the last couple of months, and I have to say, you're (KF and Mung) both doing a fine job. I've had several "aha" moments from these exchanges. I really want to address one thing to MathGrrl: What are your criteria for determining that the ev program does not involve a targeted search? I think this is really key to one of the main disagreements here. So far I've only seen you assert that it does not, but I haven't seen you engage any of the arguments presented by either Mung, with his careful analysis of the ev program in several posts on another thread, or with KF's very reasoned argument for the quantification of CSI. I think your demands are unreasonable given the careful arguments here, which you have not apparently engaged - no - merely asserting the same rhetorical talking point denials will not get you very far. Well it might on Mark's "echo chamber," but do you honestly care about that? On a semi-related matter: I've also read many of the comments on Mark's blog regarding UD's moderation policy. I find it quite amusing that many of the complainers there were able to post at UD for several years before being moderated, which says a lot about the tolerance level of the moderators here. Several of the names mentioned who are now in moderation were posting here for quite some time - years, in fact. I think the reality is that they continued to ramble on the same talking points as you seem to be doing here without much interaction with the points already made ad nauseam, and which are addressed in the "Frequently Raised Arguments" brief at the right side and top of every page. I think the reason you've been allowed to go beyond the fray is not because you've asked a question that hasn't already been asked here, but because you've for the most part engaged yourself civilly. I sense that civility is waning with your recent repetitions.
Repetition can be uncivil when it doesn't respect the fact that a question was answered with careful patience and knowledge-based insight. They've even gone so far as to give you your own guest post. This hardly squares with the gross misjudgment of UD going on at Mark's blog. Mark: I have to say, though, Mark, that you've been fair to us to a point, and it appears to have gotten you into a little hot water even at your own blog - I'm referring to recent comments from someone who's decided to leave your blog because of your mild dismissal of some of the complaints towards UD's policy. I also noticed how summarily your readers dismiss ID writings, such as Stephen Meyer's SITC, which is perhaps why you're remaining quite detached from the book while accepting a free copy. I don't know, but that's how it appears. I get the impression that your blog is slightly more opinion-controlling than the moderation here at UD. You seem almost afraid to admit that you're reading ID material apart from a cursory glance in order to dismiss it. So with that in mind, I have to ask MG: is that what you're really afraid of? If you engage in the arguments from KF, Mung and others here, you'll get into hot water on Mark's blog? You are, after all, somewhat of a hero there at the moment. In my estimation, being a hero at the expense of understanding a crucial and pointed argument is hardly worth any notoriety you might gain from it. Even if you end up disagreeing with KF after carefully considering all of his points, I think going further with the fine points of his argument will increase your integrity many-fold. Consider it. I think most of us can agree that your rhetorical talking points are getting a little tiresome. I think a good place to start might be to ask Mark's readers if they have anything to contribute to the question of the ev program and whether or not it involves a targeted search.
A little change of subject there is warranted now, given the persistence in merely complaining about UD. I would also suggest that you familiarize yourself with a number of threads we had here a few years back with regard to Dawkins' Weasel program. I think starting there will allow you to see from a more elementary level how these programs are set up by designers themselves in order to demonstrate random chance searches, which is a bit like giving typewriters to monkeys to demonstrate that they can type. Well, they can if you make them, but what's the point? You can start here: https://uncommondescent.com/evolution/dawkins-weasel-proximity-search-with-or-without-locking/ Let's have a real discussion here. How about having Mark's readers actually read Meyer's book rather than panning it? Let's have a discussion of Chapter 13, and then let's really get into the nitty-gritty of NFL. I'd really like to see that discussion. Right now, sad to say, I'm getting bored.
CannuckianYankee
May 22, 2011 at 06:44 PM PDT
F/N: Onlookers, in 34 - 5, I addressed the issue of rigour in the context of mathematical models and metrics of phenomena that are fundamentally empirical. In 23 - 4, I clipped one of MG's many repetitions of her claims and answered point by point. If you scroll through the next ten days of comments, you will see that at no point does MG actually respond cogently on the merits. Instead, she simply repeats her drumbeat strawmannised false assertions. So, it is entirely reasonable to call her to answer the issues raised, and to hold that unless she does so cogently, she knows or should know that she has no case but finds it rhetorically effective to repeat false assertions and caricatures endlessly.
kairosfocus
May 22, 2011 at 04:19 PM PDT
PS: Onlookers, simply scroll up (or click up) to 23 - 24 above and 34 - 35 above to see why MG's rhetorical drumbeat repetition that there is no "mathematically rigorous" definition of CSI is an empty talking point -- but then if you want to make a noisy drum it has to be hollow inside. The two links basically summarise the corrections MG has received for over two months now and has just now again refused to engage on the merits, preferring to yet again repeat an ill-founded and patently red herring led on to strawman claim over and over again as though that would make it true. By now, sadly, she knows or should know that her claim is ill-founded, and so the repetition is irresponsible to the point of being willfully deceptive.
kairosfocus
May 22, 2011 at 03:45 PM PDT
MG: You are still refusing to engage facts, e.g. you were given highly specific links in this thread above that you plainly have not engaged. That is not serious behaviour. Similarly, Mung is not speculating, he gave citations from the text by Schneider (which we can all follow up), and I was able to confirm some of the key points through my own clips from Schneider. His step response graph, to one who has had to analyse closed loop controller behaviour, was utterly and inadvertently diagnostic. Mr Schneider's attempt to correct Dembski's use of probably the most common plain vanilla quantification of information was the most striking point for me. Regardless of his paper qualifications, if he does not know enough to know that Dembski was using common and accepted usage, he is so ill informed on the subject as to not be credible. Period. Your onward behaviour of again repeating long since cogently answered talking points and making pretence that they have not been adequately answered removes you from the list of serious participants in a discussion. And, you have yet to explain yourself on some very serious matters and insinuations you have made, as also pointed out. Good day, madam. GEM of TKI
kairosfocus
May 22, 2011 at 03:18 PM PDT
kairosfocus et al., I believe we have reached the point of significantly diminishing returns with respect to the discussion of CSI in this thread. I will continue to monitor it, but unless you or someone else provides a rigorous mathematical definition of CSI, as described by Dembski, and detailed example calculations for one or more of the four scenarios described in my guest thread, I'm not going to continue to point out that the Emperor is nude. Any of the objective "onlookers" addressed so often by you have sufficient information available to draw their own conclusions. While I have little hope that it will happen in this particular thread, I do suspect that this topic will arise in the future here at UD and I look forward to engaging in the discussion with you then.
MathGrrl
May 22, 2011 at 12:56 PM PDT
kairosfocus,
In the case of the digitally coded FSCI [dFSCI] in the living cell, complex codes, algorithms, and code strings have only one known capable causal force, intelligence.
Unless and until you provide a rigorous mathematical definition of CSI, as described by Dembski, and demonstrate how to objectively calculate it for some real world scenarios, you cannot make this claim. Without a definition, your terms are literally meaningless. Without an objective calculation, there is no way to test your assertion that intelligent agency is involved. Continuing to repeat unfounded claims after repeatedly demonstrating that you are unable to support them is unconvincing, at best.
MathGrrl
May 22, 2011 at 12:54 PM PDT
Joseph,
Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target.
My claim has been supported - by Mung - who proved ev is a targeted search - as did Marks and Dembski.
None of your sources provided any such proof. I have already provided two links to comments where I address your points: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 I invite you, along with kairosfocus, to read the source material of the ev paper and Schneider's PhD thesis for yourself. Please show me any support for your claims in either of those documents or in the ev program source code.
IOW you are either lying or just plain ignorant.
*sniff* I love the smell of civility in the morning!
MathGrrl
May 22, 2011 at 12:54 PM PDT
kairosfocus,
Prominent on this — right there in the opening paragraph of the comment — is Mung’s summary dissection of ev at comment 180, which DOES reveal beyond any reasonable doubt — from the horse’s mouth (cf his snippets at 182 and some of his initial examination of the Schneider horse race page from 126 on . . . ) — that it is in fact targeted search, though the target — the string(s) to be matched — are allowed to move around a bit.
Mung's summary of ev is inaccurate and his claim that it is a targeted search is incorrect, for the reasons I provide in these two comments: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858
In addition, ev uses in effect a Hamming distance to target metric in selecting the next generation.
That is absolutely incorrect. I strongly suggest you read the source material, namely the ev paper and Schneider's PhD thesis, for yourself. If you still believe that ev models a targeted search, please explain why you think so, with reference to the ev paper and the program source code, and we can no doubt have an interesting discussion.
MathGrrl
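For onlookers unfamiliar with the term the two sides are arguing over: a Hamming distance is simply the count of positions at which two equal-length strings differ. Whether ev's selection rule amounts "in effect" to ranking by such a distance is exactly the point contested above; the sketch below only illustrates the metric itself and a toy selection rule built on it, with made-up strings, and settles nothing about what ev actually does:

```python
def hamming_distance(a, b):
    # Number of positions at which two equal-length strings differ
    if len(a) != len(b):
        raise ValueError("strings must be the same length")
    return sum(x != y for x, y in zip(a, b))

# A selection rule that is "in effect a Hamming distance to target"
# would rank candidates by closeness to a fixed target string:
target = "GATTACA"
candidates = ["GATTACC", "CATTACG", "GGGGGGG"]
best = min(candidates, key=lambda s: hamming_distance(s, target))
print(best)  # "GATTACC" (distance 1)
```

The dispute in the thread is over whether ev's per-generation scoring reduces to something like the `min(...)` line above, or whether no such fixed `target` exists in the program.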
May 22, 2011 at 12:53 PM PDT
Chris Doyle,
I would still appreciate a response to the bacteria comment I made six months ago, it was not a soliloquy, it was a direct challenge to your claims about evolution in bacteria.
The only claim I made was that one of the participants in the conversation would benefit from reading some textbooks and peer reviewed papers on bacterial evolution. I recommend the same to you.
I think it made uncomfortable reading for you and you don’t know how to respond to it. Am I right?
I'm sure you would like to think that, but I made my position very clear both in that thread and in my response to you above. My interest here is in understanding the positive evidence for ID, CSI in particular, to a level of detail that will allow me to test the claims of ID proponents myself. I have neither the time nor inclination to bring you up to speed on basic biology.
Ultimately, the record is here for all to see whether or not your questions have been answered by kariosfocus. I for one think they have been.
Your personal beliefs are irrelevant. The record shows that no ID proponent has provided a rigorous mathematical definition of CSI as described by Dembski, nor has any ID proponent used such a definition to calculate CSI for the four scenarios I described. If someone had, you'd be referencing it in your response rather than simply sharing your thoughts.
I won’t be returning to a blog where people like “The Whole Truth” can make comments like that with the active support of people like “Toronto” and the passive support of all the other banned evolutionists.
How convenient that one easily ignored participant on Mark Frank's blog can prevent you from returning to support the insulting and baseless claims that you made there. Fortunately, Mark doesn't remove comments, so anyone interested in your personal standard of online courtesy will find it easily.
MathGrrl
May 22, 2011 at 12:52 PM PDT
kf - You managed to use over 1400 words to fail to answer a question I was asking Joseph. The nearest you get is "we may describe and define a specification, T, that gives us the requirements to fit in the island of meaningful function within the wider space of possible but overwhelmingly non-functional configs." But how do we "describe and define a specification, T"? What properties must it have? This is what I'm not seeing.
Heinrich
May 22, 2011 at 05:03 AM PDT
H: I see your:
Perhaps we should concentrate on the “meaning/function” part – how is that formally specified? I’m not sure how the everyday use of “information” can be formalised to be of use here – can you explain?
1 --> As has been pointed out over and over in response to the underlying talking point, meaning, function and information are first and foremost terms and concepts describing facts of experience. So, definitional statements and mathematical models, variables and associated metrics have to adequately, coherently and simply (but not simplistically) answer to that experience. 2 --> You are currently having the experience of reading this post, which is an instance of functional, coded information in English that responds to a particular context. It is functionally specific and complex information, by contrast with the gibberish in a bit of random typing like this:jfgwegjgegh. (And, already, this is an ostensive definition that points out an example and a counter-example to specify meaning by facts understood by us, judging, experiencing, knowing semiotic agents. Indeed, without that subjectivity of the conscious mind, there would be no knowledge of facts.) 3 --> To try to pretend that in absence of "formal" -- i.e. precising per necessary and sufficient statement and/or genus and difference and/or especially quantitative -- definition, such are meaningless or dismissible, is an example of self-refuting selective hyperskepticism. You are forced to rely on the meaningfulness of FSCI, to try to object to it. Reductio ad absurdum. 4 --> But also, pardon a direct comment: if you had troubled yourself enough to scroll up and look at the UD short glossary, accessible this and every UD page, top right, under "Information" you would find this telling admission against interest scooped from Wikipedia:
Information — Wikipedia, with some reorganization, is apt: “ . . that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message . . . . In terms of data, it can be defined as a collection of facts [i.e. as represented or sensed in some format] from which conclusions may be drawn [and on which decisions and actions may be taken].”
5 --> This describes the concept of information. In terms of how we usually measure it, we use the fact that it can usually be reduced to symbol elements (even spoken words are built up from phonemes), which have frequency distributions that can be observed, so information -- on a suggestion by Hartley over 80 years ago -- is quantified on symbol frequencies interpreted as probabilities (Dembski in NFL is right, and Schneider's attempt to "correct" him by substituting a rarer synonym, "surprisal," is wrong -- cf my now frequently repeated cite from Taub and Schilling and the discussion in my always linked here); for message element mk: Ik = - log pk, in bits. 6 --> Onward, Shannon developed a measure of average information quantity per symbol (aka entropy, aka uncertainty) across a set of symbols, i, H: H = - [SUM on i] pi log pi (In bits if the log base is 2. This is what is often called "Shannon information," especially in the context of the carrying capacity of a channel of a given bandwidth and in the face of a set signal-to-noise ratio with Gaussian white noise, such as a modelled telephone line.) 7 --> But of course this quantification so far does not address the meaningfulness or function, which last are at the heart of why information is important. To do this next step, we first note that functional info strings are meaningful and are aperiodic but not at-random, and are not forced into repetitive patterns like how a crystal's unit cell is endlessly repeated in a crystalline body. 8 --> That is why, in 1973, Orgel wrote -- in the decade after DNA had been initially decoded:
. . . In brief, living organisms are distinguished by their specified complexity. Crystals are usually taken as the prototypes of simple well-specified structures, because they consist of a very large number of identical molecules packed together in a uniform way. Lumps of granite or random mixtures of polymers are examples of structures that are complex but not specified. The crystals fail to qualify as living because they lack complexity; the mixtures of polymers fail to qualify because they lack specificity. [The Origins of Life (John Wiley, 1973), p. 189.]
9 --> What Dembski did was to identify how this could be used to develop a model and metric of complex specified information. (Cf the original post for the CSI newsflash discussion to which this thread is a footnote.) 10 --> In particular, we note that we may observe informational events E1, E2, etc that will carry out the same function and carry the same essential meaning. 11 --> However, not just any arbitrary configuration of symbols will do, only certain ones that follow certain rules and bear a certain content of meaning will do the required job, and so we may describe and define a specification, T, that gives us the requirements to fit in the island of meaningful function within the wider space of possible but overwhelmingly non-functional configs. 12 --> So, we have a non-arbitrary delimiter, T, that specifies an island of function in the wider config space. Not just any arbitrary sequence of ASCII characters will fit into the context of this thread of discussion and make sense in English. Overwhelmingly, most at-random clusters of such characters of the same length would be gibberish. Lucky noise is not a credible source of a message, and indeed the very concept signal to noise ratio rests on the understanding that signals and noise have different and distinguishable characteristics. 13 --> And so we see the key insight: once the length of the string is big enough, where essentially any informational entity can be reduced to structured sets of strings (so this is without loss of generality), it is maximally unlikely to arrive at such a string by a random walk rewarded through a trial and error or hill-climbing algorithm that depends on function [which is an observable and can be quantified by a dummy variable: if it works, 1, if not, 0: pass/fail . . . once that new part is in, does your car engine start? If no, back to the drawing board . . . ] not mere proximity to a target in config space. 
14 --> Thus we see the quantification by doing a log reduction of the Chi metric: Chi_500 = Ip * [fs] - 500, functionally specific bits beyond a complexity threshold, where FS is the dummy variable on observed function: 1/0. 15 --> Is this a "mathematically rigorous definition" so beloved of MG in her dismissive talking points? 16 --> The key problem there -- as was pointed out above 23 - 4 and 34 - 5 -- is that not even mathematical proofs are usually rigorous in that sense. We are dealing with real world modelling and metrics, which respond to empirical realities and allow us to reason and analyse them, here using concepts closely parallel to those that are at the foundation of the second law of thermodynamics. Special configs in a large enough config space are going to be unobservable on chance plus necessity. 17 --> Such FSCI is however quite common, and it is routinely observed -- b[TR]illions of test cases, growing by the millions per week -- to be the product of intelligence, which is composing meaningful strings based on knowledge and intent. Like this post. 18 --> As this thread's original post shows, the Dembski type metric in log reduced is demonstrably amenable to real world biological cases. We also see cases where it correctly shows how things within that threshold can be originated by chance, as the OP's clip on random text generation from a config space of about 10^50 elements shows. 19 --> So, the objections are specious. 20 --> The real problem is not that the metric is not sufficiently meaningful, but that it carries an unwelcome message: the extremely complex and functionally specific information in DNA is far, far beyond the threshold where we may confidently infer intelligent design. 
21 --> If you don't like the message, the proper way to address it is obvious: show, by observed cases, how chance and necessity without intelligent direction -- and Mung has shredded ev etc as claimed cases of this -- can create FSCI beyond the threshold, at least the solar system threshold of 500 bits and preferably the observed universe threshold of 1,000 bits. 22 --> Almost needless to say, the reason why such red herring and strawman tactics as the "rigorous" talking point are being resorted to is plain: there are no such cases. 23 --> In short, the empirical evidence is that the reduced chi metric works as advertised when it is used in a design detecting explanatory filter. 24 --> So, the real and unmet challenge is there for evolutionary materialism advocates: show on empirical evidence (not misleading simulations) that a metabolising, von Neumann self-replicating physical entity like the living cell, can and does arise spontaneously by undirected forces of chance and mechanical necessity in a plausible initial environment. ___________ In short, it is time to stop dragging red herrings out to convenient strawmen, and show that your claimed life origin and body plan diversity origin story is grounded on observed evidence. Design thinkers are able to say that we know already that FSCI is routinely created by intelligence (indeed, in our observation, it has only been so created), and that Venter et al have shown that the intelligent creation of a living cell is possible, though we have not gone all the way yet. So, on inference to best explanation . . . GEM of TKI
kairosfocus
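For concreteness, the quantities in points 5, 6 and 14 above can be sketched in a few lines of Python. This is only an illustration of the arithmetic: the flat per-symbol probabilities and the string length are made-up numbers, chosen so that the 500-bit threshold is just crossed, not data from any biological case.

```python
import math

def self_information_bits(probs):
    # Point 5 (Hartley): I = -log2(p) per symbol, summed over the string
    return sum(-math.log2(p) for p in probs)

def shannon_entropy_bits(dist):
    # Point 6 (Shannon): H = -SUM pi * log2(pi), average info per symbol
    return -sum(p * math.log2(p) for p in dist if p > 0)

def chi_500(probs, functional):
    # Point 14: Chi_500 = Ip * FS - 500, with FS the 1/0 dummy variable
    # on observed function (if it works, 1; if not, 0)
    fs = 1 if functional else 0
    return self_information_bits(probs) * fs - 500

# Illustrative case: a 251-symbol string over a flat 4-symbol alphabet,
# so Ip = 251 * 2 = 502 bits.
print(shannon_entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits/symbol
print(chi_500([0.25] * 251, functional=True))          # 2.0: just past the threshold
print(chi_500([0.25] * 251, functional=False))         # -500: no observed function
```

Note that the 502-bit string only scores above zero because function is observed (FS = 1); the same string with no observed function scores -500, which is the pass/fail behaviour point 14 describes.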
May 22, 2011 at 04:23 AM PDT
EZ: As expected, it is Sunday and we are all here -- save of course those who moved on as individuals overnight. I find it astonishing (or, maybe, telling) that the same major media entities that spend so much time and effort repeating over and over that extremists like OBL are fringe relative to Islam, are so often willing to let the impression be created that a Mr Camping or the like, are typical of the Christian faith or of Christians who take the scriptures seriously. Indeed, the sensationalised coverage over the past few days -- I had never heard of this man before -- sounds to me like a credibility kill attempt: set up someone, make him seem to be a leading figure, knock him over, spread guilt by invidious association. In fact, Mr Camping is demonstrably in error -- easily known before the fact as I pointed out Friday afternoon above -- and is a fringe figure. He seems to have a radio enterprise, and to have amassed a fortune to back it. He is also a long since retired civil engineer (he is 89 years of age, from what I see) essaying into theological waters and using principles of interpretation that are known to be unsound at even basic Bible Study level. E.g. when I wrote a basic Bible study guide 25 years ago, I cautioned that one should not go looking for esoteric "hidden" meanings in the text where the text has a plain and natural sense that makes good sense. And yet, that is exactly what he did, by coming up with some idiosyncratic date for Noah's flood, then taking a text on how the eternal God is patient beyond human understanding (a day is like a 1,000 years in his sight . . . ) and taking a reference to seven days to go to the flood, plugging in the idea that 1 day --> 1,000 years, then voila, we arrive at May 21, 2011. Patent folly based on clipping words out of their natural sense in context and imposing a read-in meaning. Worse, he has done this before, some 15 years ago. He excuses himself as having made a mathematical error then. 
I do not know what he will say this time around, but he needs to apologise to his followers, like that zealous and self-sacrificing young man I saw Friday in front of our Hospital. He then needs to apologise to the church and the leaders who tried to correct him, whom he would not listen to. Sadly, I gather he has dismissed and derided the church at large and has tried to in effect gather circles of listeners into informal groups; a classic sectarian blunder -- and one that will give a bad name to groups that meet for Bible study, prayer and discussion in homes or schools or offices. Then, he needs to go with the church leaders he has been reconciled with and sit before the world, apologising and allowing the leaders to present a more balanced view of the Christian faith's core message and its view of the End of Days and Day of the Lord. After that, he needs to set up a proper board of governance for his ministry, with serious stakeholder representatives on it. And, he needs to attach to it a panel of expert advisors who have the right and responsibility to keep his ministry on track through sound counsel. For, this is in the end a major failure of governance of a corporate entity. Idiosyncratic autocracy is dangerous, too dangerous today to be tolerated. But it is not just Mr Camping who needs to reflect on what just happened and make amends. What was troubling is that the coverage did not stress that here is a fringe person who has gone off the deep end and has been repeatedly corrected, but instead it sensationalised the error, as though this is a set-up of a strawman. The contrast with the very cautious treatment of Islam tells me that this is likely to be a cynical agenda at work on the part of key media figures, and once such have big enough mikes, the rest will endlessly repeat and amplify the standard story-line.
That lemming-like media mentality is very dangerous, and the sort of cynicism that failed to be balanced in this case -- even while being if anything overly cautious in dealing with the likes of radical Islam -- is even more dangerous. There is a lot of painful and bloody history on what happens when movements of conscience are repeatedly strawmannised and slandered. Demonisation and dehumanisation are the first steps to unjust suppression. (And when I see cases like the John foster parenting case in the UK, where the UK Dept for Equality and Human Rights told a High Court that Bible-believing Christianity is an "infection," and were not roundly rebuked, that is a grim portent. Our civilisation has been down that road before -- too many times, and it is nowhere any sane person wants to go.) Frankly, it smells a lot like hypocrisy and hostility. Anyway, let us return to focus for the thread. I'll address Heinrich in a moment, DV. GEM of TKI
kairosfocus
May 22, 2011 at 03:04 AM PDT
KF: Thanks! I'm resigned to paying some bills and doing the grocery shopping as usual. Sigh. I'm still thinking about what you said . . . but, I'm still not up to a decent objection. Yet! :-) Now I suppose I'd best mow the lawn while it's not raining in God's Own Country . . . well, that's according to the locals. Yorkshire is quite nice, I do admit. But wet. See you all later but don't hold your breath. Life calls!
ellazimm
May 21, 2011 at 03:49 AM PDT
EZ: You enjoy your weekend. I actually ran into a young man today from the organisation promoting the date setting for the end of the world, next to the rum shop in front of the local hospital. Tried to ask him about date setting and the Bible's prohibition on that. He was not really listening; he ran into a wound-up spiel. Said he had been all over the Caribbean in the past several weeks, handing out booklets and books. He did look tired. However, we can be pretty sure date setters are in error, per, say, Mt 24:36. But, it seems there is a temptation that some cannot resist. What is far more serious is Paul's statement in Ac 17, and remember, this is an eyewitness-lifetime report (cf here on the minimal facts analysis):
Ac 17:26 And He made from one [common origin, one source, one blood] all nations of men to settle on the face of the earth, having definitely determined [their] allotted periods of time and the fixed boundaries of their habitation (their settlements, lands, and abodes), 27 So that they should seek God, in the hope that they might feel after Him and find Him, although He is not far from each one of us. 28 For in Him we live and move and have our being; as even some of your [own] poets have said, For we are also His offspring. 29 Since then we are God's offspring, we ought not to suppose that Deity (the Godhead) is like gold or silver or stone, [of the nature of] a representation by human art and imagination, or anything constructed or invented. 30 Such [former] ages of ignorance God, it is true, ignored and allowed to pass unnoticed; but now He charges all people everywhere to repent (to change their minds for the better and heartily to amend their ways, with abhorrence of their past sins), 31 Because He has fixed a day when He will judge the world righteously (justly) by a Man Whom He has destined and appointed for that task, and He has made this credible and given conviction and assurance and evidence to everyone by raising Him from the dead. [AMP]
Okay, all best. GEM of TKI
kairosfocus
May 20, 2011, 02:08 PM PDT
KF: I'll keep thinking, but I've got nothing else of value to add to the thread at this time. Which is okay by me; I really am more interested in understanding your view, and I've got a much better insight into that now. I don't think your posts are random noise . . . . well, maybe some of them. :-) So, thanks for indulging me. I've always liked the idea of the Socratic method, and it's nice to be able to wallow in it. I hope you all have a good weekend, not interrupted by the Rapture as predicted by www.familyradio.com. I'd like to be able to continue conversing in the future. But if the Rapture really is coming, I'm in for a pretty hideous time.
ellazimm
May 20, 2011, 01:02 PM PDT
>> Heinrich: How do you formally define "specified complexity", and "meaning/function"? I told you already -- Dembski took care of the complexity part in NFL, and he also covered "meaning/function". In biology, specification refers to biological function. IOW, "information" as it is used by IDists is the same as everyday use. >>
Where did you tell me? Perhaps we should concentrate on the "meaning/function" part -- how is that formally specified? I'm not sure how the everyday use of "information" can be formalised to be of use here -- can you explain?
Heinrich
May 20, 2011, 12:55 PM PDT
EZ: Thanks for your further remark. We could debate the ins and outs of many origins sciences fields till the proverbial cows come home, e.g. just where did the EC arc of explosive volcanoes come from, and what does that mean for old smoky, maybe a dozen miles S of where I sit -- who was stinking up the place with H2S earlier this week, just to remind us he is still in business. (I used to have a joke about how he would occasionally break into Mrs Dyer-Howe's Volcano Rum stocks to tipple a sample, tank up and blow . . . ) But the bottom line will remain: in OS work, one provides, on inference to the best explanation, a provisional causal account of traces of the past in the present, in light of observed dynamics causally adequate to account for them. In the case of the digitally coded FSCI [dFSCI] in the living cell, complex codes, algorithms, and code strings have only one known capable causal force: intelligence. BTW, the issue is not whether my posts are a simulation, as that too would be a design, but whether they are lucky noise, such as a burst of sky noise getting into a server on the net. In short, even your own response shows just how strongly we know that the most credible explanation for dFSCI in particular is design. GEM of TKI
kairosfocus
May 20, 2011, 08:50 AM PDT
KF: Okay, I see what you're saying about the reams of evidence. Have I ever met you? I can't say, as I don't even know what your real name is! We're both trained in mathematics, so . . . . it is possible we have met. How do I KNOW you're a real person? Well, you are way too stubborn, idiosyncratic and original to be a simulation. I am known to be quite pedantic, but you must drive some people mad! I happen to like people who don't take BS lightly. So, even though I am disagreeing with you, I respect the way you approach important questions. You think deeply, latch on and don't let go. As far as the issue of first showing a current process that is capable of producing the effect 'observed' in the past before making an inference . . . . . which is, after all, the mantra of modern geology . . . the only example I can think of as a parallel to evolutionary reasoning is plate tectonics. I'm not saying it's a great analogy. So . . . my thinking is that we have observed in the last 100 years creepingly slow sea floor spreading and small shifts along known fault lines, but not the incredible shifts of continents the theory 'predicts'. In my mind, perhaps erroneously, generalising the small observed shifts AND all the other consistent evidence into large shifts over eons and eons of time is the same kind of reasoning by which, say, the observed morphological changes in dog breeds can be extended to explain the development of whales from land-dwelling creatures. And considering all the other parallel and confirming evidence as always. (I've been reading Jonathan Wells's newest book and, I'm pretty sure, he doesn't address the geographical distribution evidence. Which is a shame. I was hoping to hear his thoughts on ring species.) I just thought of another possible comparison. We observe certain decay rates in radioactive substances and use that to reach into the past and draw conclusions about things that were not 'observed'. I will keep thinking.
I have no doubt you will find a flaw in my 'logic'. But, as I've said many times, I'm here to find out how y'all think about these issues.
ellazimm
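[Editor's note:] The decay-rate comparison above can be made concrete. Here is a minimal sketch (not from the thread; the carbon-14 half-life of roughly 5730 years is used only as a familiar example of a rate measured in the present) of how a measured decay rate is extrapolated to unobserved past events:

```python
import math

# Decay law: N(t) = N0 * (1/2)^(t / half_life).
# The half-life below is carbon-14's (~5730 years), chosen only as a
# familiar example of a present-day measured rate.
HALF_LIFE_YEARS = 5730.0

def remaining_fraction(elapsed_years: float) -> float:
    """Fraction of the original sample left after elapsed_years."""
    return 0.5 ** (elapsed_years / HALF_LIFE_YEARS)

def age_from_fraction(fraction: float) -> float:
    """Invert the decay law: years needed to decay to this fraction."""
    return -HALF_LIFE_YEARS * math.log2(fraction)

print(remaining_fraction(5730.0))  # one half-life: 0.5 remains
print(age_from_fraction(0.25))     # two half-lives: 11460.0 years
```

The reach-into-the-past step is the second function: from an observed remaining fraction, the elapsed time is inferred on the assumption that the presently measured rate held throughout.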
May 20, 2011, 08:15 AM PDT
EZ: Having had to correct unresponsive misbehaviour above, let me first express appreciation for the responsiveness in:
I see your point. I’m going to have a think about what to say in response. I have to admit I’m a bit skeptical of your statement: “In this case, we have abundant — billions of test cases, growing at literally millions per week thanks to the Internet — on how FSCI is a reliable SIGN of intelligent design. This is known, routine source/cause and reliable sign.”
Perhaps I can give you some context: have you ever met me? How do you know that what I have put up is a real person with a real mind writing, and not mere lucky noise on the Internet? One key reason is that you know that contextually responsive posts in the code patterns of a known language -- as opposed to random gibberish: fgwdjjgfuhvhb -- are a hallmark of design. (Think, then, of the files in cabinets and drawers etc. in offices all over the world, then the groaning shelves of libraries, then the stored plans in design offices, then the Internet full of pages, emails and blogs etc. I can confidently say that any of these documents with over 125 bytes' worth of FSCI is designed.) You may want to look here, in a background post for the ID foundations series, and then here on the first post in the series. GEM of TKI
kairosfocus
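[Editor's note:] The "125 bytes" figure rests on a simple storage-capacity count. A minimal sketch, assuming the 7-bits-per-ASCII-character convention used in these threads and a 1000-bit (125-byte) threshold; both are stated assumptions, and this counts raw capacity only, not functional specificity itself:

```python
# Raw storage capacity of ASCII text, compared against a threshold.
BITS_PER_CHAR = 7          # assumption: 7-bit ASCII characters
THRESHOLD_BITS = 1000      # assumption: the 125-byte figure from the comment

def capacity_bits(text: str) -> int:
    """Raw storage capacity of an ASCII string, in bits."""
    return BITS_PER_CHAR * len(text)

def beyond_threshold(text: str) -> bool:
    """True once the string's raw capacity passes the threshold."""
    return capacity_bits(text) >= THRESHOLD_BITS

short = "fgwdjjgfuhvhb"                 # 13 chars * 7 bits = 91 bits
print(capacity_bits(short))             # 91
print(beyond_threshold(short))          # False
print(beyond_threshold("x" * 143))      # 143 * 7 = 1001 bits -> True
```

On this model, any contextually responsive English passage of about 143 or more characters clears the 1000-bit capacity mark; the design inference in the comment then turns on the passage also being functionally specific, which the count alone does not establish.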
May 20, 2011, 07:01 AM PDT
Onlookers: Sadly predictable. Having been bested on the facts again, MF finds an excuse to ignore another participant in the discussion. Thus, it is increasingly and sadly plain that he is not here for dialogue, or even -- a distinct step down -- debate, but to score distractive or dismissive talking points and try to stir up confused exchanges with those who try to discuss issues with him. In that exercise, he plainly wants to cherry-pick whom he can play the points-scoring game off, without having to actually seriously engage substantial matters or accept well-earned corrections. In short, this is the all too familiar red herring, strawman, ad hominem tactic, in another guise. Pardon some direct observations; only such will, in the end, clear the fog of misleading rhetoric away. I find that, frankly, dishonest, arrogant and utterly rude; though it is of course a bit more cleverly sophistical than the sort of fever-swamp abuse that is all too common from objectors to the design inference. This pattern is also increasingly evident for MG. Now, as you can see above, when Joseph indeed went overboard, I called him up on his tone. (Notice carefully: MF fails to acknowledge that, as he has determined to ignore anything I say on the flimsiest of excuses. So, for excellent reason, I find any pretences on his part to be civil, concerned for respectful discussion or serious about the actual issues distinctly hollow. Of course, he and/or others of his ilk will use talking points like suggesting that I am being abusive when I correctively point out the dishonest rhetorical tactics being used. That, too, is yet another subtle sophistical, or even propagandistic, tactic: the turnabout accusation, designed to confuse the onlooker by pretending that the victim is the chief perpetrator.
Just remember, to set this in proper perspective: MF is currently hosting a blog where participants are indulging in privacy violation -- which, while he is quick to correct CD about someone who has been openly nasty, he glides over in silence when he comes here to comment on a thread on a post by the victim of that outing behaviour. Think about the depth of willful disrespect, sheer chutzpah and plain no-broughtupcy rudeness involved in doing that.) Now, Joseph plainly heeded the correction I gave above. (And, J, the tactics I now have to be engaging are part of why we need to be very careful not to be unjustifiably harsh or abusive, including elsewhere, such as in your own blog. Notice how terms you use to tag evo mat advocates in your own blog are being thrown in our faces here at UD.) But Joseph makes a handy target to personalise and dismiss the issue by attacking the man. Instead of accepting well-warranted correction on a gross error when MF said:
How can science not know how they originated and yet know that all nucleotides are equally probable?
. . . instead we see an attack on the man. Just a bit more subtle than the usual fare of open invective. Sadly, predictably typical. On the whole, I think we can now safely take MF's "I will not respond to you, on excuse X, Y or Z" tactic as an admission of want of substance on the merits. In this case, he and MG accused J of not having grounds for his claim that we can assign nucleotide bases at 2 bits storage capacity. If you doubt me on that, simply scroll up and see the exchange over the past couple of days, once J intervened and gave his calculation on CSI being Shannon-metric info in the context of functional specificity. I drew out the elaboration on the difference between storage capacity and code usage of that capacity, noting that we are going to be at about the same order of magnitude in a context where we have orders of magnitude to play with, and MF and MG tried dismissive tactics that showed ignorance of the basic fact that DNA is a 4-state storage unit on a per-base basis. Yup, that is how ill-informed the objections now are. Both J and I responded, pointing out the chaining in the nucleotide string, and that in DNA the complementarity is between corresponding points on the TWO helices, with the further note that to store information, we need flexibility: as I noted, if there is no contingency in the chain sequence, then we have a crystal, not an information-storing molecule. The sugar-phosphate backbone of the DNA strand assures that required flexibility, and the thousands of proteins coded for in DNA show just how flexible the sequencing of the chain is. And, BTW, when we go across to the tRNA that actually clicks proteins together AA by AA, the AA is held on the end opposite the anticodon, i.e. the correspondence between DNA and the AA coded for is not physical-dynamic but informational-algorithmic. The tRNA is a moving-arm device and taxi, so a sequence of tRNAs assembles the AA chain step by step.
The actual functioning of the resulting protein is several stages further on: folding, agglomeration, activation, transport to use site. And the mechanism of sequencing AAs is not physically constrained by the particular sequences that fold functionally. In short, fold domains are deeply isolated in AA sequence space. Islands of function, in the heart of the cell. Function coded for in DNA, using a language used to indirectly control an assembly machine, the ribosome, with mRNA as the code-tape device. All of this is strong evidence of purposeful design, save to those who refuse to see it. GEM of TKI PS: Have a read here, in the IOSE course on these topics. Take time to watch the video.
kairosfocus
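[Editor's note:] The two technical claims in the exchange above -- DNA as a 4-state store carrying 2 bits of capacity per base, and a codon-to-amino-acid mapping that is informational rather than physical-dynamic -- can be sketched in a few lines. The codon table below is a five-entry subset of the standard genetic code, included for illustration only:

```python
import math

BASES = "ACGT"  # DNA is a 4-state store: log2(4) = 2 bits capacity per base

def capacity_bits(dna: str) -> float:
    """Raw storage capacity of a DNA string: 2 bits per base."""
    return len(dna) * math.log2(len(BASES))

# Illustrative subset of the standard genetic code (coding-strand codons).
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "AAA": "Lys", "GGC": "Gly", "TAA": "STOP",
}

def translate(dna: str) -> list:
    """Read codons three bases at a time, stopping at a stop codon
    or any codon outside this illustrative table."""
    chain = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None or aa == "STOP":
            break
        chain.append(aa)
    return chain

print(capacity_bits("ATGTGGAAA"))   # 9 bases x 2 bits = 18.0
print(translate("ATGTGGAAATAA"))    # ['Met', 'Trp', 'Lys']
```

The point the lookup table makes is the one argued in the comment: nothing in the chemistry of the chain forces a given codon to yield a given amino acid; the correspondence is carried as a code, which is why the capacity figure (2 bits per base) is distinct from how much of that capacity any particular sequence functionally uses.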
May 20, 2011, 06:39 AM PDT
KF: I see your point. I'm going to have a think about what to say in response. I have to admit I'm a bit skeptical of your statement: "In this case, we have abundant — billions of test cases, growing at literally millions per week thanks to the Internet — on how FSCI is a reliable SIGN of intelligent design. This is known, routine source/cause and reliable sign." BA77: Likewise, I am puzzled by your statement: "thus ellazimm you have no justification for your statement because we can see the entire universe, itself, coming into existence, and we can experimentally confirm what kind of event it must have been, thus clearly the 'Designer' has certain attributes that lend themselves readily to experimentation." But I shall think on that also before responding. Joseph:
ellazimm
May 20, 2011, 06:27 AM PDT