
FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the recent log reduction of the chi metric in the CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 ·ϕS(T)·P(T|H)]  . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log(p) . . .  eqn n2

3 –> So, using D2 for ϕS(T) and p for P(T|H), we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2),  where Ip = – log2(p) and K2 = log2(D2)  . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
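
For those who want to check the arithmetic, here is a minimal sketch in Python (the probability and specification values are illustrative placeholders of my own choosing, not Dembski's):

    import math

    def chi_2005(p, d2):
        # Dembski 2005 form (eqn n1/n3): chi = -log2(10^120 * D2 * p)
        return -math.log2(10.0**120 * d2 * p)

    def chi_reduced(ip, k2):
        # Log-reduced form (eqn n4): chi = Ip - (398 + K2)
        return ip - (398 + k2)

    def chi_500(ip):
        # Solar-system threshold form (eqn n5): bits beyond 500
        return ip - 500

    p = 2.0**-700        # P(T|H): a 700-bit improbability (illustrative only)
    d2 = 2.0**80         # phi_S(T): an 80-bit specification count (illustrative)
    ip = -math.log2(p)   # Hartley information, I = -log2(p) (eqn n2)
    k2 = math.log2(d2)

    print(chi_2005(p, d2))      # ~221.4 bits beyond the 10^120 threshold
    print(chi_reduced(ip, k2))  # 222.0: agrees, modulo 10^120 ~ 2^398.6 rounding
    print(chi_500(140))         # -360: within the threshold (cf. point e below)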

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent . . . . One may use the Durston metric as a good measure of the target zone’s actual encoded information content, which Table 1 also conveniently reduces to bits per symbol, so we can see how the redundancy affects the information used across the domains of life to achieve a given protein’s function; not just the raw capacity in storage unit bits [= no. of AA’s * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained.]
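
A few further lines reproduce the n7 results from the Table 1 fits values quoted above; the figures in the script are simply copied from that table:

    import math

    # Durston's fits (functional bits) from his Table 1, as quoted above
    proteins = {
        "RecA":      (242, 832),    # (AA chain length, fits)
        "SecY":      (342, 688),
        "Corona S2": (445, 1285),
    }

    THRESHOLD = 500  # bits: the solar-system threshold

    for name, (aa, fits) in proteins.items():
        chi = fits - THRESHOLD           # Dembski-style bits beyond the threshold
        capacity = aa * math.log2(20)    # raw storage capacity, ~4.32 bits/AA
        print(f"{name}: {chi} bits beyond (raw capacity ~{capacity:.0f} bits)")

    # RecA: 332, SecY: 188, Corona S2: 785 -- the n7 results above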

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits, and we compare it to a threshold of a sufficiently large configuration space, we may infer that the instance of FSCI (or more broadly CSI) is sufficiently isolated that the accessible search resources make it maximally unlikely that its best explanation is unintelligent cause by blind chance plus mechanical necessity. Instead, the best, and empirically massively supported, causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something that is in a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond that threshold.
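
(The orders-of-magnitude claim is a one-line check, using the figures just given:)

    import math

    states = 10.0**102  # upper bound on Planck-time quantum states, solar system
    space = 10.0**150   # configs for 500 bits of information capacity
    print(math.log10(space / states))  # 48.0: the space exceeds the states by 10^48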

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is, on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window

e –> What the reduced Chi metric is telling us is that if, say, we had 140 functional bits [20 ASCII characters], we would be 360 bits short of the threshold, and in principle a random walk based search could find something like that. For the reduced chi metric still gives us a value; it tells us that we are falling short, and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold

f –> So, the Chi_500 metric tells us instances of this could happen by chance and trial and error testing.   Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of the form

G(s) = [(s – z1)(s – z2) . . . ]/[(s – p1)(s – p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold, and one where poles approaching it from the LH half-plane show up as a detectable peakiness in the frequency response.
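
To make the parallel concrete, here is a minimal sketch of that stability test, assuming numpy is available; the two example denominators are made-up illustrations:

    import numpy as np

    def unstable(denominator_coeffs):
        # True if any pole of G(s) lies in the right half of the s-plane.
        # Coefficients are highest power first, e.g. s^2 + 3s + 2 -> [1, 3, 2].
        poles = np.roots(denominator_coeffs)
        return any(pole.real > 0 for pole in poles)

    print(unstable([1, 3, 2]))   # False: poles at s = -1, -2 (stable)
    print(unstable([1, -1, 2]))  # True: poles at 0.5 +/- 1.32j (unstable)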

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

Ek = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them ” . . .  is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”
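
To illustrate the threshold character of that last relation: below the threshold frequency no electron is ejected, however intense the light. A sketch, using the textbook value of about 2.28 eV for the work function of sodium as an illustration:

    H_EV = 4.1357e-15   # Planck's constant in eV*s
    PHI = 2.28          # work function of sodium, eV (textbook value, illustrative)

    def max_kinetic_energy_ev(freq_hz, phi_ev=PHI):
        # Photoelectric relation: Ek = h*nu - phi.
        # A negative result means below threshold: no emission at any intensity.
        return H_EV * freq_hz - phi_ev

    print(max_kinetic_energy_ev(4.0e14))  # ~ -0.63 eV: red light, no electrons
    print(max_kinetic_energy_ev(7.5e14))  # ~ +0.82 eV: violet light, emission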

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

Y = C + I + G + [X – M].

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*VT = PT*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and — shudder — judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that, e.g. look at the problem of reading a meniscus! (Try it, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
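
Axe's probabilities can be recovered by directly counting the size of the island of function. The sketch below assumes a 42-character target over a 27-symbol alphabet (26 letters plus the space) and counts strings within k substitutions of the target; Axe's own bookkeeping may differ in detail:

    from math import comb, log10

    L = 42  # characters in "no biological information by natural means"
    A = 27  # assumed alphabet: 26 letters plus space

    def island_size(max_typos):
        # Count strings within max_typos single-character substitutions of target
        return sum(comb(L, k) * (A - 1)**k for k in range(max_typos + 1))

    space = A**L  # the whole config space, ~10^60.1
    for k in (0, 4, 5):
        p = island_size(k) / space
        print(f"up to {k} typos: P ~ 1 in 10^{-log10(p):.1f}")

    # prints ~10^60.1, ~10^49.4, ~10^47.1: in line with Axe's one in 10^60,
    # one in 10^50 and one in 10^47 figures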

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, providing they are warranted on empirical and analytical grounds. As has been provided for the Chi metric, in reduced form.  END

Comments

Mark- That is OK, as I find trying to discuss these matters with you about as fulfilling as talking to a wall. I don't know why you are here if you cannot produce any evidence to support your position. Attacking ID will not do that. You actually need to provide positive evidence for your position. Also you don't need to worry about CSI. All you need to do is focus on your position and demonstrate that blind, undirected processes can produce what IDists call CSI. IOW the key to refuting ID is in demonstrating your position. Good luck with that...
Joseph
May 20, 2011, 05:58 AM PDT

Joseph - just so you know where I stand. I don't want you to waste time making responses I am not reading. I am finding debating with you too aggressive and will no longer participate. This is my weakness as much as yours - but we do this for pleasure and I am not enjoying the experience. I am sure you will find other willing opponents. Mark
markf
May 20, 2011, 05:41 AM PDT

MarkF:

How can science not know how they originated and yet know that all nucleotides are equally probable?

The two have nothing to do with each other. What I said concerns the way the nucleotides are ordered on ONE side of the DNA. And with genes, not just any sequence makes a gene- that is the point.

We may not know in detail how the first genes began but we know quite a lot about the processes by which they develop and change – duplication, inversion, replication, transposition, point mutation etc.

So what? What methodology was used to determine all of those are blind watchmaker processes? Show us the math- provide a mathematically rigorous definition or get lost.

Does one of your comments deny that when a gene is duplicated then the duplicate is almost identical to the original?

Do you realize that with a gene duplication it isn't always the case that the entire gene gets duplicated? So in those cases the duplicate will not resemble the original. But there still isn't any evidence that gene duplications are blind watchmaker processes. Nice of you to continue to ignore that.

That's not a claim it is a request!

Dude, you claimed there isn't any justification for my assumption. Now you have to step up and support that claim. Or admit that you are lying.
Joseph
May 20, 2011, 05:26 AM PDT

MathGrrl:

Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target.

My claim has been supported- by Mung- who proved ev is a targeted search- as did Marks and Dembski. IOW you are either lying or just plain ignorant.
Joseph
May 20, 2011, 05:13 AM PDT

Hi Mark, Thanks very much for that. Seeing your comment to "The Whole Truth" is very reassuring. The main problem with any established forum that attracts regular participants on both sides of a very strong disagreement is the inevitability of a blood-feud breaking out. That's why strict moderation is important, because it will filter out those remarks that are likely to escalate the problem. I totally appreciate your concerns about double standards, and I explained why they need to be tolerated over on your blog. For even daring to question evolution on other forums, I (along with others) have been subject to uncensored, horrendous abuse which is in a completely different league to anything you see on here. But we've all got to try and draw a line somewhere and make a fresh start or else constructive debate will cease.
Chris Doyle
May 20, 2011, 04:16 AM PDT

Onlookers: FOR THE RECORD: Kindly note: at the same time that MF is busily trying to burnish his civility credentials with CD, he is insistently ignoring the author of the post here, on the flimsiest excuses, and has tolerated privacy violations at his blog. You will understand why I will have nothing further to do with MF's blog and those of like ilk, save to remark for the record when strictly necessary. GEM of TKI
kairosfocus
May 20, 2011, 04:13 AM PDT

F/N: EZ, I hope you understand that I am requiring that -- before you can claim to have an adequate model of the past -- you must show on empirical observation in the present that your claimed causal factors (blind chance and mechanical necessity) are empirically sufficient to trigger the effect in question, FSCI. We have a known and reliable causal factor for that, but it is not chance plus mechanical necessity; it is design. And, your dodge notwithstanding -- the context should have been obvious -- you plainly do not. Or, instead of playing strawman games on what I said, you would have triumphantly announced it. In short, it is what you tip-toed by in silence that is utterly revealing.
kairosfocus
May 20, 2011, 04:07 AM PDT

#134 Chris

I won't be returning to a blog where people like "The Whole Truth" can make comments like that with the active support of people like "Toronto" and the passive support of all the other banned evolutionists. Anything else you want to say to me or respond to, needs to be said here.

I am sorry to hear that. For the record, this is the final comment I made to "The Whole Truth" on my blog about his comments:

WT – nine of the last ten posts are from you and they are increasingly personal and lacking in content. If you want to use up so much bandwidth for these purposes please can you do it somewhere else.

I could hardly put it more strongly. It was preceded by a number of other requests to alter his approach. This one appears to have been successful as he has not commented since. I find it worthwhile commenting here despite a fairly continuous level of insults and such like. Just avoid discourse with those that you cannot get on with for one reason or another (I guess I have to accept that Joseph is one of those).
markf
May 20, 2011, 03:50 AM PDT

EZ, 133: Pardon, I am very busy just now, so I must be very focussed and selective. So, let me pick a key slice of the cake that shows the vital ingredients in action:

BUT . . . you make a design inference without observational data! You draw conclusions based on the results of events that happened a long time ago . . .

1 --> Are you aware of the difference between operations science and origins science? (Cf discussion here.)
2 --> The former works by direct observation of the facts on the ground; the latter provisionally reconstructs the past by creating a model, based on results of operations science, that shows the processes in the present capable of causing what we see as traces from the past beyond observation and record.
3 --> So, if you are challenging design inference on such traces and dynamics, you must either challenge the whole system of the reconstruction of the past as similarly fatally flawed -- geology, paleontology, cosmology etc -- or else find yourself guilty of selective hyperskepticism, Cliffordian/Saganian evidentialist form; exerting a double standard in warrant, to reject what you do not want to accept.
4 --> In this case, we have abundant evidence -- billions of test cases, growing at literally millions per week thanks to the Internet -- on how FSCI is a reliable SIGN of intelligent design. This is known, routine source/cause and reliable sign.
5 --> We have, in addition, no sound counter examples where chance and necessity without intelligent direction give rise to CSI (cf my remarks on Ev just above).
6 --> And on the needle in the haystack/infinite monkeys analysis, we see good reason -- closely related to the analysis that warrants the second law of thermodynamics -- to accept that the targets in question are beyond the capacity of the cosmos, acting by chance and necessity without intelligence, to find.
7 --> In short, we are well warranted to infer from sign to known routine source, with the backup of known reliability of the sign, and the analysis that shows why that should be.
8 --> On the strength of that, we have every good reason to conclude that FSCI is a good sign of design as most credible cause, once we take the blinkers of a priori imposed materialism off.
9 --> So strong is this, that we have every good reason to then challenge those who would explain the origin of life and of body plans -- both deeply embedded with FSCI -- that to warrant their case, in light of the discoveries about the cellular nature of life since the discovery of DNA in 1953 and its decoding since the 1960's, as well as the related discoveries about the nanotech machinery of cell based life, they must now show that blind chance and necessity acting by themselves are capable of creating FSCI, or surrender their claims that imply such.
10 --> The rise of GA's shows that this challenge has been implicitly seen as valid, and that it is serious. The nature of GA's turns out to further support the point that FSCI is the product of design, as the just above brings out for the case of ev. In short, the attempted counterexample turns out to substantiate the point.
11 --> Going beyond this, macro evo is a major claim in biology. It cuts across a major set of empirical findings and related analyses as just summarised. While such observations and analyses are inevitably provisional -- as is the case with all of science -- the weight of evidence and analysis, especially in light of that tie to the second law of thermodynamics, plainly dramatically shifts the burden of proof, just as it is incumbent on proposers of perpetual motion machines of the 2nd kind to show that their contraptions work as advertised. (Unmet to date.)
12 --> So, I am fully warranted to demand:

What you need to do — if you are to think scientifically — is to produce empirical observational, factual data that tests and adequately supports the claims; without ideological blinkers [i.e. a priori evolutionary materialism or its kissing cousins] on.

See my point? GEM of TKI
kairosfocus
May 20, 2011, 03:22 AM PDT

Onlookers: Here is yet another example of MG's failure to face, recognise or acknowledge basic and evident facts, facts that -- for weeks now -- have been just one clicked link away:

Joseph: It has been demonstrated that ev is a targeted search. MG: Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target.

In fact, in the previous CSI Newsflash thread, you will see in my edit response to Graham at comment no 1 a summary of issues and concerns, including matters for MG to explain herself on. Prominent on this -- right there in the opening paragraph of the comment -- is Mung's summary dissection of ev at comment 180, which DOES reveal beyond any reasonable doubt -- from the horse's mouth (cf his snippets at 182 and some of his initial examination of the Schneider horse race page from 126 on . . . ) -- that it is in fact a targeted search, though the target -- the string(s) to be matched -- is allowed to move around a bit.

In addition, ev uses in effect a Hamming distance to target metric in selecting the next generation. To see how that conclusion is warranted, cf Mung at 177 on the number of "mistakes" and my remarks just following at 179 on how that translates into a Hamming distance to target metric. (Also cf no 178 on the closely related general nature of GA's. BTW, GA's, by virtue of using hill climbing on a fitness function of nice trendy slope that leads to the targets, are inherently targeted searches. The design of such nice trendy fitness functions matched to the underlying config space of the "genome" is a non-trivial, intelligent matter, as can be seen from the online textbook on GA's that was unearthed in the discussion. In this context, this means that GA's operate WITHIN islands of function, i.e. they are models of micro evo, at best. The issue design theory raises -- as I pointed out again yesterday -- is the question of getting TO such islands of function in config spaces that, by many orders of magnitude, swamp the available resources of the atoms of our observed cosmos.)

In short, ev is a slightly more sophisticated version of Dawkins' notorious Weasel. It is plain that MG is either unable or unwilling to examine and properly assess the facts, or -- pardon, this is what she would have to be if she is knowingly making false and misleading assertions as cited at the top of this comment -- she is a brazen rhetor exploiting the fact that often onlookers are not going to examine the true facts for themselves, so can be misled by someone who they think is their champion. (Especially if such a rhetor uses the tactic of ducking out and waiting until further discussion has in effect buried the relevant facts, so one can then pretend that they do not exist.)

Nor have I forgotten the issue of MG's snide allusion to Galileo's whispered remark after his forced recantation at the hands of the Inquisition, which comes up in the cluster leading up to 180. This is an outrage that needs to be apologised for, as there is no religious magisterium here imposing its will by threats of the thumbscrews. MG is here guilty of outright slander. MG has some serious explaining to do. Again. GEM of TKI
kairosfocus
May 20, 2011, 02:46 AM PDT

ellazimm, you are wrong on just about all of the presuppositions and conclusions that you have made; for instance this one you made:

'but postulating the existence of one (a Designer) (which is implied by allowing the inference to be drawn) is less parsimonious because it introduces a process for which there is no independent physical evidence AND, if the designer is considered to be outside of the reach of experimentation and evidence then it's not an issue that can be addressed by science. ,,,'

Yet ellazimm we can look back in time to the beginning of the universe and see the entire universe being brought into existence instantaneously:

The Known Universe by AMNH http://www.youtube.com/watch?v=17jymDn0W6U

and moreover we know what photons are 'made' of, thus we have a very good picture of what kind of event it must have been:

Explaining Information Transfer in Quantum Teleportation: Armond Duwell, University of Pittsburgh. Excerpt: In contrast to a classical bit, the description of a (photon) qubit requires an infinite amount of information. The amount of information is infinite because two real numbers are required in the expansion of the state vector of a two state quantum system (Jozsa 1997, 1) --- Concept 2. is used by Bennett, et al. Recall that they infer that since an infinite amount of information is required to specify a (photon) qubit, an infinite amount of information must be transferred to teleport. http://www.cas.umt.edu/phil/faculty/duwell/DuwellPSA2K.pdf

Researchers Succeed in Quantum Teleportation of Light Waves - April 2011. Excerpt: In this experiment, researchers in Australia and Japan were able to transfer quantum information from one place to another without having to physically move it. It was destroyed in one place and instantly resurrected in another, "alive" again and unchanged. This is a major advance, as previous teleportation experiments were either very slow or caused some information to be lost. http://www.popsci.com/technology/article/2011-04/quantum-teleportation-breakthrough-could-lead-instantanous-computing

Quantum no-hiding theorem experimentally confirmed for first time. Excerpt: In the classical world, information can be copied and deleted at will. In the quantum world, however, the conservation of quantum information means that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. A third and related theorem, called the no-hiding theorem, addresses information loss in the quantum world. According to the no-hiding theorem, if information is missing from one system (which may happen when the system interacts with the environment), then the information is simply residing somewhere else in the Universe; in other words, the missing information cannot be hidden in the correlations between a system and its environment. (This experiment provides experimental proof that the teleportation of quantum information in this universe must be complete and instantaneous.) http://www.physorg.com/news/2011-03-quantum-no-hiding-theorem-experimentally.html

,,, thus ellazimm you have no justification for your statement, because we can see the entire universe itself coming into existence, and we can experimentally confirm what kind of event it must have been; thus clearly the 'Designer' has certain attributes that lend themselves readily to experimentation. etc.. etc..

The Afters - Light Up The Sky - Official Video http://www.youtube.com/watch?v=8LQH6UDi15s
bornagain77
May 20, 2011, 02:35 AM PDT

Just to be clear: I don't see how you can look at the existing genome and say that it implies design intervention at some undefined point in the past if your contention is that such events must be observed. We look at the same data and evidence.

You say: I wasn't there but I think there is clear indication that some of this is better explained by the intervention of an intelligent designer as opposed to blind, unguided processes. And you say: In addition, the naturalistic explanation is so highly improbable that it's less parsimonious.

I say: You can't prove a negative. Even though I wasn't there and I don't understand exactly how all the steps occurred, I think the different strands of evidence are all consistent with common descent with modification. And I say: I can't disprove the intervention of a designer, but postulating the existence of one (which is implied by allowing the inference to be drawn) is less parsimonious because it introduces a process for which there is no independent physical evidence AND, if the designer is considered to be outside of the reach of experimentation and evidence, then it's not an issue that can be addressed by science.

You may say: The evidence is also consistent with an intelligent designer who chose to proceed in that fashion. And I would say: True, but a designer with that ability could have done things differently, whereas unguided processes have no 'choice'. And, if the designer was limited, then you are assuming aspects of the designer, which begs the question of the designer's existence.

Is that fair? Probably not, but I tried. And I've got a full day ahead of me now. And I'm not sure there are any more points to make. But, as always, thanks for the discussion!
ellazimm
May 20, 2011, 12:19 AM PDT

Hi Mathgrrl, thanks for your responses on this thread. I would still appreciate a response to the bacteria comment I made six months ago; it was not a soliloquy, it was a direct challenge to your claims about evolution in bacteria. I think it made uncomfortable reading for you and you don't know how to respond to it. Am I right? Ultimately, the record is here for all to see whether or not your questions have been answered by kairosfocus. I for one think they have been. The only reason I made any other reference to you was because Mark was pressing me on the subject of Joseph (by comparison to himself and yourself), this despite the fact that I expressed a reluctance to do so. I think my comments about you were fair ones: you do tend to ignore points raised by your opponents that you have no answer to, and go on like a broken record about stuff that has been dealt with several times over a long, long time ago. As I keep on saying, there really is no need for all this unpleasantness. All the best.

PS. I won't be returning to a blog where people like "The Whole Truth" can make comments like that with the active support of people like "Toronto" and the passive support of all the other banned evolutionists. Anything else you want to say to me or respond to, needs to be said here.
Chris Doyle
May 20, 2011, 12:13 AM PDT

KF: "What you need to do — if you are to think scientifically — is to produce empirical observational, factual data that tests and adequately supports the claims; without ideological blinkers on." BUT . . . you make a design inference without observational data! You draw conclusions based on the results of events that happened a long time ago. You can't point to a clear and unambiguous case of genomic design (by an unknown, mysterious designer) that was observed to occur right then. Isn't it inconsistent for you to ask for a type of evidence that you yourself are unable to provide for your own argument? You don't get to have double standards in science. I'm sorry but lots of science progresses by drawing inferences and conclusions based on the evidence and effects from non-observed events. I've never understood why many in the ID community belabour that point. The whole point of archaeology is to draw conclusion based on cultural remains NOT observed events. And yes, in some cases design arguments arise. But only when it's clear there was a possibility of there being a non-transcendental designer available, i.e. independent evidence of a designer. AND, if/when a speciation event or the creation of an organic molecule under blind processes is observed and documented you'll have to give up that defensive position and fall back.ellazimm
May 19, 2011
May
05
May
19
19
2011
11:59 PM
11
11
59
PM
PDT
Chris Doyle,

I don't know about Mathgrrl (disrespecting your opponents doesn't always manifest itself as explicitly incivil remarks: ignoring points that have been raised (for 6 months in my case!) and repeating the same refuted arguments over and over again is very disrespectful and a waste of all our time, for example)

Making side comments about someone without backing up your baseless accusations is considerably more rude than anything I've written online anywhere in the past few years. I explained above why you didn't get a response six months ago. Someone less generous of spirit than myself might come to the conclusion that you scrounged around for an excuse to cast aspersions on someone you disagree with for other reasons. Further, I have not repeated refuted arguments; I have been consistently and patiently attempting to get answers to what I originally thought would be simple questions about a key ID metric. I have yet to receive those answers. You, sir, have no business criticizing the online manners of others.
MathGrrl
May 19, 2011, 07:33 PM PDT

kairosfocus,

I have read through all of your responses since my comment numbered 60 in this thread . . . . you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition.

Now, I have repeatedly pointed you to and linked 23

Yes, you have. Unfortunately, none of the comments to which you've linked contain either a mathematically rigorous definition of CSI, a detailed example of how to calculate it for my gene duplication scenario, or answers to any of the questions I asked in my comments numbered 59 and 83, which are a direct response to your comment 23. I honestly don't understand why it is so difficult to get answers to these questions. As I have mentioned before, if one of my colleagues were to tell me that she had a metric that could be used to characterize data sets in a way no one had done before, and I asked her to define her metric in mathematical detail plus show me exactly how to calculate it for a few examples, she'd fill whiteboard after whiteboard for me. The hardest part would be to get her to stop. In the analogous situation here, I can't get anyone to even rigorously define what the metric is, let alone provide any calculations.
MathGrrl
May 19, 2011, 07:32 PM PDT

Chris Doyle, By the way, since we're discussing how and why online threads are dropped by some participants, I thought you'd like to know that there are still some open questions regarding your comments on Mark Frank's blog. The subthread starts here: http://mfinmoderation.wordpress.com/2011/05/14/does-uncommon-descent-deliberately-suppress-dissenting-views/#comment-3498
MathGrrl
May 19, 2011, 07:31 PM PDT

Chris Doyle,

As you've begun a theme of unanswered posts, I wonder if you'd be so kind as to respond to a post I addressed to you 6 months ago. You can find it here: https://uncommondescent.com.....ent-366931

As I noted in that thread, my interest was solely to correct a misconception or two about the nature of genetic algorithms. I explicitly said, earlier in the thread, that I didn't have the time or inclination to engage the topic of the evolution of bacteria. Given that, I don't see why you would expect a response to a post that was more of a soliloquy than a question.

Check out post 23 on this thread for the answers you're looking for from kairosfocus.

That comment does not contain a rigorous mathematical definition of CSI as described by Dembski, nor does it use such a definition to explain how to objectively calculate CSI for the first of my scenarios. It is not an answer.

There's a difference between you not liking the answer and not being answered at all.

I find that comment . . . odd coming from someone who writes so much about the rudeness of ID critics. Glass houses and all that.
MathGrrl
May 19, 2011, 07:30 PM PDT

Joseph,

It has been demonstrated that ev is a targeted search.

Your unsupported assertion notwithstanding, it is not possible for ev to be demonstrated to be a targeted search because review of the algorithm and inspection of the code proves that there is no target. The entire ev digital genome is subject to mutation and selection, including those sections that model the binding sites and the recognizers. Those sections co-evolve and have different sequences in different runs of ev. There is no explicit target.

The really interesting result of ev, predicted from Schneider's work with biological organisms, is that Rfrequency and Rsequence evolve to the same value. There is nothing in the algorithm or code that would lead one to expect this. This further demonstrates the lack of a target in ev.

I have provided this detail before, both here on UD: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 and on Mark Frank's blog when the threads here stopped accepting comments: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 You can read those for more detail. If you want to confirm for yourself that ev is not a targeted search, you can visit Schneider's site and download the papers and source code for yourself.

(Now, did you see what I just did there? Instead of simply replying that you are wrong and that I've already addressed the issue, I explained, again, why you are wrong and provided links to where I directly addressed your point and provided sufficient detail for you to understand the topic under discussion. Could you try that yourself sometime, please?)
MathGrrl
May 19, 2011, 07:29 PM PDT

Joseph, At 7:53 am on 05/19/2011 you wrote:

I have nothing else to say to you- you are a waste of time and bandwidth.

Then at 8:12 am you began a comment with:

And MathGrrl,

What a tease!
MathGrrl
May 19, 2011, 07:28 PM PDT

Dr Bot: The event, E, is a particular statue of Lincoln, say. The zone of interest or island of function, T, is the set of sufficiently acceptable realistic portraits. The related config space would be any configuration of a rock face. The nodes and arcs structure would reduce to a structured set of strings, a net list. This is very familiar from 3-d modelling (and BTW, Blender is an excellent free tool for this; you might want to start with Suzie). Tedious, but doable -- in fact many 3-d models are hand carved then scanned as a 3-d mesh, then reduced -- there is a data overload problem -- and "skinned." (The already linked has an onward link on this.)

The scope of the acceptable island would be searched by simply injecting noise. This will certainly be less than 10^150 configs. [Notice, the threshold set for possible islands of function is a very objective upper limit: the number of Planck-time quantum states for the atoms of our observed cosmos.] At the same time, the net list will beyond reasonable doubt exceed 125 bytes, or 1,000 bits. That's an isolation of better than 1 in 10^150 of the possible configs. And it is independent of the subjectivity of any given observer. ["The engines are on fire, sir! WE'RE GOING DOWN . . . "]

The chi- or X- metrics for Mt Rushmore will -- unsurprisingly -- easily be in "design" territory. In short, another case of what we see in the metric corresponding with what we see in the world of direct observation. And, one that shows how a subjective view can very rapidly and easily be taken out of the context of mere clashing opinions.

Now, in this case, because of the specific situation, the function is sculptural resemblance. That would have to be judged (even though we already have an upper limit), and it should be possible to calibrate a model for how much variation we can get away with -- i.e. recognisability as a portrait of a specific individual is subjective, but that is not as opposed to being objective. (And BTW, the coder of the program is using his subjectivity all through the process. As well, the engineer who designs the relevant equipment. Subjectivity is the CONTEXT in which we assess objectivity: credible extra-mental reality, at least on a sufficiently good approximation basis. Subjectivity is therefore not the opposite of objectivity.)

Function, however, is in many other cases not a matter of subjective judgement, especially for algorithmic code in an operational context, e.g. for a system controller. Programs that crash and burn are rather obvious, and may have rather blatant consequences. Similarly, even though we probably would have to use observers to decide when garbling of text is out of function, that is much more easily achieved than might be suspected -- cf Axe's work on that.

Now, we address the red herring led away to the strawman: why is "function" a question of MATHEMATICAL "rigour"? Especially, in a context where not even mathematical proofs and calculations are usually fully mathematically rigorous? (Cf 34 - 35 above and 23 - 24 above.) The proper issue is whether function is an objective, observable phenomenon. And, plainly it is. We may construct mathematical models, but that will not remove subjectivity in the process. To see what I mean: is VOLUME of a liquid an objective thing?

As in, note Fig 3 above, on how to read a meniscus [and onward, how to read an end-point of a titration with a colour indicator]: there is a judgment, and an inescapable subjectivity involved in many relevant cases, but that does not mean the result is not objective. The objection is misdirected, and based on a conceptual error, probably one driven by insufficient experience with real world lab or field measurements. GEM of TKI

PS: "We're going downnnnn . . . !"
kairosfocus
May 19, 2011, 03:32 PM PDT

KF @ 68:

The way to do that is here, following on from Fig I.2 (cf. also I.1 and I.3) — and has been for many months, i.e reduce to a net list on the nodes-arcs wireframe, with some exploration of the noise to lose portraiture fidelity.

Thanks for the link. I've been too busy to follow the numerous threads on CSI so I appreciate you directing me straight to the pertinent information. The example of CSI in Mt Rushmore brings up a few interesting questions - some of which may be me just misunderstanding. The first thing I should do is check my assumptions, which are that 'function' in this instance is basically 'looks like Lincoln' (or one of the others, but let's stick with Lincoln for the moment).

The method you describe, of reducing the relevant portion of the memorial to a wireframe, introduces, as you say, some degree of error in interpretation - namely how much granularity is required for the likeness to be recognisable. This introduces the interesting question of whether the CSI is actually measuring Mt Rushmore, or just the minimum information required to convey a likeness of Lincoln in wireframe format - we might find that the minimum likeness is recognisable, but not distinguishable specifically as Rushmore from any other sculpture of Lincoln. And we mustn't forget that we have knowledge, and images, of Lincoln to compare to (i.e. what if the face was distinct, but the product of the artist's imagination and not a real person).

Another big factor here is that this is, and always will be, a subjective measure. If the observer is someone like me with mild Prosopagnosia then the granularity must be higher than for someone else. If you happen to have a relative who looks like Lincoln then you might argue that the face at Mt Rushmore actually looks more like your great uncle Fred than Lincoln himself. The upshot is that any CSI calculation in this case would require some large error bars.

It gets even more interesting though if the observer - the semiotic agent - is blind! Does it still have function? Maybe, you can touch the face and feel the contours (if you are a skilled climber) but the CSI may be dramatically different when the face is apprehended this way.

The general observation that struck me (and which may be in error) is that the measure of CSI in this particular instance depends a great deal on the observer - people could argue over whether a low granularity reproduction actually looks like Lincoln or not. Contrast this with physics - we have an objective measure of force (with the label Newton) and so we don't get stuck trying to decide if there are x or y Newtons of force - we can measure it objectively and the measure does not depend on any personal skills or biases. It is consistent and reproducible.

What I am interested in knowing is how you can objectively calculate CSI rather than relying on a subjective assessment. If CSI in general requires a subjective assessment of a property like function (or is it specificity?) then how can it be mathematically rigorous?
DrBot
May 19, 2011, 02:32 PM PDT

MarkF:

Me: How do you know that for any position any of them is possible, much less equiprobable?
Joseph: Science.

This is rather lacking in detail! You believe that "No one knows how genes originated". How can science not know how they originated and yet know that all nucleotides are equally probable?

Me: Although we don't know a simple law or formula that determines the order, nevertheless genes are created by a process with stochastic influences and limitations.
Joseph: No one knows how genes originated. And there isn't any evidence for blind, undirected processes producing one.

We may not know in detail how the first genes began but we know quite a lot about the processes by which they develop and change – duplication, inversion, replication, transposition, point mutation etc.

Me: For example, in the case of gene duplication the nucleotides will almost certainly replicate the pattern of the gene that is being replicated – all other orders are very unlikely.
Joseph: Do you read what I post?

Most of it. I don't always understand it. Does one of your comments deny that when a gene is duplicated then the duplicate is almost identical to the original?

Me: What claim did you make and need to support?
Joseph: This one:

Please explain how you conclude that there are two bits of information per nucleotide using that formula – without simply assuming that the probability of any given base pair is equally likely and independent of every other base pair (because that assumption has no justification and is patently untrue).

That's not a claim, it is a request! Or are you referring to the bit in brackets?
markf
May 19, 2011, 02:04 PM PDT

Joseph & KF: I'll try and get back to you on your points tomorrow. But I think we're reaching an impasse. It happens.ellazimm
May 19, 2011
May
05
May
19
19
2011
01:51 PM
1
01
51
PM
PDT
EZ: You can assert (or even believe) anything you want. What you need to do -- if you are to think scientifically -- is to produce empirical observational, factual data that tests and adequately supports the claims; without ideological blinkers on. Which, in the case of macroevo resulting from cumulative filtered microevo, you simply do not have. It is not in the fossils -- overwhelmingly, sudden appearance, stasis and disappearance; and it is not in the implications of code or integrated functional, complex structure. After 150 years of trying.

You are in effect proposing to move from "See Spot Run" to a book, one tiny functional step at a time. Or from "Hello world" to an operating system, one tiny non-foresighted step at a time. Doesn't work. You would go broke trying to write books or software that way, real fast.

All of this goes to underscore the fundamental misconceptions that are blinding people from seeing the significance of functionally specific complex information as a sign of most credible cause. DNA and the wider systems of life are replete with FSCI, and there is exactly one empirically demonstrated, routinely observed cause of such FSCI: intelligence. The chance based engines of variation are not credibly able to generate the required FSCI. Not for first life and not for novel body plans.

I include first life as I insist that without a credible root the Darwinian tree of life has no basis. So, until there is a credible, empirically warranted chance plus necessity chem evo scenario that leads to coded DNA based life with genomes beyond 100,000 bases, that has in it cells that metabolise and have a von Neumann self-replicating capacity, there is no basis to even discuss macroevolution. Going beyond that, unless you have a similarly empirically warranted mechanism for chance variations and natural selections etc to arrive at novel body plans requiring 10 - 100+ million new functional bases, viable from embryogenesis on, on Earth dozens of times over in the past 500 - 600 MY, you have no basis for confidence in macroevolutionary models.

I hardly need to underscore that there is no sound empirical, observational warrant for such -- macro evo thrives by ideological censorship and lockout of the only observed source of FSCI, intelligence. This is a triumph of ideology over evidence. What we do have, as just pointed out, is the evidence that genetic engineering is real -- cf Venter and colleagues. Indeed, a molecular nanotech lab a few generations beyond Venter would be a sufficient cause for what we see. Beyond that, we do have empirical evidence of adaptation to environmental constraints, at micro-level. Over-extrapolation backed by ideological materialistic a priorism is not a sound basis for science, and never was. GEM of TKI
kairosfocus
May 19, 2011, 01:41 PM PDT

ellazimm:

the environment and competition favour those who are 'fitter'.

But 'fitness' is determined by whoever leaves the most offspring due to heritable genetic variation.

The differential reproduction/survival has to do with the ability to survive/exploit the situation BECAUSE of different genomic influence.

There are several reasons why organisms survive and reproduce- better genetics is just one.

I am arguing that macro-evolution is micro-evolution over long periods of time.

There isn't any data to support that claim.
Joseph
May 19, 2011, 01:39 PM PDT

KF: The mutations add the information in a step-by-step manner, with the environment selecting which variations 'make sense'. I suspect you're now going to trounce me for claiming that random mutations or duplications add information. I am arguing that macro-evolution is micro-evolution over long periods of time. The argument against this is probabilistic: there's not enough time for that many mutations to occur. That's the battle ground. Data is being generated. Lenski's work is pertinent. I'm assuming the recent work reported on ID: The Future is pertinent. But, logically, you cannot prove a negative. You cannot prove a highly improbable event didn't occur. You can only say it's extremely unlikely. Which kind of gets us back to a fine-tuning type argument. Such and such is sooooooo highly improbable that it's more reasonable to assume it's by design. I get that. I just don't like making assumptions. And, I agree, I am NOT addressing the origin of the first replicator. I'm not qualified to make those arguments. BUT, given a minimal replicator, I dispute the further need to search the entire configuration space. And I accept that's a big given.
ellazimm
May 19, 2011, 12:11 PM PDT

EZ: Re: "The environment/filter carves out the information by selecting the random variation which is more successful from the other random variations." In short, NS is a subtract-er, a culler; not an adder, a creator of info. You are back to an implicit appeal to chance variation as the source of information. And so you are right up against the needle in the haystack problem for first life and for origin of body plans. You have a theory of what is not in serious dispute even by Young Earth Creationists: micro-evo. To extrapolate this to body plan level macro-evo, you have to show much more, and in the teeth of the config space hurdles identified. GEM of TKI
kairosfocus
May 19, 2011, 11:26 AM PDT

CD: Yup, regulatory networks/circuits and the machinery that makes DNA info work in the living cell are just as important. Only, much less understood. I gotta clip and respond to EZ, then get out of here to my next appointment. G
kairosfocus
May 19, 2011, 11:22 AM PDT

Joseph: the environment and competition favour those who are 'fitter'. Different environments and different situations favour different variations. Maybe you prefer the term filter to choose?

The differential reproduction/survival has to do with the ability to survive/exploit the situation BECAUSE of different genomic influence.

But the test is the 'environment'. That's the filter.

KF: The information comes from generations of variation being culled and bred by the situation and environment. A mutation is valueless UNLESS it conveys an advantage. After eons of advantages stacking up, you have a compendium of information that is capable of producing a fairly fit individual.

My son plays video games. He doesn't like reading manuals and he doesn't spend much time trying to figure out the puzzle. He's 9. But he can remember. He tries things at random and 'dies', a lot. But, eventually, his information database is created and honed by the game environment. And don't start telling me that because the game was designed, that argues for design. I'm talking about the way a series of random variations can be guided into a font of information by a filtering environment.

This is why natural selection is NOT random. There is a 'memory' of what works. And new variation builds on what's worked in the past. That's how you add information. You start by randomly selecting options. The successful options you remember (i.e. those individuals survive). You add another layer of random tries. Save the winners. Etc. The environment/filter carves out the information by selecting the random variation which is more successful from the other random variations.
ellazimm
May 19, 2011, 11:18 AM PDT
