
FOOTNOTE: On Einstein, Dembski, the Chi Metric and observation by the judging semiotic agent


(Follows up from here.)

Over at MF’s blog, there has been a continued stream of objections to the recent log reduction of the chi metric, as presented in the CSI Newsflash thread.

Here is commentator Toronto:

__________

>> ID is qualifying a part of the equation’s terms with subjective observation.

If I do the same to Einstein’s, I might say;

E = MC^2, IF M contains more than 500 electrons,

BUT

E **MIGHT NOT** be equal to MC^2 IF M contains less than 500 electrons

The equation is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.

Dembski claims a mathematical evaluation of information is sufficient for his CSI, but in practice, every attempt at CSI I have seen, requires a unique subjective evaluation of the information in the artifact under study.

The determination of CSI becomes a very small amount of math, coupled with an exhausting study and knowledge of the object itself.>>

_____________

A few thoughts in response:

a –> First, let us remind ourselves of the log reduction itself, starting with Dembski’s 2005 chi expression:

χ = – log2[10^120 · ϕS(T) · P(T|H)] . . . eqn n1

How about this (we are now embarking on an exercise in “open notebook” science):

1 –> 10^120 ~ 2^398

2 –> Following Hartley, we can define Information on a probability metric:

I = – log2(p), in bits . . . eqn n2

3 –> So, writing D2 for ϕS(T), p for P(T|H), Ip = – log2(p) and K2 = log2(D2), we can re-present the Chi-metric:

Chi = – log2(2^398 * D2 * p)  . . .  eqn n3

Chi = Ip – (398 + K2) . . .  eqn n4

4 –> That is, the Dembski CSI Chi-metric is a measure of Information for samples from a target zone T on the presumption of a chance-dominated process, beyond a threshold of at least 398 bits, covering 10^120 possibilities.

5 –> Where also, K2 is a further increment to the threshold that naturally peaks at about 100 further bits . . . . As in (using Chi_500 for VJT’s CSI_lite):

Chi_500 = Ip – 500,  bits beyond the [solar system resources] threshold  . . . eqn n5

Chi_1000 = Ip – 1000, bits beyond the observable cosmos, 125 byte/ 143 ASCII character threshold . . . eqn n6

Chi_1024 = Ip – 1024, bits beyond a 2^10, 128 byte/147 ASCII character version of the threshold in n6, with a config space of 1.80*10^308 possibilities, not 1.07*10^301 . . . eqn n6a . . . .
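As a quick cross-check on those figures (and on the 10^120 ~ 2^398 equivalence used in eqn n1), the powers of two are easily re-expressed as powers of ten. A minimal sketch in Python; this is our own illustration, not part of the original derivation:

import math

# Re-express 2^bits as mantissa * 10^exponent, via log10(2) ~ 0.30103
for bits in (398, 500, 1000, 1024):
    exponent = bits * math.log10(2)
    mantissa = 10 ** (exponent - math.floor(exponent))
    print(f"2^{bits} ~ {mantissa:.2f} * 10^{math.floor(exponent)}")

# Output: 2^398 ~ 6.46 * 10^119 (i.e. ~10^120), 2^500 ~ 3.27 * 10^150,
# 2^1000 ~ 1.07 * 10^301, 2^1024 ~ 1.80 * 10^308

This reproduces the 1.07*10^301 and 1.80*10^308 figures quoted in eqn n6a.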

Using Durston’s Fits from his Table 1, in the Dembski style metric of bits beyond the threshold, and simply setting the threshold at 500 bits:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond  . . . results n7

The two metrics are clearly consistent: one may use the Durston metric as a good measure of the target zone’s actual encoded information content. Table 1 also conveniently reduces this to bits per symbol, so we can see how redundancy affects the information used across the domains of life to achieve a given protein’s function, as opposed to the raw capacity in storage unit bits [= no. of AAs * 4.32 bits/AA on 20 possibilities, as the chain is not particularly constrained].
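Since the reduced metric is simply “functional bits minus threshold,” it is trivial to mechanise. A minimal sketch in Python (the function name and data layout are ours, for illustration), reproducing results n7 and, for later reference, the within-threshold case discussed at point e below:

def chi_500(ip_bits):
    # Reduced Chi metric: functionally specific bits beyond the 500-bit
    # (solar system) threshold; a negative value means within the threshold.
    return ip_bits - 500

# Durston Fit values quoted above from his Table 1
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: Chi = {chi_500(fits)} bits beyond the threshold")

print(f"140 bits (20 ASCII characters): Chi = {chi_500(140)}")  # -360, within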

b –> In short, we are here reducing the explanatory filter to a formula. Once we have specific, observed functional information of Ip bits, and compare it to the threshold set by a sufficiently large configuration space, we may infer that the instance of FSCI (or more broadly CSI) is sufficiently isolated that, given the accessible search resources, it is maximally unlikely that blind chance plus mechanical necessity is its best explanation. Instead, the best, and empirically massively supported, causal explanation is design:

Fig 1: The ID Explanatory Filter

c –> This is especially clear when we use the 1,000 bit threshold, but in fact the “practical” universe we have is our solar system. And so, since the number of Planck-time quantum states of our solar system since the usual date of the big bang is not more than 10^102, something from a config space of 10^150 [500 bits worth of possibilities] is 48 orders of magnitude beyond those search resources.

d –> So, something from a config space of 10^150 or more (500+ functionally specific bits) is, on infinite monkey analysis grounds, comfortably beyond available search resources. 1,000 bits puts it beyond the resources of the observable cosmos:

Fig 2: The Observed Cosmos search window

e –> What the reduced Chi metric is telling us is that if, say, we had 140 functional bits [20 ASCII characters], we would be 360 bits short of the threshold, and in principle a random walk based search could find something like that. The reduced Chi metric does not merely give a verdict; it tells us that we are falling short, and by how much:

Chi_500(140 bits) = 140 – 500 = – 360 specific bits, within the threshold

f –> So, the Chi_500 metric tells us instances of this could happen by chance and trial-and-error testing. Indeed, that is exactly what has happened with random text generation experiments:

One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the “monkeys” typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t” The first 19 letters of this sequence can be found in “The Two Gentlemen of Verona”. Other teams have reproduced 18 characters from “Timon of Athens”, 17 from “Troilus and Cressida”, and 16 from “Richard II”.[20]

A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took “2,737,850 million billion billion billion monkey-years” to reach 24 matching characters:

RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d
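The scale of those random-text results is easily checked. Assuming, for simplicity, uniform random typing over a 27-symbol alphabet (26 letters plus a space; the real simulators used larger character sets, so this understates the difficulty), the expected number of attempts to reproduce the first N characters of a given text grows as 27^N. A minimal sketch:

import math

ALPHABET = 27  # assumption: 26 letters + space

def expected_attempts(n_chars):
    # Expected number of uniform random trials before the first
    # n_chars of a fixed target text are matched.
    return ALPHABET ** n_chars

for n in (19, 24, 72, 143):
    print(f"{n} characters: ~10^{math.log10(expected_attempts(n)):.0f} attempts")

Nineteen characters already demand ~10^27 attempts, the scale of the Oliver result above, while 72 and 143 characters come out at ~10^103 and ~10^205 respectively even on this small alphabet; hence point g below.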

g –> But, 500 bits or 72 ASCII characters, and beyond this 1,000 bits or 143 ASCII characters, are a very different proposition, relative to the search resources of the solar system or the observed cosmos.

h –> That is why, consistently, we observe CSI beyond that threshold [e.g. Toronto’s comment] being produced by intelligence, and ONLY as produced by intelligence.

i –> So, on inference to best empirically warranted explanation, and on infinite monkeys analytical grounds, we have excellent reason to have high confidence that the threshold metric is credible.

j –> As a bonus, we have exposed the strawman suggestion that the Chi metric only applies beyond the threshold. Nope, it applies within the threshold and correctly indicates that something of such an order could come about by chance and necessity within the solar system’s search resources.

k –> Is a threshold metric inherently suspicious? Not at all. In control system studies, for instance, we learn that once you reduce your expression to a transfer function of form

G(s) = [(s – z1)(s – z2) . . . ]/[(s – p1)(s – p2)(s – p3) . . . ]

. . . then, if poles appear in the RH side of the complex s-plane, you have an unstable system.

l –> That is a threshold criterion; and when poles approach the threshold from the LH half-plane, the tendency shows up in the frequency response as detectable peakiness.
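For concreteness, the stability threshold test is a one-liner once the poles are computed. A minimal sketch in Python (the example denominators are invented for illustration):

import numpy as np

def is_unstable(denominator_coeffs):
    # Threshold test: True if any pole (root of the transfer function's
    # denominator polynomial) lies in the right half of the s-plane.
    poles = np.roots(denominator_coeffs)
    return bool(np.any(poles.real > 0))

print(is_unstable([1, 2, 5]))   # s^2 + 2s + 5: poles -1 +/- 2j -> False (stable)
print(is_unstable([1, -2, 5]))  # s^2 - 2s + 5: poles +1 +/- 2j -> True (unstable)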

m –> Is the simplicity of the math in question, in the end [after you have done the hard work of specifying information, and identifying thresholds], suspicious? No, again. For instance, let us compare:

v = i* R

q = v* C

n = sin i/ sin r

F = m*a

F2 = – F1

s = k log W

E = m0*c^2

v = H0D

Ik = – log2 (pk)

E = h*ν – φ

n –> Each of these is elegantly simple, but awesomely powerful; indeed, the last — precisely, a threshold relationship — was a key component of Einstein’s Nobel Prize (Relativity was just plain too controversial). And, once we put them to work in practical, empirical situations, each of them “. . . is no longer purely mathematical but subject to other observations and qualifications that are not mathematical at all.”

(The objection is clearly selectively hyperskeptical. Since when was an expression about an empirical quantity or situation “purely mathematical”? Let’s try another expression:

Y = C + I + G + [X – M].

How are its components measured and/or estimated, and with how much application of judgement calls, including those tracing to GAAP? [Cf discussion here.] Is this expression therefore meaningless and of no utility? What about M*V_T = P_T*T?)

o –> So, what about that horror, the involvement of the semiotic, judging agent as observer, who may even intervene and — shudder — judge? Of course, the observer is a major part of quantum mechanics, to the point where some are tempted to make it into a philosophical position. But the problem starts long before that, e.g. look at the problem of reading a meniscus! (Try, for Hg in glass, and for water in glass — the answers are different and can affect your results.)

Fig 3: Reading a meniscus to obtain volume of a liquid is both subjective and objective (Fair use clipping.)

p –> So, there is nothing in principle or in practice wrong with looking at information, and doing exercises — e.g. see the effect of deliberately injected noise of different levels, or of random variations — to test for specificity. Axe does just this, here, showing the islands of function effect dramatically. Clipping:

. . . if we take perfection to be the standard (i.e., no typos are tolerated) then P has a value of one in 10^60. If we lower the standard by allowing, say, four mutations per string, then mutants like these are considered acceptable:

no biologycaa ioformation by natutal means
no biologicaljinfommation by natcrll means
no biolojjcal information by natiral myans

and if we further lower the standard to accept five mutations, we allow strings like these to pass:

no ziolrgicgl informationpby natural muans
no biilogicab infjrmation by naturalnmaans
no biologilah informazion by n turalimeans

The readability deteriorates quickly, and while we might disagree by one or two mutations as to where we think the line should be drawn, we can all see that it needs to be drawn well below twelve mutations. If we draw the line at four mutations, we find P to have a value of about one in 10^50, whereas if we draw it at five mutations, the P value increases about a thousand-fold, becoming one in 10^47.
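Axe’s figures can be reproduced under the natural reading of his example: a 42-character string (“no biological information by natural means”) over a 27-symbol alphabet (26 letters plus space), with each mutation substituting one of the 26 alternative symbols at a position. A minimal sketch (our reconstruction, not Axe’s own code; it lands within rounding of his quoted values):

from math import comb, log10

LENGTH, SYMBOLS = 42, 27        # "no biological information by natural means"
SPACE = SYMBOLS ** LENGTH       # total configurations, ~10^60

def p_acceptable(max_mutations):
    # Fraction of all strings lying within max_mutations single-character
    # substitutions of the target string.
    hits = sum(comb(LENGTH, k) * (SYMBOLS - 1) ** k
               for k in range(max_mutations + 1))
    return hits / SPACE

for m in (0, 4, 5):
    print(f"up to {m} mutations: P ~ 1 in 10^{-log10(p_acceptable(m)):.0f}")
# Output: 10^60 (perfection), ~10^49-10^50 (four mutations), ~10^47 (five)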

q –> Let us note how — when confronted with the same sort of skepticism regarding the link between information [a “subjective” quantity] and entropy [an “objective” one tabulated in steam tables etc] — Jaynes replied:

“. . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly ‘objective’ quantity . . . it is a function of [those variables] and does not depend on anybody’s personality. There is no reason why it cannot be measured in the laboratory.”

r –> In short, the subjectivity of the investigating observer is not a barrier to the objectivity of the conclusions reached, provided they are warranted on empirical and analytical grounds. Such warrant has been provided for the Chi metric, in reduced form. END

Comments
F/N: A bit of explanation on targetting with servosystems will help. Think about an air-to-air missile like the classic Sidewinder. That the target -- a flying jet usually -- normally moves, does not remove the fact of targetting, and the missile hits as long as it can move fast enough to close to the moving target and as long as it has an oracle -- the IR signal from the jet exhaust being the usual one. Such a missile goes ballistic if it loses lock and is no longer in target-location controlled flight.kairosfocus
May 26, 2011 at 05:33 AM PDT
Hello CannuckianYankee (re: posts 201 and 202). Two good posts: the first one being particularly interesting. Computer simulations of evolution created by intelligent designers are exactly that. They shed absolutely no light whatsoever on how amino acids came to form DNA and how DNA itself evolved through random mutations (let alone how the cell itself evolved). This has to be the starting point of any computer simulation attempting to demonstrate the power of random mutation and natural selection. Evolutionists like "Mathgrrl" take their starting point from Dawkins: those who don't believe in evolution are "ignorant, stupid, or insane, (or wicked, but I’d rather not consider that)." Obviously, we cannot expect any respect or decency from people like that. We can only expect rudeness, evasiveness and double-standards. So the sooner such people withdraw from this debate the better. Fortunately, we have a few serious, courteous opponents who are open-minded and conversant with the facts. More like them please!Chris Doyle
May 26, 2011 at 05:30 AM PDT
CY: Significant. Especially so, since you can see above that I have been made a target of abusive slander, in obvious connexion with the mess that is going on at MF's blog. The slanderer's notion that blocking abusive comments is improper protection and privileging is in turn quite revealing. It seems that we are at a stage where the Alinsky mentality has so pervaded sectors of the public, that they are unable to think that protecting civil discussion towards assessing the warrant of claims is a legitimate act. And, all the time, the slanderer unwittingly reveals just why there is a pattern of evo mat advocates being banned at UD: far too many of them tend to be uncivil and abusive. Now, on the "co-evo" of binding and reception sites, this boils down to, we let the targets wander around a bit, so the negative feedback used to reduce "mistakes" -- i.e. to reduce Hamming distance -- is more of a servo-mechanism than a straight regulator; to use control system terms. All this usually means is the system is inherently more unstable [servos tend to be more headache-y as control systems], as the amount of tweaking we see above supports. (And you have in reserve self-modifying i.e. so-called adaptive control mechanisms.) The fact of targetting -- as Mung documented in so much specific detail -- has not changed. Nor has the basic reality we see: the system is designed, is tuned to produce a particular performance, and profits from injected active information. That is how it beats the search space limits. Intelligent design. GEM of TKIkairosfocus
May 26, 2011 at 05:09 AM PDT
Joseph, "MathGrrl has to be a ruse and this is all a prank. When she spews stuff like:" Just for the record, it appears that MG came here originally as an assignment of some sort from a blog called "In Moderation" ... http://mfinmoderation.wordpress.com/2011/05/07/mathgrrls-csi-thread-cont/#comment-2381 ...hosted by markf. The blog holds a discussion among people have been banned from commenting on UD for one reason or another. Many of them are angry at UD for having placed them in moderation, and the discussion on that blog is almost exclusively centered around UD's moderation policy. There's not much discussion on the merits of either ToE or ID. In those discussions, many of the people who post here have been mentioned - sometimes in slanderous language - but I don't fault markf for that. I've been reading posts there for several weeks, and it appears that some of the comments from markf here are intended to test whether certain things he says will lead to him being moderated. He does not believe that people are moderated due to any particular policy, but based on the emotional whims of the moderators. So I would not be surprised if MG's continuous repetition is the result of an agreed-upon test of our moderation policy among the readers of that blog. If so, the premise of her question is not so much in trying to get answers to a scientific question, but rather to test how far she can go before being moderated, for the purpose of further confirming that moderations are arbitrary and frequent towards dissenting views. This leads to another issue. If MG is posting on a blog for former UD posters of dissenting views, then likely she is one of those former posters and is using another name. I got a hint of that when on the other blog, she erroneously posted under the name of one "Patrick," on 3 recent posts, then after catching herself and saying that she outed herself there, she explained that she was using her father's laptop, and that markf could decide what he was going to do with her 3 posts under that name; which is interesting, since markf apparently doesn't censor anything on that blog.CannuckianYankee
May 26, 2011 at 04:47 AM PDT
MG, "In that comment I make the point that “ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured." Darwinian ToE holds that complex life (now acknowledged as containing highly complex information in the form of DNA), and the required increase in such information, is an accident of chemical and physical processes without intervention from a mind or intelligence of any sort. Thus, evolution did not involve a computer algorithm with a goal to co-evolve, or to assist evolution in any way according to Darwinian evolutionary understandings. One must continually keep this in mind when using computer algorithms to somehow evolve complex information or synthetic organisms. Unfortunately, Darwinists do not appear to keep this in mind. They ignore the very premise they're attempting to confirm. Computer programs, which purport to demonstrate how evolution can produce complex biological information from mere chemical and physical processes, are therefore suspect when there is a "goal" as you say. Evolution supposedly has no goal or "target." Schneider's own language regarding ev is full of indications of a targeted search, as has been pointed out several times. I find it interesting that you keep attempting to drive home a point regarding "rigorous mathematical quantifications" for CSI when the very premise by which Darwinian evolutionists attempt to rigorously quantify evolution - via computer programs that are designed, is suspect right from the very premise of the methodology compared with Darwinian evolution's own definition. The only way a computer program purporting to demonstrate the efficacy of Darwinian evolution by it's own definition could do so, would be for the computers to first of all design and construct themselves, and then to design and construct the programs that demonstrate how it is possible for Darwinian evolution to work. Computer programs are always artificial. The key part of "artificial" is "art." Art is the product of mind and intelligence. Supposedly biology is not artificial, but "natural." It doesn't produce synthetic organisms, but natural organisms. Therefore artificiality can in no way demonstrate natural processes according to Darwinian definitions and understandings of "natural." I have to repeat here what KF pointed out from Dembski in a peer reviewed paper: http://evoinfo.org/papers/vivisection_of_ev.pdf "ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful." The problem with ev is that it is not a blind, unguided search; it is as Dembski states, an algorithm, which exploits sources of knowledge (read: "information") to reach a goal (of increased information), which is then interpreted from a Darwinian standpoint as demonstrating how Darwinian process can achieve a similar increase in information. I don't think anyone's denying that with ev there is an increase in information. What the detractors are saying is that it does so not by Darwinian processes as explained by the Darwinian ToE, but by artificial processes programmed into it by designers. 
Therefore, it is by definition, a targeted search, with the goal to confirm what the programmers already believe about Darwinian processes; and Mung showed how so in his several posts on the matter, and if you read Dembski's entire paper, he demonstrates this empirically. This is also the point of Meyer's 13th Chapter in SITC. Designed evolutionary algorithms are nothing more than an exercise in elusive question-begging and viewpoint confirmation on the part of Darwinian evolutionists. And this recognition is extremely important in relation to your initial question regarding a rigorous mathematical quantification of CSI. And in demonstrating this, those who pointed it out are actually doing you a favor. It appears as though your initial question stems from an assumption that Darwinian processes ARE capable of producing and increasing complex information (so you also require a rigorous mathematical quantification of CSI and you should be thankful that such quantification has been provided in several posts over the last several months). Unfortunately what has been provided does not confirm your worldview. The logical thing to do would be to acknowledge this and move on, rather than attempting to drive home an already well-refuted point. You appear to base this assumption on examples such as the ev algorithm, which have been shown to be counterproductive - well that is assuming you're looking for an honest evaluation of evolution's abilities, and not simply a confirmation for what you already believe.CannuckianYankee
May 26, 2011 at 04:10 AM PDT
kf - True, but you didn't respond to my response at 158 to your reply at 157. I also asked you something (along similar lines) at 187.Heinrich
May 26, 2011 at 01:35 AM PDT
H: I don't know about J's response, but I answered at 157. GEM of TKIkairosfocus
May 26, 2011 at 12:27 AM PDT
Onlookers: It is time to draw some conclusions (some of which, regrettably but needfully, will be painful) on the past several months' worth of exchanges at UD on this general topic. Some of those conclusions -- as just pointed out -- are not happy ones; and, it is to be noted before I go on that this morning I have received a comment elsewhere along the following lines:
[Condescending diminutive of my name] you're a delusional, dishonest, hypocritical, pompous, narcissistic dolt. You're going to get a lot of exposure here: [blog address of an attack blog, communicated to management, UD] Your [homosexual reference] buddies at UD won't be able to protect you there. The truth about you and your insane religious and political agenda will come out for all to see. Consider yourself 'outed'.
This is an example of the turnabout accusation rhetorical attack, crudely slanderously uncivil and self-justifying mentality we unfortunately too often have to deal with on the part of objectors to design thought; here in the crudest form of utterly unwarranted personal insults. Perhaps, too, this commenter needs to know that there are jurisdictions that are applicable (jurisdictions where the US's fatally flawed libel laws do not hold), in which patently false and utterly unwarranted accusations are actionable. And even before we get to the level of action, the notion that "this is not a Sunday School," or the like, is a thinly disguised way of admitting that one is being rude, uncivil and out of order. The red herring, led away to the strawman caricature, and then the pouring on of ad hominems and igniting through incendiary rhetoric, the better to cloud, choke, confuse, poison and polarise the atmosphere, is the strongest proof of a want of basic broughtupcy and of utter want of a serious case on the merits. Such a person should therefore pause and think twice before hitting send, when that message is going to be received in jurisdictions other than what s/he -- most likely, he -- has become used to. (And BTW, if you will take the moment to look above, you will see that when J went overboard above, I corrected him at once. Civility is the first requirement of serious dialogue that moves towards soundness and truth.) A commentator like this -- instead of resorting to abuse and insult -- would better expend his or her energy seriously addressing on the merits the issues here, where I have laid out what serious minded citizens have to think through if they are going to come to grips with origins science and the significance of the dominant a priori evolutionary materialist school of thought for not only the world of thought but for our wider civilisation. People like the just cited, sadly, do not seem to understand the matches they are playing with, or the fires they can set in our civilisation, even though Plato warned in his The Laws, Bk X 2350 years ago as follows:
[[The avant garde philosophers, teachers and artists c. 400 BC] say that the greatest and fairest things are the work of nature and of chance, the lesser of art [[ i.e. techne], which, receiving from nature the greater and primeval creations, moulds and fashions all those lesser works which are generally termed artificial . . . . [[T]hese people would say that the Gods exist not by nature, but by art, and by the laws of states, which are different in different places, according to the agreement of those who make them; and that the honourable is one thing by nature and another thing by law, and that the principles of justice have no existence at all in nature, but that mankind are always disputing about them and altering them; and that the alterations which are made by art and by law have no basis in nature, but are of authority for the moment and at the time at which they are made.- [[Relativism, too, is not new; complete with its radical amorality rooted in a worldview that has no foundational IS that can ground OUGHT. (Cf. here for Locke's views and sources on a very different base for grounding liberty as opposed to license and resulting anarchistic "every man does what is right in his own eyes" chaos leading to tyranny.)] These, my friends, are the sayings of wise men, poets and prose writers, which find a way into the minds of youth. They are told by them that the highest right is might [[ Evolutionary materialism leads to the promotion of amorality], and in this way the young fall into impieties, under the idea that the Gods are not such as the law bids them imagine; and hence arise factions [[Evolutionary materialism-motivated amorality "naturally" leads to continual contentions and power struggles; cf. dramatisation here], these philosophers inviting them to lead a true life according to nature, that is, to live in real dominion over others [[such amoral factions, if they gain power, "naturally" tend towards ruthless tyranny; here, too, Plato hints at the career of Alcibiades], and not in legal subjection to them . . .
In the slightly more sophisticated form of the so-called new/gnu atheists, the same underlying attitude unfortunately still applies: a priori materialists see themselves as the "brights," and any who differ with them are therefore ignorant, stupid, insane or wicked. At the further sophisticated level we have been dealing with for some months now, all of that crudity of thought is fuzzed out by using indirection, allusion and suggestion, rather than direct declaration. That is how for instance MG managed to suggest by citing Galileo's apocryphal "It still moves," that this is a case of religion persecuting science. Somehow, it slipped her attention that no-one is threatening anyone with the thumbscrews here, and if anything it is the Materialist Neo-Magisterium in the Holy Lab Coat that has been persecuting those who it deems heretics in recent years. Similarly, in the eagerness to play the rhetorical game of pushing persuasive talking points through the tactic of drumbeat repetition -- see how easy it is ("nothing wrong with repeating a point over and over again is there . . . ?"), it became all too easy for MG to lose sight of the duties of care to truth, fairness, and reciprocity in a serious discussion. And, in the end, such behaviour becomes subtly willfully deceptive; tantamount to lying. But such a process is so subtle that one may not see what one has actually done; until it is far too late. And that is why the thread above is so subtly painful. Oh, that it had gone in a different path, of genuine exchange of thoughts; as MG et al were invited to, over and over and over, in her case to the point of a guest post at UD. But, day by day, week by week, it became all too plain that the point was to project talking points and play the game of selectively hyperskeptical objection, not to actually engage in genuine exchange of ideas. So, the real bottomline for this thread was laid out in 34 - 35 above, which in the course of all but a fortnight since, MG has plainly been unable to respond. We can therefore freely conclude that -- despite the many talking points to the contrary -- the concept, complex specified information is meaningful and relates to a key challenge in origins science. Secondly, the Chi metric -- as the log reduced form shows -- is based on well accepted information theory concepts, starting with the common basic definition of quantified information, Ik = log (1/pk). It then raises the issue of a threshold sufficient to swamp the search resources of the solar system or the whole cosmos, and in so doing arrives at a highly useful result. Namely, a criterion of difficulty by which sufficiently specific pieces of functionally meaningful information will be so isolated in the space of possible configurations, that it is maximally implausible to try to explain them on chance and/or necessity. This is backed up by the needle in the haystack/infinite monkeys type analysis similar to that used to statistically ground the second law of thermodynamics. Such FSCI, however, is routinely and only observed to be the product of intelligence. And so, we are well warranted to infer from CSI or FSCI as reliable sign to the best, empirically and analytically warranted explanation, design. Never mind the ongoing drumbeat repetition of the many talking point objections to the contrary. (Indeed, we recall here how at a certain point Einstein's theory of Relativity became a subject of ideological objection in his native land. 
At one point, he was subjected to a public meeting with one speaker after another rising to subject the theory to shrill objections. His reply was, that if his theory was false, just one speaker on the merits would have sufficed to overturn it. Likewise, in the face of a cloud of angry mosquitoes tanked up on talking points and spreading them far and wide, we have yet to see that one sound speaker on the merits.) GEM of TKIkairosfocus
May 26, 2011 at 12:16 AM PDT
Joseph - as you're still following this thread, could you answer my comments @152?Heinrich
May 25, 2011 at 09:01 AM PDT
MG: I am finished with trying to answer you on points, as the only result is dismissal and reiteration. The message you have got through at length is that you are so far utterly unresponsive to duties of care about truth, fairness or reciprocity in discussion. Secondarily, after coming on three months now, you show no signs of relevant capacity to handle the concepts and the mathematical reasoning associated with those concepts. That includes your yet unexplained confusion of a log reduction of the Dembski metric with a probability calculation, and your attempt to dismiss the point of the issue of isolation of islands of function in large spaces of possibilities as irrelevant. In addition, as you seem to be an advocate for Schneider, you need to address the case where he tried to "correct" Dembski when the latter used the most common definition of information from my experience, Ik = log(1/pk) = - log pk (which is what I was introduced to in telecomms many years ago now as the main quantification of info, all Dembski has done is to add the criterion of the relevant configs in the string being from a zone of interest, often related to meaning-based, coded function in a system such as in DNA); and did so by trying to substitute a rarer synonym, "surprisal." Also, you need to answer to how Durston et al used their functional state H-metric (based on Shannon's avg info per symbol metric AKA entropy AKA uncertainty) and indicated in their 2007 paper that:
The number of Fits quantifies the degree of algorithmic challenge, in terms of probability [info and probability are closely related], in achieving needed metabolic function. For example, if we find that the Ribosomal S12 protein family has a Fit value of 379, we can use the equations presented thus far to predict that there are about 10^49 different 121-residue sequences that could fall into the Ribosomal S12 family of proteins, resulting in an evolutionary search target of approximately 10^-106 percent of 121-residue sequence space. [notice the use of the concept of an island of function in a space] In general, the higher the Fit value, the more functional information is required to encode the particular function in order to find it in sequence space.
Of course, Dembski's Chi metric in log reduced form [cf original post here and the onward linked thread], shows that an easy way to quantify that search challenge is to use a threshold beyond which sufficiently specific and isolated zones of interest -- notice the case Durston et al cite -- will be maximally hard to find on random walk plus trial and error searches, especially where the zone of interest is based on function, i.e. we have isolated islands of function in vast config spaces beyond the search resources of the observed solar system or cosmos. Which last, you -- as already noted -- tried to dismiss as irrelevant. In short, on the evidence we have in hand, the claim you often make of a lack of adequate warrant for an empirically based mathematical model and metric of an observed phenomenon described in the technical literature at least since Orgel and Wicken in the 1970's, i.e. complex specified information -- the only meaning of lack of rigour that is reasonable [notice your unresponsiveness to 34 - 35 above] -- is a product of your own refusal to engage the key concepts and their roots in standard work in information theory and in light of the infinite monkeys/needle in the haystack type analysis. In further short, the well-warranted conclusion is that you are -- on evidence of coming on three months of attempted discussion in the teeth of drumbeat repetition of a wall of dismissive talking points -- being selectively hyperskeptical and/or willfully obtuse to the point of being willfully defiant and dismissive of what you know or should know. Which, in the context of promoting highly misleading talking points by drumbeat repetition in defiance of repeated correction, is tantamount to making willfully deceptive false claims. To lying, in brutally direct short. (A word I do not like to use, but which -- regrettably -- is looking ever more like the appropriate one.) And I am still deeply offended whenever I recall your snide, atmosphere-poisoning allusion to Galileo's whispered "it still moves" after he was forced to publicly recant by threat of torture. I remind you, that no-one is threatening anyone with torture here, and that if anyone is playing the august magisterium imposing its views by fiat and threats to careers, it is the evolutionary materialist magisterium, as say the recent Gaskell case shows, and earlier ones going back to the likes of Sternberg, Bishop, and Kenyon made all too plain. In short, you have indulged in a turnabout, blame the victim, false accusation. You have some serious explaining and apologising to do, madam. For weeks or months now. I simply point you to 195 just above and the onward links above and in the previous thread. If you are interested in getting serious after coming on three months that is. Good day, madam GEM of TKIkairosfocus
May 25, 2011 at 02:12 AM PDT
Joseph: It is increasingly clear that MG is simply pushing talking points, even in the teeth of patent reality. But this is reality, not one of those comedies where denial denial denial and dismissal, drumbeat fashion can substitute for reality. The episode with the clips in 171 above is perhaps the clearest immediately accessible proof of it. The bottomline is that the -- peer-reviewed, Dec 15, 2010 Bio Complexity 2010(3):1-6. doi:10.5048/BIO-C.2010.3 -- Dembski et al vivisection of ev turns out to be quite correct, despite all dismissals and obfuscations. Abstract:
ev is an evolutionary search algorithm proposed to simulate biological evolution. As such, researchers have claimed that it demonstrates that a blind, unguided search is able to generate new information. However, analysis shows that any non-trivial computer search needs to exploit one or more sources of knowledge to make the search successful. Search algorithms mine active information [f/n 1: "active information is defined as -log2(p/q) where p is the probability of success for an unassisted search and q is the probability of success for an assisted search. Informally, it is the amount of information added to the search that improves the probability of success over the baseline search."] from these resources, with some search algorithms performing better than others. We illustrate these principles in the analysis of ev. The sources of knowledge in ev include a Hamming oracle [f/n 3: "A Hamming oracle uses the Hamming distance (number of bits that differ from a target sequence) as its fitness metric" where from f/n 2: "A software oracle is a software object that answers queries posed to it. In our case, a software oracle is a function that takes in a configuration and returns a value denoting the fitness of that configuration"] and a perceptron structure that predisposes the search towards its target.[nb f/n 8: "Although all 256 positions along the genome [used in ev] are evaluated for errors and contribute to an organism’s fitness, the randomly placed binding sites are restricted to the second half of the genome. In Figure 1 of reference 16 [16. Schneider TD (2000) Evolution of biological information. Nucleic Acids Res 28: 2794-2799. doi:10.1093/nar/28.14.2794], these correspond to bases 126 to 261. There are other nucleotides whose identities are interpreted as weights, window values, or the bias in the construction of the perceptron. Five additional bases are used at the end to accommodate a sliding window used in ev." and f/n 9: "The target binding sites start at location 131 (zero-indexed) in the first Figure of reference 16. Thus, location 10 here corresponds to nucleotide 141"] The original ev uses these resources in an evolutionary algorithm. Although the evolutionary algorithm finds the target, we demonstrate a simple stochastic hill climbing algorithm uses the resources more efficiently.
Let's just say that in the current climate of hostility, Dembski et al would not have been published in such a journal unless their article had serious merit on matters of substance. Mung simply provided clips and comments from Schneider that inadvertently corroborated the point of the critique of ev in the literature. Schneider's race horse page, as the rest of the discussion in the CSI thread will show, is particularly rich in such implicitly telling admissions. Similarly, we again see that MG is unwilling to face and address on the merits the specific challenges to her main claims. Notice, how she is clearly unable and/or unwilling to click on links and address specific points on the merits. Let's repeat, again. First, on CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 5 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered, were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) She knows or should know better than she has acted. GEM of TKIkairosfocus
May 25, 2011 at 01:52 AM PDT
kairosfocus- MathGrrl has to be a ruse and this is all a prank. When she spews stuff like:
The record shows that no ID proponent has provided a rigorous mathematical definition of CSI as described by Dembski
For the record I have to call her a liar- either that or she is purposely obtuse.Joseph
May 24, 2011 at 04:34 PM PDT
kairosfocus,
Simply go up to 171 and look to see the targetting in action for ev.
Your comment 171 shows no such thing. You seem to think that the recognizer or the binding site is some sort of target, but that simply shows confusion about how ev works. As I noted here: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 and very recently requoted to CannuckianYankee, "ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured. The only feedback provided is the number of sites recognized. There is no target for the content of either the binding sites or the recognizers. In fact, the makeup of those parts of the genome will be different in different runs." ev absolutely does not have a target for the solution. Again, if you disagree, please identify the target either in the ev paper or in the Evj source code.MathGrrl
May 24, 2011 at 04:14 PM PDT
Onlookers: Notice how I am now repeating the links to the answers that MG has studiously avoided for ten or more days now, just in this thread, including the stunt of looking only at comments from 61 or so on, when the links went to comments above her artfully chosen cutoff. And this is just for this thread; she has studiously been unresponsive to cogent answers for over two months now, in thread after thread. GEM of TKIkairosfocus
May 24, 2011 at 04:08 PM PDT
Onlookers: Simply go up to 171 and look to see the targetting in action for ev. As for "tweaking," the clip in 171 shows it for what it is, fine-tuning to achieve intelligently designed purposeful performance. The sad joke is that after composing the program, fine tuning for hitting targets measured with Hamming distances (number of "mistakes" -- digital values to change to transform one point to another in a digital space -- is a Hamming distance metric by another name) and more, Schneider imagines that his program is a model of blind watchmaker chance variation plus natural selection creating macroevo. The creation of Shannon info as such is no big deal: tossing a coin at random will create what can be quantified in a Shannon metric as information. The real challenge is to create FSCI beyond the threshold, without intelligent direction, and that is precisely the problem with Schneider's ev and the exact significance of the targetting, tuning and selection of nice trendy fitness functions that give rise to hill climbing. Again and again MG et al fail or refuse to see that the real issue is not hill-climbing within an island of function (micro-evo in effect) but getting to shores of islands of function in large config spaces. And meanwhile it still remains the case that on CSI and its "rigour," that has been addressed over and over again, in most specificity to the issue of rigour, at 34 - 5 above. Similarly, the talking points MG tends to use over and over as though they have not been cogently answered, were last dissected in 23 - 24 above. And, the overall summing up of the issues MG has needed to explain herself on has been kept up in the editorial response to Graham at no 1 in the CSI newsflash thread; which MG has persistently ignored. When it comes to ev, 137 above shows my links to the places in the CSI Newsflash thread where it is dissected by Mung. (One of MG's tactics seems to be to wait until something is buried under enough posts in a thread, or has been continued in a successor thread, before repeating the assertion that was rebutted.) MG is studiously ignoring the fact that her favourite talking point has been more than adequately answered, over and over again. Which is actually quite rude or uncivil, just as CY pointed out. She knows or should know better than she has acted and written. GEM of TKIkairosfocus
May 24, 2011 at 04:00 PM PDT
kairosfocus, Your issues with the "tweaking" of parameters by Schneider to beat Dembski's UPB is addressed here: https://uncommondescent.com/intelligent-design/news-flash-dembskis-csi-caught-in-the-act/#comment-378783 The relevant paragraph is:
The second point is the discussion of Schneider's "horserace" to beat the UPB. You both make a big issue about Schneider tweaking the parameters of the simulation, population size and mutation rate in particular, but you don't discuss the fact that, once the parameters are set, a small subset of known evolutionary mechanisms does generate Shannon information. This goes back to my discussion with gpuccio on Mark Frank's blog where we touched on the ability of evolutionary mechanisms to result in populations that are better suited to their environment than were their parent populations. That, in turn, suggests that, while it might be possible to make a case for cosmological ID, there is no need to posit the involvement of intelligent agency in biology.
MathGrrl
May 24, 2011 at 03:49 PM PDT
CannuckianYankee, On a separate point....
I sense that civility is waning with your recent repetitions. Repetition can be uncivil when it doesn’t respect the fact that a question was answered with careful patience and knowledge-based insight.
Your assumption is incorrect, hence your conclusion does not follow. I am continuing to ask for a rigorous mathematical definition of CSI, as described by Dembski, and a detailed example calculation because neither have yet been provided. Perhaps you would care to answer the questions I posed to kairosfocus in my comment 59 of this thread? Here it is again for your convenience:
I have read through all of your responses since my comment numbered 60 in this thread and have yet to see you address the two very simple questions I've asked. Let's try to make some progress by breaking this down into simple questions that can be answered succinctly. First, you repeatedly claim that CSI has been rigorously defined mathematically, but nowhere do you provide that rigorous mathematical definition. You could eliminate the need for your assertions by simply reproducing the definition here in this thread, in a single comment without any extraneous material. Could you please do so? Second, you have yet to reply to my question in comment 59:
CSI, I have explicitly said, many times, is a descriptive concept that describes an observed fact
By this, are you asserting that it is not possible to provide a mathematically rigorous definition of CSI, even in principle? If your answer is yes, I think you have a disagreement with some of your fellow ID proponents. If your answer is no, could you please simply state the mathematically rigorous definition of CSI, as described by Dembski, in a single, stand alone comment, without myriad tangential points, postscripts, and footnotes? It would go a long way to clarifying your position.
With these two questions answered, again as succinctly as possible, I believe we can make some progress in the discussion. Are you willing to work with me on this?
Since you are claiming that I am continuing to ask questions that have already been answered, I presume that it is not a problem for you to reproduce those answers in response to this comment.MathGrrl
May 24, 2011 at 03:49 PM PDT
CannuckianYankee, Welcome to the discussion!
I really want to address one thing to MathGrrl: What is your criteria for determining that the ev program does not involve a targeted search? I think this is really key to one of the main disagreements here. So far I’ve only seen you assert that it does not, . . .
You must have missed the two comments I referenced above, this one in particular: https://mfinmoderation.wordpress.com/2011/03/14/mathgrrls-csi-thread/#comment-1858 In that comment I make the point that "ev has a goal of co-evolving binding sites and their recognizers so that the Shannon information in the binding sites can be measured. The only feedback provided is the number of sites recognized. There is no target for the content of either the binding sites or the recognizers. In fact, the makeup of those parts of the genome will be different in different runs."MathGrrl
May 24, 2011 at 03:47 PM PDT
kairosfocus,
Similarly, Mung is not speculating, he gave citations from the text by Schneider (which we can all follow up), and I was able to confirm some of the key points through my own clips from Schneider.
I see that you continued your discussion of ev later in comment 171, but did not identify any target in your discussion there, despite quoting from the ev paper. This is a simple issue to resolve. If you believe that ev can be modeled as a targeted search, please identify the target either in the ev paper or in the Evj source code.MathGrrl
May 24, 2011 at 03:47 PM PDT
And I am not saying “i don’t need to calculate,” I am saying we have empirical data in hand on the matter that tells us the sort of order we are looking at, and that this is consistent with what common sense would have told us
Where is this empirical data? How is the resemblance of a portrait to a face objectively measured? IOW, how do you make the specification?Heinrich
May 24, 2011 at 08:56 AM PDT
Dr Bot Please, try not to twist what I actually said, which was that the specification of a portrait -- not some vague resemblance like burn marks on toast can yield or the like -- will require sufficient complexity and specificity of information that it will not be achieved by chance and necessity on the gamut of our cosmos, with so high a degree of confidence that it is practically certain; similar to other cases of FSCI. The evidence -- as actually cited from those who do this sort of thing professionally -- is the required info for a sculptural portrait is of the order of Mbits of info. (And I am not saying "I don't need to calculate," I am saying we have empirical data in hand on the matter that tells us the sort of order we are looking at, and that this is consistent with what common sense would have told us.) As I read your string of one objection after another, I keep getting the feeling that you are twisting me into pretzels to try to fit some strawman ignoramus. Now, when you say many natural phenomena will not be found on a random walk, in part my answer is of course, e.g. the DNA and the machinery to put it to work in the living cell. Such cells may be self-replicating, but their origin seems to be intelligent, per the basic point highlighted 200 years ago by Paley in Ch II on the self-replicating watch that is seldom mentioned when objectors hastily dismiss his watch argument in Ch 1. Namely, when we see intricate machinery that does a job and then has the additional -- additional-ity is crucial here -- provisions that make it self-replicating, then that is a further reason to infer design. In other cases, what you are suggesting is of the order that if one sets up a given outcome, and then hopes to replicate it by chance and necessity, then that is unlikely on the gamut of the cosmos. For example if 200 dice are tossed and the record of the toss is kept, the exact pattern is unlikely to recur in the history of the cosmos. That would be because the first toss has been turned into a specification, of a very narrow cluster of possibilities. Each possibility is equiprobable, but the cluster of at-random tosses that are in no particular order so outweighs the one you are interested in that to find it a second time would be a practical impossibility. This is similar to how the same dice reading all 1's would be a practical impossibility on the gamut of the observed cosmos, from chance and/or the necessity of falling then tumbling and settling. If you see 200 dice reading all 1's the best bet is that this was by design. This is similar to the thermodynamic result and reasoning that explains how the O2 molecules in the room where you sit could with equal probability be in any one possible outcome as any other one. But, the ones where all the O2 molecules are clumped to one end of the room are so utterly outweighed by the numbers where they are more or less evenly scattered, that we will reliably see the latter not the former. Indeed, if you see a room that has the O2 molecules clumped like that, it is almost certainly by design, even if we do not know how that were done. So, the attempt to dismiss the needle in haystack and infinite monkeys illustrations, fails. BTW, the IM example was formerly advanced quite frequently by advocates for chance + necessity to yield OOL and evolution, including online. Of course Weasel type arguments tried to weight the case as though the fitness function did not have to address seas of non-function and isolated islands of function.
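(The 200-dice illustration above is quickly quantified; a minimal sketch in Python, with the obvious parameters:

import math

dice, faces = 200, 6
bits = dice * math.log2(faces)     # info needed to specify one exact toss
orders = dice * math.log10(faces)  # size of the config space, as 10^orders
print(f"~{bits:.0f} bits; config space ~10^{orders:.0f}")  # ~517 bits; ~10^156

That is, a recorded toss of 200 dice specifies one of ~10^156 configurations, about 517 bits -- just past the 500-bit threshold discussed in the original post.)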
But that too, the islands of function point, is overwhelmingly reasonable, on many grounds, starting with what is needed, per observation, to get codes and algorithms. It is only now that we have shown what is being suggested that this has been abandoned; it has been turned instead into an attempt to suggest that it is somehow a strawman misrepresentation to point out that chance -- which has to be the source of variation (contingency) in the Darwin-type model -- is not viable. But in fact the natural selection half is a description that some variations will do worse than others and will be culled out over time. The variations have to come from chance processes, at least if you are a Darwinist. NS may explain survival of the fittest, but it does not explain the arrival of the fittest, this last being understood as reproductive advantage. I suggest you read App 1 point 6 here to see the point on macro vs micro states and relative statistical weight. GEM of TKIkairosfocus
May 23, 2011 at 11:33 AM PDT
You are not my teacher and I am not a lazy student ducking on an assignment.
So you claim that a 3d shape that resembles a real person cannot ever exist without design, and that it would contain > 1000 bits of functional information but you don't need to do any calculations to know this is true. Fair enough!
Further to all this, the point of the 500 – 1,000 bit threshold for FSCI is precisely that the quantum state Planck time resources of our solar system and of our observed cosmos beyond that, would be an impossibly small fraction of what would be required to search out a reasonable fraction of configs to reasonably expect to arrive at the relevant island of function on random walks plus trial and error. That has been pointed out in detail over and over again, including the point that the relevant scale of interaction, chemical interaction, takes up ~ 10^30 P-times for the fastest (ionic) interactions.
All true, and if you use the same criteria to judge many complex but natural phenomena you find that a random walk will not stand a chance of finding them. Your arguments are, and always have been, based on flawed reasoning but I guess I only have myself to blame for failing to educate you in this matter. Infinite monkeys will not produce lots of things observed to be the products of natural forces. It is a straw-man argument.DrBot
May 23, 2011 at 06:37 AM PDT
Dr Bot: You are not my teacher and I am not a lazy student ducking on an assignment. Right from the beginning, the link I gave on the nodes-arcs approach has in it an onward link on 3-d modelling, which was actually a supplemental for teaching math in high school. You will find in it a report on the typical sort of scope of information used in sculptural 3-d models, and it is as I have reported. Let me clip a relevant paragraph:
To get a sculptural face that looks closely like that of George Washington or Nefertiti [i.e. we have defined a specific function], a dense network of quite precisely located points has to be set up, so that a smooth, accurate portrait can be made. [By contrast, Old Man of the Mountain or anything reasonably close would be recognisable as somewhat face-like, and would be "acceptable"; so it is not anywhere nearly so tightly specified. That's why, with a spot of imagination, one can easily see face-like figures in wood paneling, clouds in the sky, and brown marks on toast.]
The first link in that paragraph in its original location goes here, to the referenced Math supplement note. Clipping:
Often the first step in creating the life-like computer generated characters we are now so used to in the movies — such as King Kong, Iron Man, WALL-E and Gollum — is for an artist to produce a highly detailed physical sculpture of the creature, just like the ones that now decorate Dench's office. Once the studio is happy that the creature looks just right, a 3D scanner is used to produce a highly detailed three-dimensional digital model of the object that can then be manipulated by animators on a computer. A 3D scanner shines a line of red laser light onto the object's surface, and a camera records the profile of the surface where the line of light falls. The position and direction of the laser and the camera lens are known, hence it is possible to calculate the position of each point on the surface highlighted by the laser (a unique triangle is formed by the point on the surface, the laser and camera, of which the length of one side and two angles — the orientation and distance between the laser and camera — are known). The three-dimensional coordinates of each point are stored digitally, building up an intricate mesh made from triangular faces that mimics the surface of the real object. The resulting digital model is amazingly realistic — you almost forget that you are looking at a two-dimensional screen, and particularly that you are looking at a surface entirely made of flat triangles. The life-like quality comes from the massive amount of detail: a 3D scan can produce a model with as many as six million triangles making up the surface. The resulting model can be viewed on the computer screen either as a wire frame, or more realistically with each flat face shaded as it would be in real three-dimensional life . . .
I therefore find your latest objection annoyingly repetitive and stubborn in the teeth of already provided and reasonable information. Failure to do due diligence before objecting on your part does not constitute failure to warrant claims on mine.

Further to all this, the point of the 500 - 1,000 bit threshold for FSCI is precisely that the quantum state Planck time resources of our solar system and of our observed cosmos beyond that, would be an impossibly small fraction of what would be required to search out a reasonable fraction of configs to reasonably expect to arrive at the relevant island of function on random walks plus trial and error. That has been pointed out in detail over and over again, including the point that the relevant scale of interaction, chemical interaction, takes up ~ 10^30 P-times for the fastest (ionic) interactions. The objections are looking ever more selectively hyperskeptical. GEM of TKI
kairosfocus
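PS: For those who wish to check the orders of magnitude, a rough Python sketch follows (the 16 bits per coordinate is an assumed precision for illustration; the 10^57 atoms and 10^17 s figures are the usual solar-system scale estimates):

import math

# (a) Info capacity of a scanned portrait mesh (~6 million triangles,
# per the clipped article); vertices ~ triangles/2 for a closed mesh,
# with x, y, z each at an ASSUMED 16 bits per coordinate.
triangles = 6_000_000
vertices = triangles // 2
mesh_bits = vertices * 3 * 16
print(f"Mesh capacity ~ {mesh_bits / 1e6:.0f} Mbits")      # ~144 Mbits

# (b) Planck-time quantum-state resources of our solar system.
atoms = 1e57          # ~atoms in the solar system (standard estimate)
age = 1e17            # ~seconds to date
t_planck = 5.39e-44   # Planck time in seconds
states = atoms * age / t_planck
print(f"States ~ 10^{math.log10(states):.0f}")             # ~10^117
print(f"2^500 ~ 10^{500 * math.log10(2):.1f}")             # ~10^150.5

# Fastest (ionic) chemical interactions take ~10^30 Planck times,
# so chemically relevant events are fewer still:
print(f"Chemical-scale events ~ 10^{math.log10(states / 1e30):.0f}")   # ~10^87

Even on this generous count, the accessible states fall short of the 2^500 config-space scale by more than thirty orders of magnitude, while a portrait-grade mesh at ~10^8 bits dwarfs the threshold itself.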
May 23, 2011, 05:59 AM PDT
F/N 3: The key significance of this is that -- cf. here, including the video clip -- the DNA information is transferred to mRNA as a template, and in the ribosome the anticodons key-lock fit -- i.e. this is closely related to a sculpture -- to attach successive coded-for AA's, at their opposite ends, to the growing protein. At a typical length of 300 AA's, we are looking at 1,800 bits of digital info storage capacity expressed sculpturally (as did von Neumann's kinematic replicator). This is of course well beyond the 1,000 bit threshold, and there are thousands of proteins involved in typical cell-based life.
kairosfocus
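PS: The raw-capacity arithmetic can be checked in a few lines of Python (a minimal sketch; 3 bases per codon and the 20-state AA alphabet are the standard figures):

import math

aa_len = 300                              # typical protein length in AA's
bits_dna = 3 * aa_len * math.log2(4)      # 3 bases per codon, 4-state bases
bits_protein = aa_len * math.log2(20)     # 20-state amino acid alphabet
print(f"DNA/mRNA capacity: {bits_dna:.0f} bits")            # 1800 bits
print(f"Protein-level capacity: {bits_protein:.0f} bits")   # ~1297 bits
print(f"Beyond the 1000-bit threshold by: {bits_dna - 1000:.0f} bits")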
May 23, 2011, 05:28 AM PDT
KF: You are ducking the point that 500 bits is a practical upper limit for the nodes and arcs pattern (for chance to be a credible explanation), and we can use this objectively and quantitatively as I did. An acceptable sculptural portrait will normally require much more than 500 - 1,000 bits of specific info as assessed by the nodes and arcs method.
As my math teacher would say: Show me your working out! Remember, when we are talking about the subjective notion of a likeness and calculating probabilities of it occurring by natural forces, you don't want to limit yourself to one single example (Lincoln). How do the numbers work out for an object looking like any particular individual who exists, or who used to exist?
KF: If you doubt me on this, show a case of such a tree, or swirls in wood, or a cloud shape, or burn marks on toast, etc. that produces a sculptural, realistically detailed, accurate portrait of Lincoln.
In order to test your claim, I need to survey the entire universe, including viewing all transient phenomena from all viewing angles?
DrBot
May 23, 2011, 05:11 AM PDT
F/N 2: Observe carefully as well: you are strawmannising in order to set up a selectively hyperskeptical objection, as I am speaking of a sculptural, realistic portrait, the particular context of Mt Rushmore. A lot of things may vaguely look like Lincoln, and be within the range of information that is reachable on chance, e.g. marks in bark on a tree. If you doubt me on this, show a case of such a tree, or swirls in wood, or a cloud shape, or burn marks on toast, etc. that produces a sculptural, realistically detailed, accurate portrait of Lincoln.
kairosfocus
May 23, 2011, 05:00 AM PDT
F/N: The Lincoln case is in a context, and there is a photograph that is the more or less standard reference, both for the Mt Rushmore statue and the US penny. You are ducking the point that 500 bits is a practical upper limit for the nodes and arcs pattern (for chance to be a credible explanation), and we can use this objectively and quantitatively as I did. An acceptable sculptural portrait will normally require much more than 500 - 1,000 bits of specific info as assessed by the nodes and arcs method.
kairosfocus
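PS: A crude node-budget sketch in Python shows why the 500 - 1,000 bit limit is so restrictive for a nodes-and-arcs portrait (the 10 bits per coordinate is an assumed precision, for illustration only):

# Each node: a 3-D point at an ASSUMED 10 bits per coordinate
# (a 1024-step grid per axis), i.e. 30 bits per node.
bits_per_node = 3 * 10

print(500 // bits_per_node)      # 16 nodes fit within 500 bits
print(1000 // bits_per_node)     # 33 nodes fit within 1000 bits

# A dense, accurate portrait mesh runs to millions of points
# (cf. the 3-D scanning clip above):
portrait_nodes = 3_000_000
print(f"Portrait mesh ~ {portrait_nodes * bits_per_node / 1e6:.0f} Mbits")  # ~90 Mbits

A few dozen coarse points cannot specify a recognisable likeness; an acceptable portrait therefore sits far beyond what chance can reach.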
May 23, 2011, 04:55 AM PDT
And (post ferry trip no. 1 for the morning): your second red herring notwithstanding, the meniscus example is a typical case of how subjectivity and objectivity interact in scientific work. By the time of my N2 Chem course, it was routine for us to be able to read an end-point reliably to within one drop in 25 ml, i.e. within a few parts per thousand. The rule was to do three runs and average; we often were able to get the same value for volume on each run. That includes the dummy variable, colour change -- a subjective judgement with an objective basis, again.

And the point is that judging that one is at the correct eye level to read the volume of the pipette and the burette -- for the latter, a start point and an end point that had to be subtracted -- was a skill, exercised through judgement, but one that yielded objectively reliable and accurate results. Subjectivity and objectivity are not opposites, and both are routinely involved in scientific measurements and related mathematical models and analyses, in contexts that are often quite momentous, including life and death. Also, you may want to see the related discussion here on the Glasgow Coma Scale. I repeat: subjectivity and objectivity are not opposites, and many subjective things can be reliable and quantitative, on an appropriate scale. GEM of TKI
kairosfocus
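PS: For concreteness, the burette arithmetic in a short Python sketch (the readings are hypothetical, and one drop is taken as ~0.05 ml):

# Three runs; each titre = end point - start point, then average.
# Readings are HYPOTHETICAL; one drop is taken as ~0.05 ml.
runs = [(0.00, 24.95), (0.10, 25.10), (0.05, 25.00)]   # (start, end) in ml
titres = [end - start for start, end in runs]
mean_titre = sum(titres) / len(titres)

drop = 0.05                       # ml, assumed drop volume
rel = drop / mean_titre           # relative resolution
print(f"Mean titre: {mean_titre:.2f} ml")
print(f"Resolution ~ {rel * 1000:.0f} parts per thousand")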
May 23, 2011, 04:50 AM PDT
KF:
Did you pay attention to the nodes, arcs and interfaces approach that you commented on previously?
Yes. If you use that as the basis for a measure, then the measure will depend on Lincoln's age and expression. What degree of accuracy are you after -- each wrinkle? How would that work for a caricature? People would see the likeness, wouldn't they, but is there an objective measure? Try a different approach: if you spend your life rearing pigs, you will typically be able to tell the different pigs apart, and even recognise a portrait as representing a distinct pig. If someone else looked at the portrait, and your pigs, they wouldn't be able to differentiate. The problem is that ultimately what it comes down to is that you are claiming that there can never, anywhere in the universe, be an object, of any scale, that some people would regard as looking like Lincoln's face.
DrBot
May 23, 2011, 04:30 AM PDT
Dr Bot: Did you pay attention to the nodes, arcs and interfaces approach that you commented on previously? GEM of TKI
kairosfocus
May 23, 2011, 04:23 AM PDT