
Questioning The Role Of Gene Duplication-Based Evolution In Monarch Migration


Each year about 100 million Monarch butterflies from Canada and the northeastern United States make their journey to Mexico’s Sierra Madre mountains in an astonishing two-month-long migration (Ref 1).  They fly 2,500 miles to a remote area only 60 square miles in size (Ref 1).  No one fully understands what triggers this mass movement of Lepidopterans, but there is no getting away from the fact that it is a phenomenon that, as one review summed up, “staggers the mind”, especially when one considers that these butterflies are freshly hatched (Ref 1).  In short, Monarch migrants are always “on their maiden voyage” (Ref 2).  The location they fly to is home to a forest of broad-trunked trees that effectively retain warmth and keep out rain, factors that are essential for the Monarchs’ survival (Ref 1).
 
With a four-inch wingspan and a weight of less than a fifth of an ounce, it is remarkable that the Monarchs survive the odyssey (Ref 1).  Making frequent stops for nectar and water, they fly approximately 50 miles a day while avoiding all manner of predators.  Rapidly shifting winds over the Great Lakes and scorching desert temperatures in the southern states present formidable obstacles (Ref 1).  Nevertheless, the Monarchs’ finely tuned sense of direction gets most of them across.
 
It was not until 1975 that scientists first uncovered the full extent of the Monarch’s migration (Ref 1).  What has become clear since then is that only Monarchs travel such distances to avoid the “certain death of a cold winter”.  According to University of Toronto zoologist David Gibo, soaring is the key to making it to Mexico; flapping is about the most energy-inefficient way for a butterfly to get anywhere (Ref 1).  Other aspects of the Monarch’s migration-linked behavior, such as the reproductive diapause that halts energy-draining reproductive activity during the journey, continue to fascinate scientists worldwide (Ref 2).  Both diapause and the six-month longevity characteristic of Monarchs are caused by decreased levels of juvenile hormone, which is itself regulated by four genes (Ref 2).
 
Exactly how Monarchs navigate so precisely to such a specific location is a subject of intense debate.  One theory suggests that they respond to the sun’s position, another that they are somehow sensitive to the Earth’s magnetic field (Ref 1).  Recent molecular studies have shown that Monarchs have specialized cells in their brains that regulate their daily ‘clock’ and help keep them on course (Ref 3).  Biologist Chip Taylor of the University of Kansas has carried out remarkable tagging experiments demonstrating that even if Monarchs are moved to different locations during their journey south, they are still able to re-orient themselves and continue on to their final destination (Ref 1).
 
A study headed by Steven Reppert at the University of Massachusetts has elucidated much of the biological basis of the timing component of Monarch migration (Ref 3).  Through a process known as time-compensated sun compass orientation, proteins with names such as Period, Timeless, Cryptochrome 1 and Cryptochrome 2 provide Monarchs with a well-regulated responsiveness to light during both day and night (Ref 3).  While Cryptochrome 1 is a photoreceptor that responds specifically to blue light, Cryptochrome 2 is a repressor of transcription, regulating the period and timeless genes over the course of a 24-hour light cycle (Ref 3).  Investigations using Monarch heads have not only provided exquisite detail of the daily, light-dependent oscillations in the levels of these proteins but have also revealed a ‘complex relationship’ of molecular events.
 
Indeed, the activities of both Cryptochrome 2 and Timeless are intertwined with those of at least two other timing proteins called Clock and Cycle (Ref 3).  Preliminary results suggest that Period, Timeless and Cryptochrome 2 form a large protein complex, with Cryptochrome 2 acting as a repressor of Clock- and Cycle-mediated transcription.  Cryptochrome 2 is also intimately involved with an area of the Monarch’s brain called the central complex, which likely houses the light-dependent ‘sun compass’ so critical for accurate navigation (Ref 3).
 
Reppert’s team has speculated that the Monarch’s dual-Cryptochrome light-response system gave rise to the single-Cryptochrome systems found in other insects through a hypothetical gene-loss event (Ref 3).  Furthermore, they have suggested that the dual-Cryptochrome system itself arose through the duplication of an ancestral gene (Ref 3).  Biologist Christopher Wills wrote of gene duplication as a ‘rare occurrence’ in which “an extra copy of a gene gets placed elsewhere in the genome” (Ref 4, p. 95).  Seen from an evolutionary perspective, these two gene copies are then “free to evolve separately…shaped by selection and chance to take on different tasks” (Ref 4, p. 95).
 
While experiments have shown that transgenic Monarch Cryptochrome 1 can rescue Cryptochrome deficiency in other insects such as fruit flies, what remains elusive is how exactly gene duplication could have led to two proteins with such widely differing functions as those found in the two Monarch Cryptochromes.  Indeed, biochemist Michael Behe has been instrumental in revealing the explanatory insufficiency of terms such as gene duplication and genetic shuffling within the context of molecular evolution.  As Behe expounded:
 
“The hypothesis of gene duplication and shuffling says nothing about how any particular protein or protein system was first produced- whether slowly or suddenly, or whether by natural selection or some other mechanism….. In order to say that a system developed gradually by a Darwinian mechanism a person must show that the function of the system could “have formed by numerous, successive slight modifications”…If a factory for making bicycles were duplicated it would make bicycles, not motorcycles; that’s what is meant by the word duplication.  A gene for a protein might be duplicated by a random mutation, but it does not just “happen” to also have sophisticated new properties” (Ref 5, pp.90, 94).
 
When it comes to supplying a plausible mechanism for how gene duplication and subsequent natural selection led to two distinctly functioning Cryptochromes, and how these then became integrated with other time-regulatory proteins in Monarch brains, there is a noticeable absence of detail.  Each successive slight modification of a duplicated gene would have had to confer an advantage for selection and chance to get anywhere.  Furthermore, the newly duplicated Cryptochrome would have had to become successfully incorporated into a novel scheme of daylight processing for migration patterns to begin.
 
Evolutionary biology must move beyond its hand-waving generalizations if it is to truly gain the title of a rigorous scientific discipline.  In the meantime, protein systems such as the Monarch’s Cryptochromes will continue to challenge what we claim to know about evolutionary origins.
     
References
1. NOVA: The Incredible Journey of the Butterflies, aired on PBS, 27 January 2009. See http://www.pbs.org/wgbh/nova/butterflies/program.html
 
2. Haisun Zhu, Amy Casselman, Steven M. Reppert (2008), Chasing Migration Genes: A Brain Expressed Sequence Tag Resource for Summer and Migratory Monarch Butterflies (Danaus plexippus), PLoS ONE, Volume 3 (1), p. e1345
 
3. Haisun Zhu, Ivo Sauman, Quan Yuan, Amy Casselman, Myai Emery-Le, Patrick Emery, Steven M. Reppert (2008), Cryptochromes Define a Novel Circadian Clock Mechanism in Monarch Butterflies That May Underlie Sun Compass Navigation, PLoS Biology, Volume 6 (1), pp. 0138-0155
 
4. Christopher Wills (1991), Exons, Introns & Talking Genes: The Science Behind the Human Genome Project, Oxford University Press, Oxford, UK
 
5. Michael Behe (1996), Darwin’s Black Box: The Biochemical Challenge to Evolution, Touchstone/Simon & Schuster, New York

 

Copyright (c) Robert Deyes, 2009

Comments
JT, I never meant to imply that you are splitting hairs over "determinism vs. non-determinism." That is a very interesting question, yet it has no bearing on the fundamentals of ID Theory. I explained this in comments #136-138, where I concluded with a basic summary of the key issue: "foresight is foresight whether it is determined to use its foresight in a certain way or not; whether it is determined to exist or not. We experience and use our foresight every day, thus we know it exists whether we have free will or not; whether the universe has a deterministic structure or not. Have you ever used a map and your foresight to plan a route to a future destination you wished to travel to?" As I stated, I can't continue this discussion until you answer that very simple question, since it is the type of question which begins the foray into ID Theory.
CJYman
March 10, 2009, 10:23 AM PDT
JT: A few correctives: 1] 141: On the concept of FSCI, I believe that all Durston has done would be to come up with a definition of it such that it can be shown to be at high levels in DNA and programs written by humans, but not elsewhere. So in essence he’s formalized what is already intuively obvious to everyone, that there is something different about biology. But he has not attempted to show what the probablity is of getting a binary string with FCSI by blind chance. First, if you look at the paper, you will see that his formalisation is QUANTITATIVE to the point where he has published 35 measured values for proteins and related molecules. Second, he has done so by doing studies ont eh actual observed patterns of variations in key proteins, i.e he is doing a frequentist, a posteriori probability measure, and using that to assess the information per symbol. In so doing, he is also looking at the config space as a whole. His metrics are information theoretic ones, starting with H, and extend to any other cases of digital information where there are islands of observed function and variations within the islands. Codes with self-corrective redundancies in them try simple "three-peat," majority vote codes as a simple case] are an obvious parallel case. 2] 143: “Qualitative” is a keyword indicating that an idea is not fully formed, i.e. subjective. Note that Durston’s own concept of FSCI is quantitative, i.e. they provide a specific method for measuring something, that is a reason they have a paper. You don’t write a scientific paper to address something you can’t quantify. (This is not to imply of course that being quantitative is enough to establish validity or utility on its own.) Multiple misconcepts. First, when we measure "how much," there is an implicit "of what?" in it. That is, quality and quantity are inseparable. What we do is we identify something then set up a "yardstick" for it then apply a scale to "quantify": RION -- ratio, interval, ordinal, nominal [i.e. state]. When Trevors and Abel identified and actually generated a 3-d graphical scale showing the contrasts between orderly, random and functional sequence complexity, they discussed the matter in both qualitative and quantitative terms. For instance, Shannon type metrics will scale OSC as low on complexity, wil give a rather high value to a random sequence, and will give a somewhat lower value to a functional one. This last, because it embeds some redundancy. [Recall, the Shannon metric on h gives a weighted sum on frequency of appearance of symbols, not a flat random distribution. So, it measures redundancy and patterns, giving random sequences -- thus very aperiodic -- high values,a nd giving ordered determined strings low info content. Redundancy as a key fact of life in a noisy world makes functional strings more redundant than random ones, but to contain info they must have a lot of aperiodicity.]] Similarly, Random sequences are not K-compressible. [you have to read out the exact lottery winning combination, as it is not compressible into a simpler descriptive format.] Finally, functional sequences will have: function, which is recognisable. In context, T & A et al have focussed on algorithmic functionality. Structural functionality is also a possibility: wings will not work if they are just any shape, though symmetric wings fly very well thank you contrasting the simplistic explanation that omits the circulation issue . . . angle of attack my friends, angle of attack]. 
Durston then built on this by providing a specific metric for FSC, based on empirics. 3] 142: he's [Durston] claiming to be able to distinguish "order", randomness and functional info - there's nothing there about the probability of FCSI. JT, the basic metric being used is H = - SUM pi log2 pi. This is a standard info-theoretic metric and embeds probability. Further, the context of the discussion has to do with empirical islands of function vs the config space as a whole. Once you speak of bits, you speak of probability. Like it or lump it, info in bits is based on probability, and lurking behind it is statistical-form thermodynamics. (A short numerical sketch of this metric follows at the end of this comment.) 4] 144: nothing in his paper is intended to demonstrate that chance and law cannot generate function. He assumes that going in. First, you are missing the point of the cited remark:
As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5].
In short, and ever since Orgel in 1973, we have known that informational molecules in biology are FUNCTIONAL, and that this is distinct from the characteristics of orderly and random sequences: functional, specified complexity, not random complexity. This is an observable and quantifiable fact of life. (A NOTE: You need to FIRST read for understanding, not to make rhetorical talking points. that way, you are more likely to get the point and to understand the evidence and reasoning. A good rest for understanding is accurate summary. Another is the ability to have "closure" on key ideas -- you run into the same cluster of ideas again and again, and are not being quickly caught out by novelties. I just for instance ran across a key point on Faraday's generator that I had not seen in classes or books: when the cylindrical magnet is spun instead of the disk, no emf forms. Similarly, if both are locked together and are jointly spun with no relative motion, an emf forms. Why? ANS: spinning a bar magnet on its axis does not spin the effective solenoidal B field -- it is relative, cross-ways motion of field and charge that give rise to the Lorentz force's magnetic component. [Magnetism is in significant part a relativistic effect.] And there are THREE entities involved: (a) magnet, (b) disk and (c)circuit that the disk is a part of. Spinning disk and magnet locked together still has relative motion to the rest of the loop for the circuit, so an emf will appear. [So, thanks to the didactics of a complex subtlety, there was a little hole in my knowledge about the Faraday generator. Kudos to Wiki for a good little article on it.] relevance? Hoyle's theory on magnetic braking in solar system formation, and onward extensions trying to fix the gaps in Hoyle's work. Verdict: the gaps are not closed.) Moreover, it was never a question that in principle random chance plus mechanical forces could get us to any config. The real issue is a search space one; the very same one that lies at the base of the statistical validation of the second law of thermodynamics. Namely, macrostates that move to higher entropy invariably by far outweigh the ones that lead to interesting configs. So, undirected contingency will overwhelmingly trend drastically to greater entropy, to the point of making a reliable law. In principe the O2 molecules in teh room where you sit could art random move to one end, leaving you gasping for breath, but reliably, that will not happen on gthe gamut of our observed cosmos over its lifetime. Likewise, on very similar grounds, a clutch of rocks avalanching down a hillside may possibly spontaneously form: "WELCOME TO WALES," bu the odds are again beyond merely astronomical. Thus, we see why lucky noise is not a reasonable source for FSCI. In short, probabilities emerge naturally here, they are not arbitrarily imposed. And, that answers to your selectively hyperskeptical issue on "demonstration." Durston is not trying to "prove" that random chance CANNOT get us to FSC, he is looking at the relevant probabilities and is thus drawing out the resulting information metrics in light of information theory ideas and principles. Sufficiently successfully to have published 35 peer reviewed values. 5] 141: I said at one point a while back that FCSI was a subset of CSI, but that was an assumption on my part. It seems to me now that FCSI revolves around the idea of a symbolic program, and thus a different idea from CSI. 
And furthermore, FCSI, contrary to CSI, would presumably not be inversely proportional to pattern complexity, as is the case with CSI. Of course, that is the problem with CSI - very simple patterns are highly unlikely as well. However, there are no formal proofs regarding the probability of getting FCSI by chance. FSCI is that subset of CSI where the specification is by observed function. Such funcitons may -- for example -- be
(a) linguistic [e.g. 143+ ASCII characters comptising contextually responsive English text], (b) algorithmic [e.g. stored programs or data strings/structures of at least 1,000 bits], (c) structural [e.g the drawing data for a wing or other functional feature of a mechanical system, or even a good old fashioned sailing ship, house or arrowhead, which will naturally take up of course more than 1,000 bits]
FOOTNOTE: I keep getting the impression that you keep looking so hard for destructive counter-examples that you have not paused to properly understand the examples. in fact, CSI began as FSCI, in a biological, origin of life studies context. Specified, organised complexity of informational character was seen as the key element in the nanomachines of he cell, by contrast with crystals [which are 3-d "polymers"] and random tars. Dembski picked up the concept, linked it to wider contgexts, and has sought to model the key factors. In so doing, he has hit on the target zone of interest in the config space as the key issue, and the onward question is to identify the zone of interest. K-compressibility, absence of which is very often tied to non-function [at random long strings, overwhelmingly will not be compressible or simply and independently describable], turns out to be a key aspect of that more general work. But also, function is not equivalent to compressibility. And, hence the significance of T & A's work. (Cf. their fig 4.) Durston extends this work, providing an empirically anchored metric and giving 35 specific values. note that Chiu, Trevors and Able are contributing authors, as well. _______________ I trust these notes will help you clear up your key misunderstandings. GEM of TKIkairosfocus
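(For readers who want to see the H metric mentioned in point 3 above in concrete terms, here is a minimal Python sketch. It only estimates per-symbol Shannon information from observed symbol frequencies in a single string; the column-by-column alignment analysis behind Durston et al.'s published Fit values is considerably more involved, so the strings and numbers below are illustrative assumptions, not their method.)

import math
from collections import Counter

def shannon_bits_per_symbol(sequence):
    # H = - SUM p_i * log2(p_i), with p_i estimated from observed symbol frequencies
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Illustrative strings only (hypothetical, not taken from any cited paper):
examples = {
    "ordered (repetitive)": "ABABABABABABABABABAB",
    "random-looking": "QJXKZVWPLMNYTRBGHCDS",
    "English-like": "THE QUICK BROWN FOX JUMPS",
}
for label, seq in examples.items():
    print(f"{label:22s} H = {shannon_bits_per_symbol(seq):.2f} bits/symbol")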
March 10, 2009, 04:09 AM PDT
Durston: "As Abel and Trevors have pointed out, neither RSC nor OSC, or any combination of the two, is sufficient to describe the functional complexity observed in living organisms, for neither includes the additional dimension of functionality, which is essential for life [5]." The above is at the beginning of the Durston paper. IOW, nothing in his paper is intended to demonstrate that chance and law cannot generate function. He assumes that going in.
JT
March 10, 2009, 01:34 AM PDT
[142]: Durston, et. al in the conclusion of their paper (http://www.tbiomed.com/content/4/1/47/) say they are able to distinguish "functional information from "order" (or "OSC") and randomness, so there's a question about what their concept of of "order" is. At the beginning of the paper we're informed where they're getting their idea of "order" from: "Abel and Trevors have delineated three qualitative aspects of linear digital sequence complexity [2,3], Random Sequence Complexity (RSC), Ordered Sequence Complexity (OSC) and Functional Sequence Complexity (FSC). [emphasis added]" "Qualitative" is a keyword indicating that an idea is not fully formed, i.e. subjective. Note that Durston's own concept of FSCI is quantitative, i.e. they provide a specific method for measuring something, that is a reason they have a paper. You don't write a scientific paper to address something you can't quantify. (This is not to imply of course that being quantitative is enough to establish validity or utility on its own.) Nothing in the Durston paper is presented to further specify or quantify the notion of "order" so their claim of being able to distinguish order from function in the conclusion is misleading - all they could possibly mean is that they've applied their measure to some specific sequences that have been previously characterized as orderly, and gotten back a number consistent with what they had intended (I would say). But lets go to the paper from Abel and Trevors, as Durston says that's where his idea of "order" comes from: (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1208958): "A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order." Note that this is how I myself characterized I.D's conception of order in [120]:
Let me talk about law for a moment. I.D. wants to associate “law” exclusively with processes characterized by algorithmic simplicity. So in terms of programs, I believe that I.D would say that laws are to be associated with very small programs. All (TM) programs are deterministic. I would say that “necessity” refers to processes characterizable by programs. It would seem to me to be completely arbitrary to say that in order to be considered “law”, programs cannot exceed a certain degree of complexity (i.e. length).
To say something is highly compressible means that is generatable by a small program. So this "order" to I.D. just means "generated by a small program". But how small? It isn't clear. Any binary string, even one that contains "functional information", is generatable by a program of some size. However, Abel and Trevors appear to deny this: Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. First of all a very small program could generate some sort of program with functional information, despite what Abel and Trevors say. But their problem with programs and law appears to be not so much related to size but rather that in their mind a program (of any size) precludes choice. IOW, they think to write a program containing functional information is not actually within the ability of a program to do. But that is nonsense. And if the intention in I.D. is to rule out what nature can achieve on its own, because of the assumption that nature is controlled by "laws" (i..e. a small program in ID's conception), this implies that the only things predictable in nature are really simple things, that anything in nature that is complex is also unpredictable. This would seem to be an absurd and completely unproductive assumption for someone to make about nature. A trivial proof that one program can generate another: I just zipped a program on my hard drive. it reduced the size from 722k to 220k. (This also indicates that functional information is "highly compressible", BTW). So the unzip program plus a 220K random string not identifiable as anything results in a very complex functional program. You could say that the real program was "already there" in the compressed version. But there is nothing there in that random 220K string to indicate that. So why couldn't you look out in nature prior to life and find a lot of diffuse and disparate things out there that don't look anything like life, but were in fact transformed into life. And note that our unzip program did not have any "foresight" either. Of course, you could absolutely note that all those factors out in nature that resulted in life equated to life, just as our zipped file of random data + the unzip program equals our complex functional program. But it shows what should be obvious - that life can emerge via blind physical processes from something that does not look like life at all. And of course, you can absolutely say that this implies something existing at the beginning of the universe that exceeds the power of chance to create. However, that observation does not enlighten us as to the actual naturalistic method of how life actually emerged after the universe began. ------------------ And a side note to CJYMan: I can try to revisit your latest post more systematically tomorrow, but here's what I remembered of I wanted to say: You say that I am splitting hairs about determinism vs. non-determinism. And I believe you imply that that it is an ongoing and presumably irreconcilable debate whether or nor nature is ultimately deterministic or not. But considerations of QM aside for example, that is just not the case. Science always tries to derive a deterministic mechanism, a program, that will account for the emergence of some phenomena in nature. 
They can't proceed on the assumption that, "Well, phenomena at the nano-scale have proved unpredictable, so let's just assume that as a strong possibility for whatever new phenomenon we encounter or whichever phenomenon we haven't explained yet." Even QM theory is expressed in terms of deterministic laws (although probabilistic). If you want to derive probabilistic laws pertaining to life with rigor comparable to QM theory, that's one thing. But otherwise, your implication that life may just be one of those nondeterministic things (i.e. one that requires a nondeterministic, unpredictable intelligence) is the real red herring, IMO.
JT
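(The zip illustration given earlier in this comment - a small, general-purpose decompressor plus an opaque compressed blob jointly reproducing a complex functional file byte for byte - can be sketched in a few lines of Python. The byte string and sizes below are made-up stand-ins, not the 722k/220k program mentioned above.)

import zlib

# A deliberately redundant stand-in for a "functional" file (hypothetical content).
original = b"def navigate(sun_angle, clock):\n    return sun_angle - clock\n" * 300

blob = zlib.compress(original, 9)   # opaque compressed bytes
restored = zlib.decompress(blob)    # the generic "unzip" step

print(len(original), "bytes ->", len(blob), "bytes compressed")
assert restored == original         # decompressor + blob reproduce the original exactly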
March 10, 2009, 12:29 AM PDT
KF: Here's the conclusion of that paper: "A mathematical measure of functional information, in units of Fits, of the functional sequence complexity observed in protein family biosequences has been designed and evaluated. This measure has been applied to diverse protein families to obtain estimates of their FSC. The Fit values we calculated ranged from 0, which describes no functional sequence complexity, to as high as 2,400 that described the transition to functional complexity. This method successfully distinguishes between FSC and OSC, RSC, thus, distinguishing between order, randomness, and biological function." So, he's claiming to be able to distinguish "order", randomness and functional info - there's nothing there about the probability of FCSI. I will go back now and take a closer look at his conception of "order" because, as I've said, any type of string whatsoever can be output by a program (i.e. a set of laws).
JT
March 9, 2009, 09:32 PM PDT
KF: You actually zeroed in on a relatively important point of mine, so let me respond to you first. CJYMan's comment was, Tell me, what is the probability of the random generation of Shakespeare’s Hamlet?...Are you still not understanding that FSCI is highly unlikely to generate itself by chance." As I remarked to CJYMan (and I assume you recognize this as well) you can't talk about the probability of just getting Hamlet, as that is just one particular string. You have to talk about the probablity of getting some property by chance (e.g. compressibility). On the concept of FSCI, I believe that all Durston has done would be to come up with a definition of it such that it can be shown to be at high levels in DNA and programs written by humans, but not elsewhere. So in essence he's formalized what is already intuively obvious to everyone, that there is something different about biology. But he has not attempted to show what the probablity is of getting a binary string with FCSI by blind chance. I would remark that every binary string is a program. If the assumption is that strings exhibiting FSCI are extremely rare, what is the proof for that. As far as CSI, that conception revolved around compressibility, that is how small a description exists for a given string. Furthermore, the smaller such a description exists for a string, the more unlikely it is to occur by chance. And at one point Dembski says that we know the percentage of compressible strings is extremely small (though infinite). He doesn't have to mention that this would be a formally derived conclusion, I'll take him at his word that. But trust me, its not just a matter of common sense or assertion. I know that I said at one point a while back that FCSI was a subset of CSI, but that was an assumption on my part. It seems to me now that FCSI revolves around the idea of a symbolic program, and thus a different idea from CSI. And furthermore, FCSI, contrary to CSI, would presumably not be inversely proportional to pattern complexity, as is the case with CSI. Of course, that is the problem with CSI - very simple patterns are highly unlikely as well. However, there are no formal proofs regarding the probability of getting FCSI by chance. But anyway, as I notice now that the Durston paper itself is available from that link you supplied, I'll review it again and see if I need to revise my comments above.JT
March 9, 2009, 09:21 PM PDT
JT, ... and one final thing. If you equate law with determinism, then you'd have to provide evidence that our universe has a deterministic structure. It seems, though, that quantum mechanics shows that our universe may not have a deterministic structure and thus your definition of "law" may not even exist. This is yet another reason why equating law with "determinism" is both non-essential and counter-productive in discussing the fundamental tenets of ID Theory.
CJYman
March 9, 2009, 06:51 AM PDT
JT: I will only add that FSCI is first a descriptive term for a phenomenon observed by OOL researchers in the 1970's and 80's, i.e. before ID existed as a scientific school of thought, by workers like Orgel, Yockey, Wickens et al. (Cf the WAC and glossary at the top of this page -- just what exactly is imprecise, useless or confused in the descriptions and definitions given, especially in light of the examples given? [Or else, you are sounding a whole lot like you are simply making closed-minded objections to try to rationalise a closed mind.]) Its basic meaning is as a matter of recognising something as commonplace as contextually responsive ASCII text in English taking up at least 143 characters. Since it is a functionally specific subset of complex specified information -- and since function is recognised -- the definition of CSI and its metrics also apply, but FSCI is less difficult to specify, as we simply need to recognise function. Also, Abel, Trevors, Chiu, Durston et al. have been working on Functional Sequence Complexity and have published a table of 35 values of FSC in Fits. You can inspect their method, which is based on standard approaches. All of this has been said before, and the discussions in the WACs and glossary above give links to the details as well. So, the problem does not seem to be any real lack of clarity or definition in any reasonable sense, but that you evidently do not wish to recognise the existence of such a ubiquitous phenomenon and its well-known source, as that would be at once fatal to your case. If you doubt me on how common FSCI is, look above at the posts in this thread: are they functional in any reasonable sense of the term? Are they informational? Are they complex in the sense of beyond 1,000 bits of info storage used? The answers are utterly obvious. So, JT, it looks rather like a case of selective hyperskepticism again. A selective hyperskepticism that contradicts itself as soon as you post or read a post here and take it as more than mere lucky noise mimicking what intelligent designers are said to do. Reductio ad absurdum, yet again. GEM of TKI
kairosfocus
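(As a rough back-of-envelope check on the "143 ASCII characters / 1,000 bits" figure used above: with 128 possible ASCII codes per character, 143 characters correspond to 128^143 = 2^1001 possible strings, roughly 10^301 configurations. The sketch below just does that arithmetic; it takes no position on the design inference drawn from it.)

import math

ascii_symbols = 128   # 7-bit ASCII alphabet
chars = 143           # the text length cited in the comment above

bits = chars * math.log2(ascii_symbols)   # 143 * 7 = 1001 bits
log10_configs = bits * math.log10(2)      # about 301.3

print(f"{bits:.0f} bits of configuration space")
print(f"about 10^{log10_configs:.0f} distinct 143-character strings")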
March 9, 2009, 06:42 AM PDT
P.P.S. Foresight is foresight whether it is determined to use its foresight in a certain way or not; whether it is determined to exist or not. We experience and use our foresight every day, thus we know it exists whether we have free will or not; whether the universe has a deterministic structure or not. Have you ever used a map and your foresight to plan a route to a future destination you wished to travel to?
CJYman
March 9, 2009, 06:31 AM PDT
JT, P.S. When referring to gravity as a "law," scientists are referring to the foundational principle which causes the regularity described by the mathematical equation for gravity. When a scientist says that the bonding of two atoms is caused by "law," he is referring to a bonding regularity imposed by the physical properties of the atoms. Equating "law" with "determinism" muddies the waters of the debate, since intelligence could be determined to exist by a previous intelligence, or intelligence may not equate to libertarian free will and thus be determined in its behavior, but only if the very structure of the universe is also deterministic, since intelligent behavior utilizes input gathered from our interactions with our universe. So, since determinism is not the issue which has been debated for centuries (teleology is the issue), and since determinism isn't foundational and isn't necessary to ID Theory, criticisms of determinism or non-determinism [while providing interesting philosophical discussion] are not to be focused on when debating core ID Theory principles and methodology.
CJYman
March 9, 2009, 06:10 AM PDT
JT: "...or somehow feel things the way a human does in order to create complex artifacts." It has nothing to do with feelings. As I continually state and you continually ignore, it has everything to do with foresight (awareness of future goals which do not yet exist). This issue of teleology (simply, end products determining present configurations; future possibilities influencing the present) is the core issue of the debate which has been going on for centuries. It matters not that a central figure in modern ID has certain personal thoughts about determinism or free will. The rest of the ID community is not bound to agree with those views unless it is shown that they are foundational to ID, and I have shown that they are not necessary to ID Theory. That's the beauty of free thought ... being able to think for yourself, add your own two cents, and not be inhibited by what the "authorities" state. Equating law with determinism or non-determinism is not the issue. The scientific view of law as mathematical descriptions of regularity or organization caused by physical properties of matter is the only necessary and IMO useful definition of law as it pertains to ID Theory. If you equate law with determinism you reduce the debate to endless banter about the metaphysical definition or even possibility of determinism vs. non-determinism. Michael Polanyi (a distinguished chemist turned philosopher) discussed life as being founded upon non-physical-chemical principles -- that is, not founded upon law as scientists use the term. Again I ask, do you possess the capability to be aware of future goals that do not yet exist (foresight)? I can tell you that, at the very least, engineers sure do. Honestly answering that question is the first step in the right direction to understanding the fundamentals of ID Theory. Thus, until you answer this question, I can't take your criticisms of ID Theory seriously, since you have not yet understood the extreme basics.
CJYman
March 9, 2009, 05:28 AM PDT
JT: In one of the patronizing remarks directed at me earlier in the thread, Joseph provided me a reading list, but none of the sources were online.
Umm, I was asking a question to see if there was any common ground for a discussion. Now it appears that you haven't read any ID literature, which means you argue from ignorance. That is never a good thing. And that is why I have read about 100 books on biology, evolution, and evo-devo. These books were not online. I either had to buy them or get them from a library. Now if you come to a blog in order to learn something about a topic, then it is clear that you really are not interested in it.
Joseph
March 9, 2009, 04:50 AM PDT
CJYMan: "Intelligence equates to CSI therefore it is best explained by previous intelligence and not *merely* chance and law. That is basically what the papers on active information prove." In one of the patronizing remarks directed at me earlier in the thread, Joseph provided me a reading list, but none of the sources were online. Whatever papers you're referring to, if they're online, I'll go read them now.
JT
March 8, 2009, 07:34 PM PDT
CJYMan:
That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program. It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven’t admitted, yet) exists and separates intelligent systems from non-intelligent systems — systems controlled by only randomness and law.
As I've indicated, you were implying something about foresight, awareness and consciousness which I didn't completely pick up on, and that may have made my posts 127-129, especially 127, not as responsive specifically to what you were implying. Nevertheless, I do make several points in 127-129 that should make it easier to understand my own personal viewpoint, so I hope you read them all anyway. And if you have anything else to say, I'll try to read it more carefully next time. But on the subject of consciousness, it doesn't seem to be relevant at all, no matter how important it seems to some people. I would say that a stomach and heart and various other biological systems don't seem to be conscious, although they do very complex and goal-oriented things. Consciousness really means nothing more than what it feels like to be us. But feelings are determined by chemicals. The fact is, I think I.D. advocates don't seem to actually mention "consciousness" specifically all that often (for whatever reason), even though I think now that must in fact be what is most crucial to them - that something has to be conscious, or somehow feel things the way a human does, in order to create complex artifacts.
JT
March 8, 2009, 05:57 PM PDT
Or foresight could be, "If I do this, then such and such will happen." And that could be a conclusion based purely on experience. Baby humans spend a LONG time learning about the physical world strictly through trial and error. And to tie a whole bunch of if-then propositions together in your mind to reach a conclusion about what behavior to follow to reach some future goal is a computational activity. From my vantage point, it's ridiculous to mystify foresight in any way, or allow such a notion to stand even by being noncommittal on it and saying, "It may be a deterministic process, it may not be, who knows? It doesn't matter." Sorry, that just doesn't cut it for me.
JT
March 8, 2009, 03:43 PM PDT
I do need to reiterate that your primary point did apparently elude me - this idea of Foresight, involving the metaphysical awareness of future states not currently existing. I think such an idea should be discarded. A cheetah can say, "based on how that gazelle is running, he will be at point B in 2 seconds." That's the only type of foresight that exists. Whatever foresight humans have is the same thing, only conceivably to a greater degree - extrapolating further into the future based on an assessment of the dynamics of an unfolding event or on an assessment of previous states of an event. This would be an imperfect ability for anyone without unlimited information.
JT
March 8, 2009, 03:29 PM PDT
Some of my responses were not responsive enough to the main point you were making - let me try to go back and rectify that piecemeal:
CJY: That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program.
It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven’t admitted, yet) exists and separates intelligent systems from non-intelligent systems — systems controlled by only randomness and law.
I think this is what A MacNeil was alluding to possibly, this idea in I.D. (apparently) that humans have some metaphysical awareness of something that doesn't exist yet. I don't think humans operate that way. We're able to take something that already exists and modify it in some way.
JT
March 8, 2009, 03:11 PM PDT
CJYMan [126]:
JT: “It would seem that there would be no way to distinguish any two entities said to be intelligent agents.”
Personality, talents, abilities, etc. all effect how one uses his foresight. The intelligent agent we call Beethoven used his foresight differently than how Einstein used his. There is definitely a distinguishable difference in intelligent agents.
Yes, but personality talents abilities are all due to objective deterministic causes. If someone has musical abilities presumably they do no eminate mysteriously and nonphysically and nonmechanically from some metaphysical dimension. Furthermore, whether or not someone has the ability to cultivate these "innate" gifts is dependant on objective conditions in their environment, e.g. how much money their parents make, the society they live in and so on. All of these things can be tied to objective identifiable physical things.
JT: “Also with intelligent agency, If it were possible to look at some arbitary string and say, “OK this was definitely output by Intelligent Agent X” or “This was definitely not output by Intelligent Agent X” it would imply you had a program characterizing Agent X, which would mean it was not an intelligent agent.”
Huh? Where are you getting this mumbo jumbo from? How does me recognizing the work of Beethoven mean that Beethoven did not use his foresight tocreate musical masterpieces.
My argument was a little unclear here, admittedly. I'm saying if something cannot be described via a program it cannot be described at all, and how do you distinguish two things that cannot be described. There is hardly anything in nature or human society or human behavior that someone has not attempted to simulate on a computer. Do you understand that the historical position of I.D has been that Intelligent Agency does not operate according to law, meaning that it is not determinsitic (meaning that it cannot be simulated on a computer.) If I.D. is moving away from that now, great. But being noncomittal or saying, "it really doesn't make any difference" doesn't cut it. Really, science treats everything as deterministic. I know all about QM, but my statement is essentially correct.
JT: “If a human being is characterizable as a binary string (which seems reasonable given DNA) then we know there exists some mechanism (i.e. program) that will output it.”
For the sake of argument, sure. Yet that mechanism will include foresight — whether foresight itself is a “mechanism” or not we know that it exists and is used in the generation of certain patterns. We have repeatedly shown you this and it is now up to you to provide counter examples.
No, I don't think such a mechanism would necessarily have "foresight" not at least with all the transcendental baggage I think you want to associate with the term. Does the mechanism of epigensis have foresight? What about a mechanism that takes some highly compressed and encrypted binary file and produces a beautiful painting from it? Does such a program have foresight? All I'm saying is that you could have some diffuse set of factors in the universe that resulted in life. I admit that the connotation of this is noteworthy from an I.D.standpoint, in that whatever mechanism and initial phyiscal factors involved, it would be like saying an encrypted compressed version of the mona lisa existed out there in the universe (if you catch my meaning, and you should). So this is where I would join with I.D, in remarking on the obvious fact that if we exist it means that something equating to us predated us (just as a compressed encrypted version of the Mona Lisa + the decryption program equates to the Mona Lisa). Which would be easier to get by chance - the Mona Lisa, or a compressed encrypted version of the Mona Lisa + the decryption algorithm together by chance. Do you catch my meaning now? Where I part company with I.D. is allowing to stand, even by being noncommital (as you are being) some notion of transcendtal foresight being necessary to create complex things.
I am a methodological naturalist. I have never ruled out a physical process as having caused life. For me to be in accordance with ID Theory all I need to realize is that the mechanism included foresight either as part of the mechanism [if foresight is indeed mechanistic] or outside of and influencing the mechanism.
Same here, so what would be your primary objection to naturalstic explanations? The only place the supernatural would conceivably have to come in is at the very beginning, but you could take that out as well by assuming an infinite regress of active information (as you alluded to, and if I understand what is meant by that - I note you equate it to what I was saying.) As far as naturalistic explanations the only valid objection is if someone is appealing to randomness primarily as an explanation. But you could have some set of factors out there in the universe that accounted for life, without saying those factors primarily came into existence for no reason at all- most of those identifiable factors could in essence be eternal, i.e. not coming into existence at a point in time by chance. Origin theories are very much moving away from randomness as an explanation. Maybe you don't understand what I've been saying here either.
Now, for further clarity, how are you defining the term “mechanism” as you are using it?
Basically a mechanism is everything. Everything whose behavior can be systematically described (potentially) and potentially predicted. Everything that functions in a potentially describable way - all these thing operate according to law.
But seriously now, I actually agree with you. Intelligence equates to CSI therefore it is best explained by previous intelligence and not *merely* chance and law. That is basically what the papers on active information prove. I’m glad to see you are in agreement.
So there's this added extra ingredient that can't be described by law. Then it can't be described at all, so what can science do with such a notion? A better alternative would be eternal laws at some point (or "infinite regress of information", etc.)
By “mechanism” do you mean “*only* law and chance”; or do you mean “there was cause and effect.”
Same difference: "Law", "Program", "Cause", "Determinant" Something specific that is characterizable, describable. Something that can potentially be written down.
JT: “So all you have left for an “explanation” is either randomness, or if we are to accept I.D., also possibly “Intelligent agency”. What difference does it make which one of those two you pick?”
One has foresight and the other does not. Is it really taking you this long with such convoluted arguments to figure it out?
But to me, "foresight" that does not operate via a mechanism means that we cannot describe it. If we're talking about conditions at the beginning of the universe, here your foresighted "Intelligent Agent" does not have any mechanism, any program, to associate with it. It's just something that magically output something amazing. Why not assume the initial set of objective causal factors were eternal (as opposed to being magically materialized by an "Agent")? I've been writing for a long time today. I'm just ending it abruptly here. Sorry if I did not address something. [I wrote: "No, I don't think such a mechanism would necessarily have "foresight", not at least with all the transcendental baggage I think you want to associate with the term." Maybe that was a mischaracterization of your position, I don't know.]
JT
March 8, 2009, 02:54 PM PDT
CJYMan [125]:
2. The issue of determined vs. non-determined is not essential to the fundamental precepts of ID as a Theory as I have briefly shown above. ... 2. Determinism is not an issue for the foundation of ID Theory as shown above.
It's a tactic, obviously, on both sides to very gradually move away from an argument they're losing. That's not a bad thing really, until they eventually start denying they made the argument to begin with. So Allan_Macneil will say that no modern evo-theorists assume that RM-NS is sufficient, though he ignores the fact that it's still presumably presented by itself in elementary textbooks. In case you were not aware, "Intelligent Agency" has been historically presented in I.D. as something distinct from either chance or law, and law has been presented as a synonym for determinism. The concept of "choice" in I.D. as something unique to "Intelligent Agents" has also been very prominent. Dembski says repeatedly that mechanism or necessity or law don't create contingency because their outcome is predetermined. Do you need me to hunt down some quotes for you (or do you concede the point)?
JT: “There are certain classes of strings which would be extremely improbable to occur by pure randomness, for example strings that are algorithmically compressible. Actually this is the only class of strings that I personally understand why their probability is small. If someone has demonstrated that FSCI for example is highly unlikely to occur by randomness, I am not aware of it, (not necessarily denying it though).”
…ummmmmmmm this is what KF and others, including myself have been constantly trying to explain to you. Tell me, what is the probability of the random generation of Shakespeare’s Hamlet? Now, how many quantum calculations have gone on during the universes 15 billion year history? Compare the two numbers and even from a purely mathematical POV, without further trying to gyrate and twist randomness into magically having the ability to bestow meaning, the probabilistic resources are horribly lacking. Now, just do some small scale experimenting on your own. Either take KF’s advice and start rolling some dice at a casino or have a computer randomly mutate a string of letters and compare the number of bit flips (probabilistic resources) with the bit length of a sentence when and if it materializes. Are you still not understanding that FSCI is highly unlikely to generate itself by chance.
I'm not talking about commonsensical arguments from incredulity, like how could monkeys typing randomly get the works of Shakespeare. Do you understand that that is not a formal argument? An argument to the effect that one particular string is extremely rare and so could not occur by chance is irrelevant. Dembski talks about this principle at great length. So the odds of getting The Merchant of Venice by chance are the same as getting any other string of comparable length by chance. You have to find a property that is shared by an infinite number of strings and talk about the probability of getting a string with that property. Dembski says that compressible strings are an extremely small percentage of all strings, and compressibility is formally defined (and possessed by an infinite number of strings). So that's why it's meaningful to talk about the probability of getting a compressible string. FCSI has not been formally defined, and no proof has been given as to what percentage of strings exhibit FCSI.
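(The claim attributed to Dembski here - that compressible strings are a vanishingly small fraction of all strings - follows from a standard counting bound, sketched below as an illustration rather than a quotation of anything in this thread: there are fewer than 2^(n-c) descriptions shorter than n-c bits, so at most a 2^(-c) fraction of n-bit strings can be compressed by c or more bits.)

# Counting bound: fewer than 2^(n-c) programs/descriptions are shorter than
# n-c bits, so at most 2^(-c) of all 2^n strings of length n can be
# compressed by c or more bits, for any n.
def max_fraction_compressible_by(c_bits):
    return 2.0 ** (-c_bits)

for c in (10, 50, 100):
    print(f"compressible by >= {c:3d} bits: at most {max_fraction_compressible_by(c):.1e} of all strings")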
1. As long as the output is regular, it can be described as resulting from law (no matter the complexity of said law).
No, something could be quite irregular and complex and be the output of some complex set of laws. Just to be clear, a program can write programs. A series of directions a program gives you is itself a program. That's a trivial example. A compiler is an extremely complex program that takes some human-language artifact, optimizes it, and converts it into something entirely different. Programs write programs all the time.
“JT: Neither a process that is random nor a process that is an Intelligent Agent has any sort of description at all.”
1. Yes and no. Yes, randomness is not so much a process as it is a lack of causal description. But, no, randomness does have a statistical mathematical description. 2. No and no. A process that is an intelligent agent is described as a process which is aware of and can generate future targets which do not yet exist and then harness chance and law to engineer a solution to accomplish that goal. This process can be mathematically described as CSI or active info.
That someone can write a fairy tale doesn't mean the thing they're describing is real. (My intention is a substantive point here, not an insult, btw.) The historical position of I.D. is that an intelligent agent is something distinct from law or program or deterministic cause. That's what I was addressing. If a program doesn't exist to characterize something, then you're talking about something that's make-believe. You can describe a perpetual motion machine too.
JT: “You could never have a course, “Introduction to Intelligent Agents” that porported to describe how intelligent agents function.”
I disagree. One can already describe how AI functions. I see no reason why the time will not come in the future when we will be able to describe how conscious intelligence functions. Penrose and Hameroff may even have gotten us moving in the right direction. The strength of a conscious experience may indeed be described as E=h/t.
No, I'm talking about an Intelligent Agent as it has been historically described in I.D. Are you assuming that I think conscious experience could not be expressible in terms of physical laws? How often do you read my posts? Once again, I'm using "Intelligent Agent" in the sense that I.D. has historically used that term.
You don’t have foresight?????? Please don’t ever go into engineering for the sake of the safety of humanity.
Try to understand: I think all programs have foresight, to one degree or another. I think a human being has to be treated as a complex chemical program. I do not believe in I.D.'s magical "Intelligent Agent" that operates via some method that can't actually be characterized or described via a program and magically produces foresight and goals. You're agnostic or noncommittal on whether an intelligent agent can be a program, but that says nothing about the historical position of I.D. If I.D. is gradually transitioning away from it, now that they see the implications, that's great. But people like KF, for example (and please correct me if I'm wrong), still hold to the historical view of I.D. regarding "Intelligent Agency".
JT
March 8, 2009, 01:22 PM PDT
CJYMan [124] [responses to 125 and 126 forthcoming.] [CJYMan: All I want to do is communicate, and if any of the following becomes derisive at some point, its just in the heat of the moment. I just don't have time to go back and reedit everything. My views do actually converge with I.D at certain points. I just don' t think they're stating it the correct way. But actually some of them (not including you evidently) would be violently opposed to the idea of agents operating according to a program, or any potential threat to the notion of free-will.] [But anyway, the end of my post in 120 became a little confusing evidently, and I will try to subsequently clear that up. Stick with my ideas as long as you care to. If I'm not successful in getting my points across I have only myself to blame.]
JT: “I was always under the impression that anything that’s a program cannot be intelligent agency in the I.D. conception.”
That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program.
Well, it provides I.D. with a moving target, if proponents don't even agree among themselves on this point. As I indicate later, if an intelligent agent can be a program, then an intelligent agent can be deterministic. Frankly, the only type of agent I would personally be interested in is one that could actually be described (i.e. one for which a program potentially exists to characterize its behavior).
It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven’t admitted, yet) exists and separates intelligent systems from non-intelligent systems — systems controlled by only randomness and law.
If law is a synonym for determinism, then any program is law. And if an intelligent agent is a program (which you've agreed to), then an intelligent agent operates according to law. [It's difficult for me to discuss this, because you imply it's possible some agents might be programs, while the traditional viewpoint in I.D. has been that agents are not programs.]

The types of laws we tend to associate with nature are generally expressed compactly. At least one reason for this is that it is always a goal in science to express things as efficiently as possible, to strip out all extraneous verbiage, to distill, i.e. to express the essence of a thing. But it also has to do with narrowing the focus of attention, so you can be talking about something specifically. Furthermore, the reason that laws of nature have traditionally been expressed as mathematical formulas, as opposed to algorithmically, is that computer science has only been around for 75 years or so, and prior to that all they had was math. That's a rough generalization but essentially correct. Of course they did have logic in Ancient Greece, granted. But the point is, there is a certain limitation of expression when dealing strictly with mathematical formulas, and that would be another reason why laws of nature appear simplistic. Mathematical formulas are only a small subset of "Law". "Law" would be any characterization of how something operates.

On what basis would we assume that nature can only be described by simple laws? It's like saying that nature is only deterministic in really simple things, and that to whatever extent it is "complex" it is also unpredictable. That would be a philosophical assumption, and a very odd one at that, from my vantage point. If nature is simple, why does it take an incredibly large program to simulate the weather, for example? To operate according to law means to have a description, and vice versa.
Well, technically, even statistical randomness can be deterministic if one is able to calculate all initial conditions to infinite precision. That's where the problem lies, though. This issue of determinism vs. non-determinism has been hotly debated as a metaphysical issue for quite some time now and is actually not essential to the fundamentals of this debate. So out with the herrings ...
I don't know where the red herrings are. Chance is something assumed in I.D. literature. So perhaps, in your mind, the only categories are law and intelligent agency.
JT: “I was always under the impression that anything that’s a program cannot be intelligent agency in the I.D. conception.”
That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program then an intelligent agent can be a program.
JT: “So I’ll assume that for the moment.”
Actually, that assumption is not necessary, since even if foresight can be the result of a program, it has been shown above that program would also need previous foresight in its full causal chain (or else exist eternally). Thus foresight would breed further foresight, CSI, etc etc etc; and *only* law and chance would again not be the best explanation.
What makes you think a program cannot have foresight? (Oh wait, you're undecided on this point.) And once again, any program is deterministic, not just "law" or simple laws. But a simple program can have simple goals. A complex program can have complex goals and a complex saved internal state, and be governed by complex processes. How much memory a process has is one determinant of what sort of foresight it can have. If it can save a lot of data from the external world, it can recognize complex scenarios in the external world if they arise again.

Actually, I've seen Allan_Macniel characterize I.D.'s conception of foresight as something quite bizarre, operating outside of space-time or something, and I suggested that no one in I.D. thought that; in reply I believe he posted a list of sites from Google, and I never checked into it. But when I think of foresight, I think of a program having a goal of some sort, and possibly being able to simulate the external world in memory such that it can predict [perhaps imperfectly] future states of the world and then navigate towards those that are consistent with some goal. But all this can be realized by a program, i.e. some deterministic process, i.e. something that operates according to complex laws. In fact, I wouldn't know what approach to take in analyzing the concept of foresight other than a computational approach. The program is the formalism for characterizing how some process operates. The program supersedes all other forms of description, be it math or a natural language or whatever.

I believe a naturalist would say that nature doesn't need a certain amount of memory in order to simulate itself. And I know that evo-theorists would be the ones who say evolution has no goals, or foresight, but I would say that's not actually possible. Not because the creation of humans requires something outside nature or law to explain it, but rather because there will be foresight inherent in any deterministic process that causes something to happen, i.e. if f(x) results in y, then f(x) is just y in another form. When you talk about a human being having a goal, that goal will be expressed in his brain as a certain complex configuration of chemicals. A goal in a computer would be a series of electrical impulses. But these internal artifacts map to something in the external world. In a deterministic universe, if everything is predetermined, then humans have always existed for eternity as a concept. Determinism is I.D.'s friend. [The above discussion is relevant as well to your closing remarks in 124 about foresight, so I won't address those specifically.]
JT
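[A minimal sketch of the kind of deterministic, goal-directed lookahead described above, in Python. The toy number-line world, the goal value, and all names here are made up purely for illustration; it is not anyone's proposed model of agency, just an example of a program that "simulates the external world in memory" and navigates toward a stored goal.]

    # Toy world: an integer position on a number line; actions move it.
    ACTIONS = {"left": -1, "stay": 0, "right": +1}

    def simulate(state, action):
        # The program's internal model of the world: predict the next state.
        return state + ACTIONS[action]

    def choose_action(state, goal):
        # "Foresight" in the sense used above: run the model forward for each
        # action and pick the one whose predicted outcome is closest to the goal.
        return min(ACTIONS, key=lambda a: abs(simulate(state, a) - goal))

    state, goal = 0, 5
    while state != goal:
        action = choose_action(state, goal)
        state = simulate(state, action)   # fully deterministic update
        print(action, "->", state)

[Everything in the sketch is law-like in the sense discussed: the "goal" is just stored data, and the "prediction" is just the internal model run forward.]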
March 8, 2009 at 11:16 AM PDT
To JT (part 3)

JT: "It would seem that there would be no way to distinguish any two entities said to be intelligent agents."

Personality, talents, abilities, etc. all affect how one uses his foresight. The intelligent agent we call Beethoven used his foresight differently than how Einstein used his. There is definitely a distinguishable difference in intelligent agents. However, even if that were not the case, this in no way sets back the fundamental position of ID Theory: 1. Intelligence exists. 2. Intelligence is necessary to produce certain effects. 3. Law and chance are not adequate explanations for those effects.

JT: "Also with intelligent agency, if it were possible to look at some arbitrary string and say, "OK this was definitely output by Intelligent Agent X" or "This was definitely not output by Intelligent Agent X" it would imply you had a program characterizing Agent X, which would mean it was not an intelligent agent."

Huh? Where are you getting this mumbo jumbo from? How does my recognizing the work of Beethoven mean that Beethoven did not use his foresight to create musical masterpieces? Is chance or law a better explanation? If so, please provide evidence that background noise and an arbitrary collection of laws will produce that type of music.

JT: "So there's no way to distinguish various intelligent agents from each other."

I think you are confusing yourself as much as you are confusing me. First, it seems that you are arguing that if there is no way to distinguish intelligent agents then intelligence is no different than randomness, since there is also no way to distinguish between different samples of randomness. Again a fallacious argument, similar to your previous fallacious argument: intelligence and randomness are subsets within the larger set of "not being able to distinguish between sub-sub-sets", therefore the subsets are equal. That most definitely is not necessarily true. Draw a Venn diagram and you'll see why. Then you seem to argue that if you can distinguish between intelligent agents, they therefore reduce to programs. 1. It doesn't matter. 2. HUH??? Mind running that logic by me again?

JT: "I.D. will make comments to the effect, "We do know that intelligent agents routinely output [fill in the blank] FCSI, CSI, symbolic programs, etc. So if we see such things and don't know of a mechanism (i.e. program) that caused it, we are justified in assuming it was output by intelligent agency.""

Mechanism is a secondary question, as shown above in the gravity and Big Bang question. Here, I'll simplify things for you a bit so that you don't have to provide an Olympics' worth of mental gymnastics and gyrations. 1. Does it have foresight? 2. Does it use its foresight to produce certain effects? 3. Are those effects plausibly explainable in terms of *only* law and chance absent previous foresight? 4. Can foresight be caused by *only* law and chance?

JT: "If a human being is characterizable as a binary string (which seems reasonable given DNA) then we know there exists some mechanism (i.e. program) that will output it."

For the sake of argument, sure. Yet that mechanism will include foresight -- whether foresight itself is a "mechanism" or not, we know that it exists and is used in the generation of certain patterns. We have repeatedly shown you this and it is now up to you to provide counterexamples.

JT: "So it's not clear on what basis you could rule out some physical process in the universe that could have caused life."

I am a methodological naturalist. I have never ruled out a physical process as having caused life. For me to be in accordance with ID Theory, all I need to realize is that the mechanism included foresight, either as part of the mechanism [if foresight is indeed mechanistic] or outside of and influencing the mechanism. Now, for further clarity, how are you defining the term "mechanism" as you are using it?

JT: "If you have some program-input f(x) that outputs y then f(x) equates to y. So any mechanism-input that is proposed as an explanation for y equates to y. So if you have some disparate, diffuse set of numerous factors existing out there in the universe that collectively resulted in life, that set of factors would still equate to life. As far as I know the significance of this eludes everyone in this forum but me."

It must be because you are just sooooo intelligent ... er ... or not?!?!?! ... or maybe that statement equates to randomness, or maybe law ... was it determined ... or maybe your thoughts which created that statement are only tricking us into thinking it came from JT when really it came from a different program which equates to randomness ... oh really?!?!? But seriously now, I actually agree with you. Intelligence equates to CSI, therefore it is best explained by previous intelligence and not *merely* chance and law. That is basically what the papers on active information prove. I'm glad to see you are in agreement.

JT: "But to continue, say that some binary string exists and that somehow it can be ruled out that it was the output of a mechanism. (Consider for example conditions at the beginning of the physical universe)."

By "mechanism" do you mean "*only* law and chance", or do you mean "there was cause and effect"?

JT: "So all you have left for an "explanation" is either randomness, or if we are to accept I.D., also possibly "Intelligent agency". What difference does it make which one of those two you pick?"

One has foresight and the other does not. Is it really taking you this long with such convoluted arguments to figure it out?

JT: "Actually there is a third alternative: the binary string in question could have always existed (and thus need not have been "caused" by randomness or "Intelligent Agency".)"

Ah, yes ... the infinite regress of active information. But of course, you do realize that this would mean that there is an eternal bias in the very nature of reality for the production of life, evolution, intelligence, and all the patterns which are observed to be hallmarks of intelligent (foresighted) agents and are not properly/fully explained by either chance or law. There are some pretty interesting implications for that line of thought, but yes, it is, alongside ID Theory, the only other really valid scientific option. One more thing, though. It seems as if scientists would rather provide explanations for patterns through observed cause-and-effect relationships rather than just saying "We have no idea what causes it ... that's just the way it is." If the foundation is intelligence, at least this provides a closed intelligence-information-intelligence loop. If the foundation is eternal active info, we have no real explanation for our universe, life, evolution, or intelligence. It's neither law nor chance nor intelligence.

If you want to take that as your idea and run with it, go ahead; just don't fool yourself into thinking that you've somehow overturned ID Theory or shown it to be incoherent or unscientific -- especially when you do not see your arguments as the result of intelligence (foresight, logical planning, goal-oriented structuring, etc.). To everyone else reading this ... if anyone is following anymore, I apologize for taking up so much room. It's just that I think there may be hope that JT will finally understand the basics of ID Theory.
CJYman
March 7, 2009 at 10:44 PM PDT
To JT (part 2)

JT: "So, if intelligent agency is one type of nondeterminism then it cannot be described via a program. This is in contrast to what CJYMan has said I believe, that AI programs can be examples of intelligent agency. (KF, I'm not recalling at the moment where you stood on this.)"

1. The jury is still out on the issue of determinism vs. non-determinism. Yet, I will point out that quantum mechanics seems to have brought into the discussion the ability for there to exist a truly non-deterministic foundation to our universe. 2. The issue of determined vs. non-determined is not essential to the fundamental precepts of ID as a Theory, as I have briefly shown above. 3. My thoughts on foresight, AI, and programs have been covered in my last three comments and where KF has quoted me above.

JT: "There is no binary string that cannot be output by a program. In fact for any given binary string, there are an infinite number of programs that will output it."

Fair enough.

JT: "There is no binary string that cannot be the result of pure randomness either."

Technically ... you're right. Practically ... you're wrong. The problem is that, sure, randomness can be summoned to explain away anything ... even law-like behavior. The question is "what is the best explanation?" In fact, if we took your premise here and ran with it, then science, as the discovery of laws of nature, would not exist, as there would be no concept of law. It could just all be explained by chaotic randomness. "Planets orbiting the sun?" ... easily computable as a random string; nothing to see here. Randomness did it. No true correlation to a fundamental principle at the foundation of our universe. "Chemicals bonding regularly?" ... easily computable as a random string; nothing to see here. Randomness did it.

JT: "There are certain classes of strings which would be extremely improbable to occur by pure randomness, for example strings that are algorithmically compressible. Actually this is the only class of strings that I personally understand why their probability is small. If someone has demonstrated that FSCI for example is highly unlikely to occur by randomness, I am not aware of it, (not necessarily denying it though)."

...ummmmmmmm, this is what KF and others, including myself, have been constantly trying to explain to you. Tell me, what is the probability of the random generation of Shakespeare's Hamlet? Now, how many quantum calculations have gone on during the universe's 15-billion-year history? Compare the two numbers and, even from a purely mathematical POV, without further trying to gyrate and twist randomness into magically having the ability to bestow meaning, the probabilistic resources are horribly lacking. Now, just do some small-scale experimenting on your own. Either take KF's advice and start rolling some dice at a casino, or have a computer randomly mutate a string of letters and compare the number of bit flips (probabilistic resources) with the bit length of a sentence when and if it materializes. Are you still not understanding that FSCI is highly unlikely to generate itself by chance?

JT: "Note that if you're talking about compressibility, that would definitely include strings exhibiting the sort of pattern-simplicity that I.D. seems to equate to "Law"."

Yes, that is what law is -- a mathematical description of regularity.

JT: "It would seem to me to be completely arbitrary to say that in order to be considered "law", programs cannot exceed a certain degree of complexity (i.e. length). It seems that if someone says that laws only refer to very simple programs, and that it's possible for a very complex program to be an Intelligent Agent, then that means it's possible for an Intelligent Agent to be deterministic. But I am going to say that any sort of program can be characterized as "Law", not just really simple programs."

1. As long as the output is regular, it can be described as resulting from law (no matter the complexity of said law). 2. Determinism is not an issue for the foundation of ID Theory, as shown above. 3. A program will definitely have a "Law" component to it, in that it will operate according to set laws once it is fashioned correctly. However, if the core of the program is a set of specified instructions beyond all probabilistic resources that outputs function as opposed to mere regularity; and if the organization of those instructional states is not itself defined by law (regularity) nor caused by the physical properties of those states, then the core of the program is neither caused by nor defined by law. Thus, the program has a non-lawful component to it. It is not "merely" law. 4. So yes, every program can be characterized by law, but not every program can be characterized by *only* law.

JT: "Neither a process that is random nor a process that is an Intelligent Agent has any sort of description at all."

1. Yes and no. Yes, randomness is not so much a process as it is a lack of causal description. But, no, randomness does have a statistical mathematical description. 2. No and no. A process that is an intelligent agent is described as a process which is aware of and can generate future targets which do not yet exist and then harness chance and law to engineer a solution to accomplish that goal. This process can be mathematically described as CSI or active info.

JT: "You could never have a course, "Introduction to Intelligent Agents" that purported to describe how intelligent agents function."

I disagree. One can already describe how AI functions. I see no reason why the time will not come in the future when we will be able to describe how conscious intelligence functions. Penrose and Hameroff may even have gotten us moving in the right direction. The strength of a conscious experience may indeed be described as E=h/t. However, that also is debatable, yet is not necessary to the fundamental position of ID Theory. Just as we can detect the effects of gravity without knowing what causes gravity (Newton before Einstein) and detect the effects of and infer back to a Big Bang, so we can also experience the existence of foresight, observe its effects, and infer from its effects to its previous existence without yet knowing how it functions. The only other necessary part of the hypothesis would be that law and chance *absent* previous foresight will not generate foresight. Any takers on counterexamples?

JT: "And you can't characterize an Intelligent Agent on the basis of its output, because that is what a program is - a concise characterization of the output of a process."

1. No one is characterizing intelligence on the basis of its output. Intelligence is characterized by its foresight. It is *detected* on the basis of its output. Or are you trying to say something different? 2. What you just stated makes no sense. I see no logical flow from one concept to the next, nor do I see a logical flow from this statement to the rest of your argument.

JT: "... but I don't think a human being for example is an intelligent agent. (OK Tim, KF, CJYMan, et al., time to go into paroxysms over that last remark.)"

You don't have foresight?????? Please don't ever go into engineering for the sake of the safety of humanity.
CJYman
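[A rough back-of-the-envelope version of the probabilistic-resources comparison above, as a Python sketch. The numbers are illustrative assumptions, not measurements: roughly 130,000 characters for Hamlet, a 27-symbol alphabet of letters plus space, and the commonly cited estimate of about 10^120 elementary operations available in the history of the observable universe.]

    import math

    HAMLET_CHARS = 130_000        # assumed approximate length of the play
    ALPHABET = 27                 # letters plus space (a deliberate simplification)
    UNIVERSE_OPS_LOG10 = 120      # commonly cited bound on total elementary operations

    # log10 of the number of equally likely character sequences of that length
    log10_space = HAMLET_CHARS * math.log10(ALPHABET)
    print(f"search space     ~ 10^{log10_space:.0f} possible sequences")
    print(f"available trials ~ 10^{UNIVERSE_OPS_LOG10}")
    print(f"shortfall        ~ 10^{log10_space - UNIVERSE_OPS_LOG10:.0f}")

[Under these assumptions the space of sequences is around 10^186,000, so the gap between it and the available trials is the point being argued over.]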
March 7, 2009 at 10:38 PM PDT
Hello JT, you've left me much to respond to, so I will break my comments down into sections.

JT: "A lot of you seemed really irritated by my comment that I.D.'s conception of intelligence is indistinguishable from randomness."

Well, I for one get a little irritated when one tries *continuously* to pass assertion and logically fallacious arguments off as some type of "criticism." I mean, I can understand making a mistake in a logical argument ... but seriously ... to continue to push the fallacy after it has been exposed countless times ... sorry man; I'm sure that would even irritate you. It seems that you are just completely missing the fact that awareness of future targets (which you possess but haven't admitted, yet) exists and separates intelligent systems from non-intelligent systems -- systems controlled by only randomness and law.

JT: "I was always under the impression that anything that's a program cannot be intelligent agency in the I.D. conception."

That is debatable. If conscious awareness of future targets which do not yet exist (true foresight) can be the result of a program, then an intelligent agent can be a program.

JT: "So I'll assume that for the moment."

Actually, that assumption is not necessary, since even if foresight can be the result of a program, it has been shown above that program would also need previous foresight in its full causal chain (or else exist eternally). Thus foresight would breed further foresight, CSI, etc etc etc; and *only* law and chance would again not be the best explanation.

JT: "I believe Atom said that nondeterminism could be either randomness or intelligent agency."

Well, technically, even statistical randomness can be deterministic if one is able to calculate all initial conditions to infinite precision. That's where the problem lies, though. This issue of determinism vs. non-determinism has been hotly debated as a metaphysical issue for quite some time now and is actually not essential to the fundamentals of this debate. So out with the herrings ...

Foresight could be determined to come into existence, libertarian free will may be non-existent, and our universe from beginning to end may be wholly determined, yet ID Theory would still stand as long as: 1. Foresight exists. 2. Foresight is necessary in the creation of certain patterns. 3. Merely law and chance *absent foresight* will not best explain, nor practically generate, said patterns. 4. Foresight itself is not caused by *only* law and chance absent previous foresight.
CJYman
March 7, 2009 at 10:35 PM PDT
P.S. Artificial foresight (the mimicking of results provided by true foresight) always derives from, but is not equal to, true foresight. There is absolutely no awareness of future goals involved. Thus, the results of AI are not due to the AI but to the foresight which programmed it; the results are therefore indirectly the output of true foresight.
CJYman
March 7, 2009 at 04:19 AM PDT
I have to go to work today, but I will be back to continue to repeat the points I've made, which ROb and JT continually and blatantly ignore. Furthermore, when I return I will show that KF is correct that I agree on the substance of the issue of foresight that matters in this debate. For now, it will suffice to state that KF in #121 has adequately represented my views on the matter of foresight.
CJYman
March 7, 2009 at 04:15 AM PDT
Re Rob, 118: On points:

1] Halting prob vs knowing when to call it quits. Are you trying to tell me that people don't know -- providing they have common sense -- when to call it quits? (Or do you expect to be able to show an fMRI in which you see a little flowchart appear in the brain image with a solution to the halting problem for algorithms?) Here's a heuristic for you: if you are deep in a hole and need to get out, stop digging deeper. (In this case, into a reductio . . . ) Notice onlookers: this functions semantically and metaphorically -- ways that algorithms simply do not; but we do, precisely because we have real intelligence and foresight and insight and imagination. All of which are self-evidently plain to the point where the rejection lands the objector in repeated absurdities.

2] since ID theorists disagree on whether computers can have foresight. Last I checked, CJY boils down to saying that the smarts in a computer are put there by the programmer, i.e. they are not native to the AI but to the programmer. That is, he is not substantially different from me or Tim. To check, I keyed "foresight" into my find feature, and this at 77 is a good excerpt on CJY's basic view:
Foresight is self evident, since we [i.e known, conscious, self-aware intelligent agents] all experience it every day. We use our foresight to imagine a future goal that does not yet exist and then work to produce that goal. In many cases, these goals are neither best definable by law nor by randomness.
In 86, he goes on to make a remark that is probably being taken out of context:
Second, AI systems are called *artificial intelligence* for a reason. They model future possibilities and work toward a future target which does not yet exist, as in the chess program example. Thus, they have the most rudimentary form of foresight without being conscious of their foresight. They have artificial, as opposed to conscious or “real”, foresight. Finally, AI systems are more than just law and chance. As already explained, and completely ignored by yourself, AI fundamentally consists of programming . . . KF unfortunately had to remind you [JT] of the very simple fact that the programming necessary for an AI system comes from a programmer using his foresight (one aspect of intelligence).
So, CJY EXPLICITLY agrees with me that there is no inherent foresight in AI systems, just what is written into a program per its algorithms. In short, there is no gotcha there, apart from taking words out of context and twisting them into what they plainly do not mean, the better to project to unwary onlookers the idea of a disagreement on substance between ID proponents. (50c gets you $5 that we will hear of this utterly irreconcilable disagreement that shows how ID is blah blah blah again. Just like with so many other artfully constructed strawmen based on quote mining and used against ID. Sorry if I sound disgusted at such distortions and the way they have been used to mislead and manipulate, but I am. For good reason.) Rob, you are beginning to sound here like the squid that squirted ink to get away behind it.

3] Disagreement resolved. Thank goodness. Not so fast. Computers execute under the constraints of physics, but their structure, function and information content reflect directed contingency. A point that is decisive on the difference between chance and contingency, and the onward distinction between undirected and directed contingency. And at no point have I or anyone else in this thread supportive of ID said any differently. Recall, most of us work with PC hardware and/or software, so we know.

4] your verbose ad nauseam repetition of points I've already addressed. Please excuse me if I take a break from you for a while. Translated: running away behind a cloud of ink, laced with ad hominems, onlookers; while pretending to have cogently answered the issues on the merits. In fact -- just scroll up to check -- Rob has not COGENTLY addressed the issues he has been faced with, not only from the undersigned but from several others. Also, if short remarks are made, they are wrenched and abused rhetorically; if longer, more methodical ones are made, they are ducked and the author is attacked. That does not sound like the approach of someone who knows he has a serious case on the merits. Cho man, do betta dan dat! Shaking the head sadly, GEM of TKI
kairosfocus
March 7, 2009 at 03:50 AM PDT
Rob, appreciative of your comments as always (not in defense of me, just in general). Collin [26]: if you really want to learn something, read Rob's posts, not mine.

-------------

A lot of you seemed really irritated by my comment that I.D.'s conception of intelligence is indistinguishable from randomness. And I have gone through the posts of the last couple of days and the following is what I want to remark. (And if you're looking for absolute clarity and precision in terminology, go to Rob's posts, not mine.)

Let's talk about binary strings. A binary string could be generated by pure randomness, as could be modelled by flipping a coin multiple times. A binary string could also be generated by some program. I'm thinking of a program that takes some arbitrary binary string as input and halts at some point and produces a binary string as output. Note that the program itself is also a binary string. Let's also consider the program and a specific input to it together as a single entity. (And for the computer it's running on, we're imagining some very simplistic TM-equivalent device.) And then according to ID, there is a third type of causality called "intelligent agency" that can output binary strings. (Although, actually, I'm not sure I would consider randomness a cause as such. It's certainly not an explanation.)

I was always under the impression that anything that's a program cannot be intelligent agency in the I.D. conception. So I'll assume that for the moment. I believe Atom said that nondeterminism could be either randomness or intelligent agency. So, if intelligent agency is one type of nondeterminism then it cannot be described via a program. This is in contrast to what CJYMan has said I believe, that AI programs can be examples of intelligent agency. (KF, I'm not recalling at the moment where you stood on this.)

Now for clarity, let's review: There is no binary string that cannot be output by a program. In fact for any given binary string, there are an infinite number of programs that will output it. There is no binary string that cannot be the result of pure randomness either. There are certain classes of strings which would be extremely improbable to occur by pure randomness, for example strings that are algorithmically compressible. Actually this is the only class of strings that I personally understand why their probability is small. If someone has demonstrated that FSCI for example is highly unlikely to occur by randomness, I am not aware of it, (not necessarily denying it though). Note that if you're talking about compressibility, that would definitely include strings exhibiting the sort of pattern-simplicity that I.D. seems to equate to "Law". And then according to I.D., there is no binary string that cannot be output by intelligence.

Let me talk about law for a moment. I.D. wants to associate "law" exclusively with processes characterized by algorithmic simplicity. So in terms of programs, I believe that I.D. would say that laws are to be associated with very small programs. All (TM) programs are deterministic. I would say that "necessity" refers to processes characterizable by programs. It would seem to me to be completely arbitrary to say that in order to be considered "law", programs cannot exceed a certain degree of complexity (i.e. length). It seems that if someone says that laws only refer to very simple programs, and that it's possible for a very complex program to be an Intelligent Agent, then that means it's possible for an Intelligent Agent to be deterministic. But I am going to say that any sort of program can be characterized as "Law", not just really simple programs.

Note that every program has a description, namely the program itself. You can allude to some program, because that program itself is a binary string (and once again, I think it's helpful to consider the input to the program as part of the program). If a process is deterministic, you could say, "Here is a description for that process", and point to a binary string which is a program characterizing that process. Neither a process that is random nor a process that is an Intelligent Agent has any sort of description at all. A random process is not characterizable by a program, and thus does not have a description. Same with Intelligent Agency. This means that you cannot have an English-language description of something purported to be an intelligent agent. You could never have a course, "Introduction to Intelligent Agents", that purported to describe how intelligent agents function. No such description is possible, because no program describes an intelligent agent. And you can't characterize an Intelligent Agent on the basis of its output, because that is what a program is - a concise characterization of the output of a process.

Note also that there is really only one type of randomness. (We're not modelling randomness mixed with determinism here, btw.) If process A is random and generates strings, and process B is random and generates strings, process A and process B are indistinguishable. You don't have varieties of pure randomness. It would seem that would have to be the case with Intelligent Agency as well. (For this post at least, I will refrain from putting quotes around "intelligent agency", but I don't think a human being for example is an intelligent agent. (OK Tim, KF, CJYMan, et al., time to go into paroxysms over that last remark.)) It would seem that there would be no way to distinguish any two entities said to be intelligent agents. With two purely random processes there is no pattern by which you can distinguish their output. Also with intelligent agency, if it were possible to look at some arbitrary string and say, "OK this was definitely output by Intelligent Agent X" or "This was definitely not output by Intelligent Agent X", it would imply you had a program characterizing Agent X, which would mean it was not an intelligent agent. So there's no way to distinguish various intelligent agents from each other.

And as I acknowledged, some people here want to say very complex programs (but not simple ones) can be intelligent agents. That appears to be a minority position in I.D. Most of you want to say Agency is a flavor of nondeterminism. (And Atom, I believe at one point you said a nondeterministic FSA modelled a human, but a nondeterministic FSA would be chance + necessity.) Not sure here how I want to continue this discourse at the moment. But hopefully it starts to provide a framework for people in this forum to understand where I'm coming from.

Actually I do remember what more I need to say: I.D. will make comments to the effect, "We do know that intelligent agents routinely output [fill in the blank] FCSI, CSI, symbolic programs, etc. So if we see such things and don't know of a mechanism (i.e. program) that caused it, we are justified in assuming it was output by intelligent agency." But as I said, there isn't any string that can't be output by a mechanism. I.D. has not provided a basis for establishing that it's more probable that some string was output by Intelligent Agency than a mechanism, namely because Intelligent Agents don't have descriptions. However, if you have two programs, and one is a lot more complex than the other, you might have the basis for saying the more complex one is more likely to have generated some particular type of complex binary string. And once again, you can't characterize an Intelligent Agent on the basis of its output, because that is what a program is - a concise characterization of the output of a process. If a human being is characterizable as a binary string (which seems reasonable given DNA) then we know there exists some mechanism (i.e. program) that will output it. So it's not clear on what basis you could rule out some physical process in the universe that could have caused life.

Now what is of note here, and in fact I have noted it many, many times without acknowledgement, I might also add it being one of the very few things I mention repeatedly, is the following: If you have some program-input f(x) that outputs y then f(x) equates to y. So any mechanism-input that is proposed as an explanation for y equates to y. So if you have some disparate, diffuse set of numerous factors existing out there in the universe that collectively resulted in life, that set of factors would still equate to life. As far as I know the significance of this eludes everyone in this forum but me.

But to continue: say that some binary string exists and that somehow it can be ruled out that it was the output of a mechanism. (Consider for example conditions at the beginning of the physical universe.) So all you have left for an "explanation" is either randomness, or if we are to accept I.D., also possibly "Intelligent agency". What difference does it make which one of those two you pick? Actually there is a third alternative: the binary string in question could have always existed (and thus need not have been "caused" by randomness or "Intelligent Agency").
JT
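[Two of the claims above can be made concrete with a standard counting observation: any string is the output of some program (at minimum, the trivial program that just prints it), yet strings compressible by more than c bits make up fewer than 1 in 2^c of all strings of a given length, which is the sense in which pure randomness almost never produces them. A small Python sketch; zlib is used here only as a crude, assumption-laden stand-in for algorithmic compressibility, which is not itself computable, and the margin of 8 bytes is an arbitrary choice.]

    import os, zlib

    def looks_compressible(data: bytes, margin: int = 8) -> bool:
        # Crude proxy: does zlib shrink the string by at least `margin` bytes?
        return len(zlib.compress(data, 9)) + margin <= len(data)

    trials = 10_000
    hits = sum(looks_compressible(os.urandom(128)) for _ in range(trials))
    print(f"{hits} of {trials} random 128-byte strings looked compressible")
    # Expect hits at or near zero: compressible (pattern-bearing) strings are a
    # vanishing fraction of all strings of a given length.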
March 7, 2009 at 03:27 AM PDT
"The Halting Problem" Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. We say that the halting problem is undecidable over Turing machines. (from Wikipedia) ROb, if I read you right, you said that my argument was fallacious, but I gave an example where humans showed foresight in agreeing to call a draw in a king on king chess match. Somehow the humans were able to do something no computer could do, ever. They were able to innovate, or create. I may not have been clear when I wrote that the two computers playing each other had ALSO traded down to king v. king. In that case, though, they have no foresight whatsoever in terms of the outcome of the game; they chase and chase and chase. They never checkmate because, well for one thing, they are precluded from occupying adjacent squares (not that that would do any good). Now, I don't know as much about computers as I do about chess, but I think my little scenario holds up just fine. Two absolute beginners in chess will, without any coaching, come to this conclusion. Two computers NEVER will. Yes, never is a strong word, but I'll rely on an application of Turing's proof over Turing machines. Hmm. This suggests that those two humans are somehow qualitatively different than the computers; perhaps they are not merely physical embodiments of Turing machines. Wait a minute! That almost speaks of some type of agency, something final and outside of strict materialism -- a divine toe in the door?Tim
March 7, 2009 at 12:22 AM PDT
kairosfocus:
A computer has nothing resembling foresight, as Tim has just exemplified.
Tim's example was fallacious, as I showed, unless you think that intelligent agents can solve the halting problem. Is that your position? And since ID theorists disagree on whether computers can have foresight (see CJYMan for a position contrary to yours and Tim's), I'll assume that ID theory hasn't resolved the question.
First, do you understand computer architecture, classically “the machine language’s view of the system”?
Not that it matters, but I have an MS in electrical engineering, specifically computer architecture. You can use as technical a language as you like on the subject.
It is simply a machine that mechanically processes programmed instructions at bit level based on arrangements of logic gates and registers and clock and control signals, to give controlled predictable outputs.
Which is exactly what I mean when I say that computers operate according to physical laws, or chance and necessity. Disagreement resolved. Thank goodness.
If you do not understand something that basic, sorry, but you are in no position to seriously discuss the issues you have raised. Instead you need to do some 101 level reading.
Given that I agree with you on the physical operation of computers, and with CJYMan on whether computers can have foresight, your condescension seems ill-advised. And annoying, as is your verbose ad nauseam repetition of points I've already addressed. Please excuse me if I take a break from you for a while.
R0b
March 6, 2009 at 03:59 PM PDT
ROb: "A common point of criticism by the scientific community is ID's vagueness (cf "written in jello"). IMO, the definitions offered in the glossary lend weight to that criticism."

1. http://cjyman.blogspot.com/2008/02/is-dr-dembskis-work-written-in-jell-o.html 2. Which definitions do you have a problem with, and why? You do realize that the same can be said of any scientific theory -- words can only be defined so much until you are left with a circularity of definitions and/or some concepts which are not necessarily clear and may seem to border on the metaphysical. I.e.: define "force" and define "matter"; or, looking in the direction of evolution, define "random" in "random mutation and natural selection." Oh, and please define these terms without relying on Webster.

ROb: "They're great as a basis for endless philosophical debates, but they don't work as a basis for a scientific theory. The scientific community is waiting for a technical treatment of ID theory, not hand-holding explanations by way of examples and Webster definitions."

1. I've already dealt with why examples -- observations -- are so important above. You have not countered with anything but assertion. 2. CSI and active info provide excellent technical treatment, as does any research into intelligence -- the modeling of the future and generation of targets -- whether artificial or conscious. Fundamentally, though, all you need to do is realize that you do indeed possess foresight and that you use your foresight to generate certain patterns, such as these comments, which wouldn't exist if your foresight did not exist. Do you agree or disagree? Will chance and law, *absent foresight*, produce these comments? That is what the basic hypothesis of ID relies upon. Then, the math as laid out by Dembski provides a rigorous method of detecting previous intelligence (foresight) and shows why law and chance will not purchase CSI. All the examples -- observations -- back up the ID hypothesis, and there are no counterexamples.

ROb: "By focusing on this blunder, we miss the meat of the issue, which is that ID seems to portray intelligence as random."

I'm sorry, but I missed your argument. Please show me how the modeling of future possibilities, the generation of targets, and the harnessing of law and chance to generate those targets can be portrayed as "random." You may have meant that the outcome of intelligence can appear random upon first inspection, but that has already been dealt with earlier, as both randomness and intelligence produce highly contingent patterns. It's just that intelligence produces highly contingent patterns which are functionally/meaningfully specified and use up all probabilistic resources (highly improbable), whereas chance/randomness does not.

ROb: "Computers operate according to physical laws."

Yes, once they are put together and programmed, they will follow physical laws. Everything within nature must follow physical law. Still, they are not "only" chance and law. They also contain that non-lawful and non-random programming of states. Hmmmmm ... then that means that there exists something within nature which is both non-lawful and non-random. In this case it's called instructional information, and it is derived from foresighted systems. There is a difference between "following" law and "incorporating chance" versus being "reducible" to *only* law and chance, as I've just shown above.

ROb: "ID needs to spell out the distinction between "directed" and "undirected" in a scientific fashion, rather than a Webster definition."

You mean something like "directed" = "modeling future possibilities, generating targets, and harnessing law and chance to generate those targets", and "undirected" = "the lack of such modeling and targeting"? Again, refer to the top of this comment re: definitions.

ROb: "If a computer program uses data to predict the consequences of various courses of action, and then takes the course of action with the most favorable predicted consequence, does that count as "directed"? How about "intelligent"?"

That counts as artificial intelligence, as I have explained above. Conscious foresight is actually able to envision future states, though, and that is what ID refers to as intelligence -- I would personally qualify that as conscious or "true" intelligence, as opposed to artificial or non-conscious intelligence. We observe that all AI requires true intelligence in a complete causal chain. I'm honestly not seeing the point that you are trying to make re: "intelligence," "randomness," and "law."
CJYman
March 6, 2009 at 02:32 PM PDT