Uncommon Descent: Serving The Intelligent Design Community

An Eye Into The Materialist Assault On Life’s Origins


Synopsis Of The Second Chapter Of Signature In The Cell by Stephen Meyer

ISBN: 9780061894206; ISBN10: 0061894206; HarperOne

When the 19th-century chemist Friedrich Wöhler synthesized urea in the lab using simple chemistry, he set the ball rolling that would ultimately knock down the then-pervasive 'vitalistic' view of biology. Life's chemistry, rather than being bound by immaterial 'vital forces', could indeed be artificially made. While Charles Darwin offered little insight on how life originated, several key scientists would later jump on Wöhler's 'Eureka'-style discovery through public proclamations of their own 'origin of life' theories. The ensuing materialist view was espoused by the likes of Ernst Haeckel and Rudolf Virchow, who built their own theoretical suppositions on Wöhler's triumph. Meyer summed up the logic of the day:

“If organic matter could be formed in the laboratory by combining two inorganic chemical compounds then perhaps organic matter could have formed the same way in nature in the distant past” (p.40)

Darwin's theory generated the much-needed fodder to 'extend' evolution backward to the origin of life. It was believed that "chemicals could 'morph' into cells, just as one species could 'morph' into another" (p.43). Appealing to the apparent simplicity of the cell, late 19th-century biologists assured the scientific establishment that they had a firm grasp of the 'facts': cells were, in their eyes, nothing more than balls of protoplasmic soup. Haeckel and the British scientist Thomas Huxley were the ones who set the protoplasmic theory in full swing. While the details expounded by each man differed somewhat, the underlying tone was the same: the essence of life was simple, and thereby easily attainable through a basic set of chemical reactions.

Things changed in the 1890s. With the discovery of cellular enzymes, the complexity of the cell's inner workings became all too apparent, and a new theory had to be devised, one that no longer relied on an overly simplistic protoplasm-style foundation, albeit one still bounded by materialism. Several decades later, finding himself in the throes of a Marxist socio-political upheaval in his own country, the Russian biologist Aleksandr Oparin became the man for the task.

Oparin developed a neat scheme of inter-related processes involving the extrusion of heavy metals from the earth's core and the accumulation of reactive atmospheric gases, all of which, he claimed, could eventually lead to the making of life's building blocks: the amino acids. He extended his scenario further, appealing to Darwinian natural selection as a way through which functional proteins could progressively come into existence. But the 'tour de force' in Oparin's outline came in the shape of coacervates: small, fat-containing spheroids which, Oparin proposed, might model the formation of the first 'protocell'.

Oparin's neat scheme would in the 1940s and 1950s provide the impetus for a host of prebiotic synthesis experiments, the most famous of which was that of Harold Urey and Stanley Miller, who used a spark-discharge apparatus to make three amino acids: glycine, alpha-alanine and beta-alanine. With little more than a few gases (ammonia, methane and hydrogen), water, a closed container and an electrical spark, Urey and Miller had seemingly provided the missing link for an evolutionary chain of events that now extended as far back as the dawn of life. And yet, as Meyer concludes, the information revolution that followed the elucidation of the structure of DNA would eventually shake the underlying materialistic bedrock.

Meyer’s historical overview of the key events that shaped origin-of-life biology is extremely readable and well illustrated.  Both the style and the content of his discourse keep the reader focused on the ID thread of reasoning that he gradually develops throughout his book.

Comments
PS: VJT's images are very useful. George Washington's facial image is plainly far more specifically facial and accurate to an individual than the Old Man of the Mountain. Analysis in terms of polygons and colour-shading would be relevant, as well as the now fairly common analysis of faces. (Cf the interesting discussion here on the average = beautiful thesis. It turns out the REALLY "gorgeous" face is not average, though the average is relatively attractive.)
kairosfocus
July 24, 2009 at 04:14 AM PDT
It is plain that the GA was written by an intelligent programmer, and that when executed, it mechanically pursues a path based on an algorithm that is just as much a matter of intelligence. Its output will exhibit FSCI, and that will be directly traceable to the action of the author and the designer of the PC that ran it.
Yes, as I have been saying ALL ALONG, GA's are designed. According to you they generate this FSCI, except they don't, because they are actually just transferring it from the person who wrote the GA? I'm still unclear on exactly how you measure or quantify this transference. If I run the same experiment several times, will it always produce the same level of FSCI? Can it produce more than the thing that created the GA? Is FSCI additive or cumulative? Can I use a GA to generate a billion robots that assemble to make a thing with more FSCI than me? I'm also still unclear on how you would go about calculating the FSCI in a GA. I'm not trying to be obtuse; I just want you to demonstrate that it is possible and that you understand how it can be done. If you can't then it is not a problem; I would just rather not rely on your promise that it is obvious.
Now, rather minor adjustments of facial characteristics are enough to destroy recognisability as a specific individual, i.e. — surprise! — we have here a config space topology exhibiting definite islands of function.
You have one island of function here, that of a human face. Making minor modifications to it will turn it into a different human face. Whether it is recognizable as a person depends on who is looking at it. If you are trying to make allusions to nature here then it fails, because nature is not trying to achieve a fixed single target such as a particular person's face. All you have done by setting this as a target is define a fitness heuristic that is 'un-natural' and explicitly creates the islands of function you are so obsessed with. Consider how the heuristic changes if you allow anything recognizable as any kind of face from the animal kingdom.
Similarly, in a GA, its code is not allowed to vary at random as part of the normal execution. Nor that of the underlying operating system. And, neither the OS nor the GA — both of which massively exhibit FSCI
Yes, the GA operates within the constraints imposed by the computational substrate, just as the self-replicating organisms in nature operate within constraints imposed by the laws of physics. In this way the computational substrate and simulation environment, complete with GA, are analogous to the physical world. Trying to argue that they are bad analogies because the rules that constrain them don't randomly change is like saying that the laws of physics should randomly change. Clearly they don't, so neither should those constraints when used in a model. This is just incoherent distraction.
Recall, at just 1,000 bits of functionally specific information-bearing capacity, we are dealing with a situation where the resources of the entire observed cosmos would be unable to scan 1 in 10^150 of the possible configurations.
Yes - more incoherent irrelevance. The universe is entirely parallel; it doesn't do things 'one step at a time' like a computer. Not all configurations need to be 'scanned', as cumulative processes can 'search' much more effectively in the types of search spaces found in nature. If a GA operating in a simulation can 'generate' FSCI then why can't natural selection operating in the world generate FSCI? After all, I'm not arguing that the universe wasn't intelligently created, so if it was created then, just as with the GA, FSCI ought to be produced.
BillB
July 24, 2009 at 04:06 AM PDT
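For readers who want to check the arithmetic behind the 1,000-bit figure quoted in the exchange above, here is a minimal sketch, neither endorsing nor rebutting either side's argument. It assumes the 10^150 bound commonly derived, following Dembski, as roughly 10^80 atoms x 10^45 state changes per second x 10^25 seconds.

import random  # not needed here; standard library math only
from math import log10

configs = 2 ** 1000   # number of distinct 1,000-bit configurations
ops = 10 ** 150       # assumed ceiling on physical events in the observed cosmos

print(f"2^1000 is about 10^{log10(configs):.0f}")                     # ~10^301
print(f"searchable fraction: about 10^{log10(ops) - log10(configs):.0f}")  # ~10^-151, i.e. less than 1 in 10^150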
VJT & Jerry: Thanks. You both have valid points.

Others: Very little of what has proceeded further is going anywhere other than what should be trivial or obvious. For instance, the program above is obviously in itself FSCI-bearing, as Jerry highlighted. It is functionally specific and complex beyond the complexity of 143 ASCII characters -- disturb it at random and see what happens to its executability, even at 5% probability of mutation of the individual character. (And, a basic FSCI metric relevant to what was just inferred was long since provided above and in the linked, as well as in the Weak Argument Correctives, so BB is simply being willfully obtuse.)

It is plain that the GA was written by an intelligent programmer, and that when executed, it mechanically pursues a path based on an algorithm that is just as much a matter of intelligence. Its output will exhibit FSCI, and that will be directly traceable to the action of the author and the designer of the PC that ran it. THIS INSTANTIATES THAT OBSERVED CASES OF FSCI WHERE WE DIRECTLY KNOW THE CAUSE ARE PRODUCTS OF DESIGN.

As to specification by compressibility, this is of course an allusion to the fact that a specification is as a rule simply describable, and as well to the Kolmogorov applications used by others in this general field.

In the case of Mt Rushmore in the first and second [digitised] photos, in both cases design would be resident in the nature and structural patterns of a photograph. A photo is a designed object. No false positive on "design" there. [And it is relevant to see that this is yet another case of the millions of cases of FSCI where the cause is independently known and turns out routinely to be intelligent. In short, distractive and dismissive rhetoric above notwithstanding, the challenge to identify a case of FSCI which is of known origin and has not originated in intelligent action still stands unmet.]

At the second level, the issue is the difference between a mountain in its more or less natural state -- which is a complex shape driven by law + chance [which would of course lead to non-inference to design -- complex but not functionally specific in any relevant sense] -- and in the modified state due to design by a sculptor. In the latter case, the mountain was modified to accord with the facial characteristics of four historically important individuals, which indicates a rather tight specification, which can be regarded as functional [though by "function" algorithmic or linguistic function are far more relevant to our concerns on e.g. OOL . . . note the red herring led out to an intended strawman rhetorical tendency here]. Now, rather minor adjustments of facial characteristics are enough to destroy recognisability as a specific individual, i.e. -- surprise! -- we have here a config space topology exhibiting definite islands of function. The bit content of Mt Rushmore needed to get the degree of precision of shape to be recognisable as portraits of specific persons will obviously be well beyond 1,000 bits. (As to the Old Man of the Mountain, this is more a function of our internal tendency to see faces -- socially and protectively important -- than of any particular shape; the shape being easily recognised as a pattern stemming from chance + necessity. The shape is complex but not particularly specific.)

If one were to rework Mt Rushmore to resemble a natural mountain, and one were to look at the mountain, it would seem to be complex but not particularly specific -- moderate random variations in its shapes etc would not destroy the class of pattern etc [regarding pattern as a type of function]. Thus, it would then not exhibit functionally specific constraints on its complexity, i.e. it would NOT be FSCI. Indeed, after the collapse of the "man" feature, the mountain in New Hampshire is still a mountain with the same general class of features; just one interesting feature used as found-object art has now eroded away.

However, say one were to digitise a picture of a natural mountain and use some of the binary code string to form a cipher. For instance, in the red-spectrum part of the code for the photo, use the least significant bit to store ASCII text, then take the difference between the natural mountain's picture and the modified picture to extract the text, i.e. steganography. In this case, the natural mountain as digitised in one particular image would be a specification, and functionality would be derived from subtle differences of a related image from it. However, if the original were to be allowed to vary at random, even moderately, the ability to extract information would soon be destroyed. Same, if the cipher-bearing transformed copy were to vary at random. That is, an island-of-functionality topology has now emerged. (Think of how in the US Colonial era, leaves were incorporated into runs of bank notes to foil counterfeiting: in effect a particular found-object pattern was now used as a specification, and divergence therefrom was proof of counterfeiting. The NH state quarter has some of that same functionality -- if a claimed quarter turned up with the Man significantly modified, it would have to be counterfeit.)

Rob is of course right that function is in a context, but that was not at issue; I trust the above on steganography brings out a little more of what I meant. In particular, random aspects, as used to function, are always used in a controlled fashion. E.g., lottery winning tickets and numbers must fit certain specifications. Similarly, in a GA, its code is not allowed to vary at random as part of the normal execution. Nor that of the underlying operating system. And, neither the OS nor the GA -- both of which massively exhibit FSCI -- arrived by random processes that rewarded observed algorithmic function. That, at a basic level, for obvious search-space exhaustion reasons. Which illustrates the issue FSCI raises -- not the improvement of function through whatever hill-climbing algorithm one may wish to use [even one that uses controlled randomness], but the arrival on the shores of an island of function in a vast sea of non-function. Recall, at just 1,000 bits of functionally specific information-bearing capacity, we are dealing with a situation where the resources of the entire observed cosmos would be unable to scan 1 in 10^150 of the possible configurations. So, until you get TO the shores of function, you have a problem of specifying an unintelligent mechanism that can cut down on the implications of mere trial and error in a vast sea of non-function.

GEM of TKI
kairosfocus
July 24, 2009 at 03:03 AM PDT
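A minimal sketch of the least-significant-bit steganography scheme described in the comment above: ASCII text is hidden in the low bit of a run of 8-bit "red channel" values, and random perturbation of the stego image destroys extraction. The pixel data is invented for illustration; no image library is assumed.

import random

def embed(pixels, text):
    # Write each bit of the ASCII message into the least significant
    # bit of successive pixel values (low bit first within each char).
    bits = [(ord(c) >> i) & 1 for c in text for i in range(8)]
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, nchars):
    # Reassemble characters from the stored least significant bits.
    return "".join(
        chr(sum((pixels[i * 8 + j] & 1) << j for j in range(8)))
        for i in range(nchars)
    )

red_channel = [random.randint(0, 255) for _ in range(64)]  # stand-in "natural mountain" data
stego = embed(red_channel, "HIDDEN")
print(extract(stego, 6))   # -> HIDDEN

# Flip low bits at random and the message degrades almost surely,
# which is the island-of-function point the comment is making.
noisy = [p ^ random.choice([0, 1]) for p in stego]
print(extract(noisy, 6))   # -> garbled with high probability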
Mr Jerry,
I did not know that God was writing comments on this site. Did He also write the code that Nakashima gave us?
While I think you are expressing it somewhat jokingly, this question of where to assign credit (or blame) for FSCI is key. Do we give credit to the first cause or to the last cause? If the first cause, then I understand naming God as the author of the FSCI. If the last cause, then the GA itself is the author of the FSCI, not of itself, but of the Pop data array inside it. (Note that the bits in Pop come from the random function or are copied from other places in Pop. The ultimate source of every bit in Pop is random.)
Nakashima
July 23, 2009 at 07:39 PM PDT
Mr Vjtorley,
Hi. The idea you describe would be appropriate for estimating the FSCI of a sculpture - the 3D object itself. What KF-san has been discussing is simply a 2D image, whether of Mt Rushmore, the Mona Lisa, a block of text or a screenful of static. Your idea is similar to how objects are often represented in computer graphics: the surface is approximated by a set of triangles. If you arrived at this yourself, I congratulate you.
Nakashima
July 23, 2009 at 07:29 PM PDT
Nakashima,
The code you presented, when implemented on an appropriate computer, is FSCI. It is information that is complex and specifies another entity which has a function.
Sparc,
"Indeed, every one of your comments contains FSCI. Just like KF's posts, the FSCI content of your comments is a constant that equals God."
I did not know that God was writing comments on this site. Did He also write the code that Nakashima gave us?
jerry
July 23, 2009 at 06:16 PM PDT
Mr Nakashima, you wrote:
Let me ask you about a thought experiment. Let’s take two photos, taken from the same vantage point, the same season, the same time of day, of Mt Rushmore, the first in 1925 (before the sculpture) and the second in 1941 (after the sculpture was completed).
FYI: Here is a picture of the Old Man of the Mountain in New Hampshire, before it was destroyed. http://www.funnycoke.com/om4.jpg Here is a photo of George Washington's face under construction on Mt. Rushmore, in 1932. http://www.imageenvision.com/md/stock_photography/men_constructing_mt_rushmore.jpg To calculate the bit content, you could treat each face as a set of equal-sized planes (like the faces of a pyramid, a cube, or an octahedron) and then calculate how many of these planes you'd need to generate a pretty good likeness of each face, with the same amount of detail. Each plane has its own mathematical equation. You could assign one bit of information to each plane. You then asked about morphing from one to the other.
In which of these images is it correct to infer design, and in which of them is it incorrect?
The number of bits in the Old Man of the Mountain is clearly within the scope of nature to generate. Let's say (very generously) that a structure of equivalent specified complexity to the Old Man of the Mountain is generated by natural processes, somewhere in the world, once every year. We can use that fact to calculate the probability of obtaining N bits. George Washington's face has considerably more than N bits. You should be able to calculate the number of bits that natural processes might generate once every 4,536,000,000 years (the age of the Earth). That's your cutoff point.
vjtorley
July 23, 2009 at 03:56 PM PDT
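A minimal sketch of the cutoff arithmetic proposed in the comment above, assuming, as the argument implicitly does, that each additional bit of specified complexity halves the frequency of natural occurrence. The value of N here is hypothetical; the age-of-the-Earth figure is the one quoted in the comment.

from math import log2

N = 20                        # hypothetical bit content of an Old Man-class likeness
age_of_earth = 4_536_000_000  # years, as quoted in the comment above

# If an N-bit structure arises naturally about once a year, a structure
# arising once in the Earth's whole history can carry roughly
# N + log2(age_of_earth) bits before exceeding the proposed cutoff.
cutoff = N + log2(age_of_earth)
print(f"cutoff ~ {cutoff:.1f} bits")  # N + ~32.1 bits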
Nakashima,
Ah, excellent! If I'm not mistaken, it's the microbial GA! I was considering posting the same thing myself. I was wondering how the representation of the algorithm will affect the FSCI. Perhaps your pseudocode could be compared to a plain text description and a compiled executable. Should they all contain the same amount?
BillB
July 23, 2009 at 12:09 PM PDT
Mr Jerry,
If you want to discuss the FSCI of a GA algorithm, I thought it would help if we had one to reference, so I wrote this pseudocode during lunch. It is a steady-state GA with uniform crossover. It is about the smallest GA I could write, but it should still work.
Nakashima
July 23, 2009 at 10:50 AM PDT
PopSize = 1000
IndSize = 1000
MaxTime = 1000
mutationRate = 0.05
allocate Pop[PopSize, IndSize], Fitness[PopSize]
for i = 1, PopSize
    for j = 1, IndSize
        Pop[i, j] = rnd(0, 1)
    next j
    Fitness[i] = evaluate(Pop[i, *])
next i
allocate NewInd[1, IndSize]
for t = 1, MaxTime * PopSize
    for j = 1, IndSize
        p1 = rnd(1, PopSize)
        p2 = rnd(1, PopSize)
        if Fitness[p1] > Fitness[p2] then newBit = Pop[p1, j] else newBit = Pop[p2, j]
        if rnd(0, 1) < mutationRate then newBit = not(newBit)
        NewInd[1, j] = newBit
    next j
    p3 = rnd(1, PopSize)
    Pop[p3, *] = NewInd[1, *]
    Fitness[p3] = evaluate(Pop[p3, *])
next t
evaluate( Ind ) { }
Nakashima
July 23, 2009 at 10:45 AM PDT
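For readers who want to run it, here is a direct Python translation of the pseudocode above. It is a sketch under two assumptions the original leaves open: evaluate() is given a stand-in "count the 1-bits" fitness (the pseudocode's evaluate(Ind) { } is empty), and rnd(0, 1) in the mutation test is read as a uniform draw on [0, 1). Parameters follow the pseudocode; shrink them for a quick test run.

import random

POP_SIZE, IND_SIZE, MAX_TIME, MUTATION_RATE = 1000, 1000, 1000, 0.05

def evaluate(ind):
    # Stand-in fitness (the original is left empty): count the 1-bits.
    return sum(ind)

# Random initial population, as in the pseudocode's first loop.
pop = [[random.randint(0, 1) for _ in range(IND_SIZE)] for _ in range(POP_SIZE)]
fitness = [evaluate(ind) for ind in pop]

for t in range(MAX_TIME * POP_SIZE):
    new_ind = []
    for j in range(IND_SIZE):
        # Per-bit tournament: two random parents, the fitter one donates the bit.
        p1, p2 = random.randrange(POP_SIZE), random.randrange(POP_SIZE)
        bit = pop[p1][j] if fitness[p1] > fitness[p2] else pop[p2][j]
        if random.random() < MUTATION_RATE:
            bit = 1 - bit
        new_ind.append(bit)
    # Steady state: the child overwrites a randomly chosen population slot.
    p3 = random.randrange(POP_SIZE)
    pop[p3], fitness[p3] = new_ind, evaluate(new_ind)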
kairosfocus:
the randomness has no meaning or function in itself.
Can you name anything that has function "in itself"? Random sequences are useful in many situations. That you deem them non-functional seems rather arbitrary.
Complex, but not specific in itself.
In your always linked, you say that specificity can be identified through functionality. Random sequences serve several important functions.
And, to move from RA decay to “encoded” string*, we have brought in: algorithms, codes, processing and storage, etc; all of which are the work of intelligent designers.
Ah, so information in AGTCCTGACTTCAGGGCT comes from the intelligent designer who encoded the genetic sequence, not from the actual DNA. And here I was thinking that functional DNA was an example of FSCI.
The onward process of intelligently using the sequence, however generated, is what would give it specificity and function. AFTER that process has been completed, the originally random sequences have now become specific, complex and functional.
So random phenomena do have FSCI, but only after they're used for something. So why do you credit the FSCI to the consumer rather than the producer? Is there any reason other than the fact that the latter would invalidate your claims?
Let's see, how many bits does it take to code and store the analysis of tidal motion under gravitational forces? How many to store the reports on the ecological function of tides? [In short, the INFORMATIONAL aspects come from outside the tidal system and processes.]
That makes no sense. It sounds like you're saying that there is no information in tidal behavior until we record and report that previously non-existent information. Again, why is this not also true for biological information?
They are not INFORMATIONALLY functional in themselves.
Again, can you name anything that is informationally functional, whatever that means, in itself?
Again, the information and its specified complexity and functionality are not the product of the tides as such but of the process that an intelligent agent applies.
By that logic, you're creating the information in this comment as you view and mentally process the pixels. Any critiques of this comment should therefore be directed to yourself.
R0b
July 23, 2009 at 09:40 AM PDT
Jerry:
By the way, FSCI is obvious, and in order to see where it is in a GA you would have to have the algorithm itself to examine. It exists very clearly in language, computer programming and in DNA. All are clear examples. Bringing up other examples is sometimes problematic. Something may or may not have FSCI, but there are probably examples where it is not obvious. But it does not exist in nature, and that is the point. If one wants to pursue it to other things, go right ahead, but it is not relevant to the basic argument. All the other attempts are just extending it to intelligent processes, not natural ones.
Indeed, every one of your comments contains FSCI. Just like KF's posts, the FSCI content of your comments is a constant that equals God.
sparc
July 23, 2009 at 09:18 AM PDT
Jerry,
By the way, FSCI is obvious, and in order to see where it is in a GA you would have to have the algorithm itself to examine. It exists very clearly in language, computer programming and in DNA. All are clear examples. Bringing up other examples is sometimes problematic. Something may or may not have FSCI, but there are probably examples where it is not obvious.
This is the point, isn't it? The reason why I'm arguing here is that FSCI isn't obvious, and with these ambiguities I can't just take your word for it when you claim:
But it does not exist in nature ...
It would help if you can give me a demonstration of this:
in order to see where it is in a GA you would have to have the algorithm itself to examine.
Pick whichever one you want; there are plenty of examples on the web.
BillB
July 23, 2009 at 09:07 AM PDT
BillB,
I have been here 4 years and this is my analysis of what passes for discourse here. I have only found one (maybe a second) anti-ID person here in that time who did not fit the description I have made. At first there are attempts to be reasonable, but after time they all drift into the same behavior pattern.
By the way, FSCI is obvious, and in order to see where it is in a GA you would have to have the algorithm itself to examine. It exists very clearly in language, computer programming and in DNA. All are clear examples. Bringing up other examples is sometimes problematic. Something may or may not have FSCI, but there are probably examples where it is not obvious. But it does not exist in nature, and that is the point. If one wants to pursue it to other things, go right ahead, but it is not relevant to the basic argument. All the other attempts are just extending it to intelligent processes, not natural ones.
jerry
July 23, 2009 at 08:08 AM PDT
KF-san,
PPS: Nakashima-San: there are many real life situations that give different equally valid metrics for phenomena. At a very simple level, think about how we measure angles.
In these cases, we know how to convert between metrics. We can convert between degrees and radians, as you say. So it would very much help your case to show that these three different metrics are convertible, or explain why they are not.
Nakashima
July 23, 2009 at 07:47 AM PDT
KF-san,
PS: Contrast a screen-full of white noise "snow": complex yes, specific — no. Vulnerable to random perturbation — no. So, the specific variable would set it to zero on the simple FS Bits metric. (Cf here granite vs DNA on Orgel's remarks.)
Well, this is exactly why I asked for your help. Based on your definition, viz. b] Let specificity [S] be identified as 1/0 through functionality [FS] or by compressibility of description of the information [KS] or similar means. I set S = 1 due to the low compression ratio. I said so previously and asked for your confirmation that this was appropriate.
Let me ask you about a thought experiment. Let's take two photos, taken from the same vantage point, the same season, the same time of day, of Mt Rushmore: the first in 1925 (before the sculpture) and the second in 1941 (after the sculpture was completed).
1 - Do you agree that 'design detection' in the first photo is a false positive?
2 - Do you agree that 'design failure' in the second photo is a false negative?
Now, using Photoshop or some similar tool, we create a series of morphs between the two photos, let us say 98 intermediates at 1% intervals. In which of these images is it correct to infer design, and in which of them is it incorrect?
Nakashima
July 23, 2009 at 07:44 AM PDT
Jerry, well done! There's nothing like a resort to personal attack to help solidify your position in a debate ;)
BillB
July 23, 2009 at 07:05 AM PDT
In every case of FSCI — which is observable, and measurable — where we know the source independently, we see that its source is in intelligence.
I'm still waiting for you to show me how to measure the FSCI of a GA. Without knowing how to do something like that I'm not sure how to apply it to anything else, particularly as your definition of FSCI seems to extend to any observation of nature. ----
Do you not see that first the Mona Lisa or Rushmore have in them particular functions and specification?
If I were to take my stonemason's tools and carefully re-design and re-work Mt Rushmore so that it looks like a natural mountain, would it still contain FSCI? In fact, if I designed a whole planet with oceans and mountains specifically so that life would evolve, would that planet contain FSCI, and if so how much? And how would you tell the difference between my planet and one that formed naturally, without having prior knowledge that a designer was at work?
... And such digital maps or pictures will in every observed case of known origin be DESIGNED. What is so hard to see or accept about that?
Speaking for myself: nothing, you are quite right; every digital map or picture that is known to have been designed has been designed. Of course, I am using your definition of design, which would include pictures generated algorithmically or with GA's. We know that it is possible to create mechanisms that can create pictures, which I presume will contain FSCI. BTW, a screen of what looks like white noise could be a representation of a large encryption key, which would be very vulnerable to perturbation. Without knowing the cause of the signal, how do you tell if it has FSCI? I don't actually have a problem with the idea of an intelligent cause to the universe, but I still don't buy your claims that this FSCI you argue is in living systems can't be gathered from the environment and must be placed there by an agency.
BillB
July 23, 2009 at 07:00 AM PDT
"Do you not see that first the Mona Lisa or Rushmore have in them particular functions and specification?" kairosfocus, they all see. What they do is sit around and think how they can make up something that will confuse the issue. They are not driven by desire to understand, only confuse people as best they can. That is why the interesting thing is why grown men do this. Many of these are not children or young adults but supposedly mature adults and they actively engage in this behavior. Have some pity on them. There must be something strange or wrong with them to engage in such behavior. I have relatives who are like them, in their 30's and 40's and some older who do not lead serious lives and find amusement in making other people unhappy. I have other relatives who are serious and leading very productive lives with families who would not waste a second on such juvenile behavior. When the pro ID people here try to be an adult with them, you send them off to find another bit of non relevant trivia that they will hope confuse the issue. They are a rather pathetic lot but they are here so most of the pro ID people feel the need to deal with them. But it just feeds them. It is like the bulk spam servers looking for those who will answer the inane emails sent out. By answering them you are only encouraging them to send out more spam and try to screw up your computer. Think of the anti ID people here as the spammers. If we charged them .01 cents a word to post a comment here, we would see the last of themjerry
July 23, 2009 at 06:06 AM PDT
kairosfocus,
mechanism that implements actions step by step per the design of an external intelligent agent.
I believe what you just described fits the description of an electronic computing device, like what we normally use for information processing. It is all about information processing, right? I am afraid I don't quite get your point.
Cabal
July 23, 2009 at 05:44 AM PDT
PPS: Nakashima-San: there are many real-life situations that give different equally valid metrics for phenomena. At a very simple level, think about how we measure angles. Similarly, think about different digital encodings for images or other analogue phenomena, which is actually a metric process. The issue is fitness for the particular purpose in view and being clear on what convention you are using in the particular context.
kairosfocus
July 23, 2009 at 05:17 AM PDT
PS: Contrast a screen-full of white noise "snow": complex yes, specific -- no. Vulnerable to random perturbation -- no. So, the specific variable would set it to zero on the simple FS Bits metric. (Cf here granite vs DNA on Orgel's remarks.)
kairosfocus
July 23, 2009 at 05:04 AM PDT
Nakashima-san:
Do you not see that first the Mona Lisa or Rushmore have in them particular functions and specification? Do you not see that a digitised map, drawing or picture of these will then have in IT FSCI? And that such would be most evidently designed?
As to the simple metric: > 1 kbits -- yes; functional as a map or picture according to some coding scheme and associated algorithms -- yes; vulnerable to sufficient random perturbation -- plainly, yes. So, FSCI; and the bit measure of functionally specific bits will give a bit value if beyond 1,000 bits, specific and functional. And such digital maps or pictures will in every observed case of known origin be DESIGNED. What is so hard to see or accept about that?
GEM of TKI
kairosfocus
July 23, 2009 at 04:44 AM PDT
Onlookers:
The past few days have sufficed to demonstrate the actual balance on the merits. In every case of FSCI -- which is observable, and measurable -- where we know the source independently, we see that its source is in intelligence. We also know, on search-space grounds, that such cases are not likely to emerge by chance + necessity only. Thus, FSCI is a reliable sign of intelligence, and in the case of OOL -- the actual main focus for this thread -- it means that OOL is best explained by reference to intelligent action.
GEM of TKI
PS: BB, all we need is to know the underlying causal force that best explains FSCI. That turns out to be intelligence. We can then explore circumstances to see whodunit all we wish. That tweredun comes first.
PPS: BB, in fact the issue is that the observed universe necessitates a cause, per its credible origin and contingent character, typically estimated at 13.7 BYA. Given its evident complex fine-tuning for cell-based life, that cause is, on best explanation, intelligent, and powerful, and of course not made up from the matter we observe. [Matter does not create itself out of nothing . . . ] Going further, by isolating that intelligences are observed and have characteristic fingerprints, we are in a position to let empirical evidence speak -- without imposing a priori materialism -- on the subject of whether or not immaterial intelligences exist or have left traces in our cosmos. The decisive issue on that hinges not on OOL but on cosmological fine-tuning and inference to its best explanation; FSCI in life simply speaks to intelligence as its cause, but life on earth could in principle be a design by a creator within the cosmos (as TMLO, the foundational design-theory technical work, discussed from 1984). The Lewontinian imposition of materialism on science censors the evidence.
kairosfocus
July 23, 2009 at 04:38 AM PDT
KF-san,
Nakashima-San: You already have adequate examples on FSCI and related concepts, and on the quantifications thereof.
Well, though I am a poor student, I guess you agree with how I've calculated the FSCI above, with regard to a screenshot of the Mona Lisa and static. It seems that your metric always returns either 0 or the number of raw bits. If you can't detect the difference between a picture of Mt Rushmore taken in 1925 and one taken in 1941, you have accomplished nothing, and labored mightily to do so.
Nakashima
July 23, 2009 at 04:34 AM PDT
4] we can easily exceed the 1000-bit threshold if we record the information long enough, so we have FSCI
Congratulations, you are so close to finally understanding. GA's and biological organisms acquire this from the environment through a cumulative process.
In short, as the highlighted exhibits, the FSCI is created by a process of measuring, encoding, recording and compiling in a relevant data structure.
Now you are in trouble again because you seem to be requiring an intelligent observer for FSCI to exist. If naturally occurring processes don't contain FSCI until they are recorded and encoded, then how can we tell if biological organisms contain FSCI when, by definition, studying them turns everything we look at into FSCI?
... an intelligent agent’s action is creating the FSCI, per observation, analysis, encoding, data structure and storage operations.
You need to present a method of differentiating between FSCI that is created when we study something and any FSCI that already exists; otherwise you are just placing FSCI in the eye of the beholder.
BillB
July 23, 2009 at 03:48 AM PDT
Now, implement the relevant controller for us
It's Derek Smith's controller; why should I do his work for him? Also, I haven't ever claimed that GA's are intelligent; the debate is about whether they can generate FSCI. You are claiming that only an intelligence can generate FSCI, so I suppose from your standpoint if a GA can generate it then it must be intelligent, because you have defined FSCI as requiring an intelligence to generate. I would offer up the decades of research into GA's, but I know that you will reject them out of hand by claiming that the FSCI is smuggled in when the GA is created.
…how much FSCI does a GA contain? Can you supply some numbers please? And then we can start to answer this: …can it generate more FSCI than it contains?
How are the calculations coming along?
And, BTW, I have said nothing about whether or no a designer “must be” immaterial.
So you accept that life on earth can have a material cause then? Presumably this means that FSCI can be the product of purely material forces (i.e. the laws of nature); you just believe that it requires a certain class of natural system (an 'intelligence') to generate this FSCI. Somewhere in your chain of reasoning there has to be an immaterial cause of FSCI or intelligence; otherwise you are accepting that FSCI and intelligence can arise from natural (material) processes.
BillB
July 23, 2009 at 03:33 AM PDT
R0b: Interestingly, some designers of electronic products will fill the spare memory of a micro-controller with junk code and data, just lots of random bytes, in order to confuse any attempts at reverse engineering. This presumably has FSCI because it serves a purpose. Unfortunately I think we are flogging a dead horse here, as KF seems to enjoy his portable goalposts a little too much ;) So, I'm left wondering how much FSCI a machine that turns random noise into its numerical representation can contain.
the algorithm and its instantiation are where the intelligent inputs come in.
Do things like this qualify as 'algorithms'?
Ocean tides serve important functions
BillB
July 23, 2009 at 03:19 AM PDT
Footnote on Rob's examples:
Explaining how the strawman comes up:
1] Rob, 222: random processes in nature, such as radioactive decay, can be encoded as bit sequences of arbitrary length, so exceeding the 1000-bit threshold is no problem.
Of course, the process is known to be random, so it is irrelevant to the question of intelligent design -- a 1,000+ bit string of known random numbers would not even be in question. Complex, but not specific in itself. (Cf Orgel on granite, or look at a complex organic tar in the OOL context.) And, to move from RA decay to an "encoded" string*, we have brought in: algorithms, codes, processing and storage, etc; all of which are the work of intelligent designers.
__________
*I use the string as the primitive of data structures because more complex structures can be expressed as suitably related groups of strings.
2] Any given random sequence is useful for a variety of purposes, including testing and encryption, so it's functional, and therefore FSCI.
The onward process of intelligently using the sequence, however generated, is what would give it specificity and function. AFTER that process has been completed, the originally random sequences have now become specific, complex and functional. [To see that, think about what happens if we then onward allow significant random perturbation: the code assignments suddenly break down. A one-time message pad -- by design -- has no further use once used up.]
3] Ocean tides serve important functions, including nutrient mixing and the creation of intertidal ecologies. How many bits does it take to store tidal behavior?
Let's see: how many bits does it take to code and store the analysis of tidal motion under gravitational forces? How many to store the reports on the ecological function of tides? [In short, the INFORMATIONAL aspects come from outside the tidal system and processes.] The tides, of course, are a product of known mechanical forces, and exhibit as well some random elements. They are not INFORMATIONALLY functional in themselves. (Even the sand bugs that live in the intertidal zones on beaches are sensing and responding to the tidal circumstances, i.e. they contain their own processing and programs of response. That's why we used to catch them for bait by slapping down a dead fish on a string as the waves backwashed: the bugs would pop up to investigate what they felt and smelled or tasted. We in turn would spot them and scoop them up. Smart bugs knew how to slip away when we tried that.)
4] we can easily exceed the 1000-bit threshold if we record the information long enough, so we have FSCI.
In short, as the highlighted exhibits, the FSCI is created by a process of measuring, encoding, recording and compiling in a relevant data structure. Again, the information and its specified complexity and functionality are not the product of the tides as such, but of the process that an intelligent agent applies.
5] How many bits does it take to store the trajectory of the moon?
Again, an intelligent agent's action is creating the FSCI, per observation, analysis, encoding, data structure and storage operations.
6] the problems stem from your definitions, and that the objections also apply to your own examples.
Quite the opposite. The problems come from Rob's consistent failure to see the -- quite obvious, actually -- role of the intelligent agents in getting to functional, specific, coded, complex information. That sort of blindness to what is otherwise obvious tells us a lot about how the evolutionary materialist view and imposition impoverishes early C21 science, imposing blinders that block quite intelligent and articulate contributors from seeing what SHOULD be obvious. So, we should take warning, and understand that something is rotten in the state of early C21 science.
___________
GEM of TKI
kairosfocus
July 23, 2009 at 03:15 AM PDT
BB: Now, implement the relevant controller for us, so we can see how a GA is a credible example of a real intelligence, rather than simply a mechanism that implements actions step by step per the design of an external intelligent agent.
And, BTW, I have said nothing about whether or no a designer "must be" immaterial. (I happen to believe that designers will be informationally based, and am open to the possibility of both materially based and immaterially based designers, just as information is not locked down to any one material expression. [It is materialists who are a priori committed to the idea that all must reduce to matter-energy and space-time, acting through chance + necessity. As I have linked, that runs such views into serious difficulties, often expressed today as "the hard problem of consciousness." It is hard because it is implicitly trying to resolve a self-referential absurdity.])
GEM of TKI
kairosfocus
July 23, 2009 at 02:37 AM PDT