[Image: The Eng Derek Smith Cybernetic Model]

ID Foundations, 2: Counterflow, open systems, FSCO/I and self-moved agents in action


In two recent UD threads, frequent commenter AI Guy, an Artificial Intelligence researcher, has thrown down the gauntlet:

Winds of Change, 76:

By “counterflow” I assume you mean contra-causal effects, and so by “agency” it appears you mean libertarian free will. That’s fine and dandy, but it is not an assertion that can be empirically tested, at least at the present time.

If you meant something else by these terms please tell me, along with some suggestion as to how we might decide if such a thing exists or not. [Emphases added]

ID Does Not Posit Supernatural Causes, 35:

Finally there is an ID proponent willing to admit that ID cannot assume libertarian free will and still claim status as an empirically-based endeavor. [Emphasis added] This is real progress!

Now for the rest of the problem: ID still claims that “intelligent agents” leave tell-tale signs (viz FSCI), even if these signs are produced by fundamentally (ontologically) the same sorts of causes at work in all phenomena . . . . since ID no longer defines “intelligent agency” as that which is fundamentally distinct from chance + necessity, how does it define it? It can’t simply use the functional definition of that which produces FSCI, because that would obviously render ID’s hypothesis (that the FSCI in living things was created by an intelligent agent) completely tautological. [Emphases original. NB: ID blogger Barry Arrington, had simply said: “I am going to make a bold assumption for the sake of argument. Let us assume for the sake of argument that intelligent agents do NOT have free will . . . ” (Emphases added.)]

This challenge brings into sharp focus the foundational issue of counterflow (constructive work by designing, self-moved, initiating, purposing agents) as a key concept and explanatory term in the theory of intelligent design. For instance, we may see from leading ID researcher William Dembski’s No Free Lunch:

. . .[From commonplace experience and observation, we may see that:]  (1) A designer conceives a purpose. (2) To accomplish that purpose, the designer forms a plan. (3) To execute the plan, the designer specifies building materials and assembly instructions. (4) Finally, the designer or some surrogate applies the assembly instructions to the building materials. (No Free Lunch, p. xi. HT: ENV.) [Emphases and explanatory parenthesis added.]

This is, of course, directly based on and aptly summarises our routine experience and observation of designers in action.

For, designers routinely purpose, plan and carry out constructive work directly or through surrogates (which may be other agents, or automated, programmed machines). Such work often produces functionally specific, complex organisation and associated information [FSCO/I; a new descriptive abbreviation that brings the organised components and the link to FSCI (as was highlighted by Wicken in 1979) into central focus].

ID thinkers argue, in turn, that FSCO/I is an empirically reliable sign pointing to intentionally and intelligently directed configuration — i.e. design — as the signified cause.

And, many such thinkers further argue that:

if, P: one is not sufficiently free in thought and action to sometimes actually and truly decide by reason and responsibility (as opposed to: simply playing out the subtle programming of blind chance and necessity mediated through nature, nurture and manipulative indoctrination)

then, Q: the whole project of rational investigation of our world based on observed evidence and reason — i.e. science (including AI) — collapses in self-referential absurdity.

But, we now need to show that . . .

More subtly — through the question of “counterflow,” i.e. constructive work —  the issue AIG raised first surfaces questions on the thermodynamics of energy conversion devices, the link of entropy to information, the way that open systems increase local organisation, and  the underlying origin of energy conversion devices that exhibit FSCO/I, especially those in biological organisms.

This issue has been on the table since the very first ID technical book, The Mystery of Life’s Origin [TMLO], by Thaxton, Bradley and Olsen [TBO], in 1984. For, these authors noted as they closed their Ch 7, on the basic thermodynamics of living systems, that:

While the maintenance of living systems is easily rationalized in terms of thermodynamics, the origin of such living systems is quite another matter. Though the earth is open to energy flow from the sun, the means of converting this energy into the necessary work to build up living systems from simple precursors remains at present unspecified (see equation 7-17). The “evolution” from biomonomers to fully functioning cells is the issue. Can one make the incredible jump in energy and organization from raw material and raw energy, apart from some means of directing the energy flow through the system? In Chapters 8 and 9 we will consider this question, limiting our discussion to two small but crucial steps in the proposed evolutionary scheme, namely the formation of protein and DNA from their precursors.

It is widely agreed that both protein and DNA are essential for living systems and indispensable components of every living cell today.11 Yet they are only produced by living cells. Both types of molecules are much more energy and information rich than the biomonomers from which they form. Can one reasonably predict their occurrence given the necessary biomonomers and an energy source? Has this been verified experimentally? These questions will be considered . . . [Emphasis added. Cf summary in the peer-reviewed journal of the American Scientific Affiliation, “Thermodynamics and the Origin of Life,” in Perspectives on Science and Christian Faith 40 (June 1988): 72-83, pardon the poor quality of the scan. NB: as the journal’s online issues will show, this is not necessarily a “friendly audience” for design thinkers.]

Let us take this up in steps:

1 –> “Counterflow” generally speaks of going opposite to “time’s arrow” [a classic metaphor for the degradation impact of the 2nd law of thermodynamics], by performing constructive work.

2 –> That is, by in effect harnessing an energy-conversion device, a local increase in order — indeed, in organisation — can be created; according to a pattern, blueprint, plan, or at least an intention. As we may illustrate:

[Image: a heat engine partially converting heat into work]

Fig. A: Energy flows and work.  The joint action of the first and second laws of thermodynamics shows how a heat engine/energy converter may only partly convert imported energy (which must be in an appropriate form fitted to the device) into work. Specifically, as part (b) shows, increment of heat flow d’Qi from heat source A partly goes into increase of internal energy of device B, dEb, partly into shaft work dW, and partly into exhausted heat increment d’Qo that ends up in heat sink D.

(NB: Under the second law, at each interface where heat flows, the increment in entropy dS ≥ d’Q_rev/T, T being the relevant absolute temperature. In part (a), the loss of heat from A causes B (at a lower temperature) to gain heat; A’s loss of heat reduces its entropy, but since B is at a lower temperature, its rise in entropy will be greater, so the entropy of the universe as a whole will rise when the two are netted off.)

__________
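To make the bookkeeping in part (a) of Fig. A concrete, here is a minimal Python sketch; the reservoir temperatures and the quantity of heat moved are illustrative assumptions, not values read off the figure:

```python
# Entropy bookkeeping for a heat increment d'Q passing from hot body A to cooler body B,
# treating both as large reservoirs whose temperatures stay effectively constant.
T_A = 600.0   # temperature of body A, kelvins (assumed for illustration)
T_B = 300.0   # temperature of body B, kelvins (assumed for illustration)
Q   = 100.0   # heat transferred from A to B, joules (assumed)

dS_A = -Q / T_A             # A loses heat, so its entropy falls
dS_B = +Q / T_B             # B gains the same heat at a lower temperature, so its entropy rises more
dS_universe = dS_A + dS_B   # the "netted off" total described in the NB above

# For part (b): the second law caps how much of the imported heat an engine
# working between these two temperatures can turn into shaft work.
eta_carnot = 1.0 - T_B / T_A

print(f"dS_A        = {dS_A:+.3f} J/K")
print(f"dS_B        = {dS_B:+.3f} J/K")
print(f"dS_universe = {dS_universe:+.3f} J/K  (positive, as the second law requires)")
print(f"Carnot limit on heat-to-work conversion: {eta_carnot:.0%}")
```

On these assumed numbers the net entropy change comes out positive (about +0.167 J/K), and the conversion limit falls well short of 100%, which is the point taken up in remarks 3 and 4 below.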

3 –> As fig. A shows, open systems can indeed readily — but, alas, temporarily — increase local organisation by importing energy from a “source” and doing the right kind of work. But, generally only in a context of guiding information based on an intent or program,  or its own functional organisation, and at the expense of exhausting compensating disorder to some “sink” or other.  (NB: here, something like a timing belt and set of cams is a program.)

4 –> Heat — in short, energy moving between bodies due to temperature difference, by radiation, convection or conduction — cannot wholly be converted to work. (Here, the radiant energy flowing out from our sun’s surface at some 6,000 degrees Celsius to earth at some 15 degrees Celsius, on average, is a form of heat.)

5 –> Physically, by definition: work is done when applied forces impart motion along their lines of action to their points of application, e.g. when we lift a heavy box to put it on a shelf, we do work. For force F, and distance along line of motion dx, the work is:

dW = F*dx, . . . where, strictly, * denotes a “dot product”
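As a small numerical illustration of the dot product (the force and displacement vectors here are invented for the example, not drawn from any system discussed above):

```python
# Work as the dot product of force and displacement:
# only the force component along the line of motion contributes.
def work(force, displacement):
    return sum(f * d for f, d in zip(force, displacement))

F        = (50.0, 0.0, 0.0)   # newtons (assumed)
dx_along = (2.0, 0.0, 0.0)    # metres, displacement along the force's line of action
dx_perp  = (0.0, 2.0, 0.0)    # metres, displacement at right angles to the force

print(work(F, dx_along))  # 100.0 J -- motion along the line of action does work
print(work(F, dx_perp))   # 0.0 J   -- purely perpendicular motion does none
```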

6 –> But, that definition does not say anything about whether or not the work is constructive — a tornado ripping off a roof and flying its parts for a mile to land elsewhere has done physical work, but not constructive work.

(Side-bar, constructive work is closely connected to the sort we get paid for: if your work is constructive, desirable and affordable, you get paid for it. [Hence, the connexion between energy use at a given general level of technology and the level of economic activity and national income.])

7 –> Similarly, it says nothing about the origin of the energy conversion device.

8 –> When that device itself manifests functionally specific, complex organisation and associated information — FSCO/I (e.g. a gas engine-generator set or a solar PV panel, battery and wind turbine set, as opposed to, e.g. the natural law-dominated order exhibited by tornadoes or hurricanes as vortexes), we have good reason to infer that the conversion device was designed.

(Side-bar: Now, there is arguably a link between increased information and reduction in degrees of microscopic freedom of distributing energy and mass. Where, entropy is best understood as a logarithmic measure of the number of ways energy and mass can be distributed under a given set of macro-level constraints like pressure, temperature, magnetic field, etc.:

s = k ln w, k being Boltzmann’s constant and w the number of “ways.”

Jaynes therefore observed, aptly [but somewhat controversially]: “The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its [macro-level observable] thermodynamic state. This is a perfectly ‘objective’ quantity . . . There is no reason why it cannot be measured in the laboratory.”[Cited, Harry Robertson, Statistical Thermophysics, Prentice Hall, 1993, p. 36.]

This connects fairly directly to the information-as-negentropy concept of Brillouin and Szilard, but that is not our focus here, which is instead on the credible source/cause of energy conversion devices exhibiting FSCO/I. As this thought experiment shows [cf. TMLO chs 8 & 9], the correct assembly of such a device from microscopic components scattered at random in a vat or a pond would indeed drastically reduce entropy and increase functionality [which would define an observable functional state], but the basic message is that since the scattered microstates so overwhelmingly outnumber the clumped ones, and the clumped in turn the functional ones, it is maximally unlikely that such assembly would ever happen spontaneously. Nor would heating up the pond, or striking it with lightning or the like, be likely to help matters.

Just as, we normally observe an ink spot dropped in a vat diffusing throughout the vat, not collecting back together again.

In short, to produce complex, specific organisation to achieve function, the most credible path is to assemble co-ordinated, well-matched parts according to a known good plan.)
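To put rough numbers on the sidebar’s argument, here is a sketch using s = k ln w; the part and site counts are arbitrary assumptions chosen only to show the scale involved, not a model of any actual pond:

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, J/K

N_parts = 100        # distinguishable components to be assembled (assumed)
M_sites = 1_000_000  # locations each part could occupy in the vat (assumed)

# Ordered ways to scatter N distinct parts over M sites: M * (M-1) * ... * (M-N+1)
W_scattered  = math.perm(M_sites, N_parts)
W_functional = 1     # idealised: a single arrangement yields the working device

log10_W = math.log10(W_scattered)
# Magnitude of the entropy drop in going from "scattered anywhere" to the one
# functional arrangement, via s = k ln w (math.log handles the huge integer directly).
dS = k_B * (math.log(W_scattered) - math.log(W_functional))

print(f"scattered arrangements       ~ 10^{log10_W:.0f}")
print(f"entropy drop to assemble     ~ {dS:.2e} J/K")
print(f"chance of one random success ~ 10^-{log10_W:.0f}")
```

On these assumptions the scattered arrangements number around 10^600, so the odds against a blind shuffle landing on the one functional configuration are of the same order, which is the sense in which spontaneous assembly is “maximally unlikely.”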

9 –> The reasonableness of the inference from observing a high-FSCO/I energy converter to its having been designed would be sharply multiplied when the device in question is part of a von Neumann, self-replicating automaton [vNSR]:

Fig. B: A concept sketch of the von Neumann self-replicator [vNSR], in the form of a “clanking replicator”

____________________

10 –> Here, we see a machine that not only functions in its own behalf but has the ADDITIONAL — that is very important — capacity of self replication based on stored specifications, which requires:

(i) an underlying storable code to record the required information to create not only
(a) the primary functional machine [here, for a “clanking replicator” as illustrated, a Turing-type “universal computer”; in a cell this would be the metabolic entity that transforms environmental materials into required components etc.] but also
(b) the self-replicating facility; and, that
(c) can express step by step finite procedures for using the facility;
(ii) a coded blueprint/tape record of such specifications and (explicit or implicit) instructions, together with
(iii) a tape reader that reads and interprets the coded specifications and associated instructions; thus controlling:
(iv) position-arm implementing machines with “tool tips” controlled by the tape reader and used to carry out the action-steps for the specified replication; backed up by
(v) either:
(1) a pre-existing reservoir of required parts and energy sources, or

(2) associated “metabolic” machines carrying out activities that as a part of their function, can provide required specific materials/parts and forms of energy for the replication facility, by using the generic resources in the surrounding environment.

11 –> Also, parts (ii), (iii) and (iv) are each necessary for, and together are jointly sufficient to implement, a self-replicating machine with an integral von Neumann universal constructor.

12 –> That is, we see here an irreducibly complex set of core components that must all be present in a properly organised fashion for a successful self-replicating machine to exist. [Take just one core part out, and self-replicating functionality ceases: the self-replicating machine is irreducibly complex (IC).]

13 –> This irreducible complexity is compounded by the requirement (i) for codes, requiring organised symbols and rules to specify both steps to take and formats for storing information, and (v) for appropriate material resources and energy sources.
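The requirements in points 10–13 can be put in a toy form. This is a minimal illustrative sketch, not a biological model: the component names simply follow the (i)–(v) labels above, and “proper organisation” is reduced to set membership, so the only point made is that removing any single core part stops replication.

```python
# Core vNSR components, labelled after (i)-(v) above.
CORE_PARTS = {
    "symbolic_code",     # (i)   storable code for recording the specifications
    "blueprint_tape",    # (ii)  coded blueprint / tape record
    "tape_reader",       # (iii) reader that interprets the tape
    "constructor_arms",  # (iv)  position-arm machines with tool tips
    "parts_and_energy",  # (v)   parts reservoir or metabolic supply
}

def can_self_replicate(parts):
    # Replication succeeds only if every core part is present (modelled as set membership).
    return CORE_PARTS.issubset(parts)

print("all five parts present:", can_self_replicate(set(CORE_PARTS)))   # True

for missing in sorted(CORE_PARTS):
    survivors = CORE_PARTS - {missing}
    print(f"without {missing!r}: replicates = {can_self_replicate(survivors)}")  # False each time
```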

14 –> Immediately, we are looking at islands of organised function for both the machinery and the information in the wider sea of possible (but mostly non-functional) configurations.

15 –> In short, outside such functionally specific — thus, isolated — information-rich hot (or, “target”) zones, want of correct components and/or of proper organisation and/or co-ordination will block function from emerging or being sustained across time from generation to generation.

16 –> So, we may conclude: once the set of possible configurations of relevant parts is large enough and the islands of function are credibly sufficiently specific/isolated, it is unreasonable to expect such function to arise from chance, or from chance circumstances driving blind natural forces under the known laws of nature.
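Point 16 can also be given a back-of-envelope form. In the sketch below, the 500-bit configuration space echoes the 500–1,000 bit FSCI thresholds commonly used in these discussions, while the size of the functional “island” and the number of blind trials are generous illustrative assumptions, not measurements of any real system:

```python
import math

config_bits       = 500       # configuration space of 2^500 possible states (assumed)
functional_states = 10**50    # generous allowance for the island of working configurations (assumed)
blind_trials      = 10**17    # an illustrative budget of blind chance trials (assumed)

# Work in log10 space to avoid overflowing floating point with 2^500.
log10_total    = config_bits * math.log10(2)                  # ~150.5
log10_p_hit    = math.log10(functional_states) - log10_total  # chance per blind trial
log10_expected = math.log10(blind_trials) + log10_p_hit       # expected successes over all trials

print(f"total configurations      ~ 10^{log10_total:.0f}")
print(f"P(hit island, per trial)  ~ 10^{log10_p_hit:.0f}")
print(f"expected hits, all trials ~ 10^{log10_expected:.0f}")
```

Even with these generous allowances the expected number of successes works out to roughly 10^-84, i.e. effectively zero: the “islands of function in a vast sea” point in numerical dress.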

17 –> As a relevant historical footnote, the much despised and derided William Paley actually saw much of this in his Natural Theology, ch 2, where he extended his analogy of the watch to the case of additional capacity to self-replicate:

Suppose, in the next place, that the person who found the watch should after some time discover that, in addition to all the properties which he had hitherto observed in it, it possessed the unexpected property of producing in the course of its movement another watch like itself — the thing is conceivable; that it contained within it a mechanism, a system of parts — a mold, for instance, or a complex adjustment of lathes, baffles, and other tools — evidently and separately calculated for this purpose . . . .
The first effect would be to increase his admiration of the contrivance, and his conviction of the consummate skill of the contriver. Whether he regarded the object of the contrivance, the distinct apparatus, the intricate, yet in many parts intelligible mechanism by which it was carried on, he would perceive in this new observation nothing but an additional reason for doing what he had already done — for referring the construction of the watch to design and to supreme art . . . . He would reflect, that though the watch before him were, in some sense, the maker of the watch, which, was fabricated in the course of its movements, yet it was in a very different sense from that in which a carpenter, for instance, is the maker of a chair — the author of its contrivance, the cause of the relation of its parts to their use.

[Emphases added. (Note: It is easy to rhetorically dismiss this argument because of the context: a work of natural theology. But, since (i) valid science can be — and has been — done by theologians; since (ii) the greatest of all modern scientific books (Newton’s Principia) contains the General Scholium, which is an essay in just such natural theology; and since (iii) an argument’s weight depends on its merits, we should not yield to such “label and dismiss” tactics.)]

18 –> So far, the sub-argument has been on how FSCO/I, especially in a context of symbolic digital codes and algorithms,  credibly points to design as its best explanation. But as Figs. C and D just below will show, our reasoning on the vNSR is directly relevant to the case of the living cell:

Fig. C: The protein synthesis process in the living cell, showing the source of messenger RNA, its transmission to the cytoplasm, and its use as a digital coded tape to produce proteins [Courtesy Wikimedia, under GNU. (Also, cf a medically oriented survey here.)]

Fig. D: A “close-up” of the Ribosome in action during protein translation, showing the 3-letter codons fitting tRNA anticodons in the A and P sites; with the tRNA’s serving as transporters of successive specified amino acids and as position-arm devices with tool-tips that “click” the successive amino acids [AA’s] into position until a stop codon triggers release. [Courtesy Wikimedia under GNU.]
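To make the figures’ “digital coded tape” language concrete, here is a minimal translation sketch. The mRNA string is invented, and the table holds only a handful of the 64 codons of the standard genetic code; real translation also involves initiation, tRNA charging, proofreading and much else that the figures only hint at:

```python
# A tiny subset of the standard genetic code (RNA codons -> amino acids).
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the usual start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read the tape three letters (one codon) at a time until a stop codon triggers release."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")  # '???' marks codons outside this toy table
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return "-".join(peptide)

print(translate("AUGUUUGGCGCUAAAUAA"))   # Met-Phe-Gly-Ala-Lys
```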

Clay animation video [added Dec 5, 2011]:

[youtube OEJ0GWAoSYY]

More detailed animation [added Dec 5, 2011]:

[vimeo 31830891]

Fig D.1: Videos.

________________

19 –> Thus, we not only see the relevance of the vNSR to the living cell, but we see how the metabolic and self-replicating facilities of the living cell deeply embed codes, step-by-step execution of instructions to achieve a functional product, and an astonishing incidence of FSCO/I. This justifies the inference, on best (empirically based) explanation, that the living cell is an artifact of design.

20 –> And, on our abundant experience and observation, the best explanation for a design is a designer. Such an inference from reliable sign to its signified would still obtain simply on induction, whether or not the designer is in fact the possessor of that elusive property called free will. (Which is why Mr Arrington, in the post linked above, argued by laying this vexed issue to one side for the sake of moving his particular argument forward.)

However, the third part of the task still remains: why do design thinkers often hold that a designer is best understood as a self-moved, initiating agent cause?

[Continued here].

Comments
F/N: Pardon a remark on the relevance of this post to the ID project. Here, I have just used this ID Foundations 2 post as a reference foundation that applies to the context of genetic determinism and the myth of genes for this and that, and also addresses the issue of our being self-moved agents, which is discussed on p 2 of the post above. GEM of TKI
kairosfocus
January 24, 2011, 08:56 PM PDT
BA: Quantum entanglement is an interesting field, with active research, that cuts across our usual experience/expectations of the world. Wiki defines:
Quantum entanglement is a property of the quantum mechanical state of a system containing two or more objects, where the objects that make up the system are linked in such a way that the quantum state of any member of the system cannot be adequately described without full mention of the other members of the system, even if the individual objects are spatially separated.
This brings to bear Bell's inequality theorem of 1964, and the issue of local realism/hidden variables and Einstein's concerns on "spooky" effectively instant action at a distance: ________________ >> In theoretical physics, Bell's theorem (AKA Bell's inequality) is a no-go theorem, loosely stating that:
No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics . . . .
it indicates that every quantum theory must violate either locality or counterfactual definiteness. In conjunction with the experiments verifying the quantum mechanical predictions of Bell-type systems, Bell's theorem demonstrates that certain quantum effects travel faster than light and therefore restricts the class of tenable hidden variable theories to the nonlocal [thus, we may say "transcendent"] variety . . . . As in the Einstein–Podolsky–Rosen (EPR) paradox, Bell considered an experiment in which a source produces pairs of correlated particles. For example, a pair of particles may be produced in a Bell state so that if the spins are measured along the same axes they are certain to produce identical results. The particles are then sent to two distant observers: Alice and Bob. In each trial, both of the observers independently chooses to measure the spin of their respective particle along a particular axis [around the full circle], and each measurement yields a result of either spin-up (+1) or spin-down (-1). Whether or not Alice and Bob obtain the same result depends on the relationship between the orientations of the two spin measurements, and in general is subject to some uncertainty. The classical incarnation of Bell's theorem is derived from the statistical properties observed over many runs of this experiment. Mathematically the correlation between results is represented by their product (thus taking on values of ±1 for a single run). While measuring the spin of these entangled particles along the same axis always results in identical (perfectly correlated) results, measurements along perpendicular directions have a 50% chance of matching (uncorrelated) . . . . Bell achieved his breakthrough by first assuming that a theory of local hidden variables could reproduce these results. Without making any assumptions about the specific form of the theory beyond basic consistency requirements, he was able to derive an inequality that was clearly at odds with the result described above, which is both predicted by quantum mechanics and observed experimentally. Thus, Bell's theorem ruled out the idea of local realism as a viable interpretation of quantum mechanics, though it still leaves the door open for non-local realism [fancy way of saying that in effect one way to view all this is that our space-time is in effect connected through a transcendent -- hence Einstein's "spooky" -- realm that allows for effective supra-light connexions]. Over the years, Bell's theorem has undergone a wide variety of experimental tests. Two possible loopholes in the original argument have been proposed, the detection loophole[1] and the communication loophole[1], each prompting a new round of experiments that re-verified the integrity of the result[1]. To date, Bell's theorem is supported by an overwhelming body of evidence and is treated as a fundamental principle of physics in mainstream quantum mechanics textbooks[2] [3]. Still, no principle of physics can ever be absolutely beyond question, and there are some people who still do not accept the theorem's validity . . . . In QM, predictions were formulated in terms of probabilities — for example, the probability that an electron might be detected in a particular region of space, or the probability that it would have spin up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness was its inability to predict those values precisely. 
The possibility remained that some yet unknown, but more powerful theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilistic answers given by QM. If a hidden variables theory were correct, the hidden variables were not described by QM, and thus QM would be an incomplete theory. The desire for a local realist theory was based on two assumptions:
1. Objects have a definite state that determines the values of all other measurable properties, such as position and momentum. 2. Effects of local actions, such as measurements, cannot travel faster than the speed of light (as a result of special relativity). If the observers are sufficiently far apart, a measurement taken by one has no effect on the measurement taken by the other.
In the formalization of local realism used by Bell, the predictions of theory result from the application of classical probability theory to an underlying parameter space. By a simple argument based on classical probability, he then showed that correlations between measurements are bounded in a way that is violated by QM . . . . Bell's inequalities are tested by "coincidence counts" from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The setting (orientations) of the analysers are selected by the experimenter. Bell test experiments to date overwhelmingly violate Bell's inequality. Indeed, a table of Bell test experiments performed prior to 1986 is given in 4.5 of Redhead, 1987.[12] Of the thirteen experiments listed, only two reached results contradictory to quantum mechanics; moreover, according to the same source, when the experiments were repeated, "the discrepancies with QM could not be reproduced". Nevertheless, the issue is not conclusively settled. According to Shimony's 2004 Stanford Encyclopedia overview article:[1] . . . . Because detectors don't detect a large fraction of all photons, Clauser and Horne[11] recognized that testing Bell's inequality requires some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):
a light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.
Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers. The experiment was performed by Freedman and Clauser[15], who found that the Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model. The Freedman-Clauser experiment reveals that local hidden variables imply the new phenomenon of signal enhancement:
In the total set of signals from an atomic cascade there is a subset whose detection probability increases as a result of passing through a linear polarizer.
This is perhaps not surprising, as it is known that adding noise to data can, in the presence of a threshold, help reveal hidden signals (this property is known as stochastic resonance[17]). One cannot conclude that this is the only local-realist alternative to Quantum Optics, but it does show that the word loophole is biased. Moreover, the analysis leads us to recognize that the Bell-inequality experiments, rather than showing a breakdown of realism or locality, are capable of revealing important new phenomena . . . . Most advocates of the hidden variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a "non-local" hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A recent experiment ruled out a large class of non-Bohmian "non-local" hidden variable theories.[18] If the hidden variables can communicate with each other faster than light, Bell's inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process which travels backwards in time along the past Light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time[19]. A few advocates of deterministic models have not given up on local hidden variables. E.g., Gerard 't Hooft has argued that the superdeterminism loophole cannot be dismissed[20]. The quantum mechanical wavefunction can also provide a local realistic description, if the wavefunction values are interpreted as the fundamental quantities that describe reality. Such an approach is called a many-worlds interpretation of quantum mechanics. In this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of the observer B observer A will see when going to compare notes. [In effect we are here looking at a quasi-infinity of worlds, at every entangled event . . . which I think raises Occam's ghost, sharp and slashing razor in hand] If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up. This implies that there is a subtle assumption in the argument that realism is incompatible with quantum mechanics and locality. The assumption, in its weakest form, is called counterfactual definiteness. This states that if the results of an experiment are always observed to be definite, there is a quantity which determines what the outcome would have been even if you don't do the experiment. Many worlds interpretations are not only counterfactually indefinite, they are factually indefinite. The results of all experiments, even ones that have been performed, are not uniquely determined . . . >> ____________________ Couriouser and couriouser, as the debates and tests go on! 
But, bottom line, BA has a serious point in pointing to and highlighting nonlocality and, in effect, information linkage through a transcendent realm that is beyond our commonly experienced world. GEM of TKI
kairosfocus
January 24, 2011, 07:58 PM PDT
OT: This breakthrough is just plain cool! Physicists describe method to observe timelike entanglement - January 24, 2011. Excerpt: In "ordinary" quantum entanglement, two particles possess properties that are inherently linked with each other, even though the particles may be spatially separated by a large distance. Now, physicists S. Jay Olson and Timothy C. Ralph from the University of Queensland have shown that it's possible to create entanglement between regions of spacetime that are separated in time but not in space, and then to convert the timelike entanglement into normal spacelike entanglement. They also discuss the possibility of using this timelike entanglement from the quantum vacuum for a process they call "teleportation in time." http://www.physorg.com/news/2011-01-physicists-method-timelike-entanglement.html
It should be noted that this experiment solidly dots the i's and crosses the t's insofar as demonstrating that not only is 'information' transcendent of space but 'information' is also transcendent of time, with the added bonus of demonstrating dominion of matter/material regardless of the space-time constraints that matter/material is itself subject to!!!
bornagain77
January 24, 2011, 02:05 PM PDT
UPDATE: I have added some adjustments in 3 - 9, and a sidebar on entropy and information at point 8. This should make it clear that while there is a relationship between entropy and information, the pivotal issue is the credible source of the FSCO/I in an energy converter. The most credible source for that is a designer, whether the device is micro- or macro-scale. GEM of TKI
kairosfocus
January 23, 2011, 03:33 PM PDT
TM: We are a bit off topic, but . . . You are still discussing 1914, where the battle of the frontiers and the seizure of Liege using Skoda 305 mm and Krupp 42 cm mortars were initially decisive, but the distraction of the Belgians flooding and retreating to Antwerp (and later the Tannenberg/Masurian lakes episode in the East) cost the Germans the time and forces they needed. A gap opened and the French-British spotted it by air, and sallied into it, and the Germans recoiled to the Aisne, on orders of a staff colonel sent to the front; a few dozen more miles and the last E/W railroad would have been cut, breaking France's back -- not even counting Paris. Race to the sea, and trench lines were locked in, for 4 yrs of siege warfare. 1914 + 4 = 1918. In early 1918, having knocked out one eastern ally after another year by year [and having bled the French in 1916 and, by blunting the Nivelle offensives in 1917, triggered mutinies], culminating in Russia in 1917, the Germans had a temporary advantage until the Americans could be deployed. That March, they struck, and drove several wedges into the Allied lines. The last line before Paris was again the Marne. Chemin des Dames, 8,000 US Marines. G
kairosfocus
January 23, 2011, 02:41 PM PDT
Here is what was supposed to happen: http://rlv.zcache.com/schlieffen_plan_map_wwi_world_war_one_germany_poster-p228048009869171559tdcp_400.jpg You see the northern arm going around Paris and the French actually advancing into Germany. http://en.wikipedia.org/wiki/Schlieffen_Plan#Additional_facts
tragic mishap
January 23, 2011, 12:53 PM PDT
My bad about 1918. I don't doubt that the BEF fought well, but the intent of the Schlieffen plan was to have the northern arm swoop around to the north and west of Paris before turning back. The southern arm was supposed to fake a retreat, drawing most of the French army into Germany. Then the northern arm would move in and attack the French from behind. Instead the southern arm actually advanced into France, pushing the French army back towards Paris. Then the northern arm swung south too early. This turned what was supposed to be an envelopment into a frontal assault. Considering the superiority of the German army, made obvious by the fact that the portion which was supposed to retreat actually advanced almost by accident, I think the plan would have worked if executed properly.
tragic mishap
January 23, 2011, 12:45 PM PDT
TM: On re-looking above, I have said remarkably little about entropy proper; I may have to add a remark or two. (There is a whole informational approach to thermodynamics.) And the offensive in question was the 2nd major German push, of March 1918 on, not the first one in 1914, which was also stopped at the last major river before Paris, the Marne. Back in 1914, the margin of failure was the month of effort and manpower spent fighting the Belgians, with the British coming up too. GEM of TKI
kairosfocus
January 23, 2011, 12:05 PM PDT
But more on topic, I know in the past Dembski has recoiled from comparing information and entropy. I'd like to see what he says about this.
tragic mishap
January 23, 2011, 11:52 AM PDT
The likely margin of failure was the early deployment of those American Marines at the 2nd Marne. And, in May 1940, getting the technical tactics right did win the day for the Germans.

I'd argue that the German army failed to execute the Schlieffen plan properly in WWI. The northern arm failed to swing all the way north and west of Paris, and the southern arm failed to retreat into Germany to suck the French army in. Had the northern army especially followed the plan, there would have been no race to the sea because the Germans would have already won it.
tragic mishap
January 23, 2011, 11:50 AM PDT
Meleagar: You will not believe this one, re your:
ME,3:Why would there be a distinction between death by murder and death by natural causes? Isn’t murder also a natural cause, if design agency or free will is “the same thing” as any other physical causation?
Let's excerpt the closing summation of Clarence Darrow -- he of the Scopes Trial a short while after this [Bryan had intended to call the following up, but the disgusted judge abruptly cut off the trial] -- at the Loeb-Leopold Nietzschean murder trial: ________________ >> . . . They [[Loeb and Leopold] wanted to commit a perfect crime . . . . Do you mean to tell me that Dickie Loeb had any more to do with his making than any other product of heredity that is born upon the earth? . . . . He grew up in this way. He became enamored of the philosophy of Nietzsche. Your Honor, I have read almost everything that Nietzsche ever wrote. He was a man of a wonderful intellect; the most original philosopher of the last century. Nietzsche believed that some time the superman would be born, that evolution was working toward the superman. He wrote one book, Beyond Good and Evil, which was a criticism of all moral codes as the world understands them; a treatise holding that the intelligent man is beyond good and evil, that the laws for good and the laws for evil do not apply to those who approach the superman. [Shades of Plato's critique of evolutionary materialism in The Laws, Bk X . . . ] He wrote on the will to power. Nathan Leopold is not the only boy who has read Nietzsche. He may be the only one who was influenced in the way that he was influenced . . . >> _________________ This last claim was in fact patently false, as Bryan in his c 1923 The Menace of Darwin, had written in warning to the then largely Christian public of America. Pardon the painfully harsh words Bryan felt compelled to communicate to his nation and his generation, in warning of what was to come, based on what had already begun to happen:
Darwinism leads to a denial of God. Nietzsche carried Darwinism to its logical conclusion and it made him the most extreme of anti-Christians . . . . As the [First World] war [of 1914 - 1918] progressed I [Bryan was from 1913 - 1915 the 41st US Secretary of State, under President Wilson] became more and more impressed with the conviction that the German propa-ganda rested upon a materialistic foundation. I se-cured the writings of Nietzsche and found in them a defense, made in advance, of all the cruelties and atrocities practiced by the militarists of Germany. [It didn't start with the Nazis! (Indeed, the rape and pillaging of Belgium in 1914 -- adjust for 90+ years and whatever propagandistic elements it may have, but note this is largely eyewitness testimony by a reporter -- had in it all the seeds of what would follow in the 1940's)] Nietzsche tried to substitute the worship of the "Su-perman" for the worship of God. He not only re-jected the Creator, but he rejected all moral standards. He praised war and eulogized hatred because it led to war. He denounced sympathy and pity as attributes unworthy of man. He believed that the teachings of Christ made degenerates and, logical to the end, he regarded Democracy as the refuge of weaklings. He saw in man nothing but an animal and in that animal the highest virtue he recognized was "The Will to Power"—a will which should know no let or hin-drance, no restraint or limitation . . . . His philosophy, if it is worthy the name of philos-ophy, is the ripened fruit of Darwinism — and a tree is known by its fruit . . . . The corroding influence of Darwinism has spread as the doctrine has been increasingly accepted. In the American preface to "The Glass of Fashion" these words are to be found: "Darwinism not only justifies the sensualist at the trough and Fashion at her glass; it justifies Prussianism at the cannon's mouth and Bol-shevism at the prison-door. If Darwinism be true, if Mind is to be driven out of the universe and accident accepted as a sufficient cause for all the majesty and glory of physical nature, then there is no crime or vio-lence, however abominable in its circumstances and however cruel in its execution, which cannot be justi-fied by success, and no triviality, no absurdity of Fash-ion which deserves a censure: more — there is no act of disinterested love and tenderness, no deed of self- sac-rifice and mercy, no aspiration after beauty and excel-lence, for which a single reason can be adduced in logic." [pp. 52 - 54. Emphases and explanatory parentheses added.]
That, sadly, is what amorality, stripped of genteel habits, really means. And, BTW, here is what Bryan intended but did not get the chance to say in his closing summation in Dayton, Tennessee. Excerpting, and again, the reading is painful indeed; but, I am now convinced that we must take warning from the past, lest we repeat it:
A criminal is not relieved from responsibility merely because he found Nietzsche's philosophy in a library which ought not to contain it. Neither is the university guiltless if it permits such corrupting nourishment to be fed to the souls that are entrusted to its care . . . . [Again, strongly echoing Plato's analysis; and also his recommendations. While we may not wish to withhold such books from our libraries, perhaps, we should at least allow also on the same shelves those that balance and counter them.] Mr. Darrow said: "I say to you seriously that the parents of Dicky Loeb are more responsible than he, and yet few boys had better parents." Again he says: "I know that one of two things happened to this boy; that this terrible crime was inherent in his organism and came from some ancestor, or that it came through his education and his training after he was born." . . . . He says "I do not know what remote ancestor may have sent down the seed that corrupted him [I suggest, we should at least consider: Adam . . . but that does not relieve us of responsibility for our choices and behaviour], and I do not know through how many ancestors it may have passed until it reached Dicky Loeb. All I know is, it is true, and there is not a biologist in the world who will not say I am right." Psychologists who build upon the evolutionary hypothesis teach that man is nothing but a bundle of characteristics inherited from brute ancestors. That is the philosophy which Mr. Darrow applied in this celebrated criminal case. "Some remote ancestor" - he does not know how remote - "sent down the seed that corrupted him." You cannot punish the ancestor - he is not only dead but, according to the evolutionists, he was a brute and may have lived a million years ago. And he says that all the biologists agree with him. No wonder so small a percentage of the biologists, according to Leuba, believe in a personal God. This is the quintessence of evolution, distilled for us by one who follows that doctrine to its logical conclusion.
Hard reading, and at a time of a clash of Titans. But, when we debate the validity of the choosing will, and the resulting responsible mind, that is what is at stake. So, do pardon my taking the step by step, semi-technical route, for just a short while, to lay the base for the response we must make. At least -- and again, pardon words that may wound as they lance home in the abscess, they are meant to help us to heal -- if both science and civilisation are to be kept from sliding off the cliff. We need to think, very soberly, about where we are, and what fire we are playing with. GEM of TKI
kairosfocus
January 23, 2011, 09:19 AM PDT
Meleagar 3&4, Touche! http://files.sharenator.com/GMDC_TOUCHE_Sharenator_Moti_Posters-s355x453-79984-535.jpg
bornagain77
January 23, 2011, 07:16 AM PDT
BTW, I'm really enjoying your contributions to this site, KF. They are very compelling, amazingly well organized, and exhaustively thorough.
Meleagar
January 23, 2011, 06:33 AM PDT
The division of natural vs supernatural at the term "free will" or "design agency" is convenient, dishonest semantics when used to preclude design agency as a proper explanatory force. If the agency we refer to as "free will" did not exist as meaningfully distinct from what chance & known natural forces can produce by themselves, then why does science act as if there is a distinction between "artificial" (man-made) and "natural"? Why would there be a distinction between death by murder and death by natural causes? Isn't murder also a natural cause, if design agency or free will is "the same thing" as any other physical causation?

Materialists wish to subsume "design agency" or "free will" as something produced by chance and other, already-identified natural forces without demonstrating that it is so, or even that it is theoretically reasonable, and even though it is philosophical and rational suicide to do so. If science defines "free will" as humans employ it all the time as "supernatural", then they are applying supernatural techniques. If science defines free will agency as non-supernatural subsets of "chance and natural forces", then they have no reason to deny it as an appropriate explanatory force.

Currently known natural forces and chance are insufficient to account for some empirical phenomena, such as things humans generate, that consistently and reliably show specific common characteristics - such as, FSCI well over 1000 bits, and which violate the Universal Plausibility Principle. Science is left with two options; it must admit a supernatural force exists, or it must admit that a "natural" force exists that is as yet unaccounted for which is responsible for generating things like functioning aircraft carriers and space shuttles and the book "War and Peace", which cannot be explained via other, currently known forces.

Just as gravity and entropy can be recognized by the manner in which they affect observable phenomena, "intentionality" or "design agency" or "free will" can also be recognized by specific and qualitative effects it has on observable phenomena. Is there a reason why the scientific community would be against the discovery of another fundamental "natural" force that must be posited to account for what we empirically and factually observe, and which our current set of natural explanations are entirely deficient in explaining, and without which our reasoning process and science itself crumbles into irrationality, and our ability to distinguish between artifice and "what other forces and chance produces" becomes nullified, and thus personal responsibility, morality, ethics and justice become nothing more than delusions?

If nothing else, science must allow that a new fundamental property or force exists in nature, commonly referred to as intentional agency, that can produce what no other known combination of forces and chance can produce, and which justifies our reliance upon reason and science, and which allows for personal responsibility and morality beyond self-delusion.
Meleagar
January 23, 2011, 06:26 AM PDT
Hi A: Thanks for your thought; I appreciate that the above OP (which is intended for reference and foundational purposes) is involved. However, it is also responding to what are at root a technical series of counter-arguments to the design inference, backed up by philosophical issues and challenges that go back at least 2,300 years. As such, pardon, but I believe the above is a legitimate part of the project of design theory -- one more piece of the puzzle. I hope that this OP will therefore serve as a point of reference for onward debates in other, more popular level threads. Thanks again. GEM of TKI

PS: Sometimes, having the technical and tactical underpinnings is what enables a strategy to work. A classic example is the German strategy in 1918, where the whirlwind bombardment and infiltration storm-trooper tactics provided the foundation for a strategy that for the second time almost won the war for Germany. The likely margin of failure was the early deployment of those American Marines at the 2nd Marne. And, in May 1940, getting the technical tactics right did win the day for the Germans.
kairosfocus
January 23, 2011, 05:29 AM PDT
Pretzels, anyone? This might be a good time to go back and reread Phillip Johnson. Very carefully. Strategy drives tactics, and some strategies are clearly more effective than others.
allanius
January 23, 2011, 05:09 AM PDT
