Uncommon Descent Serving The Intelligent Design Community

How ID sheds light on the classic free will dilemma


The standard argument against free will is that it is incoherent. It claims that a free agent must be either determined or non-determined. If the free agent is determined, then it cannot be responsible for its choices. On the other hand, if it is non-determined, then its choices are random and uncontrolled. Neither case preserves the notion of responsibility that proponents of free will wish to maintain. Thus, since there is no sensible way to define free will, the concept is incoherent. [1]

Note that this is not really an argument against free will, but merely an argument that we cannot talk about free will coherently. So, if someone were to produce another way of talking about free will, the argument would be answered.

Does ID help us in this case?  It appears so.  If we relabel “determinism” and “non-determinism” as “necessity” and “chance”, ID shows us that there is a third way we might talk about free will.

In the universe of ID there are more causal categories than the duo of necessity and chance. There is also intelligent causality. Dr. Dembski demonstrates this through his notion of the explanatory filter. While the tractability of the explanatory filter may be up for debate, the filter itself is clearly a coherent concept: the very fact that there is debate over whether it can be applied tractably means it is well defined enough to be debated.

The explanatory filter is a three-stage process for detecting design in an event. First, necessity must be eliminated as a causal explanation: the event cannot have been the precisely determined outcome of a prior state. Second, chance must be eliminated: the event must be so unlikely that it is not possible to have queried half or more of the event space with the number of queries available.
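
To make those first two stages concrete, here is a minimal sketch in Python. The function names, the uniform-probability framing, and the way the 0.5 cutoff is applied are my own illustrative choices, not Dembski's formal apparatus.

```python
def chance_is_eliminated(p_event: float, num_queries: int) -> bool:
    """Chance is ruled out when the available queries could not have
    covered half or more of the event space: the total searchable
    probability mass stays below 0.5."""
    return p_event * num_queries < 0.5

def filter_first_two_stages(determined_by_law: bool,
                            p_event: float,
                            num_queries: int) -> str:
    """Return which explanation survives the first two filter stages."""
    if determined_by_law:
        return "necessity"
    if not chance_is_eliminated(p_event, num_queries):
        return "chance"
    return "candidate for design"  # still needs the specificity stage

# 500 fair coin flips all landing heads: p = 2**-500; even with
# 10**120 queries the event space is barely sampled.
print(filter_first_two_stages(False, 2**-500, 10**120))
```

Running this prints "candidate for design": the event escapes both necessity and chance, which is exactly the point where, as the article goes on to argue, the filter is still incomplete.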

At this point, it may appear we’ve arrived at our needed third way, and quite easily at that. We need merely deny that an event is caused by chance or necessity. However, things are not so simple. The problem is that these two criteria do not specify an event. If an event meets them, then the unfortunate implication is that so does every other event in the event space. In the end the criteria become a distinction without a difference, and we are thrust right back into the original dilemma. Removing chance and necessity merely gives us improbability (P < 0.5), also called “complexity” in ID parlance.

What we need is a third criterion, called specificity. This criterion can be thought of as a sort of compression: it describes the event in simpler terms. One example is a STOP sign. The basic material of the sign is a set of particles in a configuration. To describe the sign in terms of that configuration is a very arduous and lengthy task, essentially a list of each particle’s type and position. However, we can describe the sign in a much simpler manner by providing a computer, one which knows how to compose particles into a sign according to a pattern language, with the instruction to write the word STOP on a sign.

According to a concept called Kolmogorov complexity [2], such machines and instructions form a compression of the event, and thus specify a subset of the event space in an objective manner. This solves the previous problem where no events were specified: now only a small set of events is specified. While Kolmogorov complexity is not a necessary component of Dr. Dembski’s explanatory filter, it can be considered a sufficient criterion for specificity.
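
True Kolmogorov complexity is uncomputable, but an off-the-shelf compressor gives a crude, computable upper bound on it: if data compresses well, a short program can regenerate it. A rough sketch (zlib here is only an illustrative proxy, not a measure anyone in the debate formally uses):

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Length of a zlib-compressed encoding of the data: a crude,
    computable upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

patterned = b"STOP" * 250   # 1000 bytes that admit a short description
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(1000))  # no short description

# The patterned event compresses to a fraction of its raw size;
# the random bytes do not compress at all.
print(description_length(patterned) < description_length(noise))  # True
```

The patterned string plays the role of the STOP sign above: a simple rule ("repeat STOP") stands in for the particle-by-particle listing.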

With this third criterion of specificity, we now have a distinction that makes a difference. Namely, it shows we still have something even after removing chance and necessity: we have complex specified information (CSI). CSI has two properties that make it useful for the free will debate. First, it defines an event that is caused by neither necessity nor chance. As such, it is not susceptible to the original dilemma. Furthermore, it provides a subtle and helpful distinction for the argument. CSI does not avoid the distinction between determinism and non-determinism; it still falls within the non-determinism branch. However, CSI shows that randomness is not an exhaustive description of non-determinism. Instead, the non-determinism branch splits further into a randomness branch and a CSI branch.
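
Putting the two criteria together, a toy CSI check would require both improbability and a short description. This is my own illustrative framing, not Dembski's formal definition; the 500-bit default and the compress-to-half-size test are arbitrary stand-ins.

```python
import zlib

def is_csi_candidate(data: bytes, surprisal_bits: float,
                     resource_bits: float = 500.0) -> bool:
    """Toy CSI check. 'Complex' means the event's surprisal
    (-log2 of its probability) exceeds the available probabilistic
    resources; 'specified' means the event compresses to well under
    its raw size, standing in for a short independent description."""
    complex_enough = surprisal_bits > resource_bits
    specified = len(zlib.compress(data, 9)) < len(data) // 2
    return complex_enough and specified

# 1000 bytes of "STOP": 8000 bits of surprisal under a uniform byte
# model, yet highly compressible, so it passes both criteria.
print(is_csi_candidate(b"STOP" * 250, surprisal_bits=8000.0))  # True
```

Dropping either criterion breaks the check: lower the surprisal below the resource bound, or feed in incompressible noise, and the function returns False.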

The second advantage of CSI is that it is a coherent concept defined with mathematical precision. And with a coherent definition, the original argument vanishes. As pointed out at the beginning of the article, the classic argument against free will is not an argument against something. It is merely an argument that we cannot talk about something because we do not possess sufficient language. Properly understood, the classical argument is more of a question, asking what the correct terminology is. But with the advent of CSI we now have at least one answer to the classical question about free will.

So, how can we coherently talk about a responsible free will if we can only say it is either determined and necessary, or non-determined and potentially random?  One precise answer is that CSI describes an entity that is both non-determined while at the same time non-random.

——————-

[1] A rundown of many different forms of this argument is located here: http://www.informationphilosopher.com/freedom/standard_argument.html

[2] http://en.wikipedia.org/wiki/Kolmogorov_complexity

Comments
Ilion - thanks for making that point. Yes, I do find this site something of a nightmare to keep track of! In fact, I'm right now trying to set up my own blog, where I will try to address various issues that have been raised with me here, and will be a sitting duck for anyone who wants to challenge :)

Eric Holloway: My take on CSI is this: even if it were a good design detector in principle, which I don't think it is (I should make that clear, I don't want to lead you on!), I think it sets far too high a bar for design to have to climb. And, while I can't verify this for myself, there is at least one article out there claiming that the Number-of-Events-in-the-Universe constant is actually many orders of magnitude too low, and if it were correct, CSI would be completely useless even at detecting known examples of design! But as I said, that's not my math, so I don't take a personal stand on that, but I still think it's way too high. If I thought that life depended on the fortuitous coming together of the raw ingredients of even the simplest living cell we can imagine, I'd immediately reject chance as an explanation without even bothering to calculate a p value. So if I were Dembski, I'd drop that constant (which trips people up anyway) and define CSI not as a quantity that exceeds some threshold, but as a simple metric, and leave the threshold-setting as a separate argument: "This biological object contains x bits of CSI; this means that the probability of it occurring by chance in the entire history of the universe is less than one; therefore, design". Which would leave the measure open to other, less dramatic claims, such as: "this astronomical signal contains x bits of CSI; this means that it is highly unlikely to have been generated by chance processes, and we should start investigating possible intelligent origins for the signal".Elizabeth Liddle
July 28, 2011, 05:33 AM PDT
@EL: Not sure if you are still reading this thread, but I'm intrigued by your comment that CSI is too conservative. I would agree that CSI is conservative. By definition it cannot describe all instances of intelligent activity. What would you propose as a more comprehensive definition of intelligent activity?Eric Holloway
July 28, 2011, 04:20 AM PDT
re. Ilion. point taken. sorry EL. For example, I too, thought this was finished but just happened to notice it was still active.tgpeeler
July 26, 2011, 06:19 AM PDT
So, if you spread your PB with a spoon, you don’t have a PB sandwich.
That's the great thing about operational definitions!
So, if you spread your PB with a spoon, you don’t have a PB sandwich.
That depends on your operational definition of butter knife.Mung
July 25, 2011, 06:56 PM PDT
Much as I dislike Dr Luddite's penchant for passive-aggressive argument-by-hand-waving, etc, I have to say that it's not fair to accuse her of having run away. Even if it weren't the fact that real life keeps happening, these UD threads are far too difficult to keep track of.Ilion
July 25, 2011, 05:00 PM PDT
Mung's link: "The operational definition of a peanut butter sandwich might be simply "the result of putting peanut butter on a slice of bread with a butter knife and laying a second equally sized slice of bread on top"" So, if you spread your PB with a spoon, you don't have a PB sandwich.Ilion
July 25, 2011, 04:49 PM PDT
Not run away - didn't realise it was still going. Doesn't seem to have got very far though.Elizabeth Liddle
July 25, 2011, 03:09 PM PDT
I guess... I'll give odds that she'll make the same tired assertions in other threads even though she's run away from this one.tgpeeler
July 25, 2011, 02:41 PM PDT
Well, Lizzie found out that syllogisms were dangerous and should be avoided at all cost.Mung
July 24, 2011, 06:42 PM PDT
What happened to this thread??!! Upright and Mung finished it with a bee dance explanation and a PBJ definition. I guess we'll never have to fight about this again with them, will we... ha ha hatgpeeler
July 24, 2011, 06:30 PM PDT
For an operational definition of a peanut butter sandwich
But don't we need operational definitions of peanut butter, bread, slice and butter knife? Seriously UPB. I don't know why what you have provided since early on in this discussion doesn't qualify as an operational definition.Mung
July 22, 2011, 05:51 PM PDT
Interesting post, tgpeeler, but what about a bee doing a dance to inform other bees of the whereabouts of flowers? Information is certainly being conveyed, but is “free will” involved here? OK, so maybe what you mean to say is that ultimately free will has to be involved in the ability to create information, like, say, a free willed creator endowing bees (or computers) with an ability to transfer information, but not that the proximate conveyor of information (the bee) is necessarily free?
The relational mapping between a bee's input of 'bob-up-and-down-this-way' and the resulting output of 'the food is over here' represents a physical (causal) break that can be found in all forms of information transfer. Without that special inert quality, information could not exist at all, because quite literally nothing could represent anything else. The alternative view, of course, is that 'bob-up-and-down-this-way' somehow inherently means 'the food is over here' as the end product of physical law. Not only is that view irrational, but it doesn't stand up to the observed physical evidence captured in every single instance of recorded information known to exist. Not only does the representation have to exist (where one discrete thing has relational mapping to another discrete thing) but the receiver of that information must be able to access a protocol in order to establish the relationship (which does not exist in either the representation, or the thing being represented). For the materialist, the headache is conceptualizing a physical process whereby 1) a non-physical relationship (representation) is established between two discrete objects, and 2) a protocol can be established at the receiver which re-establishes this non-physical relationship, in order that it may cause an effect. In other words, two separate feats of coordination never observed to be the product of physical forces, and at their very heart is the glaring quality of not being physically determined. This is the underlying fact leading to Crick's claim of a "frozen accident" in DNA. Yet it must be understood that every parcel of information that has ever existed on this planet displays the same dynamic. You are in trouble with the evidence if your worldview doesn't allow for the physical freedom that information requires in order to exist. The constant flow of information on this planet would be equal to a miracle.Upright BiPed
July 22, 2011, 02:26 PM PDT
lol...Upright BiPed
July 22, 2011, 01:39 PM PDT
For an operational definition of a peanut butter and jelly sandwich see: http://en.wikipedia.org/wiki/Operational_definition
Mung
July 22, 2011, 01:14 PM PDT
Agreed. He didn't quite capture that as precisely as we would have liked. I missed it so thanks from me for pointing that out. Still, he got it that life and information are like peanut butter and jelly. Can't have one without the other. Now someone can come back and say, you CAN TOO have one without the other so you must be able to have life without information... sheesh...tgpeeler
July 22, 2011, 01:02 PM PDT
"p.5 The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect." At the same time, there is no such thing as "living matter" -- just as there is no such thing as "formless matter" -- rather, there are living entities.Ilion
July 22, 2011, 12:36 AM PDT
One rude little fact . . .kairosfocus
July 22, 2011, 12:06 AM PDT
Was it something I said???tgpeeler
July 21, 2011, 10:07 PM PDT
See Yockey (2005) Chapter 4 - The measure of the information content in the genetic message and Chapter 5 - Communication of information from the genome to the proteome and Chapter 6 - The information content of complexity of protein families.tgpeeler
July 21, 2011, 12:23 PM PDT
Mung,
Whence the skepticism? What’s your point?
Just curiosity. I've never seen the math.Doveton
July 21, 2011, 08:01 AM PDT
Doveton:
But this is nothing more than a claim that it can be done, not that is has been. Do you have a reference for Yockey actually measuring DNA information?
Whence the skepticism? What's your point?Mung
July 21, 2011, 07:47 AM PDT
EL - these from "Philosophy of Mind" by John Heil.

p.23 Modern science is premised on the assumption that the material world is a causally closed system. This means, roughly, that every event in the material world is caused by some other material event (if it is caused by any event) and has as effects only material events. ... We can reformulate this idea in terms of explanation: an explanation citing all of the material causes of a material event is a complete causal explanation of the event.

p.23 The notion that the material world is causally closed is related to our conception of natural law. Natural laws govern causal relations among material events.

Or Jaegwon Kim, "Mind in a Physical World":

p.40 One way of stating the principle of physical causal closure is this: If you pick any physical event and trace out its causal ancestry or posterity, that will never take you outside the physical domain. That is, no causal chain will ever cross the boundary between the physical and the nonphysical.

p.119 So all roads branching out of physicalism may in the end seem to converge at the same point, the irreality of the mental. This should come as no surprise: we should remember that physicalism, as an overarching metaphysical doctrine about all of reality, exacts a steep price.

Bravo!! A candid admission that this is complete nonsense. I'm not making this up.tgpeeler
July 21, 2011, 12:15 AM PDT
mike1962 - if you can offer any improvements (or anyone else for that matter) please feel to chime in.tgpeeler
July 21, 2011, 12:06 AM PDT
EL @ 211 Some quotes from Yockey's "Information Theory, Evolution, and the Origin of Life."

from the preface (x) The genetic information system is essentially a digital data recording and processing system.

p.2 The reason that there are principles of biology that cannot be derived from the laws of physics and chemistry lies simply in the fact that the genetic information content of the genome for constructing even the simplest organisms is much larger than the information content of these laws.

p.3 The genetical information system, because it is segregated, linear, and digital, resembles the algorithmic language by which a computer completes its logical operation.

p.5 The belief of mechanist-reductionists that the chemical processes in living matter do not differ in principle from those in dead matter is incorrect. There is no trace of messages determining the results of chemical reactions in inanimate matter. If genetical processes were just complicated biochemistry, the laws of mass action and thermodynamics would govern the placement of amino acids in the protein sequences.

p.6 Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies.

p.7 Similarly, the sequences of nucleotides or amino acids that carry a genetic message have explicit specificity. (Otherwise how does the organism live?)

p.7 The genetic information system is the software of life and, like the symbols in a computer, it is purely symbolic and independent of its environment. Of course, the genetic message, when expressed as a sequence of symbols, is nonmaterial but must be recorded in matter or energy.

p.94 …They find the genetic code to be very near or possibly at a global optimum for error minimization.

p.118 …no natural chemical procedure exists to form an optically active biochemistry.

p.119 All speculation on the origin of life on Earth by chance can not survive the first criterion of life: proteins are left-handed, sugars in DNA and RNA are right-handed. Omne vivum ex vivo. Life must only come from life.

And a couple from Richard Dawkins in "River Out Of Eden."

p.17 … we know that genes themselves, within their minute internal structure, are long strings of pure digital information. What is more, they are truly digital, in the full and strong sense of computers and compact disks, not in the weak sense of the nervous system. … The machine code of the gene is uncannily computerlike …

p.19 Life is just bytes and bytes and bytes of digital information.

(Finally, Dawkins says something that is almost correct. Life is more than digital information. But still, credit where credit is due. He gets it that life and information are inextricably linked.) Enjoy...tgpeeler
July 21, 2011, 12:04 AM PDT
mike1962 @ 221 From my post #1 on this thread: "Third, now I’d like to pose some questions about information generated by humans. Not the biological information that anti-ID types blather on about endlessly – “you can’t define or measure it so it’s not science.” I am avoiding that entire swamp right now and restricting this discussion to information (like this post, for instance) created and encoded, sent, and decoded and understood by human beings. That should be uncontroversial enough that the actual issues can be addressed." Heh, heh, what an optimist. The whole point is that ontological naturalism fails, and I can show that by using human language. Concerning the bee dance and the question of free will as human beings exercise it, I would say of course not. I would also contend, although I haven't made a case for it, that the code is somehow contained in the bee DNA so that certain cues code to certain "dance steps" that can be interpreted by the bees in the hive. I don't know. I don't know that anyone does at this point. In any case, my project was to show that any version of a robust naturalism, one that includes the causal closure of nature, is, in principle, incapable of accounting for human information. So it fails. If "nature" or the "material" or "physical" world are all that exist, and that's a textbook definition, or part of one (granted, naturalism can be pretty plastic but that's why I stuck to a minimalist version that contains only the core commitments), then certain other things necessarily follow. These are (at least): no God, no minds or souls, no purpose, no design, no free will. All explanations for EVERYTHING in nature must then, by definition, law of identity, be grounded in the laws of physics. I don't know why this is so hard to get. If all that exists is natural then all of it is necessarily explainable by physics. But physics cannot account for information, since the prerequisites for information (reason, language - symbols and rules, free will, and purpose) are in principle inexplicable by physics. I've explained that endlessly to EL but don't seem to have made a dent. I probably need to go back to writing class.tgpeeler
July 20, 2011, 11:44 PM PDT
Doveton: In an engineering context the metric for information is more concerned with the characteristics of symbols than their meaning. That is why there is a negative log probability of symbols metric. That says nothing about the fact that information, to be recognised as such, has characteristics different from noise, and yet in the same context signal to noise ratio is a key construct. The yardstick has a few peculiarities. Yockey believes and hopes that he can account for the origin of the info ultimately on forces of chance and necessity. I did not cite him on that issue -- where I disagree with him on grounds summarised here -- but in answer to your request as to whether he measured info in bio-molecular contexts on Shannon-Hartley derived metrics. Plainly, he did. And equally plainly, when an observed event E, from a definable narrow and UNrepresentative zone T in a space of possible configs W, contains over 500 functionally specific bits, then we begin to see the needle in a haystack challenge faced by a search on the scope of our solar system, where its 10^57 or so atoms could not go through as many as 1 in 10^48 of the number of states that would correspond to the number of configs for a 500 bit string. That is why I hold that Chi_500 = I*S - 500, bits . . . will identify cases where the resources of our solar system are hopelessly inadequate to find UNrepresentative, functionally specific configs. (Samples tend to pick up what is typical, not what is rare and atypical.) Functional DNA, RNA and proteins in a great many cases easily pass this threshold. The only empirically identified source for such FSCI is intelligent design. With an Internet full of cases in evidence. GEM of TKIkairosfocus
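
The Chi_500 expression quoted in the comment above is simple enough to state directly. Here S is taken as a 1/0 specification dummy variable; that is my reading of the comment's usage, not an official formalization.

```python
def chi_500(info_bits: float, specified: bool) -> float:
    """Chi_500 = I*S - 500 (bits), as quoted above: positive values
    flag functionally specific configurations beyond the stated
    500-bit solar-system search threshold."""
    s = 1 if specified else 0
    return info_bits * s - 500.0

print(chi_500(1000.0, True))    # 500.0  -> past the threshold
print(chi_500(1000.0, False))   # -500.0 -> unspecified, never flagged
```

Note the asymmetry the comment relies on: however large I is, an unspecified configuration (S = 0) can never come out positive.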
July 20, 2011, 09:47 AM PDT
KF,
Yockey and others have measurements of info in bio molecules. (This book preview shows how they go about it. You will find an answer to the “dna is not a code” objection, here, note in that the diagram from Yockey mapping the genetic system to Shannon’s model of a comm system.)
Thank you for the links! My objection to the Cosmic Fingerprints explanation is already laid out in 148 above. As for Information Theory, Evolution, and the Origin of Life, I'm just curious, KF - given that Yockey's whole point (one he can't emphasize enough, it seems) in that book is that although evolution is without question a natural process (and he demonstrates this mathematically), and although it definitely points to the origin of life being natural (which he also takes great pains to demonstrate mathematically), the natural process for the origin of life is likely unknowable - why do you rely on this work as a reference? He makes no bones about his perspective on Intelligent Design and even goes so far as to make fun of Behe's argument. What is it about this work you admire? That said, I'll take a look at his calculations, though it's rather plain that this is not a rebuttal of my point. Yockey goes so far as to note on pages 29-30:
"For example, the word information in this book is never used to connote knowledge or any other dictionary meaning of the word information not specifically stated here."
If that's the case, it seems this work and the previous discussion are not dealing with the same subject.
The just above should not be amazing or a matter for serious objection. The matter is quite similar to how we can have information measures of strings of text symbols in English, like in this comment. Actually such should not be controversial or even an issue, once you see that these are string structures and the various elements have a message-function or are directly derived from such (for proteins). The way this is measured is — as e.g. Shannon did with the original paper back in 1948 for English text — to simply tot up symbol relative frequencies, converted into probabilities, and to then do the negative log probabilities. This yields an additive measure, in bits if the base is 2. [Or if you want to detour, you can calculate Shannon's H, an avg info per symbol measure. That is what it boils down to.] If the distributions were flat random [as they can in principle be, the chemistry of chaining is rather unconstraining on sequence successions . . . ] we would have 2 bits per symbol for D/RNA, and 4.32 bits per symbol for typical 20-AA proteins. That they are not quite that way in real life is not unexpected, once we deal with constraints imposed by target function or code patterns, and what happens is rarer symbols get higher info per symbol measures [such as X's in English] and more common ones do not [such as E's in English, about 1/8 of English text is made up of Es]. Durston et al’s metric on protein families is directly related, as well. Why are you making a mountain out of a molehill on this?
I'm not trying to; people just keep posting questions to me on the subject.Doveton
July 20, 2011, 09:17 AM PDT
Doveton: Yockey and others have measurements of info in bio molecules. (This book preview shows how they go about it. You will find an answer to the "dna is not a code" objection, here, note in that the diagram from Yockey mapping the genetic system to Shannon's model of a comm system.) The just above should not be amazing or a matter for serious objection. The matter is quite similar to how we can have information measures of strings of text symbols in English, like in this comment. Actually such should not be controversial or even an issue, once you see that these are string structures and the various elements have a message-function or are directly derived from such (for proteins). The way this is measured is -- as e.g. Shannon did with the original paper back in 1948 for English text -- to simply tot up symbol relative frequencies, converted into probabilities, and to then do the negative log probabilities. This yields an additive measure, in bits if the base is 2. [Or if you want to detour, you can calculate Shannon's H, an avg info per symbol measure. That is what it boils down to.] If the distributions were flat random [as they can in principle be, the chemistry of chaining is rather unconstraining on sequence successions . . . ] we would have 2 bits per symbol for D/RNA, and 4.32 bits per symbol for typical 20-AA proteins. That they are not quite that way in real life is not unexpected, once we deal with constraints imposed by target function or code patterns, and what happens is rarer symbols get higher info per symbol measures [such as X's in English] and more common ones do not [such as E's in English, about 1/8 of English text is made up of Es]. Durston et al's metric on protein families is directly related, as well. Why are you making a mountain out of a molehill on this? GEM of TKIkairosfocus
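
The per-symbol figures in the comment above (2 bits per base, 4.32 bits per amino acid) fall straight out of the negative-log2 rule for a flat distribution:

```python
import math

def bits_per_symbol_uniform(alphabet_size: int) -> float:
    """Shannon self-information of one symbol drawn uniformly from an
    alphabet of the given size: -log2(1/N) bits per symbol."""
    return -math.log2(1.0 / alphabet_size)

print(bits_per_symbol_uniform(4))             # 2.0 bits per DNA/RNA base
print(round(bits_per_symbol_uniform(20), 2))  # 4.32 bits per amino acid
```

With non-flat distributions, as the comment notes, rarer symbols carry more bits and common ones fewer; the uniform case is simply the ceiling.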
July 20, 2011, 08:03 AM PDT
Pardon me if this has been brought up within the context of this thread already, but the talk of "codes" reminded me of a posting at another blog. The blog linked to the Isaac Asimov TOE (Theory of Everything...not Evolution) discussion. Neil DeGrasse Tyson was moderating. On the panel was Jim Gates, of supersymmetry fame. He has a paper (4 years old) that talks of his discovery of "Error Correction Codes" found when using N-supersymmetric models to "explain" (for lack of a better term) the "stuff of existence" (i.e. particles, forces, etc...). I've linked to the paper here for your reference. Now, this all hinges on whether string theory (in one of its more peculiar forms) is "true", as to whether the universe contains ECC codes. ECC codes (parity checking, essentially) are used extensively in computer systems to ensure the fidelity of the data being manipulated. Very useful when you want to ensure the credibility of your calculations. The use of specialized RAM (Random Access Memory) known as ECC-RAM is even required in highly available, fault-tolerant systems such as a bank's data center or the control systems of "mission critical" components for space vehicles, fighter craft, ship/land based "fire control" equipment (weaponry controllers), and similar such devices. However, this sort of thinking still informs the two ideological camps: the "Mathematical Axioms are a product of the physical realm" camp and the "Mathematical Axioms exist in the physical realm" camp. Not quite the same meaning. Essentially boiling down to whether the axioms exist because of, or in spite of, the known universe.ciphertext
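
For readers unfamiliar with the parity checking mentioned in the comment above, the simplest error-detecting code is a single even-parity bit:

```python
def add_parity(bits: list[int]) -> list[int]:
    """Append one bit so the total number of 1s in the word is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word: list[int]) -> bool:
    """An intact even-parity word always has an even count of 1s."""
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(parity_ok(word))   # True: word arrived intact
word[2] ^= 1             # a single bit flips in transit
print(parity_ok(word))   # False: the error is detected
```

Real ECC memory uses stronger codes (e.g. Hamming codes) that can also correct single-bit errors, but the parity bit captures the core idea of redundancy added purely to preserve fidelity.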
July 20, 2011, 07:18 AM PDT
Mung, I prefer this diagram myself: http://upload.wikimedia.org/wikipedia/commons/3/30/Transactional.pngDoveton
July 20, 2011, 07:09 AM PDT