Uncommon Descent Serving The Intelligent Design Community

Essay contest: “Do Life and Living Forms present a problem for materialism?”


THE ROYAL INSTITUTE OF PHILOSOPHY

2015 Philosophy Essay Prize Competition

The Royal Institute of Philosophy and Cambridge University Press are pleased to announce the 2015 Philosophy Essay Prize. The winner of the Prize will receive £2,500 with his or her essay being published in Philosophy and identified as the essay prize winner.

The topic for the 2015 essay competition is:

‘Do Life and Living Forms present a problem for materialism?’

Old-style vitalism, attributing an internal animating substance or force to living things, gave way to the idea that life may yet be a property over and above physical and chemical ones. Subsequent to that, it was widely thought that life is an organisational or functional feature of bodies, instantiated by their physical properties. With ongoing debates about analogous issues relating to mind (especially consciousness and intentionality) still running, and renewed interest in anti-reductionist interpretations of emergence and of teleological description and explanation, the question is posed: do life forms present a problem for materialism?

In assessing entries priority will be given to originality, clarity of expression, breadth of interest, and potential for advancing discussion.

All entries will be deemed to be submissions to Philosophy and more than one may be published. In exceptional circumstances the prize may be awarded jointly in which case the financial component will be divided, but the aim is to select a single prize-winner.

Entries should be prepared in line with standard Philosophy guidelines for submission.

They should be submitted electronically in Word, with PRIZE ESSAY in the subject heading, to assistant@royalinstitutephilosophy.org.

The closing date for receipt of entries is 1st October 2015.

Entries will be considered by a committee of the Royal Institute of Philosophy, and the winner announced by the end of 2015. The winning entry will be published in Philosophy in April 2016.

See also: Why origin of life is a difficult problem for naturalism (materialism)

Follow UD News at Twitter!

Comments
Thank you very much!
JoeCoder
March 2, 2015, 11:55 AM PDT
I’ve done some googling and I can’t find anything about a 2015 Royal Institute of Philosophy essay contest
They sent out a press release email but haven't posted anything to their site yet. This page tells you where you can submit your entry: http://comments.gmane.org/gmane.science.philosophy.region.europe/14584
Silver Asiatic
March 2, 2015, 09:27 AM PDT
Once again another UD post that links to other UD pages but not an actual link to the source being discussed! I've done some googling and I can't find anything about a 2015 Royal Institute of Philosophy essay contest, which is a shame because I was considering entering. All I see on their site is the one from 2014.
JoeCoder
March 2, 2015, 07:52 AM PDT
nightlight: You say:
That’s not correct. Consciousness has no causal power to affect anything, as far as present natural science or experiments can establish. There is no equation containing some quantity C for consciousness that could connect it to any quantity we know how to detect or measure. No one knows how to conduct an experiment that would detect or measure consciousness. It is simply not part of natural laws as we know them presently, in any form or shape. Hence it is absurd to say that “consciousness can attain results” of any sort. You can’t move a feather with it, let alone do something more challenging.
I respect your position, but please note that you are doing philosophy here. Philosophy of consciousness and philosophy of science. My philosophy of consciousness and philosophy of science are certainly different from yours. And there is no doubt that scientific problems, like OOL, can influence both yours and mine. You say:
Are you saying that “consciousness” came down from wherever and somehow manipulated and arranged molecules into cells?
Yes. Conscious agents do that all the time. It's called design. You say:
No experiment has ever demonstrated that “consciousness” can interact with molecules, or anything else. There is no consciousness in any equation of natural science either. If you know of any experiment or equation, you’re welcome to share the reference.
Have you ever seen a painting? A computer program? A book?
gpuccio
March 2, 2015, 07:03 AM PDT
"There is no omniscient or omnipotent being in the computational perspective I hold." So you have made unguided and non-living computation your false god, i.e. your idol? You live in a fairyland, nightlight.
bornagain77
March 2, 2015, 06:34 AM PDT
#37 ba77 "Here is what Gregory Chaitin, a world-famous mathematician, said about the limits of the computer program" You are turning upside down what I said. My point is that the universe is computing precisely because of the fundamental limitations on what computation can do. You are confusing my position with deism, an omniscient entity which figured it all out in advance and set it into motion. You seem to believe that what I am suggesting is a computer model for such an omniscient being. That's the complete opposite of my position. There is no omniscient or omnipotent being in the computational perspective I hold. There is no solution book either.
nightlight
March 2, 2015, 04:35 AM PDT
#39 gpuccio: "OOL, and in general CSI, show that conscious cognition and purpose can attain results that simple laws cannot attain, without violating any existing laws." That's not correct. Consciousness has no causal power to affect anything, as far as present natural science or experiments can establish. There is no equation containing some quantity C for consciousness that could connect it to any quantity we know how to detect or measure. No one knows how to conduct an experiment that would detect or measure consciousness. It is simply not part of natural laws as we know them presently, in any form or shape. Hence it is absurd to say that "consciousness can attain results" of any sort. You can't move a feather with it, let alone do something more challenging. Are you saying that "consciousness" came down from wherever and somehow manipulated and arranged molecules into cells? No experiment has ever demonstrated that "consciousness" can interact with molecules, or anything else. There is no consciousness in any equation of natural science either. If you know of any experiment or equation, you're welcome to share the reference.
nightlight
March 2, 2015, 04:24 AM PDT
nightlight: You miss the point. OOL, and in general CSI, show that conscious cognition and purpose can attain results that simple laws cannot attain, without violating any existing laws. That is the simple point of ID theory. A point that you miss.
gpuccio
March 2, 2015, 03:40 AM PDT
Robert Marks' Evolutionary Informatics Lab, list of publications: http://evoinfo.org/publications.html
bornagain77
March 2, 2015, 03:19 AM PDT
Well put, Box. To reiterate reality vs. nightlight's dream world, here is what Gregory Chaitin, a world-famous mathematician, said about the limits of the computer program he was trying to develop to prove that Darwinian evolution was mathematically feasible:
At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011. Excerpt: In Chaitin's own words, "You're allowed to ask God or someone to give you the answer to some question where you can't compute the answer, and the oracle will immediately give you the answer, and you go on ahead." https://uncommondescent.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/
On Algorithmic Specified Complexity by Robert J. Marks II - video paraphrase (all evolutionary algorithms have failed to create truly novel information, including 'unexpected, and interesting, emergent behaviors'): https://www.youtube.com/watch?v=No3LZmPcwyg
bornagain77
March 2, 2015, 02:37 AM PDT
Nightlight #32,
NL: The unifying power is harmonization, synchrony between activity of the parts.
No, this absolutely doesn’t answer my question and is absolutely wrong. You are mixing up cause and effect, and this crucial mistake is foundational for everything that comes next. Obviously “harmonization” and “synchrony between activity of the parts” are the effects of a unifying power. The synchronized parts don’t explain unity; they are explained by unity instead. Why else would parts synchronize their activities if not due to a unifying power?
NL: What unifies hundreds of cogs into a clock is the tight meshing of its gears, springs, levers…
One thing is for sure: the unification is not done by the gears, springs, and levers themselves. There is exactly zero motivation and zero ability for them to do so. We need to invoke an outside unifying force.
Box: Now my question to you is exactly the same. You speak of “systems in which replicating less intelligent agents and linking them in an interacting network forms a more intelligent agent”. What force unifies less intelligent agents into one more intelligent agent? The key philosophical insight is this: agency – or consciousness – is a unity. And a unity cannot be explained bottom-up.
NL: The unity is achieved by the programs that are running in the underlying networks, from Planck scale pregeometric networks (which produce our space-time and physical fields and particles obeying physical laws), to cellular biochemical networks which produce and operate us, to social networks produced by us (technological, scientific, linguistic, economic,…).
Why would they do that? Why would they unify? And by what ability? Just as the gears, springs and levers in a clock don’t have an intrinsic power or ability to unify, so there is no intrinsic unifying power in any part of any whole. Even one cell, which splits into two identical cells, which in turn split again, cannot explain a multi-cellular organism, because there is no way for a group of similar cells to know how to proceed. None of the cells knows what to do, since they are not endowed with overview. Without a unifying power acting by downward causation, why would the individual cells not do whatever they want? How is the process toward an adult life form directed without a unifying power that has overview and the authority to direct?
NL: The algorithms that general adaptive networks produce and run (without any external supervision) work by creating internal models of their environment, which includes self-actor.
Stop right there, because it doesn’t make any sense. This cannot be done without overview, without a self already in place guiding the process. You don’t explain the “self-actor” at all. The parts never do.
Box
March 2, 2015, 01:43 AM PDT
nightlight, Chess, like most games, is lawful. It does not follow that chess, or any other game, is mechanistic. Life may be lawful, but it does not follow that life is mechanistic. nightlight:
But it is still a fully lawful and knowable system all the way down, hence if you call that “mechanistic” then it is so for you.
Is it life that is still a fully lawful and knowable system all the way down? Or is it chess that is still a fully lawful and knowable system all the way down? If you don't know, I'll understand. nightlight:
In this particular computational perspective, the elemental consciousness is the essence of the lowest building blocks making up the computational substratum of the universe. In that sense this is not a mechanistic perspective but a form of panpsychism.
Life is not a game. There is no "game of life." Are you talking about chess?
Mung
March 1, 2015, 08:17 PM PDT
#33 Mung In this particular computational perspective, the elemental consciousness is the essence of the lowest building blocks making up the computational substratum of the universe. In that sense this is not a mechanistic perspective but a form of panpsychism. But it is still a fully lawful and knowable system all the way down, hence if you call that "mechanistic" then it is so for you. More discussion on this aspect is in post1 and post2.
nightlight
March 1, 2015, 07:51 PM PDT
nightlight, is life mechanistic? Is the game of chess mechanistic?
Mung
March 1, 2015, 07:25 PM PDT
#30 Box
What is it that keeps the molecules together? What is it that keeps countless parts (quarks, atoms, molecules) captured into functional submission for exactly a lifetime?... Bottom line: what is this unifying power? I never got an answer.
The unifying power is harmonization, synchrony between the activity of the parts. What unifies hundreds of cogs into a clock is the tight meshing of its gears, springs, levers... What produces unifying power here at UD? It is that your thinking and keyboard typing here is harmonized (in a broad sense) with the thinking and typing of other members thousands of miles away. What are the odds that some bunches of atoms scattered around the globe, making up the bodies of UD members, would be dancing in synchrony as they do when we read and post messages here? How can "natural forces" do that? In fact, in this case we know exactly how, and can explain how it works. Of course, there is a very long, convoluted path of scientific discovery and advances in technology before that could happen, but it is not a mystery.
Now my question to you is exactly the same. You speak of "systems in which replicating less intelligent agents and linking them in an interacting network forms a more intelligent agent". What force unifies less intelligent agents into one more intelligent agent? The key philosophical insight is this: agency - or consciousness - is a unity. And a unity cannot be explained bottom-up.
The unity is achieved by the programs that are running in the underlying networks, from Planck scale pregeometric networks (which produce our space-time and physical fields and particles obeying physical laws), to cellular biochemical networks which produce and operate us, to social networks produced by us (technological, scientific, linguistic, economic,...). The algorithms that general adaptive networks produce and run (without any external supervision) work by creating internal models of their environment, which includes a self-actor. To decide on their next action that maximizes their net 'rewards minus punishments' utility function, the networks play a what-if game with this internal model, running it forward in model space and time while trying out different initial actions of the self-actor. Then they evaluate the rewards and punishments of the resulting final states for each of the self-actor's actions tried, and perform the optimum action among these in the real world. That's just like a chess player trying out his different available next moves on an internal/mental model of the chess position, playing responses by a virtual opponent, then his own counter-responses, and so on, as far as his processing powers allow in a given time. In each branch of such a tree of possibilities he evaluates the terminal/leaf position and selects the move on the real chess board matching the virtual move that yields the maximum gain in his mental model of the game. Note that general adaptive networks produce such programs and algorithms spontaneously and automatically, without any external programmer. That's just like what a chess player's brain does in the process of learning and practicing the game: there is no external programmer tweaking his neurons and uploading a ready-made chess program. Just playing and studying the game does it (it's due to the pure math of such networks, which can be simulated with neural networks).
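The "what-if game" described above, in which the self-actor's candidate actions are tried against an internal model and the resulting final states scored by a rewards-minus-punishments utility, is essentially minimax tree search. A minimal Python sketch of that idea; the `ToyModel` class is a purely hypothetical stand-in for a network's internal model, not anything from the discussion:

```python
class ToyModel:
    """Hypothetical internal model: the state is a number, each move adds
    1 or 3 to it, the game runs a fixed number of plies, and utility is
    the final number (self-actor maximizes, modeled opponent minimizes)."""
    def actions(self, state):
        return [1, 3]
    def apply(self, state, action):
        return state + action
    def is_terminal(self, state):
        return False
    def utility(self, state):
        return state  # net rewards minus punishments at the leaf

def minimax(state, depth, is_self, model):
    # Explore the what-if tree inside the model: the self-actor tries each
    # action, the modeled opponent replies, and leaves are scored by utility.
    if depth == 0 or model.is_terminal(state):
        return model.utility(state)
    values = [minimax(model.apply(state, a), depth - 1, not is_self, model)
              for a in model.actions(state)]
    return max(values) if is_self else min(values)

def best_action(state, depth, model):
    # Act in the "real world" with the move whose modeled outcome is best.
    return max(model.actions(state),
               key=lambda a: minimax(model.apply(state, a), depth - 1, False, model))
```

With a two-ply search on this toy model, the self-actor picks the larger move (3), since even the minimizing opponent's best reply leaves it better off than any line starting with 1.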
Just as a chess player becomes better at his internal modeling of the game, so do general networks at modeling their environment as they interact with it and process the punishments and rewards that follow each of their decisions (in neural networks this processing is done via back-propagation algorithms). The more accurately they can predict the actions of their environment and its responses to their own actions, the better they become at maximizing their net rewards minus punishments. Since this environment is itself made of intelligent networks of the same kind, operating at all levels and all places, overlapping and permeating each other, the internal model space of each network models these other networks as separate actors in its model space. As the internal model sharpens its resolution, it starts discerning internal models in these other actors within its model space. This is like you being aware of the internal perspective of the person you are interacting with. Hence, such internal modeling of the internal modeling by other interacting networks inevitably leads, as it sharpens through interactions, to a fractal structure of these internal models. The picture is like a ring of pearls, each pearl with a reflection of the whole ring, in which the tiny images of other pearls also contain their own even smaller images of the whole ring with even smaller pearls inside, and so on ad infinitum. The computational or material embodiment of these fractal internal models is precisely the hierarchy of smaller networks which make up the elemental building blocks of any given network (e.g. you are built as a network of cells, while cells are built as biochemical networks, which in turn are built of atomic networks forming molecules, etc., down to Planck scale networks). How such embodiments result in unified consciousness propagating from inside out is explained in post1 and post2.
In order to maximize their predictive powers, interacting networks develop algorithms which harmonize their operation i.e. they choreograph their actions seeking to maximize mutual predictability (harmonization, altruism). At the level of the social networks this harmonization aiming at maximizing mutual predictability is achieved via customs, habits, common languages, laws, standardization of products and communication protocols, money, clocks, etc. Analogous harmonization mechanisms can be discerned at all other levels of networks as well, down to physical laws which can also be formulated as results of operation of anticipatory systems. This harmonization is far more advanced (or complete) at the lower levels of the hierarchy of networks (in smaller networks). Generally, the harmonization process unfolds from inside out, expanding from smaller to larger scale networks, harmonizing events at ever larger distances. Thus the social networks are far less harmonious than the networks of cells making up the organism, which in turn are far less harmonious than the biochemical networks in the cells, etc. The most harmonious and mutually predictable among the networks we presently know are the networks of physical particles & fields. They are so perfectly harmonized and predictable that their behaviors can be described by mathematical equations that fit on few pages of text. In short, the unifying process is bottom up, inside-out, driven by those optimization algorithms in which networks seek to maximize their predictive powers by maximizing mutual predictability, which in turn is done via harmonization of actions with surrounding networks. Ultimately they end up internally modeling as a whole the next larger, enclosing network reshaping it into more harmonious, more predictable, unified creation in their own image (like god creating from inside-out). 
The reason that bottom-up, inside-out natural sciences were so successful in describing nature over recent centuries is precisely that they closely mimic how nature computes itself: bottom-up, inside-out.
nightlight
March 1, 2015, 07:19 PM PDT
Do Life and Living Forms present a problem for materialism? In a word, yes. Modern materialism is mechanistic. Life is not. QED.
Mung
March 1, 2015, 05:13 PM PDT
Nightlight,
NL: The only scientifically valid statement is that observed complexity implies “intelligent process” as a designer.
Ridiculous, isn’t it?
NL: Of course, the “intelligent process” gives rise to the problem of infinite regression, i.e. the conjecture that ever more “intelligent” processes are required to explain the origin of previous “intelligent” processes.
NL: One possible way to terminate such ‘tower of turtles’ is to construct models which have a property of ‘additive intelligence’ i.e. systems in which replicating less intelligent agents and linking them in an interacting network forms a more intelligent agent.
I have a very serious philosophical - yes, philosophical - objection with regard to your theory of additive intelligence. I find that part of your theory as ridiculous as the naturalist claim that molecules “self-organize” into life. Your theory suffers from the same problem as theirs: there is NO unifying principle. I have asked the naturalists the question: “why would molecules be anything other than molecules?” IOW, what is an organism? What is it that keeps the molecules together? What is it that keeps countless parts (quarks, atoms, molecules) captured into functional submission for exactly a lifetime? If an organism is nothing but molecules, why does it not simply fall apart - as it does in fact at the moment of death? Bottom line: what is this unifying power? I never got an answer. Now my question to you is exactly the same. You speak of “systems in which replicating less intelligent agents and linking them in an interacting network forms a more intelligent agent”. What force unifies less intelligent agents into one more intelligent agent? The key philosophical insight is this: agency - or consciousness - is a unity. And a unity cannot be explained bottom-up. One adult does not magically emerge out of ten babies connected by a lot of wire.
Box
March 1, 2015, 04:31 PM PDT
nightlight, your answer is complete nonsense. We observe unique algorithmic information in life that cannot be reached by any front-loaded computational process. You claim otherwise. Your claim is wrong! Disagree? Then write a computer program with genuine mathematical insight! It can't be done! i.e. incompleteness! "Either mathematics is too big for the human mind or the human mind is more than a machine" - Kurt Gödel
bornagain77
March 1, 2015, 04:30 PM PDT
#27 ba77 "incompleteness puts a severe damper on your dream of godlike computing power" Incompleteness is a problem only for deism or Laplace's demon, where one assumes some system (god) had computed it all upfront. Besides the halting and incompleteness problems, that perspective has problems explaining evil and free will. Incompleteness is an asset in the computational approach, since it sheds light on the facts that the universe is still computing (thankfully) and that there is evil (harmonization failure). In this approach there is no solution book, and no one knows the solution, or even whether the present universe can get there from here or whether it will require a reboot.
nightlight
March 1, 2015, 04:19 PM PDT
Well nightlight, you have got some severe problems with your 'hypothesis':
Not that kind of naive and sterile “front loader” (deism, the front loading by omniscient and omnipotent being) ... the front loading I support is far more economical, being in the form of simple elemental computing building blocks with additive intelligence (computing power), such as neural networks or networks of finite state automata.
First, as briefly pointed out previously, incompleteness puts a severe damper on your dream of godlike computing power:
Kurt Gödel - Incompleteness Theorem - video: https://vimeo.com/92387853
Kurt Gödel and Alan Turing - Incompleteness Theorem and Human Intuition - video: https://vimeo.com/92387854
"Either mathematics is too big for the human mind or the human mind is more than a machine" - Kurt Gödel
The danger of artificial stupidity - Saturday, 28 February 2015: "Computers lack mathematical insight: in his book The Emperor's New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel's first incompleteness theorem to argue that, in general, the way mathematicians provide their 'unassailable demonstrations' of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational." http://machineslikeus.com/news/danger-artificial-stupidity
The Limits Of Reason - Gregory Chaitin - 2006. Excerpt: an infinite number of true mathematical theorems exist that cannot be proved from any finite system of axioms. http://www.umcs.maine.edu/~chaitin/sciamer3.pdf
Algorithmic information has never been created by anything other than a mind! Moreover, there is not just one algorithm controlling the growth of organisms but there are countless thousands of different algorithms inherent in higher organisms that cannot be reached by any one algorithm. i.e. incompleteness!
"To the skeptic, the proposition that the genetic programmes of higher organisms, consisting of something close to a thousand million bits of information, equivalent to the sequence of letters in a small library of one thousand volumes, containing in encoded form countless thousands of intricate algorithms controlling, specifying and ordering the growth and development of billions and billions of cells into the form of a complex organism, were composed by a purely random process is simply an affront to reason. But to the Darwinist the idea is accepted without a ripple of doubt - the paradigm takes precedence!" - Michael Denton, Evolution: A Theory In Crisis
Moreover, as also briefly pointed out previously, these countless thousands of different algorithms inherent in higher organisms are 'species specific':
An Interview with Stephen C. Meyer TT: Is the idea of an original human couple (Adam and Eve) in conflict with science? Does DNA tell us anything about the existence of Adam and Eve? SM: Readers have probably heard that the 98 percent similarity of human DNA to chimp DNA establishes that humans and chimps had a common ancestor. Recent studies show that number dropping significantly. More important, it turns out that previous measures of human and chimp genetic similarity were based upon an analysis of only 2 to 3 percent of the genome, the small portion that codes for proteins. This limited comparison was justified based upon the assumption that the rest of the genome was non-functional “junk.” Since the publication of the results of something called the “Encode Project,” however, it has become clear that the noncoding regions of the genome perform many important functions and that, overall, the non-coding regions of the genome function much like an operating system in a computer by regulating the timing and expression of the information stored in the “data files” or coding regions of the genome. Significantly, it has become increasingly clear that the non-coding regions, the crucial operating systems in effect, of the chimp and human genomes are species specific. That is, they are strikingly different in the two species. Yet, if alleged genetic similarity suggests common ancestry, then, by the same logic, this new evidence of significant genetic disparity suggests independent separate origins. For this reason, I see nothing from a genetic point of view that challenges the idea that humans originated independently from primates, http://www.ligonier.org/learn/articles/scripture-and-science-in-conflict/
I'm curious what you mean by 'locally computable'. Do you mean that all computation in the universe is being accomplished in a materialistic fashion, without reference to beyond-space-and-time causes? If so, you are mistaken.
bornagain77
March 1, 2015, 03:18 PM PDT
#24 ba77
So you are a 'neo front-loader'? ... First quantum mechanics, and then chaos theory, has basically destroyed it, since no amount of precision can control the outcome far in the future.
Not that kind of naive and sterile "front loader" (deism, front loading by an omniscient and omnipotent being). As explained in our previous discussions (hyperlinked TOC here), the front loading I support is far more economical, being in the form of simple elemental computing building blocks with additive intelligence (computing power), such as neural networks or networks of finite state automata. In this perspective, everything going on in the universe is being continuously and locally computed by this underlying self-programming, distributed computing substratum (matrix). No one, including the 'front loader', has any idea what the result of the computation will be, hence no one could have preloaded a perfectly harmonious universe upfront (hence conflicts, suffering and the existence of evil). The harmonization problem being computed is computationally irreducible, i.e. there are no further shortcuts in the algorithm, and only the computer itself can find out the solution when it completes the computation, if it can be completed at all with the program currently running. E.g. the problem may be undecidable by this particular program, which would lead either to infinitely long computation (infinite expansion of the universe) or to a reboot of the 'matrix' with a new program, in case there is a timeout on the duration of each run (e.g. triggered when the max entropy state or heat death is reached). Neither quantum theory nor chaos presents a fundamental problem within this computational perspective, as explained by Wolfram and others developing pregeometry underlying present physics based on a computational substratum operating at the Planck scale (brief intro here).
nightlight
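Computational irreducibility, the idea invoked above, is usually illustrated with an elementary cellular automaton such as Wolfram's Rule 30: no known closed-form shortcut predicts the state after n steps, so the only way to learn the outcome is to run the computation itself. A minimal Python sketch (the ring of cells and its width are just illustrative choices):

```python
def rule30_step(cells):
    """One step of Wolfram's Rule 30 on a ring of 0/1 cells: each cell's
    next value is left XOR (center OR right), with wraparound neighbors."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(cells, steps):
    # There is no known formula that jumps straight to the state after
    # `steps` iterations; you must actually iterate, which is the point
    # of computational irreducibility.
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells
```

Starting from a single live cell, two steps already produce the characteristic irregular pattern; predicting, say, the millionth row is believed to require doing all million steps.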
March 1, 2015, 02:41 PM PDT
#20 gpuccio: "OOL, which is a scientific problem, can certainly cause problems to some philosophies of science, and support others." It can pose difficulties only to a strawman of those philosophies created by their opponents. Explain how OOL poses a problem for naturalism, which is a perspective based on belief in the complete lawfulness of all phenomena, all the way down. What the present difficulties in OOL research point to is not a problem with the existence of lawfulness (which, if it were the case, would undermine the basis of naturalism) but merely the limitation of what we presently consider to be the laws of nature. Namely, our present laws are a very special kind of simple algorithms which were meant to be run on the 'paper and pencil computer' (such as the mathematical formalisms of calculus, vector spaces, manifolds, group theory, etc.). As a result, as soon as we need to deal with large numbers of particles (or system components), this ancient 'paper and pencil' computer is out of its depth, and we have to simplify the problem by describing the system state and its initial and boundary conditions via simple probability distributions (Gaussian, Poissonian, binomial, etc.). That's obviously not adequate for describing live systems, which are far-from-equilibrium complex systems. The 'no free lunch' theorems by Dembski and others show clearly that those simple probability distributions as the initial and boundary conditions of these systems are inadequate for explaining the structures and functionality found in the cells. Much more subtle algorithms are needed to describe those initial and boundary conditions concisely, i.e. those initial and boundary conditions are not some simple probability distributions but are computed by some more sophisticated algorithms which are not captured by what we presently consider natural laws.
What you probably have in mind (and what the OP had in mind by linking this contest to OOL) is that these nano-machines found in the cells (to which the 'no free lunch' results were applied) are recent discoveries, thus they would pose some new problem to naturalism, i.e. to the belief in complete lawfulness all the way down. In fact, the same problems for the 'paper and pencil' computer already exist if one were to apply it to us as organisms or to what we are doing here (discussing on a web forum). Those old methods are equally inadequate for describing what humans, animals or plants are doing or how they are structured. The modern discoveries in molecular biology merely add more of the same kind of phenomena which are beyond the expressive capabilities of the ancient 'paper and pencil' algorithms and simple probability distributions. But that's not an essentially new kind of problem. It has been there since the ancient Greeks. We have only fleshed it out with more examples of the same kind. The real solution is not to throw hands up in the air, give up on the belief in lawfulness (hence on naturalism) and prostrate down to the "intelligent agency" in frozen awe. Instead, what these difficulties of the old 'paper and pencil' algorithms we use to express the lawfulness of the universe point to is the need for a more general and more expressive algorithmic language, such as the one we have discovered while developing computing technology in recent decades. In fact this process of reformulating our old 'paper and pencil' algorithmic language for expressing natural laws is already under way, going under names such as New Kind of Science (NKS), Complex Adaptive Systems, digital physics, cybernetics, etc. There is a longer post here that describes this transition with references and links.
nightlight
March 1, 2015 at 01:35 PM PDT
So you are a 'neo front-loader'? The Front-loading Fiction - Dr. Robert Sheldon - 2009 Excerpt: Historically, the argument for front-loading came from Laplacian determinism based on a Newtonian or mechanical universe--if one could control all the initial conditions, then the outcome was predetermined. First quantum mechanics, and then chaos-theory has basically destroyed it, since no amount of precision can control the outcome far in the future. (The exponential nature of the precision required to predetermine the outcome exceeds the information storage of the medium.),,, Even should God have infinite knowledge of the outcome of such a biological algorithm, the information regarding its outcome cannot be contained within the system itself. http://procrustes.blogtownhall.com/2009/07/01/the_front-loading_fiction.thtml How well can information be stored from the beginning to the end of time? - Jan. 13, 2015 Excerpt: Information can never be stored perfectly. Whether on a CD, a hard disk drive, or a piece of papyrus, technological imperfections create noise that limits the preservation of information over time. But even if you had a perfect storage medium with zero imperfections, there would still be fundamental limits placed on information storage due to the laws of physics that govern the evolution of the universe ever since the Big Bang.,,, To do this, they modelled information transmission over a "channel" that is essentially spacetime itself, described by the Robertson-Walker metric. Their model combines the theories of general relativity and quantum information by considering the quantum state of matter (specifically, spin-1/2 particles) as the universe expands. In this model, the evolution of the universe creates noise which, in the context of quantum communication, acts like an amplitude damping channel. 
The physicists' main result is that, the faster the universe expands, the less well the information can be preserved.,,, So to answer the original question of how much information can be stored from the beginning to the end of time, the results suggest "not very much." http://phys.org/news/2015-01-how-well-can-information-be.html Is Theistic (Front Loaded) Evolution Plausible? - Stephen Meyer - video http://www.metacafe.com/w/5337990 "Limits to Self-Organization (From Initial Conditions)" - podcast Excerpt: Dr. Johns shows that Darwinian evolution is actually a type of a self-organizing process, and that it is limited in the types of biological structures it can produce. http://intelligentdesign.podomatic.com/entry/2012-07-09T17_09_44-07_00 of related note: An Interview with Stephen C. Meyer TT: Is the idea of an original human couple (Adam and Eve) in conflict with science? Does DNA tell us anything about the existence of Adam and Eve? SM: Readers have probably heard that the 98 percent similarity of human DNA to chimp DNA establishes that humans and chimps had a common ancestor. Recent studies show that number dropping significantly. More important, it turns out that previous measures of human and chimp genetic similarity were based upon an analysis of only 2 to 3 percent of the genome, the small portion that codes for proteins. This limited comparison was justified based upon the assumption that the rest of the genome was non-functional “junk.” Since the publication of the results of something called the “Encode Project,” however, it has become clear that the noncoding regions of the genome perform many important functions and that, overall, the non-coding regions of the genome function much like an operating system in a computer by regulating the timing and expression of the information stored in the “data files” or coding regions of the genome. 
Significantly, it has become increasingly clear that the non-coding regions, the crucial operating systems in effect, of the chimp and human genomes are species specific. That is, they are strikingly different in the two species. Yet, if alleged genetic similarity suggests common ancestry, then, by the same logic, this new evidence of significant genetic disparity suggests independent separate origins. For this reason, I see nothing from a genetic point of view that challenges the idea that humans originated independently from primates, http://www.ligonier.org/learn/articles/scripture-and-science-in-conflict/
bornagain77
March 1, 2015 at 01:29 PM PDT
#19 Box "Computers cannot create information and certainly not the kind that ID is interested in; this has been explained by Dembski, Meyer and others numerous times." You are leaping to a conclusion before reading that post, since I went through two types of information, addressing explicitly the issue you are bringing up. The first kind was the information encoded in the sequence of chess moves (which is CSI, since the moves correspond to legal chess moves in a 2-player MinMax game), as a universal sequence compressor (such as gzip) would compute it. For the universal compressor the players are black boxes, hence it doesn't have the source code for the program, or for the brain in the case of a human opponent. That is the type of relation human researchers have relative to natural phenomena. There is always a black box in natural science -- the innermost layer of phenomena beyond which present methods cannot probe (e.g. max resolution of a microscope or telescope, or max energies in a particle accelerator, etc). The second kind of information discussed was the algorithmic information (also CSI, as above) computable by someone who has the source code of the chess program. In that case one finds a large compression ratio, which shows that the first kind of information contained in the sequence of moves was largely illusory, an artifact of the limited model that the universal sequence compressor (like gzip) had of the inner workings of the generating process. As explained, that shift of the cut, or of the boundary defining the "material cause" (the material system which runs by physical laws), didn't change the conclusion that "material cause" can generate information (CSI), falsifying Meyer's dictum. It merely reduced the amount of information from 100 GB in the compressed move sequence to 100 KB in the program source code (which is also CSI).
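The black-box-vs-source-code contrast can be made concrete with a hedged sketch (the generator, its constants, and the use of the source text's length as the "knowing the source code" view are all illustrative assumptions, not nightlight's actual setup): a short deterministic "program" emits a long move-like sequence, and a universal compressor that sees only the output assigns it vastly more information than the length of the source that generated it.

```python
import zlib

# Text of the generating "program"; its length stands in for the
# information available to someone who knows the source code.
SOURCE = (
    "def gen(n):\n"
    "    x = 12345\n"
    "    for _ in range(n):\n"
    "        x = (1103515245 * x + 12345) % 2**31\n"
    "        yield (x >> 16) % 64\n"
)

def gen(n):
    x = 12345
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31  # classic LCG step
        yield (x >> 16) % 64                  # one of 64 'squares' per move

moves = bytes(gen(100_000))

black_box_bits = 8 * len(zlib.compress(moves, 9))  # compressor's black-box estimate
source_bits = 8 * len(SOURCE)                      # 'knows the source code' view

print(black_box_bits, source_bits)
```

The compressor, lacking the source, reports hundreds of kilobits; the source itself is about a kilobit, mirroring the 100 GB vs 100 KB contrast in the comment.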
The point in bringing up the above shift of the cut defining the "material cause" was to illustrate that any information (CSI) someone claims to be present in some system depends on how one draws the boundary between the "system" considered and the "rest" of the universe. Since the modeler (for the compressor) shifting the boundary into the inner workings always leaves some part of the generator opaque to the modeler, any CSI figure declared by the compressor is a figure relative to the compressor (its inner model of the process). Hence CSI is the same kind of relative quantity as "distance of X from origin O", denote it D(X,O), where origin O corresponds to the model used by the compressor (which computes the CSI). If you shift the origin O (or shift the model boundary in the CSI computation), the distance D(X,O) changes. The same goes for the CSI computed for some process or system -- CSI value is relative to the model used by the compressor i.e. it is not some absolute property of the system alone, but also a property of the observer and his arbitrary choice of the modeling cut (i.e. of his black box boundary). Note that all our models of natural processes (the content of natural science) have an opaque boundary beyond which there is a black box as far as given theory/science goes. No matter what someone is claiming about size of CSI in some phenomenon, it's an empty bluff since no one knows what the real CSI of anything is and setting some CSI threshold figure for "intelligence" is fundamentally absurd since as soon as the black box boundary (as known in present theory) shifts, all computed CSI figures shift, just as all distances from origin O shift when you change O. Hence based on current scientific theories, different systems or phenomena can flip flop back and forth, above and below this "CSI intelligence threshold". 
Regarding the CSI produced by a chess program, recall that chess is a finite game: the program will terminate after generating all chess games, and the total information in that whole sequence is merely what it takes to specify the rules of the game and the rule for ordering the games in the listing (e.g. some alphabetic sort), since that uniquely determines the entire long sequence of all moves from all games. Hence the information "created" by the chess program is like that in the digits of the number Pi (or rather the digits of 1/8=0.125, since chess is a finite game), i.e. trivial. Of course, we don't know whether the universe is like the digits of Pi, only appearing to us to contain a huge amount of information (or CSI), yet all of it computed by some simple underlying algorithm, visible if one pushes the black box boundary of scientific models deeply enough, with a comparatively trivial amount of real information (or CSI) behind it all. Hence the problem with the Discovery Institute's neo-ID, with its "natural laws" vs "intelligent agency" or "mind" and its "CSI intelligence threshold", with present natural laws and the CSI threshold interpreted as eternal absolutes. Its "intelligent agency" is some kind of omnipotent, part-time capricious entity jumping in and out at its whim to 'intelligently help out' natural laws. In short, neo-ID is a naive, deeply misguided perspective which is rightfully shunned in natural science. Note that neo-Darwinism is its mirror image, every bit as dogmatic and misguided with its "randomness of the gaps". Its priesthood just happened to be more crooked, so it managed to claw its way to the research funding purse first, buying itself a free ride, for now (our grandchildren will be laughing at their silly stories). The real ID is the one understood by many scientists and mathematicians, from the ancient Greeks (since Pythagoras) through the modern era (e.g. Euler, Newton, Maxwell, Einstein, Wolfram), perceived in the mathematical elegance and harmony of the natural laws.
It is the lawfulness of the universe and its knowability that are the main signatures of the Intelligent Design. The more phenomena we can understand as lawful, the stronger the ID signal. That's exactly the opposite of the neo-ID's 'god of the gaps' relation to science -- they seek their signature in the phenomena that are still not understood as lawful. Their deity is capricious, hence they're looking for the signs of that caprice, the unlawfulness -- phenomena unexplained by the presently known laws. That's exactly what neo-Darwinists are looking for, too, just under a different label -- randomness, which is another way to limit or push back the lawfulness.
nightlight
March 1, 2015 at 12:25 PM PDT
Of semi-related interest: Does God Control Everything - Tim Keller - (God's sovereignty and our free will, how do they mesh?) - video (12:00 minute mark) https://www.youtube.com/watch?v=bkQ6ld8dn7I
bornagain77
March 1, 2015 at 07:03 AM PDT
nightlight, by your own admission algorithmic information produced the 'created' information in the chess programs, not material processes. Thus your one counter example to refute Meyer fails to provide a purely materialistic account for the origination of information. Your robot example is a joke! No robot has ever created algorithmic information!
Algorithmic Information Theory, Free Will and the Turing Test – Douglas S. Robertson Excerpt: For example, the famous “Turing test” for artificial intelligence could be defeated by simply asking for a new axiom in mathematics. Human mathematicians are able to create axioms, but a computer program cannot do this without violating information conservation. Creating new axioms and free will are shown to be different aspects of the same phenomena: the creation of new information. http://cires.colorado.edu/~doug/philosophy/info8.pdf The Limits Of Reason - Gregory Chaitin - 2006 Excerpt: an infinite number of true mathematical theorems exist that cannot be proved from any finite system of axioms.,,, http://www.umcs.maine.edu/~chaitin/sciamer3.pdf On Algorithmic Specified Complexity by Robert J. Marks II - video paraphrase (All Evolutionary Algorithms have failed to generate truly novel information including ‘unexpected, and interesting, emergent behaviors’) - Robert Marks https://www.youtube.com/watch?v=No3LZmPcwyg
Here is what Gregory Chaitin, a world-famous mathematician, said about the limits of the computer program he was trying to develop to prove that Darwinian evolution was mathematically feasible:
At last, a Darwinist mathematician tells the truth about evolution - VJT - November 2011 Excerpt: In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.” https://uncommondescent.com/intelligent-design/at-last-a-darwinist-mathematician-tells-the-truth-about-evolution/ Algorithmic Information Theory, Free Will and the Turing Test - Douglas S. Robertson Excerpt: The basic problem concerning the relation between AIT (Algorithmic Information Theory) and free will can be stated succinctly: Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. http://cires.colorado.edu/~doug/philosophy/info8.pdf
bornagain77
March 1, 2015 at 05:52 AM PDT
Nightlight: Materialism and naturalism are philosophies. A vision of science based on methodological naturalism is a philosophy, too. There are different philosophies of science. OOL, which is a scientific problem, can certainly cause problems to some philosophies of science, and support others. As philosophies of science deal with science, it is perfectly right that scientific problems have consequences for different philosophies of science.
gpuccio
March 1, 2015 at 03:52 AM PDT
Nightlight: That’s an ignorant statement by Meyer. It reveals his naive, low resolution concepts of “information” and “material cause” which are easily falsified by a simple counterexample. For example, a chess playing program can create (originate, produce) as much or more “information” than the human chess grandmaster in the form of high quality, creative and instructive chess games (the created information is in the encoded moves). Hence, we can have unambiguously “material cause” (chess playing computer program) “creating information” (chess games). That directly contradicts the Meyer’s prohibition (M1).
The naive ignoramus is you. Computers cannot create information and certainly not the kind that ID is interested in; this has been explained by Dembski, Meyer and others numerous times.
Box
March 1, 2015 at 03:21 AM PDT
#17 ba77
[Stephen Meyer] (M1) "Now, if information is not a material entity, then how can any materialistic explanation account for its origin? How can any material cause explain its origin? ... information is a different kind of entity that matter and energy cannot produce."
That's an ignorant statement by Meyer. It reveals his naive, low-resolution concepts of "information" and "material cause", which are easily falsified by a simple counterexample. For example, a chess playing program can create (originate, produce) as much or more "information" than a human chess grandmaster, in the form of high quality, creative and instructive chess games (the created information is in the encoded moves). Hence we can have an unambiguously "material cause" (a chess playing computer program) "creating information" (chess games). That directly contradicts Meyer's prohibition (M1). A defender of Meyer can argue that the chess program itself was created by a non-material cause ("mind" for short). But Meyer's prohibition (M1) contains no such qualification on what subset of "material causes" cannot create information. On its face, his prohibition applies to all "material causes". Say you insist that what he actually meant is not what he said in (M1) but rather: (M2) a "material cause" (system) which is not created by "mind" cannot create information. But narrowing (M1) into (M2) won't help, since one can have a program which writes chess programs, which then play those games and thus "create information". Now you would have to defend that (M2) was not really what you meant that Meyer meant in (M1), but rather, narrowing the subset of non-creative "material causes" further: (M3) a "material cause" which is created by a "material cause" that was not created by "mind" cannot create information. But none of that helps much, since we can have as long a chain of programs which write other programs as we want, so you would have to keep amending what you and Meyer really meant via M1, M2, M3, M4, ... M999, M1001, ... each time narrowing the prohibition further to a smaller and smaller subset of "material causes" in the chain which allegedly cannot create information.
But with the infinite chain of qualifications on Meyer's prohibition, you have narrowed the "material causes which cannot create information" down to nothing. In short -- the subset of "material causes" to which his prohibition applies is an empty set. Another line of defense is to "refine" what Meyer really meant by "information". Say you insist that the "information" is algorithmic (Kolmogorov) information, i.e. the length of the shortest program that can produce it. The chess programmer has input only, say, 100 kilobytes of source code as his information, while the program can produce a practically unlimited stream of content-rich chess games that, if viewed by chess players, would contain many gigabytes of "information". On the other hand, the algorithmic information that the programmer who knows the source code would assign to all these gigabytes of chess games remains exactly the hundred kilobytes contained in the program's source code. So what we had were only apparent gigabytes of "information" as seen by some observers (chess players inspecting the produced games without knowing the source code). The same series of chess games will actually have only a hundred kilobytes of "information" as seen by the "programmer" who knows the source code. Ok, so we have compressed algorithmically the "created information" from many gigabytes contained in the moves of the chess games down to 100 KB in the source code for the chess program which played those games. But whether it is 100 GB or 100 KB of created information, the original counterexample still stands, since we can extend the "material cause" so it includes the programmer's fingers which typed the 100 KB of source code into the computer. Hence the "material cause" consisting of fingers interacting with a computer keyboard has created 100 KB of information (the source code).
Now you can defend that this "material cause" consisting of "fingers + computer" didn't create 100 KB of information, since there were nerves and hand muscles which controlled those fingers. No problem, we can include hands and hand nerves in the "material cause"; hence this enlarged "material cause", consisting of hands + hand nerves + fingers + computer, has created 100 KB of information. We can now go into another (in principle infinite) chain, by including in the "material cause" the arms and their nerves, then the brain, then the neurons making up the brain, then the molecules making up those neurons, then the atoms making up those molecules, then the electrons and nuclei making up those atoms, then the protons and neutrons making up those nuclei, then the quarks and gluons making up those protons and neutrons... as far as physics goes. Each of those material cogs was moving some other material cogs downstream, resulting eventually in the fingers typing the 100 KB of source code into the computer. Hence we have a "material cause" consisting of the above chain of interacting cogs that produced the 100 KB of information as source code, contradicting Meyer's dictum. Now you can insist that none of that could have happened unless there was a "mind" controlling that whole cogworks somehow. That doesn't help, since we can have a purely material robotic programmer R1, all plain matter, doing the whole programming and typing on the keyboard. Any sequence of steps that a human can do, a suitable robot can in principle replicate. Now you can recycle your first defense, "clarifying" that for R1 to create those 100 KB of information, this robot R1 cannot have been made using a human mind. But we can have a robot R2 making robot R1, which writes and types the chess program. Then, after you narrow Meyer's prohibition further, we can have a robot R3 which makes robot R2 which makes robot R1 which writes and types the chess program....
We are back to the original counterexample and the infinite chain of qualifications, each one narrowing down the subset of "material causes" to which Meyer's prohibition applies, only this time with robots as the "material causes" instead of program-writing programs. That still renders Meyer's prohibition, asserting that "material cause cannot create information", applicable to an empty set of such robotic "material causes" (after the infinitely many qualifications have been added). In short, Meyer's dictum "material cause cannot create information" is false by virtue of direct counterexample.
nightlight
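The regress of generator-behind-generator in this comment can be sketched in a few lines (the R1/R2 names follow the comment; the move list and everything else are illustrative assumptions, not an endorsement of either side): a program R2 emits the source code of a program R1, which in turn emits the final output, so the "creating" step can always be pushed one level back without changing what gets produced.

```python
# Source text of R1, the "chess program" stand-in.
R1_SOURCE = '''\
def r1_output():
    # emit a fixed, deterministic stream of "moves"
    return ["e4", "e5", "Nf3", "Nc6"]
'''

def r2_write_r1():
    """R2: a program whose output is the source code of program R1."""
    return R1_SOURCE

# Run the chain: R2 produces R1's source, R1 produces the moves.
namespace = {}
exec(r2_write_r1(), namespace)   # "build" R1 from R2's output
moves = namespace["r1_output"]() # run R1

print(moves)
```

One could wrap `r2_write_r1` in an `r3_write_r2` in exactly the same way; each added level changes nothing about the final output, which is the point of the regress argument.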
February 28, 2015 at 10:40 PM PDT
With all that being said, and the primacy of quantum information over classical information (and material particles) now being established, I have no qualms with classical information being conserved. In fact, William Dembski and Robert Marks mathematically demonstrated the conservation of classical information,
Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013 Excerpt: The new form, (of conservation of information), is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here). ,,, ,,, Here are the two seminal papers on conservation of information that I've written with Robert Marks: "The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486 "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061 http://www.evolutionnews.org/2013/04/before_theyve_e070821.html
In fact, Classical Information in the cell has now been physically measured and is shown to correlate to the thermodynamics of the cell:
Maxwell's demon demonstration (knowledge of a particle's position) turns information into energy - November 2010 Excerpt: Scientists in Japan are the first to have succeeded in converting information into free energy in an experiment that verifies the "Maxwell demon" thought experiment devised in 1867.,,, In Maxwell’s thought experiment the demon creates a temperature difference simply from information about the gas molecule temperatures and without transferring any energy directly to them.,,, Until now, demonstrating the conversion of information to energy has been elusive, but University of Tokyo physicist Masaki Sano and colleagues have succeeded in demonstrating it in a nano-scale experiment. In a paper published in Nature Physics they describe how they coaxed a Brownian particle to travel upwards on a "spiral-staircase-like" potential energy created by an electric field solely on the basis of information on its location. As the particle traveled up the staircase it gained energy from moving to an area of higher potential, and the team was able to measure precisely how much energy had been converted from information. http://www.physorg.com/news/2010-11-maxwell-demon-energy.html Demonic device converts information to energy - 2010 Excerpt: "This is a beautiful experimental demonstration that information has a thermodynamic content," says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. "This tells us something new about how the laws of thermodynamics work on the microscopic scale," says Jarzynski. http://www.scientificamerican.com/article.cfm?id=demonic-device-converts-inform
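As background to the quoted experiment (standard thermodynamics, not taken from the articles themselves): Landauer's principle gives kT·ln 2 as the minimum energy dissipated when one bit is erased at temperature T, and, symmetrically, the theoretical maximum work a Maxwell-demon-style engine of the kind Sano's team built can extract per bit of positional information. The temperature choice is an assumption for illustration.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact value in the 2019 SI)

def landauer_energy_per_bit(T=300.0):
    """Energy in joules corresponding to one bit of information at T kelvin."""
    return k_B * T * math.log(2)

print(landauer_energy_per_bit())  # roughly 2.87e-21 J per bit near room temperature
```

The tiny magnitude, a few zeptojoules per bit, is why demonstrating information-to-energy conversion required the nano-scale setup described in the excerpt.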
Moreover, Dr. McIntosh, who is Professor of Thermodynamics and Combustion Theory at the University of Leeds, holds that regarding information as independent of energy and matter 'resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions'.
Information and Thermodynamics in Living Systems - Andy C. McIntosh - 2013 Excerpt: ,,, information is in fact non-material and that the coded information systems (such as, but not restricted to the coding of DNA in all living systems) is not defined at all by the biochemistry or physics of the molecules used to store the data. Rather than matter and energy defining the information sitting on the polymers of life, this approach posits that the reverse is in fact the case. Information has its definition outside the matter and energy on which it sits, and furthermore constrains it to operate in a highly non-equilibrium thermodynamic environment. This proposal resolves the thermodynamic issues and invokes the correct paradigm for understanding the vital area of thermodynamic/organisational interactions, which despite the efforts from alternative paradigms has not given a satisfactory explanation of the way information in systems operates.,,, http://www.worldscientific.com/doi/abs/10.1142/9789814508728_0008
Here is a recent video by Dr. Giem, that gets the main points of Dr. McIntosh’s paper over very well, in an easy to understand manner, for the lay person:
Biological Information – Information and Thermodynamics in Living Systems 11-22-2014 by Paul Giem (A. McIntosh) – video https://www.youtube.com/watch?v=IR_r6mFdwQM
Thus, all in all, considering that information, though it may be represented on a material substrate, is non-physical, I have no problem whatsoever with classical information being 'conserved'. In other words, just because we can erase the number 7 off a chalkboard, that doesn't necessarily mean that the number 7 went out of existence! Supplemental notes:
An Interview with David Berlinski - Jonathan Witt Berlinski: There is no argument against religion that is not also an argument against mathematics. Mathematicians are capable of grasping a world of objects that lies beyond space and time …. Interviewer:… Come again(?) … Berlinski: No need to come again: I got to where I was going the first time. The number four, after all, did not come into existence at a particular time, and it is not going to go out of existence at another time. It is neither here nor there. Nonetheless we are in some sense able to grasp the number by a faculty of our minds. Mathematical intuition is utterly mysterious. So for that matter is the fact that mathematical objects such as a Lie Group or a differentiable manifold have the power to interact with elementary particles or accelerating forces. But these are precisely the claims that theologians have always made as well – that human beings are capable by an exercise of their devotional abilities to come to some understanding of the deity; and the deity, although beyond space and time, is capable of interacting with material objects. http://tofspot.blogspot.com/2013/10/found-upon-web-and-reprinted-here.html “One of the things I do in my classes, to get this idea across to students, is I hold up two computer disks. One is loaded with software, and the other one is blank. And I ask them, ‘what is the difference in mass between these two computer disks, as a result of the difference in the information content that they posses’? And of course the answer is, ‘Zero! None! There is no difference as a result of the information. And that’s because information is a mass-less quantity. Now, if information is not a material entity, then how can any materialistic explanation account for its origin? How can any material cause explain it’s origin? And this is the real and fundamental problem that the presence of information in biology has posed. 
It creates a fundamental challenge to the materialistic, evolutionary scenarios because information is a different kind of entity that matter and energy cannot produce. In the nineteenth century we thought that there were two fundamental entities in science; matter, and energy. At the beginning of the twenty first century, we now recognize that there’s a third fundamental entity; and its ‘information’. It’s not reducible to matter. It’s not reducible to energy. But it’s still a very important thing that is real; we buy it, we sell it, we send it down wires. Now, what do we make of the fact, that information is present at the very root of all biological function? In biology, we have matter, we have energy, but we also have this third, very important entity; information. I think the biology of the information age, poses a fundamental challenge to any materialistic approach to the origin of life.” -Dr. Stephen C. Meyer earned his Ph.D. in the History and Philosophy of science from Cambridge University for a dissertation on the history of origin-of-life biology and the methodology of the historical sciences. Intelligent design: Why can't biological information originate through a materialistic process? - Stephen Meyer - video http://www.youtube.com/watch?v=wqiXNxyoof8
Verse and Music:
John 1:1-4 In the beginning was the Word, and the Word was with God, and the Word was God. He was in the beginning with God. All things were made through Him, and without Him nothing was made that was made. In Him was life, and the life was the light of men. Come As You Are (Live) ft. David Crowder - video https://www.youtube.com/watch?v=PE6QXWFL6jY
bornagain77
February 28, 2015 at 05:23 PM PDT