
Melkikh’s Improbability of Darwinism and deterministic evolution model

It is said that in the USA one can criticize politics but not evolution, while in Russia one can criticize evolution but not politics. The Russian author Alexey Melkikh provides the most spectacular improbability argument against Darwinian evolution that I have seen. He then proposes a mode of evolution without mutation. As some readers have asked for more science-based posts, enjoy. ————-
INTERNAL STRUCTURE OF ELEMENTARY PARTICLE AND POSSIBLE DETERMINISTIC MECHANISM OF BIOLOGICAL EVOLUTION, Alexey V. Melkikh (Ural State Technical University, Molecular Physics Chair), Entropy 2004, 6, 223–232

It was shown that the probability of new species formation by means of random mutations is negligibly small. . . . The problem is that the Darwin mechanism of the evolution (a random process) cannot explain the known rate of the species evolution. In accordance with the very first estimates, the total number of possible combinations of nucleotides in the DNA is about 4^(2×10^9) (because four types of nucleotides are available, while the number of nucleotides in the DNA of higher organisms is about 2×10^9). . . . Thus, finally we have P = 10^(-57000000). This figure is vanishingly small. Therefore, a conclusion may be drawn that species could not be formed due to random mutations.
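As a sanity check on the combinatorial figure quoted above (the final probability also depends on further assumptions in Melkikh's paper that are not reproduced here), the raw count of 4^(2×10^9) possible sequences can be converted to a power of ten; a minimal sketch in Python:

```python
import math

# Figures quoted from the paper: four nucleotide types, and about 2e9
# nucleotides in the DNA of higher organisms.
BASES = 4
GENOME_LENGTH = 2 * 10**9

# 4^(2e9) is far too large to evaluate directly, so work with its
# base-10 logarithm instead: log10(4^n) = n * log10(4).
log10_combinations = GENOME_LENGTH * math.log10(BASES)
print(f"4^(2e9) ~= 10^{log10_combinations:.4g}")  # roughly 10^(1.2e9)
```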

If a molecular machine, which controls the evolution (with reference samples assigned a priori as thermodynamic forces), does not exist, then the Darwin evolution contradicts to the second law, since it represents a macroscopically oriented (from the simple to the complex) fluctuation.

Melkikh then explores a novel concept of evolution without mutation:

The program of such controlled genome changes can be incorporated in internal degrees of freedom of a
particle. Therefore, the algorithm of movement of an organism from one niche to another can be
presented as follows:
1. An organism scans the environment in search for nearest free niches.
2. If niches are found, the organism decides what niche is the most favorable to move to.
3. A step-by-step movement to the nearest niche begins. The space of attributes around the
organism (including the presence of other organisms) is measured each step.
4. The process continues until the organism occupies the wanted niche. After the number of
organisms in the niche reaches a certain value, the transition of other organisms to this
niche stops.
The algorithm will be executed until the control system decides that it is more favorable to
move to another neighboring niche.
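The four-step algorithm above can be made concrete as a toy sketch; everything here (the `Niche` class, `favorability`, `capacity`) is invented for illustration and does not come from the paper:

```python
from dataclasses import dataclass

@dataclass
class Niche:
    name: str
    favorability: float   # how attractive the niche is to the organism
    capacity: int         # step 4: transitions stop at this occupancy
    occupants: int = 0

def find_free_niches(niches):
    """Step 1: scan the environment for niches that are not yet full."""
    return [n for n in niches if n.occupants < n.capacity]

def choose_niche(free):
    """Step 2: decide which free niche is the most favorable."""
    return max(free, key=lambda n: n.favorability) if free else None

def move_to_niche(niches):
    """Steps 3-4: move to the chosen niche and occupy it."""
    target = choose_niche(find_free_niches(niches))
    if target is None:
        return None        # every niche has reached capacity
    target.occupants += 1
    return target

niches = [Niche("A", favorability=0.9, capacity=1),
          Niche("B", favorability=0.5, capacity=2)]
print(move_to_niche(niches).name)   # prints "A", the most favorable free niche
```

A second call would return niche "B", since "A" is then at capacity, mirroring step 4's rule that transitions to a full niche stop.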

See Full Article


26 Responses to Melkikh’s Improbability of Darwinism and deterministic evolution model

  1. WOW! Thanks a lot for posting this. I would comment on the subject but I need time to absorb it first-

  2. Fans of front-loaded evolution take note!

  3. A very similar argument was brought up by a commenter named Timothy Reeves a couple of days ago. The problem with this is that it has never been observed. This version is more far-fetched, since it implies an internal mechanism that knows which way to jump.

  4. Note:

    Entropy, an International and Interdisciplinary Journal of Entropy and Information Studies. ISSN 1099-4300, CODEN: ENTRFG, © 1999-2007 by MDPI. It is a peer-reviewed scientific journal, and it is published online quarterly at http://www.mdpi.org/entropy/.

    This “conclusion . . . that species could not be formed due to random mutations” is officially published in a “peer reviewed” scientific journal!

  5. I’ve been going through this paper. It would appear that Melkikh is approximately proposing that DNA operates like a quantum computer — that the intelligence that we recognize is within the cell. It’s an interesting twist, but it is still absolutely stymied by the OOL equation.

    As far as Entropy being a “peer-reviewed journal” goes, well, it certainly isn’t “one of the major peer-reviewed journals”. Davison has a number of articles published in an obscure “peer-reviewed journal” also. This article is about 4 years old. Though the author claims that the chance of a new species evolving via RM+NS is vastly less than the UPB, he has hardly shaken the world. I wasn’t exactly able to follow his argument, but I don’t think that the scientific establishment is that far out. I actually do not believe that speciation is beyond the ability of RM+NS.

  6. In Leibniz’s Monadology he talks about the difference between man-made art and the art of God, which for me creates a very interesting problem for ID, one that, if described and understood correctly, could lead to an even better understanding of design in nature.

    {DLH – I transferred this discussion to: Leibniz: “machines of nature” >> “all artificial automata” Please respond there. I deleted the posts here}

  7. Yeah, Leibniz is amazing- . . . I don’t think the library is getting these books back! Which is maybe why his works seemed missing to Godel- that is everyone that reads them – keeps them!

  8. {DLH – I transferred this above discussion to: Leibniz: “machines of nature” >> “all artificial automata” Please respond there. I deleted the posts from here.}

  9. Timothy V Reeves

    Hmmm…. my first impressions: Melkikh is using spontaneous-generation probabilities to calculate his species probabilities – he ought at least to explore the possibility of ‘ratchet probabilities’. Yes, it’s back to the old ‘Dawkins slopes’ on Mt Improbable question – I know it’s debatable whether ‘Dawkins slopes’ exist (irreducible complexity denies them), but this concept needs further exploration in this paper, and the reasons for rejecting them should have been given.

    Now let me be frank: when I got to the bit about internal particle complexity, a hoax alert suddenly popped up in my mind – not necessarily valid, but it spooked me. Just watch it! When someone knows that there are people out there who are just waiting to soak it up like blotting paper, they will try it on … remember the Hitler and the Jack the Ripper diaries?

    Look, I’m not saying it is a hoax, but just move carefully. I’m taking a closer look. Perhaps I’m paranoid…

  10. Timothy V Reeves
    I agree on taking the new proposals with a large “grain of salt”. The long and short of it is that any such properties of molecules still do not bridge the distinction between “natural law” (or “order”) and “chance” (“randomness”) on the one hand, versus “Complex Specified Information” (CSI) and the rise of new functions or “Design Information” on the other, per Dembski’s Explanatory Filter.

    That said, I found some refreshing insights into the realities of evolution. Many will point to the wonders of “natural selection”, “gene duplication”, etc., but Melkikh provides some sobering “reality checks” on the probabilities involved, e.g.:

    Whichever self-organization processes, they cannot be pre-oriented to free ecological niches (in terms of the random evolution) and, consequently, cannot accelerate the process of occupying those niches.

    By way of example, we may take a ball of some dimension hitting a target whose position is unknown. Whichever combinations of initial coordinates and speeds of the ball used, the probability of hitting the target will only depend on the ratio between the target and ball areas. From the viewpoint of the theory of random evolution, all genes are equal (no more or less important genes may exist), because all of them appeared by random mutations. In this case, an organism cannot know beforehand which genes it will need in the distant future.

    The following is interesting to ID’ers proposing “front loading”:

    There is no criterion to confirm that a set of nucleotides is the best one in a given situation (Fig.1).

    To negotiate this contradiction, one has to assume the existence of a decision-making machine with reference samples assigned a priori. Put another way, if we know beforehand the location of a target, the probability of hitting this target may increase considerably.

    The following was an interesting insight that could provide basis for faster transformations in genes or proteins.

    Nonradiative transitions take place when the state of internal degrees of freedom is changed (see for example [1,2]).
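Melkikh's ball-and-target analogy quoted above is easy to check numerically: a uniformly random throw hits the target with probability equal to the target's fraction of the total area, regardless of how the throw is parameterized. A minimal Monte Carlo sketch (target size, trial count, and seed are arbitrary choices):

```python
import random

rng = random.Random(0)
trials = 100_000
hits = 0
for _ in range(trials):
    # throw a "ball" (a point) uniformly at the unit square
    x, y = rng.random(), rng.random()
    # target: the square [0, 0.5] x [0, 0.5], covering 1/4 of the area
    if x < 0.5 and y < 0.5:
        hits += 1
print(hits / trials)  # close to 0.25, the target/field area ratio
```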

  11. Timothy V. Reeves:

    I know it’s debatable whether ‘Dawkins slopes’ exist (Irreducible complexity denies them) but this concept, needs further exploration in this paper and the reasons for rejecting them should have been given.

    I think your statement is too boolean. It is well reasonable that Dawkins slopes exist, but that they are not universal. It is only the universality of Dawkins slopes that is challenged by IC. I generally agree with you that not factoring Dawkins slopes into one’s calculations on speciation is unreasonable until such time as we can prove that Dawkins slopes are functionally nonexistent.

  12. bFast
    Good point on distinguishing

    “It is well reasonable that Dawkins slopes exist, but that they are not universal.”

    Consequently I think you later meant to say:
    “until such time as we can prove that functional Dawkins slopes are not universally present.”

    While conceptually correct, can this argument be converted to showing a local probability maximum surrounded by lower “fitness”?

    - rather than having to show a universal negative?

  13. DLH:

    Consequently I think you later meant to say:
    “until such time as we can prove that functional Dawkins slopes are not universally present.”

    Actually, no. I seriously wonder if Dawkins slopes exist at all. I was stating that Dawkins slopes may be sufficient to produce speciation within a genus. I guess the most accurate statement would be “until such time as we can establish to what extent functional Dawkins slopes are present.”

    I still wonder how, in a field of 25,000 genes, a point mutation that produces a microscopic improvement in one of those genes can be selected for by natural selection. There seems to me to be far too much noise (alternate signal). As such, I think that for all but the most significant (beneficial or destructive) mutations, natural selection cannot separate the signal from the noise. If my view is correct, then Dawkins slopes do not exist for anything but mutations that produce a significant improvement within a given environment.
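The signal-versus-noise intuition above has a standard quantitative form in population genetics (this is textbook theory, not from Melkikh's paper): a new mutation whose selection coefficient s is much smaller than 1/(2N) behaves as effectively neutral, because drift swamps selection. A sketch using Kimura's diffusion approximation for the fixation probability of a new mutation:

```python
import math

def fixation_probability(s, N):
    """Kimura's diffusion approximation for a new mutation with selection
    coefficient s in a diploid population of effective size N (initial
    frequency 1/(2N)). For s = 0 this reduces to the neutral value 1/(2N)."""
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
for s in (0.0, 1e-6, 1e-4, 1e-2):
    # tiny s gives roughly the neutral 1/(2N); large s approaches ~2s
    print(f"s={s:g}: P(fixation) = {fixation_probability(s, N):.2e}")
```

With N = 10,000 the neutral baseline is 5e-5, and a mutation with s = 1e-6 fixes at essentially that same rate: selection cannot "see" it, which is the quantitative version of the noise argument.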

  14. bFast

    I think that for all but the most significant (beneficial or destructive) mutations, natural selection cannot separate the signal from the noise.

    That is in essence the heart of John C. Sanford’s Genetic Entropy and the Mystery of the Genome.
    Add to this the published ratio of harmful to beneficial mutations of about one million to one.
    The consequence is progressive accumulation of a genetic “load” degrading function that cannot be eliminated by “natural selection”, and which drowns out all “beneficial” mutations.

    Sanford cites numerous published population models to support this. Demonstrating this will be one of the most powerful pieces of evidence that will sink neoDarwinian evolution.

  15. [...] Gerry Rzeppa started an interesting off topic train of thought on Leibniz and design of “machines of nature” vs “artificial automata” that is worth [...]

  16. I would like to make a lot of comments about this very interesting thread, but I will try to be short.

    The article seems very interesting, although I cannot understand the details of the physics (I’ll try to ask my son). I don’t know if the author is really believable, or if he is just making some big jumps, but some ideas are interesting anyway:

    1) The improbability of speciation is, I think, well calculated. Slopes, selection and similar concepts can do very little to undermine that kind of improbability. Moreover, many of those concepts are just myths or overrated realities.

    2) I find it interesting that the author describes his alternative as “deterministic”, to stress the difference from random Darwinian mechanisms. In reality, what he is describing, as far as I can understand, is some kind of deterministic law at the sub-quantic level, which could allow the deterministic implementation of a program. In that sense, the article is definitely ID. The only difference is that the author assigns the program to some form of “intelligence” in the living beings themselves.

    That’s interesting because, in principle, I have no problems with that. Rather than being interpreted as a form of front-loading, that could be seen as a way for an intelligent principle (the designer) to realize successive adaptations of living beings (speciation) through an intelligent guidance which operates at sub-quantic levels.

    That concept is absolutely fine for me. Indeed, I have often expressed my belief that a better understanding of the laws of physics will give us the key to understand how conscious intelligent principles (designers, including humans) can impart information to matter without violating physical laws. The problem is only that at present we are far from really understanding physical laws. Quantum physics is, in my opinion, only at the beginning, and when we understand better sub-quantum physics, whatever it may be, then we will see…

    We must remember that apparently random molecular events, like mutations, are well in the range of what some form of quantum control could explain.

    Maybe these are quantum logical leaps at present, but I do believe that the interaction between consciousness and matter “will” be observable one day, at least from the side of matter. And that will not probably be at a macroscopic, or anyway conventional, level. Quantum physics, and the physical properties of systems far from equilibrium, will probably have a big role.

  17. Timothy V Reeves

    Bfast said:

    I think your statement is too boolean

    Point conceded!

  18. I wrote some speculation here in 2005 about intelligence residing in cells at the quantum level. Human-designed quantum computing elements can already do a lot of interesting things, and the scale is unbelievably small. IBM did a lot of basic research; I haven’t caught up on it for a couple of years now. I’d seriously consider it under the rule of thumb that whatever human engineers come up with, nature or nature’s designer did first in the machinery of life. That said, even a classical computer architecture, if you can build with atomic precision, can do a lot in a very small space. I think it might have been Drexler in “Engines of Creation” who wrote that you could build an atomic-scale computer out of gears and levers (like Babbage’s computing engine) that could rival a modern desktop (he probably said mini or mainframe back in 1986) and still be too small for the naked eye to see.

  19. Something is wrong here. This paper is no more than somebody’s musings. It is not rigorous in its mathematical reasoning at all.

    I’m on vacation. I hope someone takes the time to investigate here. Be careful. It’s important to know where this paper comes from.

  20. Do a Google search. There’s a Ural State Technical University, but not a Ural State Technical Institute. I think this is someone’s elaborate scam. In the paper cited, he talks about establishing a maximum by evaluating a partial derivative, but he doesn’t carry out the differentiation. His treatment of the Schrödinger equation is absolutely trite and amounts to gibberish. I’d stop this thread until you can confirm the existence of a journal called Entropy. I looked at another supposed paper in Volume 6, and it again is just amateurish drivel. So caution is in order here.

  21. PaV
    I’m not saying it’s rigorous. But it does claim to be peer-reviewed, and it is thought-provoking. Some checks:
    Ural State Technical University
    USTU Web site

    Entropy has been going since 1999.
    Google Scholar lists 341 links to Entropy MDPI.org

    Thought for the day:
    In the first issue Shu-Kun Lin gave this useful Editorial: Diversity and Entropy Entropy 1999, 1[1], 1-3

    1. Any information-theoretic entropy (Shannon’s entropy [2], H) should be defined in a way that its relation with information is clear.
    2. Any theories regarding thermodynamic entropy (classical entropy, S, or the entropy of Clausius, Gibbs, Boltzmann and Planck) should conform with the second law of thermodynamics. For information-theoretic entropy, if one uses entropy and information interchangeably, which has often happened even among some physicists [3], for any well defined system and processes, we cannot make meaningful intellectual discussion [3].

    (Emphasis added) References:
    2. (a) Shannon, C. E. A mathematical theory of communication. Bell Sys. Tech. J. 1948, 27, 379-423; 623-656.
    (b) Claude E. Shannon’s classic 1948 paper [2a] is now available electronically: Shannon, C. E. A Mathematical Theory of Communication ( http://cm.bell-labs.com/cm/ms/.....paper.html ).
    3. (a) Lin, S. -K. Understanding structural stability and process spontaneity based on the rejection of the Gibbs paradox of entropy of mixing. J. Mol. Struc. (Theorochem) 1997, 398, 145-153.
    (b) Lin, S. -K. Gibbs paradox of entropy of mixing: Experimental facts, its rejection, and the theoretical consequences. J. Theoret. Chem. 1996, 1, 135-150. (This paper in pdf format can be downloaded at http://www.mdpi.org/lin/lin-rpu.htm).
    (c) Lin, S. -K. Molecular diversity assessment: Logarithmic relations of information and species diversity and logarithmic relations of entropy and indistinguishability after rejection of Gibbs paradox of entropy of mixing. Molecules 1996, 1, 57-67. (This paper in pdf format can be downloaded at http://www.mdpi.org/lin/lin-rpu.htm ).
    (d) Lin, S. -K. Correlation of entropy with similarity and symmetry. J. Chem. Inf. Comp. Sci. 1996, 36, 367-376.
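For reference on Lin's first point, the information-theoretic quantity at issue is Shannon's H = -sum(p * log2 p), whose relation to information is indeed explicit; a minimal sketch (the example distributions are my own):

```python
import math

def shannon_entropy(probs):
    """Shannon's H = -sum(p * log2(p)), in bits; zero-probability
    outcomes contribute nothing and are skipped."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h + 0.0  # normalize -0.0 to 0.0

print(shannon_entropy([0.25] * 4))  # 2.0 bits: a uniform 4-letter alphabet (A, C, G, T)
print(shannon_entropy([1.0]))       # 0.0 bits: a certain outcome carries no information
```

A uniform choice among the four nucleotides carries exactly 2 bits, which is why sequence-counting arguments like Melkikh's are often restated in bits.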

  22. Here is another paper by Melkikh:

    CAN AN ORGANISM ADAPT ITSELF TO UNFORESEEN CIRCUMSTANCES?

    Alexey V. Melkikh

    In scanning it, he addresses the issue of nurture vs. nature. E.g., Lamarckism, the inheritance of acquired physical traits, has been discredited. However, what of information being passed down orally? I.e., when students are taught, does that help their chance of “survival”? (I expect the teachers’ unions would strongly say yes.)

    So I think it’s worth exploring physical inheritance vs oral transfer from one generation to another.

  23. DLH

    Shu-Kun Lin’s statement that interchanging thermal order with other kinds of order is nonsensical described my immediate reaction to the “canned” response from Darwinists that the earth is an open system. That it’s nonsense to willy-nilly exchange thermal order from the sun for chemical order in living systems should be obvious. It isn’t obvious to them, and it’s frustrating when careful explanation fails to get the concept across.

  24. My favorite phrase from this paper: “we shall have to assume the operation of a demon pimping molecules from one part to another”.

    The algorithm seems to be similar to turning a random sequence of letters into a target sentence, using the desired target sentence as a guide for deciding if intermediate changes are helpful. However, I don’t see how “a complicated internal structure of elementary particles”, or something operating at the quantum level, allows an organism to scan “the environment in search for nearest free niches.” Wouldn’t it need a sample of the target DNA sequence?
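The letters-to-target-sentence procedure described above is essentially Dawkins' "weasel" program; a minimal sketch (the target string, mutation rate, and population size are illustrative choices). Note that the target sentence plays exactly the role of Melkikh's a priori "reference sample":

```python
import random

rng = random.Random(42)
TARGET = "METHINKS IT IS LIKE A WEASEL"   # Dawkins' example target
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    """Copy the string, replacing each character with a random one with
    probability `rate`."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in s)

def score(s):
    """Count of positions matching the target -- the a priori reference
    sample that makes the search fast."""
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(rng.choice(ALPHABET) for _ in TARGET)
generations = 0
while current != TARGET:
    # keep the best-scoring of 100 mutated copies each generation
    current = max((mutate(current) for _ in range(100)), key=score)
    generations += 1
print(generations)  # converges quickly, typically well under a few hundred generations
```

Without the `score` guide the same search would face the full 27^28 space, which is the contrast Melkikh's "decision-making machine with reference samples assigned a priori" is meant to capture.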

  25. drel
    Or could he be assuming that the full sequence of letters is the target – with no “junk” and ignoring the issues of differing probabilities of codes?

    I agree that scanning “the environment in search for nearest free niches” is a bit of a stretch – that requires actually forming that DNA with the “reproductive” consequence.

  26. DLH,

    good questions. Mostly I was wondering how a quantum computer in individual elementary particles, manipulating individual electrons, can perform work on the more macroscopic scale of nucleotides or larger. I think all these computers would need to be networked together to form a distributed system.

    But he doesn’t address the problem of how the “decision-making machine” or “quantum demon” gets replicated, or if it comes pre-installed in every particle, especially those outside the organism (food).

    Now, when the organism grows, maybe the existing quantum computers reprogram the new atoms that come in from the environment? After all, we wouldn’t expect an arbitrary atom to be pre-programmed with this algorithm, would we? I hope there is built-in error checking somewhere. I wonder how the software gets uploaded.

    Or maybe there is only one computer that controls all the individual quantum events in all of its peripherals (particles)? Or maybe one intelligence that controls them with a very special set of dice.
