Uncommon Descent Serving The Intelligent Design Community

Why doesn’t the software industry use evolution?


Industry is constantly searching for technologies that maximize profits and minimize costs. The software industry is no exception (the world software market exceeds $300 billion).

Today some computers can process quadrillions of floating-point operations per second (10^15 flops). It would be technically possible to implement on such computers the paradigm of unguided evolution (random variation + selection), obtaining new programs by randomly modifying old ones. So why do software houses pay legions of human programmers to develop applications from scratch when an automatic process could do the job? They could save truckloads of money by automating the software development workflow, at least in large part if not entirely.

To get an idea, let’s perform two simplified calculations comparing the speed of biological evolution (BES) with the speed of computer-aided evolution (CAES).

Biological evolution speed

Consider an initial population of 10^9 bacteria with generation/reproduction time = 40 minutes and mutation rate = 0.003 mutations per genome per generation. We get a biological evolution speed BES = (0.003 x 10^9) / (40 x 60) = 1250 mutations/sec.

Computer aided evolution speed

Consider a single 10^15 flops computer and suppose, for the sake of argument, that one program “mutation” costs the equivalent of 1000 floating-point operations. We get a computer-aided evolution speed CAES = 10^15 / 1000 = 10^12 mutations/sec.

Since, according to Darwin, unguided biological evolution was able to spontaneously produce all 500 million species on earth (from bacteria to man) in 3 billion years (the biological evolution time, BET), computer-aided evolution should be able to automatically produce software containing an equivalent overall amount of functional complex specified information in what we call the “computer-aided evolution time” (CAET). In other words, we posit that the product “speed x time” is equal for biological evolution and for computer-aided evolution:

CAET x CAES = BET x BES

Solving for CAET:

CAET = (BET x BES) / CAES

in numbers:

CAET = (3×10^9 x 1250) / 10^12 = 3.75 years
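The back-of-the-envelope arithmetic above can be checked with a short script (all inputs are the post’s assumed figures, not measured values; the per-second units cancel between the two speeds, so CAET comes out in years):

```python
# All figures are the post's assumptions, not measured values.
pop = 1e9                  # initial bacterial population
gen_time_s = 40 * 60       # generation time: 40 minutes, in seconds
mut_rate = 0.003           # mutations per genome per generation

BES = mut_rate * pop / gen_time_s    # biological evolution speed, mutations/sec
print(BES)                           # 1250.0

flops = 1e15               # computer speed, floating-point ops per second
flops_per_mut = 1000       # assumed cost of one program "mutation"
CAES = flops / flops_per_mut         # computer-aided speed, mutations/sec
print(CAES)                          # 1e12

BET_years = 3e9            # assumed duration of biological evolution
CAET_years = BET_years * BES / CAES  # mutations/sec cancels; result in years
print(CAET_years)                    # 3.75
```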

Evolution applied to software programming would thus produce, in less than 4 years, software equivalent to the organizational information contained in all present and past organisms. So, again: why don’t software houses save billions of dollars in salaries by applying Darwinian evolution to software creation?

My short answer: because Darwinian evolution simply does not work when the goal is to create systems. It is in principle incapable of creating even the smallest system. If it were capable of doing so even a little, software producers would use it. To put it differently: if Charles Darwin was right, Bill Gates would be far richer than he is…

I know in advance the objection that evolutionists could raise. They always deny everything: “it is false that the software industry doesn’t use evolution; in fact there are evolutionary algorithms”, or something like that.

My counter-objection: evolutionary algorithms (EAs) are programs designed to converge by iteration to a particular solution of a very specific problem. Citing EAs to refute my claim that the software industry doesn’t use evolution to create software is as nonsensical as saying that, in mathematics, the iterative methods for finding the approximate root of an equation (such as Newton’s method) could create the whole of mathematics. In other words, EAs are designed routines that can be useful in certain cases for solving very small sub-problems. I wrote “in certain cases” because in other cases EAs fail, and fail spectacularly, exactly as happens – under certain conditions – with iterative methods in math. The bottom line is that EAs are toys that can do nothing about the big picture: the creation of an entire software project from zero. Here the analogy is strict: EAs are unable to create new software applications, just as Darwinian evolution is unable to create new organisms. And software producers know that.
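To make the root-finding analogy concrete, here is a minimal, generic sketch of Newton’s method (illustrative code, not from the post): it converges rapidly, but only to one specific root of one specific equation, and outside its basin of convergence it can fail outright.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x -> x - f(x)/df(x) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Converges in a handful of iterations to sqrt(2), a pre-specified target:
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # 1.4142135623...

# But for f(x) = x**3 - 2*x + 2 starting at x0 = 0 the iteration falls
# into the cycle 0 -> 1 -> 0 and never converges: the "spectacular
# failure" mode mentioned above.
```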

Comments
Software programs would not run if it were not for computer architectures, so it should not be surprising at all that programs run on computers that appear well-designed to run software programs. What shall we call this principle? Suggestions? The macthropic principle?

Mung
October 22, 2013, 04:43 PM PDT
“I solved the problem by using the time stamp as a seed.”

A rather good method to get a better seed is to take a pile of truly random data and add it to the equation along with the timestamp. Such data can be generated relatively simply by collecting mouse movements or static from a radio.

Sebestyen
October 22, 2013, 09:12 AM PDT
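A sketch of what Sebestyen describes, in Python (illustrative, assumed code: `os.urandom` stands in for the “truly random data” a program could instead collect from mouse movements or radio static):

```python
import hashlib
import os
import random
import time

def make_seed():
    """Hash the timestamp together with a pile of OS-provided entropy
    to derive a PRNG seed, as suggested above."""
    material = str(time.time_ns()).encode() + os.urandom(32)
    return int.from_bytes(hashlib.sha256(material).digest(), "big")

rng = random.Random(make_seed())
print(rng.random())  # differs from run to run
```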
How to Simulate Unguided Evolution in a Computer.

ME: I don’t think it’s possible to simulate unguided evolution in a computer. But I think it could be an interesting topic of discussion, if niwrad or Gil or someone else were to start a thread on it. Some people might be amazed at how much intelligent design there has to be to even come close.

Mung
October 22, 2013, 09:02 AM PDT
Mung said: "I don’t think it’s possible to simulate unguided evolution in a computer."

Ironically, when I was writing a blackjack program, I found out that the "random" function wasn't so random. I had mistakenly used the same seed for all the iterations. When I was testing the program, I found that I could predict the sequence of "cards" that came up during play. After troubleshooting the problem, I discovered that I needed to use a different seed every time I called the "random" function. I solved the problem by using the time stamp as a seed. After that, I came to the conclusion that computers cannot achieve pure randomness. This is why you will never see me sitting at a slot machine in Vegas.

alanbrad
October 22, 2013, 07:30 AM PDT
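alanbrad’s bug is easy to reproduce. A small sketch (hypothetical code, not his blackjack program): with a fixed seed the “shuffled” deck comes out identical every time, which is exactly the predictability he noticed.

```python
import random
import time

def deal(seed, n=5):
    """Shuffle a fresh 52-card deck with the given seed and deal n cards."""
    rng = random.Random(seed)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck[:n]

# The bug: reusing one seed makes every hand identical and predictable.
print(deal(1234) == deal(1234))  # True

# The fix he describes: seed from the clock so each call differs.
print(deal(time.time_ns()))
```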
I used to think about this as well. Not exactly as niwrad put it here (which is good), but somewhat related. If a person wanted to start a software company where new software would be developed only on the basis of random mutation and selection, and if that person asked evolutionary biologists to invest in such a company, how many of them would do it?

T. lise
October 21, 2013, 08:58 PM PDT
SirHamster:

"The reason evolution isn’t used in software programming is that at its heart, it’s a brute force approach. Try all combinations, and then toss out the ones that won’t work."

Yeah. This is why Darwinian evolution is a pile of superstitious nonsense. The problem with evolution and materialist OOL theories is that, no matter how good a combination gets, the succeeding random mutations or transformations will destroy it.

Mapou
October 21, 2013, 03:01 PM PDT
I joined this forum just to comment on this, after lurking for years. As a computer programmer I can tell you that without the intelligence of the programmer, no computer program can 'evolve'. If a program were ever able to evolve, it would have had to have been designed to do that in the first place.

alanbrad
October 21, 2013, 01:26 PM PDT
The reason evolution isn't used in software programming is that at its heart, it's a brute force approach. Try all combinations, and then toss out the ones that won't work. Given enough time and resources, it will find the most efficient solution, assuming such a solution exists at all. But given the exponential nature of most real-world computing problems, there is a nigh-infinite number of ineffective and inefficient solutions to blindly stumble upon before you hit the efficient ones.

To make this brute force search work, the search field must be reduced. When done by programmers, this is an example of intelligently guided "evolution". The precision with which solutions are selected is also more an example of artificial breeding than natural selection. And of course, evolution can't apply if it doesn't have something to start with; random bit-flips are not known for their ability to create brand new functional software programs.

SirHamster
October 21, 2013, 11:04 AM PDT
Dr. David Berlinski: Random Mutations (to computer programs?) - video
http://www.youtube.com/watch?v=DGaUEAkqhMY

What Is The Genome? It's Not Junk! (Linux Operating System Compared To Life) - Dr. Robert Carter - video
http://www.metacafe.com/watch/8905583/

Comparing genomes to computer operating systems - Van - May 2010
Excerpt: we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology,,,
http://www.ncbi.nlm.nih.gov/pubmed/20439753

Programming of Life - Biological Computers - Ch. 6
http://www.youtube.com/watch?v=hRooe6ehrPs&feature=c4-overview-vl&list=PLAFDF33F11E2FB840

3-D Structure Of Human Genome: Fractal Globule Architecture Packs Two Meters Of DNA Into Each Cell - Oct. 2009
Excerpt: the information density in the nucleus is trillions of times higher than on a computer chip -- while avoiding the knots and tangles that might interfere with the cell's ability to read its own genome. Moreover, the DNA can easily unfold and refold during gene activation, gene repression, and cell replication.
http://www.sciencedaily.com/releases/2009/10/091008142957.htm

Do you believe Richard Dawkins exists?
Excerpt: DNA is the best information storage mechanism known to man. A single pinhead of DNA contains as much information as could be stored on 2 million two-terabyte hard drives.
http://creation.com/does-dawkins-exist

DNA Computer
Excerpt: DNA computers will work through the use of DNA-based logic gates. These logic gates are very much similar to what is used in our computers today with the only difference being the composition of the input and output signals.,,, With the use of DNA logic gates, a DNA computer the size of a teardrop will be more powerful than today's most powerful supercomputer. A DNA chip less than the size of a dime will have the capacity to perform 10 trillion parallel calculations at one time as well as hold ten terabytes of data. The capacity to perform parallel calculations, much more trillions of parallel calculations, is something silicon-based computers are not able to do. As such, a complex mathematical problem that could take silicon-based computers thousands of years to solve can be done by DNA computers in hours.
http://www.tech-faq.com/dna-computer.html

bornagain77
October 21, 2013, 07:50 AM PDT
"It would be technically possible to implement on such computers the paradigm of unguided evolution (random variation + selection) for obtaining new programs by randomly modifying old programs."

I don't think it's possible to simulate unguided evolution in a computer. But say it is possible. I wonder if dropping a huge rock on the PC would lead to greater diversification of the software that survived.

Mung
October 21, 2013, 07:33 AM PDT
niwrad,

Since a software product's true success is based on end-customer satisfaction, the software company would have to mutate some code, compile it, and see if people like it. This would require operating the software. Of course, manuals to operate the software would not exist, so the customer would have to discover what it's useful for. Unless, maybe, they kept a bunch of programmers off to the side to test each new mutated program and figure it out (of course, this is going back towards intelligent selection, like the Dawkins weasel).

Seems to me the best test of whether a software program can evolve is to make a program that can copy itself many times over with various mutation rates. If any progeny code can out-survive the parent code, then it 'wins' the right to reproduce. Create a virtual world of limited resource code snippets, and see if the replicating software code can make something greater than the sum of its parts. I think this has been done, and I expect that what will happen in the end is that the code will only simplify to smaller code... and that's about the end of it.

This is why, if Darwinian evolution were true, we should expect to find that the only surviving organism is of the simplest possible kind: a single-celled organism, and one that will consume ANY other creature or organic material other than itself... but such a creature doesn't exist. I wonder why. :P Of course, Darwinists might argue 'but it's complicated'. I would agree with them in a way, not because of complications in the explanation, but because of the complicated reasoning for seeking such an explanation.

JGuy
October 21, 2013, 07:12 AM PDT
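A toy sketch of the experiment JGuy describes (all parameters invented for illustration): bit-string "programs" copy themselves with mutation, and selection here charges only for copying cost. With that fitness the population can only shrink toward triviality, which is the outcome he predicts; a meaningful test would need fitness tied to what the code actually does.

```python
import random

rng = random.Random(0)  # fixed seed so the toy run is reproducible

def mutate(genome, rate=0.05):
    """Copy a bit-string with random deletions, flips and duplications."""
    out = []
    for bit in genome:
        r = rng.random()
        if r < rate:              # deletion: skip the bit
            continue
        if r < 2 * rate:          # point mutation: flip the bit
            bit = "0" if bit == "1" else "1"
        elif r < 3 * rate:        # duplication: copy the bit twice
            out.append(bit)
        out.append(bit)
    return "".join(out)

population = ["0110100111010011" * 4] * 20   # twenty 64-bit ancestors
for _ in range(200):
    # Each genome leaves two mutated copies; the 20 cheapest-to-copy
    # (i.e. shortest) survive to the next generation.
    offspring = [mutate(g) for g in population for _ in range(2)]
    population = sorted(offspring, key=len)[:20]

avg_len = sum(len(g) for g in population) / len(population)
print(avg_len)  # shrinks well below the ancestral 64 bits
```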
Thanks niwrad! Definitely a keeper. I've pointed this disparity out to atheists a few times (if Darwinism is true, why can't we design computer 'super' programs with it?) but never had the math as an example to back the disparity up. A few related notes:

"There is absolutely nothing surprising about the results of these (evolutionary) algorithms. The computer is programmed from the outset to converge on the solution. The programmer designed it to do that. What would be surprising is if the program didn't converge on the solution. That would reflect badly on the skill of the programmer. Everything interesting in the output of the program came as a result of the programmer's skill - the information input. There are no mysterious outputs." - Software Engineer, quoted to Stephen Meyer

Applied Darwinism: A New Paper from Bob Marks and His Team, in BIO-Complexity - Doug Axe - 2012
Excerpt: Furthermore, if you dig a bit beyond these papers and look at what kinds of problems this technique (Steiner Tree) is being used for in the engineering world, you quickly find that it is of extremely limited applicability. It works for tasks that are easily accomplished in a huge number of specific ways, but where someone would have to do a lot of mindless fiddling to decide which of these ways is best.,, That's helpful in the sense that we commonly find computers helpful -- they do what we tell them to do very efficiently, without complaining. But in biology we see something altogether different. We see elegant solutions to millions of engineering problems that human ingenuity cannot even begin to solve.
http://www.evolutionnews.org/2012/04/applied_darwini058591.html

Before They've Even Seen Stephen Meyer's New Book, Darwinists Waste No Time in Criticizing Darwin's Doubt - William A. Dembski - April 4, 2013
Excerpt: In the newer approach to conservation of information, the focus is not on drawing design inferences but on understanding search in general and how information facilitates successful search. The focus is therefore not so much on individual probabilities as on probability distributions and how they change as searches incorporate information. My universal probability bound of 1 in 10^150 (a perennial sticking point for Shallit and Felsenstein) therefore becomes irrelevant in the new form of conservation of information, whereas in the earlier it was essential because there a certain probability threshold had to be attained before conservation of information could be said to apply. The new form is more powerful and conceptually elegant. Rather than lead to a design inference, it shows that accounting for the information required for successful search leads to a regress that only intensifies as one backtracks. It therefore suggests an ultimate source of information, which it can reasonably be argued is a designer. I explain all this in a nontechnical way in an article I posted at ENV a few months back titled "Conservation of Information Made Simple" (go here). Here are the two seminal papers on conservation of information that I've written with Robert Marks: "The Search for a Search: Measuring the Information Cost of Higher-Level Search," Journal of Advanced Computational Intelligence and Intelligent Informatics 14(5) (2010): 475-486; "Conservation of Information in Search: Measuring the Cost of Success," IEEE Transactions on Systems, Man and Cybernetics A, Systems & Humans, 5(5) (September 2009): 1051-1061. For other papers that Marks, his students, and I have done to extend the results in these papers, visit the publications page at www.evoinfo.org
http://www.evolutionnews.org/2013/04/before_theyve_e070821.html

LIFE'S CONSERVATION LAW - William Dembski, Robert Marks - pg. 13
Excerpt: Simulations such as Dawkins's WEASEL, Adami's AVIDA, Ray's Tierra, and Schneider's ev appear to support Darwinian evolution, but only for lack of clear accounting practices that track the information smuggled into them.,,, Information does not magically materialize. It can be created by intelligence or it can be shunted around by natural forces. But natural forces, and Darwinian processes in particular, do not create information. Active information enables us to see why this is the case.
http://evoinfo.org/publications/lifes-conservation-law/

Information. What is it? - Robert Marks - lecture video
http://www.youtube.com/watch?v=d7seCcS_gPk

bornagain77
October 21, 2013, 06:52 AM PDT
Every paper/book that has been published on Computational Evolutionary Algorithms should really use the full and correct title for this approach: "Intelligently Designed Evolutionary Algorithms". This would help remove any confusion of this approach with evolution as understood by materialists such as Dawkins, Myers, etc.

steveO
October 21, 2013, 06:49 AM PDT
I'm actually kinda interested in selvaRajan's comment @ 1. To be honest, I don't quite understand the entire thrust of his argument, but there seems to be something useful there that isn't quite the same as other 2nd Law arguments. selvaRajan, can you rephrase your argument, or explain it in another way?

JGuy
October 21, 2013, 06:43 AM PDT
Brilliant. Certainly a little functioning code is a lower target than functioning biological systems.

butifnot
October 21, 2013, 06:09 AM PDT
I agree EAs are specific programming modules, mainly used as efficient search functions. I would also like to show that even energy considerations support Intelligent Design.

Any system - be it biological or physical - tends to move towards a low energy consumption level, so that the total energy used by the system becomes efficient. A random system will always use more energy than an intelligent, directed system, since randomness involves doing the job allocated to the system by chance. Neo-Darwinists claim that random chance can arrange amino acid sequences to produce specific proteins. We can model this randomness as a binomial probability distribution, and envision two types of random attempts in the Neo-Darwinian system:

(a) the organism attempts to improve its chance of success by increasing random attempts without regard to energy expended;
(b) the organism conserves energy by reducing attempts, which leads to a reduced rate of success.

Considering that the system always tries for lower energy consumption, we can ignore scenario (a). Consider scenario (b). Let us denote the probability of success of an arrangement of amino acids by p. The probability of no more than 2 successes per 1000 attempts (which is generous and very high) in a time span is then:

(-1+p)^998 (1+998p+498501p^2)

which simplifies to:

(-1+p)^998 (1+499p(2+999p))

For p = 0.1 the probability is 1.09x10^-42; for p = 0.9 it is a staggeringly low 4.04x10^-993.

Let's denote the energy required for one attempt at arranging an amino acid randomly by e, so for 1000 attempts the energy expended is e x 1000 = 1000e. A directed intelligent system needs only 1 attempt to arrange the proper amino acid sequence. Since the probability of random success is infinitesimal, for all practical purposes the random system expends E = 1000e, where the directed intelligent system needs only the energy e of a single attempt.

Conclusion: a random, chance system of arranging amino acids to produce a protein will expend energy in proportion to the number of attempts the system makes. An organism can conserve energy by reducing random attempts, which invariably reduces the probability of success further, or it can increase the attempts without regard to energy conservation, which would be highly inefficient and unlikely as a viable biological process. In fact the conclusion is so intuitive that we need not have used any math at all.

For reference, the Mathematica input used to derive the above results (any software, or pencil and paper, will do):

Probability[x <= 2, x \[Distributed] BinomialDistribution[1000, p]]
Output: (-1+p)^998 (1+998p+498501p^2)

FullSimplify[(-1+p)^998 (1+998p+498501p^2)]
Output: (-1+p)^998 (1+499p(2+999p))

(-1+p)^998 (1+499p(2+999p)) /. p -> 0.1
(-1+p)^998 (1+499p(2+999p)) /. p -> 0.9

selvaRajan
October 21, 2013, 02:22 AM PDT
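selvaRajan's binomial figures can be verified independently with a short script (an assumed check, computing in log10 space because the p = 0.9 value underflows ordinary floating point):

```python
import math

def log10_binom_pmf(n, k, p):
    """log10 of C(n,k) * p^k * (1-p)^(n-k)."""
    return (math.log10(math.comb(n, k))
            + k * math.log10(p)
            + (n - k) * math.log10(1 - p))

def log10_prob_at_most_2(n, p):
    """log10 of P(X <= 2) for X ~ Binomial(n, p), summed stably."""
    logs = [log10_binom_pmf(n, k, p) for k in range(3)]
    m = max(logs)
    return m + math.log10(sum(10 ** (l - m) for l in logs))

print(10 ** log10_prob_at_most_2(1000, 0.1))  # ~1.09e-42
print(log10_prob_at_most_2(1000, 0.9))        # ~-992.39, i.e. ~4.04e-993
```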
