There’s an article in Science magazine on research into hidden biases in the language that AI systems use. The authors write:
Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes (18). Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.
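The phenomenon the authors describe, a language system absorbing associations from the statistics of its training text, can be illustrated with a toy sketch. The vectors below are made up by hand purely for illustration (real embeddings such as word2vec are learned from large corpora), and the association score loosely mimics the spirit of the word-embedding association tests used in this line of research:

```python
import math

# Hypothetical, hand-made word vectors -- NOT learned from data.
# In a real system these would come from training on a text corpus,
# which is exactly how cultural associations creep in.
vectors = {
    "flower":     [0.9, 0.1, 0.2],
    "insect":     [0.1, 0.9, 0.2],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means the words point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word):
    """How much more strongly a word aligns with 'pleasant'
    than with 'unpleasant' in this toy vector space."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

print(association("flower"))  # positive: leans toward "pleasant"
print(association("insect"))  # negative: leans toward "unpleasant"
```

The point of the sketch is that nothing in the code mentions flowers being pleasant; the preference falls out of the geometry of the vectors, just as in a real system it falls out of the text the system was trained on.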
The ID position is that almost every, if not every, Evolutionary Algorithm (EA) program written with the hope of simulating evolution smuggles in certain information that is essential to the program arriving at any appearance of large amounts of complex, specified information as output.
These “biases” of the AI programmers are “implicit”; they simply ‘creep into’ the programming of AI language. I suspect the same is true of the ‘language of evolution.’ After all, in so many cases, evolution is no more than a collection of “just-so” stories. It isn’t science; it’s a ‘narrative,’ with, no doubt, hidden biases of its own.