ID and the Science of God: Part I

In response to an earlier post of mine, DaveScot kindly pointed out this website’s definition of ID. The breadth of the definition invites scepticism: ID is defined as the science of design detection — how to recognize patterns arranged by an intelligent cause for a purpose. But is there really some single concept of ‘intelligence’ that informs designs that are generated by biological, human, and possibly even mechanical means? Why would anyone think such a thing in the first place? Yet, it is precisely this prospect that makes ID intellectually challenging – for both supporters and opponents.

It’s interesting that not everything is claimed to be intelligently designed. This keeps the phrase ‘intelligent design’ from simply collapsing into ‘design’ by implying a distinction between the intelligence and that on which it acts to produce design. So, then, what exactly is this ‘intelligence’ that stands apart from matter? Well, the most obvious answer historically is a deity who exists in at least a semi-transcendent state. But how can you get any scientific mileage from that?

Enter theodicy, which literally means (in Greek) ‘divine justice’. It is now a field much reduced from its late 17th century heyday. Theodicy exists today as a boutique topic in philosophy and theology, where it’s limited to asking how God could allow so much evil and suffering in the world. But originally the question was expressed much more broadly to encompass issues that are nowadays more naturally taken up by economics, engineering and systems science – and the areas of biology influenced by them: How does the deity optimise, given what it’s trying to achieve (i.e. ideas) and what it’s got to work with (i.e. matter)? This broader version moves into ID territory, a point that has not escaped the notice of theologians who nowadays talk about theodicy.

A good case in point is Christopher Southgate’s The Groaning of Creation, a comprehensive work written from a theistic evolutionary standpoint. Southgate is uneasy about concepts like ‘irreducible complexity’ for being a little too clear about how God operates in nature. The problem with such clarity, of course, is that the more we think we know the divine modus operandi, the more God’s allowance of suffering and evil looks deliberate, which seems to put divine action at odds with our moral scruples. One way out – which was the way taken by the original theodicists – is to say that to think like God is to see evil and suffering as serving a higher good, as the deity’s primary concern is with the large scale and the long term.

Now, a devout person might complain that this whole way of thinking about God is blasphemous, since it presumes that we can get into the mind of God – and once we do, we find a deity who is not especially loveable, since God seems quite willing to sacrifice his creatures for some higher design principle. Not surprisingly, religious thinkers complained about theodicy from day one. In the book I flagged in my last post, The Best of All Possible Worlds, Steven Nadler portrays the priest Antoine Arnauld as the critical foil of the two duelling theodicists, Nicolas Malebranche and Gottfried Leibniz. Against them, Arnauld repeatedly pointed out that it’s blasphemous to suppose that God operates in what humans recognise as a ‘rational’ fashion. So how, then, could theodicy have acquired such significance among self-avowed Christians in the first place (Malebranche was also a priest) and, more interestingly, how could its mode of argumentation have such long-lasting secular effects, basically in any field concerned with optimisation?

The answer goes back to the question on everyone’s mind here: What constitutes evidence of design? We tend to presume that any evidence of design is, at best, indirect evidence for a designer. But this is not how the original theodicists thought about the matter. They thought we could have direct (albeit perhaps inconclusive) evidence of the designer, too. Why? Well, because the Bible says so. In particular, it says that we humans are created in the image and likeness of God. At the very least, this means that our own and God’s beings overlap in some sense. (For Christians, this is most vividly illustrated in the person of Jesus.) The interesting question, then, is to figure out how much of our own being is divine overlap and how much is simply the regrettable consequence of God’s having to work through material reality to embody the divine ideas ‘in’ – or, put more controversially, ‘as’ — us. Theodicy in its original full-blooded sense took this question as its starting point.

Why did this way of thinking attract such enthusiasm in the late 17th century? Here are four reasons:

(1) The sheer spread of literacy, connected both to the rise of the printing press and the Protestant Reformation (and those two events connected to each other, in terms of who operated the presses), meant that the Bible came to be treated increasingly as instructions for living, as often happens today. So, the claim that we are created in the image and likeness of God was read as a mode of personal address: I am so created. This, of course, broke down the Catholic mode of Christian domination, whereby clerical authorities had modulated the biblical message for the situation at hand – e.g. by telling the faithful to treat certain aspects of the Bible as merely ‘symbolic’ or ‘metaphorical’. Theistic evolutionists routinely resort to this strategy today.

(2) On theological grounds, to deny that we are literally created in the image and likeness of God is itself to court heresy. It comes close to admitting an even worse offence, namely, anthropomorphism. In other words, if we presume that, even in sacred scripture, references to our relationship to God are mere projections, then why take the Bible seriously at all? 19th century secularisation was propelled by just this line of thought, but anti-theodicists like Arnauld who refused to venture into God’s mind could be read that way as well – scepticism masquerading as piety. (Kant also ran into this problem.) In contrast, theodicists appeared to read the Bible as the literal yet fallible word of God. There is scope within Christianity for this middle position because of known problems in crafting the Bible, whose human authorship is never denied (unlike, say, the Qur’an). One extreme result of this mentality was Thomas Jefferson’s attempt to purge the Gospels of all ‘superstitious’ elements, just as a Neo-Darwinist (say, UK geneticist Steve Jones) might re-write Origin of Species to reinstate Darwin’s fundamental principles on a firmer evidence base. To be sure, there is still plenty of room for blasphemy, but at least not for atheism!

(3) Within philosophy, theodicists, despite their disagreements, claimed legitimacy from Descartes, whose ‘cogito ergo sum’ proposed an example of human-divine overlap, namely, humanity’s repetition of how the deity establishes its own existence. After all, creation is necessary only because God originally exists apart from matter, and so needs to make its presence felt in the world through matter. (Isn’t that what the creation stories in Genesis are about?) So too with humans, so Descartes seemed to think. The products of our own re-enactment of divine thought patterns are still discussed in philosophy today as ‘a priori knowledge’. The open question is how much of our knowledge falls under this category, since whatever knowledge we acquire from the senses is clearly tied to our animal natures, which God does not share. But of course, the senses do not operate unadorned. Thus, by distinguishing the sensory and non-sensory aspects of our knowledge, we might infer the reliability of our access to the intelligent designer.

(4) There was also what we now call the ‘Scientific Revolution’, whose calling card was the fruitfulness of mechanical models for fathoming the natural world. A striking case in point was Galileo’s re-fashioning of a toy, the telescope, into an instrument of astronomical discovery. This contributed to the sense that our spontaneous displays of invention and ingenuity also reproduced the divine creative process: We make things that open up the world to understanding and control. This mode of thinking began to take hold in the scientific societies formed around the 18th century’s Industrial Revolution. One such influential society in the British Midlands, the ‘Lunar Society’, has been the subject of a recent popular book by Jenny Uglow.

Theodicy gets off the ground against these four background conditions once a specific mental faculty is proposed as triggering the spark of the divine in the human. This faculty was generally known as intellectual intuition – that is, the capacity to anticipate experience in a systematic and rational fashion. (Here’s a definition of intelligence worth defending.) We would now say the capacity to generate virtual realities that happen to correspond to physical reality, the sort of thing computer simulations do all the time, courtesy of their programmers. In the 17th century, people were especially impressed by the prospect of analytic (aka Cartesian) geometry capturing a rational world-order governed by universal laws of mechanical motion. So far, so good. But clearly something went wrong – what?

Tune in for the next instalment…

