
BTB: Induction, falsificationism, scientific paradigms and ID vs Evo Mat


In the Induction thread, we have continued to explore inductive logic, science and ID vs Evolutionary Materialism.

Among the key points raised (with the help of Hilary Putnam) is that while Popper sees himself as opposed to induction, it is arguable that he has in fact, against his intent, brought it back in, once we reckon with the need for trusted theories to be used in practical contexts, and once we explore the implications of corroboration and success “so far” under “severe testing.”

As comment 48 observed:

>> . . . Hilary Putnam [notes, in an article on the Corroboration of theories], regarding Popper’s corroboration and inductive reasoning:

. . . most readers of Popper read his account of corroboration as an account of something like the verification of theories, in spite of his protests. In this sense, Popper has, contre lui [ ~ against his intent] a theory of induction . . . .

Standard ‘inductivist’ accounts of the ‘confirmation’ of scientific theories go somewhat like this: Theory implies prediction (basic sentence, or observation sentence); if prediction is false, theory is falsified; if sufficiently many predictions are true, theory is confirmed. For all his attack on inductivism, Popper’s schema is not so different: Theory implies prediction (basic sentence); if prediction is false, theory is falsified; if sufficiently many predictions are true, and certain further conditions are fulfilled, theory is highly corroborated.

Moreover, this reading of Popper does have certain support. Popper does say that the ‘surviving theory’ is accepted—his account is, therefore, an account of the logic of accepting theories [–> tantamount to inductive support and confident trust in results deemed reliable enough to put to serious work] . . .

Yes, Popper points to the quasi-infinite set of possible theories and declares that the best is the most improbable, most subject to severe testing that survives thus far. But the point is, such theories are routinely seen as empirically reliable and are put to work, being trusted to be “good enough for government work.”>>

However, testing and falsification pose further difficulties, and it is worth highlighting such by headlining comment 50 in the thread:

>>The next “logical” question is how inductive reasoning (modern sense) applies to scientific theories and — HT Lakatos and Kuhn, Feyerabend and Putnam — research programmes.

First, we need to examine the structure of scientific predictions, where:

we have a theory T, plus auxiliary hypotheses AI (including “calibration”) about observations and the required instruments, plus auxiliary statements AM framing and modelling initial, intervening and boundary conditions [in a world model], which together yield predicted or explained observations, P/E:

T + AI + AM –> P/E

We compare observations, O (with AI again acting), to yield explanatory gap, G:

P/E – (O + AI) –> G = g

In an ideally successful or “progressive” theory or paradigm, G will be 0 [zero], but in almost all cases there will be anomalies; scientific theories generally live with an explanatory/predictive deficit, g for convenience. This gives meat to the bones of Lakatos’ pithy observation that theories are born, live and die refuted.
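
{For concreteness, a toy numerical sketch of the schema just stated; the scenario, numbers and Python helper names below are hypothetical, chosen only to make the T + AI + AM –> P/E pattern and the gap g visible:}

# Toy sketch of T + AI + AM --> P/E and the gap g = P/E - (O + AI).
# T: a simple law, here uniform acceleration, s = 0.5 * a * t^2.
# AI: instrument "calibration", here a constant timing offset we correct for.
# AM: world-model statements, here initial position and velocity taken as zero.

def predict(a, t):                    # T + AM: predicted displacement
    return 0.5 * a * t ** 2

def correct_reading(raw_t, offset):   # AI: correct a raw instrument reading
    return raw_t - offset

a_theory = 9.8                        # theory parameter (m/s^2), illustrative
offset = 0.02                         # assumed instrument delay (s), illustrative

# (raw timer reading in s, measured displacement in m) -- invented data
raw_observations = [(1.02, 4.7), (2.02, 19.8), (3.02, 44.9)]

for raw_t, s_obs in raw_observations:
    t = correct_reading(raw_t, offset)
    s_pred = predict(a_theory, t)
    g = s_pred - s_obs                # the explanatory/predictive gap g
    print(f"t={t:.2f} s  predicted={s_pred:.2f}  observed={s_obs:.2f}  g={g:+.2f}")

{In a progressive programme, refinements of T, AI or AM shrink the run of g values; persistent or growing gaps are the anomalies in view below.}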

However, when a new theory better explains persistent anomalies and has some significant success with otherwise unexplained phenomena, and this continues for some time, its champions are able to advance. {Let us insert an infographic:}

[Infographic: sci_abduction]

We then see dominant and perhaps minor schools of thought, with research programmes that coalesce around the key successes. Scope of explanation also counts: a theory T1 may have a wider scope of generally regarded success, while carrying a deficit g1 greater than g2, the deficit of a narrower-scope theory T2.

Investigatory methods, meanwhile, are linked more by family resemblance than by any global, one-size-fits-all-and-only method of Science.

This picture immediately implies that Popper’s criterion of falsification is very hard to apply. First, observations are themselves coloured by instrumental issues (including the eyeball, mark 1, etc.). Second, the key theoretical claims of a given theory Tk are usually not directly predictive or explanatory of observations; they are associated with a world-state model AMk that is generally far less tightly held than Tk. In Lakatos’ terms, we have an armour-belt that protects the core theory.

As a consequence, the battlecruisers at Jutland principle applies.

HMS Invincible explodes at Jutland, after a critical hit

That is, unless there is a key vulnerability of design or of procedures that allows a lucky shot to detonate the magazines deep in the bowels of the research programme, it has to be battered and reduced to sinking condition — it has to become a “degenerative” research programme in competition with advancing “progressive” ones. Which means that a competitor Tm has to have the resources to initiate and sustain that battering while itself being better protected against the counter-barrage.

And when a paradigm and research programme is deeply embedded in cultural power circles and their agendas, it can often dominate technical discussion, lock out controversial alternatives and drive them to the margins. That makes it hard for such alternatives to hold prestigious scholars and attract graduate students. But guerrilla, fringe schools can sometimes find sanctuaries and build up enough of a following that, when a time of crisis emerges, they are positioned to advance.

And so, the succession of scientific theories, paradigms and research programmes is seldom smooth, and is plainly deeply intertwined with institutional and general politics, especially where grant-making is an issue.

This complex, messy picture fits well with the sorts of scientific quarrels that have been a notorious part of the history of modern science. It resonates with the story of Economics over the past century or a bit more, it fits with psychology, it speaks to the ongoing anthropogenic global warming controversy and it speaks straight to the controversies surrounding design thought.

For, ID is a narrow scope paradigm that addresses key persistent anomalies in the cluster of origins theories that fit under evolutionary materialist scientism. However, the dominant paradigm is institutionally and politically much stronger. So, ID is a marginal, often marginalised and even stereotyped and scapegoated, fairly narrow scope school of thought (at least in terms of the guild of scholars). However, it seems to be targeting key vulnerabilities of method and raises a potentially transforming insight: designing intelligence is real, often acts through directing configurations in ways that are complex, fine-tuned and information-rich, and so can be reliably detected when such traces or signs are present.

{Video:}

[youtube aA-FcnLsF1g]

Thus, the inductive challenge posed by ID is that of inference to the best current explanation on empirically grounded, reliable signs, backed up by analysis of the search-resource challenge facing blind search at solar system or observed cosmos scale in very large configuration spaces.

This is a powerful point, and one unanswered; likely, one that cannot be answered on evolutionary materialistic scientism. But that does not prevent institutional power from holding off the threat for a long time.

However, eventually, there will be a tipping point.

Which may be nearer than we think.

Walker and Davies, in a recent article, hint at just how near this may be:

In physics, particularly in statistical mechanics, we base many of our calculations on the assumption of metric transitivity, which asserts that a system’s trajectory will eventually [–> given “enough time and search resources”] explore the entirety of its state space – thus everything that is physically possible will eventually happen. It should then be trivially true that one could choose an arbitrary “final state” (e.g., a living organism) and “explain” it by evolving the system backwards in time choosing an appropriate state at some ’start’ time t_0 (fine-tuning the initial state). In the case of a chaotic system the initial state must be specified to arbitrarily high precision. But this account amounts to no more than saying that the world is as it is because it was as it was, and our current narrative therefore scarcely constitutes an explanation in the true scientific sense.

We are left in a bit of a conundrum with respect to the problem of specifying the initial conditions necessary to explain our world. A key point is that if we require specialness in our initial state (such that we observe the current state of the world and not any other state) metric transitivity cannot hold true, as it blurs any dependency on initial conditions – that is, it makes little sense for us to single out any particular state as special by calling it the ’initial’ state. If we instead relax the assumption of metric transitivity (which seems more realistic for many real world physical systems – including life), then our phase space will consist of isolated pocket regions [–> islands . . . ] and it is not necessarily possible to get to any other physically possible state (see e.g. Fig. 1 for a cellular automata example).

[Figure: W-D_island_phase_space]

[–> or, there may not be “enough” time and/or resources for the relevant exploration, i.e. we see the 500 – 1,000 bit complexity threshold at work vs 10^57 – 10^80 atoms with fast rxn rates at about 10^-13 to 10^-15 s leading to inability to explore more than a vanishingly small fraction on the gamut of Sol system or observed cosmos . . . the only actually, credibly observed cosmos] {Search challenge:}

[Infographic: islands_of_func_chall]
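
{A rough, back-of-envelope sketch of that search-fraction estimate, using the round figures just cited; all values are order-of-magnitude placeholders, for illustration only:}

# Back-of-envelope estimate: what fraction of a 500-bit configuration space
# could be sampled by ~10^57 atoms, each changing state every ~10^-14 s,
# over ~10^17 s (order of the age of the observed cosmos)?

atoms = 10**57            # rough atom count, solar system
state_time = 1e-14        # s per state change (between 10^-13 and 10^-15)
duration = 1e17           # s available

states_sampled = atoms * (duration / state_time)   # ~10^88 samples
space_size = 2**500                                 # configurations in 500 bits

print(f"samples  ~ {states_sampled:.1e}")
print(f"space    ~ {space_size:.1e}")
print(f"fraction ~ {states_sampled / space_size:.1e}")   # vanishingly small

{Even on these generous assumptions the fraction explored is of order 10^-63, which is the force of the islands-of-function search challenge.}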

Thus the initial state must be tuned to be in the region of phase space in which we find ourselves [–> notice, fine tuning], and there are regions of the configuration space our physical universe would be excluded from accessing, even if those states may be equally consistent and permissible under the microscopic laws of physics (starting from a different initial state). Thus according to the standard picture, we require special initial conditions to explain the complexity of the world, but also have a sense that we should not be on a particularly special trajectory to get here (or anywhere else) as it would be a sign of fine–tuning of the initial conditions. [ –> notice, the “loading”] Stated most simply, a potential problem with the way we currently formulate physics is that you can’t necessarily get everywhere from anywhere (see Walker [31] for discussion). [“The “Hard Problem” of Life,” June 23, 2016, a discussion by Sara Imari Walker and Paul C.W. Davies at Arxiv.]
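
{To make “you can’t necessarily get everywhere from anywhere” concrete, here is a toy sketch of a state space whose allowed transitions split it into two isolated pockets; the states and moves are invented purely for illustration:}

# Toy state space: 8 states whose allowed transitions form two isolated
# "pockets". Starting from state 0, states 4-7 are never reachable,
# however long the trajectory runs -- metric transitivity fails.

from collections import deque

moves = {
    0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2],   # pocket A
    4: [5, 6], 5: [4, 7], 6: [4, 7], 7: [5, 6],   # pocket B
}

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in moves[state]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(0)))   # [0, 1, 2, 3] -- pocket B is never visited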

We live in interesting times.>>

Clearly, the issues of inductive logic, reasoning and science are pivotal to understanding the key design inference on inductive signs. The wider picture of how paradigms and research programmes rise and fall then sets that into a wider context that moves beyond simplistic falsificationism.

And these, we urgently need to discuss together on all sides of the design debates. END

Comments
F/N: It is worth the while to continue, highlighting the significance of diversity of schools of thought as enhancing the robustness of a discipline. In the OP, the concept was raised that research programmes as a rule have an explanatory gap, g, which of course is the focus of ongoing research, termed "puzzle-solving" by Kuhn . . . though if a crisis emerges, a more radical approach comes into play. Clipping:
The next “logical” question is how inductive reasoning (modern sense) applies to scientific theories and — HT Lakatos and Kuhn, Feyerabend and Putnam — research programmes. First, we need to examine the structure of scientific predictions, where: we have theory T + auxiliary hypotheses (and “calibration”) about observation and required instruments etc AI + auxiliary statements framing and modelling initial, intervening and boundary conditions [in a world model], AM, to yield predicted or explained observations, P/E: T + AI + AM –> P/E We compare observations, O (with AI again acting), to yield explanatory gap, G: P/E – (O + AI) –> G = g In an ideally successful or “progressive” theory or paradigm, G will be 0 [zero], but in almost all cases there will be anomalies; scientific theories generally live with an explanatory/predictive deficit, g for convenience. This gives meat to the bones of Lakatos’ pithy observation that theories are born, live and die refuted. However, when a new theory better explains persistent anomalies and has some significant success with otherwise unexplained phenomena, and this occurs for some time, this allows its champions to advance.
A progressive research programme will have a successful paradigm at its core and will be continually addressing gaps successfully, but of course such gaps are often like rabbits, they multiply. Some gaps will be persistent, and intractable within a given paradigm -- classically, the number of epicycles in the Ptolemaic system kept on growing until the cumbersomeness sent the message, something is not right. This led to the Copernican and Tycho Brahe proposals, where at the beginning of C17, Brahe seemed to have the best empirical power of explanation, there being a dominant theme that celestial motions must be built up from circles, a perfect figure. It was Kepler's work that led to the elliptical orbits view, which Newton provided an explanation for, and which then moved on to perturbations and chaotic unstable orbits due to multiple interacting bodies. (The current view is that a stable solar system is a bit of a challenge.) And along the way, the issue of relativity also emerged, shifting our concept of what gravitation is, and also explaining anomalies such as Mercury's orbit. But, the progress on the ground at any time is never as smooth as such a broad-brush summary suggests.

This is where there emerges the somewhat paradoxical point that the robustness of a discipline or field of study as a whole is evidently enhanced by having rival schools of thought, with diverse paradigms. That is, it is important to keep the fringes going, as that is where the future may well come from. This is also consistent with the observation that quite often, big breakthroughs come from young scholars in their mid-twenties. Experienced enough to understand the current state of the art, but not yet locked into it. Thus, able to see with fresh eyes and thus to lead in new directions. Newton, Maxwell and Einstein are classic examples.

A discipline that refuses to acknowledge this and becomes ideologically loaded and domineering, marginalising dissident schools of thought unduly, is degenerative and regressive. (The history of economics over the past 100+ years is unfortunately replete with notorious cases in point. And yes, social sciences too can be of reasonably scientific character, though they have their peculiarities. Also, yes, history counts in science, real histories not victory propaganda put out by dominant schools.) In science, given the inherent challenges of inductive reasoning and the role of explanatory gaps, diversity of schools of thought is an advantage. And, it is reasonable to expect that science education should reflect that, as well as the debates among the guild of scholars. When marginalisation and polarisation rather than responsible dialogue creep in, that is not a healthy sign.

There is a worse case. Sometimes, schools of thought that are dominant become locked into ideological, policy and worldview agendas that are mutually reinforcing. Marxism is of course a classic. So is Eugenics. (On a local scale, the debates over the volcanology and policy implications of degree of ignorance and risk took on some facets of this character from 1995 - 2005. Indeed, it was a public dissent by a researcher in 2005 a month before a 10th anniversary conference that allowed the public to understand that things were not so smooth as had been presented. He predicted that we would soon see signs of activity and indeed the pause ceased just in time for the volcano to provide fireworks for the Conference.
Which, was held in the midst of ash in a hotel near the mouth of the Belham Valley that would later be shut down due to the risk. And, there was sharp debate when a popular talk show at the time presented an interview with a visiting professor on the public's need and right to know the frank truth; including the truth about the degree of uncertainty and risk. A decade later, many issues still lurk but there is a much clearer sense of the degree of ignorance we face. For example, is there a 20 - 30+ year cycle of deep pulses of magma into the system, leading to surges 1896 - 7, 1933 - 37, 1966 - 7, 1992 - 5 [which broke through to eruptions with aperiodic pulses and periodic pulses on all sorts of time scales]? If so, are we now approaching the window for the next deep surge, and will that lead to fresh, strong eruptive activity? Of course, this stuff by and large is not emphasised in the public discussion. I should add, the northern, inhabited third of this island continues to be a viable community. Personally, I do not go south of the line of the Nantes river for any extended period, save for very good reason. Though, the buried town of Plymouth is also well worth the visit . . . a modern Pompeii in the midst of the Caribbean's yachting highway from the Virgin Islands to the Grenadines.

Also, yes, the thinking in the OP was worked out over the course of many years in the context of this very live example of limitations of scientific, inductive thought. Where, the issues over economics were an earlier domain for contemplation. When an explanatory, analytical pattern is powerfully unifying across quite diverse fields that are not directly linked as intellectual movements, that enhances confidence in its ability to be empirically robust and perhaps even to give us a glimpse of the pot of gold at the end of the rainbow, truth.)

Arguably, today, the issue of anthropogenic climate change has challenges like this. Similarly, so does origins science, especially as regards the origin and body plan level diversity of life. (And it is therefore no surprise to see that these tend to go together.) In such an overly ideological climate for science, polarisation and marginalisation of minor schools of thought become a material issue. And, as a result also, worldview level issues regarding the structure of science and the assumptions of a key school of thought become material.

In this context, we would be well advised to reassess this now notorious remark by Richard Lewontin (which I mark up to highlight key issues):
. . . to put a correct view of the universe into people's heads [==> as in, "we" have cornered the market on truth, warrant and knowledge] we must first get an incorrect view out [--> as in, if you disagree with "us" of the secularist elite you are wrong, irrational and so dangerous you must be stopped, even at the price of manipulative indoctrination of hoi polloi] . . . the problem is to get them [= hoi polloi] to reject irrational and supernatural explanations of the world, the demons that exist only in their imaginations,
[ --> as in, to think in terms of ethical theism is to be delusional, justifying "our" elitist and establishment-controlling interventions of power to "fix" the widespread mental disease]
and to accept a social and intellectual apparatus, Science, as the only begetter of truth
[--> NB: this is a knowledge claim about knowledge and its possible sources, i.e. it is a claim in philosophy not science; it is thus self-refuting]
. . . . To Sagan, as to all but a few other scientists [--> "we" are the dominant elites], it is self-evident
[--> actually, science and its knowledge claims are plainly not immediately and necessarily true on pain of absurdity, to one who understands them; this is another logical error, begging the question, confused for real self-evidence; whereby a claim shows itself not just true but true on pain of patent absurdity if one tries to deny it . . . and in fact it is evolutionary materialism that is readily shown to be self-refuting]
that the practices of science provide the surest method of putting us in contact with physical reality [--> = all of reality to the evolutionary materialist], and that, in contrast, the demon-haunted world rests on a set of beliefs and behaviors that fail every reasonable test [--> i.e. an assertion that tellingly reveals a hostile mindset, not a warranted claim] . . . . It is not that the methods and institutions of science somehow compel us [= the evo-mat establishment] to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes [--> another major begging of the question . . . ] to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute [--> i.e. here we see the fallacious, indoctrinated, ideological, closed mind . . . ], for we cannot allow a Divine Foot in the door . . . [--> irreconcilable hostility to ethical theism, already caricatured as believing delusionally in imaginary demons]. [Lewontin, Billions and Billions of Demons, NYRB Jan 1997, cf. here. And, if you imagine this is "quote-mined" I invite you to read the fuller annotated citation here.]
Philip Johnson's retort that November was well warranted:
For scientific materialists the materialism comes first; the science comes thereafter. [Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence.
[--> notice, the power of an undisclosed, question-begging, controlling assumption . . . often put up as if it were a mere reasonable methodological constraint; emphasis added. Let us note how Rational Wiki, so-called, presents it:
"Methodological naturalism is the label for the required assumption of philosophical naturalism when working with the scientific method. Methodological naturalists limit their scientific research to the study of natural causes, because any attempts to define causal relationships with the supernatural are never fruitful, and result in the creation of scientific "dead ends" and God of the gaps-type hypotheses."
Of course, this ideological imposition on science that subverts it from freely seeking the empirically, observationally anchored truth about our world pivots on the deception of side-stepping the obvious fact since Plato in The Laws Bk X, that there is a second, readily empirically testable and observable alternative to "natural vs [the suspect] supernatural." Namely, blind chance and/or mechanical necessity [= the natural] vs the ART-ificial, the latter acting by evident intelligently directed configuration. [Cf Plantinga's reply here and here.] And as for the god of the gaps canard, the issue is, inference to best explanation across competing live option candidates. If chance and necessity is a candidate, so is intelligence acting by art through design. And it is not an appeal to ever-diminishing ignorance to point out that design, rooted in intelligent action, routinely configures systems exhibiting functionally specific, often fine tuned complex organisation and associated information. Nor, that it is the only observed cause of such, nor that the search challenge of our observed cosmos makes it maximally implausible that blind chance and/or mechanical necessity can account for such.]
That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [Emphasis added.] [The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
It is high time for fresh thinking and a healthier approach. KF
kairosfocus
August 13, 2016 at 03:29 AM PDT
PS: The foundational nature of this thread's concerns comes up in a following thread, where I commented a bit ago as follows:
https://uncommondescent.com/intelligent-design/aeon-puts-case-squarely-must-science-be-testable/#comment-614743

Empirical evidence must have a due voice in sciences connected to the physical world; including an ultimate veto. Whether the issue is adequacy of explanations or predicting further observations, or being open to some degree of testing, that is vital. However, that brings to bear inductive logic and we must also be aware of limitations of testability, falsifiability and more, post Popper. Sometimes, there just is not a state of the art ready to test a theory, e.g. it was a century after Newton that the Gravitational Constant could be directly assessed through Cavendish’s torsion balance experiment. In Mathematics — though not an empirical discipline as such — it took a full 200 – 300 years to be seriously addressing foundations of Calculus.

And maybe, that is where some of the more complex bits of physics are headed, effectively mathematical and computational modelling linked to the actual physics proper. On which, we will have to recognise the GIGO issue. But, the point is, it may be worth the high risk, high potential payoff investment as Physics has a demonstrated track record of opening up key technologies and energy sources. But, we must understand that we cannot properly present highly speculative modelling with the same air of confidence we give to something like, the local gravity field strength is 9.8 N/kg. Model away, but do not pretend that this is unshakeable gospel truth.
kairosfocus
August 10, 2016 at 10:17 AM PDT
JDK, pardon but I spoke to this particular context, a current thread. So, yes you did speak to accepting induction, but there is more and there was more to the matter.

In this context, I am here speaking to limitations of theories, models, explanations, generalisations, analogies etc as modes or components of inductive reasoning relevant to science and particularly origins science and the design inference; and in that context, I simply commented on an objection by Seversky in 5 above; to the effect that there are several objectors to the design inference present in and around UD, but not engaging this particular thread which is explicitly foundational, highlighting inductive reasoning. You will see a discussion that builds a framework for discussing induction and its strengths and limits, above.

It is in that context that there is a matter of the question of inductive inference to best current explanation on tested, reliable sign, that has capacity to speak beyond demands to have independent knowledge of or "assumptions" about specific candidate designers. Which, came up from you and/or your circle in previous threads; I add, came up as shifting the design inference from a matter of empirical investigation and inductive reasoning warranted to be termed scientific knowledge, to a philosophical discourse . . . with onward hints of ideology or theology or anti-theology. In addition, this thread addresses specific issues raised by Popper et al -- matters you asked about also, in ways that came across as suggesting that my summary that there were objections to induction itself was unwarranted as I was not linking specific past threads, just giving a general reference from memory. KF
kairosfocus
August 10, 2016 at 08:46 AM PDT
kf writes,
It seems clear that you do not wish to address the logical-empirical issues that are at the core of the design inference
No, that is not clear. In fact, I've agreed with you on much of what you've said about the nature of induction. In the induction thread, comment 12, I wrote:
kf writes, "But on balance, inductive reasoning is absolutely important, legitimate and reasonable." Yes, I totally agree with that, and believe that it is a foundational principle (as stated in the 2006 KS Science Standards) of how science is understood to be practiced. So that is settled as far as I am concerned.
And here's part of comment 27:
kf writes,
In such an ordered world, we expect to find predictable patterns rather than utterly unintelligible circumstances. Then, we can identify cause-effect bonds through studying cases in point of sufficient number to gain some confidence and infer, like causes like. Then, we explore, observe, experiment with circumstances and entities. We propose ordering principles and integrate into explanatory frameworks, which we test for empirical reliability. … Inductive logic then turns on the concept that we can recognise patterns; and, presumes that we can expect such in an orderly world. One, where even chance stochastic processes are lawlike, as statistical distributions tell us. This is one way in which observations support conclusions, and where predictive power is a sign of somehow approaching a genuine insight, or at least a reliable one. But there is never a guarantee of truth beyond dispute or doubt. … PS: Let me add, that scientific facts of observation may be morally certain as true, but on the challenge of the inherent open-ended provisionality of explanatory inferences, no scientific theory can properly claim to be true beyond doubt or even beyond room for reasonable doubt; what such can properly claim is empirical reliability and high confidence in a zone of trustworthiness grounded in substantial testing.
Yes, those are all true things about our world and about the scientific process.
So I have addressed the logico-empirical issues, and have mostly agreed with you about the role of induction in science. It would be nice if you acknowledged this.
jdk
August 10, 2016 at 07:40 AM PDT
JDK, thanks for your thoughts. It seems clear that you do not wish to address the logical-empirical issues that are at the core of the design inference. Per fair comment, unless these issues are properly grasped, one is in no condition to properly assess the merits of the design inference; which is an inference to best current empirically grounded explanation on signs. Insofar as design thought in science pivots on that induction, it is largely independent of metaphysical issues; indeed, on warrant, it is prior to such -- empirical evidence must be allowed to speak for itself in its own voice. Refusal to address the foundational logical-empirical issues therefore inappropriately undermines ability to come to well grounded conclusions that tested reliable signs pointing to intelligently directed configuration as material causal factor would otherwise strongly support. In short, it seems some serious questions are being unfortunately begged. KF

PS: I draw attention to an "etc." The point is, Seversky claimed we are banned. He is not, and enough others are not that were there interest to deal with a foundational matter, the opportunity is there. And the thread above clearly indicates that a discussion would most likely not be an exchange with one individual.
kairosfocus
August 10, 2016 at 06:14 AM PDT
Since you invoked my name, I'll clarify: I am very interested in the nature of science; I have no interest in Ken Ham and YECism in general; I have some interest in the metaphysical aspects of the design inference but little interest in other aspects; and I've explained several times why I especially am not interested in discussing these things with you.
jdk
August 10, 2016 at 05:39 AM PDT
PS: In answer to Seversky at 5, the recent comments widget clearly shows that there are and have been more than enough objectors to the design inference around; starting with Seversky but including: JDK, GUN, B'OH and doubtless many more were they interested. However, they seem to be more focussed on debating a recently opened Noah's Ark display by Ken Ham etc. than on discussing foundations of inductive reasoning and how such relate to science and the design inference.
kairosfocus
August 10, 2016 at 05:10 AM PDT
So far, we have put inductive reasoning under the microscope, highlighting how this leads to strengths, weaknesses and limitations of scientific theorising. In particular, we have seen that falsification is usually hard to achieve for a theory, save where there is a critical vulnerability. The role of auxiliary hypotheses about the state of the world and that of hypotheses regarding observations and instruments also affect results. It is also clear that theories normally have explanatory gaps, and the issue is whether these are simply puzzles to be solved or are anomalies better explained by alternatives, or are so hard that we cannot currently tackle such. (A good example from Mathematics is how it took 300 years to resolve foundations of calculus issues.) Onward, it is time to look at limitations of origins theories and at a critical vulnerability of the dominant evo mat school of thought, one embedded in its insistence on methodological naturalism.
kairosfocus
August 10, 2016 at 05:04 AM PDT
F/N: And yes, this is in part a bit of a tutorial on inductive reasoning. I think we need to ponder this domain of reasoning as a basis for specifically thinking about origins science and the design inference. KF
kairosfocus
August 9, 2016 at 08:48 AM PDT
F/N: What about analogy and induction? The principle of analogy is, in effect: if X walks like a duck, looks like a duck, quacks like a duck, it is a duck [or is "close"] and will continue to behave like such regarding A, B, C etc. Where, the emphasis is on extending the pattern of properties from observed cases to a partially observed or anticipated one. We can view this as a simple case of inductive generalisation with the generalisation implicit, or as an evident pattern-based prediction. Alternatively, we can view inductive reasoning as strengthening analogies by setting up barriers to weak forms. Note SEP on Analogy and reasoning by analogy -- contra, the tendency to sweep analogies aside as though they are inevitably fallacies or else utterly weak and beneath the notice of a "serious" thinker:
http://plato.stanford.edu/entries/reasoning-analogy/

An analogy is a comparison between two objects, or systems of objects, that highlights respects in which they are thought to be similar. Analogical reasoning is any type of thinking that relies upon an analogy. An analogical argument is an explicit representation of a form of analogical reasoning that cites accepted similarities between two systems to support the conclusion that some further similarity exists. In general (but not always), such arguments belong in the category of inductive reasoning, since their conclusions do not follow with certainty but are only supported with varying degrees of strength. Here, ‘inductive reasoning’ is used in a broad sense that includes all inferential processes that “expand knowledge in the face of uncertainty” (Holland et al. 1986: 1), including abductive inference . . .
Of course, many models, mathematical, computational and physical, are analogues for relevant aspects of systems; that is, a lot of modelling and simulation relies on analogies. (Way back, loudspeaker design was a good case in point, with electric circuit analogies. And, we have the domain of analogue computers.) Reasoning by analogy must be practiced with due caution but should not be despised. It is a core part of the inductive reasoning toolbox. KF
kairosfocus
August 9, 2016 at 08:46 AM PDT
F/N: I should note on traditional inductive generalisation i/l/o the above model of inductive frameworks. In effect, consider a set of relevant initial observations IO using instruments (including eyeball mark I etc) yielding an apparent consistent pattern, CP:

(IO + AI) --> CP

Hyp, set "theory" T = CP. We have theory T + auxiliary hypotheses (and "calibration") about observation and required instruments etc AI + auxiliary statements framing and modelling initial, intervening and boundary conditions [in a world model], AM, to yield predicted or explained observations, P/E:

(T = CP) + AI + AM --> P/E

Where, AM, the auxiliary world model, here emphasises the credit we give to the concept of an orderly, stable world such that a reasonable cross section of observations is plausibly able to capture a pattern, the behaviour of the population. We compare further observations, FO (with AI again acting), to yield explanatory gap, G = g:

P/E - (FO + AI) --> G = g

That is, inductive generalisation fits into the model of inductive theorising, and emphasises the thesis of an orderly, stable world. Gaps still obtain as an observed pattern may be part of a wider pattern. Seeing all white swans so far does not establish the impossibility of black ones. This also shows that traditional induction and abductive reasoning are closely related. KF

PS: Notice how AI appears "everywhere," so instrument biases can be a serious common bias, distortion or delusion. Which, we will not readily spot because such is in common.
kairosfocus
August 9, 2016 at 07:32 AM PDT
AYP, great to hear from you as always. It was enjoyable to look at several articles in your WP blog. And yes, your year-long interaction with NCSE is sadly illuminating. For instance, look at the rhetorical tactics deeply embedded in Izaak's:
Since design is the only explanation they can imagine, they naturally consider it the best explanation. To this extent, “looks designed” is just an argument from ignorance. But many creationists further claim that this appearance of design is objective, can be (and, some say, has been) demonstrated scientifically, and therefore is suitable for teaching in public schools (for example, Dembski 2001a). The little evidence they present, though, is maddeningly vague. In most cases, the supposed evidence for design consists simply of pointing to various examples from natural history and saying, “Look, can’t you see it?” Typically, this is accompanied by the usual creationist attacks on evolution and the claim, implicit or explicit, that design is the only alternative . . . . A computer is actually a fairly simple arrangement of components — CPU, memory, various peripherals, and wires connecting them — with fairly simple interfaces among the components. Each of the components, in turn, is a simple arrangement of sub-components, which may themselves consist of smaller sub-components, and so on until the simplest level is reached. In this way, each component, at whatever level, can be treated as a separate, almost independent unit, making it relatively easy to understand
1 --> Design is not the only explanation we can IMAGINE, it is the only empirically, observationally warranted explanation per "causes now in operation" that accounts for the phenomenon FSCO/I.
2 --> Accordingly, speculative imagination on imagined powers of blind chance and mechanical necessity is being allowed to impose itself without empirical restraint, and is being enforced by the evolutionary materialist magisterium that NCSE is the bulldog for.
3 --> "Looks designed" is a strawman caricature, and one sustained year after year in defiance of duties of care to truth and fairness, utterly undermining the credibility of NCSE and its champions.
4 --> The conflation of design thinkers with Creationism is then setting up a straw bogeyman joined to prejudice and even bigotry expressed in stereotyping and scapegoating.
5 --> It is directly false that DI and Dembski advocate teaching design theory in the classrooms, as opposed to calling for a sounder exploration of the strengths, limitations and weaknesses of evolutionary theories, and freedom of opinion in discussion. Instead of the typical evolutionary materialistic scientism based indoctrination that so often obtains.
6 --> Functionally specific, complex organisation and/or associated information is neither vague nor hard to find, it is pervasive in the world of life.
7 --> Inference to best current, empirically warranted explanation is sound inductive reasoning, and is opposed by an established scheme that lacks first level empirical warrant.
8 --> To wit, kindly show us a case of FSCO/I beyond 500 - 1,000 bits arising in our observation per blind chance and mechanical necessity: ____________ (Even comment posts in this thread illustrate FSCO/I by design, it is THAT pervasive.)
9 --> As the OP above demonstrates, it is entirely in order to highlight the persistent explanatory gaps of a theory as a means of showing that it is in a process of degeneration, so "how dare you point out our weaknesses" is a demand for privileges not in accord with sound science.
10 --> As for alternatives, from the days of Plato in The Laws Bk X, it has been well known that we can speak of mechanical necessity, blind chance and intelligently directed configuration by ART as three key causal factors. (Thus the rhetorical "natural vs suspect god of the gaps supernatural" dichotomy is a strawman fallacy.)
11 --> The appropriate comparison for empirical investigations is natural [ = blind mechanical necessity and/or blind chance] vs the ART-ificial [= design based on purposeful and skilled intelligence], with appropriate empirically observable, tested and reliable signs. Of which FSCO/I and linked fine tuning are key exemplars.
12 --> The characterisation of a PC as simple and modular [i.e. showing layers of nested complex organisation . . . ] is not only incoherent but it exposes a mentality that refuses to acknowledge what functionally specific complex organisation is. You can start with the Billion dollar Fab required to create the core chips of the PC and the century plus of research behind it. KF
kairosfocus
August 8, 2016 at 01:15 AM PDT
Mung: Thanks for pithily summarising typical objections. (Of course, the primary focus above is the logic of induction and science, moving on beyond naive falsificationism. The battlecruisers at Jutland model is actually also an historical exemplar of how paradigms are embedded in institutions and technologies, leaving key strengths and weaknesses. Three British battlecruisers blew up at Jutland, the German ones took a battering and did not quite sink. In 1941, Hood, in effect the partly post Jutland generation of British battlecruiser . . . and named after the Admiral blown up at Jutland . . . blew up in the Iceland straits in a fight with Bismarck, an example of the next generation; the fast battleship. Bismarck was then hunted down, crippled and battered until it sank. All of this brings out the principles at work.)

Let us look:

>>keiths would tell us you're wrong because he can't trust his senses.>>

Self-refuting. By 1897, the British philosopher, F H Bradley, in his Appearance and Reality, had given a critical hit to the Kantian picture of an ugly gulch between our inner phenomenal world and the outer world of things in themselves. Craig put it well:
insofar as these . . . assumptions include Kant’s strictures on the scope of scientific knowledge, they are deeply, fatally flawed. For Kant must at least be claiming to have knowledge of the way some things (e.g., the mind and its structures and operations) exist in themselves and not merely as they appear; he confidently affirms that the idea of God, for instance, has the property of unknowability. [10] So the theory relies on knowledge that the theory, if it was true, would not — could not — allow. [ Jesus’ Resurrection: Fact or Figment, ed. Paul Copan (Downer’s Grove, IL: IVP, 2000), p. 13. NB: Ref. [10] is to Plantinga’s Warranted Christian Belief, pp. 3 – 30, and is shortly followed by a reference to F. H. Bradley’s gentle but stinging opening salvo in his Appearance and Reality, 2nd Edn.: that “The man who is ready to prove that metaphysical knowledge is impossible has . . . himself . . . perhaps unknowingly, entered the arena [of metaphysics] . . . . To say that reality is such that our knowledge cannot reach it, is to claim to know reality.” (Clarendon Press, 1930 [edns go back to the 1890's]), p.1]
KS (as reported) trusts his senses to tell him he cannot trust his senses. Fail. A better approach would be to assess strengths and limitations, reckon with the unshakeable conviction we live in a generally orderly world, and assess that we do find empirically reliable insights. Then comes the issue: why an ordered cosmos and not a disorderly, unpredictable, unstable chaos? (Which raises the issue of a First Cause initiating and sustaining an orderly world.)

>>Patrick would tell you you're wrong on the basis of having gone through a single book and replaced "intelligent design" with "creationism" without having observed any difference in meaning.>>

Patrick needs to go back and see what Thaxton et al had to say about Pandas and People, that they were in fundamental disagreement with the Creationist agenda, and sought more adequate ways to capture what they were thinking. Intelligent design (which IIRC came from Hoyle) better captured their intent. Then, Patrick needs to read The Mystery of Life's Origin, to see part of why that was so. The UD Weak Argument correctives -- if he can give them a fair reading -- will also help. The time for conspiracy theorising and setting up creationist straw bogeymen is long past.

>>Tom English would tell you you're wrong because there are no actual searches of phase space.>>

TE first needs to read Walker and Davies as cited in the OP. They exactly illustrate the concept of exploration of a phase or configuration space, and raise issues of isolated zones in it. The infographic on islands of function and search resource challenges then deepens the problem. TE needs to start from a Darwin warm pond or the like and explain to us how a search framework does not apply in a chemical and physical context. Then he needs to explain to us how a genome does not exhibit a state in a config space and how chance variation is not a random search in it. Wiki gives a succinct justification for speaking in terms of metrics of distance in such spaces, with particular emphasis on strings:
the Damerau–Levenshtein distance (named after Frederick J. Damerau and Vladimir I. Levenshtein[1][2][3]) is a distance (string metric) between two strings, i.e., finite sequence of symbols, given by counting the minimum number of operations needed to transform one string into the other, where an operation is defined as an insertion, deletion, or substitution of a single character, or a transposition of two adjacent characters.
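
For illustration only, a minimal Python sketch of that distance (the optimal string alignment variant; the function name and test strings are arbitrary, chosen just to show the idea):

# Minimal optimal-string-alignment variant of the Damerau-Levenshtein distance:
# insertions, deletions, substitutions and adjacent transpositions each cost 1.
def dl_distance(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(dl_distance("design", "deisgn"))   # 1: one adjacent transposition

The same counting of elementary edits carries over to the y/n description strings discussed next.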
Where, any functionally specific, complex organised entity can be specified in a description language as a string of y/n answers. Thus quantifying in bits. We then can readily address the tree of life from the roots up in terms of search challenges in a context where FSCO/I as outlined, naturally comes in isolated islands of function.

>>petrushka would tell you you're wrong because he read a book by Andreas Wagner which claims it's always possible to get there from here.>>

P needs to read Walker and Davies. If the search resources are inadequate, or the space has zones that lock in paths once they hit, then you cannot explore the whole space under relevant constraints. Cf the infographic. KF
kairosfocus
August 8, 2016 at 12:47 AM PDT
Mung @ 3
Too bad our good friends at TSZ can’t be bothered to post here any more.
Is it "can't be bothered" or is it "are no longer able"?
oooooooooooo
I suggest (a) banning at UD is generally for cause [as in, go back to Dionisio's beautiful Norwegian fjords . . . ], and (b) the issue is best redirected to those with general moderator powers. There is a major and general purpose foundational issue under consideration here, let us focus on it. KF
Seversky
August 7, 2016 at 10:48 AM PDT
A review of some of my past articles on Intelligent Design has brought to the fore a critique on What Design Looks Like from the perspective of the Materialists at NCSE.
___________
https://ayearningforpublius.wordpress.com/2015/03/05/what-design-looks-like-an-ncse-document-with-comments-by-don-johnson/
ayearningforpublius
August 7, 2016 at 10:43 AM PDT
Awesome. Hits so many good points. Too bad our good friends at TSZ can't be bothered to post here any more.

keiths would tell us you're wrong because he can't trust his senses.

Patrick would tell you you're wrong on the basis of having gone through a single book and replaced "intelligent design" with "creationism" without having observed any difference in meaning.

Tom English would tell you you're wrong because there are no actual searches of phase space.

petrushka would tell you you're wrong because he read a book by Andreas Wagner which claims it's always possible to get there from here.

etc.
Mung
August 7, 2016 at 09:17 AM PDT
Information Enigma video added
kairosfocus
August 7, 2016 at 05:41 AM PDT
On inductive corroboration, falsificationism, paradigm shifts and the ID debates.
kairosfocus
August 7, 2016 at 04:48 AM PDT
