
Saturday, January 03, 2015

BYU Studies: Science as Storytelling, and the Atomization of Knowledge

While looking for something to read during my Christmas vacation, I discovered that the last 2014 issue of BYU Studies contains an article by Barry Bickmore and David Grandy titled, "Science as Storytelling." The article is nicely summarized as follows:

Much [o]f our modern world revolves around something called "science." But what is science? Interestingly, this turns out to be a very difficult question to answer because every definition seems to include something we don't consider science or seems to exclude something we do consider science. In this essay, the authors present their own definition: Science is the modern art of creating stories that explain observations of the natural world and that could be useful for predicting, and possibly even controlling, nature. They then refine this definition by offering seven rules that scientific storytelling must follow to distinguish it from other genres. These rules fall under the following general topics: reproducibility, predictive power, prospects for improvement, naturalism, uniformitarianism, simplicity, and harmony.
Bickmore and Grandy explain that their article originated as part of an introductory science course in order to address simplistic views of science and the corresponding tendency to reject scientific findings that clash with personal, religious, or political views. Theirs is a time-honored attempt to define what makes science unique from other human endeavors.

Their use of storytelling as the central concept of science made me nervous at first because it is easy to dismiss scientists as just making up stories. They address this concern and explain their choice of words:
[W]e chose the word “stories” to emphasize the idea that the explanations scientists come up with are not themselves facts. Scientific explanations are always subject to change, since any new observations we make might contradict previously established explanations. The universe is a very complicated place, and it is likely that any explanation that humans come up with will be, at best, an approximation of the truth.
I think using the word "story" is fine as long as it is used loosely. For example, if you were to ask how water gets to the faucets in my house, I would tell you a story beginning with the entrance of the main water pipe into my basement, and how the water is distributed to various places by various pipes. Plumbers count on building codes to help ensure that the story is consistent from house to house. However, occasionally they will find deviations or expansions on the basic story. As they attempt to understand these variations, they are basically refining the story of that building's flow of water and acting accordingly in order to control the flow. When pipes are hidden by walls, floors, or ceilings, there will always be a little uncertainty in the story. But with enough probing of the system, the remaining uncertainties become negligible or practically irrelevant.

Unfortunately, the attempt to separate facts from stories (or hypotheses, theories, etc.) can also be a source of mischief. It can lead to what I call the atomization of knowledge, where knowledge is broken down into the smallest possible pieces and loses its force because context and relationships between facts are prevented from forming. Take, for example, the following quote from Hugh Nibley provided by R. Gary in the comments to my last post.
The fossil or potsherd or photograph that I hold in my hand may be called a fact—it is direct evidence, an immediate experience; but my interpretation of it is not a fact, it is entirely a picture of my own construction. I cannot experience ten thousand or forty million years—I can only imagine, and the fact that my picture is based on facts does not make it a fact, even when I think the evidence is so clear and unequivocal as to allow no other interpretation. (The Collected Works of Hugh Nibley, vol. 1, ch. 2, 25-27.)
That's all well and good, but a dedicated skeptic might question whether a fossil is actually a fact, and assert that only pixels are fact--pictures are interpretation. Actually, Bickmore and Grandy hint as much when they write:
Consider fossils. They look like the remains of living things. Is it not reasonable to suppose that they were once living things that were covered and preserved in sediment, just as dead organisms can be covered and preserved in sediment nowadays?
That's quite an underselling of fossils; the evidence is better than that. I'm not criticizing the authors--they were just making a point about the consistency of cause and effect--but it helps to illustrate the problem. It took me about 10 seconds to Google up some creationists arguing that when it comes to fossils, the only facts are their physical properties (dimensions, weight, etc). Scientific troublemakers excel at this kind of game--keeping attention on the most basic indisputable facts so that higher-order interpretations appear to be mere guesses.

This all goes to my view that, just as it is impossible to entirely separate science from non-science, it is impossible to entirely separate fact from interpretation, and the judgment of fact vs. interpretation will differ based on one's knowledge base. That carbon has 6 protons is a statement of fact to chemists because it is so well tested. A non-chemist might consider it merely an interpretation. Maybe I could formalize this as a rule of thumb: if someone seems keen to separate fact from interpretation, there is a good chance that they are selling scientific garbage. This is because specialists will tend to have an appreciation of where accepted facts end and interpretations begin in their field, so they don't need to bang on about it. Further, they will tend to dispute interpretations with additional observations or facts, rather than by getting pedantic about the demarcation between the two.

From the discussion thus far you would be forgiven for wondering how scientists make any progress at all. The rules outlined by Bickmore and Grandy are key here. I list them as they appear in the article.
Rule 1: Scientific stories are crafted to explain observations, but the observations that are used as a basis for these stories must be reproducible.

Rule 2: Scientists prefer stories that can predict things that were not included in the observations used to create those explanations in the first place.

Rule 3: Scientific stories should be subject to an infinitely repeating process of evaluation meant to generate more and more useful stories.

Rule 4: Scientific explanations do not appeal to the supernatural. Only naturalistic explanations are allowed.

Rule 5: Any scientific explanation involving events in the past must square with the principle of “uniformitarianism”—the assumption that past events can be explained in terms of the “natural laws” that apply today.

Rule 6: Scientists assume that nature is simple enough for human minds to understand.

Rule 7: Scientific explanations should not contradict other established scientific explanations, unless absolutely necessary.
Although all of them matter, I think rules 1, 2, 3, and 7 are particularly important. I would argue that the remaining rules are simply extensions of those four rules working together.

Well, I've gone on long enough and I don't have a good way to wrap up. So I'll just say go read the article and see what you think.

Saturday, December 03, 2011

Real-Life Science Drama

I'm late on this, but I'm guessing most people haven't paid attention to it anyway, so it's just as well. And as you'll see at the end, it's a story that hasn't finished.

In 2009 a paper was published in Science that reported an association between chronic fatigue syndrome (CFS) and a retrovirus called XMRV. Although it was premature at the time to say that XMRV caused CFS, once it was shown that XMRV was sensitive to anti-retroviral drugs used in HIV treatment, some patients started taking the drugs in an attempt to treat CFS, and patient advocacy groups rallied around the new CFS paradigm. Further, there was concern that XMRV might be in the blood supply, prompting the exclusion of donors with CFS.

However, there is mounting evidence that the claims of the original paper were not correct: other groups have repeatedly failed to confirm the results, the original group has been unable to reliably identify positive samples, and there is evidence that the virus itself is a laboratory product. Last May Science suggested that the original paper be retracted, but the authors refused. So the editor, Bruce Alberts, published an "expression of concern" which essentially said that Science no longer had confidence in the paper. Fast forward to September: the authors of the original study retracted several figures from the paper when they determined that the results were based on contaminated samples. A week later lead author Judy Mikovits was fired, apparently over a dispute about sharing of cell lines.

The XMRV history is mostly laid out in "False Positive", published in Science, but it's behind a paywall. However, the LA Times had a good story that covers much of the same material. It's fascinating to read as the evidence for the hypothesis is described, and then to watch the tide turn against what seemed like compelling findings. Mix in Mikovits digging her heels in--going so far as to claim conspiracy against her--and the hysterics of patient-advocacy groups (physically threatening scientists, in some cases), and you've got a real scientific drama.

But beyond the drama, this story helps to illuminate how science works. Philosopher Karl Popper famously argued that science works by falsification. It's not uncommon (at least in Internet discussions) to hear this view strictly applied--that it only takes a single experiment to falsify a theory. But this example shows that it isn't so simple. The impact of a single experiment depends on context, technology, and the state of the field. When contradictory results are obtained, it takes time to gain clarity. Right now it looks like the XMRV link to CFS is dead. However, Mikovits still believes that a link exists and that the difficulties in nailing it down can be attributed to the biology of the virus-host interaction. Is that ad hoc rationalization in an attempt to save a favored hypothesis, or is it perseverance in the face of a complex world that doesn't always give easy answers?

This, in turn, gives us an opportunity to think about Thomas Kuhn's notion of paradigms and what it means to know something based on collected knowledge. Going forward, the notion that XMRV or any other retrovirus causes CFS will be viewed with great skepticism by most scientists, and this collective judgment will dominate the field, while a few dedicated (intransigent?) researchers may soldier on. I don't think that CFS is prevalent enough to catch public attention like, say, vaccines have, but you never know. Will we hear complaints that the dominant view is held by closed-minded defenders of the status quo? Will the minority attempt to wrap themselves in the clothes of Galileo?

The latest is that Mikovits has been charged with two felonies in Nevada and is the subject of a civil suit from her former employer over the removal of laboratory notebooks and attempts to send materials to another lab.

It's been a roller-coaster ride so far, and it looks like the ride isn't over yet.

(In addition to links in the post, also see here.)

[Update, 12/22/11: The original paper has been fully retracted.]



Monday, April 25, 2011

Smoking Guns

I finally finished reading Nonsense on Stilts: How to Tell Science from Bunk by Massimo Pigliucci. Pigliucci is an evolutionary biologist turned philosopher of science who is one of the main participants in the fine podcast Rationally Speaking. It turns out that science and bunk cannot be distinguished by hard and fast rules, but there are some guidelines. Along the way he examines examples of controversial science, clear pseudoscience, and other auxiliary phenomena such as the rise of think tanks and postmodern critiques of science.

The second chapter deals with the notion that there are so-called 'hard' sciences (e.g. physics) and 'soft' sciences (e.g. sociology). Physics, with its exactness and predictive power, is often held up as the ideal to which all other sciences should aspire. However, Pigliucci argues that the sciences are heterogeneous because they deal with problems of differing complexity, and that their methods reflect those differences. In this view, physics is not somehow better than, say, biology; it's just different.

Early in the chapter he discusses the concept of strong inference. This is where a crucial experiment can be used to rule out one or more mutually exclusive hypotheses. Essentially you set up a situation where either X or Y is true, and then perform an experiment to unambiguously rule one of them out. This kind of approach can work well in physics. However, although it is a very logical way to proceed, not all scientific questions are amenable to this method, because of the complexity of the subject matter and because not all answers are black and white. (For example, you might find that many people with a disease show improvement with a particular drug; it's not a cure-vs.-no-cure, nobody-vs.-everybody dichotomy.)

He then goes on to discuss historical sciences (e.g. geology, paleontology) which are sometimes maligned as having untestable theories. In this view, experimental findings are privileged above all else, and since you cannot perform experiments with past occurrences, those occurrences must remain unverifiable. (This is a common argument made by creationists seeking to cast doubt on evolution or anything else that contradicts their interpretation of the Bible). However, as Pigliucci points out, science is about more than doing experiments (e.g. astronomy), and drawing on work by Carol E. Cleland he argues that historical hypotheses can indeed be verified with a high degree of certainty. It's worth spending a moment on this.

A difference between historical and experimental sciences is captured in the philosophical (and certainly philosophical-sounding) term, asymmetry of overdetermination. This term simply refers to the difference in our ability to know cause and effect when looking into the future vs. the past. Cleland uses the example of a house fire: someone investigating a house fire may be able to determine that it was caused by a short circuit. This is because of all of the clues left behind after the event. However, that knowledge does not translate into the ability to predict whether a particular future short circuit will result in a fire. Similarly, police investigating a crime often have many clues with which to reconstruct what happened, but they cannot predict how and when a future crime might occur. On the other hand (my own interpretation), an instructor at a fire fighting school might be able to arrange things such that a short circuit reliably causes a fire, but this is because the instructor has eliminated variables that might prevent the fire from starting and constrained conditions to give a reliable result.

Historical sciences rely on the fact that historical occurrences leave behind many clues, only a few of which may be needed in order to establish the reality of the occurrence. Scientists in this mode of investigation look for 'smoking guns' from which they can infer what happened. Experimentalists, on the other hand, seek to limit variables that might interfere with the experiment in question--trying to eliminate false negatives and false positives. These constrained conditions allow them to make more precise predictions and measurements, but that precision can quickly disappear in a more complex context. It's also worth pointing out that these two approaches can be employed together such that they inform one another.

Cleland summarizes:

When it comes to testing hypotheses, historical science is not inferior to classical experimental science. Traditional accounts of the scientific method cannot be used to support the superiority of experimental work. Furthermore, the differences in methodology that actually do exist between historical and experimental science are keyed to an objective and pervasive feature of nature, the asymmetry of overdetermination. Insofar as each practice selectively exploits the differing information that nature puts at its disposal, there are no grounds for claiming that the hypotheses of one are more securely established by evidence than are those of the other.

Further reading:

Cleland, Carol E. (2001). "Historical science, experimental science, and the scientific method," Geology 29, pp. 987-990.


