Thursday, March 05, 2009

What makes a theory compelling?*

Karl Popper was a philosopher of science who was very much interested in this question. He tried to distinguish 'science' from 'pseudoscience', but became increasingly dissatisfied with the idea that the empirical method (supporting a theory with observations and experiments) could effectively mark this distinction. He sometimes used the example of astrology “with its stupendous mass of empirical evidence based on observation”, but also qualified this by noting that “science often errs, and that pseudoscience may happen to stumble on the truth.”

Alongside his well-known work on falsification, Popper began to develop alternative ways to assess the scientific status or quality of a theory. He wrote the complex yet intriguing sentence: “confirmations [of a theory] should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.” (Popper, 1963).

Popper was especially thrilled by the result of Eddington’s eclipse observations, which in 1919 brought the first important confirmation of Einstein's theory of gravitation. It was a surprising consequence of this theory that light should bend in the presence of large, heavy objects (Einstein was apparently willing to drop his theory if this turned out not to be the case). Independent of whether such a prediction turns out to be true or not, Popper considered it an important quality of ‘real science’ to make such ‘risky predictions’. An interesting thought, no?

I still find this an intriguing idea. The notion of ‘risky’ or ‘surprising’ predictions might actually be the beginning of a fruitful alternative to existing model selection techniques, such as goodness-of-fit (which theory predicts the data best?) and simplicity (which theory gives the simplest explanation?). In music cognition, too, goodness-of-fit measures (r-squared, percentage of variance accounted for, and other measures from the experimental psychology toolkit) are often used to confirm a theory. Nevertheless, it is non-trivial to think of theories that make surprising predictions, that is, theories that predict a yet-unknown phenomenon as a consequence of their own intrinsic structure. If you know of any, let me know!
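For the record, a minimal sketch of how such a goodness-of-fit measure is typically computed as r-squared; the observations and both ‘models’ below are invented purely for illustration:

```python
import numpy as np

# Invented data: observed beat durations (in ms) and the predictions of two
# hypothetical models; none of these numbers come from an actual study.
observed = np.array([480.0, 455.0, 430.0, 410.0, 395.0, 385.0])
model_a = np.array([478.0, 458.0, 432.0, 412.0, 396.0, 383.0])
model_b = np.array([470.0, 450.0, 430.0, 410.0, 390.0, 370.0])

def r_squared(observed, predicted):
    """Proportion of variance in the observations accounted for by the model."""
    ss_res = ((observed - predicted) ** 2).sum()        # residual sum of squares
    ss_tot = ((observed - observed.mean()) ** 2).sum()  # total sum of squares
    return 1.0 - ss_res / ss_tot

print(f"model A: r^2 = {r_squared(observed, model_a):.3f}")
print(f"model B: r^2 = {r_squared(observed, model_b):.3f}")
```

On this measure model A fits better than model B, yet r-squared alone says nothing about how risky either model’s predictions were in Popper’s sense.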

K. R. Popper (1963). Conjectures and Refutations. London: Routledge.

* Repeated blog entry from July 23, 2007 (celebrating the finalization of a research proposal with Jan-Willem Romeijn on these issues, hoping to be able to address them head-on ;-)

4 comments:

  1. In 1928, Dirac predicted the anti-electron (positron) based on his relativistic version of quantum theory. That was a surprising prediction, no? It was observed by Carl Anderson in 1932, and the rest, as they say, is history.

  2. Popper rules.

    But what is a "theory"? Which of the "things" we know about music cognition can be called theories in the Popperian sense, and which are just associations between variables that have been tested in experiments?

    I think there's a distinction between having theories about music cognition that generate falsifiable hypotheses, and just making sure our experimental designs are good so that we don't merely try to confirm our hunches about how, for instance, the tempo of music affects the identification of the happy/sad emotional content of said music.

    I'm just not so sure what these theories are - what are the basic founding theories of music cognition? Is it stuff like statistical learning of tonality? That perception and action are linked, and that music cognition is embodied (whatever that means)? That we are pattern-finding creatures who generate expectations and react emotionally to whether they are met or not? That familiarity with a segment of music reduces its perceived complexity, while perhaps modulating our liking of it?

    Tommi

  3. Indeed, I was not thinking of ‘theories’ that are ‘just associations between variables that have been tested in experiments’. I was thinking of those (sub)domains of music cognition for which there are clear formal/computational theories (think of beat induction, categorization, tempo tracking, etc.). Given these theories we can investigate how surprising their predictions are (or how ‘risky’, as Popper puts it).
    Besides the usual ‘goodness-of-fit’ measures often used in experimental psychology and the ‘simplicity’ measures used in computer science, a ‘measure of surprise’ seems a promising alternative (see here for a case study on kinematic and perceptual models of tempo rubato; a toy sketch of one such measure follows the comments below). Quite a small result, but it served as an inspiration for a research proposal I just finalized with Jan Willem Romeijn from the University of Groningen. The abstract says:
    "This research project is located on the intersection of analytic philosophy, statistical method, and cognitive science. Specifically, it concerns the development of ideas from confirmation theory towards tools in statistical model selection, and the application of these tools to computational models of music cognition. The eventual objective is to analyse and quantify the role of surprise in the confirmation of these computational models, taking into account both the surprise following unexpected empirical findings, and the surprise stemming from unforeseen empirical consequences of the models. The project presents a challenging case study to confirmation theory, aimed at improving statistical methods in the sciences. At the same time, the project will resolve some pressing questions on model selection for computational cognitive science."

  4. Latest update: despite (repeated) positive reviews, the project is still awaiting funding...

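Coda, on the ‘measure of surprise’ mentioned in comment 3: one simple way to make such a measure concrete (my own toy sketch, not the measure from the case study or the proposal) is the weight of evidence an observed event carries for a theory, i.e. how much more probable the theory made the event than our background expectations did:

```python
import math

def surprise_bits(p_event_given_theory, p_event_given_background):
    """Weight of evidence (in bits) that an observed event lends to a theory.
    Large values mean the theory 'risked' a prediction that background
    knowledge deemed unlikely, and the prediction came true (cf. Popper)."""
    return math.log2(p_event_given_theory / p_event_given_background)

# Invented probabilities for Eddington's 1919 light-bending observation:
# plausible under general relativity, very unlikely on the background view.
print(f"{surprise_bits(0.9, 0.05):.1f} bits")  # ~4.2 bits of surprise
```

A confirmation worth 0 bits of surprise (an event everyone expected anyway) would count for little on Popper’s criterion, however good the resulting fit.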