The KenRingGate saga offers us an opportunity to ponder differences between science and other human endeavours, particularly with respect to complex systems. The inverse square law was an early success for those involved in the Royal Society, and the theory can be derived by just thinking about it. The surface area of a sphere is 4πr² (a result first derived by Archimedes), so observations of a force or energy emanating from a point source should be expected to diminish in proportion to the square of r, the distance of the observer from the source. This is because the same amount of energy per unit time is spread over an ever larger surface area as r increases, and that surface area grows in proportion to r². Predictions made with this theory agree very consistently with observations at the scales we usually inhabit, so it is a strong theory that we confidently use for prediction.

But do theories have to be as precise as the inverse square law in order to qualify as "scientific"? As the complexity of the topic of study increases, the precision of predictions tends to decline. Some would argue that this merely reflects our lack of understanding of complex processes such as biology or climate, but it also arises from our inability to precisely define and measure complex starting and finishing conditions. Does this mean that researchers studying complex processes are not practicing science?
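As an aside, the inverse-square relation really is just a couple of lines of arithmetic. Here is a minimal sketch in Python; the 100 W source and the distances are arbitrary illustrative numbers, not data from anywhere.

```python
import math

def intensity(power_watts, r_metres):
    """Intensity (W/m^2) at distance r from a point source: the emitted
    power is spread over a sphere of surface area 4*pi*r^2."""
    return power_watts / (4 * math.pi * r_metres ** 2)

# Doubling the distance quarters the intensity.
print(intensity(100.0, 1.0))   # ~7.96 W/m^2
print(intensity(100.0, 2.0))   # ~1.99 W/m^2
```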
The philosopher Karl Popper distinguished a scientific theory from a non-scientific one by saying that the former is falsifiable: an experiment could be conducted with at least one outcome that would recognisably contradict the theory. That criterion says nothing directly about complexity. Many of my colleagues are "Popperian" in their approach to science, although we accept that there is still plenty of healthy discussion about the nature of science. So let's consider the question of complexity in Popperian terms.
Biologists and climate scientists can offer scientific theories or models so long as those theories could be falsified in ways that take account of probability. We accept that our estimates of starting and finishing states are subject to error, and that our models of processes may be incomplete, but we can still falsify them if we test them with independent data and use statistical methods. Our theories, models and means of measurement need to be precisely stated, and we also need to include statements of probability. Imprecision and bias can both serve as criteria for falsification. If I assert that, under certain conditions, 95% of estimates obtained from my model of forest growth will be within a given range, and someone then finds that significantly more than 5% of estimates fall outside that range, my model can be said to be falsified. Similarly, if I say that under those conditions the errors of estimates should be normally distributed around the model with mean zero, and someone finds that they are significantly non-normal or that their mean is significantly different from zero, then again my model can be said to be falsified.
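To make that concrete, here is a minimal sketch (in Python, using SciPy) of the two checks just described: an exact binomial test of whether too many observations fall outside the model's stated 95% bounds, and tests of whether the errors are centred on zero and plausibly normal. The function name and its inputs are my own illustrative choices, not part of any particular forest-growth model.

```python
import numpy as np
from scipy import stats

def falsification_checks(observed, predicted, lower, upper):
    """Popperian checks on a model that claims 95% of observations fall
    within its stated bounds and that errors are normal with mean zero."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    outside = int(np.sum((observed < lower) | (observed > upper)))
    n = observed.size

    # Imprecision: is the proportion outside the 95% bounds significantly
    # greater than the claimed 5%? One-sided exact binomial test.
    coverage = stats.binomtest(outside, n, p=0.05, alternative="greater")

    # Bias and error structure: are the errors centred on zero, and are
    # they plausibly normal?
    errors = observed - predicted
    bias = stats.ttest_1samp(errors, popmean=0.0)
    normality = stats.shapiro(errors)

    return {
        "proportion_outside": outside / n,
        "coverage_p": coverage.pvalue,
        "zero_mean_p": bias.pvalue,
        "normality_p": normality.pvalue,
    }
```

A small p-value on any of these, from an adequately large set of independent observations, is exactly the kind of recognisable contradiction Popper asks for.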
Ken Ring’s “theory” of earthquakes, by contrast, is vaguely stated, and predictions from various interpretations of his stated methods do not agree at all well with observations. His predictions sometimes include bounds, but when an event happens outside those bounds he asserts that it was only a bit outside them and therefore "validates" his prediction. The vagueness and malleability of his predictions apparently make them unfalsifiable. He does not appear to be behaving as a scientist, and his predictions are not terribly useful. The fact that he propagated his predictions and scared the bejesus out of people in an already stressed-out city is therefore monstrous, and he deserves strong criticism.
6 comments:
Euan... I like what you say about modelling. I would now like to know whether the sunspot model in this graph has been falsified by observations to date. In other words, what does it take to falsify a model when the model (red line) shows no variability for Jan 2011?
http://www.swpc.noaa.gov/SolarCycle/sunspot.gif
BTW... I think there were 71 sunspots on average in Jan 2011.
Actually the international sunspot numbers were 19 and 29.5 for January and February respectively. It would be good if they left their predictions in after they plotted the data.
Having delved into modelling this myself, I would guess that the actual numbers are within the 95% CIs of Hathaway's model, but also that the actual numbers may well undershoot the mean expected path. They should show CIs, either for each month or for the overall pattern.
BTW, my model's mean expectation for February 2011 was a sunspot number of 23 :). Having emitted a slight crow, I should add that I have an idea for a better fit to the overall cycle pattern, and I'm not overly enthused by my current model. When I get time I'll fit a new one.
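To give a flavour of what I mean by fitting the overall cycle pattern, here is a bare-bones sketch in Python. The rise-and-decline function is a Hathaway-style shape, and the numbers are synthetic stand-ins to show the fitting step, not my actual model or the real sunspot record.

```python
import numpy as np
from scipy.optimize import curve_fit

def cycle_shape(t, amplitude, width, t0):
    """Smooth rise-and-decline curve (a Hathaway-style shape) for monthly
    sunspot numbers; t is months since an arbitrary reference epoch."""
    x = (t - t0) / width
    return amplitude * x ** 3 / (np.exp(x ** 2) - 0.71)

# Synthetic "observations": the shape plus noise, purely for illustration.
rng = np.random.default_rng(1)
months = np.arange(132.0)   # roughly one 11-year cycle
obs = cycle_shape(months, 120.0, 40.0, -5.0) + rng.normal(0.0, 8.0, months.size)

# Fit the three shape parameters, then read off a mean expectation for a
# given month. No interval is quoted here; that needs the error model too.
params, cov = curve_fit(cycle_shape, months, obs, p0=(100.0, 35.0, 0.0))
print("expected sunspot number at month 84:", cycle_shape(84.0, *params))
```

The parameter covariance from curve_fit is a start on intervals, but as I said above, the error structure deserves more care than this.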
I will now just assume the sunspot model in this graph has been falsified by observations to date.
BTW... I have made a bet on the number of sunspots for January 2018.
Thanks for correcting the 71 "unadjusted" claim... which I got from this widget:
http://cache4.intelliweather.net/wcw/world_climate_widget_sidebar.gif
I will not use that widget anymore.
What was your bet? My model suggests a similar number to what we are seeing now, around 20.
My bet was that the mean for January 2018 would be less than 23 or greater than 27 sunspots. This is the same as saying it will not be 25 ± 2 sunspots. I am curious... what does your model give for the variation in estimates for that date?
The limits would be pretty wide with the current model, probably -20 and +20 at least. I need to do a new analysis and pay more attention to the error structure.