This is OSweetMrMath's Typepad Profile.
I should really be posting about this on the forum, but let me say a little more about my extent model.

Larry Hamilton fits the data to a sigmoid. When the data comes in well below the fitted curve (as in 2012), the prediction for the next year is that the extent will return to the trend curve (a rebound in 2013).

The simplest version of my model is a random walk with drift. That is, the year over year change is a random number with a non-zero mean (in this case, -50 thousand sq km). By assumption, the mean and variance are constant, and the year over year changes are independent. The point prediction is computed from the mean, while confidence intervals are estimated from the variance. This means that the set of future predictions at any time falls on a straight line. However, this line should not be interpreted as a trend line, because a trend line implies that if the observed values deviate from it, they will still return to (or near) it in the future. In the case of a random walk, any deviation resets the y-intercept of the line. This produces a new linear prediction, but there is no expectation that future values will return to the previous prediction line (or any other previous prediction line).

My model finds that the year over year random changes are not independent. They are negatively correlated (rho = -0.49), which means that if one year has a greater than predicted loss, the next year is predicted to have a less than average loss, or even a gain. (This predicted gain could be called a rebound.) This happened in 2012 and 2013: the loss was so much larger than predicted in 2012 that my model predicted a gain in 2013, but it still underestimated the size of the gain. Because there was a larger gain than predicted in 2013, my model predicted a larger than average loss in 2014. That did not come to pass, and the loss was below my prediction. As a result, my prediction for this year is again that the loss will be larger than average.
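The structure above can be sketched in a few lines of code. The drift (-50 thousand sq km per year, i.e. -0.05 million km^2) and the lag-1 correlation (rho = -0.49) come from the text, but the noise scale, the starting extent, and the choice of an AR(1) form for the correlated changes are my own illustrative assumptions, not the model's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters: drift and lag-1 correlation are quoted in the text;
# the noise scale sigma is a made-up placeholder.
mu, rho, sigma = -0.05, -0.49, 0.35

n_years = 35
shocks = rng.normal(0.0, sigma, n_years)

# AR(1) structure on the *changes*: a larger-than-average loss one year
# pulls the next year's predicted change back toward a gain (a "rebound").
changes = np.empty(n_years)
changes[0] = mu + shocks[0]
for t in range(1, n_years):
    changes[t] = mu + rho * (changes[t - 1] - mu) + shocks[t]

# The extent itself is a random walk driven by those changes: any deviation
# shifts the level permanently, so there is no trend line to return to.
extent = 7.0 + np.cumsum(changes)  # 7.0 million km^2 start is arbitrary
```

Note that the rebound here acts only on the next change, not on the level: after a big loss the predicted change turns positive, but the walk restarts from the new, lower level.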
There is no longer-term persistence in my model (I don't use changes from two years ago to predict this year's loss); there is no statistical evidence that considering more than one year would improve prediction accuracy. Note that even with the rebound effect, this is a random walk model: there is no trend line that the data is expected to follow.

As a further complication, my model also accounts for monthly changes, in the sense that if this month's (year over year) change is larger than average, next month's change is also predicted to be larger than average. A physical interpretation is that if this month ends with less ice than expected, next month effectively has a head start on melting. When the March extent this year came in at a record low, there was a greater than average year over year loss, which means the predicted year over year losses for all future months increased. This predicted change dies out rapidly, so there was a substantial change in the prediction for April, but essentially no observable change in the prediction for September.
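A minimal sketch of how a single month's surprise might propagate and die out, assuming a simple geometric decay; the persistence parameter phi and the shock size are made-up illustrations, since the actual monthly model is not specified here:

```python
import numpy as np

# Hypothetical persistence: the fraction of this month's year-over-year
# surprise carried into next month's predicted change. 0.4 is illustrative.
phi = 0.4

# Suppose March came in 0.3 million km^2 below expectation.
shock = -0.3

# Predicted adjustment to each future month's year-over-year change:
# noticeable one month out (April), negligible six months out (September).
months_ahead = np.arange(1, 7)
adjustment = shock * phi ** months_ahead
```

With any |phi| < 1 the adjustment shrinks geometrically, which matches the description: a substantial change to the April prediction, essentially none to September's.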
Details on my model are scattered here and there on the forum. I was planning a larger post describing the model and analyzing its performance, but I've been inordinately busy for the past few months and haven't had time to complete it.

Larry Hamilton just fits the data to a sigmoid of some kind, right? That's not what I'm doing at all. I don't have a function that I'm fitting the data to. My methodology is closer to describing the data as a random walk with constant drift.

My estimated standard error for predicting September extent from April data is 0.356, or a 95% CI of +/- 0.70. (I never report more than one decimal digit.) That's estimated under a normality assumption. One of my goals is to produce a bootstrap estimate, which I expect to be wider due to slightly heavy tails.

My method doesn't explicitly retain predictions made several months in advance; it recomputes all future predictions every month. So I've had to back out historical predictions from the data. Assuming I've done that correctly, hindcast errors for predicting September extent from April data are listed below.

1980  0.42    1981 -0.22    1982  0.03    1983  0.18    1984 -0.15
1985 -0.33    1986  0.42    1987  0.30    1988  0.30    1989 -0.07
1990 -0.84    1991 -0.33    1992  0.82    1993 -0.42    1994  0.43
1995 -0.62    1996  1.40    1997 -0.076   1998 -0.23    1999 -0.48
2000 -0.19    2001  0.30    2002 -0.45    2003 -0.14    2004 -0.10
2005 -0.51    2006  0.0098  2007 -1.60    2008 -0.81    2009  0.053
2010 -0.39    2011 -0.49    2012 -1.40    2013  0.71    2014  0.56

Obviously I had a big miss in 2012. In 2013, the model predicted a rebound but underestimated it. In 2014, the model expected a rebound on the rebound, which didn't happen.
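As a sanity check, the hindcast errors above can be summarized directly. Note these are simple sample statistics of the listed residuals, not the model's own internal error estimate:

```python
import numpy as np

# Hindcast errors for 1980-2014 (September extent predicted from April),
# in millions of km^2, transcribed from the list above.
errors = np.array([
     0.42, -0.22,  0.03,  0.18, -0.15, -0.33,  0.42,  0.30,  0.30,
    -0.07, -0.84, -0.33,  0.82, -0.42,  0.43, -0.62,  1.40, -0.076,
    -0.23, -0.48, -0.19,  0.30, -0.45, -0.14, -0.10, -0.51,  0.0098,
    -1.60, -0.81,  0.053, -0.39, -0.49, -1.40,  0.71,  0.56,
])

mean_err = float(errors.mean())               # sample bias of the hindcasts
rmse = float(np.sqrt(np.mean(errors ** 2)))   # root-mean-square hindcast error
half_width = 1.96 * rmse                      # rough normal-theory 95% band
```

The sample RMSE of these hindcasts is pulled up by the outlier years (1996, 2007, 2012), which is consistent with the remark above about slightly heavy tails and a wider bootstrap interval.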
FishOutOfWater said: "I have no idea how anyone can predict the ice area scientifically if they don't have a clue how thick the ice is. Clearly, to make a prediction one must pick a preferred thickness model or use a statistical approach that uses a distribution of possible thicknesses."

I have a prediction model for sea ice extent based only on extent, using a time series model. I don't know if you would consider this to be predicting "scientifically", but my predictions last year and this year (for as long as I have run the model) have been consistent with the SIPN median. You could say that I have an implicit thickness model: the distribution of thickness is more or less constant, and therefore I don't need to represent thickness in my predictions. My way of thinking is that thickness data is not more informative about extent than extent data alone, so there is no need for me to consider thickness.