Wednesday, July 26, 2006

New Nature manuscript on climate sensitivity

I was on the verge of posting to say what a damp squib the Nature peer review trial had been. There had been no manuscripts of any interest to me, and no comments either. (I'm not blaming Nature for this, just making an observation. The essays on peer review are good though, and well worth a browse.)

But then this appeared on the RSS feed. Someone has produced a new estimate of climate sensitivity, based on some sort of optimal fingerprint of the solar cycle on global temperatures. They claim a lower limit for S of 2.3C, and a rather vague upper limit of 6.4C (although they say they think it is more likely close to the 3C+-0.7C estimate that they deduce for the periodic response). I guess I might as well make it clear that this is purely at the "submitted" stage - not yet refereed, but it obviously got through the initial editorial vetting.

Comments are invited from institutionalised people only - that's Nature's policy, anyone can comment here of course! I'll keep my powder dry for the time being...

Saturday, July 22, 2006

Japanese Language Proficiency Test

The JLPT tests are held once a year in early December around the world and throughout Japan. There are 4 levels, and I'm planning on taking level 2 (2nd hardest) this year. This post is mainly to see if I can learn anything useful from others who are furiously googling "jlpt" to help devise effective studying strategies :-)

As a brief background for the regular readers, each of the 4 levels roughly doubles in difficulty over the previous one. Eg this site says
  • 4: 150h study, 100 kanji and 800 words
  • 3: 300h, 300 kanji, 1500 words
  • 2: 600h, 1000 kanji, 6000 words
  • 1: 900h, 2000 kanji, 10,000 words
Learning kanji is the biggest headache, especially at the higher levels. For those who don't know about kanji, they are an incredibly obtuse and dysfunctional writing system which takes the natives more than a decade to learn to a tolerable level. But I'll save the anti-kanji rant for another post. I'll not persuade the Japanese to give them up in the next few months, so there is little alternative for now but to learn them!

Looking at a kanji frequency list, it seems to me that JLPT level 2 reaches a bit of a sweet spot. The most common 1000 kanji represent 95% of all characters by usage, which should be enough to get the gist of most stuff pretty well. You need to learn another 600 just to get up to 99% (which still means that every two or three sentences, there will be something you cannot read or understand). In contrast, the first 500 kanji only cover 80% of usage, which is helpful for understanding signs and stuff but obviously not a great deal of use when it comes to reading normal written material.

I've been in Japan for 5 years but not really studied or learnt the language very seriously at all so far. Our work environment is highly geared towards English speaking (which is pretty much essential for all climate scientists) and coping with daily life via pointing and body language hasn't been too hard! Moreover, over the last couple of years it had started to seem fairly likely that we would be leaving Japan about now, so in fact we had virtually given up making any effort at all. But our futures here now seem a bit more secure, and recently our company arranged for a Japanese teacher to come on site twice a week, which is much more effective than our previous arrangement of trudging into the city after work for an evening class when we are too tired to learn. So I've decided to put a bit of effort in for the time being and see how things go.

I haven't taken any of the JLPT tests before, but reckoned that even with my limited previous study, I was probably already at about level 3 when I started with the new teacher in May. Another 7 months to brush up on the basics and push towards level 2 seemed like a tough but realistic challenge.

Hints and tips for learning strategies are welcome. Here is what I am doing:

Most of my effort is going into kanji and vocabulary learning, and for this I'm working through the "Kanji in Context" books (both reference and workbook). This seems to be the best system for kanji/vocab I've seen (and is often praised on sci.lang.japan), because simultaneously with the individual kanji, you learn a number of common words that use them, and see the words used in their normal context. The example usage and sentences in the workbook make good reading/grammar practice too. At ~5 kanji per day it is a bit of a struggle (and I'm not even trying to learn every single compound) but I reckon it won't matter if I don't get right through the 1000 anyway, so long as I get as close as possible. Currently I'm just past 400, comfortably ahead of schedule, but of course I already knew a lot of the more common ones. KiC isn't specifically arranged to match the JLPT kanji list order (ie the 1st 1000 in the book aren't quite the same as the JLPT 2 set), but obviously it's going to be close. I'm also learning new vocab as I come across it while practising past tests.

I have a simple flashcard application that I wrote for my Sharp Zaurus. It's my one and only Java program and extremely ugly and flaky but it does just what I want (which none of the available software seemed to). It uses a slightly modified Leitner system, and it's quite easy to add new words to the flashcard list by cutting and pasting from the dictionary (either the inbuilt one, or Jim Breen's excellent Edict). So as I learn new words, they go into this and I try to spend 20 minutes each day going through them.
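For anyone wondering how a Leitner scheduler works, here is a minimal Python sketch (this is not my actual Zaurus program, which is in Java; the box intervals and card fields are just placeholder assumptions for illustration):

```python
# Minimal sketch of a plain (unmodified) Leitner scheduler.
# The review intervals and card fields here are my own assumptions.
import random

REVIEW_EVERY = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}  # box number -> review every N days

class Card:
    def __init__(self, front, back):
        self.front = front   # e.g. a kanji compound
        self.back = back     # reading and meaning, pasted from the dictionary
        self.box = 1         # new cards start in box 1 (reviewed most often)

def due_cards(cards, day):
    """Cards whose box interval divides the current day number."""
    return [c for c in cards if day % REVIEW_EVERY[c.box] == 0]

def review(card, correct):
    """Promote a correct card to the next box; demote a wrong one back to box 1."""
    card.box = min(card.box + 1, 5) if correct else 1

# Toy usage: two words, one simulated session.
deck = [Card("天気", "tenki - weather"), Card("気候", "kikou - climate")]
for card in due_cards(deck, day=1):
    print(card.front)
    answer_known = random.random() > 0.3   # stand-in for the user's self-assessment
    review(card, answer_known)
```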

I've also recently got the Unicom JLPT level 2 grammar book, which seems good so far. As well as the grammar points, it is also useful reading practice as the explanations are 100% Japanese, with furigana over the harder kanji. I've just bought the listening comprehension book by the same people, which I plan to work through while my teacher is on holiday for August.

I try to listen to one of the news stories on the ANN news page each day, which has a brief streaming TV broadcast of each story. The spoken script doesn't always exactly follow the text but it's generally very close. With rikaichan I can check the readings and meanings of unknown text, but keeping up with the newsreader is a challenge!

On top of this, I'm wading through past tests. Learning how to handle them effectively is definitely an important skill. To be honest, after spending the last couple of decades on stuff that I can understand pretty well (ie science and maths) it is a bit of a shock to have to learn how to muddle through in a sea of confusion without too much panic!

If any other Japanese learners have any hints or tips, I'm all ears...

Friday, July 21, 2006

Axe the gaijin tax!

There seems to be a lot of interest in tinkering with immigration rules in Japan recently, with the competing tensions of an aging, shrinking native population versus the fear-mongering of the anti-immigration right wing. There is some pandering to the latter but also one or two encouraging signs of sanity in this article if you look carefully enough.

One thing that I've grumbled about before is the grotesquely unfair pension system. We are all forced to pay a significant pension contribution (directly taken from our salaries), but unless we pay for 25(?) years, we get no pension entitlement at all. If/when we leave the country, we get a partial refund - but this is capped after 3 years, so for those who stay any longer (like me) the arrangement is nothing less than state-sponsored theft. Since pension provision in the UK is dependent on the tax system, I have long since resigned myself to not actually building one up while out of the country (basically, as a non-UK-taxpayer, there is no point in contributing to one, saving privately is more flexible and efficient) but actually having a substantial sum of money directly taken from my salary and getting nothing in return is rubbing salt in the wound.

According to the above report, the Govt is considering changing the refund system. Not before time.

Wednesday, July 19, 2006

More on detection, attribution and estimation 4: The Literature

Time for a look at the literature.

I'll start with a mild disclaimer: the purpose of this comment is not to have a go at (or embarrass) people who have confused confidence intervals with credible intervals. Indeed I've only recently started to think more clearly about what is going on here myself - and I certainly wouldn't promise that my current understanding is complete and correct. Moreover, given that almost everyone has been getting it wrong for years, it would not be reasonable to single out a handful of individuals for blame. Really, the purpose of this comment is just to point out how ubiquitous this confusion is in the literature.

First the results of a bit of web-surfing. It's actually really hard to find a good definition of a confidence interval on the web. Here's a typical faulty version which I found linked from Wikipedia:
The confidence interval defines a band around the sample mean within which the true population [mean?] will lie, to some degree of confidence:

For example, there is a 95% probability that the true population mean will lie within the 95% confidence interval of the sample mean.
As I've already demonstrated, this is false in general.

The content of the Wikipedia page itself was misleading until recently - I have had a go at fixing it, rather clumsily. More editing is welcome...

The very first google hit for confidence interval is an interesting case. It is a Lancaster Uni mirror of some widely distributed educational material, which contains the commendably careful definition:
If independent samples are taken repeatedly from the same population, and a confidence interval calculated for each sample, then a certain percentage (confidence level) of the intervals will include the unknown population parameter.
which makes it clear that the probability is based on the frequentist concept of repeated sampling to generate a population of confidence intervals. So far so good. However, their definition actually started off with the at best ambiguous:
A confidence interval gives an estimated range of values which is likely to include an unknown population parameter, the estimated range being calculated from a given set of sample data.
and concludes with the rather unfortunate:
Confidence intervals ... provide a range of plausible values for the unknown parameter.
(That's not to say that confidence intervals never provide such a range - but it is not necessarily what they are designed to do, and they may well fail to achieve this.) Despite the careful description of repeated sampling in the middle of their text, it seems quite possible that some readers will end up with a rather misleading impression of what confidence intervals are.

The first edition of the highly-regarded book by Wilks ("Statistical methods in the atmospheric sciences") also equated confidence and credible intervals (here), and said things like "H0 [the null hypothesis] is rejected as too unlikely to have been true" - despite the frequentist paradigm explicitly forbidding the attachment of probabilities to hypotheses. I was pleased to find that Prof Wilks quickly agreed with me that this was misleading, and stated that in fact the recently-published second edition (which I have not seen) does not contain the comment about credible intervals.

Perhaps most surprisingly, the mistake is committed by people even when they are railing against the limitations of standard frequentist hypothesis testing! Eg, in this comment, Nicholls recommends reporting confidence intervals:
The reporting of confidence intervals would allow readers to address the question 'Given these data and the correlation calculated with them, what is the probability that H0 is true?'
Oops.

So, how well does climate science come out of this? Well, the confusion seems to pop up just about everywhere that you see these words "likely" and "very likely" (eg throughout the IPCC TAR). The one exception would seem to be the climate sensitivity work, the bulk of which is explicitly Bayesian right from the start, with clearly stated priors. One could probably argue that many of the TAR judgements were based on experts carefully weighing up the evidence, but in many cases (especially the D&A chapter), it seems entirely routine to directly interpret a confidence interval (say, from a regression analysis) as a credible interval. I've seen an absolutely explicit occurrence of this in a recent D&A paper, co-authored by 2 prominent figures in the field.

I promised TCO I would say something about the Hockey Stick. Of course, it's the same story here. On the basis of regression coefficients, MBH (specifically in their 1999 GRL paper, which is repeated in the TAR) make a statement about how "likely" it is that the 1990s were warmer than the previous millennium. I can't help but be amused to note that with all the "auditing" and peer-review, including by professional statisticians, none of them seems to have noticed this little detail.

Of course an important question to consider is, how much does this all matter? And the answer is...it depends. In many cases, the answers you get probably won't turn out too different. There was some explicitly Bayesian estimation briefly mentioned in the D&A chapter of the TAR, and broadly speaking it seemed to give results that were similar to (in fact perhaps even stronger than) the more mainstream stuff. Moreover, there is a lot of slack in terms like "likely", so changing the probability might not invalidate such statements anyway. So I am not by any means suggesting that the TAR needs to be thrown out, and therefore maybe some people will claim this is all a fuss about nothing. However, I would argue that it is still surely a good thing to at least understand what is going on, so that people can consider how important an issue it is in each particular case. For instance, equating confidence intervals and credible intervals seems to assume (inter alia) the choice of a uniform prior: this decision can by no means be an automatic one, and equating it with "no prior" or "initial ignorance" is definitely wrong. Eg, no-one, not even the most rabid septic, has actually ever believed that CO2 is as likely to cool as warm the planet (at least not since Arrhenius), so assigning an equal prior probability to positive and negative effects would surely be hard to defend. Like the example of the "negative mass" apple, a confidence interval that covers a wide range does not mean that we think the parameter has a significant probability of taking an extreme value! At an absolute minimum, it certainly needs to be stated clearly that this choice of prior was made.
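To make the prior-dependence concrete, here is a toy numerical sketch (entirely made-up numbers, not taken from any published estimate) showing how the upper end of a "credible interval" for climate sensitivity depends directly on the range of the uniform prior, even though the likelihood is identical in every case:

```python
# Toy sketch (made-up numbers, not from any published study) of how the range
# of a uniform prior feeds straight into the resulting "credible interval"
# when a likelihood-based interval is reinterpreted as one.
import numpy as np

def posterior_quantile(prior_upper, q=0.95, n=20001):
    S = np.linspace(0.01, prior_upper, n)                 # uniform prior on (0, prior_upper]
    lam = 1.0 / S                                          # toy "feedback" parameter
    like = np.exp(-0.5 * ((lam - 1.0/3.0) / 0.15)**2)      # the same likelihood in every case
    post = like / np.trapz(like, S)                        # flat prior => posterior is the normalised likelihood
    cdf = np.cumsum(post) * (S[1] - S[0])
    return np.interp(q, cdf, S)

for upper in (6, 10, 20):
    print(f"U(0,{upper}) prior: 95% upper bound ~ {posterior_quantile(upper):.1f} C")
```

With a likelihood of this shape (a fat tail at high S, which is typical of observationally-constrained estimates), widening the uniform prior drags the 95% upper bound up with it, which is exactly why the choice of prior needs to be stated rather than smuggled in.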

There are a number of other technical issues such as model error which are intimately related (eg, what does it mean to determine a parameter or regression coefficient to n decimal places, in a model that is an incomplete representation of the real world?). We can try a bit of hand-waving and claim it doesn't matter too much, but ultimately I think if we are going to try to make useful and credible estimates then there is little alternative but to try to deal with these things more coherently and consistently, even if it does mean more work for the statisticians!

Sunday, July 16, 2006

More on detection, attribution and estimation 3: The Prosecutor's Fallacy

The error of equating P(Data|Hypothesis) and P(Hypothesis|Data) is known as the Prosecutor's Fallacy, due to its frequent appearance in criminal trials (misinterpretation of DNA evidence etc) (the dispute on that wikipedia page seems to refer to the details of its applicability to a particular legal case, not the underlying theory). Typically, it is illustrated via a simple discrete yes/no question along the following lines: if the probability of a random person matching a DNA sample from a crime scene is 1 in 1,000,000, then what is the probability that a suspect is guilty, given only that their DNA matches? The fallacious answer is 999,999 in 1,000,000. An easy way to see the flaw in this is to note that in the UK, there are 60,000,000 people, so around 60 people will have matching DNA, only 1 of whom will be the guilty one (note various other assumptions I've made, including the fact that a crime actually took place at all, it was committed by one person, and that there is no other evidence concerning the suspect).

Formally, the exact calculation of P(H|D) when we are given P(D|H) requires Bayes' Theorem:
P(H|D)=P(D|H)P(H)/P(D)
which requires the specification of a "prior" P(H) (P(D) is a normalisation constant which provides no real theoretical difficulties, although it might be hard to calculate in practice). It must be understood that this equation does not depend on "being a Bayesian" or "being a frequentist". It is simply a law of probability, which follows directly from the axioms (in particular, P(D,H)=P(D|H)P(H)=P(H|D)P(D)). So it's not something we can choose to obey or not - at least, not without abandoning any pretence that we are talking about probability as it is usually understood.
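Just to make the numbers concrete, here is the DNA example from above run through this formula (a sketch, using the same simplifying assumptions as before):

```python
# Applying P(H|D) = P(D|H) P(H) / P(D) to the DNA example above, with the same
# simplifying assumptions: one perpetrator drawn at random from a population of
# 60 million, a 1-in-1,000,000 chance match for everyone else, no other evidence.
population = 60_000_000
p_H = 1.0 / population            # prior: this suspect is the guilty one
p_D_given_H = 1.0                 # the guilty person certainly matches
p_D_given_notH = 1e-6             # an innocent person matches by chance

p_D = p_D_given_H * p_H + p_D_given_notH * (1.0 - p_H)   # normalisation constant
p_H_given_D = p_D_given_H * p_H / p_D

print(p_H_given_D)   # ~0.016, i.e. roughly 1 in 60, not 999,999 in 1,000,000
```

which agrees with the informal "count the matching people" argument above.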

Although the prosecutor's fallacy is generally demonstrated through discrete probability, Bayes' Theorem applies equally to continuous probability density functions, with f(h|d) being related to f(d|h) via f(h|d)=f(d|h)f(h)/f(d). This explains the distinction between confidence intervals and credible intervals demonstrated in the last post, since an experimental observation gives us f(d|h) (a likelihood function), and in order to turn it into a posterior pdf f(h|d) we need to use a prior f(h).

For instance, in the previous apple-weighing example, we might have a prior belief that the apple will weigh about 100g, plus or minus 20g at 1 standard deviation (and strictly speaking, the prior should be truncated at 0). The likelihood function arising from the measurement is itself a Gaussian shape centred on the observed 40g, with a width of 50g - this function does extend to negative values, as a hypothetical negative-mass apple would have a nonzero probability of returning a 40g measurement. Applying Bayes' Theorem formally gives us the well-known result of optimal interpolation between two Gaussians, which in this case works out to 91.7+-18.6g. In this case, the observation is so poor that it hardly affects our prior belief, but if our scales had an error of only 5g we'd obviously depend far more on their output. In no case would we end up believing that the apple's mass was negative!
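For anyone who wants to check the arithmetic, here is a minimal sketch of that Gaussian update (ignoring the truncation of the prior at zero, which makes a negligible difference with these numbers):

```python
# Conjugate Gaussian update for the apple example: prior N(100, 20^2),
# measurement 40g with error sd 50g.  (The truncation of the prior at zero is
# ignored here; with these numbers it makes a negligible difference.)
def gaussian_update(prior_mean, prior_sd, obs, obs_sd):
    w = prior_sd**2 / (prior_sd**2 + obs_sd**2)   # weight given to the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_sd = (prior_sd**2 * obs_sd**2 / (prior_sd**2 + obs_sd**2)) ** 0.5
    return post_mean, post_sd

print(gaussian_update(100, 20, 40, 50))   # ~ (91.7, 18.6)

# A sharper measurement (5g error) would instead pull the estimate close to 40g:
print(gaussian_update(100, 20, 40, 5))    # ~ (43.5, 4.9)
```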

Next, and perhaps last (for now at least): what the literature says.

Saturday, July 15, 2006

More on detection, attribution and estimation 2: Incredible confidence intervals

Following on from this post, here are a couple of simple examples where perfectly valid confidence intervals are clearly not credible intervals at the same level of probability.

For the first, rather natural example, let's assume we are trying to measure some simple non-negative quantity such as the mass of an apple. We have a set of scales which have a random (but well-characterised) error of +-50g (Gaussian at 1 standard deviation). That is, if we take a calibrated mass of value X, repeatedly use the scales and plot a histogram of the results of the measurements, the outputs will form a nice Gaussian shape with mean X and standard deviation 50g. [OK, I know I'm doing this at a very boring pace, but I need to make sure it is all clearly set out.]

One obvious and very natural way to create a confidence interval for the apple's mass is to take a single measurement (call the observed mass m) and then write down (m-50,m+50), which is a symmetric 68% confidence interval for M, the true mass. That is to say, if we were to hypothetically repeat this experiment numerous times, and construct the set of confidence intervals (mi-50,mi+50) where i indexes the measurements generated in the experiments, then 68% of these intervals would include M, 16% would be wholly greater than M and 16% wholly smaller (this is guaranteed by the specified observational uncertainty). We don't, of course, actually do this infinite experiment - but this is precisely what is meant by "(m-50,m+50) is a symmetrical 68% confidence interval for M". [Are you asleep yet? The punchline is coming up...]

It is incorrect to interpret the specific confidence interval (m-50,m+50) as implying that M lies in that range with probability 68% (and above and below with probability 16%).

To see why this is the case, consider the following: what if the reading happens to be m=40g? (Which it might well be, if the true mass is say 80g.) Is the confidence interval (-10,90) really a symmetric credible interval at the 68% level? That is to say, would anyone believe that the apple's mass is <-10g with probability 16% (would they bet on it - if so, please point them this way...)? Of course not. Any symmetric 68% credible interval (ie an interval so that one believes M lies in it with probability 68%, and below and above with probability 16% each) must necessarily be truncated somewhere above zero. Yet it is trivial to show that the confidence interval as constructed above is entirely valid. Under repeated observations, 16% of the measurements will be lower than M-50, 16% greater than M+50, and the remainder in between, so the population of confidence intervals has exactly the statistical properties required of it.
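A quick simulation makes both halves of this point vivid (a sketch, with the true mass arbitrarily set to 80g):

```python
# Simulation of the apple confidence interval: true mass 80g (assumed for the
# sketch), measurement error sd 50g.  The intervals (m-50, m+50) cover the true
# mass ~68% of the time, as required -- and yet a fair fraction of them extend
# below zero, which no credible interval for a mass ever would.
import random

M, SD, N = 80.0, 50.0, 100_000
covered = below_zero = 0
for _ in range(N):
    m = random.gauss(M, SD)
    lo, hi = m - SD, m + SD
    covered += (lo < M < hi)
    below_zero += (lo < 0)

print(f"coverage: {covered / N:.3f}")              # ~0.68
print(f"extend below zero: {below_zero / N:.3f}")  # ~0.27 with these numbers
```

So the interval is a perfectly good 68% confidence interval, even though (with these numbers) more than a quarter of its realisations reach down into negative masses that nobody believes in.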

One can, perhaps, state that "negative mass is not statistically inconsistent with the measurement" or maybe even say "negative mass cannot be ruled out by the measurement", but these statements cannot be interpreted as implying that anyone thinks the apple actually has negative mass!

There are other examples of non-credible confidence intervals that are quite striking. Here's one I found on the web (description lightly modified):

Let's say we want to estimate a parameter x. Let's ignore all the available measurements entirely! In their place, start by using a random number generator to generate y uniformly in [0,1]. If y > 0.68, then define the confidence interval to be the empty interval. If y < 0.68, then define the confidence interval to be the whole number line. That's it! Again, this routine trivially generates a 68% CI - that is, exactly 68% of the time, the CI contains x whatever value this takes. But neither of the two possible intervals that the algorithm generates is credible at the 68% level - it should be clear that one of the possible intervals contains x (and the other does not) with certainty, even without knowing what x is.
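Here is the same construction in a few lines of code, just to emphasise that nothing is hidden up the sleeve:

```python
# The "ignore the data entirely" 68% confidence interval described above.
import math, random

def silly_ci():
    """Returns a 68% confidence interval for x without ever looking at any data."""
    if random.random() < 0.68:
        return (-math.inf, math.inf)   # contains x, whatever x is
    return None                        # the empty interval: cannot contain x

# Whatever the true x, exactly 68% of these intervals contain it, so the
# frequentist coverage requirement is met -- but neither possible output is a
# 68% credible interval: each one contains x with probability 0 or 1.
x = 42.0
hits = 0
for _ in range(100_000):
    ci = silly_ci()
    if ci is not None and ci[0] < x < ci[1]:
        hits += 1
print(hits / 100_000)   # ~0.68
```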

In the next part, I'll try to reconcile these results with the underlying theory.

More on detection, attribution and estimation 1: Confidence intervals and credible intervals

I wrote this some time ago, and have been occasionally looking into the D&A stuff since then. It all seems rather a lot murkier than I had expected...this will take a lot of writing to explain so I'm splitting it into parts. This part is an introduction to the distinct concepts of confidence intervals and credible intervals.

To recap, D&A is an essentially frequentist procedure, which seeks to determine whether the observational record is "statistically inconsistent" with what we could have expected in the absence of anthropogenic influence, and "not statistically inconsistent" with what we think the anthropogenic influence should have been.

It is fundamentally a frequentist approach (comparing the real data to the population of model outputs which differ due to natural variability). Therefore, it has no direct interpretation as a probabilistic estimate of the magnitude or existence of the anthropogenic influence. Such estimates are absolutely incompatible with the frequentist paradigm. This is the fundamental message of these posts, and I can't stress it too strongly.

Unfortunately, this distinction has been badly blurred in some of the literature, including even the IPCC TAR itself. In fact, it seems like there is rather widespread confusion between a (frequentist) Confidence Interval, and a (Bayesian) Credible Interval, so I will expand more on this now. A confidence interval (warning: page may be a bit dodgy, precisely due to the confusion I'm discussing) for a parameter x is an interval constructed according to a specific method, such that if we were to repeat the experiment numerous times, with a new set of observational data (with different random errors) for each experiment, then p% of the confidence intervals we construct using this method would contain the true (fixed) value of x, whatever that is.

This is not the same thing as an interval such that we believe x lies in the interval with probability p! For a frequentist, to make a probabilistic statement about x is to commit a category error: x is a fixed but unknown parameter, it is either in the specific interval or not.

A credible interval (which can also be abbreviated to CI: how confusing) is an inherently Bayesian concept: it is an interval such that the parameter is believed to lie in the interval with probability p. Fundamentally, the belief (probability) attaches to the person who makes the statement, rather than the parameter itself - in other words, it is subjective. In cases where many people agree on a particular credible interval (ie because they share similar judgements about priors, methods and data), it is sometimes called intersubjective. Calling a credible interval objective is potentially rather misleading IMO - it may be taken as implying that there is a true probability that we may be able to discover through careful analysis (analogous to the probability of 4 heads in 5 coin tosses, say). However, the only "true" probability that could apply in this sense is 0 or 1 - the event is either going to happen, or not (NB "going to" can apply to past events which are currently unknown to me, such as the probability that it rained in Birmingham on this day last year). The probability here applies to our belief in the truth of an unknown proposition, not any frequentist limit as the number of replications increases. Different people may quite reasonably differ in their opinion, without any one of them being wrong!

It seems quite clear from a bit of web-surfing (and a literature trawl) that the vast majority of people automatically and intuitively interpret confidence intervals as credible intervals: that is not so surprising, as the precise definition of the confidence interval is rather complex, counterintuitive, and not very useful in real life (in contrast, what people actually want to know is well-encapsulated by the notion of a credible interval). Moreover, even authoritative literature specifically on the subject of statistics frequently gets it wrong (as I'll show later). However, the two concepts are not the same, and it is trivial to construct confidence intervals that are in no way credible. I'll give some examples of this in part 2.

Wednesday, July 12, 2006

Mission: Impossible

Your mission, should you choose to accept it, is to forecast natural disasters up to 30 years ahead. As always, should you or any of your team be caught or killed, the Secretary will disavow any knowledge of your actions. This web-page will self-destruct in five seconds.
A few weeks ago, I sent off a research proposal concerning regional climate prediction on the 30-year time scale. I'm bored with climate sensitivity - for starters, we already know the answer (we are just waiting for others to agree) and furthermore, it doesn't actually matter much in terms of how the climate will evolve within my lifetime. So, I want to do something more useful, by attempting to describe how climate will change (in probabilistic terms) on the time and spatial scales that actually affect people directly. Sadly, the proposal wasn't funded. The response (which I received only a couple of days ago) was that the work was undoubtedly extremely important, but that the plan was somewhat lacking in credibility. It had been thrown together in a bit of a rush and we weren't really sure what they were after, so I wasn't hugely surprised or disappointed. I was, however, rather surprised to see this story in the news today! No doubt the details are somewhat puffed up in the story - I can't imagine any scientist is really going to claim to predict whether the west side of Kamakura or the east side of Kamakura is going to get hit by more typhoons in 30 years (the Japanese version of the story mentions 1km resolution).

Our proposal was of course rather less ambitious in scope than this news report - I reckon that we should certainly be able to make meaningful (albeit imprecise) predictions about regional temperature trends, maybe something vague about overall precipitation levels if we are lucky...but beyond that, I wouldn't like to say. In no small part, the goal was to try to work out how credibly we can say anything at all on these sub-global scales. Nevertheless, our proposal was rather similar in overall scope and aims to this one.

I guess I should make it clear that I don't think for a minute that they have ripped off our idea! The 30-year prediction idea is becoming increasingly common in climate prediction circles - that time scale hits a bit of a sweet spot between the interannual variability problem of year-to-year forecasting, and 100 years ahead when the scenarios have a huge impact and which none of us will see anyway. 30 years is near-term enough to actually affect some current infrastructure decisions (eg power stations, water resources), and the Hadley Centre's QUMP group (apparently web-page-less, which is a shame) are already on the case as far as the UK goes. I'm very pleased to see that the Japanese are also heading in this direction.

Apparently this project will have ~$100 million to allocate. Yes, really - that is ¥10^10 (ten billion yen). Maybe they are planning on just buying up the Hadley Centre, lock, stock and barrel, and shipping it over here? If a very small proportion of that money heads in our direction it should keep us in beer for a few more years :-)

Update

I see that RPSnr has found the story reported here. I agree that there is a risk of over-hyping what can be achieved here - and also a risk of scientists over-selling their abilities in order to get their hands on the funding. So long as that is avoided, I'm not as negative as he is though - when long-term infrastructure decisions are being made, it would be irresponsible to not try to use all the information at our disposal, including judgements about future climate changes.

Sunday, July 09, 2006

Yokohama by night

A friend had some discount coupons, so 4 of us went on a dinner-cruise in the bay on Friday night. The Sakuragichou (Minato Mirai 21) area has some very nicely-designed (IMO) architecture, very well planned compared to most Japanese cities in which buildings are put up one-at-a-time in a rather disorganised jumble.



Here is a view of the scene around sunset. The tallest is the Landmark Tower, Japan's tallest building (which has impressive views from the top on a clear day). Then to the right are the 3 towers of Queens Square and the sail-like (but not from this direction) Pan Pacifico hotel. The lower building in front is the recently reopened Aka Renga (Red Brick Warehouse).



This low-quality video shows the light display on the Ferris wheel in the small amusement park. By the time it's been YouTubed it is not that great though.

Saturday, July 08, 2006

It's all kicking off on Climate of the Past

Climate of the Past is a fairly new on-line journal with an interesting review/publication model, which I've mentioned before.

This recent hockey-stick-related submission has sparked a flurry of interest, with no fewer than 3 comments (at time of blogging) from Anonymous referee #2, who sets out with "This manuscript is deeply flawed, and its publication would damage the reputation of this promising journal" before moving on to "unpublishability of their submissions", "red herrings which serve only to obfuscate", "spurious claims", "disingenuous cherry pick", "simply impossible to take any of the nonsense they offer up here, at all seriously" etc etc. Ouch. I don't have a dog in this fight and will not dare to offer an opinion :-)

Many of the other manuscripts currently "in open discussion" there are from the special issue (arising from a session at the EGU General Assembly) that Jules is jointly editing. Not that there is any discussion...perhaps it's a shame these aren't collated separately.

Travelling

Peter Hearnden offered a good rant in response to the news I was off to the UK recently. He makes an interesting point that I've often thought about myself: to what extent is it justifiable to fly around the world regularly for conferences, especially when we all say that AGW is a problem? Why not just use video conferencing?

Well, first it's worth pointing out that we do in fact use a lot of other forms of communication, regularly - especially email and telephone. Work would be very different without either of these tools. Video conferencing is much less common, and less useful (IMO and IME). It works ok for small groups, like adding a web-cam to a phone call (although the benefit of this is relatively minor). However, I've been at larger joint meetings where ~3 groups dispersed throughout the UK are all connected by a sophisticated video conferencing system (the access grid nodes at the e-science centres in the UK), and although it is much better than no meeting at all, it is really rather disappointing by comparison with everyone meeting at a single site. Within one group, rapid multi-way and parallel interaction is no problem, but across the system, it is very much one at a time, slow and hard to keep track of, even with this dedicated state-of-the-art system set up at significant expense. Broadcasting a presentation to a dispersed audience is one thing, but if we want to enable 10 conversations to go on in parallel (or even 3) out of a group of 50, with the participants switching from one to another randomly, then there really is no technological alternative (yet) to getting them in the same place at the same time. When it becomes more practical, I'll welcome it, as the past week of jet-lag has not been much fun.

That said, I wouldn't like to defend the volume of travelling that some people do - and I wouldn't like to do it myself. That applies as much to daily commuting as it does to conferences. I reckon that about 1 trip a year is a sensible level for me - any less and I'd really not meet many people at all, much more and it turns into the same old circuit of faces, where no-one has that much new to say from one meeting to the next. One trip per year is also our Institute's basic policy (ie budget allowance). In the past, I've often managed to combine more than one meeting in the same trip, and it also includes what I have in the way of foreign holiday (including visiting "aged parents", as they self-deprecatingly label themselves). Recently, however, I've had 3 long-haul trips in quick succession - the first was originally intended to be my trip for the year, then I went back to the UK briefly for personal reasons and most recently this workshop invitation came out of the blue and seemed too good an opportunity to miss. During the latter, I met several people for the first time in person who I'd only previously corresponded with via email, and also reacquainted myself with several more who I hadn't met for a year or more (including someone I'm supposed to be writing a paper with right now...). The scientific process is more social than many people may realise, and it is no exaggeration to say that a couple of beers in a pub every so often in diverse company may be more productive than hours at the desk.

Another perspective that must be borne in mind is that although research funds are limited and we have to spend wisely, that cuts both ways since travel is in fact pretty cheap. This workshop I just attended is part of a series of 3 in as many years funded as some sort of special project. I don't know the details but the total cost is probably comparable to employing at most a couple of postdocs over that length of time. I wouldn't be at all surprised if these meetings make a more substantial contribution to scientific progress than anything these postdocs might have done. If air-fares were higher to account for the full environmental impact then perhaps the balance would shift a bit the other way, but in fact most attendees were fairly locally-based anyway.

Probably this is sounding a bit defensive. Well, so it should: I do think that the volume of travelling that some people do is a bit crazy, and we should have to justify it carefully in terms of the costs (including the time taken) and benefits. I'm particularly conscious of the issue now I'm living a long-haul flight away from just about everywhere else in the scientific world :-) I'm sure that in many cases some rationalisation could cut the amount of travel down substantially, but probably not to zero, at least not without seriously impacting the work that we do.

[A minor additional detail: I feel I should point out that Egham, where we recently went, is an acceptably comfortable venue for a workshop, but it's hardly in the top flight of tourist destinations - conference life is not all pina coladas on the beach watching the sun go down :-) The culinary highlight was the night we escaped from the college canteen to a decent Thai restaurant in town.]

Thursday, July 06, 2006

BBC NEWS | Science/Nature | Climate panel: The verdict

There's an interesting "Climate panel verdict" on the BBC, provoked by Lovelock's book. There are some serious names in their group, let's see how they do...
1. It is likely that temperatures will rise by 3C to 5C by the year 2100 unless we act swiftly to cut greenhouse gas emissions and protect natural forests. VERDICT: YES 7, NO 0
No way. >3C is possible, but likely? Seriously, I have no idea where they pulled that from. Wigley and Raper's somewhat contentious interpretation of the IPCC TAR research is a "likely" range of 1.7-4.9C, and that is predicated on no action at all in the next 100 years, not the BBC's "unless we act swiftly".
2. Temperatures might rise by as much as 8C by 2100, but this is less likely. VERDICT: YES 7, NO 0
Much much less likely, and way outside the upper range of the IPCC TAR which was 5.8C (of course, the forthcoming AR4 might bump things up a bit, I haven't checked that bit of it, but 8C is still hard to credit). In fact, I think it's virtually impossible, although it's hard to quantify these sort of extreme probabilities with much precision. "Might" is hard to exclude, but to merely say "less likely" is rather misleading IMO.
3. A temperature rise of 3C to 5C would probably bring severe changes for humans. VERDICT: YES 7, NO 0
Yup. (At least, for some humans.)
4. A temperature rise of 3C to 5C would probably bring catastrophic changes for humans. VERDICT: YES 0, NO 3, ABSTAIN 4
Nope (ie, I agree with the panel).
5. A global recession would result in rapid, dangerous climate change as a result of the diminution of aerosols in the atmosphere. VERDICT: YES 0, NO 7
Agreed.
6. Continuing to increase CO2 will have a major effect on oceans through temperature stratification and acidification. VERDICT: YES 1, NO 0, ABSTAIN 6
Oooh, I think I'd actually sway towards a "yes" here on the grounds of acidification. OTOH it is not clear how hard it will be for the ecosystems to adapt. Probably abstaining is safer.
7. We are being reckless with the planet through greenhouse gas emissions combined with broader human-driven environmental change. VERDICT: YES 7, NO 0
A no-brainer with the inclusion of "broader environmental change", but that means it is no longer really a climate change question. Of course reckless doesn't actually mean that the actions are wrong, just that they aren't adequately thought through...
8. James Lovelock's metaphor that the Earth will react against us like an irritant if we continue treating it this way is helpful in public understanding. VERDICT: YES 5, NO 2
Um...abstain. I suppose I should ask the public and see whether they have a better understanding...
9. The climate system is so complex that individual climate experts struggle to see the whole picture. VERDICT: YES 7, NO 0
Yup.
10. Politicians need to draw on intuition in formulating climate policy. VERDICT: YES 5, NO 1, ABSTAIN 1
?
11. Professor Lovelock insufficiently acknowledges in the book the uncertainty over how hot the climate will become. VERDICT: YES 5, NO 1, ABSTAIN 1
Haven't read the book, but didn't like his plug in the Indy.
12. Population growth is a major issue. VERDICT: YES 7, NO 0
Yes (although probably a little less of an issue than the SRES make it out to be).
13. Professor Lovelock is wrong to give the impression that nuclear fission is our only realistic short-term solution. VERDICT: YES 7, NO 0
Yes
14. In the UK context, nuclear fission is one of several options that merits full public and political discussion. VERDICT: YES 7, NO 0
Yes
15. In the UK context, Professor Lovelock is wrong in the book to reject wind power. VERDICT: YES 7, NO 0
Yes (not that I've read it)
16. His apocalyptic comments made around the time of the launch of the book, such as: "There will be a few breeding pairs of humans in the Arctic", are likely to lead to despair and disengagement rather than determination to act. VERDICT: YES 4, NO 3
Probably. I certainly don't think it did climate science any favours.
17. Politicians are unlikely to cut greenhouse gas emissions sufficiently until it is too late to prevent dangerous warming. VERDICT: YES 6, NO 1
"Dangerous" by UNFCCC definition, sure (+2C). We are already just about there.
18. James Lovelock is a towering figure in environment science and has been a major influence on understanding the way in which the Earth system works. VERDICT: YES 6, NO 1
Yes ("was" a towering figure? Oh, no need to be snarky)
19. The book is helpful in the climate debate. VERDICT: YES 7, NO 0
Pass.
20. Climate change is real, dangerous and significant in our own lifetimes. VERDICT: YES 7, NO 0
Dangerous to who? I don't think that a large proportion of people alive today will actually be endangered by anthropogenically-forced climate change. It's certainly real, significant (in several interpretations of the word) and will cause some problems for our descendants. I guess it would be a bit hyper-critical to actually mark that answer as wrong.

Overall, they get up to 90%, assuming I give them all my abstentions and don't knows. I suppose that's not too bad. But I'm surprised at the temperature predictions that they lead off with, and these are the only two specific quantitative questions in the list. I'd be interested to hear anyone's ideas as to where they got those from. Could they have meant those temperature rises to apply to the UK alone? Physically, that makes some sort of sense. But this interpretation is inconsistent with the later questions referring to the same 3-5C range and quite clearly implying global changes. Honestly, I'm baffled. William, your boss was on the panel - maybe you could ask him at coffee-time?

World Calamity Doesn't Make Cut

There's an interesting op-ed by Gary Yohe and Michael Schlesinger here.

I agree with the basic premise that we should attempt to account for uncertainty in a meaningful and consistent way in our decisions.

I disagree with their presentation of fundamentally subjective probabilistic estimates as if they are objective facts ("Our own work has shown that there is a 1 in 5 chance...")

I disagree strongly with their specific assertion of a THC collapse by 2050 with 20% probability.

Michael Schlesinger was at the workshop last week. Interesting guy, but I didn't agree with everything he said...

Wednesday, July 05, 2006

Tipping points

Someone recently asked me about "tipping points", and I said that I was thinking of blogging about it...I see that Gavin has beaten me to it on RealClimate.

He's a bit more polite than I would have been, but the underlying message is pretty much the same as I'd have given. It seems to be pretty much a made-up metaphor designed to spur people into action. David Appell has also recently written along similar lines.

The first comment on the RC article refers to Al Gore saying there are tipping points in politics too. I think that's a good description of what the profusion of "tipping points" in the media is...

Tuesday, July 04, 2006

Uncertainty, probability, models and climate change

This was the title of an interesting workshop I recently attended in the UK. It's not really that good blogging fodder, as mostly it was rather technical stuff to do with estimation methods, and there wasn't a whole lot of new climate science. Apparently there will be a web page put up with the talks (and maybe other info) some time.

Not surprisingly, the mix of attendees reflected the interests of the organiser, so about half of the presentations were from climate scientists, and half were from Bayesian statisticians who have developed and formalised a general framework by which inferences can be drawn from models. There was a heavy focus on and promotion of the use of emulators, which seem like a really cool idea but do appear to have some limitations (I'll probably post more about that separately). In fact one of the climate scientists said that he felt like he was being sold a 2nd-hand car...a sentiment I have some sympathy with. Jules suggested this might be somewhat cultural, due to the commercial background (backing) of some of the Bayesian statisticians who have previously worked with the oil industry for things like forecasting well output. Perhaps there is some defensiveness and resistance to new ideas from within the climate science community too.

I enjoyed giving my rant about climate sensitivity, and heard nothing to change my opinion (as outlined on the last slide of that talk) that the vast majority of published (and widely publicised) "pdfs of climate sensitivity" are pathological by construction and virtually worthless as a result. Indeed, I got the distinct impression that what I had to say was basically teaching about half the audience to suck eggs. There were of course some quibbles about the details of the calculation I performed, but no substantive criticism of the basic idea. I was pleased to note that several of the following speakers were rather apologetic for their use of uniform distributions as representing "ignorance", so at least that idea is catching on. Unfortunately, several members of the potential audience either left before or arrived after my talk. In the following discussion, someone did suggest that we ought to deliberately exaggerate the probability of high S, in order to counter the tendencies of others to ignore the range of uncertainty. That's not a viewpoint that I have any sympathy for.

There was one bizarre moment when someone paused in the middle of their scientific presentation to show this Greenpeace advert. Really. I'm not making it up.

Overall, it was a very interesting week, but there were a few minor disappointments for us. I think we failed to adequately convey the amazing power of the EnKF in generating solutions to nominally difficult problems (ie those involving high dimensional, computationally intensive, nonlinear and chaotic models). This became apparent in the final discussion when people started talking about the potential for using adjoints to solve these problems, citing this paper in particular...perhaps they didn't realise, or didn't accept, that we developed the EnKF as a means of addressing the specific, well-known (or so I thought) and intractable problems that adjoints have in such situations. I guess if anyone starts to go down that path they'll learn the hard way soon enough. [I shouldn't be too thoroughly negative - there are some things an adjoint is useful for, and maybe they will get something worthwhile out of it. But I'm certainly not tempted to try it myself.] Also, although we had gone there expecting to learn how emulators are the solution to all our problems, it seems that they are very hard work for what seems like a rather modest gain. Of course it is useful to have found that much out prior to putting in all that hard work ourselves.

On the plus side, although thanks to the trains we barely overlapped with one or two of the attendees, it was good to meet for the first time several people who I had previously only corresponded with via email (and to meet again others whom I only see occasionally). Electronic communication is all very well and good but it's not a full substitute for some face-mail.

After the workshop, we popped into London for the Friday evening, and had a pleasant dinner in the vicinity of Broadcasting House where we heard some of the back-story surrounding the "Overselling climate change" radio programme. That was an interesting tale...

Monday, July 03, 2006

There are two types of people...

...those who divide people up into two types, and those who don't.

I've been reminded of this aphorism when travelling between the UK and Japan recently, as both airports are running an advert by HSBC along the walls of various airbridges, corridors and tunnels. This advert comes in many versions, each of which basically consists of 2 copies of two contrasting images in an ABAB pattern, with contrasting one-word descriptions printed over them in ABBA form - ie, the descriptions are swapped for the second pair of photos. Eg, the photos are a laptop and a baby, with the words "work" and "play" overprinted in both orders. Or two people in casual and smart clothes labelled "boss" and "worker". The basic point seems to be about how different people may have different interpretations and perspectives on the same facts.

The Japanese version of this campaign is rather limited. There is just one version (that I've seen), with the words "global" and "local" overprinted on a plate of sushi and a burger. I'm not sure if this is a post-modern ironically self-deprecating dig at the famed Japanese propensity to view the world as being divided up into "Japanese" and "Foreign", or whether this really was the best idea the ad-men could come up with. My suspicion that it was the latter was increased by our treatment while waiting in the queues for passport control, where no fewer than 3 separate people helpfully told us that we were in the wrong queue, and that the foreigners were supposed to be queueing up elsewhere. Now, the signs could be a little clearer, as there are prominent "Japanese" and "Foreign" labels and somewhat separate indications that anyone with a re-entry permit is supposed to use the Japanese queue, but still the assumption that anyone who doesn't have an appropriate colour skin and vertically-challenged stature can't be Japanese seems unreasonably deep-seated. Of course, statistically speaking, it is a fair bet at Narita Airport, but I'm sure that's true at Heathrow too and I certainly wouldn't go up to someone with a brown skin there and say "excuse me, but the foreigners are supposed to go in that queue over there"!