CSIRO: A Limited Hang-Out?

CSIRO has done the right thing in respect to the drought data used in its recent report and an archive is now available. David Stockwell reports here.

Update: I’ve now taken a quick look at their supposed data archive http://www.bom.gov.au/climate/droughtec/download.shtml and it is far from clear that this is anything like an adequate archive. It may be more like the sort of limited hang-out that we often see when climate scientists grudgingly release a little data to comply with pressure, but without a commitment to an “open and transparent” process. I did not see any archive of the underlying data, merely the summaries. For example, the article estimates the percentage area affected by drought and gives the 5 percentile series (which is fine as far as it goes and part of a proper archive), but that is well short of an archive that enables one to replicate their results. If this is it, then it is the equivalent of archiving the MBH reconstruction without any of the underlying data, and we’ve seen that movie. Maybe David Stockwell will look into this and advise.

In order to build a true “consensus” to deal with important problems, it’s necessary for climate scientists to be thoroughly committed to an “open and transparent” process. This means more than IPCC authors taking in one another’s laundry. It means more than a bunch of IPCC scientists telling everyone else what to think – even if they’re right and perhaps especially if they’re right. It means that data and methods to support articles used for climate policy must be routinely available concurrent with the publication of the article. Not after the fact.

I think that maybe some progress is being made here, though it’s been slow. Whatever anyone may think of the role of blogs, they are clearly relevant in trying to get to an “open and transparent” process.

The funny thing is that I think that once authors get used to “open and transparent”, they’ll like it.

The same goes for the process of archiving source code, which CSIRO didn’t do. I can vouch for this on source code: it makes me feel comfortable knowing that source code is archived and available. If someone finds a mistake, so be it. It’s out there and you deal with it. But you remove all temptation to be over-defensive, because you’ve got it off your chest. I sometimes archive code in blog posts and I find this handy after the fact because it’s easier to figure out what I did, and it’s easier for others to do the same thing; it’s something that I’m going to do even more consistently.


  1. ladygray
    Posted Jul 24, 2008 at 8:07 AM | Permalink

    But you remove all temptation to be over-defensive, because you’ve got it off your chest.

    A “know the truth, and the truth will set you free” type of thinking. Goes along with that old saying, that you should always tell the truth, because then you don’t have to have a good memory. Not a bad idea for most aspects of life.

  2. Dave Andrews
    Posted Jul 24, 2008 at 8:18 AM | Permalink

    Steve, OT. There’s a paper in this week’s Nature that may be of considerable importance for the use of proxies (or it may not, I don’t know enough to judge) but you may be interested – otherwise please delete.


    Nature 454, 511-514 (24 July 2008) | doi:10.1038/nature07031; Received 10 March 2008; Accepted 28 April 2008; Published online 11 June 2008

    Subtropical to boreal convergence of tree-leaf temperatures

    Brent R. Helliker (1) & Suzanna L. Richter (2)

    1. Department of Biology,
    2. Department of Earth and Environmental Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA

    Correspondence and requests for materials should be addressed to B.R.H. (Email: helliker@sas.upenn.edu).


    The oxygen isotope ratio (δ18O) of cellulose is thought to provide a record of ambient temperature and relative humidity during periods of carbon assimilation [1, 2]. Here we introduce a method to resolve tree-canopy leaf temperature with the use of δ18O of cellulose in 39 tree species. We show a remarkably constant leaf temperature of 21.4 ± 2.2 °C across 50° of latitude, from subtropical to boreal biomes. This means that when carbon assimilation is maximal, the physiological and morphological properties of tree branches serve to raise leaf temperature above air temperature to a much greater extent in more northern latitudes. A main assumption underlying the use of δ18O to reconstruct climate history is that the temperature and relative humidity of an actively photosynthesizing leaf are the same as those of the surrounding air [3, 4]. Our data are contrary to that assumption and show that plant physiological ecology must be considered when reconstructing climate through isotope analysis. Furthermore, our results may explain why climate has only a modest effect on leaf economic traits [5] in general.

  3. TerryBixler
    Posted Jul 24, 2008 at 9:15 AM | Permalink

    Archiving of code as well as data is always necessary when working on software: at a minimum, daily incremental archives and a full weekly archive. I have used this standard for over 35 years. To do less is irresponsible at best. Off-site storage is very desirable as well.

  4. Dishman
    Posted Jul 24, 2008 at 9:15 AM | Permalink

    I’m lazy. I aspire to laziness. If someone makes a statement and makes available the evidence behind it, I may check it, or not. It depends on my mood. The availability of evidence is in and of itself evidence of trustworthiness.

    If someone says “You have to trust me”, I treat it as prima facie evidence that they are a liar. There is no trust without acceptance of verification.

  5. Steve McIntyre
    Posted Jul 24, 2008 at 9:48 AM | Permalink

    #3. OT, but can you give me suggestions on backup software for ordinary operations. My directories are pretty big now, but most don’t change on a week-to-week basis.

  6. steven mosher
    Posted Jul 24, 2008 at 10:22 AM | Permalink


  7. Mark T.
    Posted Jul 24, 2008 at 10:37 AM | Permalink

    #3. OT, but can you give me suggestions on backup software for ordinary operations. My directories are pretty big now, but most don’t change on a week-to-week basis.

    If you go out and get yourself a Seagate external HDD, it will likely come with built-in backup software. I use a FreeAgent Pro for mine at the office. It auto-archives my entire project. I’m sure other drives have something similar. You can also get a little deeper and delve into some form of configuration management software such as Subversion, CVS or Git, but that takes a bit more work and is more for version control.


    I have an I/O Magic external hard drive, whatever that is.

  8. Posted Jul 24, 2008 at 10:38 AM | Permalink

    Subversion for version control. It is open source, reasonably well documented, and clients exist for most operating systems. I second Carbonite for off-site backups.

  9. Mark T.
    Posted Jul 24, 2008 at 11:00 AM | Permalink

    It does not look like I/O Magic has its own software.

    If you aren’t averse to spending a little money, Seagate owns a company called EVault (www.evault.com) that does archiving. I have never used such a service so I have no idea how much they cost.


  10. Gunnar
    Posted Jul 24, 2008 at 11:05 AM | Permalink

    Windows has its own backup software built in. It’s not fancy, but I think it basically works. Also, if you use McAfee anti-virus, it has a backup module.

  11. Sam Urbinto
    Posted Jul 24, 2008 at 11:06 AM | Permalink

    How about putting in a blank DVD every (or every other or whatever) Wed (or what have you) and have a backup task (OS’s program should work) fire off at 12 AM (or whenever) and just burn the data to DVD? Assuming the data’s not too big to fit on a DVD.

    Although Carbonite is rather inexpensive (what, $50 a year?).

  12. Steve McIntyre
    Posted Jul 24, 2008 at 11:10 AM | Permalink

    It looks like I spoke too soon here. Their supposed data archive
    does not appear to include any of the underlying data, merely the summaries. For example, the article estimates the percentage area affected by drought and gives the 5 percentile series (which is fine as far as it goes), but that is well short of an archive that enables one to replicate their results. It may even be the equivalent of archiving the MBH reconstruction without any of the underlying data, and we’ve seen that movie. Maybe David will look into this.

  13. TerryBixler
    Posted Jul 24, 2008 at 11:42 AM | Permalink

    I need to know a little more: how much data, what programs, and what operating system. Typically a good archive includes not only the source code of software written but the software used to compile the programs.

    I use PVCS, which requires that a program or data set be checked out from the archive and checked in after modification, and I require commentary as to why! Of course I keep track of when and by whom. All programs/data sets can be compared to the previous version. The programs and data are located on a server that is backed up daily. The machines that are used to develop or work on the programs are not the same as the server.

    I keep track of everything, with archives going back 30 years. I cannot afford to do less, as the effort to develop software is very large. I support many millions of lines of code with systems running online, real-time, worldwide. You can contact me directly if you wish and maybe I can get one of my people, if not myself, to help. Typically we use a team approach, but my QA guys and one of my programmers are very sharp in this area.

  14. Barclay E. MacDonald
    Posted Jul 24, 2008 at 11:51 AM | Permalink

    Back on topic, as we saw in the PR Challenge “Open access to reconstruction codes, documentations, data and validation methods…” is only a “goal”, not a requirement. It remains elusive and evanescent.

    Perhaps a few more conferences in Wengen and Trieste will help climate science grasp the relevance, import and true meaning of this “goal”. We can only hope.

  15. Rob W.
    Posted Jul 24, 2008 at 12:02 PM | Permalink

    It’s common practice to use a third party Hosting Provider who stores your code on their servers and provides access to it through Version Control software like Subversion (SVN). The Hosting Provider is, of course, off-site, their servers are often located in a secure data center which provides uninterrupted power, daily backups, hardware maintenance, etc. There are commercial providers and open source providers. Much of the popular open source software is hosted for free by providers like sourceforge.net and tigris.org. As the project administrator, you can control how others can access your code. You can setup a policy where anyone can download it, but only certain people can commit changes to the repository.

    As I mentioned earlier, you use version control software to manage changes. Many hosting providers offer two or more different choices for version control software. The ones I’ve seen most commonly are CVS and Subversion (SVN). Both CVS and SVN are free, open source programs that are actively maintained and updated. SVN is the more modern system and I recommend it. Version control software works like this:

    1. Assuming you already have the code you want to put under version control locally on your computer, you perform a Check-in to the repository located on the hosting provider’s server. The repository will now have a copy of your code that matches what’s on your local computer.
    2. Now you work on your local copy as you normally would, and when you’re happy with your changes, you Commit the code to the repository.
    3. Continue to Commit changes as your code evolves. At any time, you can go back to any previous revision, so you don’t have to worry about keeping old copies.

    Version control has been compared to having a Time Machine for your code. SVN is commonly invoked from the command line in Windows or Linux. If you don’t want to memorize a bunch of arcane commands, there are various open source GUI programs that make it easier. I use TortoiseSVN under Windows XP, which works as a context menu extension (right-click menu) inside Windows Explorer.

    So using a hosting provider and version control software, you can make your code available to everyone, allow certain people to make changes, and keep it safe in an off-site data center for very little cost.
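    The commit/revert model described above can be boiled down to a toy sketch. This is Python and purely illustrative: real systems like SVN store deltas, metadata and remote repositories rather than in-memory snapshots, but the mental model is the same.

```python
class ToyRepo:
    """A toy model of the commit/checkout idea: each commit stores a
    full snapshot, and any earlier revision can be recovered later."""

    def __init__(self):
        self._revisions = []  # revision i = full snapshot of the file

    def commit(self, contents):
        """Store a snapshot and return its revision number."""
        self._revisions.append(contents)
        return len(self._revisions) - 1

    def checkout(self, revision=-1):
        """Fetch a given revision; the default is the latest one."""
        return self._revisions[revision]

repo = ToyRepo()
r0 = repo.commit("draft of analysis script")
r1 = repo.commit("draft of analysis script\n# fixed off-by-one")
```

    Here `repo.checkout(r0)` recovers the first draft even after later commits, which is exactly the "you don't have to worry about keeping old copies" property.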

  16. Rob W.
    Posted Jul 24, 2008 at 12:19 PM | Permalink

    I should have said it’s common practice among software developers. I don’t know what’s common practice among climate scientists. Sorry ’bout that.

  17. Doug White
    Posted Jul 24, 2008 at 12:55 PM | Permalink

    I don’t think many climate scientists would like complete disclosure. As Steve has shown, climate data can often be interpreted in many, sometimes contradictory, ways. Keeping the data out of sight allows them to make sweeping statements with no chance of contradiction. If all the data is on the table, the weakness of their conclusions becomes apparent, and somehow I don’t think a lot of climate scientists want that. Just my 2% of a dollar.

  18. MPaul
    Posted Jul 24, 2008 at 1:22 PM | Permalink

    My 2 cents. I think the approach of putting the pressure of public opinion on them is working a bit and could continue to work. Keep the pressure up. Lawsuits and FOI battles will likely fail. I do think you need to allow some room for face-saving in the end. So, for what it’s worth, I think rhetoric such as ‘There, that didn’t hurt so much, did it, CSIRO?’ tends to needlessly extend the time to capitulation and would tend to harden their battle resolve. Whereas if you give them the opportunity to claim leadership through their actions in promoting transparency, and reward them for forward progress, you might see further progress and you might be able to drive a wedge between the recalcitrant old guard (who are likely impervious to public pressure) and the administrators.

    Steve: good point. I’ll remove that snark.

  19. OldUnixHead
    Posted Jul 24, 2008 at 1:58 PM | Permalink

    FWIW, wrt CSIRO’s “Data from the Exceptional Circumstances Report” _download page_ [link], I see that from the report’s main page on the Exceptional Circumstances Report [link], there is a link to its Supplementary Info [656KB pdf]. That pdf does contain some data tables as well as a lot of charts.

    Still, that’s not really anything like the archives we’d like to see.

  20. Gunnar
    Posted Jul 24, 2008 at 1:59 PM | Permalink

    Software folks, I think Steve asked for a backup solution, and you give him lots of details about software configuration management tools. This is like a microcosm of the big problem with the entire software industry. Try listening to the customer.

  21. Posted Jul 24, 2008 at 2:51 PM | Permalink

    As a quick look at the data provided by CSIRO for the Drought Exceptional Circumstances Report, I made density plots (frequency histograms) of the rainfall data over two periods, 1900-2010 and 2010-2040, for the South-west of Western Australia, the area with the highest drought predictions.

    I don’t have the time to do more than check the hypothesis tests on these data myself. The data delivery format is not ideal, but at least it’s workable. There is a data center at the Bureau of Rural Sciences, http://www.daff.gov.au/brs/data-tools. I would suggest the full data set for replication should find a home there eventually.
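    For readers who want to try the same kind of period-vs-period frequency histogram, here is a rough sketch. This is Python rather than the R actually used, and the numbers are invented stand-ins, not the CSIRO series.

```python
import random

# Synthetic stand-ins for annual rainfall (mm) in two periods; these
# numbers are invented for illustration, not taken from the CSIRO data.
random.seed(1)
period_a = [random.gauss(800, 150) for _ in range(110)]  # long "baseline" period
period_b = [random.gauss(750, 150) for _ in range(30)]   # shorter "projection" period

def histogram(values, bins, lo, hi):
    """Bin counts over [lo, hi): the raw ingredients of a density plot."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        if lo <= v < hi:
            counts[min(int((v - lo) / width), bins - 1)] += 1
    return counts

hist_a = histogram(period_a, bins=10, lo=300, hi=1300)
hist_b = histogram(period_b, bins=10, lo=300, hi=1300)
```

    Normalizing each count list by its sample size puts the two periods on a comparable footing, since the periods differ in length.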

  22. Posted Jul 24, 2008 at 4:00 PM | Permalink

    I added the R script for the plot.

  23. TerryBixler
    Posted Jul 24, 2008 at 4:37 PM | Permalink

    I need
    1) Operating system
    2) Approximate size of data
    3) some idea of software currently used

    I cannot offer a solution or help without basic information. As an aside, I do not trust off-site data backup solutions. I suspect if Steve were to lose his box he would lose a lot of work. I do not expect that the public sector has a very high index of archiving or indexing their work and datasets. My experience with the public sector is with money, and they have very little regard for money.

  24. Steve McIntyre
    Posted Jul 24, 2008 at 5:37 PM | Permalink

    #23 and others. I have an external hard drive. I can back up just copying everything, which is what I do from time to time, but it takes a long, long time. I just thought that there might be some simple backup software that just backed up what had changed.

  25. conard
    Posted Jul 24, 2008 at 5:45 PM | Permalink

    Windows: VSS (Volume Shadow Copy Service)
    OS X: Time Machine

  26. ad
    Posted Jul 24, 2008 at 5:50 PM | Permalink

    help xcopy

  27. ad
    Posted Jul 24, 2008 at 5:55 PM | Permalink

    xcopy /s/i/d/y S: Z:

    where S: is the source drive where you do stuff (use a directory as well if you just want to copy a sub-dir) and Z: is the backup drive. The switches: /s copies subdirectories, /i assumes the destination is a directory, /d copies only files newer than the existing backup copy (the incremental part), and /y suppresses overwrite prompts. For a single directory:

    xcopy /s/i/d/y S:\stuff Z:\stuff

    Put it in a file backup.bat on the S: drive and schedule it to run periodically using the Windows task scheduler.
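    The /d switch is doing the real work here: copy only files newer than the backup copy. For anyone not on Windows, a minimal Python sketch of the same newer-than comparison (hypothetical and unpolished, not a recommendation of any particular tool):

```python
import os
import shutil

def incremental_backup(src, dst):
    """Copy only files that are new or modified since the last backup,
    judged by modification time (the same idea as xcopy /d)."""
    copied = []
    for root, _dirs, files in os.walk(src):
        target_dir = os.path.join(dst, os.path.relpath(root, src))
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # the +1 second allows for coarse filesystem timestamp granularity
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d) + 1:
                shutil.copy2(s, d)  # copy2 preserves the file's timestamps
                copied.append(d)
    return copied
```

    The first run copies everything; later runs touch only files changed since, which is exactly the behaviour Steve asked for in #24.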

  28. Scott-in-WA
    Posted Jul 24, 2008 at 6:05 PM | Permalink

    I can back up just copying everything, which is what I do from time to time, but it takes a long, long time.

    A few years back I discovered that some of my USB ports were not configured with the proper USB 2.0 driver and were very slow as a result.

    Another thing I’ve done is to reformat my external hard drives to the NTFS file system as opposed to using the FAT32 format the drives originally came with. That added some speed.

    My primary computer is always specified with three SATA drives for redundancy — one is the system drive and the other two are for primary data storage and for application scratch file space.

    My most important digital material is stored in triplicate: one copy on an internal drive, one copy on an external drive that is local to my desk, and one copy located offsite on an external drive stored in a safety deposit box.

    I used to write it all to DVDs, but the volume of material — 40 years worth of digitized Kodachrome slides — got way too large for that.

  29. Posted Jul 24, 2008 at 10:48 PM | Permalink

    The climate data was paid for by our tax dollars. It needs to be public (once precedence of work/publication etc is established). In an age where terabyte drives are consumer items and megabyte/sec connections are common, there is no excuse for any but the most gigantic scientific datasets to be secret.

    Likewise, the data formats need to be kept with the data. Source code is useful too, although it is a heck of a lot of work to go through it and figure out anything. But it still should be there, along with the binaries used to get the result.


    Someone above mentioned Carbonite. I also use it, and it works well and is very inexpensive. I currently have 25GB stored out on their servers.

    Carbonite is made for the non-computer-nerd, and runs on Windows. I’ve been doing software for over 40 years, and I recommend this sort of off-site solution. There are also other vendors.

    Of course, without knowing the volume of data, it’s hard to know whether it will work.

    BTW, my primary computer also copies its main drive to a backup every night, in addition to using Carbonite.

    I set it all up to be automatic because otherwise I’d forget.

  30. Dr Slop
    Posted Jul 25, 2008 at 1:55 AM | Permalink

    #24 Try Unison.

  31. RomanM
    Posted Jul 25, 2008 at 6:18 AM | Permalink

    By the way, a good example of the “lack-of-proper-statistics” methodology that I refer to above is the CSIRO document being discussed in another thread. The word “significant” (or “significantly”) occurs 12 times in the main document. None of these has a statistical meaning. The word “significance” does not appear at all. Neither word appears in the Supplementary Information. A variety of tables and graphs of complicated and unfamiliar variables (e.g. average percentage area having exceptionally hot years for selected 40-year periods, where the 40-year periods overlap substantially) are displayed without a single statistical evaluation of their import. In most of them, I can’t see any particular pattern, let alone the patterns consistent with the strong conclusions given in the document. The policy makers who read the document will be forced to rely completely on the scientifically unsupported results stated by the authors in the summary section.
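    For what it’s worth, the kind of “single statistical evaluation” being asked for needn’t be elaborate. Below is a sketch of a two-sample permutation test on made-up numbers, nothing here is from the CSIRO report, and note that overlapping 40-year windows are not independent samples, so even this simple test would need care before being applied to such series.

```python
import random
import statistics

# Invented stand-ins for a drought statistic in two periods.
random.seed(0)
early = [random.gauss(10.0, 3.0) for _ in range(40)]
late = [random.gauss(11.0, 3.0) for _ in range(40)]

def permutation_pvalue(a, b, n_perm=2000):
    """Two-sided permutation test for a difference in means: how often
    does a random relabelling produce a gap at least as large as observed?"""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

p_value = permutation_pvalue(early, late)
```

    A small p-value would justify the word “significant”; without some such calculation, the word carries no statistical weight.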

  32. Locri
    Posted Jul 25, 2008 at 7:16 AM | Permalink

    A really good backup solution for Windows is Comodo Backup. It lets you do incremental backups, which keeps directories that sit there unchanged for weeks from being backed up repeatedly. It’s user friendly and free.

    Steve: That sounds like what I want. I’ll try it.

  33. bender
    Posted Jul 25, 2008 at 8:41 AM | Permalink

    #31 uncertainty-free science. policy makers love it. it’s cheap to produce. win-win, right?

  34. James Lane
    Posted Jul 26, 2008 at 10:07 PM | Permalink

    Art Raiche, former Chief Research Scientist at the CSIRO, has had a serve at the organisation. It perhaps sheds some light on the CSIRO’s preoccupation with IPR:

    It is my strong belief that CSIRO has passed its use-by date. The organisation that bears the name of CSIRO has very little in common with the organisation that I joined in 1971, one that produced so much of value for Australia during its first seven decades. Of course, one cannot kill off a national icon directly, but it can be slowly reduced in size and function. It is anachronistic to have a single organisation operating across so many sectors with sector funding allocations left in the hands of CSIRO’s incompetent management. I suggest that many of its divisions should be excised, with the funding of these divisions used to relocate its more competent scientists into relevant universities or government departments. In particular, I suggest excising the divisions associated with minerals, IT, human nutrition and medicine, because these are largely irrelevant to, and lacking the competence of, their industry sectors.

    The argument that CSIRO should be kept whole because it can marshal resources across divisions was true once but is so no longer. Divisions have been transformed into business units that jealously guard their intellectual property from other divisions. Knowledge is traded on an essentially mercantile basis. This can be true even between groups within the same division. This is made all the more complex by the existence of the flagship structures. Because of this, it is often a lot easier to collaborate with groups outside CSIRO than within.

    One simple example concerns a senior scientist who worked in one of the IT divisions but wanted to transfer to a minerals division, CSIRO Exploration & Mining (CEM), because that was the focus of much of his work. A condition of the transfer was that he was not allowed to use any of the platforms developed by the IT division when working for CEM unless there was a large financial transfer back to the IT division. That sort of thing might be reasonable when transferring from one company to another, but CSIRO? This sort of thing was widely replicated across the organisation.

    Pre-1988, before the McKinsey “reforms” were instituted, CSIRO was quite effective because it was highly decentralised into semi-independent small divisions, for the most part led by competent chiefs of good scientific reputation, who knew their industry sectors. Management kept in touch by talking directly to staff rather than through formal layers of reports. Until then, it also had the advantage of a large open organisation where scientists were free to consult across divisional boundaries with no concept of internal costing or worries about protecting intellectual property internally. In other words, the situation described above would never have happened.

    He’s only getting warmed up. The whole thing here:


  35. Ian Castles
    Posted Jul 27, 2008 at 1:46 AM | Permalink

    I hope that non-Australian readers will forgive me for this necessarily lengthy posting about the CSIRO/Bureau of Meteorology (BoM) report which is the subject of this thread. I think that what follows is especially pertinent in the light of Steve’s comment in the main post that ‘This means more than IPCC authors taking in one another’s laundry. It means more than a bunch of IPCC scientists telling everyone else what to think – even if they’re right and perhaps especially if they’re right.’

    The key finding of the report, as presented in the opening words of Minister Burke’s media release of 6 July, was that ‘Australia could experience drought twice as often and the events will be twice as severe within 20 to 30 years.’ At his press conference on the same day, the Minister said that ‘What we’ve done is taken the best climate scientists in Australia and asked them to come up with their best information … They’ve worked through that scientifically … [T]he bottom line of all this is that this [report is] entirely written from beginning-to-end by scientists.’ On the following day Mr. Burke reiterated his total confidence in the scientific excellence of the report: ‘My priority on all of this is to make sure that we go with the best available science and I do believe that’s what the CSIRO and the Bureau of Meteorology gave us yesterday.’

    What the Minister didn’t mention was that the report hadn’t been reviewed by outsiders. It was authored by 11 scientists from two government agencies. The only review comments had come from scientists from the same agencies and from the Publications Unit staff at BoM. There doesn’t seem to have been any involvement of statisticians – even from the 100-strong CSIRO Division of Mathematical and Information Sciences.

    As it happens, a peer-reviewed study of the prospective incidence of droughts in Australia HAS just been published. The paper ‘Comparison of suitable drought indices for climate change impacts assessment over Australia towards resource management’ appears in the current (August 2008) edition of the International Journal of Climatology – the journal of the UK Royal Meteorological Society. It is authored by four CSIRO scientists, three of whom were also authors of the BoM/CSIRO report (Freddie Mpelasoka, Kevin Hennessy and Bryson Bates) and two of whom were Coordinating Lead Authors of the IPCC’s 2007 Assessment Report (Kevin Hennessy and Roger Jones).

    The International Journal of Climatology paper (hereafter Mpelasoka et al., 2008) presents two drought indices for Australia – the Rainfall Deciles-based Drought Index (RDDI) and the Soil-Moisture Deciles Based Drought Index (SMDDI) – which are stated to be measures of, respectively, rainfall deficiency and soil-moisture deficiency attributed to drought and potential evaporation. The results are reported for two climate models (CSIRO Mk 2 and the CCCmaI model developed by the Canadian Climate Centre), two periods (the 30-year periods centred on 2030 and 2070) and two emissions scenarios (the IPCC SRES scenarios B1 and A1FI).

    The most suitable basis for comparison with the BoM/CSIRO report finding that ‘Australia could experience drought twice as often … within 20 to 30 years’ is the Mpelasoka et al (2008) results for the 2030-centred period using the high (A1FI) emissions scenario assumptions. This basis is used in the discussion that follows: changes in drought frequency represent increases or decreases in the 2030-centred period by comparison with the 1975-2004 baseline period.

    Consider first the RDDI results. The ‘2030 high’ outcome for the CSIRO model is ‘A general increase of 0-20% over Australia, except for scattered patches with DECREASES [in drought frequency] of 0-20% over the southern parts of the Murray-Darling Basin, the south-western parts of the Western Plateau, and Tasmania’ (EMPHASIS added). And for the CCC model the ‘2030 high’ outcome is ‘A 0-20% increase over most areas of Australia except for a DECREASE [in drought frequency] of 0-20% over the southern parts of the Murray-Darling Basin, the South Australian gulf areas, and Tasmania.’ (EMPHASIS added).

    For the SMDDI, the RDDI is extended ‘to include evapotranspiration in order to produce a measure of soil-moisture deficit.’ The SMDDI for the ‘2030 high’ outcome for the CSIRO model shows ‘A 0-20% DECREASE over most of the western half of Australia, and increases of 0-20% in the eastern one-third and parts of coastal WA’ (EMPHASIS added). The SMDDI ‘2030 high’ outcome for the CCC model is ‘A general increase of 0-20% over most areas; 20-40% increase over south-western coast catchment. Patches of DECREASE of 0-20% over Timor Sea areas and northeast coast catchment’ (EMPHASIS added).

    In short, the simulations from the two models suggest that in the medium-term (20-30 years) there will be a tendency towards a small increase in drought frequency in most of Australia, but with decreases in drought frequency in some areas. Although the peer-reviewed study comes from several of the same scientists, it gives no support to the cries of alarm and disaster which have accompanied the release of the BoM/CSIRO report. (A CSIRO media release of 15 July provides a link to a podcast in which Kevin Hennessy ‘explains why farmers and the Government have reacted with alarm to a joint report from CSIRO and the Bureau of Meteorology’).

    The impression given by the peer-reviewed paper is vastly different from that given in the report to the Australian Government. This may be partly explained by the difference in the definition of ‘drought’; the authors of the report to government were obliged to adopt the definition specified in the terms of reference. This, however, should not have prevented them from giving an explanation of why the two studies yield such different results.

    The BoM/CSIRO report states that ‘it is likely that there will be changes in the nature and frequency of exceptionally hot years, low rainfall years and low soil moisture years’, and claims that ‘previous studies have not adequately done this analysis’ (p. 13). In that connection, it identifies Mpelasoka et al (2007) as one of six such analyses. But, surprisingly, the BoM/CSIRO report gives the wrong citation: the Mpelasoka et al paper was published in Int.J.Climatol. in 2008, not 2007; it appears on pp. 1283-1292 of vol. 28 of the journal, not pp. 1673-1690 of vol. 27; and its DOI reference number is ‘joc.1649′, not ‘joc.1508′.

    The volume, page details, year of publication and DOI reference number given in the BoM/CSIRO report for Mpelasoka et al. relate not to a paper by Mpelasoka et al but to a different paper by four Australian scientists which is entitled ‘Effect of GCM bias on downscaled precipitation and runoff projections for the Serpentine catchment, Western Australia.’

    It’s surprising that this error was not picked up by any of the 11 authors or 3 reviewers of the paper – not to mention the officials and staffs of the departments and ministerial offices serving the Prime Minister and several other Ministers with interests in the science and policy of climate change and the agency for which we’re now told the report was prepared: the Bureau of Rural Sciences. This raises a question about how closely the report was scrutinised before publication.

    Finally, it should be noted that a Mpelasoka et al (2007) paper on the future frequency of Australian droughts was cited in the ‘Australia and New Zealand’ chapter of the 2007 IPCC report. Here’s the statement and the full recommended citation:

    ‘Up to 20% more droughts (defined as the 1-in-10 year soil moisture deficit from 1974 to 2003) are simulated over most of Australia by 2030 and up to 80% more droughts by 2070 in south-western Australia (Mpelasoka et al., 2007)’ (Hennessy, K., B. Fitzharris, B.C. Bates, N. Harvey, S.M. Howden, L. Hughes, J. Salinger and R. Warrick, 2007: Australia and New Zealand. Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, M.L. Parry, O.F. Canziani, J.P. Palutikof, P.J. van der Linden and C.E. Hanson, Eds., Cambridge University Press, Cambridge, UK, p. 515).

    There’s a big difference between ‘up to 20% more droughts … over most of Australia by 2030′ and ‘Australia could experience droughts twice as often [i.e., 100% more frequently] within 20 to 30 years.’ The citation to the ‘Mpelasoka et al., 2007′ paper which supported the former statement was given in the list of References as follows:

    ‘Mpelasoka, F., K.J. Hennessy, R. Jones and J. Bathols, 2007. Comparison of suitable drought indices for climate change impacts assessment over Australia towards resource management. Int.J.Climatol., accepted.’ (‘Climate Change 2007: Impacts, Adaptation and Vulnerability’, p. 537).

    These bibliographical details were also wrong. The fourth-named author of the paper as published in Int.J.Climatol was B.C. Bates (one of the Lead Authors of the IPCC Chapter), not J. Bathols. The year of publication was 2008, not 2007. And the paper could not have been ‘accepted’ by the journal at the time that the IPCC report was finalised: according to the details on the cover page, Mpelasoka et al. (2008) was revised on 18 September 2007 and accepted on 2 October 2007 – six months after the WGII Contribution to the IPCC Report had been approved in Brussels by 130 governments.

  36. Geoff Sherrington
    Posted Jul 28, 2008 at 6:37 AM | Permalink

    Here is an illustration that is not easy to follow:

    Look in the upper dark brown area on the east coast. This is supposed to have a lower rainfall, at least 50 mm per decade. It includes the small town of Tully, whose rainfall for the decade is some 42000 mm. Do you suppose that 50 mm in 42000 mm is easily derived from the detail of a GCM?

    At the other extreme, look near the NE border of South Australia, where sits the town of Moomba on the way to SW Queensland, rainfall 2,000 mm per decade. It is supposed to suffer a loss of about 5 mm per decade. As if that would make much difference.

    Unless I am misinterpreting the map, which is conveniently labelled as annual rainfall, with the contours expressed as rainfall change per decade, I wonder what it all means?

  37. Geoff Sherrington
    Posted Jul 28, 2008 at 6:52 AM | Permalink

    I find the image reproduction routine difficult. Here is the URL derived from an image in the photo hosting group named Photobucket.

    The image is 4b of the Drought Exceptional Circumstances Report by K Hennessy et al July 2008.

    That spoiled my evening.

    • MrPete
      Posted Sep 3, 2008 at 4:52 AM | Permalink

      Re: Geoff Sherrington (#37),
      Here are a few hints. (For others passing by, here’s a link to the report.)
      * Open the report, get to the right spot
      * Make the image of interest as big as you want on your screen. The resolution is limited by your screen. For use on a website such as this one, you want it about 600 pixels across if possible. That’s about 2/3 of a 1024×768 screen.
      * Use the Acrobat image selection/copy tool to copy to the clipboard. Or, just press PrtSc.
      * Open up MSpaint (start->run… and type ‘mspaint’) or other editing software
      * Paste what you clipped, save as a jpg file, and upload to Picasaweb, photobucket or other online archive
      * When referencing an image in CA, include a width parameter in the IMG tag, e.g. width=”500″. This will shrink overly-large images, or expand small ones. (An expanded image will be fuzzy, no getting around that.)

      I’ve added a higher resolution/larger copy to Geoff’s comment to make it easier for my eyes to make out the graphic.

  38. tty
    Posted Jul 28, 2008 at 3:02 PM | Permalink

    Re 36

    I should think that they really mean change in annual precipitation per decade. Still, even 500 mm out of 42,000 or 50 out of 2,000 doesn’t sound calamitous.

  39. Aynsley Kellow
    Posted Jul 29, 2008 at 3:22 PM | Permalink

    Should this be 4200mm? 42m would seem to be even more rain than Tully receives! 4.2m seems more in the ballpark.

  40. bernie
    Posted Jul 29, 2008 at 4:03 PM | Permalink

    The wonders of the internet indicate that 4200 mm is correct, viz., Tully
    No wonder the washing never dries in Tully and even the cars have snorkels!!

  41. tty
    Posted Jul 29, 2008 at 4:47 PM | Permalink

    The amount was per decade. 42,000 millimetres is correct, i.e. 4200 x 10 = 42,000.
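
    The arithmetic in this sub-thread can be sanity-checked in a few lines. A minimal sketch, using the approximate figures quoted above (Tully at about 4200 mm/year, Moomba at about 200 mm/year, and the mapped declines read as change in annual rainfall per decade, per tty’s interpretation):

    ```python
    # Sanity check on the decadal rainfall figures discussed in this thread.
    # All values are approximations taken from the comments, not from the report.
    tully_annual_mm = 4200    # Tully's approximate mean annual rainfall
    moomba_annual_mm = 200    # Moomba's approximate mean annual rainfall

    tully_decade_mm = tully_annual_mm * 10    # 42,000 mm per decade
    moomba_decade_mm = moomba_annual_mm * 10  # 2,000 mm per decade

    # Mapped declines, interpreted as mm/year of change accumulating over a decade
    tully_decline_mm = 50 * 10   # 500 mm over the decade
    moomba_decline_mm = 5 * 10   # 50 mm over the decade

    tully_pct = 100 * tully_decline_mm / tully_decade_mm
    moomba_pct = 100 * moomba_decline_mm / moomba_decade_mm
    print(f"Tully:  {tully_pct:.1f}% of the decadal total")   # about 1.2%
    print(f"Moomba: {moomba_pct:.1f}% of the decadal total")  # 2.5%
    ```

    On those numbers the projected declines come to roughly 1.2% and 2.5% of the decadal totals respectively – consistent with tty’s point that neither sounds calamitous.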

  42. Aynsley Kellow
    Posted Jul 29, 2008 at 10:27 PM | Permalink

    Sorry Geoff – my misreading.


One Trackback

  1. […] What the Minister Tony Burke didn’t mention was that the report hadn’t been reviewed by outsiders…. […]
