There is a need to reconsider how these datasets are generated.
Some discussions have already started with other MIP contributors; the dialogue is included below. Tagging @taylor13 and @gleckler1 for reference:
Subject: RE: PRIMAVERA / HighResMIP boundary conditions
Date: Fri, 24 Apr 2015 08:33:24 +0000
From: Roberts, Malcolm
To: Karl Taylor
CC: Gleckler, Peter, Rein Haarsma
Dear Karl, Peter (cc Rein),
Thanks for your email. I could not agree more that we do not want to duplicate our efforts, especially since
we (HighResMIP/PRIMAVERA) have a very tight deadline to have the protocol set up, ideally for testing later
this summer, and for real at the latest by the end of the year.
Yes, certainly we have used the PCMDI boundary condition extensively in the past and know that it is the
standard for CMIP AMIP-style runs. However, I think we have several extra demands for the forcing dataset
for HighResMIP:
1. We want high resolution as standard (that is, we want to provide the high resolution data to groups, and
allow groups to interpolate to lower resolutions as necessary). From personal experience, interpolating
the 1 degree dataset to a higher resolution (which of course cannot add real detail) needs to be done
carefully so as not to introduce anomalous features, particularly in the gradients of the field.
2. Additionally, SST gradients such as those around boundary currents become very important as model
resolution increases and the model becomes able to "feel" such gradients.
3. As you say, we want to produce a smooth dataset into the future to 2050. This is a new and interesting
challenge that we're only beginning to get our heads around, but we have some ideas for how to do it.
4. We would like daily data, since we expect that, as resolution increases, the air-sea interaction at shorter
timescales may well become more important. Of course we can only have "simulated" daily variability in
the dataset.
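As a minimal 1-D sketch of the interpolation concern in point 1 (this is not the HadISST scheme; the step profile and the `catmull_rom` helper are purely illustrative), a generic cubic interpolant over- and undershoots an idealised SST front, producing spurious values and gradients outside the data range, while linear interpolation stays bounded but flattens the front:

```python
import numpy as np

def catmull_rom(y, factor):
    """Upsample a 1-D series with a Catmull-Rom cubic spline (ends clamped)."""
    y = np.asarray(y, dtype=float)
    pad = np.concatenate([y[:1], y, y[-1:]])
    t = np.linspace(0.0, 1.0, factor, endpoint=False)
    out = []
    for i in range(1, len(pad) - 2):
        p0, p1, p2, p3 = pad[i - 1], pad[i], pad[i + 1], pad[i + 2]
        m1, m2 = 0.5 * (p2 - p0), 0.5 * (p3 - p1)  # Catmull-Rom tangents
        h = ((2*t**3 - 3*t**2 + 1) * p1 + (t**3 - 2*t**2 + t) * m1
             + (-2*t**3 + 3*t**2) * p2 + (t**3 - t**2) * m2)
        out.append(h)
    out.append(y[-1:])
    return np.concatenate(out)

coarse = np.array([0., 0., 0., 1., 1., 1.])   # idealised SST front (step)
fine_linear = np.interp(np.linspace(0, 5, 51), np.arange(6), coarse)
fine_cubic = catmull_rom(coarse, 10)

print(fine_linear.min(), fine_linear.max())   # linear stays within [0, 1]
print(fine_cubic.min(), fine_cubic.max())     # cubic under/overshoots the front
```

The undershoot/overshoot near the front is one concrete reason why regridding to higher resolution "needs to be done in a careful way" around strong gradients.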
So given these requirements, I talked with Nick Rayner and John Kennedy here at the Met Office.
Specifically this is what John said about the process for HadISST:
"The conversion from 1 degree to 0.25 degree and from monthly to daily happens when we still have
anomalies. The interpolated anomalies are then added to the high resolution climatology.
We didn’t use linear interpolation of the monthly data because it introduces a monthly cycle in the
variance of the anomalies. The mid month points have higher variance than say the first and last days of
the month. It’s easy to see why because the first day of the month will be an average of the two mid-month
points either side of it, so it will have a lower variance. We used a cubic interpolation instead which can be
tuned to give a more consistent variance, though it still isn’t perfect."
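John's point about the monthly cycle in variance can be checked with a toy calculation (synthetic independent anomalies, not HadISST data): under linear interpolation, a day on a month boundary is the mean of the two neighbouring mid-month values, so its variance is roughly half that of a mid-month day.

```python
import numpy as np

rng = np.random.default_rng(0)
monthly = rng.standard_normal(200_000)        # iid monthly anomalies, variance ~1

# Under piecewise-linear interpolation in time, a month-boundary day is the
# average of the two adjacent mid-month anchor values.
boundary = 0.5 * (monthly[:-1] + monthly[1:])

var_mid = monthly.var()
var_boundary = boundary.var()
print(var_boundary / var_mid)  # about 0.5: boundary days have half the variance
```

This halving repeats every month, which is exactly the artificial monthly cycle in variance that motivated the tuned cubic interpolation instead.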
From this I understand that I can attempt to use the anomaly field to construct the full 1950-2050 dataset
(by stitching an earlier period onto the end of the present day period, matching up phases of global modes
as far as possible), and then we can use the HadISST machinery to construct the daily, 1/4 degree dataset
for the whole period.
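The reconstruction step described above can be sketched as follows. Everything here is a hypothetical stand-in: the period lengths, the stitched segment, and the toy climatology are placeholders, and in particular the phase-matching of global modes is left out entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_target = 780, 1212            # e.g. monthly anomalies 1950-2014 vs 1950-2050
anom = rng.standard_normal(n_obs)      # stand-in for observed SST anomalies

# Stitch an earlier segment onto the end of the record to cover the future
# period; a real choice would match phases of global modes as far as possible.
extra = n_target - n_obs
stitched = np.concatenate([anom, anom[:extra]])

# Add the anomalies back onto a (toy) high-resolution climatology to recover
# full fields, as in the HadISST processing chain.
months = np.arange(n_target)
climatology = 15.0 + 10.0 * np.cos(2 * np.pi * months / 12)
full_field = climatology + stitched
print(full_field.shape)
```

The key design point is that the stitching and interpolation all happen in anomaly space, with the climatology added back only at the end.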
We have a couple of ideas for the sea-ice forcing too for the future period, but that is a little less developed.
I think the good part of this is that we will be able to have a comparable overlap period. The idea is that the
HighResMIP low resolution simulation will use basically the same model as used in DECK simulations
(though with the different HighResMIP forcing datasets, configuration, etc), so that we will have a common
period (1978-2008 I guess) to compare our simulations to DECK, potentially learning about the impact of
our protocol too.
I hope that this makes sense, and I hope we can keep in touch as we develop our datasets.
Thanks,
Malcolm & Rein
Malcolm Roberts Manager, High resolution global climate modelling
Met Office Hadley Centre
FitzRoy Rd, Exeter, Devon EX1 3PB, UK
Tel: +44 1392 884537 Fax: +44 1392 885681
email: malcolm.roberts http://www.metoffice.gov.uk/research/people/malcolm-roberts
Further ideas/comments can be captured in this issue.