I ran the benchmark test case (2016, 2-day) of CMAQ-5.3.1 and used the m3diff tool to calculate the differences against the reference output. There are very large differences for the variables NUMACC, NUMATKN, and NUMCOR. I have attached the run script and compile script here. What should I do to fix this?
bldit_cctm.csh (28.6 KB)
run_cctm_Bench_2016_12SE1.csh (34.0 KB)
Those negative values worry me (all three variables).
And you’re running with
setenv CTM_CKSUM Y #> checksum report [ default: Y ]
which should trap all conc-variables to be at least
CMIN (which should be non-negative)… but you’re still getting negative number-concentrations ;-(
Another thing that worries me is the advective time-step constraint you’re using:
setenv CTM_ADV_CFL 0.95 #> max CFL [ default: 0.75]
which is much larger than I’m comfortable with: are these negative number-concentrations the result of numeric-algorithm errors in the advection coming from too large an advective time-step?
Thanks for your suggestion. I changed the CTM_ADV_CFL variable to 0.75 and reran the model, but the differences for the three variables are still just as large.
Best I can tell, what is shown in these plots are not negative number concentrations, but both positive and negative differences in number concentrations between a simulation performed by @lwu127 and the reference CMAQ benchmark output dataset distributed by CMAS. Please correct me if this is not the case.
Absolute values for NUMATKN, NUMACC, and NUMCOR for the benchmark case are on the order of 1E+06 to 1E+10, so the difference values shown above aren’t necessarily concerning. It would be more informative to compute relative rather than absolute differences to judge whether your simulations produce results that are reasonably close to the reference data sets.
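As a minimal sketch of what "relative rather than absolute differences" could look like, here is a small Python/numpy helper. The function name, the `eps` guard value, and the sample arrays are all hypothetical (they are not part of m3diff or CMAQ); in practice you would fill the arrays from the two CONC files, e.g. with the netCDF4 library.

```python
import numpy as np

def relative_difference(test, ref, eps=1.0):
    """Element-wise relative difference (test - ref) / ref.

    eps guards against division by near-zero reference values; a
    suitable threshold is a judgment call for number concentrations
    that span roughly 1E+06 to 1E+10.
    """
    denom = np.where(np.abs(ref) > eps, ref, eps)
    return (test - ref) / denom

# Hypothetical example: a 5% offset on values of order 1E+08 to 5E+09,
# which would look like a "large" absolute difference but a small
# relative one.
ref = np.array([1.0e8, 2.0e8, 5.0e9])
test = ref * 1.05
print(np.max(np.abs(relative_difference(test, ref))))
```

A relative difference of a few percent or less on variables of this magnitude would generally be much less alarming than the raw absolute differences suggest.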