Error running CCTM of CMAQ5.2.1

Hi all,

I am running the CCTM of CMAQv5.2.1 and have run into a problem. The CTM LOG files don't show any errors, but my qsub log file shows:
Value for IOAPI_CHECK_HEADERS not defined;returning default: FALSE
Value for IOAPI_OFFSET_64: NO returning FALSE
Value for IOAPI_CFMETA not defined;returning default: FALSE
Value for IOAPI_CMAQMETA not defined; returning defaultval ': 'NONE'
Value for IOAPI_CMAQMETA not defined; returning defaultval ': 'NONE'
Value for IOAPI_SMOKEMETA not defined; returning defaultval ': 'NONE'
Value for IOAPI_SMOKEMETA not defined; returning defaultval ': 'NONE'
Value for IOAPI_TEXTMETA not defined; returning defaultval ': 'NONE'
Value for IOAPI_TEXTMETA not defined; returning defaultval ': 'NONE'
Error putting netCDF file into data mode.
netCDF error number -62 processing file "CTM_CONC_1"
NetCDF: One or more variable sizes violate format constraints

 *** ERROR ABORT in subroutine OPCONC on PE 000          
 Could not open CTM_CONC_1
 Date and time 0:00:00   March 26, 2018 (2018085:000000)

0.008u 0.012s 0:01.89 0.5% 0+0k 0+144io 0pf+0w
Fri Sep 29 15:06:06 CST 2023

run_cctm_hj.csh (24.4 KB)
rsl.out.txt (68.5 KB)
CTM_LOG_001.v521_intel_2018FJ_20180326.txt (17.2 KB)

BTW, I used ifort (IFORT) 17.0.5 20170817 and netCDF 4.9.2 to compile CMAQ.
Any ideas about the errors? Thanks for your help!

Jason

See https://cjcoats.github.io/ioapi/ERRORS.html#ncf331

You probably have some scripting error that is causing one or more of the file-dimensions to be either zero or negative.

You might also be running up against the netCDF classic-format constraints (I think arrays need to be < 2 GB per time step) and need to enable 64-bit offset mode by adding 'setenv IOAPI_OFFSET_64 TRUE' to your run script. At least, that was my experience in the past when encountering the "NetCDF: One or more variable sizes violate format constraints" error message.
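If that turns out to be the cause, the change is a one-line addition to the run script. A minimal sketch, assuming the standard C-shell run scripts distributed with CMAQ (placed alongside the script's other environment settings):

```csh
# Enable netCDF 64-bit offset (large-file) output so that per-timestep
# variable sizes may exceed the ~2 GB classic-format limit
setenv IOAPI_OFFSET_64 TRUE
```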

Hi Jason,

If your issue has not been resolved, please let us know, and I can provide a couple more suggestions.

Cheers,
David

Thanks. I solved it by setting 'setenv IOAPI_OFFSET_64 TRUE' in my run script.

I see that now. Why in the world does the script turn it off? (See "Value for IOAPI_OFFSET_64: NO returning FALSE" in the original post.)

(The only reason I can think of is that the script expects someone to be running a 30-years-obsolete netCDF version before 2.7 with some of the software that deals with these files, or expects to be running on a 32-bit machine with an equally obsolete OS lacking large-file support.)

Please, can this be changed?

IOAPI_OFFSET_64 is set to YES in all of the CCTM run scripts we provide.

In the run script that was distributed with CMAQv5.2.1, IOAPI_OFFSET_64 was indeed, unfortunately, set to F. Why, I don't know, but it was certainly changed to T in the more recent CMAQv5.3 and CMAQv5.4 releases that we hope users are using.

Thank you for identifying this problem in the run script. We have now updated the v5.2.1 run script in the USEPA/CMAQ GitHub repository to correctly set IOAPI_OFFSET_64 to YES.

Thanks @foley.kristen for making this change.

Strictly speaking, the CMAQv5.2 run script did not turn off this setting. CMAQv5.2 was released in June 2017, and the I/O API default setting of IOAPI_OFFSET_64 was changed from F to T in December 2017.

But yes, rather than superfluously setting this environment variable to its (then) default setting, the CMAQv5.2 run script should have set it to T, because backward compatibility with ancient netCDF libraries or obsolete 32-bit machines should no longer have been a concern then, and even standard applications (let alone ISAM or DDM applications) were already routinely exceeding the 2 GB-per-timestep constraint. The benchmark case, of course, did not exceed the constraint. As noted in the I/O API documentation linked above, "(Assuming only (4-byte) REAL and INTEGER variables, a timestep in a GRIDDED file has size 4*NVARS*NCOLS*NROWS*NLAYS + 8*NVARS bytes.)"
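As a quick sanity check, that per-timestep size formula can be evaluated against the classic-format limit. A sketch with made-up domain dimensions (the values below are illustrative, not the benchmark case):

```shell
#!/bin/sh
# Hypothetical GRIDDED-file dimensions (illustrative values only)
NVARS=200
NCOLS=400
NROWS=300
NLAYS=35

# Per-timestep size: 4*NVARS*NCOLS*NROWS*NLAYS + 8*NVARS bytes
BYTES=$(( 4*NVARS*NCOLS*NROWS*NLAYS + 8*NVARS ))
LIMIT=2147483647    # 2^31 - 1: netCDF classic-format variable-size limit

echo "timestep size: $BYTES bytes"
if [ "$BYTES" -gt "$LIMIT" ]; then
    echo "exceeds classic-format limit; set IOAPI_OFFSET_64 to TRUE"
fi
```

For these hypothetical dimensions the timestep is 3,360,001,600 bytes, well past the limit, so 64-bit offset mode would be required.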
