Benchmark case for CMAQ 5.3.2 fails

CMAQ fails when it tries to create the CONC file while running the benchmark case. We are able to run CMAQ 5.3.2 for a different set of input files and domain, but it crashes with the benchmark run script. Any ideas? (The log file is attached.)

 >>--->> WARNING in subroutine OPEN3
 File not available.
 Could not open CTM_CONC_1 for update - try to open new

 Conc File Header Description:
    Concentration file output
    From CMAQ model dyn alloc version CTM
    Set of variables (possibly) reduced from CGRID
    For next scenario continuation runs,
    use the "one-step" CGRID file
    Layer  1 to  1
 
 Value for IOAPI_CHECK_HEADERS:  N returning FALSE
 Value for IOAPI_OFFSET_64:  YES returning TRUE
 Value for USR_DFLAT_LVL not defined; returning default:  9
 Value for COMPRESS_NC not defined;returning default:   TRUE
 Value for IOAPI_CFMETA not defined;returning default:   FALSE
 Value for IOAPI_CMAQMETA not defined; returning defaultval ':  'NONE'
 Value for IOAPI_CMAQMETA not defined; returning defaultval ':  'NONE'
 Value for IOAPI_SMOKEMETA not defined; returning defaultval ':  'NONE'
 Value for IOAPI_SMOKEMETA not defined; returning defaultval ':  'NONE'
 Value for IOAPI_TEXTMETA not defined; returning defaultval ':  'NONE'
 Value for IOAPI_TEXTMETA not defined; returning defaultval ':  'NONE'
 Error creating netCDF file
 netCDF error number  -36  processing file "CTM_CONC_1"
 NetCDF: Invalid argument
 NetCDF: Invalid argument
 /pln6/MCS/CMAQ5.3.2/CMAQ-master/data/output_CCTM/CCTM_CONC_v532_cb6_intel_Bench_2016_12SE1_20160701.nc


 *** ERROR ABORT in subroutine OPCONC on PE 000          
 Could not open CTM_CONC_1

PM3EXIT: DTBUF 0:00:00 July 1, 2016
Date and time 0:00:00 July 1, 2016 (2016183:000000)

CTM_LOG_000.v532_cb6_intel_Bench_2016_12SE1_20160701.txt (35.4 KB)

What version of netCDF and IOAPI are you using? Did you compile netCDF with large file support?
https://www.unidata.ucar.edu/software/netcdf/faq-lfs.html
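If it helps, here is a minimal C sketch for checking which netCDF library your executables are actually linked against (the file name and build command are my assumptions, not part of CMAQ):

/* nc_version.c: print the version string of the linked netCDF library.
   Build (adjust paths as needed): cc nc_version.c -o nc_version -lnetcdf */
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    /* nc_inq_libvers() returns the library's own version string */
    printf("netCDF library version: %s\n", nc_inq_libvers());
    return 0;
}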

Hi Chris,

Thanks for your reply.

These are the versions of IOAPI and netCDF that we are using:

ioapi-3.2: $Id: init3.F90 98 2018-04-05 14:35:07Z coats $                            
 Version with PARMS3.EXT/PARAMETER::MXVARS3= 2048                                     
 netCDF version 4.3.3.1 of Dec  4 2020 14:51:57 $   

Do you know how I can enable large file support? The link you sent seems to imply that no special compiler flag is needed for Large File Support:

Do I need to use special compiler flags to compile and link my applications that use netCDF with Large File Support?

No, except that 32-bit applications should link with a 32-bit version of the library and 64-bit applications should link with a 64-bit library, similarly to use of other libraries that can support either a 32-bit or 64-bit model of computation.

Since netCDF 3.3 (IIRC), large file support has been automatic. You don’t need special compile flags, etc. for that.
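If you want to double-check what format an existing file on disk actually uses, a minimal C sketch along these lines should work (the placeholder file name and build command are assumptions on my part); a 64-bit-offset or netCDF-4 result indicates a large-file-capable format:

/* check_format.c: report the on-disk format of a netCDF file.
   Build (adjust as needed): cc check_format.c -o check_format -lnetcdf */
#include <stdio.h>
#include <netcdf.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "CCTM_CONC.nc";  /* placeholder name */
    int ncid, fmt, status;

    status = nc_open(path, NC_NOWRITE, &ncid);
    if (status != NC_NOERR) {
        fprintf(stderr, "nc_open(%s): %s\n", path, nc_strerror(status));
        return 1;
    }

    nc_inq_format(ncid, &fmt);   /* which on-disk format was the file written in? */
    switch (fmt) {
    case NC_FORMAT_CLASSIC:         printf("classic (32-bit offsets)\n");      break;
    case NC_FORMAT_64BIT_OFFSET:    printf("64-bit offset (large files)\n");   break;
    case NC_FORMAT_NETCDF4:         printf("netCDF-4 (HDF5-based)\n");         break;
    case NC_FORMAT_NETCDF4_CLASSIC: printf("netCDF-4 classic model\n");        break;
    default:                        printf("other format code %d\n", fmt);     break;
    }

    nc_close(ncid);
    return 0;
}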

This may be off topic, but I don't recognize two of the environment variables listed in your log file (USR_DFLAT_LVL and COMPRESS_NC) alongside the other IOAPI environment variables like IOAPI_OFFSET_64 (indeed set to TRUE), IOAPI_CFMETA, etc.

I was not aware that IOAPI supports netCDF internal compression, which these two variables seem to refer to.

A look at netcdf.h suggests that netCDF error number -36 refers to the following:

#define NC_EINVAL (-36) /**< Invalid Argument */

which may point to an error when netCDF functions are called from within IOAPI, but that shouldn’t happen unless modifications were made to IOAPI.
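For what it's worth, the library can translate such status codes itself; here is a minimal sketch (the file name and build command are assumptions):

/* print_ncerr.c: translate a netCDF status code into its message.
   Build (adjust as needed): cc print_ncerr.c -o print_ncerr -lnetcdf */
#include <stdio.h>
#include <netcdf.h>

int main(void)
{
    int code = NC_EINVAL;   /* -36, the value reported in the log */
    printf("netCDF error %d: %s\n", code, nc_strerror(code));
    return 0;
}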

Again, I’m probably on the wrong track here, so apologies in advance if this is indeed the case.

You say that you are able to get CMAQv5.3.2 to run using a different domain and input files. How is that domain different?

Thank you all for your help.

As Christian noticed, it turns out that we are using a modified version of IOAPI that allows reading and writing netCDF-4 files. We were able to run CMAQ 5.3.2 with our 4 km-by-4 km Southern California domain and the saprc07tc_ae6_aq mechanism without any issues, so I don't exactly understand why it wasn't working with the benchmark case.

We set the following variables in the run script and that fixed the issue:

setenv IOAPI_OFFSET_64 NO
setenv HDF5_USE_FILE_LOCKING FALSE

Again, this is for a modified IOAPI based on version 3.2, so I hope this doesn't mislead other people into using these settings.

Thanks again for your time!

Marc