I am running the 2-day benchmark for CMAQv5.3. Day 1 runs successfully, but Day 2 fails almost immediately with the indexing error “NCVGT: : NetCDF: Index exceeds dimension bound”.
The last statement in all the CTM log files is:
"E2C_CHEM_YEST" opened as OLD:READ-ONLY
File name "/share/fgarcia4/jdeast2/cmaq-5.3-intel2017-BENCH/CMAQ_REPO/data/2016_12SE1//land/epic_festc1.4_20180516/2016_US1_time20160701_bench.nc"
File type GRDDED3
Execution ID "????????????????"
Grid name "SE53BENCH"
Dimensions: 80 rows, 100 cols, 42 lays, 41 vbles
NetCDF ID: 2555904 opened as READONLY
Starting date and time 2016183:000000 (0:00:00 July 1, 2016)
Timestep 240000 (24:00:00 hh:mm:ss)
Maximum current record number 1
Checking header data for file: E2C_CHEM_YEST
Inconsistent values for NLAYS: 42 versus 35
This suggests that the E2C_CHEM_YEST file has mismatched dimensions, but the Day 1 file has the same dimensions and Day 1 runs without error. The inconsistent-NLAYS warning also does not cause the model to fail when it appears for other files.
Any help solving this error is appreciated! My run script is attached; the only changes from the run script packaged with the benchmark data are minor path edits.
I have encountered this NCVGT error many times. In each case, the dimensions of my input files did not match one another; in particular, I saw it when my initial-condition or boundary-condition file had a mismatched dimension, such as the time step. It may be worth checking the time step of this soil file and of your other input files (a quick way to do that is sketched below). In my experience, once the dimension mismatch is fixed, the error goes away.
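For example, you can compare the relevant header values with ncdump, assuming the standard netCDF utilities are on your PATH. The first two paths below are placeholders for your own IC/BC files; the third is the soil file from your log:

#> Print only the header and pull out the layer and time-step metadata
#> (NLAYS, the LAY dimension, and TSTEP should be consistent across files)
ncdump -h /path/to/your/ICON_file.nc | grep -iE 'LAY|TSTEP'
ncdump -h /path/to/your/BCON_file.nc | grep -iE 'LAY|TSTEP'
ncdump -h /share/fgarcia4/jdeast2/cmaq-5.3-intel2017-BENCH/CMAQ_REPO/data/2016_12SE1/land/epic_festc1.4_20180516/2016_US1_time20160701_bench.nc | grep -iE 'LAY|TSTEP'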
Thank you! In this case, the dimensions of the boundary-condition and initial-condition input files match (80 x 100 x 35). Apart from the inconsistent NLAYS, all input files appear to be 80 x 100, except the DOT met files and the point-source emissions.
Searching with that command shows no error messages in any of the log files.
Additionally, here is the output created by the system job scheduler; it contains the actual error and some other output that does not appear in the LOG files.
Do any changes need to be made to the benchmark run script provided with the CMAQ distribution in order to run the simulation? I had to make a few.
You should not have a problem running the benchmark version with the scripts provided.
However, it looks like you are using an older version of the I/O API library.
Please try downloading and using the latest version. Here is the information on the version that I am using:
ioapi-3.2: $Id: init3.F90 120 2019-06-21 14:18:20Z coats $
Version with PARMS3.EXT/PARAMETER::MXVARS3= 2048
netCDF version 4.7.0 of Sep 16 2019 15:07:57 $
I followed these instructions to download and install.
You are correct that we have the same Models-3 I/O API version number, but the dates are different.
In your out.cmq53.761790.txt file is the following output:
ioapi-3.2: $Id: init3.F90 98 2018-04-05 14:35:07Z coats $
netCDF version 4.6.1 of Nov 12 2018 11:27:53 $
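Once you rebuild against the newer library, you can confirm that the executable actually picked it up. One quick check, assuming the default CMAQ 5.3 build layout (the build-directory and executable names below may differ on your system):

#> The I/O API embeds its version as an RCS $Id$ string inside the binary
strings BLD_CCTM_v53_intel17/CCTM_v53.exe | grep 'init3.F90'

The same string is also printed near the top of every CTM log at startup, as in the excerpt above.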
I’m also having problems running CMAQ v5.3. However, I did not find any message that points to the error.
Do you think that updating the I/O API library would make it run successfully?
Please find my run script and log files attached.
If you find any possible mistake, please let me know. I would appreciate the support.
What version of I/O API-3.2 are you running?
I’m running with:
ioapi-3.2: Id: init3.F90 3 2017-04-15 20:19:16Z coats
netCDF version 4.4.1.1 of Aug 7 2017 18:04:11 $
It is conceivable that you can get rid of this by deleting all of the executable, object, module, and library files and then re-building from scratch (a sketch of such a clean rebuild follows).
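A minimal sketch of that clean rebuild, assuming the standard CMAQ 5.3 directory layout and build-script names (adjust these to your installation):

#> Remove every stale build artifact so nothing links against the old library
cd $CMAQ_HOME/CCTM/scripts
rm -rf BLD_CCTM_v53_*      #> old objects, .mod files, and executable
#> If you updated the I/O API, rebuild it (and its lib*.a files) first,
#> then rebuild the CCTM executable from scratch
./bldit_cctm.csh intel |& tee bldit_cctm.log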
The October 16 I/O API release increased the maximum-files parameter MXFILE3 from 64 to 256 at the request of EPA. This increase should have been transparent to any application that does not include/use the I/O API's internal/private data structures in STATE3.EXT; doing so is forbidden, but CMAQ’s “pario” uses them anyway. This may be the cause of the problem. (A quick way to check which limit your headers carry is shown below.)
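To see which limit your installed headers actually carry, a one-line check (the source-tree path is an assumption; point it at your own ioapi-3.2 checkout):

#> MXFILE3 is declared as a PARAMETER in the include file PARMS3.EXT
grep 'MXFILE3' /path/to/ioapi-3.2/ioapi/PARMS3.EXT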
I am not familiar with the error that is reported in your log_run_cctm_2010_RMGV.txt:
"forrtl: severe (27): too many records in I/O statement, unit -5, file Internal Formatted "
Can you change the run script to set CLOBBER_DATA to TRUE and try running again?
#> Keep or Delete Existing Output Files
set CLOBBER_DATA = TRUE
Re-compile the code with “-traceback”; that will at least indicate the source file and line at which the error occurs (a sketch of where the flag goes is below). I would not be surprised if the error happens due to a bad format during the construction of an error message for the “real” error; if so, fixing that format should let the real error message reach the log, and we can go from there (a two-stage process, sorry).
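A minimal sketch of where to add the flag, assuming the Intel section of config_cmaq.csh from the released CMAQ 5.3 scripts (the variable names below come from that file and may differ in your setup):

#> Append -traceback (plus -g for symbol names) to the Fortran flags,
#> then rebuild CCTM so the new flags take effect
setenv myFFLAGS  "$myFFLAGS -g -traceback"
setenv myFRFLAGS "$myFRFLAGS -g -traceback"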