CCTM doesn't write CGRID file

I am trying to run CMAQv5.2.1 and was able to successfully run the benchmark case. But now in trying to run my own simulation, the CCTM is not writing all of the necessary output files before it moves on to the next day of the simulation; it is only writing the CONC, ACONC, WETDEP1, and WETDEP2 files. The fatal error comes when it tries to open the CGRID file from the previous day but it isn’t there. Is there a way that I can attach my run and log scripts to this post so that I don’t need to copy/paste in everything?
Thanks,
Elyse

The CGRID file represents the state of the model. It is written at the end of a (typically 1-day) model run, and it can be used as the ICON file for the subsequent run. It is an error for the CGRID file to already exist when the model tries to write it out, and the run script checks for this before model execution begins: “set DISP = delete” deletes any existing output files, while “set DISP = keep” aborts the run script before execution if output files already exist.
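A minimal sketch of that pre-flight disposition check in csh, mirroring the logic of the released run scripts (the variable names $OUTDIR and $CTM_APPL and the file name pattern here are illustrative, not a verbatim copy of any released script):

```csh
#> Sketch: clean up or abort if an output file already exists,
#> depending on the DISP setting (file name is illustrative)
set DISP = keep   # or: delete
set OUT_FILE = ${OUTDIR}/CCTM_CGRID_${CTM_APPL}.nc
if ( -e $OUT_FILE ) then
   if ( $DISP == delete ) then
      echo "Deleting existing output file $OUT_FILE"
      rm -f $OUT_FILE
   else
      echo "ERROR: $OUT_FILE already exists and DISP = keep; aborting."
      exit 1
   endif
endif
```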

Hi, I am experiencing the same issue now:

S_CGRID         :/pool/sao_atmos/ahsouri/MODELS/CMAQ_DDM/data/output/CCTM_CGRID_KORUS_2016121.nc

 >>--->> WARNING in subroutine OPEN3
 File not available.

 Could not open S_CGRID file for update - try to open new

    State CGRID File Header Description:
 => Computational grid instantaneous concentrations
 => - for scenario continuation.

    State CGRID File Variable List:
 => VNAME3D(  1 ): NO2
 => VNAME3D(  2 ): NO
 => VNAME3D(  3 ): O
 => VNAME3D(  4 ): O3



 Error putting netCDF file into data mode.

ncendef: ncid 1769472: NetCDF: One or more variable sizes violate format constraints
netCDF error number -62 processing file “S_CGRID”
NetCDF: One or more variable sizes violate format constraints

 *** ERROR ABORT in subroutine WR_CGRID on PE 000
 Could not open S_CGRID file
 Date and time 0:00:00   May 1, 2016    (2016122:000000)

application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 0

Thanks!

You don’t say what size your modeling domain is, so it is difficult to tell, but the problem may be that you are running up against the 2 GB per-variable limit of the classic netCDF format. Make sure you are using a version of netCDF built with large-file support, and that IOAPI is compiled with large-file support as well.
https://www.unidata.ucar.edu/software/netcdf/netcdf/Large-File-Support.html
https://www.cmascenter.org/ioapi/documentation/all_versions/html/NEWSTUFF.html#aug1010
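A quick way to check whether an existing output file was written in a large-file-capable format is the netCDF `ncdump -k` utility, which prints the format variant of a file (the file name below is taken from the log excerpt above):

```csh
#> Print the netCDF format variant: "classic" means the 2 GB
#> variable limit applies; "64-bit offset" or a netCDF-4
#> variant means the file format supports large files
ncdump -k CCTM_CGRID_KORUS_2016121.nc
```

If it reports “classic”, the 2 GB limit applies to that file.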


Hi all,

If this error appears after one day has completed and before the next day begins, try a workaround: add a sleep command in the run script just before the script checks whether the S_CGRID file has been written. I have run into this problem myself, and I am not sure how long the sleep needs to be, but it gives the I/O time to finish writing the CGRID file before the next day starts.

Here is what I have used:

  #> Add a few seconds before next run starts so that the CGRID file can be written out in time to start next day
  sleep 30

  #> Abort script if abnormal termination
  if ( ! -e $S_CGRID ) then

This was solved by turning on the “setenv IOAPI_OFFSET_64” option in the run script.
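For anyone else hitting this, the change amounts to one line in the run script before CCTM starts (YES is the usual value; check your IOAPI version’s documentation):

```csh
#> Enable 64-bit file offsets in IOAPI so the CGRID file can
#> exceed the 2 GB classic-netCDF variable-size limit
setenv IOAPI_OFFSET_64 YES
```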

A longer “sleep 120” was also helpful in cases where the issue stemmed from running the model on CPUs with different speeds.

Thanks!

Hi,

I’m having a similar issue. I wanted to use the CGRID file from the previous day (May 3, 2020) and run the simulation for May 4, 2020. My job runs OK and gives me the other output files (ACONC, AOD, APMDIAG, APMVIS, ASENS, etc.). However, it doesn’t write any CGRID file.

Can you please take a look at my scripts and log file and let me know what I did wrong?

CTM_LOGS.txt (3.8 MB)
run_cctm_rasel.csh_sbatch.txt (29.4 KB)
run_cmaq_rasel.sh.txt (797 Bytes)

Thanks
Rasel

First, it seems that you are largely missing emissions data. You are getting error messages that emissions could not be found for many (all?) species (including such key species as NO2 and SO2) for each time step.

Second, your log file contains no system-level errors in it because you have directed them to a separate file:

#SBATCH --output=jCMAQ-5.2-%N-%j.out
#SBATCH --error=jCMAQ-5.2-%N-%j.err

I believe you can either delete the line directing the error stream, in which case it will default to the same place as the output stream, or consult the error-stream output file (*.err) to see what the messages say.

If you continue to have difficulty, please start a new thread.