CMAQ 5.3 subroutine gridded_files_se

Hi
I am trying to run CMAQ 5.3, but I am facing an error that I cannot fix. The only hint the log gave me was:
*** ERROR ABORT in subroutine gridded_files_se on PE 000
Open failure for INIT_CONC_1
PM3EXIT: date&time specified as 0
Date&time specified as 0

Apparently the CCTM_CGRID file referenced by the variable INIT_CONC_1 is not being created. The simulation stops shortly after startup, on the first day to be simulated. I have attached the .log file and the script used. Can someone help me?

run_cctm_2016_12US1.csh (35.2 KB) cctm_log.txt (8.5 KB)

Thank you very much.

Hi,
I am facing the same error as you. After carefully reading your script, I suspect this error was caused by the commented-out lines between lines 312 and 363.
I noticed that you only have one emission file, which is an area emission file (the same as me), but I think it is wrong to simply comment out those lines. In CMAQ 5.2.1 there was a PT3DEMIS setting for this case, but I have no clue for CMAQ 5.3, as PT3DEMIS has been removed in the newest version.
Please give us some suggestions for the case where we only have one area emission ncf file. How should we revise the emission control part of the run.cctm script?
Looking forward to hearing from you!
@tlspero @lizadams @bbaek @hogrefe.christian @cgnolte @fliu

@Ben_Murphy
Please have a look at this issue.
Thanks !

That is right, I only have one emission file, which was processed by SMOKE v4.6. I used the EDGAR emissions inventory, processed each sector separately (aircraft, agriculture, transport, energy, industry, residential and shipping), and then ran smk_mrgall to merge all the sectors. I have attached the SMOKE mrggrid log.

mrggrid.edgar_ALL.txt (1.9 MB)

ICON and BCON were run in the default profile mode. The MCIP files were generated with version 5.0.
I commented out the cited lines related to the emissions part because my run case is not in the US region.

Thanks for your help, Ryan.

Hi,
I did the same as you but got the following error message.

>>--->> WARNING in subroutine XTRACT3
Error in layer-bounds specification for file GR_EMIS_001 variable AACD
M3WARN: DTBUF 0:00:00 Aug. 20, 2019

 *** ERROR ABORT in subroutine retrieve_time_de on PE 023
 Could not extract GR_EMIS_001      file

PM3EXIT: DTBUF 0:00:00 Aug. 20, 2019
Date and time 0:00:00 Aug. 20, 2019 (2019232:000000)

@ykaore,

Looking at the run script and log file you posted, I am not sure they match.

In the run script, you specify NEW_START as TRUE, ICpath as ${CMAQ_DATA}/icon, ICFILE as ICON_v53_Guildford_profile_20141231 (since NEW_START is TRUE), and INIT_CONC_1 as $ICpath/$ICFILE, i.e. ${CMAQ_DATA}/icon/ICON_v53_Guildford_profile_20141231

However, the error in the log file indicates CCTM is trying to open /scratch/ykaore/guildford/winter/aqm/cmaq.53/cctm/CCTM_CGRID_v53_gcc_Guildford_20141231.nc and cannot find that file.

The run script is set up such that CCTM should only be looking for a CGRID file as initial conditions if NEW_START is set to FALSE.

Could you please double check that the run script and log file are from the same simulation?

Additionally, as pointed out by @Ryan, you should not comment out the entire emissions section of the run script. This did not cause the crash you reported, because the crash occurred before any emissions were read, but it will cause a problem once you solve the initial conditions issue.

To configure a run with a single gridded emissions file, make sure that at least the following environment variables are set:

setenv N_EMIS_GR 1
set EMISfile = emissions.ncf
setenv GR_EMIS_001 ${EMISpath}/${EMISfile}
setenv GR_EMIS_LAB_001 GRIDDED_EMIS

setenv N_EMIS_PT 0 #> Number of elevated source groups
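
For reference, here is a fuller sketch of how that block might look in the run script. The directory, file name, and label below are placeholders rather than values from the posted script, and the GR_EM_SYM_DATE_001 line may not appear in every 5.3.x script version:

#> Gridded (area) emissions: a single stream
setenv N_EMIS_GR 1                            #> number of gridded emission streams
set EMISpath = /path/to/merged/emissions      #> placeholder directory
set EMISfile = emissions.ncf                  #> e.g. the mrggrid output
setenv GR_EMIS_001 ${EMISpath}/${EMISfile}    #> file for stream 001
setenv GR_EMIS_LAB_001 GRIDDED_EMIS           #> stream label used in logs and the emission control file
setenv GR_EM_SYM_DATE_001 F                   #> F: dates on the file must match the model dates

setenv N_EMIS_PT 0                            #> no elevated point source groups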

@Ryan

The error you are reporting in your last post may indicate inconsistencies between the vertical layer setup of your gridded emissions file and that of your MCIP files.

Hi hogrefe.christian,
May I ask whether we must keep the vertical layer setup of the gridded emissions file and the MCIP files the same in CMAQ 5.3?
For example, should I make the MCIP files with 14 layers if my gridded emissions file has 14 layers? Am I right?
(I ask because I could run CMAQ 5.2.1 with different vertical layer setups for the gridded emissions file and the MCIP files, so I am confused by this.)
@hogrefe.christian
Regards,
Ryan

Yasmin,
The run script is set up to do a series of 24-hour simulations, with the first simulation beginning on START_DATE and the last simulation beginning on END_DATE.
It should be possible to do a single simulation of 773 hours, as you are attempting to do here, though this is not the way the model is typically run at EPA.
Set your END_DATE to 2014-12-31, so that the script will execute CMAQ only once.
Then in your script, NEW_START will be TRUE, and ICFILE will be ICON_v53_Guildford_profile_20141231. The CGRID file is written only once, at the end of the execution of CMAQ. It is used to restart the model. Since you are doing a single simulation of 773 hours, you should not be reading the CGRID file at all.
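
To make the looping behavior concrete, here is a simplified sketch of the date loop in the standard run script (paraphrased, not the verbatim script):

set TODAYG = ${START_DATE}
set TODAYJ = `date -ud "${START_DATE}" +%Y%j`  #> start date as YYYYJJJ
set STOP_DAY = `date -ud "${END_DATE}" +%Y%j`  #> end date as YYYYJJJ

while ( $TODAYJ <= $STOP_DAY )  #> one CCTM execution per pass
   #> ... build daily file names, run CCTM for NSTEPS, check outputs ...
   set TODAYG = `date -ud "${TODAYG}+1days" +%Y-%m-%d`
   set TODAYJ = `date -ud "${TODAYG}" +%Y%j`
end

With END_DATE equal to START_DATE, the loop body executes exactly once, and NSTEPS (in HHMMSS format, so 773 hours would be 7730000) controls how long that single execution runs.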

Thank you @cgnolte and @hogrefe.christian for your replies.
I have changed the run script following your suggestions (attached again, along with the log). Now the program stops with a warning message related to GRID_BDY_2D (I guess). It seems the numbers differ because of rounding.

 "GRID_BDY_2D" opened as OLD:READ-ONLY
 File name "/scratch/ykaore/guildford/winter/aqm/cmaq.53/../mcip_d03/GRIDBDY2D_d03.nc"
 File type BNDARY3
 Execution ID "mcip"
 Grid name "Guildford_CROSS"
 Dimensions: 118 rows, 118 cols, 1 lays, 7 vbles, 1 cells thick
 NetCDF ID:    983040  opened as READONLY
 Time-independent data.
 Checking header data for file: GRID_BDY_2D
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: LUFRAC_CRO
     Inconsistent values for NLAYS: 21 versus 31
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: OCEAN_1
     Inconsistent values for GL_NCOLS: 119 versus 118
     Inconsistent values for GL_NROWS: 119 versus 118
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: MET_BDY_3D
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: MET_DOT_3D
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: MET_CRO_2D
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: MET_CRO_3D
     Inconsistent values for P_GAM: -3.2065E-01 versus -3.2100E-01
     Inconsistent values for XCENT: -3.2065E-01 versus -3.2100E-01
 Checking header data for file: INIT_CONC_1
 Checking header data for file: BNDY_CONC_1

 >>--->> WARNING in subroutine FLCHECK on PE 000
 Inconsistent header data on input files
 M3WARN:  DTBUF 1:00:00   Dec. 31, 2014 (2014365:010000)

run_cctm_2016_12US1.csh (35.2 KB)
log.txt (42.1 KB)

Can you run ncdump -h on your MET_CRO_2D file and post the result, as well as your GRIDDESC file, so we can take a closer look at the inconsistency being detected for your meteorological input files?
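
For example, assuming the MCIP naming suggested by the GRIDBDY2D_d03.nc path in your log (a guess; adjust to your actual path and file name):

ncdump -h /scratch/ykaore/guildford/winter/aqm/mcip_d03/METCRO2D_d03.nc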

For your OCEAN_1 file, it appears that the number of columns and rows does not match what is expected based on the GRIDDESC file and other gridded files. Depending on whether your domain includes ocean and whether or not you want to include halogen chemistry and sea salt aerosols in your simulation, setting CTM_OCEAN_CHEM to F might be an option for you. This would eliminate the need for the OCEAN_1 file if it is a problem to generate it for your modeling domain.
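
In the run script this would be a one-line setting, e.g.:

setenv CTM_OCEAN_CHEM F  #> skip halogen chemistry and sea spray aerosol emissions; OCEAN_1 is then not required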

The warning is just a warning… it might be important to address, but by itself it is not causing your model run to crash.
Check the log files for your other processes. Do any of them contain an ABORT message?

Thanks @hogrefe.christian
These are the printouts:


GRIDDESC.txt (186 Bytes)

Thanks for the tip about the Ocean file.

Thanks. As @cgnolte suggested, it would also be helpful if you could look at the last ~10 lines of each of your CTM_LOG_* processor log files to see what error message(s) might be reported there.

Hi @cgnolte,
All the issues reported here are now solved, and I could run the model successfully. I did what you suggested, and the issue was gone.
#> Set Start and End Days for looping
setenv NEW_START TRUE
set START_DATE = "2014-12-31"
set END_DATE = "2014-12-31"

#> Set Timestepping Parameters
set STTIME = 010000
set NSTEPS = 7740000
set TSTEP = 010000

However, I still have a doubt regarding how the EPA runs the model. May I ask how it is done?
In my case, I wanted to run CMAQ for the entire month of January 2015.

Typically, we prepare daily input files (meteorology and emissions), and set the run script like this:

#> Set Start and End Days for looping
setenv NEW_START TRUE
set START_DATE = "2014-12-31"
set END_DATE = "2015-01-31"

#> Set Timestepping Parameters
set STTIME = 000000
set NSTEPS = 240000
set TSTEP = 010000

so that the model executes 32 times, each time for 24 hours, with the first beginning at 2014-12-31_00:00:00 and the last beginning at 2015-01-31_00:00:00.

This is done for convenience and QA/QC purposes. Also, if the model crashes, it can be restarted from the end of the previous day’s run, rather than from the beginning of the simulation.


Hi @cgnolte, I have set up my CCTM script with reference to your answer above, and the output now is one file per day. However, the script errors out after several days of running, and the log file reports the following errors:
GR_EM_SYM_DATE_0 | F
INIT_CONC_1 :/data/WangHY/DATA/cctm/everyday/201908px/CCTM_CGRID_v532_intel_D01_20190719.nc
>>--->> WARNING in subroutine OPEN3
File not available.

*** ERROR ABORT in subroutine gridded_files_se on PE 021
Open failure for INIT_CONC_1
PM3EXIT: date&time specified as 0
Date&time specified as 0

And my nohup.out shows the following:
Abort(538976288) on node 20 (rank 20 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 20
Abort(538976288) on node 23 (rank 23 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 23
Abort(538976288) on node 29 (rank 29 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 29
Abort(538976288) on node 34 (rank 34 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 34
Abort(538976288) on node 38 (rank 38 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 38
Abort(538976288) on node 21 (rank 21 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 21
Abort(538976288) on node 22 (rank 22 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 22
Abort(538976288) on node 25 (rank 25 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 25
Abort(538976288) on node 26 (rank 26 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 26
Abort(538976288) on node 39 (rank 39 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 39
Abort(538976288) on node 30 (rank 30 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 30
Abort(538976288) on node 31 (rank 31 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 31
Abort(538976288) on node 33 (rank 33 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 33
Abort(538976288) on node 32 (rank 32 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 32
Abort(538976288) on node 27 (rank 27 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 27
Abort(538976288) on node 37 (rank 37 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 37
Abort(538976288) on node 24 (rank 24 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 24
Abort(538976288) on node 35 (rank 35 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 35
Abort(538976288) on node 36 (rank 36 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 36
Abort(538976288) on node 28 (rank 28 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 538976288) - process 28
tail: cannot open ‘buff_CMAQ_CCTMv532_WangHY_20231226_075741_596837740.txt’ for reading: No such file or directory


** Runscript Detected an Error: CGRID file was not written. **
** This indicates that CMAQ was interrupted or an issue **
** exists with writing output. The runscript will now **
** abort rather than proceeding to subsequent days. **


(standard_in) 2: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error
(standard_in) 1: syntax error

==================================
***** CMAQ TIMING REPORT *****

Start Day: 2019-07-17
End Day: 2019-08-31
Number of Simulation Days: 4
Domain Name: D01
Number of Grid Cells: 2041200 (ROW x COL x LAY)
Number of Layers: 35
Number of Processes: 40
All times are in seconds.

Num Day Wall Time
01 2019-07-17
02 2019-07-18
03 2019-07-19
04 2019-07-20
Total Time =
Avg. Time =

I don't know exactly what's wrong. Could you please help me check what is wrong with it? Thank you!

The model attempted to execute a run for 2019-07-20 but the initial condition file (CCTM_CGRID_v532_intel_D01_20190719.nc) was not found.
Check your logs from the run on 2019-07-19. Did that run complete? If so, there will be a message saying so, as well as an “elapsed time” message. More likely, that run crashed.
You might try something like:
grep -i abort /data/WangHY/DATA/cctm/everyday/201908px/CTM_LOG_*20190719*

If this does not solve your problem, please start a new thread.


Hi @cgnolte, thank you for helping me so quickly with my question.
At the end of CTM_LOG_001.v532_intel_D01_20190719, it shows:
Timestep written to CTM_CONC_1 for date and time 2019201:000000

 Timestep written to A_CONC_1         for date and time  2019200:230000
  =--> Data Output completed...    0.3 seconds

 S_CGRID         :/data/WangHY/DATA/cctm/everyday/201908px/CCTM_CGRID_v532_intel_D01_20190719.nc
 
 >>--->> WARNING in subroutine OPEN3
 File not available.
 
 Could not open S_CGRID file for update - try to open new

 Timestep written to S_CGRID          for date and time  2019201:000000


 ==============================================
 |>---   PROGRAM COMPLETED SUCCESSFULLY   ---<|
 ==============================================
 Date and time 0:00:00   July 20, 2019  (2019201:000000)
 The elapsed time for this simulation was    2390.7 seconds.

If you don’t see any errors in any of the log files, and the CGRID file is in the right place, you should be able to start a new job setting the start date as July 20.
Perhaps there is a system latency issue. I just noticed that our run scripts set the -v flag for all of our output files, indicating they are IOAPI volatile files, with the exception of S_CGRID. You might try changing that line of the script to read setenv S_CGRID "$OUTDIR/CCTM_CGRID_${CTM_APPL}.nc -v"
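
For context, a sketch of that change (assuming the stock script form); the -v suffix is the I/O API convention for marking a file volatile, so it is flushed to disk at each synchronization rather than only at file close:

#> before: S_CGRID is the one output not marked volatile
setenv S_CGRID "$OUTDIR/CCTM_CGRID_${CTM_APPL}.nc"
#> after: mark S_CGRID volatile like the other outputs
setenv S_CGRID "$OUTDIR/CCTM_CGRID_${CTM_APPL}.nc -v"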
