Ptfire_grass in CRACMM run script

I am trying to run the "cracmm1_aq" mechanism benchmark for 2018 with the input files provided on the AWS S3 Explorer. There is an emission file named "ptfire_grass", but there is no specific place for it in the run scripts. Where should I use this emission file, and how should I address it in the CCTM run script?

Thanks

A draft run script for the CRACMM mechanism is available here:

After the run script has been reviewed and approved by EPA, it will be made available on the S3 bucket. Please provide feedback in this thread if you find any issues.

Thank you for providing the script.
I tried it, and now I have an error regarding my netCDF library. Please let me know if I need to start a new topic for this question.

EXECUTION_ID:
Error opening file at path-name:
netCDF error number -128 processing file “MET_CRO_3D”
NetCDF: Attempt to use feature that was not turned on when netCDF was built.

I followed this link for compiling the libraries: CMAQ/CMAQ_UG_tutorial_build_library_gcc.md at 5.4+ · lizadams/CMAQ · GitHub

Do I need to compile the libraries differently for the CRACMM dataset?

Thanks

Many of the files in the CRACMM mechanism are netCDF-4 compressed and have the extension .nc4.
You can either use nccopy to convert the files to the netCDF classic format and keep your existing libraries, or you can build new netCDF libraries with netCDF-4 compression support and then also rebuild I/O API against them. If you opt to build netCDF-4 compressed libraries, I also recommend using custom modules to manage your environment. I have a description of how to do that here: 3.1. Create a HB120rs_v2 Virtual Machine - azure-cmaq documentation, but you may also need to do a Google search.
This tutorial should help you build the netCDF-4 compressed libraries: EQUATES_BENCHMARK/CMAQ_UG_tutorial_build_library_gcc_support_nc4.md at main · lizadams/EQUATES_BENCHMARK · GitHub
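For orientation only, the main difference from the classic-only library tutorial is that HDF5 is built first and netCDF-C is then configured against it with netCDF-4 support enabled. A rough sketch of the netCDF-C configure step (paths and versions are placeholders; please follow the tutorial above for the exact sequence and flags):

# Sketch only: configure netCDF-C against an existing HDF5 installation
# with netCDF-4/HDF5 compression support enabled.
./configure --prefix=/path/to/netcdf-install \
            --enable-netcdf-4 \
            CPPFLAGS=-I/path/to/hdf5/include \
            LDFLAGS=-L/path/to/hdf5/lib
make -j 4
make check
make install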

Script that you can use to convert the nc4 files to nc files:

It will find all files below the directory where you run the script that have the .nc4 extension and convert them to .nc (or perhaps we should be using a .nc3 extension).
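For reference, here is a minimal sketch of such a conversion loop. This is an illustration rather than the attached script, and it assumes nccopy is on your PATH, that the filenames contain no spaces, and that the original .nc4 files should be left in place:

#!/bin/csh -f
# Sketch: convert every .nc4 file below the current directory to
# netCDF classic, writing a sibling file with a .nc extension.
foreach f ( `find . -name '*.nc4'` )
   echo "converting $f"
   nccopy -k classic $f $f:r.nc
end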

Just to add, as also discussed in another recent thread, I believe that the option to use nccopy to convert netCDF-4 compressed files back to netCDF files using the classic data model hinges upon whether your netCDF installation (which nccopy is a part of, just like ncdump and ncgen) was built with netCDF4 support or not.

Put differently, to use these files, at a minimum you will need to have a netCDF installation with netCDF4 support to be able to use nccopy. Once you have that installation, you could then also opt to use that installation to build the I/O API libraries used in building CCTM in which case CCTM should be able to read these netCDF4 files directly without having to convert them with nccopy first.

Full disclaimer: I do not have a netCDF installation without netCDF4 support, so I did not verify that nccopy from such an installation would fail in converting the netCDF4-compressed files to netCDF classic, but conceptually that’s what I would expect to happen. I’d be glad to be wrong here, and I very well might be wrong, but I just thought I’d spell out these considerations since these netCDF4-compressed files are becoming more common, including in our own work.
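One quick way to check, assuming the nc-config utility that ships with the netCDF C library is on your PATH:

nc-config --version
nc-config --has-nc4    # prints "yes" if the library was built with netCDF-4/HDF5 support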


Thank you for the answer.
It worked: I was able to convert the .nc4 met data to .nc files with the nccopy command, and my script moved on to another error.
The new error is the following:

At line 206 of file CGRID_SPCS.F (unit = 98, file = ‘/path/cracmm/CMAQ_REPO/CCTM/scripts/BLD_CCTM_v54_gcc/GC_cracmm1_aq.nml’)
Fortran runtime error: Cannot match namelist object name ‘’

Error termination. Backtrace:
MET_TSTEP | 10000 (default)

 MET data determined based on WRF ARW version 4.3.3

I should mention that I built my "bldit_cctm.csh" script with the cracmm1_aq mechanism, and I am using gcc/6.3.0 and openmpi_3.0.0/gcc_6.3.0.

Thank you.

The namelist files used in CRACMM have comments at the end of each line that are generated as part of the build process for this mechanism. The issue is compiler specific: we found that Intel did not have the problem, but gcc does.

A version of these namelist files without the comments is available here:

I checked the folder, and I could not see any difference between these files and the original ones for the cracmm1_aq mechanism in /CCTM/src/Mech/cracmm1_aq. They all have the "!" at the beginning of the species comments, and with them I still have the same error. Should I just change the compiler without changing the mechanism files?

Thanks

I believe that this error:
Fortran runtime error: Cannot match namelist object name ‘’

is due to the comments at the end of each line in the namelist files.

Here is an example of the comment at the end of the line in this file:

!RepCmp,ExplicitorLumped,DTXSID,SMILES

You will find that comment in the file provided in the git repo from the link above.
I have also placed the first two lines of that file here (you need to scroll to the end):

!SPECIES    ,MOLWT  ,Aitken ,Accum ,Coarse ,OPTICS ,IC ,IC_FAC ,BC ,BC_FAC ,DRYDEP SURR ,FAC ,WET-SCAV SURR,FAC ,AE2AQ SURR ,TRNS    ,DDEP    ,WDEP    ,CONC,!RepCmp,ExplicitorLumped,DTXSID,SMILES
'ASO4'      , 96.00 ,T      ,T     ,T      ,''     ,'' ,-1     ,'' ,-1     ,'VMASS'     , 1  ,'SO4'        , 1  ,'SO4'      ,'Yes'   ,'Yes'   ,'Yes'   ,'Yes',!Sulfate ion,L,DTXSID3042425,[O-]S(=O)(=O)[O-]

The files that I provided do not have anything after the final comma at the end of each line in the namelist files.

!SPECIES    ,MOLWT  ,Aitken ,Accum ,Coarse ,OPTICS ,IC ,IC_FAC ,BC ,BC_FAC ,DRYDEP SURR ,FAC ,WET-SCAV SURR,FAC ,AE2AQ SURR ,TRNS    ,DDEP    ,WDEP    ,CONC,
'ASO4'      , 96.00 ,T      ,T     ,T      ,''     ,'' ,-1     ,'' ,-1     ,'VMASS'     , 1  ,'SO4'        , 1  ,'SO4'      ,'Yes'   ,'Yes'   ,'Yes'   ,'Yes',

I am also copying my notes from when I experienced the same error.
Error termination. Backtrace:

At line 206 of file CGRID_SPCS.F (unit = 98, file = ‘/shared/build/openmpi_gcc/CMAQ_v54+/CCTM/scripts/BLD_CCTM_v54+_gcc_M3DRY_CRACMM/GC_cracmm1_aq.nml’)

Fortran runtime error: Cannot match namelist object name ‘’

To solve this, I removed the "!" comment in the last column of the namelist files.

The run completed successfully after editing the namelist files to remove the comments at the end of each line.
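If it is useful, here is one possible way to strip those trailing comments in bulk with sed. This is only a sketch: it assumes each comment begins at the first ",!" on the line and keeps the trailing comma, as in the examples above, and the AE/NR filenames are assumed to follow the same pattern as GC_cracmm1_aq.nml. Please diff the edited files against the originals before using them.

# Sketch: remove everything from ",!" to the end of each line in the
# CRACMM namelist files, keeping .bak copies of the originals.
sed -i.bak 's/,!.*$/,/' GC_cracmm1_aq.nml AE_cracmm1_aq.nml NR_cracmm1_aq.nml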

Perhaps try running in Debug mode to see which file it is still failing on, and verify that all of the trailing comments have been removed from the namelist files.
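For reference, and assuming the standard CMAQv5.4 build script layout, Debug mode is enabled by uncommenting the debug option in bldit_cctm.csh and recompiling; a rough sketch:

# Sketch: in bldit_cctm.csh, uncomment the line that sets Debug_CCTM
# (compile CCTM in debug mode), then rebuild and rerun.
cd $CMAQ_HOME/CCTM/scripts
./bldit_cctm.csh gcc |& tee bldit_cctm_debug.log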

Hi,
Thank you for explaining the change in the namelist files. It worked, and I solved the Fortran runtime error.
Now, when my script reaches the "OCEAN_07_L3m_MC_CHL_chlor_a_12US1.nc" file, it stops. I am running month 7.
At first, the error was "NetCDF: Attempt to use a feature that was not turned on when netCDF was built", so I converted the file with the "nccopy -k classic" command. Now I keep receiving this error:
netCDF error number -40

 >>--->> WARNING in subroutine RDTFLAG
 Error reading netCDF time step flag for INIT_CONC_1
 M3WARN:  DTBUF 0:00:00   July 1, 2018  (2018182:000000)

 >>--->> WARNING in subroutine XTRACT3
 Time step not available for file:  INIT_CONC_1
 M3WARN:  DTBUF 0:00:00   July 1, 2018  (2018182:000000)


 *** ERROR ABORT in subroutine retrieve_time_de on PE 014
 Could not extract INIT_CONC_1                              file

PM3EXIT: DTBUF 0:00:00 July 1, 2018
Date and time 0:00:00 July 1, 2018 (2018182:000000)

Hi Farzan,

I was able to use nccopy -k classic to successfully convert the file.
I’ve uploaded a version of it to the following location on the s3 bucket.

https://s3.console.aws.amazon.com/s3/object/cmas-cmaq-modeling-platform-2018?region=us-east-1&prefix=2018_12US1/surface/OCEAN_12_L3m_MC_CHL_chlor_a_12US1.ncf3

Note that the OCEAN file is defined in more than one place in the run script (CMAQ_MASKS and OCEAN_1), so you need to edit both to use the netCDF classic version of the OCEAN file.

setenv CTM_OCEAN_CHEM Y      #> Flag for ocean halogen chemistry and sea spray aerosol emissions [ default: Y ]
  setenv CMAQ_MASKS $SZpath/OCEAN_${MM}_L3m_MC_CHL_chlor_a_12US1.ncf3  #> horizontal grid-dependent ocean file
  setenv OCEAN_1 $SZpath/OCEAN_${MM}_L3m_MC_CHL_chlor_a_12US1.ncf3
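If you would rather convert a local copy yourself instead of downloading the pre-converted file, the same nccopy approach applies; the input filename below is an assumption based on the month-7 file you mentioned:

# Sketch: convert the netCDF-4 OCEAN file to classic format, using the
# .ncf3 extension referenced by the run script lines above.
nccopy -k classic OCEAN_07_L3m_MC_CHL_chlor_a_12US1.nc OCEAN_07_L3m_MC_CHL_chlor_a_12US1.ncf3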

The error abort message you are getting could be because the first day's run failed, but the script proceeded to try to start the second day, which then failed when trying to read the initial condition file that would have been generated by the first day had it succeeded.

Thanks for generating the file.
I made the same OCEAN file with nccopy and put it in both of the lines that you mentioned, but I keep receiving the same error regarding this file.
I was checking the log files in
cyclecloud-cmaq/run_cctm5.4+_Bench_2018_12US1_CRACMM.192.16x12pe.2day.cyclecloud.lustre_lim_craccm.log at main · CMASCenter/cyclecloud-cmaq · GitHub

That log is for Dec 2017. Have you used the 2018 OCEAN files in that test run? Can I just generate another OCEAN file for this run?

Yes, I am using the 2018 Ocean file for the 2017 spinup portion of the run. Do you see any error messages in your CTM_LOG files?

grep -i error CTM_LOG*

Liz

Yes. The error that now keeps repeating is:

Error reading netCDF time step flag for INIT_CONC_1
*** ERROR ABORT in subroutine retrieve_time_de on PE 014

 Performing Basic Error Checks for Emission Scaling Rules

CTM_LOG_015.v54_cracmm1_aq_WR705_2018gc2_2018_12US1_20180701: NetCDF error number -40
Error reading netCDF time step flag for INIT_CONC_1
*** ERROR ABORT in subroutine retrieve_time_de on PE 015

I believe both are due to the OCEAN files.

Perhaps it is an issue with the INIT_CONC_1 file, rather than the OCEAN file.

Please use the following command to identify what the INIT_CONC_1 environment variable is defined as:

grep -A 2 -i INIT_CONC_1 CTM_LOG_001*

Then use ncdump -h to see the timesteps available in the file.
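For an I/O API gridded file, the start date, start time, and time step are stored in the SDATE, STIME, and TSTEP global attributes, so something along these lines (the path is a placeholder) should show whether the file actually covers the time step CCTM is requesting:

# Sketch: inspect the start date/time and time step in the file header.
ncdump -h /path/to/INIT_CONC_1_file.nc | grep -E ':SDATE|:STIME|:TSTEP'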

See https://www.cmascenter.org/ioapi/documentation/all_versions/html/ERRORS.html#ncf331: netCDF Error Numbers list

Error -40 is basically “attempt to read past end-of-file for the requested variable”…

The result of "grep -A 2 -i INIT_CONC_1 CTM_LOG_001*" was the following:
“INIT_CONC_1” opened as OLD:READ-ONLY
File name “/path/icbc/CCTM_CGRID_v532_cb6r3_ae7_aq_mapped_to_cracmm1_aq_WR413_MYR_STAGE_2017_12US1_20171221.nc”
File type GRDDED3

 Error reading netCDF time step flag for INIT_CONC_1
 M3WARN:  DTBUF 0:00:00   July 1, 2018  (2018182:000000)


Time step not available for file: INIT_CONC_1
M3WARN: DTBUF 0:00:00 July 1, 2018 (2018182:000000)


Could not extract INIT_CONC_1 file
PM3EXIT: DTBUF 0:00:00 July 1, 2018
Date and time 0:00:00 July 1, 2018 (2018182:000000)

I am using exactly the initial condition file you mentioned in the run script: CCTM_CGRID_v532_cb6r3_ae7_aq_mapped_to_cracmm1_aq_WR413_MYR_STAGE_2017_12US1_20171221.nc

Is this the problem?

Yes, I have an updated run script and an updated S3 download script for 2 days starting Dec. 22, 2017.

run_cctm_2018_12US1_v54_cb6r5_ae6.20171222.2x96.ncclassic.csh (39.7 KB)

The S3 download script for those two days is available here. Note that you will need to change the path to where you want to download the files.

Hi again,
The run script and S3 download script that I shared in the message above were for the other mechanism.

The s3 download script for the cracmm mechanism is : cyclecloud-cmaq/s3_copy_nosign_2018_12US1_conus_cmas_craccmm_opendata_to_lustre_20171222.csh at main · CMASCenter/cyclecloud-cmaq · GitHub
The run script is
run_cctm_2018_12US1_v54+_Base_M3DRY_CRACMM.lustre.csh (42.7 KB)

These are under development, so please let me know if you find issues. You may also need the CRACMM namelist available here: AWS S3 Explorer

Hi Liz,

Thank you for your help.
I was able to run CRACMM for 2017-12-22 with the data that you provided. However, I am interested in July 2018, and I cannot get results for that period because the initial condition file does not work for it. How do you think I can run it for July?