twoway:Could not create the GR_EMDGFILE_001 file

Hi, everyone:
I’m trying to run twoway mode. The test case works fine, but when I use my own self-generated emission data, I get the following error:

>>--->> WARNING in subroutine OPEN3
File already exists.

*** ERROR ABORT in subroutine OPEN_EMISS_DIAG on PE 000
Could not create the GR_EMDGFILE_001 file
Date and time 1:00:00 May 15, 2021 (2021135:010000)

I have searched the forum and looked through the source code, but have not found an answer. I hope to get some tips. Thanks.

This is a variant on a familiar question: it is a “scripting problem” that comes from the fact that the source code opens the file with status argument

    FSNEW3 == 3:   for read/write of new files (the file must not
                   already exist)

instead of

    FSUNKN3 == 4:  for read/write/update of unknown (new vs. old)
                   files.  If the file does not exist, create it;
                   else check for consistency with the user-supplied
                   file description.

As a result, the script must ensure that the file does not exist (generally, by deleting it) before running the program.
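A minimal run-script sketch of that cleanup step (the variable and file names below are illustrative assumptions, not the actual CMAQ names):

```shell
# Hypothetical cleanup fragment for a CCTM run script.  Because OPEN3 is
# called with FSNEW3, the diagnostic output file must not already exist,
# so delete any stale copy before launching the model.
OUTDIR="${OUTDIR:-.}"               # assumption: your output directory
EMDGFILE="$OUTDIR/GR_EMDG_001.nc"   # assumption: illustrative file name
rm -f "$EMDGFILE"                   # -f: no error if the file is absent
```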

Thank you for your reply. To continue testing, I changed the setting to setenv EMISDIAG N. Then I get a new error message:

double free or corruption (!prev)

Program received signal SIGABRT: Process abort signal.

Backtrace for this error:
#0 0x14bbcb24720f in ???
#1 0x14bbcb24718b in ???
#2 0x14bbcb226858 in ???
#3 0x14bbcb2913ed in ???
#4 0x14bbcb29947b in ???
#5 0x14bbcb29b12b in ???
#6 0x2da08de in ???
#7 0x2b4028d in ???
#8 0x2b13981 in ???
#9 0x2b0cdfd in ???
#10 0x138aeb5 in ???
#11 0x12271b1 in ???
#12 0x472073 in ???
#13 0x408a61 in ???
#14 0x408523 in ???
#15 0x14bbcb2280b2 in ???
#16 0x40856d in ???
#17 0xffffffffffffffff in ???

It looks like you need a traceback to find out where this problem is occurring. To do that, you need to compile and link the model with the -traceback (for Intel or PGI compilers) or -fbacktrace (for GNU compilers) flag, and then re-run.
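As a sketch, the flag could be chosen in the build configuration like this (the variable names here are assumptions, not the actual CMAQ/WRF build variables):

```shell
# Hypothetical build-config fragment: pick the traceback flag by compiler.
FC="gfortran"   # assumption: GNU gfortran; set to ifort or pgf90 as needed
case "$FC" in
  ifort|pgf90) DEBUG_FLAGS="-traceback" ;;      # Intel / PGI compilers
  gfortran)    DEBUG_FLAGS="-g -fbacktrace" ;;  # GNU compilers
  *)           DEBUG_FLAGS="" ;;
esac
FFLAGS="${FFLAGS:-} $DEBUG_FLAGS"   # append to the existing flags
```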

Thanks for your help. I just recompiled the model with the -traceback flag. There seem to be more error messages:
PX LSM will use the MODIS landuse tables
forrtl: error (78): process killed (SIGTERM)
Image              PC                Routine            Line     Source
wrf.exe            00000000036641A4  Unknown            Unknown  Unknown
libpthread-2.31.s  000014A4490B73C0  Unknown            Unknown  Unknown
wrf.exe            000000000375CCBC  Unknown            Unknown  Unknown
wrf.exe            0000000003316020  Unknown            Unknown  Unknown
wrf.exe            00000000030C1C5A  Unknown            Unknown  Unknown
wrf.exe            000000000309735F  Unknown            Unknown  Unknown
wrf.exe            0000000003092928  Unknown            Unknown  Unknown
wrf.exe            000000000146CEA8  solve_em_          5855     solve_em.f90
wrf.exe            00000000012B2510  solve_interface_   121      solve_interface.f90
wrf.exe            000000000055D8BB  module_integrate_  329      module_integrate.f90
wrf.exe            0000000000405E94  module_wrf_top_mp  324      module_wrf_top.f90
wrf.exe            0000000000405E4F  MAIN__             29       wrf.f90
wrf.exe            0000000000405DE2  Unknown            Unknown  Unknown
                   000014A4489A50B3  __libc_start_main  Unknown  Unknown
wrf.exe            0000000000405CEE  Unknown            Unknown  Unknown

I observed that the model always stops running after outputting part of CCTM_CONC:
-rw-r--r-- 1 root root 12908 Aug 13 23:15
-rw-r--r-- 1 root root 28146748 Aug 13 23:15
-rw-r--r-- 1 root root 58348 Aug 13 23:15
-rw-r--r-- 1 root root 21884 Aug 13 23:15
-rw-r--r-- 1 root root 22600 Aug 13 23:15
-rw-r--r-- 1 root root 3435 Aug 13 23:15 CCTM_v411532_20210515.cfg
-rw-r--r-- 1 root root 3435 Aug 13 23:16 CCTM_v411532_20210516.cfg
-rw-r--r-- 1 root root 50508 Aug 13 23:15

So routine SOLVE_EM is dying at line 5855 of solve_em.f90; it is being called by SOLVE_INTERFACE at line 121, etc.

Now, at least we know where to look; we’d need to look at the relevant part of the code to have some idea why this is happening…

Hi, dear cjcoats,
I tried to run on a single core and got this message:
wrf.exe: malloc.c:4036: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)' failed.

So there is an array-allocation failing at this point in SOLVE_EM… some set of array dimensions at this point is suspect; it would be useful to print them out. Most probably, there is some setup-problem causing this; knowing what is messed up might help in diagnosing the situation.

I did some tests and confirmed that the problem comes from the emission inventory data file. I’m still working out exactly what is wrong with it; it’s difficult. Thank you for your support.

dear cjcoats,
I just realized that my emissions file is 5 GB. Isn’t that too big for CMAQ?

If the file itself is 5 GB but no single time step is larger than 2 GB, then you should be perfectly fine: any post-2000 version of netCDF can deal with “huge” files. The problem is only with the size of single time steps (and handling those requires compiling the CCTM with the -mcmodel=medium flag).
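For reference, a hedged sketch of that flag addition (FFLAGS here is an assumed Makefile-style variable; the exact place to set it depends on your build setup):

```shell
# Hypothetical build-config fragment: enable the medium memory model so
# that single arrays (time steps) larger than 2 GB can be addressed.
FFLAGS="${FFLAGS:-} -mcmodel=medium"
```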
I myself have worked with I/O API files as large as 1.2 TB, for 33-year runs of a very high resolution hydrology model.

dear cjcoats,
I finally found the cause: it was inconsistent layering parameters.
Thank you very much for your help.