Hi, everyone:
I’m trying to run the twoway model. The test case works fine, but when I supply my own self-generated emissions data, I get the following error:
|> END EMISSIONS SCALING PREPARATION AND DIAGNOSTIC OUTPUT
—>> WARNING in subroutine OPEN3
File already exists.
*** ERROR ABORT in subroutine OPEN_EMISS_DIAG on PE 000
Could not create the GR_EMDGFILE_001 file
Date and time 1:00:00 May 15, 2021 (2021135:010000)
I have searched the forum and looked through the source code, but have not found an answer. I would appreciate any tips. Thanks.
This is a variant on a familiar question: it is a “scripting problem” arising from the fact that the source code opens the files with status argument

    FSNEW3 == 3:   read/write for new files
                   (file must not already exist)

instead of

    FSUNKN3 == 4:  read/write/update for files of unknown (new vs. old)
                   status. If the file does not exist, create it;
                   else check for consistency with the user-supplied
                   file description.
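Until that open-status is changed in the source, the practical workaround is to delete any leftover diagnostic files before launching, so that the FSNEW3 open finds nothing in its way. A minimal sketch; the OUTDIR path and the CCTM_GR_EMDG_* file-name pattern are illustrative placeholders, so match them to the physical file names your run script assigns to the GR_EMDGFILE_001, ... logical names:

```shell
#!/bin/sh
# Remove any stale gridded-emissions diagnostic files left over from a
# previous run, so that OPEN3 (status FSNEW3: "file must not already
# exist") can create them fresh.
# NOTE: OUTDIR and the CCTM_GR_EMDG_* pattern below are assumptions
# for illustration -- adjust both to your own run script.
OUTDIR=${OUTDIR:-./output}
for f in "$OUTDIR"/CCTM_GR_EMDG_*.nc; do
    if [ -e "$f" ]; then
        echo "removing stale $f"
        rm -f -- "$f"
    fi
done
```

The same effect can be had by writing each run’s diagnostics into a fresh, date-stamped output directory.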
Thank you for your reply. In order to continue the test, I set "setenv EMISDIAG N". Then I got a new error message:
double free or corruption (! prev)
Program received signal SIGABRT: Process abort signal.
Backtrace for this error:
#0  0x14bbcb24720f in ???
#1  0x14bbcb24718b in ???
#2  0x14bbcb226858 in ???
#3  0x14bbcb2913ed in ???
#4  0x14bbcb29947b in ???
#5  0x14bbcb29b12b in ???
#6  0x2da08de in ???
#7  0x2b4028d in ???
#8  0x2b13981 in ???
#9  0x2b0cdfd in ???
#10 0x138aeb5 in ???
#11 0x12271b1 in ???
#12 0x472073 in ???
#13 0x408a61 in ???
#14 0x408523 in ???
#15 0x14bbcb2280b2 in ???
#16 0x40856d in ???
#17 0xffffffffffffffff in ???
It looks like you need a traceback to find out where this problem is occurring. To do that, you need to compile and link the model with the -traceback (for Intel or PGI compilers) or -fbacktrace (for GNU compilers) flag, and then re-run.
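A sketch of such a rebuild, assuming a Makefile that honors an FFLAGS variable (your actual build system may use a different variable name, so this is illustration only):

```shell
# Rebuild the model with debugging symbols and tracebacks enabled, so
# that an abort reports source files and line numbers instead of raw
# hex addresses.  The FFLAGS variable name is an assumption; use
# whatever your Makefile or configure script actually reads.

# GNU compilers (gfortran):
make clean
make FFLAGS="-O2 -g -fbacktrace"

# Intel or PGI compilers, instead:
#   make FFLAGS="-O2 -g -traceback"
```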
Hi, dear cjcoats,
I tried running on a single core and got this message:
wrf.exe: malloc.c:4036: _int_malloc: Assertion `(unsigned long) (size) >= (unsigned long) (nb)’ failed.
So an array allocation is failing at this point in SOLVE_EM… some set of array dimensions at this point is suspect; it would be useful to print them out. Most probably there is some setup problem causing this; knowing what is messed up would help in diagnosing the situation.
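One way to get that context without instrumenting the source first is to run the single-core executable under a debugger, so the malloc assertion stops in place and a backtrace can be printed. A sketch, assuming gdb is available ("wrf.exe" is the executable named in the message above):

```shell
#!/bin/sh
# Run the executable under gdb in batch mode: "run" starts the program,
# and when the SIGABRT from the malloc assertion arrives, "bt" prints a
# backtrace at the point of failure.  Build with -g first so the frames
# show source files and line numbers.
gdb -batch -ex run -ex bt --args ./wrf.exe
```

From the frame inside SOLVE_EM, "frame N" followed by "print" on the suspect dimension variables shows the values being passed to the allocation.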
I did some tests and confirmed that the problem is with the emissions inventory data file. I am still working out exactly what is wrong with it; it is proving difficult. Thank you for your support.
If the file itself is 5 GB but no single time step is larger than 2 GB, then you should be perfectly fine – any post-2000 version of netCDF can deal with “huge” files; the limit applies only to the size of individual time steps (and handling steps that large requires compiling the CCTM with the -mcmodel=medium flag).
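A quick back-of-envelope check of the per-timestep record size for a gridded REAL*4 file is NVARS * NLAYS * NROWS * NCOLS * 4 bytes. A sketch; the dimension values below are made-up placeholders, so substitute the real ones from "ncdump -h" on your emissions file:

```shell
#!/bin/sh
# Estimate the size of one time step of a gridded file of REAL*4 data:
#   NVARS * NLAYS * NROWS * NCOLS * 4 bytes.
# The dimension values here are illustrative placeholders only.
NVARS=40 NLAYS=35 NROWS=300 NCOLS=300
STEP_BYTES=$(( NVARS * NLAYS * NROWS * NCOLS * 4 ))
LIMIT=$(( 2 * 1024 * 1024 * 1024 ))     # 2 GB single-time-step limit
echo "per-step size: $STEP_BYTES bytes"
if [ "$STEP_BYTES" -gt "$LIMIT" ]; then
    echo "time step exceeds 2 GB: compile the CCTM with -mcmodel=medium"
fi
```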
I myself have worked with I/O API files as large as 1.2 TB, for 33-year runs of a very high resolution hydrology model.