Hello, I am getting the following error in the SMKMERGE program while running the point sector inline (daily ptagfire). The last few lines of the SMKMERGE log file showing the error are below. All of the other programs complete normally, the intermediate PDAY file appears to be generated correctly, and the stack groups file is written to the output directory, but the inlnts* files are missing:
Determining elevated/plume-in-grid sources...
NOTE: 1 sources will be elevated, and the rest are
low-level (layer 1)
NOTE: No PinG sources will be modeled
Value for OUTPUT: '/data/mmb/ams/work/users/mvenece/SMOKE48_nc7//data/run_wus_by18_fy18_point_ptagfire/output/merge'
Value for OUTPUT: '/data/mmb/ams/work/users/mvenece/SMOKE48_nc7//data/run_wus_by18_fy18_point_ptagfire/output/merge'
Value for ESCEN: 'wus_by18_fy18_point_ptagfire'
Value for ESCEN: 'wus_by18_fy18_point_ptagfire'
Value for SMKMERGE_CUSTOM_OUTPUT: Y returning TRUE
Value for MRG_FILEDESC not defined; returning defaultval ': ' '
Value for MRG_FILEDESC not defined; returning defaultval ': ' '
Number of variables per file array is not allocated for file set INLN;
using default of 2048 variables per file
>>--->> WARNING in subroutine OPENSET
Could not generate individual file names for file set "INLN"
Could not open file set "INLN".
*** ERROR ABORT in subroutine OPENMRGOUT
Could not open file set "INLN".
Any advice on resolving this issue is appreciated. Thanks!
Melissa
We think the problem is that the INLN environment variable isn't defined when Smkmerge is run, i.e., it is not being set in the scripts you are using. The EPA platform scripts set this variable.
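One quick way to confirm this (a generic csh check, not something from the platform scripts) is to test the variable right before Smkmerge is invoked in your run script:

# Hypothetical check: is the INLN file set defined before Smkmerge runs?
if ( $?INLN ) then
   echo "INLN is set to: $INLN"
else
   echo "INLN is NOT set; Smkmerge will not be able to open the INLN file set"
endif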
Hi Alison,
Do you have any suggestions on where I can set this in my SMOKE scripts? I see the following in the platform scripts during part 4 (Smkmerge) of smk_pt_daily_emf.csh:
# Run m3stat script on INLN, unless we're not running the inline approach
# C. Allen update 3/31/09: Added support for multiple INLN files per day, required for tagging runs
# (We had never done a tagging, inline run prior to now)
if ( $INLINE_MODE != off ) then
   if ( $?INLN && $RUN_M3STAT == Y ) then
      # [c] If $INLN exists, this implies there is only one Smkmerge file per day. In this case, run m3stat once and get out
      if ( -e $INLN ) then
         $m3stat $INLN inln
         if ( $status != 0 ) then
            echo "ERROR: running m3stat_chk for $ESDATE"
            $EMF_CLIENT -k $EMF_JOBKEY -m "ERROR: running m3stat_chk for $ESDATE" -t "e" -x $m3stat -p $EMF_PERIOD
            set exitstat = 1
            goto end_of_script
         endif
      # [c] Otherwise, check to see how many files there are, and run m3stat on each one
      else
         # [c] "inln_prefix" is the prefix of $INLN, everything before the file number
         set inln_prefix = $INTERMED/inln_${MM}_${SECTOR}_${ESDATE}_${GRID}_${SPC}_${CASE}
         set num_smkmerge_files = `/bin/ls -1 $inln_prefix.*.ncf | wc -l`
         echo "SCRIPT NOTE: Running m3stat on $num_smkmerge_files model-ready inline files"
         set fc = 0
         # [c] This allows for the case where the number of files is 0; in that case the while loop is skipped altogether
         # [c] The run_m3stat script has an optional 3rd command-line parameter that I originally added to handle the inline approach;
         # [c] this is a "file stamp" appended to the file names so that the files don't get overwritten each time
         while ( $fc < $num_smkmerge_files )
            @ fc = $fc + 1
            $m3stat $inln_prefix.$fc.ncf inln_$fc
            if ( $status != 0 ) then
               echo "ERROR: running m3stat_chk for $ESDATE"
               $EMF_CLIENT -k $EMF_JOBKEY -m "ERROR: running m3stat_chk for $ESDATE" -t "e" -x $m3stat -p $EMF_PERIOD
               set exitstat = 1
               goto end_of_script
            endif
         end # while
      endif # -e $INLN
   endif # $?INLN && $RUN_M3STAT == Y
endif # $INLINE_MODE != off
Should I add this block to my smk_run.csh for part 4?
Melissa
Check your Assigns file (which will likely be a lot different from the Assigns files in the EPA platforms).
Are there any setenv INLN statements in there? If not, here is ours:
setenv INLN $OUTPUT/$SECTOR/inln_${MM}_${SECTOR}_${ESDATE}_${GRID}_${SPC}_${CASE}.ncf
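If you're not sure what your Assigns file defines, a simple grep will show any matching setenv lines, including any older variant names that begin with INLN; $ASSIGNS_FILE below is just a placeholder for the path to your Assigns file:

# Hypothetical check: list any inline file-set definitions in the Assigns file
grep -n 'setenv INLN' $ASSIGNS_FILE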
Ahh, that was a much easier fix than I anticipated. Yes, I see: we were setting the older variable name “INLNTS_L” in the Assigns file. Once I changed it to “INLN”, it worked. Thank you very much! I appreciate the fast help.
Great news – glad that you got it to work…