SMOKE v5 “output statement overflows record” when HFLUX ≳10¹⁴—any workaround without recompiling?

Hello everyone,

I’m running SMOKE v5.0 on a QFED→FF10 wildfire inventory. Some of my point-day HFLUX values are on the order of 10¹⁴. When SMOKE writes these rows, I get:

forrtl: severe (66): output statement overflows record, unit -5, file Internal Formatted Write

If I simply divide HFLUX by 10³ (so the longest formatted field stays ≤ 12 characters), SMOKE completes successfully. My plan is then to multiply HFLUX back by 10³ downstream (e.g., in post-processing) to recover the original BTU/hr values.
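
For concreteness, this is roughly how I apply the scaling before running SMOKE (a minimal Python sketch, not part of SMOKE itself; the FF10 PTDAY column indices below are assumptions for illustration, so check them against your file's actual header before use):

```python
#!/usr/bin/env python3
"""Scale HFLUX values in an FF10 point day-specific file so the formatted
fields stay within SMOKE's record width. Run with SCALE = 1e-3 before SMOKE,
then with SCALE = 1e3 on the outputs to recover the original BTU/hr."""
import csv
import sys

SCALE = 1e-3        # pre-SMOKE; use 1e3 downstream to undo the scaling
POLL_COL = 8        # assumed 0-based index of the pollutant-name column
FIRST_VAL_COL = 13  # assumed index of the first numeric value column

def scale_hflux(infile, outfile, scale=SCALE):
    with open(infile, newline="") as fin, open(outfile, "w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            # Pass FF10 comment/header lines (starting with '#') through untouched.
            if not row or row[0].startswith("#"):
                writer.writerow(row)
                continue
            if len(row) > POLL_COL and row[POLL_COL].strip().upper() == "HFLUX":
                for i in range(FIRST_VAL_COL, len(row)):
                    if row[i].strip():
                        row[i] = f"{float(row[i]) * scale:.6g}"
            writer.writerow(row)

if __name__ == "__main__":
    scale_hflux(sys.argv[1], sys.argv[2])
```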

What I’d like to know:

  1. Is there a better approach than this scale-down/scale-back-up workaround?
  2. Has anyone seen this overflow error in smkmerge (wrmrgrep)?
  3. Would switching to scientific notation (and accepting a small loss of precision) be a reasonable workaround? (See the sketch after this list.)
  4. Is there any runtime or environment setting—ulimit, Fortran flag, etc.—to enlarge the internal write buffer or otherwise shorten the formatted output, without rebuilding SMOKE?
  5. If recompilation is unavoidable, what buffer size (character length) would you recommend in wrmrgrep.f to safely handle values up to ~10¹⁶ BTU/hr? (The sketch after this list shows the width arithmetic.)
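
To make questions 3 and 5 concrete, here is the width arithmetic I'm working from (a quick Python illustration, not SMOKE code):

```python
# Field-width check for HFLUX values up to ~10¹⁶ BTU/hr.
hflux = 1.23456789e16

fixed = f"{hflux:.1f}"   # fixed-point, one decimal place
sci = f"{hflux:.5E}"     # scientific notation, five decimal digits

print(len(fixed), fixed)  # 19 chars: 12345678900000000.0
print(len(sci), sci)      # 11 chars: 1.23457E+16
# Relative error from round-tripping the scientific form: ~2e-6
print(abs(float(sci) - hflux) / hflux)
```

So a fixed-point field carrying 10¹⁶ with a decimal part needs on the order of 20 characters per value, which is presumably why widening the character buffer (or switching to E format) would avoid the record overflow.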

Any pointers, small patches, or example snippets would be greatly appreciated!

Thanks,
Dave

Hi,

Please use the latest version of SMOKE from GitHub (https://github.com/CEMPD/SMOKE), which contains a field expansion to accommodate large HFLUX values (commit CEMPD/SMOKE@39b8918, "Update maximum column header width for fields with values > 16 charac…").

Additionally, how are you estimating heat flux for the QFED grid cells? Assuming an 8000 BTU/lb heat content, 10¹⁶ BTU/hr works out to 625 million tons of woody material burned in an hour within a single 0.1° × 0.1° grid cell.
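
For reference, the arithmetic behind that figure (a quick sanity check, assuming 2000 lb US short tons):

```python
# Back-of-the-envelope fuel consumption implied by HFLUX = 10¹⁶ BTU/hr.
hflux_btu_per_hr = 1e16   # upper end of the values discussed above
heat_content = 8000.0     # assumed BTU per lb of woody fuel
lb_per_ton = 2000.0       # US short ton

tons_per_hr = hflux_btu_per_hr / heat_content / lb_per_ton
print(f"{tons_per_hr:.3g} tons/hr")  # 6.25e+08, i.e. 625 million tons per hour
```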