Inline fire emission file results in zeros in emission diagnostics files

I generated some Inline fire emission files from the FINN inventory.
They look good when plotted:


When I feed them into CMAQ, no error message is given. However, the results do not show the added fire emissions, and the emission diagnostics files generated for the fire point sources show zero values for all pollutants.
I have no clue where this went wrong and hope I can get some advice here.
Thank you!
The emission files are here: fire_emis.zip

What procedure did you use to create the fires from FINN?

I am using Python with PseudoNetCDF.
Here is my code:

#!/usr/bin/env python
# coding: utf-8
import pandas as pd
import numpy as np
import requests
import urllib
import json
import csv
import PseudoNetCDF as pnc
import io
import re
import os
import tarfile
import gzip
import argparse
from collections import defaultdict
from datetime import datetime, timedelta

start_time = datetime.now()

def get_fips_num(df,dataset):
    df_1=df[['lon','lat','FIREID']]
    fips_lst=[]
    lat=[]
    lon=[]
    FIREID=[]
    name=[]
    locid=[]
    count=0
    df_loc=pd.read_csv(dataset)
    loc_file=open(dataset,'a')
    # loc_file.write('\n')
    writer= csv.writer(loc_file,delimiter=',')
    for i,e,o in df_1.itertuples(index=False):
        try:
            lo=i
            la=e
            ven=o
            link="https://geo.fcc.gov/api/census/area?lat="+str(la)+"&lon="+str(lo)+"&format=json" 

            # 
            count=count+1

            if ven in df_loc['FIREID'].unique() :
                county_fips=df_loc[(df_loc['FIREID'] == ven)]['county_fips'].values.tolist()
                county_name=df_loc[(df_loc['FIREID'] == ven)]['county_name'].values.tolist()
                block_fips=df_loc[(df_loc['FIREID'] == ven)]['block_fips'].values.tolist()
                print(str(count)+': '+str(la)+' '+str(lo)+' '+str(county_fips[0])+' '+county_name[0])
                fips_lst.append(county_fips[0])
                name.append(county_name[0])
                locid.append(block_fips[0])
            else:
                print(str(count)+': '+link)
                with urllib.request.urlopen(link) as url:
                    data = json.loads(url.read().decode())
                county_fips=data['results'][0]['county_fips']
                county_name=data['results'][0]['county_name']
                block_fips=data['results'][0]['block_fips']
                fips_lst.append(county_fips)
                name.append(county_name)
                locid.append(block_fips)
                row=[ven,la,lo,county_fips,county_name,block_fips]
                writer.writerow(row)

            FIREID.append(ven)
            lat.append(la)
            lon.append(lo)
        except (RuntimeError, TypeError, NameError,IndexError):
            pass
            # exit()
    # print(fips_lst)
    df1=pd.DataFrame(fips_lst,columns=['IFIP'])
    df1['FIREID']=FIREID
    df1['LOCID']=locid
    df1['SCC']='0'
    df1['FIRENAME']=name
    df1['LATITUDE']=lat
    df1['LONGITUDE']=lon

    loc_file.close() #flush the appended FIPS lookups to disk
    return df1


finn_file="FINNv2.4_MODVRS_SAPRC_2018_CA.csv"
df_finn=pd.read_csv(finn_file)

outdir='emis_netcdf/'

# Extract the CA fires from the whole world #cut down the size 
# df_ca=df_finn[(df_finn['LATI'] > 20) & (df_finn['LATI'] < 60) & (df_finn['LONGI'] > -130) & (df_finn['LONGI'] < -80)]

# print(df_finn)
# df_ca.to_csv('FINNv2.4_MODVRS_SAPRC_2018_CA.csv', index=False)
gf = pnc.pncopen('GRIDDESC_4km', format='griddesc', GDNAM='State_321x291')

# NOTE: ll2ij returns (column, row) indices, so the names below appear swapped
# (ROW holds the column index); they are used consistently in the query, though.
ig, jg = gf.ll2ij(df_finn.LONGI.values, df_finn.LATI.values)
df_finn['ROW']=ig
df_finn['COL']=jg
NCOLS=gf.NCOLS
NROWS=gf.NROWS
df_ca = df_finn.query(f'ROW <= {NCOLS} and ROW >= 0 and COL <= {NROWS} and COL >= 0 and AREA >= 25')

# df_ca2=df_finn[(df_finn['LATI'] > 22) & (df_finn['LATI'] < 49) & (df_finn['LONGI'] > -125) & (df_finn['LONGI'] < -100)]
# print(df_ca2)
# df_ca=df_finn[(df_finn['LATI'] > 39) & (df_finn['LATI'] < 39.1) & (df_finn['LONGI'] > -122.1) & (df_finn['LONGI'] < -122)]

group=df_ca.groupby(['FIREID'])
fire_id=group['FIREID'].agg(np.mean)
lon=group['LONGI'].agg(np.mean)
lat=group['LATI'].agg(np.mean)
Day=group['DAY'].agg(np.mean)
df_pol=group.agg(np.sum)
df_pol=df_pol.drop(columns=['DAY','POLYID','GENVEG','LONGI','LATI','ROW','COL'])
df_pol=df_pol.merge(Day.rename('DAY'), left_index=True, right_index=True)
df_pol=df_pol.reset_index()

# print(df_pol)

#generate ORL FIRE 

# df_orl=fire_id.to_frame()

# df_orl=df_orl.merge(lon.rename('lon'), left_index=True, right_index=True)
# df_orl=df_orl.merge(lat.rename('lat'), left_index=True, right_index=True)
# print(df_orl)
# df_fips=get_fips_num(df_orl,'fips_loc.csv')

# df_fips.to_csv('fips_la_lo_CA_2018.csv')
df_fips=pd.read_csv('fips_la_lo_CA_2018.csv')
#generate ORL FIRE Daily EMIS

df_pol=df_pol.merge(df_fips, how='right',on='FIREID')

# print(df_pol)
# Diurnal allocation profile: fraction of the daily total emitted in each hour
# (25 entries; the 25th covers hour 0 of the next day)
hour=[0.00533091933213, 0.00533091933213, 0.00533091933213, 0.00533091933213,
      0.00533091933213, 0.00533091933213, 0.00533091933213, 0.020116676725,
      0.04023335345, 0.07040836853750002, 0.100583383625, 0.130758398713,
      0.1609334138, 0.170991752163, 0.12070006035000003, 0.07040836853750002,
      0.04023335345, 0.00533091933213, 0.00533091933213, 0.00533091933213,
      0.00533091933213, 0.00533091933213, 0.00533091933213, 0.00533091933213,
      0.00533091933213]
# STVAL=['ISTACK','LATITUDE','LONGITUDE','STKDM','STKHT','STKTK','STKVE','STKFLW','STKCNT','ROW','COL','XLOCA','YLOCA','IFIP','LMAJOR','LPING','ACRESBURNED']
POLL=['CO','NH3','SO2','NO','NO2','HONO','POC','PEC','ACET','ALK1','ALK2','ALK3','ALK4','ALK5','ARO1','ARO2','BALD','CCHO','AACD','ETHE','HCHO','FACD','ISOP','MEK','MEOH','MACR','MGLY','MVK','OLE1','OLE2','NPHE','xPROD2','RCHO','TERP']
PM_SP=['PNCOM', 'PNA', 'PMG',  'PAL', 'PSI', 'PCL', 'PK',   'PCA', 'PTI',  'PMN', 'PFE', 'PNH4', 'PMOTHR','PH2O','PNO3', 'PSO4']
PM_FR=[0.2026, 0.004093, 0.0003733,0.0202,0.03069,0.04421,0.03481,0.0129,0.001461,0.00009967,0.009911,0.01009,0.207, 0.00,0.001797, 0.0437]
#Speciation adopted from Hongliang Zhang(TAMU, Feb.,2011) and Lin Li (REACH@NUIST, Mar. 2021)
# MLMS=[28,17,64,30,46,47,1000,1000,58.08,30.07,36.73,58.61,77.60,118.89,95.16,118.72,106.13,44.05,60.05,28.05,30.03,46.03,68.12,72.11,32.04,70.09,72.07,70.09,72.34,75.78,139.11,116.16,58.08,136.24]
im, jm = gf.ll2ij(df_pol.LONGITUDE.values, df_pol.LATITUDE.values)
xm, ym = gf.ll2xy(df_pol.LONGITUDE.values, df_pol.LATITUDE.values)
df_pol['ROW']=im
df_pol['COL']=jm
df_pol['XLOCA']=xm
df_pol['YLOCA']=ym
# Remove fires outside the domain or with detects
# less than 50 m2 -- assumed false detect.
indf = df_pol.query('ROW >= 0 and COL >= 0 and AREA >= 50')
#CHANGE NAME FOR CMAQ
indf = indf.rename({'OC':'POC','BC':'PEC','CCOOH':'AACD','ETHENE':'ETHE','HCOOH':'FACD','ISOPRENE':'ISOP','METHACRO':'MACR','PHEN':'NPHE','PROD2':'xPROD2','TRP1':'TERP'},axis='columns')
print(indf)

stk_var=['ISTACK','LATITUDE','LONGITUDE','STKDM','STKHT','STKTK','STKVE','STKFLW','STKCNT','ROW','COL','XLOCA','YLOCA','IFIP','LMAJOR','LPING','ACRESBURNED']
stk_unit=['','degrees','degrees','m','m','degrees K','m/s','m**3/s','','','','','','','','','acres/day']
stk_desc=['Stack group number','Latitude','Longitude','Inside stack diameter','Stack height above ground surface','Stack exit temperature','Stack exit velocity','Stack exit flow rate','Number of stacks in group','Grid row number','Grid column number','Projection x coordinate','Projection y coordinate','FIPS CODE','1= MAJOR SOURCE in domain, 0=otherwise','1=PING SOURCE in domain, 0=otherwise','number of acres burned for a fire in one day']
same=['LATITUDE','LONGITUDE','ROW','COL','XLOCA','YLOCA','IFIP']
#variable list for group stack file
# stk_ref = pnc.pncopen('Inline_Ref/stack_groups_ptfire.ncf')
# print(stk_ref)
# exit()

for julian in indf['DAY'].unique():
    df_day=indf[indf['DAY'] == julian]
    ROW=len(df_day['FIREID'].unique())
    day=datetime(2018, 1, 1)+timedelta(days=int(julian) -1)
    tomorrow=day+timedelta(days=1)
    DATE=day.strftime("%Y%j")
    TOMR=tomorrow.strftime("%Y%j")
    emisf = pnc.pncopen('GRIDDESC_4km', format='griddesc', GDNAM='State_321x291')
    emisf.SDATE = int(DATE)
    emisf.createDimension('ROW', ROW).setunlimited(False)
    emisf.createDimension('COL', 1).setunlimited(False)
    emisf.createDimension('LAY', 1).setunlimited(False)
    del emisf.dimensions['nv']
    del emisf.variables['DUMMY']
    del emisf.variables['layer']
    del emisf.variables['level']
    del emisf.variables['lambert_conformal_conic']
    del emisf.variables['x']
    del emisf.variables['y']
    del emisf.variables['latitude']
    del emisf.variables['longitude']
    del emisf.variables['latitude_bounds']
    del emisf.variables['longitude_bounds']
    emisf.TSTEP = 10000 
    stackf = emisf.copy(variables=True, dimensions=True, props=True, data=False)
    emisf.createDimension('TSTEP', 25).setunlimited(True)
    stackf.createDimension('TSTEP', 1).setunlimited(True)
    # print(stackf)
    # delattr(emisf, 'VAR-LIST')
    ind=0
    key='TFLAG'
    tsvar=stackf.createVariable(key, 'f', ('TSTEP', 'VAR', 'DATE-TIME'))
    tsvar.long_name = key.ljust(16)
    tsvar.var_desc = "Timestep-valid flags:  (1) YYYYDDD or (2) HHMMSS                                " 
    tsvar.units = "<YYYYDDD,HHMMSS>"
    tvar=emisf.createVariable(key, 'f', ('TSTEP', 'VAR', 'DATE-TIME'))
    tvar.long_name = key.ljust(16)
    tvar.var_desc = "Timestep-valid flags:  (1) YYYYDDD or (2) HHMMSS                                " 
    tvar.units = "<YYYYDDD,HHMMSS>"
    # Placeholder variable used while building the files; deleted after updatemeta()
    adj='Test'
    avar = stackf.createVariable(adj, 'f', ('TSTEP', 'LAY', 'ROW', 'COL'))
    avar.long_name = adj.ljust(16)
    avar.var_desc = adj.ljust(80)
    avar.units = "none            "
    avar = emisf.createVariable(adj, 'f', ('TSTEP', 'LAY', 'ROW', 'COL'))
    avar.long_name = adj.ljust(16)
    avar.var_desc = adj.ljust(80)
    avar.units = "none            "
    for varkey in stk_var:
        var = stackf.createVariable(varkey, 'f', ('TSTEP', 'LAY', 'ROW', 'COL'))
        var.long_name = varkey.ljust(16)
        var.var_desc = stk_desc[ind].ljust(80)
        var.units = stk_unit[ind].ljust(16)
        ind=ind+1
    for varkey in POLL:
        var = emisf.createVariable(varkey, 'f', ('TSTEP', 'LAY', 'ROW', 'COL'))
        var.long_name = varkey.ljust(16)
        var.var_desc = varkey.ljust(80)
        var.units = "moles/s         "
        if (varkey == 'POC') or (varkey == 'PEC'):
            var.units = "g/s             "
    for varkey in PM_SP:
        var = emisf.createVariable(varkey, 'f', ('TSTEP', 'LAY', 'ROW', 'COL'))
        var.long_name = varkey.ljust(16)
        var.var_desc = varkey.ljust(80)
        var.units = "g/s             "
    key='HFLUX'
    newvar=emisf.createVariable(key, 'f', ('TSTEP', 'LAY', 'ROW', 'COL'))
    newvar.long_name = key.ljust(16)
    newvar.var_desc = key.ljust(80)
    newvar.units = "BTU/hr          " #heat flux for plume rise, not moles/s
    stackf.IOAPI_VERSION="ioapi-3.2: $Id: init3.F90 320 2016-02-26 15:55:00Z coats $                      "
    stackf.EXEC_ID = "????????????????                                                               "
    stackf.VGTOP = -9.e+36
    stackf.VGTYP = -9999 
    stackf.VGLVLS = 0., 0.
    stackf.UPNAM = "OPENEOUT        "
    emisf.UPNAM = "CREATESET        "
    emisf.IOAPI_VERSION="ioapi-3.2: $Id: init3.F90 320 2016-02-26 15:55:00Z coats $                      "
    emisf.EXEC_ID = "????????????????                                                               "
    emisf.VGTOP = -9.999e+36
    emisf.VGTYP = -9999 
    emisf.VGLVLS = 0., 0.
    emisf.updatemeta()
    stackf.updatemeta()
    del emisf.variables['Test']
    del stackf.variables['Test']
    # Use this day's fires with positional indexing; without reset_index the
    # label-based lookups below pull rows from the full multi-day frame, i.e.
    # potentially the wrong fires for every day after the first.
    dfd = df_day.reset_index(drop=True)
    for i in range(ROW):
        tv=0
        tpvar=stackf.variables['TFLAG']
        for v in stk_var:
            var = stackf.variables[v]
            value = -9.999e+36 #default value
            if v == 'ISTACK':
                value = i
            if v in same:
                value = dfd[v][i]
            if (v == 'STKFLW') or (v == 'LPING'):
                value = 0
            if (v == 'STKCNT') or (v == 'LMAJOR'):
                value = 1
            if v == 'ACRESBURNED':
                value = dfd['AREA'][i]*0.000247105 #m2/day to acre/day
            var[0,0,i,0] = value
            tpvar[0,tv,0] = DATE
            tpvar[0,tv,1] = 0
            tv=tv+1
        #fill the emission file
        ev=0
        tevar=emisf.variables['TFLAG']
        for varkey in POLL:
            var = emisf.variables[varkey]
            value = dfd[varkey][i]/(60*60.0) #mole/day to mole/s once the hourly fraction is applied below
            if (varkey == 'POC') or (varkey == 'PEC'):
                value = 1000.0*dfd[varkey][i]/(60*60.0) #kg/day to g/s
            for h in range(25):
                var[h,0,i,0] = value*hour[h]
                if h < 24:
                    tevar[h,ev,0] = DATE
                    tevar[h,ev,1] = h*10000
                else:
                    tevar[h,ev,0] = TOMR
                    tevar[h,ev,1] = 0
            ev=ev+1
        #speciate PM10 into PM components
        ind = 0
        for varkey in PM_SP:
            var = emisf.variables[varkey]
            fr = PM_FR[ind]
            value = fr*1000.0*dfd['PM10'][i]/(60*60.0) #kg/day to g/s
            ind=ind+1
            for h in range(25):
                var[h,0,i,0] = value*hour[h]
                if h < 24:
                    tevar[h,ev,0] = DATE
                    tevar[h,ev,1] = h*10000
                else:
                    tevar[h,ev,0] = TOMR
                    tevar[h,ev,1] = 0
            ev=ev+1
        newvar = emisf.variables['HFLUX']
        #heat flux calculation:
        #   Heat Flux (BTU/hr) = [Area Burned (acre/day) x Fuel Loading (tons/acre)
        #     x Heat Content (BTU/lb) x (2000 lb/ton)] / Duration of fire (hr/day)
        # Details on how the fire emissions are calculated can be found in the
        # proceedings presented by Pouliot et al. (2005)
        # [https://www3.epa.gov/ttnchie1/conference/ei14/session12/pouliot.pdf].
        # Uses a constant default heat content of 8000 BTU/lb.
        area = dfd['AREA'][i]*0.000247105 #m2/day to acre/day
        fuel = dfd['BMASS'][i]*4.4609 #kg/m2 to ton/acre
        value = area * fuel * 2000.0 * 8000.0
        for h in range(25):
            newvar[h,0,i,0] = value*hour[h]
            if h < 24:
                tevar[h,ev,0] = DATE
                tevar[h,ev,1] = h*10000
            else:
                tevar[h,ev,0] = TOMR
                tevar[h,ev,1] = 0

    os.makedirs(outdir, exist_ok=True)
    emisf.save(outdir+'emis_mol_wildfire.'+DATE+'.4km.cmaq.saprc.ncf').close()#DATE
    stackf.save(outdir+'stack_groups.wildfire.'+DATE+'.4km.cmaq.saprc.ncf').close()

Hi, we took a quick look at your SMOKE input files and didn’t see anything immediately obvious. We suspect there may be a configuration issue with your CMAQ run.

Have you tried using some fire emissions that we prepared as an example [from one of the EPA modeling platforms] before substituting in your own?

Hi Alison,

Thank you for the information.
I finally turned to SMOKE and converted the FINN inventory to FF10 format. I then modified the NEI2014v2 scripts to process the FF10 files for ptinv and ptday. The resulting files work with my CMAQ setup, and I now get sensible emission diagnostics files. The CMAQ output also shows the fire impacts. So I believe my CMAQ configuration is fine, and some unknown issue in the netCDF files generated by PseudoNetCDF caused the previous problem.
However, with the SMOKE processing I ran into a new problem. Since California is in time zone UTC-8, processing January of the model year requires SMOKE to read Dec. 31 of the previous year from the inventory. How can I provide that information in the FF10 file? My understanding is that the base year of the inventory is determined by the #YEAR=2014 tag in the header section. Does that mean I need a separate FF10 file with #YEAR=2013?
All the best

Yes, you need another FF10 for the previous year. Typically we take the last day of the model year, copy it to a new FF10, and change the header to the previous year. This is usually not a problem because it is a quiet time of year for fires.
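A minimal sketch of that copy-and-relabel step, assuming a day-specific FF10 file whose header carries the #YEAR tag (the helper and the file names in the usage comment are hypothetical; only the header line is rewritten, as described above):

```python
def relabel_ff10_year(in_lines, new_year):
    """Copy FF10 lines, rewriting the #YEAR header to a new year.

    Hypothetical helper: SMOKE reads the base year from the #YEAR tag,
    so a copy of the last model day relabeled to the previous year
    supplies the Dec. 31 records it asks for.
    """
    out = []
    for line in in_lines:
        if line.startswith('#YEAR'):
            out.append('#YEAR=%d' % new_year)
        else:
            out.append(line)  # data records are copied unchanged
    return out

# Usage sketch (hypothetical file names):
# lines = open('ptday_fires_2014.csv').read().splitlines()
# open('ptday_fires_2013.csv', 'w').write(
#     '\n'.join(relabel_ff10_year(lines, 2013)))
```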

Looking at the python/PseudoNetCDF-processed stack_groups file you posted, it seems the XLOCA and YLOCA stack locations may be off, referring to locations possibly outside your modeling domain.

The file header defines your domain as

            :GDTYP = 2 ;
            :P_ALP = 30. ;
            :P_BET = 60. ;
            :P_GAM = -120.5 ;
            :XCENT = -120.5 ;
            :YCENT = 37. ;
            :PRJNAME = "ARBstatewide" ;
            :XORIG = -684000. ;
            :YORIG = -564000. ;
            :XCELL = 4000. ;
            :YCELL = 4000. ;

and the LONGITUDE values in the stack_groups file show values ranging somewhere between -123 and -114, i.e. both west and east of the projection center.

However, the XLOCA values are all positive (with values somewhere between 400,000m and 1,200,000m), indicating locations east of the projection center.

While I don’t know the number of columns and rows of your modeling domain (those attributes have a non-standard meaning in the inline point-source files), with a grid spacing of 4,000 m this would put these sources roughly 100 to 300 grid cells east of your projection center, which may be outside your modeling domain. CMAQ would then silently ignore any such sources.

Similarly, all YLOCA values are positive, which would put all of these sources north of the projection center (37°N, 120.5°W) despite LATITUDE values of about 33°N–42°N.
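You can verify the expected signs with the spherical Lambert conformal conic formulas that a GDTYP=2 projection uses (a sketch, assuming the standard 6,370,000 m sphere radius; P_ALP/P_BET are taken as the two standard parallels and XCENT/YCENT as the projection center, per the header above). A point west of XCENT must map to a negative x, and a point north of YCENT to a positive y:

```python
import math

def lcc_xy(lon, lat, lat1=30., lat2=60., lat0=37., lon0=-120.5, R=6370000.):
    """Spherical Lambert conformal conic matching the GDTYP=2 header above."""
    p1, p2, p0, l, p = map(math.radians, (lat1, lat2, lat0, lon - lon0, lat))
    # cone constant and scaling factor from the two standard parallels
    n = (math.log(math.cos(p1) / math.cos(p2)) /
         math.log(math.tan(math.pi/4 + p2/2) / math.tan(math.pi/4 + p1/2)))
    F = math.cos(p1) * math.tan(math.pi/4 + p1/2)**n / n
    rho = R * F / math.tan(math.pi/4 + p/2)**n    # radius at the point
    rho0 = R * F / math.tan(math.pi/4 + p0/2)**n  # radius at the center
    return rho * math.sin(n * l), rho0 - rho * math.cos(n * l)

x, y = lcc_xy(-123., 40.)  # west and north of the projection center
print(x < 0, y > 0)        # both should be True
```

With XORIG = -684,000 m, a fire near 123°W should land a few hundred kilometers west of the center yet still inside the domain; XLOCA values of +400,000 to +1,200,000 m instead imply locations far east of the center, consistent with the diagnosis above.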
