Segmentation fault in temporal process


I encountered a segmentation fault while “smk_point.csh” was running the temporal processing step:

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
temporal 00000000005B43AD Unknown Unknown Unknown
libpthread-2.27.s 00007F98E56C88A0 Unknown Unknown Unknown
temporal 00000000004172B0 proctpro_ 270 proctpro.f
temporal 0000000000429B84 MAIN__ 477 temporal.f
temporal 000000000040371E Unknown Unknown Unknown
00007F98E50E2B97 __libc_start_main Unknown Unknown
temporal 0000000000403629 Unknown Unknown Unknown

However, I wasn’t able to find the exact reason the segmentation fault occurred in the temporal process. When I reduced my point inventory list to 303,640 rows, “smk_point.csh” ran successfully, but any list with more than 303,640 rows produced the segmentation fault.

Is there a limit on the number of rows in an annual point inventory?

My log files are attached.
grdmat.point.capss2016.Cheongju_1km.txt (15.4 KB)
smkinven.point.capss2016.txt (38.9 KB)
temporal.point.capss2016.20181103.txt (13.9 KB)

Thank you.

Is there a limit on the number of rows in an annual point inventory?

No, there is no limit on the number of records in your inventory file. It sounds like you may need more RAM to handle your full-size inventory: the more records there are, the more memory is required to process them.

Also make sure you have set:
limit stacksize unlimited
limit memoryuse unlimited
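For reference, a minimal sketch of the equivalent settings in a POSIX shell (bash/sh), in case the environment is set up from one; the csh `limit` commands above only take effect inside csh/tcsh:

```shell
# bash/sh equivalents of the csh "limit" commands.
# Large Fortran work arrays can overflow a small default stack and
# fail with SIGSEGV rather than a clean out-of-memory message.
ulimit -s unlimited   # stacksize
ulimit -v unlimited   # address space (closest analog of csh "memoryuse")
ulimit -s             # verify the new stack setting
```

Note that `ulimit -v unlimited` may be restricted on some systems; raising the stack size is the part most often tied to this kind of crash.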

How many records are in the inventory being processed? I’ve seen some notes about setting the stack size limit to unlimited, but I’m not sure whether that is something to check in this situation [or what the specific syntax for it is].
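On the syntax question, a quick sketch of how to inspect the limits in effect and count inventory records; the sample file name below is a placeholder, not part of the original thread:

```shell
# Show every resource limit in effect for the current shell session.
# In csh/tcsh (the shell that runs smk_point.csh) the equivalent command
# is simply "limit", and "limit stacksize unlimited" raises the stack.
ulimit -a

# Counting inventory records is just a line count; here a tiny sample
# file stands in for the real point inventory:
printf 'rec1\nrec2\nrec3\n' > sample_inventory.csv
wc -l sample_inventory.csv    # prints "3 sample_inventory.csv"
rm sample_inventory.csv
```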