I successfully installed IOAPI-3.2 and pnetCDF on a Dell Linux workstation with 48 processors. However, when I run CMAQv5.3.1 I get errors like:
…
Abort(269031429) on node 21 (rank 21 in comm 0): Fatal error in PMPI_Allreduce: Invalid communicator, error stack:
PMPI_Allreduce(402): MPI_Allreduce(sbuf=0x7ffc1cfedab0, rbuf=0x7ffc1cfec53c, count=1, datatype=MPI_LOGICAL, op=MPI_LAND, comm=MPI_COMM_NULL) failed
PMPI_Allreduce(325): Null communicator
Abort(134813701) on node 22 (rank 22 in comm 0): Fatal error in PMPI_Allreduce: Invalid communicator, error stack:
PMPI_Allreduce(402): MPI_Allreduce(sbuf=0x7ffeade7a7b0, rbuf=0x7ffeade7923c, count=1, datatype=MPI_LOGICAL, op=MPI_LAND, comm=MPI_COMM_NULL) failed
PMPI_Allreduce(325): Null communicator
Abort(134813701) on node 23 (rank 23 in comm 0): Fatal error in PMPI_Allreduce: Invalid communicator, error stack:
PMPI_Allreduce(402): MPI_Allreduce(sbuf=0x7ffce2cb5430, rbuf=0x7ffce2cb3ebc, count=1, datatype=MPI_LOGICAL, op=MPI_LAND, comm=MPI_COMM_NULL) failed
PMPI_Allreduce(325): Null communicator
Abort(470358021) on node 25 (rank 25 in comm 0): Fatal error in PMPI_Allreduce: Invalid communicator, error stack:
PMPI_Allreduce(402): MPI_Allreduce(sbuf=0x7ffdbec4a630, rbuf=0x7ffdbec490bc, count=1, datatype=MPI_LOGICAL, op=MPI_LAND, comm=MPI_COMM_NULL) failed
PMPI_Allreduce(325): Null communicator
Abort(873011205) on node 26 (rank 26 in comm 0): Fatal error in PMPI_Allreduce: Invalid communicator, error stack:
PMPI_Allreduce(402): MPI_Allreduce(sbuf=0x7ffe209e56b0, rbuf=0x7ffe209e413c, count=1, datatype=MPI_LOGICAL, op=MPI_LAND, comm=MPI_COMM_NULL) failed
PMPI_Allreduce(325): Null communicator
Abort(1007228933) on node 29 (rank 29 in comm 0): Fatal error in PMPI_Allreduce: Invalid communicator, error stack:
PMPI_Allreduce(402): MPI_Allreduce(sbuf=0x7ffcd0296e30, rbuf=0x7ffcd02958bc, count=1, datatype=MPI_LOGICAL, op=MPI_LAND, comm=MPI_COMM_NULL) failed
PMPI_Allreduce(325): Null communicator
…
I wonder what I missed when installing IOAPI/pnetCDF.
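For what it's worth, my own guess (not a confirmed diagnosis): `comm=MPI_COMM_NULL` means a null communicator reached `MPI_Allreduce`, which often happens when the executable was linked against one MPI implementation but launched with a different implementation's `mpirun`. A few commands one might run to check for such a mismatch (the wrapper names `mpirun`/`mpif90` are assumed from a typical MPICH or Open MPI install, and the CCTM path below is only a placeholder):

```shell
# Check which MPI launcher and Fortran wrapper are first on PATH
command -v mpirun || echo "mpirun not on PATH"
command -v mpif90 || echo "mpif90 not on PATH"

# Ask the wrapper which compiler and MPI libraries it links against
# (-show is supported by both the MPICH and Open MPI wrappers)
mpif90 -show 2>/dev/null || echo "mpif90 -show unavailable"

# Inspect which MPI shared library the CMAQ executable actually loads
# (replace the placeholder path with your real CCTM binary)
# ldd /path/to/CCTM_v531.exe | grep -i mpi
```

If the library reported by `ldd` does not belong to the same MPI installation as the `mpirun` used in the run script, rebuilding IOAPI, pnetCDF, and CMAQ against one consistent MPI stack would be the first thing to try.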