
Conversation

@mvhulten
Contributor

This is needed when GCC+OpenMPI is used; it has been tested on JURECA.

With Intel+ParaStationMPI this already worked correctly; there is no effect when compiling with Intel LLVM.

Resolves: #87


A potential problem I see with my commit is with systems that don't use Slurm.
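For illustration, one scheduler-agnostic way to guard such a fix might look like the sketch below (a minimal sketch; the srun/SLURM_JOB_ID detection and the mpirun fallback are assumptions for illustration, not what the commit actually does):

# Minimal sketch, assuming the fix is tied to a Slurm launcher: detect
# Slurm at run time and fall back to a plain MPI launcher otherwise.
# Variable names here are illustrative, not taken from the commit.
if command -v srun >/dev/null 2>&1 && [[ -n ${SLURM_JOB_ID:-} ]]; then
  launcher="srun"
else
  launcher="mpirun"   # fallback for systems without Slurm
fi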

mvhulten requested a review from kvrigor on August 15, 2025, 09:39
@kvrigor
Member

kvrigor commented Aug 15, 2025

> A potential problem I see with my commit is with systems that don't use Slurm.

Yes. The solution should be generic—at least across all machines defined in default.2025.env:

if [[ $SYSTEMNAME == *"jedi"* || $SYSTEMNAME == *"jupiter"* ]]; then
  export TSMP2_ENV_FILE=${env_dir}/jsc.2025.gnu.openmpi
elif [[ $SYSTEMNAME == *"jureca"* || $SYSTEMNAME == *"juwels"* || $SYSTEMNAME == *"jusuf"* ]]; then
  export TSMP2_ENV_FILE=${env_dir}/jsc.2025.intel.psmpi
elif [[ $SYSTEMNAME == *"marvin"* ]]; then
  export TSMP2_ENV_FILE=${env_dir}/uni-bonn.gnu.openmpi
elif [[ $SYSTEMNAME == *"UBUNTU"* ]]; then
  export TSMP2_ENV_FILE=${env_dir}/ubuntu.gnu.openmpi
else
  echo "WARNING: Unknown default environment for machine '$(hostname)'"
  known_machine="false"
fi
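
As a rough illustration, the MPI-specific workaround could then be keyed off the selected environment file rather than the machine or scheduler (a minimal sketch; TSMP2_MPI_FLAVOR is a hypothetical variable, not part of default.2025.env, and the actual setting lives in the commit under review):

# Minimal sketch: derive the MPI flavor from the env file chosen above,
# so the same branch applies on every machine using that MPI stack.
# TSMP2_MPI_FLAVOR is hypothetical; the real workaround would go in the
# *openmpi* branch.
case "$TSMP2_ENV_FILE" in
  *openmpi*) export TSMP2_MPI_FLAVOR="openmpi" ;;
  *psmpi*)   export TSMP2_MPI_FLAVOR="psmpi" ;;
esac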

@mvhulten
Contributor Author

Addressed better in #90.

mvhulten closed this on Aug 18, 2025


Linked issue: Multi-process ParFlow stand-alone fails on GNU stack (#87)
