MPI processes in cosmomc
-
- Posts: 16
- Joined: October 06 2008
- Affiliation: University of Manchester
MPI processes in cosmomc
I have compiled cosmomc with MPICH2 in order to run it on a cluster, but when I run the command
mpirun -np 4 ./cosmomc params.ini
the code only runs a single chain. The output should say something like:
Number of MPI processes: 4, but instead says:
Number of MPI processes: 1
Number of MPI processes: 1
Number of MPI processes: 1
Number of MPI processes: 1
This results in only one chain being produced.
I've also tried using a PBS script, setting the number of nodes and processors per node, and although the code runs on all processors, I still only get one chain. Does anyone know how I can get it to compute one chain per node?
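For the PBS route, a job script along these lines usually gives one MPI process (and hence one chain) per node. This is only a sketch: the MPI install path is a hypothetical example, and the resource line will need adjusting for your cluster.

```shell
#!/bin/bash
#PBS -N cosmomc
#PBS -l nodes=4:ppn=1        # 4 nodes, 1 processor per node -> 4 chains
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
# Use the full path to the mpirun from the SAME MPI install you compiled
# against (the path below is a hypothetical example), and hand it the node
# list PBS provides.
/opt/mpich2/bin/mpirun -np 4 -machinefile $PBS_NODEFILE ./cosmomc params.ini
```

Using the full path to mpirun avoids picking up a different MPI implementation that happens to be earlier on the PATH of the compute nodes.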
-
- Posts: 16
- Joined: October 06 2008
- Affiliation: University of Manchester
MPI processes in cosmomc
It appears that the problem was caused by multiple mpirun executables being present on the cluster.
-
- Posts: 50
- Joined: March 26 2006
- Affiliation: DESY
- Contact:
MPI processes in cosmomc
Hi Steven,
depending on the MPI installation (e.g. LAM/MPI), you may need to run lamboot before running mpirun.
Pascal
-
- Posts: 16
- Joined: October 06 2008
- Affiliation: University of Manchester
MPI processes in cosmomc
We seem to have sorted this now. If anyone runs into a similar problem, check that you're using the mpiexec from the same install as the compiler you built with. I was compiling with MPICH2, but unknown to me, another MPI implementation was installed on the cluster, so the mpiexec being picked up didn't correspond to MPICH2.
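A quick way to catch this kind of mismatch is to check that the launcher on your PATH and the compiler wrapper you built with come from the same directory. A minimal sketch (the check itself is generic; the paths in the comments are hypothetical examples):

```shell
#!/bin/sh
# Succeeds if the two programs live in the same bin/ directory, which is a
# reasonable proxy for "same MPI install" in an MPICH2-style layout.
same_install() {
    [ "$(dirname "$1")" = "$(dirname "$2")" ]
}

launcher=$(command -v mpirun || true)   # the launcher the shell will actually run
compiler=$(command -v mpicc || true)    # the wrapper used to build cosmomc
if [ -n "$launcher" ] && [ -n "$compiler" ] && same_install "$launcher" "$compiler"; then
    echo "OK: mpirun and mpicc both come from $(dirname "$launcher")"
else
    echo "WARNING: mpirun/mpicc missing or from different MPI installs"
fi
```

Running `mpirun --version` is another quick sanity check that the launcher reports the implementation you expect.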