I am running CosmoMC on an SGE cluster with the following specifications:
Code:
(CosmoPython) [narayan@dirac CosmoMC]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 2400.000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
My submission script is:

Code:
# The shell used to run the job
#$ -S /bin/bash
# The name of the job
#$ -N cosmomc
# Define the parallel runtime environment and number of nodes
# NB: number of nodes is one more than needed as one copy resides on the master node
#$ -pe mpi 8
# Use location that job was submitted as working directory
#$ -cwd
module load intel-compiler
source /home/softwares/intel/impi/2019.1.144/intel64/bin/mpivars.sh
mpirun -np 8 ./cosmomc test.ini
#mpiexec -iface ib0 -f /home/softwares/Hostfiles/hosts.ifc -np $NSLOTS ./cosmomc test.ini
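For reference, a variant of the active launch line that ties the MPI process count to the slots SGE actually grants (assuming the mpi parallel environment exports $NSLOTS, as the commented-out mpiexec line already uses) would be:

Code:
# Let SGE determine the process count instead of hard-coding 8
mpirun -np $NSLOTS ./cosmomc test.ini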
When the job runs with the active launch line,

Code:
mpirun -np 8 ./cosmomc test.ini
the output is:

Code:
MPI startup(): I_MPI_F77 environment variable is not supported.
MPI startup(): To check the list of supported variables, use the impi_info utility or refer to https://software.intel.com/en-us/mpi-library/documentation/get-started.
....
....
....
Chain:7 drag accpt: 0.4059041 fast/slow 67.91409 slow: 745
Chain:5 drag accpt: 0.4245283 fast/slow 67.21973 slow: 760
Chain:1 drag accpt: 0.4069051 fast/slow 68.56148 slow: 732
Chain:2 drag accpt: 0.4176334 fast/slow 63.66264 slow: 744
Chain:4 drag accpt: 0.4031355 fast/slow 67.36502 slow: 789
Chain:0 drag accpt: 0.4422604 fast/slow 67.64106 slow: 755
Chain:6 drag accpt: 0.4386952 fast/slow 68.45614 slow: 798
Chain:5 drag accpt: 0.4211663 fast/slow 67.33900 slow: 823
Chain:3 drag accpt: 0.4235294 fast/slow 67.64450 slow: 782
Chain:7 drag accpt: 0.4040404 fast/slow 67.89487 slow: 818
Chain 2 MPI communicating
Chain 8 MPI communicating
Chain 6 MPI communicating
Chain 3 MPI communicating
Chain 5 MPI communicating
Chain 7 MPI communicating
Chain 4 MPI communicating
Chain:0 drag accpt: 0.4352679 fast/slow 67.54447 slow: 832
Chain:0 drag accpt: 0.4325438 fast/slow 67.68958 slow: 902
Chain:0 drag accpt: 0.4302103 fast/slow 67.58334 slow: 972
Chain:0 drag accpt: 0.4316547 fast/slow 67.59238 slow: 1023
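As the log shows, after the chains report "MPI communicating", only chain 0 keeps producing output and the run makes no further progress. I can rerun with more verbose startup information if that would help; for example (a minimal sketch, assuming Intel MPI's standard I_MPI_DEBUG variable is honoured by this version):

Code:
# Ask Intel MPI to print fabric/interface selection details at startup
export I_MPI_DEBUG=5
mpirun -np 8 ./cosmomc test.ini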
The alternative launch line that is commented out in the script,

Code:
mpiexec -iface ib0 -f /home/softwares/Hostfiles/hosts.ifc -np $NSLOTS ./cosmomc test.ini
fails with HYDI_dmx_poll_wait_for_event and proxy_upstream_control_cb errors!
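Since that command pins the run to the InfiniBand interface, I have not ruled out ib0 itself; a check I could run on each node listed in hosts.ifc (assuming the usual hostname[:slots] hostfile format and standard iproute2 tools) would be:

Code:
# Check that ib0 is up and has an address on every node in the hostfile
for h in $(cut -d: -f1 /home/softwares/Hostfiles/hosts.ifc); do
    ssh "$h" ip addr show ib0
done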
Even if I do not submit the job through the queue and instead run directly on the master node, these problems persist.
Can anyone please help?
Srijita