cosmomc test error
Posted: September 02 2017
$mpirun -np 1 ./cosmomc test_planck.ini
I got the following results/errors. I searched and found a similar problem mentioned in previous posts, but no clear solution. Has anyone solved this problem successfully?
OS and compilers: Ubuntu 14.04 / gcc 6.3.0 / gfortran 6.3.0, in case it helps.
Number of MPI processes: 1
file_root:test
Random seeds: 6837, 14165 rand_inst: 1
Using clik with likelihood file ./data/clik/hi_l/plik/plik_dx11dr2_HM_v18_TT.clik
----
clik version 723c1a4b0580
smica
Checking likelihood './data/clik/hi_l/plik/plik_dx11dr2_HM_v18_TT.clik' on test data. got -380.979 expected -380.979 (diff -8.68408e-09)
----
TT from l=0 to l= 2508
Clik will run with the following nuisance parameters:
A_cib_217
cib_index
xi_sz_cib
A_sz
ps_A_100_100
ps_A_143_143
ps_A_143_217
ps_A_217_217
ksz_norm
gal545_A_100
gal545_A_143
gal545_A_143_217
gal545_A_217
calib_100T
calib_217T
A_planck
Using clik with likelihood file ./data/clik/low_l/bflike/lowl_SMW_70_dx11d_2014_10_03_v5c_Ap.clik
BFLike Ntemp = 2876
BFLike Nq = 1407
BFLike Nu = 1407
BFLike Nside = 16
BFLike Nwrite = 32393560
WARNING: camb_tau0.06_r0.00_Aprior.dat not found or not enough columns
using default values
info = 0
----
clik version 723c1a4b0580
bflike_smw
Checking likelihood './data/clik/low_l/bflike/lowl_SMW_70_dx11d_2014_10_03_v5c_Ap.clik' on test data. got -7899.49 expected -5247.87 (diff 2651.62)
----
TT from l=0 to l= 29
EE from l=0 to l= 29
BB from l=0 to l= 29
TE from l=0 to l= 29
Clik will run with the following nuisance parameters:
A_planck
Doing non-linear Pk: F
Doing CMB lensing: T
Doing non-linear lensing: T
TT lmax = 2508
EE lmax = 2500
ET lmax = 2500
BB lmax = 2500
PP lmax = 2500
lmax_computed_cl = 2508
Computing tensors: F
max_eta_k = 14000.0000
transfer kmax = 5.00000000
adding parameters for: lowl_SMW_70_dx11d_2014_10_03_v5c_Ap
adding parameters for: smica_g30_ftl_full_pp
adding parameters for: BKPlanck_detset_comb_dust
adding parameters for: plik_dx11dr2_HM_v18_TT
Fast divided into 1 blocks
23 parameters ( 9 slow ( 0 semi-slow), 14 fast ( 0 semi-fast))
Time for theory: 1.61697
Time for lowl_SMW_70_dx11d_2014_10_03_v5c_Ap: 0.15172410011291504
Time for smica_g30_ftl_full_pp: 1.3208389282226562E-004
Time for BKPlanck_detset_comb_dust: 1.4269351959228516E-003
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 128.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Time for plik_dx11dr2_HM_v18_TT: 6.2429904937744141E-003
loglike chi-sq
22.258 44.516 CMB: BKPLANCK = BKPlanck_detset_comb_dust
6.079 12.157 CMB: lensing = smica_g30_ftl_full_pp
581.392 1162.783 CMB: plik = plik_dx11dr2_HM_v18_TT
7900.861 15801.722 CMB: lowTEB = lowl_SMW_70_dx11d_2014_10_03_v5c_Ap
Test likelihoods done, total logLike, chi-eq = 8510.714 17021.428
Expected likelihoods, total logLike, chi-eq = 5859.141 11718.282
** Likelihoods do not match **
MpiStop: 0
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 16417 on
node virgo03 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------