BICEP likelihoods for Cosmomc

Use of Cobaya, CAMB, CLASS, CosmoMC, compilers, etc.
Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

BICEP likelihoods for Cosmomc

Post by Gansukh Tumurtushaa » March 28 2014

Dear Antony,

I have been running the Planck likelihoods with CosmoMC (the version before the latest update).

Now I would like to try the BICEP2 likelihood, so may I ask for a repository account?
And where can I find more details on running it?

Best,
Gansukh Tumurtushaa

Antony Lewis
Posts: 1941
Joined: September 23 2004
Affiliation: University of Sussex
Contact:

Re: BICEP likelihoods for Cosmomc

Post by Antony Lewis » March 28 2014

You can email me if you need a repository account, though the BICEP likelihood is also included in the March 2014 version. See the readme files and sample .ini files for help.

Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

BICEP likelihoods for Cosmomc

Post by Gansukh Tumurtushaa » April 03 2014

I have been trying to reproduce the following figure with the latest CosmoMC version (March 2014): https://www.dropbox.com/sh/qxj1ecj2odg ... _ns-r.pdf. I think I can produce the green contours using the PLA chains, but I do not know how to plot the others with the BICEP2 data.

I know the BICEP2 data set is provided in data/BICEP, but how can one use it to plot the figure shown in the link above?
Thank you.
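(For reference, the standard chain post-processing step in CosmoMC is the shipped getdist tool; the sketch below assumes chains already exist and uses the sample parameter-file name from the release — adjust the root name for your own setup.)

Code: Select all

```shell
# Sketch: post-process CosmoMC chains with the getdist tool built with the
# release. distparams.ini is the shipped sample parameter file; set
# file_root in it to your chain root (e.g. chains/test) before running.
./getdist distparams.ini
```

getdist then writes marginalized statistics and plot data that the plotting scripts in python/ can use for 2D contour figures such as ns-r.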

Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

BICEP likelihoods for Cosmomc

Post by Gansukh Tumurtushaa » April 07 2014

I ran the "test.ini" file (latest CosmoMC) with action=0 using the following command:

Code: Select all

mpirun -np 8 ./cosmomc test.ini
It took about 54 hours; is that usual? Everything was left at the defaults.
After changing the test.ini file a little, the new run has now been going for over 54 hours. I wonder whether everything is going okay. Is this usual?

Antony Lewis
Posts: 1941
Joined: September 23 2004
Affiliation: University of Sussex
Contact:

Re: BICEP likelihoods for Cosmomc

Post by Antony Lewis » April 07 2014

A standard test run for me reaches good convergence (R < 0.1) in about 6 hours, and excellent convergence (R ~ 0.01) in about 24 hours, using 4 cores per chain on a fairly modern computer (I usually run 4 chains on one 16-core node, 4 cores per chain).

On a cluster you should usually edit the job_script file as appropriate for your machine, and if necessary the submitJob function in python/jobQueue.py, and then run

Code: Select all

python python/runMPI.py myini
(I just updated this bit of the readme, it was out of date)

Make sure your run is using OpenMP correctly, i.e. each chain process is actually using the number of cores it should be (if you are not using OpenMP, the walltime will be longer, though the compute resources are used slightly more efficiently).
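As a sketch of that setup (OMP_NUM_THREADS is the standard OpenMP environment variable; the core counts simply match the 4-chains-on-16-cores example above):

Code: Select all

```shell
# Give each chain process 4 OpenMP threads, matching 4 chains
# on a 16-core node.
export OMP_NUM_THREADS=4
# Launch 4 MPI chain processes; each should then use 4 threads.
mpirun -np 4 ./cosmomc test.ini
```

You can confirm the threads are actually in use by checking per-process CPU usage (e.g. with top) while the chains run.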
