I am running CosmoMC compiled with the WMAP 9-year data + Planck data.
I am currently doing a "vanilla" run on 6 processors.
Run time is over 4 weeks and going strong; the output file is >230 MB.
Does this seem reasonable to anyone, or do I have a problem with some loop gone wild?
I have not made any changes to CosmoMC (yet) - just the "factory" version.
System is:
Intel Core i7 (8 processors)
8 GB of RAM
Ubuntu 12.04 x64
ifort 2013.1.117 + associated LAPACK
mpiifort 4.1.2.040
Planck dependencies installed.
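To sanity-check whether >230 MB of chain text after ~4 weeks is plausible, here is a rough back-of-the-envelope sketch. The column count and bytes-per-line below are assumptions (a Planck-era vanilla run writes a weight, -log(like), and roughly 30 parameter/derived columns), not numbers measured from this run:

```python
# Rough sanity check: is ~230 MB of chain output plausible for a long vanilla run?
# Assumptions (not measured from the actual run): each chain line holds a weight,
# -log(like), and ~30 parameter/derived columns of ~12 formatted characters each.
columns_per_line = 2 + 30                # weight + (-logL) + assumed ~30 columns
bytes_per_line = columns_per_line * 12   # ~12 bytes per formatted number
total_bytes = 230e6                      # reported output size
chains = 6                               # one chain per MPI process

lines_total = total_bytes / bytes_per_line
lines_per_chain = lines_total / chains
print(f"~{lines_total:,.0f} accepted samples total, ~{lines_per_chain:,.0f} per chain")
```

Under these assumptions that works out to roughly 10^5 accepted samples per chain, which is large but not obviously pathological; the chains may simply be far past convergence and still appending because no stopping criterion has been hit.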
My params ini file:
DEFAULT(batch1/CAMspec_defaults.ini)
DEFAULT(batch1/lowl.ini)
DEFAULT(batch1/lowLike.ini)
#planck lensing
DEFAULT(batch1/lensing.ini)
#Other Likelihoods
DEFAULT(batch1/BAO.ini)
#DEFAULT(batch1/HST.ini)
#DEFAULT(batch1/Union.ini)
#DEFAULT(batch1/SNLS.ini)
#DEFAULT(batch1/WiggleZ_MPK.ini)
#DEFAULT(batch1/MPK.ini)
#general settings
DEFAULT(batch1/common_batch1.ini)
#high for new runs
MPI_Max_R_ProposeUpdate = 30
propose_matrix= planck_covmats/base_planck_lowl_lowLike.covmat
high_accuracy_default=T
start_at_bestfit =F
feedback=1
use_fast_slow = T
num_threads=6
checkpoints=T
#sampling_method=7 is a new fast-slow scheme good for Planck
sampling_method = 1
dragging_steps = 3
propose_scale = 2
indep_sample=1
use_clik=T
#Folder where files (chains, checkpoints, etc.) are stored
root_dir = chains/
#Root name for files produced
file_root=output/V2
action = 0
#these are just small speedups for testing
get_sigma8=F
#Uncomment this if you don't want one 0.06eV neutrino by default
#num_massive_neutrinos=3
#param[mnu] = 0 0 0 0 0
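Since checkpoints=T, one way to tell whether the run is still making useful progress (rather than looping pointlessly) is to watch the convergence statistic rather than the file size. A minimal sketch, assuming CosmoMC is writing a <file_root>.converge_stat file containing the worst R-1 value; the path below is guessed from the root_dir/file_root settings above and may need adjusting:

```python
from pathlib import Path

# Hypothetical path assembled from the ini above (root_dir = chains/,
# file_root = output/V2); CosmoMC appends .converge_stat when checkpointing.
stat_file = Path("chains/output/V2.converge_stat")

def worst_r_minus_1(path: Path) -> float:
    """Return the last R-1 convergence statistic CosmoMC wrote to disk."""
    return float(path.read_text().split()[0])

if stat_file.exists():
    r = worst_r_minus_1(stat_file)
    # A vanilla run is commonly considered done once R-1 drops to ~0.01-0.03.
    print(f"R-1 = {r:.4f}", "-> converged" if r < 0.03 else "-> keep running")
else:
    print("No converge_stat file found; check the feedback=1 output in the job log.")
```

If R-1 is already well below the stopping threshold, the run is fine and just long; setting an explicit MPI_Converge_Stop (or lowering it) would let the chains terminate on their own.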