Running Planck Simulation On CosmoMC?

Use of Cobaya, CAMB, CLASS, CosmoMC, compilers, etc.
Joseph Smidt
Posts: 7
Joined: May 05 2009
Affiliation: UC Irvine

Running Planck Simulation On CosmoMC?

Post by Joseph Smidt » August 06 2009

Hello. I am trying to run a Planck simulation with CosmoMC, using TT, EE, BB and TE data. I couldn't find much documentation on how to create a .newdat file for this, so I tried to follow the format CBI used: http://www.astro.caltech.edu/~tjp/CBI/d ... index.html

I therefore made planck_mock.newdat, whose columns are:
1) Band #
2) Power spectrum (l(l+1)C_l/2pi, in uK^2)
3) and 4) the lower and upper error bars.
5) noise spectrum, used in the offset-lognormal approximation.
6) and 7) the lower/upper ell range contained in the band
8) the iliketype flag for the band. 1 means use offset-lognormal, 0
means Gaussian.

I have tried to form the covariance matrix as [tex]{\rm Cov}_{\ell \ell'} = {2 \over (2 \ell+1) \Delta \ell f_{\rm sky}} {\mathbf C}_\ell^2 \delta_{\ell \ell'}[/tex], where [tex]{\mathbf C}_\ell = C_\ell + C_\ell^N[/tex], with [tex]C_\ell[/tex] the signal power spectrum and [tex]C_\ell^N[/tex] the noise power spectrum.

The normalized covariance matrices are just identity matrices, since I am not (yet) considering cross-correlations. The full covariance matrix at the end is not an identity matrix, however.
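
For concreteness, here is a minimal sketch of how the diagonal band entries could be computed from the formula above (this assumes Python with numpy; band_variance is just an illustrative name, not part of CosmoMC):

Code: Select all

import numpy as np

def band_variance(ells, cl, nl, fsky):
    # Per-multipole Knox variance: 2 / ((2l+1) f_sky) * (C_l + N_l)^2,
    # with cl and nl in the same l(l+1)C_l/2pi convention as the .newdat file.
    var_l = 2.0 / ((2.0 * ells + 1.0) * fsky) * (cl + nl) ** 2
    # Variance of the mean over the Delta_l multipoles in the bin, which
    # reproduces the 1/((2l+1) Delta_l f_sky) scaling of the formula above.
    return np.sum(var_l) / len(ells) ** 2
[/code]

Since the bins do not overlap, the band-band covariance is then diagonal with these variances, which is what goes into the full matrix at the end of the file.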

To be clearer, I am including a version of my planck_mock.newdat file. In it I have used very wide bins so that it fits in a reasonable amount of space; the one I am using for the simulations has much finer bins:

Code: Select all

planck_mock_
 4 4 4 0 4 0
BAND_SELECTION
 1 4
 1 4
 1 4
 0 0
 1 4
 0 0
1 1.00  0.0260
 0 0 0
 2 #iliketype
TT
 1 3158.2    8.9    8.9  1.280 0 500 1
 2 2027.4    3.3    3.3 13.488 500 1000 1
 3  914.7    1.3    1.3 74.377 1000 1500 1
 4  362.6    0.8    0.8 358.916 1500 2000 1
  1.000   0.000   0.000   0.000 
 0.000    1.000   0.000   0.000 
 0.000   0.000    1.000   0.000 
 0.000   0.000   0.000    1.000 
EE
 1  7.461  0.028  0.028  2.560 0 500 1
 2 23.670  0.083  0.083 26.976 500 1000 1
 3 21.672  0.216  0.216 148.754 1000 1500 1
 4 12.745  0.781  0.781 717.833 1500 2000 1
  1.000   0.000   0.000   0.000 
 0.000    1.000   0.000   0.000 
 0.000   0.000    1.000   0.000 
 0.000   0.000   0.000    1.000 
BB
 1  0.020  0.007  0.007  2.560 0 500 1
 2  0.069  0.044  0.044 26.976 500 1000 1
 3  0.070  0.188  0.188 148.754 1000 1500 1
 4  0.044  0.767  0.767 717.833 1500 2000 1
  1.000   0.000   0.000   0.000 
 0.000    1.000   0.000   0.000 
 0.000   0.000    1.000   0.000 
 0.000   0.000   0.000    1.000 
TE
 1  9.033  0.031  0.031  1.989 0 500 0
 2 -22.359  0.002  0.002 20.957 500 1000 0
 3 -34.649  0.102  0.102 115.565 1000 1500 0
 4 -11.722  0.584  0.584 557.674 1500 2000 0
  1.000   0.000   0.000   0.000 
 0.000    1.000   0.000   0.000 
 0.000   0.000    1.000   0.000 
 0.000   0.000   0.000    1.000 
 79.6966   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   11.0999   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   1.5645   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.5948   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0008   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0068   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0465   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.6098   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0001   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0019   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0354   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.5888   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0010   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0105   0.0000 
 0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.3405 
[/code]

The top of my params.ini file looks like this:

Code: Select all

#Sample parameters for cosmomc in default parameterization

#Root name for files produced
file_root = chains/planck_orig

#action = 0:  MCMC, action=1: postprocess .data file, action=2: find best fit point only
action = 0

#Maximum number of chain steps
samples = 200000

#Feedback level ( 2=lots,1=chatty,0=none)
feedback = 2

#Temperature at which to Monte-Carlo
temperature = 1

#filenames for CMB datasets and SZ templates (added to C_l times parameter(13))
#Note you may need to change lmax in cmbtypes.f90 to use small scales (e.g. lmax=2100)
cmb_numdatasets = 1
cmb_dataset1 = data/planck_mock.newdat
cmb_dataset_SZ1 = data/WMAP_SZ_VBand.dat
cmb_dataset_SZ_scale1 = 1
#filenames for matter power spectrum datasets, incl twodf
mpk_numdatasets = 0
mpk_dataset1 = data/sdss_lrgDR4.dataset
#mpk_dataset1 = data/2df_2005.dataset

#if true, use HALOFIT for non-linear corrections (astro-ph/0207664).
#note lyman-alpha (lya) code assumes linear spectrum
nonlinear_pk = F

use_CMB = T
use_HST = F
use_mpk = F
use_clusters = F
use_BBN = F
use_Age_Tophat_Prior = T
use_SN = F
use_lya = F
use_min_zre = 0

Running CosmoMC gives this error:

Code: Select all

 Matrix_Inverse: very small diagonal
 MpiStop:            1
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1027369599.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
 has pols:  TT EE BB TE
 Matrix_Inverse: very small diagonal
 MpiStop:            0
 has pols:  TT EE BB TE
 Matrix_Inverse: very small diagonal
 MpiStop:            2
 has pols:  TT EE BB TE
 Matrix_Inverse: very small diagonal
 MpiStop:            3
This seems to suggest that some of the diagonal elements of the covariance matrix are too small.
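
As a sanity check on that guess, something like the following could be used to inspect the full covariance block before it is written out (again Python with numpy; cov would be the 16x16 array at the end of the file):

Code: Select all

import numpy as np

def check_covariance(cov):
    # A zero (or nearly zero) diagonal entry makes the matrix singular,
    # which is what the Matrix_Inverse "very small diagonal" error suggests.
    diag = np.diag(cov)
    print("min diagonal entry:", diag.min())
    print("condition number:  ", np.linalg.cond(cov))
    return diag.min() > 0.0
[/code]

In the wide-binned example above, for instance, the 14th diagonal entry (the second TE band) is printed as 0.0000, presumably truncated by the fixed-point format, and a check like this would flag it.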

Does anyone have any advice on how to fix this? Is my .newdat file incorrect? Is the equation I am using for the covariance matrix wrong? Is there something special I need to do in my params.ini file for a mock run like this? Anything else?

Thanks.

Joseph Smidt
Posts: 7
Joined: May 05 2009
Affiliation: UC Irvine

Running Planck Simulation On CosmoMC?

Post by Joseph Smidt » August 06 2009

Okay, the diagonal problem seems to be fixed. It appears the numbers need to be written in scientific notation, with an explicit "e" exponent, for them to be read correctly.
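
A rough sketch of the formatting that works (Python with numpy assumed; write_band and write_matrix are just illustrative helper names):

Code: Select all

import numpy as np

def write_band(f, i, cl, err, noise, lmin, lmax, iliketype):
    # One band line: band #, bandpower, -error, +error, noise, l_min, l_max, iliketype.
    f.write("%4d %13.5e %13.5e %13.5e %13.5e %6d %6d %2d\n"
            % (i, cl, err, err, noise, lmin, lmax, iliketype))

def write_matrix(f, cov):
    # Covariance rows in scientific notation, so that small entries are not
    # truncated to 0.0000 as they were with a fixed-point format.
    for row in np.atleast_2d(cov):
        f.write(" ".join("%13.5e" % x for x in row) + "\n")
[/code]

With the file written this way, I now get this error: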

Code: Select all

 Reionization_zreFromOptDepth: Did not converge to optical depth
 tau =  0.482418577432752      optical_depth =   0.504062950611115
   40.0000000000000        39.9987792968750
 MpiStop:            2
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
with errorcode 36185104.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 2908 on
node compute-0-4 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
Again, any advice is welcome.
