Planck data with running spectral index.

Use of Cobaya, CAMB, CLASS, CosmoMC, compilers, etc.
Post Reply
Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

Planck data with running spectral index.

Post by Gansukh Tumurtushaa » March 20 2014

Dear all,

I am working on building an inflationary model and have obtained results for the tensor-to-scalar ratio and the spectral indices.
To finalize the project, I need to compare our theoretical predictions with the Planck observational data, producing something like Fig. 1 of the following paper: http://arxiv.org/pdf/1303.5082v2.pdf . Am I right?

However, I don't know how to obtain constraints like those shown in Fig. 1, so I am now running CosmoMC with the Planck data.
Is running CosmoMC with the Planck data the right approach for my purpose?
Please correct me if I am wrong, or give me further advice on how to do this if I am on the right path.

Alternatively, is the Planck data for the tensor-to-scalar ratio versus the spectral index with running available as a text file? I mean the data corresponding to the red contour in Fig. 4 of the paper above, so that one could plot the contours in Mathematica with the ListPlot command and compare them directly with the theoretical prediction.

I hope my question is clear.

Please comment on this.
Thank you in advance.

Best wishes,
Gansukh

Eric Sabo
Posts: 4
Joined: March 30 2014
Affiliation: University of Delaware

Planck data with running spectral index.

Post by Eric Sabo » March 30 2014

To answer your question: yes, you do need to compare your model to data. While it is best to use likelihood plots such as those in the paper you cited, many inflationary papers simply plot the inflationary observables without the likelihoods; they then quote the current experimental bounds and point out that the model passes through the allowed region in their plots. That being said, I am also interested in using CosmoMC to generate the likelihood plots, which would certainly make any plot of inflationary observables stronger. Hopefully somebody can point us in the right direction on using CosmoMC for this purpose.

Eric

Antony Lewis
Posts: 1941
Joined: September 23 2004
Affiliation: University of Sussex
Contact:

Re: Planck data with running spectral index.

Post by Antony Lewis » March 30 2014

To download and use existing Planck chains see instructions at

http://cosmologist.info/cosmomc/readme_planck.html

For instructions to make new chains and plots using cosmomc see

http://cosmologist.info/cosmomc/readme_python.html

Jason Dossett
Posts: 97
Joined: March 19 2010
Affiliation: The University of Texas at Dallas
Contact:

Planck data with running spectral index.

Post by Jason Dossett » March 30 2014

You can, of course, download CosmoMC and run it yourself to place constraints on different combinations of cosmological parameters. If you simply want to quickly reproduce the plots in the Planck paper for your own publications, however, you can also download the Planck chains from:
http://www.sciops.esa.int/wikiSI/planck ... Public_PLA.

You can run GetDist on these chains to produce the plots you want. GetDist is included in the CosmoMC package; after downloading CosmoMC, run:

Code: Select all

make getdist
After that you can use the included python scripts or your own plotting routines to produce plots of the parameter constraints.

-Jason

Eric Sabo
Posts: 4
Joined: March 30 2014
Affiliation: University of Delaware

Planck data with running spectral index.

Post by Eric Sabo » March 31 2014

Thank you for your reply. I have seen these instructions. I am not entirely fluent in the statistical terminology of chains, priors, etc., but I think I understand enough to follow the links. I am currently waiting for my university to renegotiate their Intel contract so they can upgrade from ifort 13 to 14. I assume CosmoBox will not be upgraded, since ifort requires a license?

Eric

Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

Planck data with running spectral index.

Post by Gansukh Tumurtushaa » March 31 2014

Thank you all for the reply,

Using the Planck chains, I have plotted a figure and uploaded it here: https://www.dropbox.com/s/h3u9z1s30p0ixka/nrun_r.pdf . I am happy with my progress, thanks to your helpful comments.
But I have not yet accomplished my goal: I still need to compare my prediction (an inflationary model) with the observational data.

The corresponding python script for the figure is:

Code: Select all

import planckStyle as s
from pylab import *

g = s.getSinglePlotter()
g.settings.colorbar_rotation = -90
g.settings.lab_fontsize = 10

roots = ['base_nrun_r_planck_lowl_lowLike_post_BAO', 'base_nrun_r_planck_lowl_lowLike_highL']
params = g.get_param_array(roots[0], ['ns', 'r'])

g.setAxes(params)
g.settings.lw_contour = 0.6
g.add_2d_contours(roots[0], params[0], params[1], filled=True, color='#ff0000')
g.add_2d_contours(roots[1], params[0], params[1], filled=True, color='#0000ff')

g.add_x_marker(0)
ylim([0, 0.4])
xlim([0.925, 1])
ylabel(r'Tensor-to-Scalar Ratio ($r$)')
xlabel(r'Primordial Tilt ($n_{s}$)')
g.rotate_yticklabels()
# rotate_yticklabels() already rotates the labels; this loop is a redundant fallback
for ticklabel in gca().yaxis.get_ticklabels():
    ticklabel.set_rotation("vertical")

g.add_legend([s.WP + '+BAO' + r' ($dn_{s}/d\ln k$)', s.WP + '+highL' + r' ($dn_{s}/d\ln k$)'], legend_loc='upper right', colored_text=True)
g.export('nrun_r.pdf')

I think that the script reads 'base_nrun_r_planck_lowl_lowLike_post_BAO' and 'base_nrun_r_planck_lowl_lowLike_highL' from somewhere; I guess they are located in the plot_data/ directory inside the PLA directory, but I am not sure. Am I right?

By the way, are they both data files?
If so, where can I inspect their contents: in the plot_data directory in PLA, or somewhere else?
I looked in the /PLA/plot_data/ directory and found some data files with those names, but their contents are quite different from what I expected, so I am a little confused.

I am sorry if I am asking basic questions, but no one at our institute knows about these things, which is why I am asking for details.
Anyway, thank you all once again, and I hope to hear from you soon.

Best,
Gansukh Tumurtushaa

Jason Dossett
Posts: 97
Joined: March 19 2010
Affiliation: The University of Texas at Dallas
Contact:

Planck data with running spectral index.

Post by Jason Dossett » March 31 2014

The general data format produced by GetDist for plotting 2D contours (these are the output files in plot_data/) is:

rootname_2D_AA_BB
rootname_2D_AA_BB_cont
rootname_2D_AA_BB_x
rootname_2D_AA_BB_y

rootname is the base name of the file, such as "base_nrun_r_planck_lowl_lowLike_post_BAO".

2D indicates that the data file is for the 2D plot, not the 1D one.
AA and BB are the names of the variables you want to plot, using the CosmoMC naming conventions from params_CMB.paramnames.

The _x file contains the values of the BB variable, giving essentially a vector BB[j].
The _y file is the same as _x but for the AA variable, giving AA[i].
The AA_BB file is roughly a table of normalized probability values, say z, in the form z[i,j] = Prob(AA[i], BB[j])/maxProb.
Finally, the _cont file contains the z values for the 68%, 95%, and 99% confidence regions.
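If it helps, here is a rough numpy sketch (not official GetDist code) of reading one such file set. The files are assumed to be plain whitespace-separated text, as GetDist writes them; the root name in the commented example is the one from this thread.

```python
# Rough sketch, not official GetDist code: read one rootname_2D_AA_BB file set.
import numpy as np

def load_2d_plot_data(root):
    """Return (x, y, z, cont) for a GetDist 2D plot_data file set."""
    x = np.loadtxt(root + "_x")        # BB values: the x axis
    y = np.loadtxt(root + "_y")        # AA values: the y axis
    z = np.loadtxt(root)               # z[i, j] = Prob(AA[i], BB[j]) / maxProb
    cont = np.loadtxt(root + "_cont")  # z thresholds for the confidence regions
    assert z.shape == (y.size, x.size), "grid and axis sizes should agree"
    return x, y, z, cont

# e.g.:
# x, y, z, cont = load_2d_plot_data(
#     "plot_data/base_nrun_r_planck_lowl_lowLike_highL_2D_r_ns")
```

The shape check makes the row/column convention explicit: one row of z per _y entry, one column per _x entry.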

Hope that helps,
Jason

Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

Planck data with running spectral index.

Post by Gansukh Tumurtushaa » March 31 2014

Thank you for your reply.
Could you explain the z[i,j] values in more detail, or point me to some references about them?
Actually, I don't understand what the z values do. How do we use them?

Jason Dossett
Posts: 97
Joined: March 19 2010
Affiliation: The University of Texas at Dallas
Contact:

Planck data with running spectral index.

Post by Jason Dossett » March 31 2014

z is a table of values of the probability density function. Considering the bivariate case:

Code: Select all

z[i,j] = f(AA[i], BB[j])
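As a toy illustration of this indexing convention (invented numbers, not Planck data): the row index i runs over the _y values (AA) and the column index j over the _x values (BB):

```python
# Toy illustration of the z[i, j] indexing (invented numbers, not real data).
import numpy as np

x = np.array([0.94, 0.96, 0.98])  # BB values, as read from the _x file
y = np.array([0.0, 0.2])          # AA values, as read from the _y file

# A made-up density f(AA, BB) = AA + BB, tabulated so that z[i, j] = y[i] + x[j]:
z = y[:, None] + x[None, :]

print(z.shape)  # (2, 3): one row per _y entry, one column per _x entry
print(z[1, 2])  # the density at AA = y[1] = 0.2, BB = x[2] = 0.98
```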

Gansukh Tumurtushaa
Posts: 34
Joined: October 05 2013
Affiliation: Sogang univ.

Planck data with running spectral index.

Post by Gansukh Tumurtushaa » April 01 2014

I have opened the files
1. base_nrun_r_planck_lowl_lowLike_highL_2D_r_ns
2. base_nrun_r_planck_lowl_lowLike_highL_2D_r_ns_cont
3. base_nrun_r_planck_lowl_lowLike_highL_2D_r_ns_x
4. base_nrun_r_planck_lowl_lowLike_highL_2D_r_ns_y
from plot_data/ to better understand what you explained earlier:
"The AA_BB file is roughly a table of normalized probability values, say z, in the form z[i,j] = Prob(AA[i], BB[j])/maxProb. Finally, the _cont file contains the z values for the 68%, 95%, and 99% confidence regions."

But I am a bit confused, because the numerical values in the _x and _y files do not match those of the n_s-r contours ( https://www.dropbox.com/s/3znm4qv3xawq39u/nrun_r.png ).
I think the z values from the _cont file, for the 68% and 95% confidence regions, may be what is used to plot the n_s-r contours, but I am not sure; please comment on this.
Do we use the values in the _x and _y files to plot the n_s-r contour directly, or only as an intermediate step? It seems those values are not used directly in the n_s-r plot. How are they used?

Thank you.

Jason Dossett
Posts: 97
Joined: March 19 2010
Affiliation: The University of Texas at Dallas
Contact:

Planck data with running spectral index.

Post by Jason Dossett » April 01 2014

Hi,

The _x and _y files contain the values of the BB and AA variables that define the x and y axes.

You are correct, the values in _cont are the z-values that define the 68% and 95% confidence regions.

The _x and _y files and the base file containing what I have called z are used to produce a contour plot. You can then use _cont to draw lines on the contour plot marking the confidence regions. Essentially, you have a 3D grid of (x, y, z) that you are plotting as a contour plot on the x-y plane. You can interpolate on this grid and draw a line connecting all the points at a given z value from _cont for a particular confidence region; in this way you can plot the confidence contours.
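For what it's worth, here is a minimal matplotlib sketch of that procedure (not the official GetDist/planckStyle plotting code), using a synthetic grid in place of the real plot_data files; the comment shows how you would load the actual files instead, and the contour levels 0.32 and 0.05 are just placeholders for the values in _cont:

```python
# Minimal sketch (not the official GetDist/planckStyle plotting code) of
# drawing confidence contours from the (x, y, z, cont) grid described above.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render to file without a display
import matplotlib.pyplot as plt

def plot_confidence_contours(x, y, z, cont_levels, outfile):
    """x: BB values (_x), y: AA values (_y),
    z: table with z[i, j] = Prob(y[i], x[j]) / maxProb,
    cont_levels: z thresholds, as found in the _cont file."""
    fig, ax = plt.subplots()
    ax.contourf(x, y, z, levels=30, cmap="Reds")                 # filled density
    ax.contour(x, y, z, levels=sorted(cont_levels), colors="k")  # region boundaries
    ax.set_xlabel(r"$n_s$")
    ax.set_ylabel(r"$r$")
    fig.savefig(outfile)
    plt.close(fig)

# Synthetic stand-in data; with real files you would instead load, e.g.,
# x = np.loadtxt(root + "_x"), y = np.loadtxt(root + "_y"),
# z = np.loadtxt(root), cont = np.loadtxt(root + "_cont").
x = np.linspace(0.925, 1.0, 60)
y = np.linspace(0.0, 0.4, 50)
X, Y = np.meshgrid(x, y)
z = np.exp(-((X - 0.96) ** 2 / 2e-4 + (Y - 0.1) ** 2 / 5e-3))
plot_confidence_contours(x, y, z, [0.32, 0.05], "nrun_r_sketch.png")
```

matplotlib's contour routines do the grid interpolation Jason describes, so you only need to pass the z thresholds from _cont as the contour levels.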

Eric Sabo
Posts: 4
Joined: March 30 2014
Affiliation: University of Delaware

Planck data with running spectral index.

Post by Eric Sabo » April 17 2014

Okay, so my cluster has been updated and I'm trying to install the Planck chains. I have downloaded them from the website as in the readme, but when I run

Code: Select all

./waf configure --install_all_deps
it says
Waf: Run from a directory containing a file named 'wscript'
Did I download the wrong file? (I got the full 2.8 GB file.) I have searched the CosmoMC code for a wscript file and cannot find one. Perhaps one was supposed to be generated during the build, or to come in the PLA folder?

Any help would be appreciated.

Thanks,
Eric

Antony Lewis
Posts: 1941
Joined: September 23 2004
Affiliation: University of Sussex
Contact:

Re: Planck data with running spectral index.

Post by Antony Lewis » April 17 2014

You don't need to install clik (the Planck likelihood wrapper) if you just want to use the chains. See

http://cosmologist.info/cosmomc/readme_planck.html

Of course if you use the provided python scripts you don't need to access GetDist's files directly either - the python scripts do all that for you.

Eric Sabo
Posts: 4
Joined: March 30 2014
Affiliation: University of Delaware

Planck data with running spectral index.

Post by Eric Sabo » April 18 2014

Thank you for your reply. The reason for my previous post is that I am having errors following the instructions at the link you provided.

When I download the 2.8 GB Planck data file and run

Code: Select all

python python/makeGrid.py PLA settings_planck_nominal
it produces the error

Code: Select all

Traceback (most recent call last):
  File "python/makeGrid.py", line 40, in <module>
    batch.makeItems(settings.groups)
  File "/lustre/work/shafi_hep/sw/cosmomc/python/batchJob.py", line 227, in makeItems
    item.makeImportance(group.importanceRuns)
  File "/lustre/work/shafi_hep/sw/cosmomc/python/batchJob.py", line 79, in makeImportance
    if len(arr) > 2 and not arr[2].wantImportance(self): continue
  File "/lustre/work/shafi_hep/sw/cosmomc/python/settings_planck_nominal.py", line 50, in wantImportance
    return planck in jobItem.dataname_set and (not 'omegak' in jobItem.param_set or (len(jobItem.param_set) == 1))
AttributeError: jobItem instance has no attribute 'dataname_set'
However,

Code: Select all

python python/makeGrid.py
works without error. So I am trying to understand how the PLA folder is supposed to be set up.


Post Reply