COBAYA: Issue with chains not converging

Use of Cobaya. camb, CLASS, cosmomc, compilers, etc.
Gabriele Autieri
Posts: 4
Joined: August 06 2024
Affiliation: SISSA

COBAYA: Issue with chains not converging

Post by Gabriele Autieri »

Hello everyone, I have been trying some Cobaya runs to understand how the program works.
For example, I ran with the following .yaml file, taken from the Cobaya input generator (except that I removed the Planck likelihoods):

Code:

theory:
  camb:
    extra_args:
      halofit_version: mead
      bbn_predictor: PArthENoPE_880.2_standard.dat
      lens_potential_accuracy: 1
      num_massive_neutrinos: 1
      nnu: 3.044
      theta_H0_range:
      - 20
      - 100
likelihood:
  bao.sixdf_2011_bao: null
  bao.sdss_dr7_mgs: null
  bao.sdss_dr12_consensus_bao: null
params:
  logA:
    prior:
      min: 1.61
      max: 3.91
    ref:
      dist: norm
      loc: 3.05
      scale: 0.001
    proposal: 0.001
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
  ns:
    prior:
      min: 0.8
      max: 1.2
    ref:
      dist: norm
      loc: 0.965
      scale: 0.004
    proposal: 0.002
    latex: n_\mathrm{s}
  theta_MC_100:
    prior:
      min: 0.5
      max: 10
    ref:
      dist: norm
      loc: 1.04109
      scale: 0.0004
    proposal: 0.0002
    latex: 100\theta_\mathrm{MC}
    drop: true
    renames: theta
  cosmomc_theta:
    value: 'lambda theta_MC_100: 1.e-2*theta_MC_100'
    derived: false
  H0:
    latex: H_0
    min: 20
    max: 100
  ombh2:
    prior:
      min: 0.005
      max: 0.1
    ref:
      dist: norm
      loc: 0.0224
      scale: 0.0001
    proposal: 0.0001
    latex: \Omega_\mathrm{b} h^2
  omch2:
    prior:
      min: 0.001
      max: 0.99
    ref:
      dist: norm
      loc: 0.12
      scale: 0.001
    proposal: 0.0005
    latex: \Omega_\mathrm{c} h^2
  omegam:
    latex: \Omega_\mathrm{m}
  omegamh2:
    derived: 'lambda omegam, H0: omegam*(H0/100)**2'
    latex: \Omega_\mathrm{m} h^2
  mnu: 0.06
  omega_de:
    latex: \Omega_\Lambda
  YHe:
    latex: Y_\mathrm{P}
  Y_p:
    latex: Y_P^\mathrm{BBN}
  DHBBN:
    derived: 'lambda DH: 10**5*DH'
    latex: 10^5 \mathrm{D}/\mathrm{H}
  tau:
    prior:
      min: 0.01
      max: 0.8
    ref:
      dist: norm
      loc: 0.055
      scale: 0.006
    proposal: 0.003
    latex: \tau_\mathrm{reio}
  zrei:
    latex: z_\mathrm{re}
  sigma8:
    latex: \sigma_8
  s8h5:
    derived: 'lambda sigma8, H0: sigma8*(H0*1e-2)**(-0.5)'
    latex: \sigma_8/h^{0.5}
  s8omegamp5:
    derived: 'lambda sigma8, omegam: sigma8*omegam**0.5'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.5}
  s8omegamp25:
    derived: 'lambda sigma8, omegam: sigma8*omegam**0.25'
    latex: \sigma_8 \Omega_\mathrm{m}^{0.25}
  A:
    derived: 'lambda As: 1e9*As'
    latex: 10^9 A_\mathrm{s}
  clamp:
    derived: 'lambda As, tau: 1e9*As*np.exp(-2*tau)'
    latex: 10^9 A_\mathrm{s} e^{-2\tau}
  age:
    latex: '{\rm{Age}}/\mathrm{Gyr}'
  rdrag:
    latex: r_\mathrm{drag}
sampler:
  mcmc:
    drag: true
    oversample_power: 0.4
    proposal_scale: 1.9
    covmat: auto
    Rminus1_stop: 0.01
    Rminus1_cl_stop: 0.2
output: output/bao_test
I ran this on a cluster with 4 MPI processes for 48 hours and the chains did not converge! (I had about 150,000 accepted steps.)
The computed R-1 oscillated a lot and never dropped below ~5.
Furthermore, the .progress file in the output is always empty; no information about the convergence is ever written into it.
Progress can be seen in the standard Cobaya output, however, where I can see the chains advancing and R-1 being computed.
I would like to understand whether there is anything wrong with the way I am setting up the run, or whether there is some other issue.

Thank you!
Antony Lewis
Posts: 1980
Joined: September 23 2004
Affiliation: University of Sussex

Re: COBAYA: Issue with chains not converging

Post by Antony Lewis »

You have a lot of completely unconstrained parameters, because there is no data in this run that can constrain them (logA, ns, ...).
Your proposal width for e.g. logA is tiny (0.001), much smaller than the prior width, so it will take a very long time to explore the whole unconstrained prior range. Normally, if you have proposal learning on, that would be corrected over time.
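For example, one way to make a BAO-only run like this converge is not to sample the parameters the data cannot constrain at all, but to fix them to fiducial values (alternatively, set their proposal widths to something comparable to the prior widths and rely on proposal learning). A minimal sketch of the relevant part of the params block, with purely illustrative fiducial values:

Code:

params:
  logA:
    value: 3.05   # fixed: BAO alone cannot constrain A_s
    drop: true
    latex: \log(10^{10} A_\mathrm{s})
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
  ns:
    value: 0.965  # fixed: unconstrained by BAO
    latex: n_\mathrm{s}
  tau:
    value: 0.055  # fixed: unconstrained by BAO
    latex: \tau_\mathrm{reio}
With only the background parameters left free, the chains should mix and R-1 should come down much faster.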