Thank you for your response!
To clarify, I get results that differ from the LCDM results when I specify "b" in the "extra_args" section of the input YAML file like so:
Code: Select all
theory:
  camb:
    path: .../cobaya/cosmo_packages/code/CAMB_modified_2
    extra_args:
      halofit_version: mead
      bbn_predictor: PArthENoPE_880.2_standard.dat
      lens_potential_accuracy: 1
      num_massive_neutrinos: 1
      nnu: 3.044
      theta_H0_range:
        - 20
        - 100
      b: [-1,-0.9,-0.8,-0.9,-1]
However, if I don't specify "b" in the YAML file, the results I get are identical to the LCDM results, even though the definition of set_params() already contains defaults for both "a" and "b":
Code: Select all
def set_params(self, a=[0.2,0.4,0.6,0.8,1], b=[-1,-0.9,-0.8,-0.9,-1], w=-1, wa=0, cs2=1.0)
In any case, I don't need to specify "a" in the YAML file to get results that differ from LCDM.
- Why doesn't Cobaya read "b" from its default in the definition of set_params(), so that I have to specify "b" manually in the YAML file to get the expected results?
- And why does Cobaya read "a" from its default in set_params(), so that I don't need to specify it in the YAML file?
Next, when I try to specify "b" in the "params" section of the YAML file, I get the following error:
[model] *ERROR* Could not find anything to use input parameter(s) {'b'}.
Now, regarding the vector parameters: in order to run the following,
Code: Select all
theory:
  camb:
    path: .../cobaya/cosmo_packages/code/CAMB_modified_4
    extra_args:
      halofit_version: mead
      bbn_predictor: PArthENoPE_880.2_standard.dat
      lens_potential_accuracy: 1
      num_massive_neutrinos: 1
      nnu: 3.044
      theta_H0_range:
        - 20
        - 100
      x:
        prior:
          min: 0
          max: 1
        proposal: 0.1
        drop: true
      b:
        value: 'lambda x: [-1*x,-1*x,-1*x,-1*x,-1*x]'
        derived: false
I had to modify set_params() so that it accepts a dictionary for "b":
Code: Select all
def set_params(self, a=[0.2,0.4,0.6,0.8,1], b=[-1,-0.9,-0.8,-0.9,-1], w=-1, wa=0, cs2=1.0):
    """
    Set the parameters so that P(a)/rho(a) = w(a) = w + (1-a)*wa

    :param a: scale-factor grid passed to set_w_a_table
    :param b: tabulated values passed to set_w_a_table along with a
    :param w: w(0)
    :param wa: -dw/da(0)
    :param cs2: fluid rest-frame sound speed squared
    """
    # If the YAML entry for b arrives as a dictionary, use its 'value' field
    if isinstance(b, dict):
        b = b['value']
    self.w = w
    self.a = a
    self.b = b
    self.wa = wa
    self.cs2 = cs2
    self.validate_params()
    self.set_w_a_table(a, b)
When I run it with "--test", I get:
[mcmc] Getting initial point... (this may take a few seconds)
[model] *ERROR* Could not find random point giving finite posterior after 1080 tries
Upon checking with "--debug", I think the problem is with the "lambda" function of "b":
2024-06-12 18:40:01,358 [camb.transfers] Ignored error at evaluation and assigned 0 likelihood (set 'stop_at_error: True' as an option for this component to stop here and print a traceback). Error message: ValueError("could not convert string to float: 'lambda x: [-1*x,-1*x,-1*x,-1*x,-1*x]'")
2024-06-12 18:40:01,358 [model] Calculation failed, skipping rest of calculations
2024-06-12 18:40:01,358 [model] *ERROR* Could not find random point giving finite posterior after 1080 tries
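To make this concrete, here is a minimal standalone snippet that reproduces the same ValueError. It relies on my assumption (which may well be wrong) that the "b" entry from extra_args reaches my modified set_params() as a plain dictionary, with the lambda still stored as an unevaluated string:
Code: Select all
import numpy as np

# My assumption: roughly what my modified set_params() receives for "b"
# when it is declared under extra_args with the lambda string.
b = {'value': 'lambda x: [-1*x,-1*x,-1*x,-1*x,-1*x]', 'derived': False}

if isinstance(b, dict):
    b = b['value']  # b is still the raw, unevaluated string at this point

try:
    # set_w_a_table eventually needs a numeric array, so the conversion fails:
    np.asarray(b, dtype=np.float64)
except ValueError as err:
    print(err)  # could not convert string to float: 'lambda x: ...'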
- Do you know what the problem is, and how I can fix it?
Lastly, I do have a camb.py file, but I didn't touch it:
Code: Select all
#!/usr/bin/env python
# Python command line CAMB reading parameters from a .ini file
# an alternative to fortran binary compiled into fortran/camb using "make camb".
# To use .ini files from a python script use camb.run_ini or camb.read_ini
# If you have installed the camb package, you can just use "camb params.ini" without using this script.
from camb._command_line import run_command_line
run_command_line()
- Should I modify this as well? What is this file, and what is its function?
Thanks a lot; your help is valued :)