
[CosmoMC] Delays in Planck 2018 with same 15 configuration

Posted: September 03 2019
by Luke Hart
Dear all,

I apologise for the various posts, but I think this one might capture some issues whose resolution could help others as well.

I've ported all the batch3 files and installed the Planck 2018 likelihood, as recommended by the Plik readme and Antony, and have attempted runs of some parameter scenarios that I previously ran with 2015 data, now using the 2018 data. I have noticed that these runs are taking suspiciously longer than the 2015 runs: for example, a run that would have taken a day with Planck 2015 doesn't even get below a convergence (R-1) of 1 after a day in the 2018 case.

A couple of differences which I'm not sure are potential causes:
- With 2015 I used lowTEB instead of the Commander lowl + lowE combination used now.
- I am still using the base Planck TTTEEE+lowTEB+lensing covmat for the proposal distribution (roughly the setup sketched below).
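
For context, the relevant part of my test .ini looks roughly like this; the include and covmat filenames are illustrative only and will depend on the local batch3 checkout:

    # 2018 high-l TTTEEE plus the new low-l likelihoods (example names)
    DEFAULT(batch3/plik_rd12_HM_v22_TTTEEE.ini)
    DEFAULT(batch3/lowl.ini)
    DEFAULT(batch3/simall_EE.ini)
    DEFAULT(batch3/lensing.ini)

    # still pointing the proposal at the 2015-era covariance matrix
    propose_matrix = planck_covmats/base_TTTEEE_lowTEB_lensing.covmat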

Otherwise I don't really know what could be causing the slowdown.

Any ideas would be gratefully received :)

Luke

Re: [CosmoMC] Delays in Planck 2018 with same 15 configuration

Posted: September 03 2019
by Antony Lewis
If you are using an old CAMB version, the HMcode implementation (rather than Takahashi halofit) may be slowing it down a bit, but otherwise I think it should be roughly the same speed.
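
As a rough check (a sketch only; the parameter name and values assume the standard Fortran CAMB params.ini conventions, and how it is exposed in your CosmoMC setup may differ), you can time the same run with each non-linear model:

    # CAMB params.ini non-linear fitting formula:
    # 4 = Takahashi halofit, 5 = HMcode (Mead et al.)
    halofit_version = 4

If the timing gap disappears with Takahashi, the HMcode implementation is the likely cause.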

Re: [CosmoMC] Delays in Planck 2018 with same 15 configuration

Posted: September 03 2019
by Luke Hart
I just checked my runtimes, and I had wildly underestimated how long the old ones took, sorry Ant! Having said that, I've just had a discussion with someone here: do you think that the choice of proposal matrix could be having a significant impact?

Thanks
Luke

Re: [CosmoMC] Delays in Planck 2018 with same 15 configuration

Posted: September 04 2019
by Antony Lewis
Certainly if it's too large, and in general the less accurate the proposal matrix is, the longer the run will take.
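
As a rough sketch (the path below is just a placeholder): after a short initial run you can let getdist write a covariance matrix from those chains and re-point the proposal at it, e.g.

    # .covmat produced by getdist from an earlier chain for this parameter set
    propose_matrix = chains/base_2018_test.covmat

The closer that matrix is to the true posterior covariance, the more efficient the proposals will be.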

Re: [CosmoMC] Delays in Planck 2018 with same 15 configuration

Posted: September 13 2019
by Luke Hart
I thought I'd update this thread with how I've finally gotten it all to work properly.

I have completely abandoned ifort 17, as it just wasn't giving me sensible chain results, perhaps from some form of miscommunication in the MPI. I'm now using our cluster's default ifort 14 module. This works fine: it still builds the Fortran version of getdist, it still works with the 2018 likelihood, and I have finally been able to replicate the 2018 results.

I switch to anaconda3 for the likelihood installation (./waf etc.) and then switch back to Python 2.7 for running CosmoMC itself, because ifort 14 isn't compatible with Python >= 3.

I'm not sure if this helps anyone, but it seems that something isn't quite right with the Intel 17 compilers, at least on our cluster.

Luke