### cosmomc: problems "learning" proposal covariance

Posted: **October 27, 2005**

Dear All,

I have been trying to find an appropriate proposal distribution for a problem which has 4 new "fast" parameters (I do not use the standard fast parameters). I have set 8 chains running under MPI on a Beowulf cluster, with the following setup:

- a trial covmat that keeps the correlations between the standard slow parameters, with zeros for the fast parameters;
- some relevant cosmomc parameters:

```
estimate_propose_matrix = F
propose_scale = 2.4
sampling_method = 1
use_fast_slow = T
oversample_fast = 1
MPI_Converge_Stop = 0.03
MPI_StartSliceSampling = T
MPI_Check_Limit_Converge = F
MPI_LearnPropose = T
MPI_R_StopProposeUpdate = 0.4
```

There are a couple of recalcitrant parameters in the fast parameter space whose distributions are poorly known a priori (so I probably have bad initial guesses for their step sizes etc.).

After the requested 200000 samples, the chains stopped without the "stop propose update" criterion being satisfied.

When I plot the R-1 value of the worst eigenvalue printed in the output, it starts very high (around 20) and then steadily decreases to around 2 after about 100000 samples. After that, it suddenly starts shooting UP again. By the time the code stopped at 200000 samples, the worst R-1 had climbed back to ~9.
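For context on what that diagnostic measures: the MPI R-1 printed by cosmomc is an eigenvalue-based between-chain comparison, but the underlying idea is the Gelman-Rubin statistic. A minimal per-parameter sketch in Python/NumPy (not cosmomc's actual code, just the scalar version of the same idea) would be:

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin R statistic for a single parameter.

    chains: array of shape (m_chains, n_samples), one row per chain.
    R - 1 approaches 0 when all chains sample the same distribution.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return var_hat / W

# Two well-mixed chains drawn from the same Gaussian: R - 1 is near 0.
rng = np.random.default_rng(0)
good = rng.normal(0.0, 1.0, size=(2, 5000))
print(gelman_rubin(good) - 1.0)
```

If two chains sit around different means (as can happen while the proposal is still adapting badly), B dominates and R-1 is large, which is consistent with the behaviour above.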

Is this normal behaviour? Is there anything I can do to make the LearnProposeUpdate process work faster? And are these chains useful for calculating a new covmat for a new run, given that they do not constitute a Markovian process?
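For the last question, what I have in mind is something like the following post-run estimate: drop the early, adapting portion of each chain and form a weighted sample covariance from the remainder. This is just a NumPy sketch (not the cosmomc code; `burn_frac` is a knob I am inventing for illustration):

```python
import numpy as np

def covmat_from_chains(chains, burn_frac=0.5):
    """Estimate a proposal covariance from MCMC chain samples.

    chains: list of (weights, samples) pairs, one per chain, where
            weights has shape (n,) and samples has shape (n, n_params).
    Drops the first burn_frac of each chain (the adapting phase),
    then returns the weighted sample covariance of the rest.
    """
    ws, xs = [], []
    for w, x in chains:
        start = int(burn_frac * len(w))
        ws.append(w[start:])
        xs.append(x[start:])
    w = np.concatenate(ws)
    x = np.concatenate(xs)
    return np.cov(x, rowvar=False, aweights=w)
```

Even if the early samples are not Markovian, I would hope the late-time samples still give a usable covariance estimate for seeding a fresh run.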

Thanks a lot,

Hiranya
