Thomas Dent
Joined: 14 Nov 2006 Posts: 28 Affiliation: ITP Heidelberg

Posted: October 16 2007 


Jeong and Smoot claim to put f_{NL} = 0 outside the 95% confidence level simply by looking at the one-point function, which appears to be sensitive at the level of O(100).
A number of questions arise. First, why are the results of their Fig. 1 and Fig. 2 so different, if the main errors do in fact arise from instrumental noise? 
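For intuition on how small an effect this is at the one-point level, here is a toy sketch (not the authors' pipeline) using the local model Phi = phi + f_{NL}(phi^2 - <phi^2>). The map size, seed, and effective amplitude are all invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
npix = 200_000                       # toy "map" size (hypothetical, not WMAP)

# Local-model non-Gaussianity on a unit-variance Gaussian field phi:
#   Phi = phi + f_eff * (phi**2 - <phi**2>)
# For Phi ~ 1e-5 and f_NL ~ 100, the effective dimensionless amplitude
# is f_eff ~ f_NL * sigma_Phi ~ 1e-3.
def local_model(f_eff, phi):
    return phi + f_eff * (phi**2 - 1.0)

gauss  = local_model(0.0,  rng.standard_normal(npix))
ngauss = local_model(1e-3, rng.standard_normal(npix))

# Compare the two one-point (cumulative) distributions with a two-sample KS test.
ks = stats.ks_2samp(gauss, ngauss)
print("KS statistic:", ks.statistic, " p-value:", ks.pvalue)
```

The distortion of the one-point distribution is at the 10^{-3} level, comparable to the sampling noise even for a few times 10^5 pixels, which is why a claimed O(100) sensitivity from the one-point function alone is surprising.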



Ben Gold
Joined: 25 Sep 2004 Posts: 97 Affiliation: University of Minnesota

Posted: October 23 2007 


I don't have an answer, just another question: why doesn't figure 1 seem to agree with the constraints listed in table 2? Aren't they supposed to be the same results? 



alexandre amblard
Joined: 25 Sep 2004 Posts: 1 Affiliation: UC, Irvine

Posted: October 29 2007 


I think the Fig. 1 numbers are in Table 1 and the Fig. 2 numbers are in Table 2. The difference between the two is that in Fig. 2/Table 2 they take into account ("remove") the non-Gaussianity from noise when estimating f_{NL}. 



Kate Land
Joined: 27 Sep 2004 Posts: 29 Affiliation: Oxford University

Posted: October 29 2007 


Looks like they've done a one-tail rather than a two-tail probability in Figure 1, i.e. the red 68% region is defined with a χ^{2} upper limit such that P(χ^{2} < limit) = 68%. I would have thought that a two-tail probability was more meaningful, i.e. the 68% confidence region is defined by upper and lower limits around the mean/mode/median such that P(limit1 < χ^{2} < limit2) = 68%. This would widen those regions a little bit...
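For concreteness, the one-tail vs two-tail limits can be computed directly from the χ^{2} distribution; the number of degrees of freedom below is a placeholder, not taken from the paper:

```python
from scipy import stats

dof = 50                      # hypothetical number of bins; not from the paper

# One-tail 68% region: everything below the 68th percentile of chi^2_dof.
upper_one_tail = stats.chi2.ppf(0.68, dof)

# Two-tail (central) 68% region: cut 16% of probability from each side.
lower_two_tail = stats.chi2.ppf(0.16, dof)
upper_two_tail = stats.chi2.ppf(0.84, dof)

print(f"one-tail:  chi2 < {upper_one_tail:.1f}")
print(f"two-tail:  {lower_two_tail:.1f} < chi2 < {upper_two_tail:.1f}")
```

The two-tail upper limit always sits above the one-tail one, so the allowed f_NL regions would indeed widen on the high-χ^{2} side.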
Another thought: the best-fitting f_NL values in Figure 1 are outside the 68% region for the Q and the W band, indicating not a great goodness-of-fit. I'm wondering if the simulations generally found better fits... 



Hans Kristian Eriksen
Joined: 25 Sep 2004 Posts: 58 Affiliation: ITA, University of Oslo

Posted: October 29 2007 


Thomas Dent wrote:  A number of questions arise. First, why are the results of their Fig. 1 and Fig. 2 so different, if the main errors do in fact arise from instrumental noise? 
Yes, I have the same feeling – I'm left with a number of questions after reading this paper. First and foremost, how is it possible that the 1D cumulative distribution can be competitive with the bispectrum in terms of sensitivity? All tests I've ever seen of this have turned out negative for the 1D cumulative distribution. In fact, the 1D distribution is often used as an illustration that f_{NL} ~ O(100) is indeed a *tiny* effect, and that more sophisticated methods are required.
Second, I'm wondering how they computed the confidence regions. One odd feature is that the Q-band confidence region is smaller than the one for Q+V+W, which isn't really very intuitive. In general, from Figure 1 it appears that a worse fit (= higher reduced χ^{2} at the minimum point) implies smaller error bars. This may perhaps suggest that the confidence regions are computed from the theoretical prediction only, as if the best-fit point were indeed perfect (with reduced χ^{2} = 1), without first asking whether the fit is good in the first place. (In other words, a poor fit would automatically "imply" small error bars, because the total χ^{2} "rises" very rapidly – in fact, it starts out high.)
Still, even if this has been done correctly, it doesn't explain why the 1D distribution appears so sensitive, since that depends only on the width of the χ^{2} curves in Figure 1, not on the minimum value. It might be useful if somebody with access to good f_{NL} simulations repeated this experiment, to see if they can reproduce the results. 
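The distinction between the Δχ^{2} interval around the minimum and the goodness of fit of the minimum itself can be sketched on a made-up parabolic χ^{2}(f_NL) curve; every number here is hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical chi^2(f_nl) curve on a grid (a parabola for illustration).
f_nl = np.linspace(-300.0, 300.0, 601)
chi2_min, dof = 75.0, 50          # a poor best fit: reduced chi^2 = 1.5
chi2 = chi2_min + ((f_nl - 40.0) / 120.0) ** 2   # curvature sets the error bar

# 68% interval from Delta chi^2 < 1 (one parameter), measured from the
# minimum: its width depends only on the curvature, not on chi2_min itself.
inside = f_nl[chi2 - chi2.min() < 1.0]
print("68% interval:", inside.min(), "to", inside.max())

# Goodness of fit is a separate question: the probability to exceed (PTE)
# a chi2 this large at the minimum, given the degrees of freedom.
pte = stats.chi2.sf(chi2_min, dof)
print("goodness-of-fit PTE:", pte)
```

In this toy the fit is formally bad (small PTE) while the Δχ^{2} error bar is unaffected; quoting confidence regions from the absolute χ^{2} rather than from Δχ^{2} would conflate the two.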



Dominik Schwarz
Joined: 08 Sep 2005 Posts: 2 Affiliation: University of Bielefeld

Posted: November 19 2007 


The test assumes that foreground subtraction is perfect in each pixel. Imagine that the foreground subtraction is good in a typical pixel, but there are a few pixels (say 10%) in which it does not work. That would certainly mimic an f_{NL} signal. It seems to me that we have no reason to believe that foreground subtraction works at the level of individual pixels. It would be interesting to see how this test changes as one plays with N_{side}. 
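A toy version of this, in plain numpy rather than real HEALPix machinery: the 10% fraction and the residual amplitude are arbitrary choices, and averaging neighbouring pixels in groups of four stands in crudely for lowering N_{side}:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
npix = 400_000   # divisible by 4 so we can "degrade resolution" by averaging

# Gaussian sky plus a foreground residual in 10% of pixels (toy model;
# the 1-sigma residual amplitude is an arbitrary choice, not from the paper).
m = rng.standard_normal(npix)
bad = rng.random(npix) < 0.10
m[bad] += 1.0

print("skewness at full resolution:", stats.skew(m))

# Crude analogue of lowering N_side: average neighbouring pixels in groups
# of 4. A pixel-level residual dilutes under this averaging, while a true
# f_NL signal has a characteristic behaviour across scales.
m_low = m.reshape(-1, 4).mean(axis=1)
print("skewness after degrading:", stats.skew(m_low))
```

Even a residual confined to 10% of pixels produces a clear skewness in the one-point distribution, so a pixel-level foreground failure could indeed masquerade as non-Gaussianity in this kind of test.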



