  1. Feb 2019
    1. Cosmological inference from Bayesian forward modelling of deep galaxy redshift surveys

      This is an interesting paper that tries to set out a new way to use galaxy redshift survey data to determine cosmological parameters with great accuracy. The method described here is based on a novel application of the Alcock-Paczynski test using Bayesian machinery developed by some of these authors in previous papers. The results described in the paper (in particular in Figure 1) are eye-catching and would be revolutionary, if they could be applied to real data.

      Unfortunately, the paper oversells the method used, because it ignores redshift-space distortions produced by galaxy peculiar velocities, which is the main confounding issue with the use of the Alcock-Paczynski test.

      What the authors have demonstrated is that in a hypothetical universe in which galaxy peculiar velocities do not contribute to their observed redshifts, a hierarchical Bayesian inference method would be able to determine cosmological parameters very accurately through the AP test (but in this scenario traditional methods would work much better than currently as well).

      As currently presented, the method cannot work on real data. If it were corrected to account for RSD in order to make it applicable to real data, I don't think LPT would provide sufficient accuracy of RSD modelling on small scales to perform such an AP test, so the true obtainable constraints would be much weaker.


    1. We present a large-scale Bayesian inference framework to constrain cosmological parameters using galaxy redshift surveys, via an application of the Alcock-Paczyński (AP) test.

      The Alcock-Paczynski (AP) test essentially says: cosmological objects or features in the galaxy distribution that are intrinsically isotropic may appear distorted along the line-of-sight direction if the wrong cosmological model is used to convert observed redshifts into radial distances. Therefore, if an object or feature is known to be intrinsically isotropic but observed as apparently distorted, we know we've got the cosmology wrong, and can use this to determine the correct cosmology. In principle, this is a great way to measure combinations of the angular diameter distance \(D_A(z)\) and the expansion rate \(H(z)\) and so constrain things that affect these, such as \(\Omega_m\) and the dark energy equation of state \(w(z)\).
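      To make the distortion concrete, here is a minimal sketch of my own (not code from the paper) of the anisotropic rescaling induced by analysing data with a wrong \(\Omega_m\), using astropy's standard background-cosmology routines:

      ```python
      # Hedged illustration of the AP distortion: analysing data with the wrong
      # cosmology rescales transverse and line-of-sight separations differently.
      from astropy.cosmology import FlatLambdaCDM

      true_cosmo = FlatLambdaCDM(H0=70, Om0=0.31)     # the "real" universe
      assumed_cosmo = FlatLambdaCDM(H0=70, Om0=0.25)  # the (wrong) analysis model

      z = 0.8
      # Transverse direction: set by the angular diameter distance D_A(z).
      alpha_perp = (assumed_cosmo.angular_diameter_distance(z)
                    / true_cosmo.angular_diameter_distance(z)).value
      # Line-of-sight direction: set by the expansion rate H(z).
      alpha_par = (true_cosmo.H(z) / assumed_cosmo.H(z)).value

      # An intrinsically isotropic feature appears stretched by this ratio;
      # a value != 1 signals the wrong cosmology. This is the AP test.
      print(alpha_par / alpha_perp)
      ```

      The test thus constrains the combination \(D_A(z)H(z)\), which is what makes it sensitive to \(\Omega_m\) and \(w(z)\).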

    2. Our physical model of the non-linearly evolved density field, as probed by galaxy surveys, employs Lagrangian perturbation theory (LPT) to connect Gaussian initial conditions to the final density field,

      This sentence is a clue to the problem with this method.

      The biggest difficulty with using the galaxy distribution to perform an AP test is that it is not expected to be intrinsically isotropic even if we get the cosmology right. This is because galaxy peculiar velocities also affect their redshifts and therefore introduce another major source of anisotropies, known as redshift-space distortions (RSD). RSD normally dominates over the AP distortion, so we can't perform an AP test without precisely accounting for RSD.
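      For reference, the standard linear-theory form of this effect (textbook material, not taken from the paper): a galaxy at comoving position \(\mathbf{r}\) with line-of-sight peculiar velocity \(v_\parallel\) appears in redshift space at

      $$\mathbf{s} = \mathbf{r} + \frac{v_\parallel}{a\,H(z)}\,\hat{\mathbf{r}},$$

      which on large scales produces the Kaiser anisotropy \(P_s(k,\mu) = (b + f\mu^2)^2 P_m(k)\), with \(b\) the galaxy bias, \(f\) the growth rate and \(\mu\) the cosine of the angle to the line of sight. This velocity-induced anisotropy is degenerate with the AP distortion unless the velocity term is modelled.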

      But RSD modelling is hard.

      In particular, the evolution of the velocity field receives large non-linear corrections (e.g., Scoccimarro 2004), which is why perturbation theory does not work very well at describing RSD even at quite large scales. The LPT method used here can provide a reasonable description of the density evolution but would not be accurate enough for velocity fields and RSD modelling.

    3. hierarchical Bayesian inference machinery of BORG (Bayesian Origin Reconstruction from Galaxies) (Jasche & Wandelt 2013a), originally developed for the non-linear reconstruction of large-scale structures,

      This Bayesian inference mechanism is well-developed and is one of the exciting recent developments in cosmology. I have no complaints about this in general or in other applications and contexts.

    4. Comparison of cosmological constraints from BAO measurements and our implementation of the AP test in ALTAIR.

      This figure is absolutely astounding. If it were true that the ALTAIR algorithm could achieve this level of precision, and outperform BAO to this extent, it would truly be a revolution in large-scale structure analyses.

      However, it is not true, because ALTAIR ignores RSD complications (and this result is produced using a synthetic data set in which, by construction, RSD are absent). The BAO constraints, by contrast, are obtained from real data after marginalising over RSD terms. This is therefore a highly misleading comparison.

    5. A key component of the inference framework is the forward model \(\mathcal{M}_p\) which links the initial conditions \(\delta^{\mathrm{ic},(r)}_p\) to the redshift space representation of the evolved density field \(\delta^{\mathrm{f},(z)}_p\) as follows

      This Eqn. (1) and the schematic representation in Fig. 2 illustrate the problem with this forward-modelling approach.

      One component of the forward model evolves the initial conditions forward to give the non-linear final density field in comoving coordinates (in LPT), \(\delta^{\mathrm{f}(r)}_p\). This bit does not account for velocities or RSD.

      The second component of the forward model takes \(\delta^{\mathrm{f}(r)}_p\) and converts it to redshift space accounting only for the coordinate transformation from comoving to redshift space, i.e., still not accounting for peculiar velocities and the associated RSD. The detailed equations in Appendix B confirm this.
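      To spell out the distinction (a hypothetical sketch with illustrative numbers, not the paper's code): the observed redshift of a tracer at comoving distance \(r\) contains both the background mapping, which ALTAIR includes, and a peculiar-velocity term, which it omits:

      ```python
      # Sketch of the redshift mapping with and without peculiar velocities.
      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import brentq

      c = 299792.458          # speed of light [km/s]
      H0, Om = 70.0, 0.31     # assumed background cosmology

      def H(z):
          return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

      def comoving_distance(z):
          return quad(lambda x: c / H(x), 0.0, z)[0]   # [Mpc]

      def z_cosmo(r):
          # Invert r(z): this background-only mapping is what ALTAIR models.
          return brentq(lambda z: comoving_distance(z) - r, 0.0, 10.0)

      r, v_pec = 1500.0, 300.0   # comoving distance [Mpc], LOS velocity [km/s]
      z_bg = z_cosmo(r)          # background ("cosmological") redshift
      z_obs = (1 + z_bg) * (1 + v_pec / c) - 1  # observed redshift, with RSD

      # The difference is the redshift-space distortion that the paper's
      # comoving-to-redshift transformation drops.
      print(z_bg, z_obs)
      ```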

      Even if the rest of the forward modelling approach is entirely correct (and I think it is), this omission means that ALTAIR is incomplete and cannot be used for AP tests in real data.

    6. While we implement LPT to approximately describe gravitational non-linear structure formation in this work, other more adequate physical descriptions such as 2LPT or the non-perturbative particle mesh (see recent upgrade of BORG in Jasche & Lavaux 2018) can be straightforwardly employed

      Upgrading to 2LPT or non-perturbative approaches for forward modelling would certainly help improve this work. However, the main difference would come from upgrading the component \(\mathcal{M}_p^{(1)}\), not \(\mathcal{M}_p^{(2)}\) as stated here. If \(\mathcal{M}_p^{(1)}\) were modified to include peculiar velocity effects, I don't think either LPT or 2LPT would be sufficient to model the RSD to the desired accuracy for an AP test; however, the non-perturbative particle mesh approach might work better.

    7. 4. Generation of a mock galaxy catalogue

      This section describes in detail several aspects of the generation of the mock catalogue on which tests producing the headline results are performed. Great care is taken to reproduce various aspects of the true galaxy data, but the most important aspect is omitted – there is no mention of converting peculiar velocities to additional redshift distortions. I can only guess that this is because this step was not performed, i.e. the mocks do not contain RSD.

      The fact that the forward model works well on this synthetic data despite not accounting for RSD also suggests that the mocks do not contain RSD effects by construction.

    8. A back of the envelope computation of the information gain is as follows:

      This discussion of the information gain highlights an important point – if it were possible to use features or aspects of the galaxy distribution on smaller scales than the BAO peak for the AP test, we would certainly expect a large information gain. This is essentially because we would be able to use many more modes for a given survey volume.
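      To put a rough number on this (my own back-of-the-envelope arithmetic, not the paper's): the number of independent Fourier modes available below a scale \(k_{\max}\) in a survey volume \(V\) is

      $$N_{\mathrm{modes}} \sim \frac{V}{(2\pi)^3}\,\frac{4\pi k_{\max}^3}{3} \propto V\,k_{\max}^3,$$

      so pushing from \(k_{\max} \sim 0.1\,h\,\mathrm{Mpc}^{-1}\) to \(k_{\max} \sim 0.3\,h\,\mathrm{Mpc}^{-1}\) would multiply the mode count by \(\sim 27\), which is why such forecasts look dramatic if the small-scale modelling is assumed to be perfect.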

      This is the reason why BOSS and other LSS surveys use "full-shape" analyses of the galaxy clustering power spectrum to add AP information to that from BAO. But these additional information gains from smaller scales are marginal (in general, AP constraints from pre-reconstruction "full-shape" measurements are weaker than from post-reconstruction BAO-only measurements), precisely because of the AP-RSD degeneracy and the difficulty of accurately modelling RSD on small scales, which this paper has not taken into account.

    9. Appendix B: Jacobian of comoving-redshift transformation

      This Appendix provides technical details of the transformation from comoving space to "redshift" space. Following the equations in detail shows that the coordinate transformation is assumed to have no dependence on peculiar velocity, and only accounts for the background-level transformation between comoving distance and redshift.

    1. Title: Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing

      The authors present the cosmological results from the first year of data from the Dark Energy Survey (DES-Y1), combining measurements of galaxy clustering, galaxy-galaxy lensing and shear correlations. This paper is a big step towards delivering the promised results of weak lensing surveys, but there are a number of points that could be addressed.

      See the full review here.

    1. in one of four redshift bins, \(z = [(0.20\text{–}0.43), (0.43\text{–}0.63), (0.63\text{–}0.9), (0.9\text{–}1.3)]\), based upon the mean of their \(p_{\mathrm{BPZ}}(z)\) distributions.

      The standard deviation of redshift errors in the lens sample is a couple of percent, \(\sigma_z / (1+z) \sim 0.017\). However, in studies of their clustering and of their cross-correlation with the sources (galaxy-galaxy lensing), the authors use very wide redshift bins, \(\Delta z \gtrsim 0.20\). This seems like a sub-optimal choice, since such coarse binning unnecessarily smooths the signal along the line of sight.
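      For a rough sense of the mismatch (my own arithmetic, using the numbers quoted above): at the middle of the sample, \(z \simeq 0.6\),

      $$\sigma_z \sim 0.017\,(1+z) \simeq 0.027, \qquad \frac{\Delta z}{\sigma_z} \gtrsim \frac{0.20}{0.027} \approx 7,$$

      so the bins are several times wider than the photometric-redshift scatter would require.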


    2. We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg\(^2\) of \(griz\) imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while “blind” to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat \(\Lambda\)CDM and \(w\)CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for \(\Lambda\)CDM) or 7 (for \(w\)CDM) cosmological parameters including the neutrino mass density and including the \(457\times457\) element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain \(S_8 \equiv \sigma_8(\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}\) and \(\Omega_m = 0.264^{+0.032}_{-0.019}\) for \(\Lambda\)CDM; for \(w\)CDM, we find \(S_8 = 0.794^{+0.029}_{-0.027}\), \(\Omega_m = 0.279^{+0.043}_{-0.022}\), and \(w = -0.80^{+0.20}_{-0.22}\) at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for \(S_8\) and \(\Omega_m\) are lower than the central values from Planck for both \(\Lambda\)CDM and \(w\)CDM, the Bayes factor indicates that the DES Y1 and Planck data sets are consistent with each other in the context of \(\Lambda\)CDM. Combining DES Y1 with Planck, Baryonic Acoustic Oscillation measurements from SDSS, 6dF, and BOSS, and type Ia supernovae from the Joint Lightcurve Analysis (JLA) dataset, we derive very tight constraints on cosmological parameters: \(S_8 = 0.799^{+0.014}_{-0.009}\) and \(\Omega_m = 0.301^{+0.006}_{-0.008}\) in \(\Lambda\)CDM, and \(w = -1.00^{+0.04}_{-0.05}\) in \(w\)CDM. Upcoming DES analyses will provide more stringent tests of the \(\Lambda\)CDM model and extensions such as a time-varying equation of state of dark energy or modified gravity

      \(\Lambda\mathrm{CDM}\) is a well-established name referring to a specific model with 6 free parameters, none of which is the neutrino mass. It would be better to use another name for \(\Lambda\mathrm{CDM}\)+neutrino mass, probably \(\nu\Lambda\mathrm{CDM}\) or similar.

    3. Ref. [89], which describes the galaxy clustering statistics, including a series of tests for systematic contamination. This paper also describes updates to the redMaGiC algorithm used to select our lens galaxies and to estimate their photometric redshift

      The estimates of the redshift distribution for the lens galaxies have very short tails. In figure B1 of the paper describing the lens galaxies (Ref. [18], Elvin-Poole et al. 2017), one has the visual impression that well-separated redshift samples have non-zero correlations. This could be explained if the redshift distributions had longer tails than assumed. It would be good to test the effect of such potential systematic tails on the inferred cosmological parameters.

      Based on the same figure, one also wonders about the overall goodness of fit of the correlations of lens galaxies, including the correlations between different redshift bins. Even if these cross-correlations were not used in the final analysis, they would be a good diagnostic of possible systematics in the data.

      The lens sample contains only a few percent of the total sample of galaxies. It would be interesting to see a discussion of how the results would improve if one could use a larger fraction of the dataset.

    4. \(w^i(\theta) = b_i^2 \int \frac{\mathrm{d}\ell\,\ell}{2\pi}\, J_0(\ell\theta) \int \mathrm{d}\chi\, \frac{\left[n^i_\lambda\big(z(\chi)\big)\right]^2}{\chi^2}\, H(z)\, P_{\mathrm{NL}}\!\left(\frac{\ell+1/2}{\chi},\, z(\chi)\right)\)

      The authors assume that the bias of the lens galaxies in a given redshift bin can be described by a constant, \(b_i\). However, the bias of these galaxies probably evolves with \(z\), and in that case it would be better to use a parameterised function \(b(z)\).
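      One standard one-parameter choice that could serve here (my illustration, not a proposal from the paper) is the passive-evolution form of Fry (1996),

      $$b(z) = 1 + \frac{b_0 - 1}{D(z)},$$

      where \(D(z)\) is the linear growth factor normalised to \(D(0) = 1\); this captures the expected growth of bias with redshift at the cost of a single parameter \(b_0\) per sample.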

    5. The smallest angular separations for which the galaxy two-point function measurements are used in the cosmological inference, indicated by the boundaries of the shaded regions in the upper panels of Figure 2, correspond to a comoving scale of \(8\,h^{-1}\,\mathrm{Mpc}\); this scale is chosen such that modeling uncertainties in the non-linear regime cause negligible impact on the cosmological parameters relative to their statistical errors, as shown in [79] and [87]

      Given that the non-linear scale evolves with redshift roughly with the growth factor, it would seem more logical to define a cut that also evolves with redshift, so that smaller scales are included at high redshift.

    6. For test 11, we calculated the \(\chi^2\) (\(= -2\log\mathcal{L}\)) value of the 457 data points used in the analysis using the full covariance matrix. In \(\Lambda\)CDM, the model used to fit the data has 26 free parameters, so the number of degrees of freedom is \(\nu = 431\). The model is calculated at the best-fit parameter values of the posterior distribution (i.e. the point from the posterior sample with lowest \(\chi^2\)). Given the uncertainty on the estimates of the covariance matrix, the formal probabilities of a \(\chi^2\) distribution are not applicable. We agreed to unblind as long as \(\chi^2\) was less than 605 (\(\chi^2/\nu < 1.4\)). The best-fit value \(\chi^2 = 572\) passes this test, with \(\chi^2/\nu = 1.33\). Considering the fact that 13 of the free parameters are nuisance parameters with tight Gaussian priors, we will use \(\nu = 444\), giving \(\chi^2/\nu = 1.29\)

      The overall goodness of fit is very poor (a \(\chi^2\) tail probability of about \(4\times10^{-5}\) for \(\chi^2 = 572\) with \(\nu = 444\)). The authors argue that “Given the uncertainty on the estimates of the covariance matrix, the formal probabilities of a \(\chi^2\) distribution are not applicable”, but later on they show that changes in the covariance matrix actually have a very small effect on the best fit. I think the authors should explore other explanations for this high value of \(\chi^2\), such as possible systematics in the data. For instance, it would be interesting to see the effect of the tails in the redshift distribution of the lenses discussed above.

    7. We also find that removing all data at scales \(>1000\) yields \(\chi^2 = 312\) for 277 data points (\(\chi^2/\nu = 1.18\)), not a significant reduction, and also yields no significant shift in best-fit parameters. Thus, we find that no particular piece of our data vector dominates our \(\chi^2\) result.

      The probability of this value is 7%, much higher than the probability of the overall analysis (0.004%). I would suggest that this difference is significant enough to warrant a closer look at the measurements at large angular separations.
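      Both tail probabilities quoted above can be checked directly from the \(\chi^2\) values and degrees of freedom quoted from the paper; a short verification of my own in Python:

      ```python
      # Tail probabilities for the quoted chi^2 values and degrees of freedom.
      from scipy.stats import chi2

      print(chi2.sf(572, 444))  # full data vector: ~4e-5, i.e. the 0.004% above
      print(chi2.sf(312, 277))  # large scales removed: ~0.07, i.e. the 7% above
      ```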

    8. We use measured angular diameter distances from the Baryon Acoustic Oscillation (BAO) feature by the 6dF Galaxy Survey [132], the SDSS Data Release 7 Main Galaxy Sample [133], and BOSS Data Release 12 [48], in each case extracting only the BAO constraints. These BAO distances are all measured relative to the physical BAO scale corresponding to the sound horizon distance \(r_d\); therefore, dependence of \(r_d\) on cosmological parameters must be included when determining the likelihood of any cosmological model (see [48] for details). We also use measures of luminosity distances from observations of distant Type Ia supernovae (SNe) via the Joint Lightcurve Analysis (JLA) data from [134]

      The authors do not include the \(z > 2\) measurements from the BOSS collaboration (Bautista et al. 2017, du Mas des Bourboux et al. 2017), even though these have smaller error bars than other measurements included.

    9. Blinded constraints from DES Y1 on \(\Omega_m\) and \(S_8\) from all three combined probes, using the two independent shape pipelines METACALIBRATION and IM3SHAPE

      The authors compare the cosmological constraints obtained from different pipelines (METACALIBRATION vs IM3SHAPE). The differences in \(S_8\) are considerable, at the \(1.3\sigma\) level. Different galaxies are selected by the different pipelines, so it is not trivial to quantify the significance of this difference. One could maybe study this by looking at the changes in the best-fit value when dropping random subsets of the METACALIBRATION catalogue (the larger one), either from the actual data or from simulated data.

    1. How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe

      This is an interesting article dealing with the cosmological constant problem, i.e. the way vacuum energy gravitates. The problem is tackled in an original way; the main idea consists in taking into account that vacuum energy is not constant over space and time, but rather strongly fluctuating. Considering a toy model of an inhomogeneous cosmological spacetime, the authors show that the scale factor locally satisfies an equation of the form (41) $$\ddot{a} + \Omega^2(t)\, a = 0,$$ where \(\Omega\) is a quasi-periodic function whose amplitude and pseudo-period are dictated by the vacuum energy. As a result, the scale factor essentially takes the form $$a(t) = A(t) \cos[\Omega(t)t+\phi],$$ that is, it oscillates with an amplitude \(A(t)\) that grows slowly (and at an ever-increasing rate), which the authors interpret as the slow accelerated expansion of the Universe.

      The paper is interesting and very clearly written, but in my opinion it suffers from two issues, which I further elaborate on with annotations in the text. In a nutshell:

      1) The energy of vacuum is computed in a quite naive way, without renormalisation, and hence its cutoff is not Lorentz-invariant. I am not sure that I am convinced by the discussion on that point at the end of the paper.

      2) More importantly, the oscillating behaviour of the scale factor (which is allowed to vanish and become negative here) is not physically acceptable, because it implies that the cosmic expansion is the result of matter oscillating with an ever-increasing amplitude. This is not what is observed: galaxies are receding from us, not oscillating about us with an increasing amplitude.

      See the full review here


    1. \(\rho^{\mathrm{eff}}_{\mathrm{vac}} = \rho_{\mathrm{crit}}\,\Omega_\Lambda \approx 2.57\times10^{-47}\,(\mathrm{GeV})^4\) (14)

      This estimation is valid only if the vacuum energy is not renormalized, in which case it does not behave as a cosmological constant (details hereafter).

    2. \(T_{00} = \frac{1}{2}\dot{\phi}^2 + \frac{1}{2}(\nabla\phi)^2\) (18)

      The issue with using this definition for the energy of vacuum is that, by definition, it does not behave as a cosmological constant, even if it did not fluctuate! With this "naive" approach, the full expression for the stress-energy tensor of vacuum is indeed $$\langle 0 | T_{\mu\nu} |0\rangle = \int \frac{\mathrm{d}^3 k}{(2\pi)^3} \frac{k_\mu k_\nu}{2\omega(k)},$$ which gives an equation of state \(w=1/3\).
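      Spelled out (a standard textbook step, not specific to this paper): the \(00\) and spatial components of the expression above give

      $$\rho = \int \frac{\mathrm{d}^3 k}{(2\pi)^3}\,\frac{\omega(k)}{2}, \qquad P = \int \frac{\mathrm{d}^3 k}{(2\pi)^3}\,\frac{k^2}{6\,\omega(k)},$$

      so the cutoff-dominated modes with \(k \gg m\) yield \(P \to \rho/3\), i.e. \(w = 1/3\): the divergent piece gravitates like radiation, not like a \(w = -1\) cosmological constant.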

      This is usually solved by renormalisation (see e.g. https://arxiv.org/abs/1004.1782 ), which also implies that the energy density no longer has the same expression.

    3. \(\mathrm{d}s^2 = -\mathrm{d}t^2 + a^2(t,\mathbf{x})\left(\mathrm{d}x^2 + \mathrm{d}y^2 + \mathrm{d}z^2\right)\) (23)

      This ansatz is to be compared to the "separate universe" approach, often used in the context of inflation, in which inhomogeneity is dealt with as if each region of the Universe evolved as an individual FLRW universe. Its main limitation is that it does not allow for local anisotropy. It is also unclear what the motion of matter actually is in this model: is matter still considered comoving?

    4. More importantly, since the scale factor \(a(t,\mathbf{x})\) is spatially dependent, the physical distance \(L\) between two spatial points with comoving coordinates \(\mathbf{x}_1\) and \(\mathbf{x}_2\) is no longer related to their comoving distance \(\Delta x = |\mathbf{x}_1 - \mathbf{x}_2|\) by the simple equation \(L(t) = a(t)\,\Delta x\), and the observed global Hubble rate \(H\) is no longer equal to the local Hubble rate \(\dot{a}/a\). Instead, the physical distance and the global Hubble rate are defined as \(L(t) = \int_{\mathbf{x}_1}^{\mathbf{x}_2}\sqrt{a^2(t,\mathbf{x})}\,\mathrm{d}l\) (30)

      This is an important remark, but the authors should have pushed it even further. One should add that this notion of physical distance \(L\) between two events must correspond to an integral of the metric along a spacelike geodesic connecting the events. This geodesic does not necessarily remain within a \(t = \mathrm{const}\) hypersurface, so that eq. (30) is incorrect in principle.

    5. \(\ddot{a} + \Omega^2(t,\mathbf{x})\,a = 0,\) (41) where \(\Omega^2 = \frac{4\pi G}{3}\left(\rho + \sum_{i=1}^{3} P_i\right),\ \rho = T_{00},\ P_i = \frac{1}{a^2}\,T_{ii}.\) (42) If \(\Omega^2 > 0\), which is true if the matter fields satisfy normal energy conditions, (41) describes a harmonic oscillator with time dependent frequency. The most basic

      This is, in my opinion, the core of the paper's hypotheses. First, since vacuum energy is not renormalized, it behaves like radiation, i.e., it has an attractive gravitational behaviour (unlike a cosmological constant). Second, the scale factor will be allowed to vanish and become negative. Accelerated expansion is then interpreted as a slow amplification of the amplitude of the oscillations of \(a(t)\), due to parametric resonance.
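      A toy integration of my own (not the paper's computation) makes the parametric-resonance mechanism explicit: solving \(\ddot{a} + \Omega^2(t)\,a = 0\) with a weakly modulated frequency shows the slow amplification of the oscillation amplitude. All numbers below are illustrative.

      ```python
      # Parametric resonance in a'' + Omega^2(t) a = 0 with a weakly
      # modulated frequency: the oscillation amplitude grows slowly.
      import numpy as np
      from scipy.integrate import solve_ivp

      w0, eps = 1.0, 0.1   # base frequency and modulation depth (illustrative)

      def rhs(t, y):
          a, adot = y
          Omega2 = w0**2 * (1.0 + eps * np.cos(2.0 * w0 * t))  # modulation at 2*w0
          return [adot, -Omega2 * a]

      sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], max_step=0.05, rtol=1e-8)

      # The envelope grows roughly as exp(eps*w0*t/4): the oscillations of a(t)
      # are slowly amplified, which the paper identifies with cosmic expansion.
      print(np.abs(sol.y[0]).max())
      ```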

    6. Due to the stochastic nature of quantum fluctuations, the \(\Omega(t,\mathbf{x})\) in (41) is not strictly periodic. However, its behavior is still similar to a periodic function

      Very interesting phenomenon.

    7. pose that vacuum acts as a cosmological constant. This assumption leads to a huge Hubble expansion rate \(H = \sqrt{\frac{8\pi G \rho_{\mathrm{vac}}}{3}} \propto \sqrt{G}\,\Lambda^2 \to +\infty\) (88) as taking the high energy cutoff \(\Lambda\) to infinity.

      Again, this is a bit naive, since the non-renormalized vacuum energy, given by \(\rho_{\mathrm{vac}}\), does not behave as a cosmological constant.

    8. B. The singularities at \(a(t,\mathbf{x}) = 0\)

      Singularities are one issue, but the very fact that the scale factor is oscillating is another. A discussion about the physical interpretation of this would have been appreciated. There are essentially two possible approaches:

      1) We keep the usual interpretation of the scale factor, with comoving matter. This implies that matter itself undergoes those oscillations, which does not make much sense (galaxies recede from us, they do not oscillate about us with growing amplitude). One could say that this phenomenon should be considered on much smaller scales, but we do not see such a thing on atomic or subatomic scales either.

      2) We abandon comoving matter, but then how do we relate the growing amplitude of the oscillations of \(a\) to cosmic expansion?

    1. Cosmological parameters from weak lensing power spectrum and bispectrum tomography: including the non-Gaussian errors

      The paper investigates to what extent combining tomographic weak lensing spectra with weak lensing bispectra can improve constraints on a \(w\)CDM cosmology. Constraints tighten by about 60% when the bispectrum is included.

      See the full review here.

  2. arxiv.org

    1. ABSTRACT: We re-examine a genuine power of weak lensing bispectrum tomography for constraining cosmological parameters, when combined with the power spectrum tomography, based on the Fisher information matrix formalism. To account for the full information at two- and three-point levels, we include all the power spectrum and bispectrum information built from all available combinations of tomographic redshift bins, multipole bins and different triangle configurations over a range of angular scales (up to \(\ell_{\max} = 2000\) as our fiducial choice). For the parameter forecast, we use the halo model approach in Kayo, Takada & Jain (2013) to model the non-Gaussian error covariances as well as the cross-covariance between the power spectrum and the bispectrum, including the halo sample variance or the nonlinear version of beat-coupling. We find that adding the bispectrum information leads to about 60% improvement in the dark energy figure-of-merit compared to the lensing power spectrum tomography alone, for three redshift-bin tomography and a Subaru-type survey probing galaxies at typical redshift of \(z_s \simeq 1\). The improvement is equivalent to a 1.6 times larger survey area. Thus our results show that the bispectrum or more generally any three-point correlation based statistics carries complementary information on cosmological parameters to the power spectrum. However, the improvement is modest compared to the previous claim derived using the Gaussian error assumption, and therefore our results imply less additional information in even higher-order moments such as the four-point correlation function.

      Motivation

      The questions treated in the paper are very important and very topical. Specifically, the authors set out to find out

      1. how much information is contained in weak lensing bispectra,
      2. how much covariances change due to the inclusion of non-Gaussian statistics,
      3. whether a new term called halo sample variance matters.

      These questions are very important because much of the weak lensing signal is generated on small scales by nonlinear structures. Commonly, one increases the variance of the density on small scales in an empirical way but does not account for the deviation from a Gaussian distribution. To give the reader some sense of scale: the weak lensing data set from Euclid will have a total statistical significance of \(1000\sigma\), 80% of which originates from nonlinear structures.

      Context

      Nonlinear structure formation turns close-to-Gaussian initial conditions into an evolved density field that shows strongly non-Gaussian characteristics, which are picked up by observables. As lensing depends (to lowest order) linearly on the density field, it inherits its statistics and therefore its non-Gaussianities. It is clear from the scaling behaviour of spectra and bispectra that they contain different parameter dependences, and their combination should be able to improve cosmological constraints. Issues are the prediction of non-Gaussian properties of the lensing signal, the exact increase of the variance of the density field, and, even more difficult from a technical point of view, the covariance properties of estimators in the non-Gaussian limit.

      Methods

      The methods and approximations used (flat sky, weak lensing spectra and bispectra in Limber's approximation, halo model, full sky coverage) are all appropriate for demonstrating the authors' point.

      Alternatives

      I see the necessity of using the Fisher-matrix formalism, in particular when dealing with the bispectrum and all non-Gaussian covariance contributions, for reasons of numerical feasibility. Likewise, all lensing approximations are justified (most importantly the flat-sky approximation, because non-Gaussianities are most prominent on small angular scales). Using the formalism in the context of a Monte-Carlo Markov-chain would be really difficult, but it would relax the assumption that the models for spectra and bispectra depend linearly on the parameters, which is what gives rise to a Gaussian parameter likelihood.
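      For readers unfamiliar with the formalism, a minimal sketch of a Fisher forecast (my illustration with toy numbers, not the authors' pipeline):

      ```python
      # Minimal Fisher forecast: F_ij = dmu/dtheta_i . C^-1 . dmu/dtheta_j
      # for a Gaussian likelihood with parameter-independent covariance.
      import numpy as np

      def fisher(dmu, cov):
          """dmu: (n_params, n_data) model derivatives; cov: (n_data, n_data)."""
          cinv = np.linalg.inv(cov)
          return dmu @ cinv @ dmu.T

      # Toy numbers (illustrative only): 2 parameters, 5 data points.
      dmu = np.array([[1.0, 0.5, 0.2, 0.1, 0.05],
                      [0.0, 0.3, 0.4, 0.3, 0.20]])
      cov = np.diag([0.1] * 5)

      F = fisher(dmu, cov)
      # Marginalised 1-sigma errors are the sqrt of the diagonal of F^-1;
      # a (e.g. CMB) prior simply adds its own Fisher matrix to F first.
      print(np.sqrt(np.diag(np.linalg.inv(F))))
      ```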

      Criticism

      The authors use a Planck-type CMB prior for their results on the statistical errors, and one can see that the prior largely determines the size and orientation of the Fisher ellipses - I would be very interested to see the "naked" lensing results for a tomographic survey, with the non-Gaussian contributions to the covariances individually switched on: I would argue that this would give a much better feeling for the magnitude of the effect on parameter inference.

      I like a lot that the authors provide a good deal of technical detail, in particular concerning permutations of bispectra with equal tomography bin indices and equal multipoles. I would have appreciated a more detailed justification of the selection of a comparatively small number of bispectrum configurations and of the accuracy of the binned summation, as well as more detail on the estimation of the covariance matrix from numerical simulations - this is a particularly tricky issue and a lot of interesting developments have appeared recently.

      The only point where I am a bit lost is the justification for using the halo model for computing the signal (where everything is done consistently) while restricting to 1-halo terms for the higher-order contributions to the covariance. Again, I appreciate that this restriction is made for reasons of feasibility, but it would have been nice to see a comparison to a simulation. Alternatively, the halo model could be replaced by standard perturbation theory, but again I can see that this would not be a feasible model for the HSV term.

      One issue which I find very difficult (and for which the authors are certainly not to blame) is that the estimates of the spectrum and bispectrum implicitly assume that these are estimated from a large number of modes - I wonder whether at low multipoles, where only small numbers of modes are available, this effect might be comparable in size to the effects discussed here.

      Open questions

      Personally, I find the HSV term very interesting because it links large-scale and small-scale modes in a subtle and funny way. I wonder if there might be other applications where analogous effects can play a role.

    1. Cosmological Zero Modes

      This paper makes some rather bold claims -- that cosmological zero modes are not only observable, but also allow us to see primordial physics beyond our observable horizon. The authors certainly pose an intriguing question, but were I refereeing this paper in the traditional manner for a journal, I'd ask that the authors first address some very basic questions that prevented me from getting past their starting point. Certain statements made in the introduction are technically incorrect as they stand, which I annotate below. I hope these provide a constructive context for followup discussions that might clarify the claims of this paper.

      See the full review here


    1. We should already notice that, since \(n_s < 1\) at \(7\sigma\) confidence level [1], the integral in Eq. (2) has an infrared divergence. This means that there are no meaningful predictions for the amplitude of Fourier modes with \(k^2 \to 0\) in the concordance \(\Lambda\)CDM cosmological model.

      This statement puzzles me -- we never directly measure the curvature perturbation, only its gradients. What we see in the CMB for example, are temperature anisotropies sourced by perturbations of the density \(\frac{\delta T}{T} = \frac{1}{3}\frac{\delta\rho}{\rho} + ...\) for adiabatic perturbations, for example. However \(\delta\rho \propto \Delta\zeta \), and so the zero mode in this context drops out of the observable quantity entirely -- it is not observable. Moreover, the zero mode of the curvature perturbation is indistinguishable from a rescaling of the background scale factor by a time reparametrization: consider the scale factor in the presence of a zero mode of the curvature perturbation -- \( a(t) e^{\zeta(t)} := \tilde{a}\). To first order, one sees that \(\tilde{a} = a(t + \zeta/H) \).
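      For completeness, the first-order expansion behind that last statement:

      $$a(t)\,e^{\zeta} \simeq a(t)\,(1+\zeta) = a(t) + \dot{a}\,\frac{\zeta}{H} = a\!\left(t + \frac{\zeta}{H}\right) + \mathcal{O}(\zeta^2),$$

      using \(\dot{a} = aH\); a constant \(\zeta\) therefore only relabels the time coordinate.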

    2. We first show that the existence of zero modes cannot be described by any analytic description of the power spectrum, and requires a non-perturbative model of cosmological perturbations in the early universe. To see this, we can write down the most generic solution to the Laplace equation (5), in spherical coordinates \((r,\theta,\varphi)\), around an arbitrary origin: $$\Psi(r,\theta,\varphi) = \sum_{\ell,m}\left(A_{\ell m}\,r^{\ell} + B_{\ell m}\,r^{-\ell-1}\right)Y_{\ell m}(\theta,\varphi),\qquad(6)$$ where \(Y_{\ell m}\)'s are the spherical harmonics. Therefore, we see that any zero mode should blow up at \(r = 0\) or \(\infty\) (or both), implying that it cannot be described over the whole space using a perturbative framework in \(r\). Of course, since

      This statement is incorrect. A zero mode is a zero mode everywhere and does not blow up at the origin. All one has to do is look at the solution for \(\ell = m = 0\) above and set \(B_{00} = 0\) and \(A_{00}\) to be arbitrary to see that it is regular at the origin and at infinity.

    3. we have assumed adiabatic modes and no anisotropic stress, so set the two gravitational potentials equal

      This statement, and the equation that follows (and therefore the entire starting point of this treatment), is technically incorrect due to a subtlety overlooked by the authors. It is not true that the two gravitational potentials are equal in the absence of anisotropic stress when one assumes the cosmological perturbations to have a non-vanishing spatial average*. This can be seen, for example, in the discussion surrounding eq. 5.16 in Mukhanov et al., Phys. Rept. 215 (1992) 203-333.


      As stated there, it is only by assuming that the spatial average of the perturbations vanishes that one can conclude that the two potentials are equal. Therefore there is an additional contribution to eq. 7 that is neglected.

      *Note that the standard treatment of cosmological perturbation theory assumes that perturbations have a vanishing spatial average. Relaxing this assumption requires re-examining the consistency of many standard expressions. The one pointed to above is unlikely to be the only place where relaxing this assumption might bite you.

    4. \(\mathrm{d}s^2 = a^2(\eta)\left[-(1 + 2\Psi(\mathbf{x},\eta))\,\mathrm{d}\eta^2 + (1 - 2\Psi(\mathbf{x},\eta))\left(\mathrm{d}\rho^2 + \rho^2\,\mathrm{d}\Omega_2^2\right)\right]\)

      Here we can see explicitly that for the zero mode of \(\Psi\), redefining \(\tilde{a} = a(1 - \Psi(t))\) and subsequently rescaling time can bring eq 8 into the form of an unperturbed universe with a different scale factor. This scale factor can evidently be obtained from the original one by a time translation.

    5. determines the curvature perturbation (or Bardeen variable) \(\Psi\) in Eq. (1) on superhorizon scales. We can split \(\Psi\) into zero mode and non-zero mode contributions: $$\Psi = \Psi_0 + \Psi_{\neq 0}\qquad(9)$$ Cosmological zero modes are defined as components of the potential that satisfy the Laplace equation: $$\nabla^2\Psi_0 = 0\qquad(10)$$ For all modes, the equations of motion are given by $$\nabla^2\Psi - 3\,\frac{\partial_\eta a}{a}\left(\frac{\partial_\eta a}{a}\,\Psi + \partial_\eta\Psi\right) = 4\pi G_N a^2\,\delta\rho\qquad(11)$$ $$\partial_\eta\delta + \nabla\cdot\tilde{v} = 3\,\partial_\eta\Psi\qquad(12)$$ and $$\partial_\eta\tilde{v} + \frac{\partial_\eta a}{a}\,\tilde{v} = -\nabla\Psi\qquad(13)$$ For the zero modes, Einstein's equation yields: $$-3\,\frac{\partial_\eta a}{a}\left(\frac{\partial_\eta a}{a}\,\Psi_0 + \partial_\eta\Psi_0\right) = 4\pi G_N a^2\,\delta\rho_0\qquad(14)$$

      The issues raised above have puzzled me to the point that I am unable to make sense of the rest of the paper. It could be that the authors or other interested readers have straightforward replies to clarify matters and help unconfuse me, which I look forward to hearing.