3 Matching Annotations
  1. Jul 2018
    1. On 2016 Aug 01, Joaquim Radua commented:

      Re: the first comment, I think there may be some unfortunate confusion. Raw p-values of current voxelwise meta-analyses do not have the same meaning as usual p-values because they are not derived from the usual null hypothesis (“there are no differences between groups”) but from a different null hypothesis (“all voxels show the same difference between groups”). Thus, at the moment, one of the only ways to know, at least approximately, whether the results of a voxelwise meta-analysis are neither too liberal nor too conservative is to compare them with the results of a mega-analysis of the same data, and that is what was done.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
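
      A minimal sketch of the distinction drawn in this comment, assuming Gaussian effect estimates with a common, known standard error (this is illustrative only, not the SDM implementation; all names and values are hypothetical):

      ```python
      # Illustration: the same voxel map tested against two different nulls.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Hypothetical meta-analytic effect-size map: every voxel shares a
      # common nonzero effect, plus voxel-specific deviations.
      n_voxels = 10_000
      effects = 0.5 + rng.normal(0.0, 0.1, n_voxels)
      se = 0.1  # assumed common standard error, for illustration only

      # Usual null: "there are no differences between groups" (effect = 0).
      p_vs_zero = 2 * stats.norm.sf(np.abs(effects / se))

      # Null described above: "all voxels show the same difference between
      # groups" (effect = the map-wide mean effect).
      p_vs_mean = 2 * stats.norm.sf(np.abs((effects - effects.mean()) / se))

      # Against the first null nearly every voxel looks "significant";
      # against the second, only voxels deviating from the shared effect do.
      print((p_vs_zero < 0.005).mean())  # close to 1.0 in this setup
      print((p_vs_mean < 0.005).mean())  # roughly 0.005
      ```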

    2. On 2016 Jul 14, Lars Schulze commented:

      Dear Christopher Tench,

      We agree that it is important to control the rate of false positives in meta-analyses. Please note that our study applied empirically validated thresholding procedures, which have been shown not only to balance sensitivity and specificity but also to be equivalent to a corrected P-value of 0.05 (Radua et al., 2012). To further reduce the possibility of false positive results, we used an additional voxel-wise threshold of Z > 1.

      Thus, our meta-analysis applied validated and recommended methods to control the inflation of false positive results.

      Sincerely, Lars Schulze


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
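
      A minimal sketch of the combined voxel-wise thresholding this reply describes, assuming the p < 0.005 voxel threshold commonly cited for this procedure (the function and map names are hypothetical; this is not the SDM implementation):

      ```python
      # Combined thresholding: an uncorrected p threshold that validation
      # work reports approximates a corrected P = 0.05, plus an additional
      # voxel-wise Z threshold, as described in the reply above.
      import numpy as np

      def combined_threshold(z_map, p_map, p_thresh=0.005, z_thresh=1.0):
          """Boolean mask of voxels surviving both thresholds."""
          return (p_map < p_thresh) & (np.abs(z_map) > z_thresh)
      ```

      In the validated procedure, a cluster-extent criterion is reportedly applied on top of this voxel-level mask as well.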

    3. On 2016 Jun 19, Christopher Tench commented:

      The methods employed do not constitute a meta-analysis, as uncorrected p-values offer no protection against false positive results. Consequently, the study provides no evidence of significant consistency across studies. Uncorrected p-values are not useful in meta-analyses of neuroimaging studies.


      This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
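
      As a rough illustration of the multiple-comparisons concern raised here, under the conventional voxel-wise null hypothesis an uncorrected threshold α passes roughly α × (number of voxels) purely null voxels on average (a hypothetical simulation, not drawn from the study's data; see the first comment above for why these meta-analytic p-values may not test that null):

      ```python
      # Under a global null (no true effects anywhere), an uncorrected
      # threshold lets through about alpha * n_voxels voxels on average.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n_voxels, alpha = 100_000, 0.005

      z = rng.standard_normal(n_voxels)     # pure-noise z-map
      p = 2 * stats.norm.sf(np.abs(z))      # two-sided uncorrected p-values

      print(int((p < alpha).sum()))         # roughly alpha * n_voxels = 500
      ```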
