- Jul 2018
europepmc.org
On 2016 Aug 10, Joaquim Radua commented:
Re: the previous comments, please note that under the null hypothesis of no differences between groups, only about 1 out of 20 studies should show significant differences between groups (at the conventional 0.05 level), which is absolutely not the case when randomizing coordinates or blocks of voxels. Random coordinates and similar approaches, which randomize the location of the findings rather than the individuals between groups, are not a valid way to test this hypothesis exactly. Rather, they are only used to yield approximate p-values that, appropriately thresholded, return a map similar to, but slightly more conservative than, that of FWE-corrected p-values in mega-analyses. Voxel-based meta-analytic methods are young and there is room for improvement, but they are based on evidence.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
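To make the distinction concrete, here is a minimal, hypothetical sketch (a simulated 1-D "brain" with made-up numbers; not the method's actual implementation): the null distribution is built by re-drawing peak coordinates uniformly at random, so what gets tested is whether the reported peaks cluster more than randomly placed peaks would, not whether the groups differ; the maximum of each null map supplies the FWE-style threshold mentioned above.

```python
# Hypothetical sketch of coordinate randomization (not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 2_000            # 1-D stand-in for a brain mask (assumption)
n_peaks = 20 * 8            # 20 studies x 8 reported peaks (made up)
sigma = 20.0                # kernel width in voxels (made up)
grid = np.arange(n_voxels)[:, None]

def density_map(peaks):
    """Sum of Gaussian kernels centred on the reported peak coordinates."""
    return np.exp(-0.5 * ((grid - peaks[None, :]) / sigma) ** 2).sum(axis=1)

# Fake "observed" peaks, loosely clustered around voxel 600.
observed_map = density_map(rng.normal(600, 30, size=n_peaks))

# Null: same number of peaks, locations re-drawn uniformly over the mask.
null_maxima = np.array([
    density_map(rng.uniform(0, n_voxels, size=n_peaks)).max()
    for _ in range(500)
])

# Threshold at the 95th percentile of the null map maxima (FWE-style).
threshold = np.quantile(null_maxima, 0.95)
print("suprathreshold voxels:", int((observed_map > threshold).sum()))
```

Permuting subject labels between groups would, by contrast, test the "no group differences" null directly, which is not possible when only peak coordinates are available.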
On 2016 Aug 09, Christopher Tench commented:
This is not a result of confusion, but of the definition of statistical inference. Uncorrected p-values do not control the type 1 error rate. A meta-analysis is performed to improve estimates; it is a statistical problem demanding statistical methods. Thresholding at an arbitrary p-value controls neither the FDR nor the FWE, so no quantitative evidence against the null hypothesis is available. You can't know whether the results are true positives without doing the full experiment, but meta-analysis is used precisely when the full experiment (mega-analysis) has not been done. The one, and only, thing that can be done is to make sure that the null hypothesis is appropriately rejected; arguably the whole point of statistical inference. That requires either FWE or FDR control. Using just random coordinates and an uncorrected p-value will produce results that are apparently publishable, but obviously incorrect. Without any estimate of the error rate, there is no quantifiable evidence that the results are meaningful.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
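As a hedged illustration of why correction matters (toy simulated p-values, not data from this study): under a complete null with 100,000 voxel-wise tests, an uncorrected 0.05 threshold is expected to pass about 5,000 voxels, whereas Bonferroni (FWE) and Benjamini-Hochberg (FDR) thresholds pass essentially none.

```python
# Simulated data only: compares uncorrected, FWE (Bonferroni) and
# FDR (Benjamini-Hochberg) thresholding when every voxel is null.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100_000
p = rng.uniform(size=n_voxels)            # all voxels null: uniform p-values

uncorrected = (p < 0.05).sum()            # expected ~5,000 false positives
bonferroni = (p < 0.05 / n_voxels).sum()  # FWE control

# Benjamini-Hochberg step-up procedure for FDR control.
order = np.sort(p)
k = np.arange(1, n_voxels + 1)
passed = order <= 0.05 * k / n_voxels
bh = (p <= order[passed][-1]).sum() if passed.any() else 0

print(f"uncorrected: {uncorrected}, Bonferroni: {bonferroni}, BH-FDR: {bh}")
```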
On 2016 Aug 01, Joaquim Radua commented:
Re: the previous comment, I think there may be some unfortunate confusion. Raw p-values of current voxelwise meta-analyses do not have the same meaning as usual p-values, because they are not derived from the usual null hypothesis (“there are no differences between groups”) but from another null hypothesis (“all voxels show the same difference between groups”). Thus, at the moment, one of the only ways to "approximately" know whether the results of a voxelwise meta-analysis are neither too liberal nor too conservative is to compare them with the results of a mega-analysis of the same data, and that is what was done.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
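A minimal sketch of the kind of check described here (toy binary maps, not the real data; the overlap metrics are an assumption, not necessarily the ones used): treat the mega-analysis map as the reference and quantify how the thresholded meta-analysis map agrees with it. A map that is too liberal shows up as excess false positives, one that is too conservative as lost sensitivity.

```python
# Toy comparison of a thresholded meta-analysis map against a mega-analysis
# map of the same data (all values simulated).
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 50_000
mega = rng.random(n_voxels) < 0.02      # reference: mega-analysis map
meta = mega.copy()
flip = rng.random(n_voxels) < 0.005     # small, made-up disagreement
meta[flip] = ~meta[flip]

tp = np.sum(meta & mega)                # significant in both maps
fp = np.sum(meta & ~mega)               # meta-analysis only (too liberal)
fn = np.sum(~meta & mega)               # mega-analysis only (too conservative)

sensitivity = tp / (tp + fn)
dice = 2 * tp / (2 * tp + fp + fn)
print(f"sensitivity={sensitivity:.2f}  Dice={dice:.2f}  false positives={fp}")
```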
On 2016 Jun 14, Christopher Tench commented:
The method employed here offers no control of the type 1 error rate, so it cannot be considered a meta-analysis. Uncorrected p-values provide no evidence of statistical significance when large numbers of voxel-wise tests are performed.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
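For a sense of scale (illustrative numbers, not taken from this study): with an analysis mask of roughly 200,000 voxels and a complete null, even an uncorrected p < 0.001 threshold is expected to pass around 200 voxels by chance.

```python
# Back-of-the-envelope expectation under a complete null (illustrative numbers).
n_voxels, alpha = 200_000, 0.001
print("expected false-positive voxels:", n_voxels * alpha)   # 200.0
```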