- Jul 2018
-
europepmc.org
-
On 2016 Aug 01, Joaquim Radua commented:
Re: the previous comment, I think there may be some unfortunate confusion. Raw p-values from current voxelwise meta-analyses do not have the same meaning as usual p-values, because they are derived not from the usual null hypothesis (“there are no differences between groups”) but from a different one (“all voxels show the same difference between groups”). Thus, at the moment, one of the only ways to know approximately whether the results of a voxelwise meta-analysis are neither too liberal nor too conservative is to compare them with the results of a mega-analysis of the same data, and that is what was done.
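The distinction between the two null hypotheses can be made concrete with a toy sketch in Python (made-up numbers, not the actual SDM algorithm): under the usual null (d = 0), virtually the whole map would look significant, whereas under the meta-analytic null only voxels that stand out from the common effect do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy meta-analytic effect-size map over 10,000 "voxels": most voxels
# share a common group difference (around d = 0.30) and a handful carry
# an extra, regionally specific effect. All numbers are illustrative.
n_vox = 10_000
d_map = 0.30 + 0.02 * rng.standard_normal(n_vox)
d_map[:20] += 0.20  # hypothetical regional peaks

# Usual null ("there are no differences between groups", i.e. d = 0):
# with a modest standard error, essentially the whole map looks
# "significant", which is clearly not the interesting question.
se = 0.05
print("p < .005 vs d = 0:", (d_map / se > 2.576).sum())  # ~10,000 voxels

# Meta-analytic null ("all voxels show the same difference between
# groups"): a voxel's p-value is, roughly, the fraction of the map at
# least as extreme as that voxel, so only voxels that stand out from
# the common effect get small p-values.
ranks = d_map.argsort().argsort()  # rank of each voxel, 0..n_vox-1
p_meta = 1.0 - ranks / n_vox
print("p < .005 vs common effect:", (p_meta < 0.005).sum())  # ~50 voxels
```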
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Jul 22, Christopher Tench commented:
Control in statistics is not achieved by a fixed p-value threshold when there are many tests. There are very fundamental reasons for using FWE and FDR, and at a minimum these must be estimated to quantify the risk of false positives. Applying a fixed p-value and a fixed cluster extent is a bit like having a fixed steering wheel in a car: it will work really well when the road is straight, but fail terribly when there is a bend! Controlled methods necessarily adapt themselves to the data. Without them, there is no quantifiable evidence that an effect truly contradicts the null.
I appreciate the validation study. Unfortunately, it only tested known positive data. In statistics this cannot be considered validation, as it promotes confirmation bias. I am aware of the claim that the threshold is equivalent to a corrected p of 0.05, but unfortunately this is not correct: correction implies adaptation to the data and its null, and fixed thresholds are not able to do this (see the simulation sketch below).
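Both points can be checked with a minimal simulation (plain NumPy/SciPy, assuming independent tests and hypothetical numbers): on pure-null data, a fixed p < 0.005 threshold passes roughly 500 of 10<sup>5</sup> tests, while FWE- and FDR-controlling procedures adapt and pass essentially none. Validation on null data, not only on known positives, is what exposes this.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Pure-null experiment: 100,000 tests with NO true effect -- exactly
# the kind of data a threshold must also be validated on.
n_tests = 100_000
p = stats.norm.sf(rng.standard_normal(n_tests))  # one-sided p-values

# Fixed threshold: expect 0.005 * 100,000 = 500 false positives.
print("fixed p < .005:", (p < 0.005).sum())

# Bonferroni (FWE at 0.05): the threshold adapts to the number of
# tests, so on null data essentially nothing survives.
print("Bonferroni FWE 0.05:", (p < 0.05 / n_tests).sum())

# Benjamini-Hochberg (FDR at 0.05): the threshold adapts to the
# observed p-value distribution; on null data it also rejects ~0.
p_sorted = np.sort(p)
k = np.arange(1, n_tests + 1)
below = p_sorted <= 0.05 * k / n_tests  # largest k with p_(k) <= qk/m
n_bh = k[below].max() if below.any() else 0
print("Benjamini-Hochberg FDR 0.05:", n_bh)
```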
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Jun 21, Fabio Richlan commented:
1) Whether or not a study can be considered a meta-analysis does not depend on the use of a correction for multiple comparisons. 2) As usual for coordinate-based meta-analyses, statistical significance was assessed by a permutation test (thresholded at a voxel-level (height) threshold of p < 0.005 and a cluster-level (extent) threshold of 10 voxels). Note that the use of a cluster extent threshold is a way of controlling false positives. In addition, only voxels with z > 1.0 were considered statistically significant. Based on an empirical validation, it was shown that, at least for Seed-based d Mapping (SDM), this combination of voxel-level and cluster-level thresholds optimally balances sensitivity and specificity and corresponds to a corrected p-value of 0.05 (http://www.ncbi.nlm.nih.gov/pubmed/21658917). More details on the SDM method can be found at http://www.sdmproject.com/
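For readers unfamiliar with combined height/extent thresholding, here is a generic sketch in Python (toy data and illustrative numbers; this is not the SDM implementation, whose p-values come from a permutation null rather than the Gaussian one used here):

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(2)

# Toy 3-D z-score map with one embedded "real" cluster.
z_map = rng.standard_normal((40, 48, 40))
z_map[18:24, 20:26, 18:24] += 3.0  # hypothetical true effect region

# Voxel-level (height) threshold: uncorrected p < 0.005, plus the
# additional z > 1.0 requirement. (With Gaussian p-values as here the
# z check is redundant; with SDM's permutation-based p-values it is not.)
p_map = stats.norm.sf(z_map)
voxel_mask = (p_map < 0.005) & (z_map > 1.0)

# Cluster-level (extent) threshold: keep only connected components of
# at least 10 voxels.
labels, n_clusters = ndimage.label(voxel_mask)
sizes = np.bincount(labels.ravel())[1:]  # label 0 is background
keep = np.nonzero(sizes >= 10)[0] + 1
final_mask = np.isin(labels, keep)
print("clusters found:", n_clusters, "| surviving extent >= 10:", len(keep))
print("significant voxels:", final_mask.sum())
```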
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY. -
On 2016 Jun 15, Christopher Tench commented:
The method employed in this study offers no control over the 10<sup>5</sup> statistical tests performed. The results cannot be considered statistically significant, nor can the study be considered a meta-analysis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-
- Feb 2018
-
europepmc.org
-
On 2016 Jun 15, Christopher Tench commented:
The method employed in this study offers no control over the 10<sup>5</sup> statistical tests performed. The results cannot be considered statistically significant, nor can the study be considered a meta-analysis.
This comment, imported by Hypothesis from PubMed Commons, is licensed under CC BY.
-