- May 2020
-
michaelbarrowman.co.uk
-
nd η par
Symbol changed from the Methods section, where it was an n-like symbol?
-
averaged
Is it normal to average results across parameters in simulations, rather than look at results in different scenarios separately? Is it possible that the findings could be skewed by a few scenarios?
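To make the concern concrete, here is a toy illustration (the numbers and column names are entirely hypothetical, not from the thesis) of how a pooled average can mask a few badly behaved scenarios:

```python
# Hypothetical per-scenario simulation results: 'bias' = estimate - truth.
import pandas as pd

results = pd.DataFrame({
    "scenario": ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
    "bias": [0.00, 0.01, -0.01,   # scenario A: unbiased
             0.00, -0.01, 0.01,   # scenario B: unbiased
             0.10, 0.12, 0.11],   # scenario C: clearly biased
})

print(results["bias"].mean())                      # pooled mean ~0.037, looks mild
print(results.groupby("scenario")["bias"].mean())  # scenario C alone is ~0.11
```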
-
Figure 4.2: Bias, Coverage and Empirical Standard Error for the Over-estimating, Perfect and Under-estimating models across all four methods when β = 1, γ = 1 and η = 1/2. Confidence intervals are included in the plot, but are tight around the estimate
Would it be possible to blow up the bias graph from this scenario for the 'perfect model'? Given the motivating examples (i.e. QRISK) I was expecting the KM to be biased in this situation, but it appears not to be?
What correlation between the censoring and survival times does beta = gamma = 1 induce? Maybe it needs to be stronger?
Another reason could be to do with the scale on the y-axis. It's on the probability scale currently, if I'm not mistaken? This means that a bias of 0.02 (which would not really be observable on that graph) corresponds to a 2% over-prediction in the calibration-in-the-large, which is quite a large bias.
Just hypothesising.
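If it helps, the correlation question could be checked empirically. A minimal sketch, assuming exponential survival and censoring times whose log-rates are βZ and γZ for a shared standard-normal covariate Z (the thesis's actual data-generating model may differ):

```python
import numpy as np

rng = np.random.default_rng(2020)
n, beta, gamma = 100_000, 1.0, 1.0

Z = rng.standard_normal(n)                    # shared covariate
T = rng.exponential(1.0 / np.exp(beta * Z))   # survival times, rate exp(beta * Z)
C = rng.exponential(1.0 / np.exp(gamma * Z))  # censoring times, rate exp(gamma * Z)

def spearman(x, y):
    # Rank correlation, so the heavy right tails don't dominate.
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

print(spearman(T, C))  # induced correlation between T and C
```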
-
$$F_U(t|Z=z) = \textrm{logit}^{-1}\left(\textrm{logit}\left(F_P(t|z) - 0.2\right)\right)$$
$$F_O(t|Z=z) = \textrm{logit}^{-1}\left(\textrm{logit}\left(F_P(t|z) + 0.2\right)\right)$$
The logit^{-1}(logit(·)) cancels out, does it not? So is the under/over-prediction model just always -/+0.2 of the prediction for the perfect model? In that case wouldn't you get predictions outside of [0, 1]? Also, doesn't that correspond to a 20% over-prediction, which is huge? I may have misunderstood this.
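For what it's worth, the cancellation in question, taking the equation exactly as quoted:

$$F_U(t|Z=z) = \textrm{logit}^{-1}\left(\textrm{logit}\left(F_P(t|z) - 0.2\right)\right) = F_P(t|z) - 0.2,$$

since $\textrm{logit}^{-1}(\textrm{logit}(x)) = x$, and this is negative whenever $F_P(t|z) < 0.2$. A shift applied on the logit scale instead, $\textrm{logit}^{-1}\left(\textrm{logit}(F_P(t|z)) - 0.2\right)$, would stay inside (0, 1); whether that is what was intended is a question for the author.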
-
T
For the x-axis, 'follow-up time, t'?
-
We varied the parameters to take all the values, γ = {-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2}, β = {-2, -1.5, -1, -0.5, 0.5, 1, 1.5, 2} and η = {-1/2, 0, 1/2}; that is, the proportional hazards coefficients took the same values between -2 and 2, but β did not take the value of 0 because this would make a predictive model infeasible.
Personally, I would move this to the end, possibly with the first half of the next paragraph.
Currently β, γ and η are introduced in the text in the previous paragraph, but how they are used to simulate data is not described until the end of the next paragraph.
I think it would be better to explain the simulation process in one go, and then, once that is done, give the number of iterations and the values the parameters will take in one paragraph.
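As an aside, if every combination of these values is simulated (a full factorial design, which is my assumption), the grid is sizeable, which feeds back into the earlier point about averaging across scenarios:

```python
# Scenario grid implied by the quoted parameter values.
from itertools import product

gammas = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]
betas = [-2, -1.5, -1, -0.5, 0.5, 1, 1.5, 2]  # beta = 0 excluded
etas = [-0.5, 0, 0.5]

scenarios = list(product(betas, gammas, etas))
print(len(scenarios))  # 8 * 9 * 3 = 216 scenarios
```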
-
although this is rarely the case [5]
I would give some actual examples here (e.g. QRISK3 does not share its baseline hazard; pretty sure the Pooled Cohort Equations and ASSIGN don't either). The reference is from 20 years ago, and when the authors state that 'baseline information is seldom reported', they do not give a reference or any examples.
-
expected
Not sure 'expected' is the right word here? Maybe just 'Although still not at 95%'.
-
In this paper and another [9] he proposes the comparison of KM curves in risk groups, which alleviates the strength of the independence assumption required for the censoring handling to be comparable between the Cox model and the KM curves (since the KM curves now only assume independent censoring within each risk group). In these papers, a fractional polynomial approach to estimating the baseline survival function (and thus being able to share it efficiently) is also provided.
Add another sentence saying why this method isn't suitable for what we want. Currently it just says this method alleviates the censoring issue. Something like: 'However, this does not allow calculation of the overall calibration of the model, which is of interest here.'
-