19 Matching Annotations
  1. Mar 2023
  2. Oct 2022
    1. In addition, indirect correlations due to other predictors can add so much to the original partial effect of a predictor that the standardized regression coefficient becomes higher than 1 or lower than -1. This illustrates that standardized regression coefficients are not correlations in multiple regression models because correlations can never be higher than 1 or lower than -1. In contrast, the standardized regression coefficient in a simple regression model is equal to the correlation between predictor and outcome.

      This remains a bit unclear
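
      A small simulation may make this concrete (a sketch with made-up numbers, not an example from the book): with two strongly correlated predictors that pull in opposite directions, the standardized coefficients can land outside [-1, 1], while the simple correlation between a predictor and the outcome never does.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000
        x1 = rng.normal(size=n)
        x2 = x1 + 0.3 * rng.normal(size=n)       # x1 and x2 correlate strongly (~ .96)
        y = 2 * x1 - 1.5 * x2 + rng.normal(size=n)

        def z(v):                                # z-scores, so coefficients are standardized
            return (v - v.mean()) / v.std()

        X = np.column_stack([z(x1), z(x2)])
        beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
        print(beta)                              # roughly [1.7, -1.3]: outside the [-1, 1] range
        print(np.corrcoef(x1, y)[0, 1])          # the simple correlation stays within [-1, 1]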

    2. opposite sign

      So since the indirect correlation is the product of the effect of the confounder on the predictor and its effect (the standardized regression coefficient) on the DV: does this mean that if one of these is negative, then the indirect correlation will be negative? And if both are negative, would it be positive? Does this not depend on the standardized reg. coeff. as well? Lots of questions.
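
      If the indirect part is indeed a product of two terms (the correlation between the two predictors and the other predictor's standardized coefficient), then its sign follows the ordinary multiplication rule: negative when exactly one factor is negative, positive when both or neither are. A quick check of that sign rule (made-up numbers):

        # sign of the indirect part = sign of the product of its two factors
        for r12, beta2 in [(0.4, 0.5), (-0.4, 0.5), (0.4, -0.5), (-0.4, -0.5)]:
            print(f"r12 = {r12:+.1f}, beta2 = {beta2:+.1f} -> indirect part = {r12 * beta2:+.2f}")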

    1. Remember that the interaction variable is the product of the predictor and moderator (Section 8.3.2). If any or both of these are mean-centered, you should multiply the mean-centered variable(s) to create the interaction variable.

      So this is to say that the mean-centered value should be plugged into the equation?
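
      In other words: whichever version of the predictor and moderator goes into the model is the version you multiply. A minimal sketch (hypothetical variable names, not the book's data):

        import numpy as np

        rng = np.random.default_rng(2)
        exposure = rng.normal(10, 2, size=200)    # hypothetical predictor
        contact = rng.normal(5, 1, size=200)      # hypothetical moderator

        exposure_c = exposure - exposure.mean()   # mean-center first ...
        contact_c = contact - contact.mean()
        interaction = exposure_c * contact_c      # ... then multiply the centered variables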

    2. To see this, it is helpful to inspect the regression equation with rearranged terms [Equation (9.1)]. Every additional contact with smokers adds b_3 to the slope (b_1 + b_3 * contact) of the exposure effect.

      I do not completely get this :/
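
      The rearranged equation groups the exposure terms as predicted outcome = b_0 + (b_1 + b_3 * contact) * exposure + b_2 * contact, so the part in parentheses is the slope of exposure, and it grows by b_3 each time contact goes up by one. Plugging in made-up coefficients may help (a sketch, not the book's estimates):

        b1, b3 = 0.30, 0.05                      # made-up coefficients
        for contact in range(4):
            print(f"contact = {contact}: slope of exposure = {b1 + b3 * contact:.2f}")
        # every extra contact adds b3 (= 0.05) to the slope of the exposure effect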

  3. Sep 2022
    1. Remember that we are not allowed to interpret the 95% confidence interval as a probability

      As a probability in the population, right? But it CAN be interpreted as a probability as long as we say it is the probability of drawing a sample with a sample statistic falling in that interval. Or am I wrong?
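
      The long-run reading is that about 95% of intervals constructed this way contain the population value; any single interval either contains it or does not. A simulation sketch of that reading (made-up population values):

        import numpy as np

        rng = np.random.default_rng(3)
        mu, sigma, n, reps = 2.8, 0.5, 50, 10_000   # made-up population values
        covered = 0
        for _ in range(reps):
            sample = rng.normal(mu, sigma, size=n)
            m = sample.mean()
            se = sample.std(ddof=1) / np.sqrt(n)
            lo, hi = m - 1.96 * se, m + 1.96 * se   # approximate 95% confidence interval
            covered += (lo <= mu <= hi)
        print(covered / reps)                       # close to 0.95: the 95% belongs to the procedure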

    1. A probability of what

      So, if H1 is true, then the yellow area is the probability of drawing a sample that falls outside the rejection region of the null hypothesis, meaning it is the probability of making a Type II error (retaining an H0 that is not true, or in other words: incorrectly failing to reject the H0).
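
      That reading matches the usual power picture: under H1, the area outside the rejection region of H0 is the Type II error probability, and one minus it is power. A simulation sketch with made-up numbers:

        import numpy as np

        rng = np.random.default_rng(4)
        mu0, mu1, sigma, n, reps = 2.8, 3.0, 0.5, 30, 10_000   # made-up numbers
        se = sigma / np.sqrt(n)
        crit = mu0 + 1.645 * se                  # one-sided 5% rejection boundary under H0
        means = rng.normal(mu1, se, size=reps)   # sample means drawn while H1 is true
        print((means < crit).mean())             # share that fails to reject H0 = Type II error rate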

    1. we find a sample average of 5.5

      Isn't this also covered by the H0? It is the same .5 away from 6 (which is our expected value, right?) as 6.5 is. Or what's going on here? I'm lost :(

    2. boundary value

      ?

    1. A test statistic more or less standardizes the difference between the sample statistic and the population value that we expect under the null hypothesis.

      Could someone elaborate this? How does it standardize the difference?
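
      Concretely, a one-sample t statistic divides the raw difference by its standard error, so the difference is expressed in standard-error units instead of the original measurement units. A minimal sketch (made-up data):

        import numpy as np

        sample = np.array([2.6, 3.1, 2.9, 2.7, 3.3, 2.8, 3.0, 2.5])  # made-up sample
        mu0 = 2.8                                                     # expected value under H0
        se = sample.std(ddof=1) / np.sqrt(len(sample))
        t = (sample.mean() - mu0) / se           # raw difference expressed in standard errors
        print(t)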

    2. which is a boundary value in the case of a one-sided test

      what does this mean exactly?

    1. This probability is the significance level.

      This is a bit blurry to me. What does this have to do with the Type I error?
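
      The link is that the significance level is the Type I error probability we are willing to accept: if H0 is true and we reject whenever the result falls in the 5% rejection region, we will wrongly reject in about 5% of samples. A simulation sketch (made-up numbers):

        import numpy as np

        rng = np.random.default_rng(5)
        mu0, sigma, n, reps = 2.8, 0.5, 30, 10_000   # made-up numbers
        se = sigma / np.sqrt(n)
        means = rng.normal(mu0, se, size=reps)       # sample means drawn while H0 is true
        rejected = np.abs(means - mu0) > 1.96 * se   # two-sided test at the 5% level
        print(rejected.mean())                       # close to 0.05: the Type I error rate we accept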

    2. test

      The test or the test result?

    1. * Check data: Assumption checks in Chapter 8.
       * Check for impossible values.
       FREQUENCIES VARIABLES=weight sweetness colour_post
         /ORDER=ANALYSIS.
       * Regression of colour_post on weight and sweetness.

      What is this?
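
      It is SPSS syntax quoted from the book: the lines starting with * are comments, FREQUENCIES tabulates the listed variables so you can spot impossible values, and the regression of colour_post on weight and sweetness follows. A rough Python equivalent, assuming the data sit in a data frame with those column names and a hypothetical file candy.csv:

        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("candy.csv")            # hypothetical data file with the same variables
        print(df[["weight", "sweetness", "colour_post"]].describe())   # look for impossible values
        model = smf.ols("colour_post ~ weight + sweetness", data=df).fit()
        print(model.summary())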

    2. clarify the meaning of the statistic

      Is it that the value of the difference falls between the two values with 95% confidence?

    1. In addition, it is more likely that our sample bag has an average weight that is near the true average candy weight in the population than an average weight that is much higher or much smaller than the true average. Bags with on average extremely heavy or extremely light candies may occur, but they are extremely rare (we are very lucky or very unlucky). From these intuitions we would expect a bell shape for the sampling distribution.

      So we argue that the probability of drawing a sample from the population that deviates far from the average is so small that we may presume that the sample we have drawn represents the population credibly?
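
      More or less: extreme bag averages are possible but rare, so most samples land near the population mean, which is why the sampling distribution of the average piles up around it in a bell shape. A simulation sketch (made-up candy weights):

        import numpy as np

        rng = np.random.default_rng(6)
        population = rng.uniform(2.0, 4.0, size=100_000)   # made-up candy weights (grams)
        bag_means = np.array([rng.choice(population, size=25).mean() for _ in range(5_000)])
        print(population.mean(), bag_means.mean())         # bag averages center on the population mean
        print(np.histogram(bag_means, bins=10)[0])         # counts pile up in the middle: a bell shape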