30 Matching Annotations
    1. What is the difference between a population and a sample? Which is described by a parameter and which is described by a statistic?

      A population is the entire group of individuals or observations we are interested in studying. A sample is a smaller subset taken from that population. A parameter describes a population, while a statistic describes a sample.

    2. For each of the following, determine if the variable is continuous or discrete: (A) time taken to read a book chapter, (B) favorite food, (C) cognitive ability, (D) temperature, (E) letter grade received in a class.

      A. Continuous, B. Discrete, C. Continuous, D. Continuous, E. Discrete

    3. In your own words, describe why we study statistics.

      We study statistics so we can objectively interpret information, make sense of data, and communicate research results clearly and accurately. Statistics helps us separate real patterns from random chance, evaluate evidence instead of relying on opinions, and draw reasonable conclusions from samples about larger populations. It allows us to make informed decisions in science, healthcare, psychology, and everyday life.

    1. Notice that (∑X)² ≠ ∑X² (Equation 1.7.2), because the expression on the left means to sum up all the values of X and then square the sum (19² = 361), whereas the expression on the right means to square the numbers and then sum the squares (90.54, as shown).

      This part really matters. Even though the formulas look similar, the order completely changes the result. This explains why it's so important to pay attention to parentheses and notation in statistics formulas.
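To see the order-of-operations difference concretely, here's a minimal Python sketch. The grape weights are my own hypothetical values, chosen only so the totals match the ones quoted above (∑X = 19, ∑X² = 90.54):

```python
# Hypothetical grape weights in grams (not from the reading), picked so
# that the sums line up with the quoted example: ΣX = 19, ΣX² = 90.54.
X = [4.3, 4.8, 4.9, 5.0]

sum_then_square = sum(X) ** 2             # (ΣX)²: add all values, THEN square
square_then_sum = sum(x ** 2 for x in X)  # ΣX²: square each value, THEN add

print(sum_then_square)  # the 19² = 361 case
print(square_then_sum)  # the 90.54 case
```

Same four numbers, same symbols, completely different results, which is exactly why the parentheses in the notation matter.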

    2. Let's say we have a variable X that represents the weights (in grams) of 4 grapes:

      This example is really helpful because it keeps things simple. Using something small like grapes makes the notation less intimidating and shows that summation works the same way no matter what the data represents.

    3. Many statistical formulas involve summing numbers. Fortunately there is a convenient notation for expressing summation.

      This is very helpful. Instead of writing out long additions every time, summation notation is basically shorthand. It's not new math, just a cleaner way to write what we're already doing.

    1. Does this prove that the fastest men are running faster? Or is the difference just due to chance, no more than what often emerges from chance differences in performance from year to year? We can't answer this question with descriptive statistics alone. All we can affirm is that the two means are “suggestive.”

      So this appears to be why inferential statistics exists in the first place. A difference between two averages can feel persuasive, but this passage basically says: a gap is not automatically evidence of a real change. The moment you ask "is this difference real or just random variation?" you've stepped into inference.

    2. Descriptive statistics are just descriptive. They do not involve generalizing beyond the data at hand. Generalizing from our data to another set of cases is the business of inferential statistics,

      This calls out typical mistakes that we've read about earlier and see in today's world. I've definitely seen people treat a big mean difference or a striking chart like it automatically proves something about the world. This line is basically a warning: descriptive stats tell you what happened in your dataset, but they don't automatically justify claims about a population.

    1. Although this technique cannot establish causality, it can still be quite useful. If the relation between conscientiousness and job performance is consistent, then it doesn’t necessarily matter if conscientiousness causes good performance or if they are both caused by something else – she can still measure conscientiousness to predict future performance.

      This is a strong point because it explains why non-experimental research still matters. Even without causation, relationships can be valuable for prediction, decision-making, and identifying risk factors. It also makes me think about how many real-life questions are naturally correlational, so we shouldn't treat "non-experimental" as "weak", just "limited in what it can claim".

    2. That is, random sampling and random assignment are not the same thing and cannot be used interchangeably. For research to be a true experiment, random assignment must be used. For research to be representative of the population, random sampling must be used.

      Random assignment helps make groups comparable. Random sampling helps make results more generalizable. They solve different problems and confusing them can lead to overstated conclusions.
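A short Python sketch of that distinction, using a made-up population of participant IDs (the population size and group sizes here are illustrative, not from the reading):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000 participant IDs.
population = list(range(1000))

# Random SAMPLING decides WHO gets into the study at all.
# Goal: a sample that is representative of the population.
sample = random.sample(population, 20)

# Random ASSIGNMENT decides WHICH CONDITION each sampled person gets.
# Goal: treatment and control groups that are comparable to each other.
shuffled = sample[:]       # copy so the original sample list is untouched
random.shuffle(shuffled)
treatment = shuffled[:10]
control = shuffled[10:]
```

The two steps are independent: you could randomly assign a convenience sample (a true experiment that generalizes poorly), or randomly sample people without assigning conditions (representative, but not an experiment).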

    3. An experiment is defined by the use of random assignment to treatment conditions and manipulation of the independent variable.

      This line is a good "checklist" definition. I like it because it gives a clear way to test whether a study is truly experimental: Did the researcher manipulate the IV? Did they randomly assign participants to conditions? If either is missing, you're probably not dealing with a true experiment.

    4. If we want to know if a change in one variable causes a change in another variable, we must use a true experiment.

      This is the key sentence about causality. It's basically saying: if your goal is cause-and-effect, you need as much control as possible. It also hints at why experiments are so valuable in stats and psychology. They give stronger evidence than just noticing patterns.

    5. The choice of research design is determined by the research question and the logistics involved.

      This stands out because it frames research as a set of tradeoffs, not "one best method". It reminds me that the "best" design isn't always the most controlled one, it's the one that answers the question while staying feasible and ethical.

    1. Random samples, especially if the sample size is small, are not necessarily representative of the entire population.

      This point is important because it challenges a common misconception that randomness alone guarantees accuracy. It shows why sample size plays a crucial role in statistical inference and why small samples can still lead to distorted results.
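A quick simulation can show this. The population below is simulated (the mean of 100 and SD of 15 are arbitrary choices of mine, not from the reading), and every sample drawn is perfectly random:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 values with mean ≈ 100, SD ≈ 15.
population = [random.gauss(100, 15) for _ in range(10_000)]
true_mean = statistics.mean(population)

def spread_of_sample_means(n, draws=1000):
    """Worst-case distance between a size-n sample mean and the true mean."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(draws)]
    return max(abs(m - true_mean) for m in means)

# Small random samples stray much further from the truth than large ones,
# even though the sampling procedure itself is unbiased.
print(spread_of_sample_means(5))
print(spread_of_sample_means(500))
```

Randomness removes systematic bias, but only a larger sample size shrinks the random error around the population value.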

    2. Those who sit in the front row tend to be more interested in the class and tend to perform higher on tests.

      This example is effective because it is relatable and clearly shows how bias can sneak into sampling. It demonstrates that convenience in choosing a sample often leads to misleading conclusions.

    3. If the sample is not representative, then the possibility of sampling bias occurs

      This is a key warning in statistics. It emphasizes that poor sampling can invalidate conclusions, even if the data analysis itself is done correctly. It shows that how data is collected matters just as much as how it is analyzed.

    4. Instead, we query a relatively small number of Americans, and draw inferences about the entire country from their responses.

      This line highlights the practical reason samples exist. It acknowledges real-world limits like time and cost, while also introducing the idea that statistics are about making educated guesses, not absolute certainty.

    5. The population is the collection of all people who have some characteristic in common; it can be as broad as “all people” if we have a very general research question about human psychology, or it can be extremely narrow, such as “all freshmen psychology majors at Midwestern public universities” if we have a specific group in mind.

      This definition stands out because it shows how flexible a population can be. A population isn't always "everyone". It depends entirely on the research question. This helps clarify why researchers must clearly define their population before collecting data.

    1. The general point is that it is often inappropriate to consider psychological measurement scales as either interval or ratio.

      This statement stands out because it challenges assumptions. Even though psychological data often look numerical, they may not behave like physical measurements. This reinforces the need for caution and thoughtful analysis when applying statistics to human behavior.

    2. Interval scales are not perfect, however. In particular, they do not have a true zero point even if one of the scaled values happens to carry the name “zero.”

      This is subtle but a critical idea. It explains why certain intuitive comparisons, like saying something is "twice as hot", are actually incorrect. It shows how measurement scales affect interpretation, not just calculation.
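A tiny worked example of why "twice as hot" fails, using the standard Celsius-to-Fahrenheit conversion: because an interval scale's zero point is arbitrary, the ratio between two values changes when you re-express the same temperatures on another interval scale.

```python
def c_to_f(c):
    """Convert Celsius to Fahrenheit; both are interval scales with no true zero."""
    return c * 9 / 5 + 32

# 20 °C looks like "twice" 10 °C, but the ratio is an artifact of where
# the scale happens to put zero, so it has no physical meaning.
ratio_celsius = 20 / 10                      # 2.0
ratio_fahrenheit = c_to_f(20) / c_to_f(10)   # 68 / 50 = 1.36
```

On a true ratio scale like Kelvin, the zero point is absolute, so ratios like "twice as hot" actually do mean something.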

    3. Variables such as number of children in a household are called discrete variables since the possible scores are discrete points on the scale.

      This is a clear and relatable example that shows why some data can only take specific values. It also contrasts nicely with continuous variables and helps explain why different statistical methods are needed for different types of data.

    4. The values of a qualitative variable do not imply a numerical ordering

      This stood out because it explains why numbers are sometimes misleading. Even if categories are assigned numbers, those numbers don't carry mathematical meaning. This helps prevent misuse of statistics, such as averaging categories that shouldn't be averaged.

    5. The experiment seeks to determine the effect of the independent variable on relief from depression.

      This sentence highlights the goal of experimental research: understanding effects, not just descriptions. It reinforces why experiments are powerful tools in science. They allow researchers to test whether one variable actually influences another.

    6. In psychology, we are interested in people, so we might get a group of people together and measure their levels of stress (one variable), anxiety (a second variable), and their physical health (a third variable)

      This example makes the concepts of variables concrete. It also highlights how multiple variables can be measured at once, which shows that psychological research is often complex and multidimensional rather than focused on just one outcome.

    7. A variable is simply a characteristic or feature of the thing we are interested in understanding

      This definition stands out because it shows how broad variables can be. A variable doesn't have to be numerical or scientific, it can be any characteristic. This helps explain why statistics can be applied in psychology, social sciences, and everyday situations.

    8. In virtually any form, data represent the measured value of variables

      This line is important because it simplifies what "data" actually means. Instead of thinking of data as complicated numbers or tables, this reminds us that data are just measurements of the things we are interested in understanding. This helps ground statistics in real-world observations rather than abstract mathematics.

    1. They can be misleading and push you into decisions that you might find cause to regret

      This is a big problem in our world. Agenda setting and misrepresentation of data and facts can cause people to make impulsive decisions that they wouldn't usually make if they had never been presented with these faulty claims.

    2. Without a way to organize these numbers into a more interpretable form, we would be lost, having wasted the time and money of our participants, ourselves, and the communities we serve.

      I'm assuming this is where our Jamovi software comes in. Being able to input the data and get the numerical breakdowns makes it easier for us to interpret and make sense of what these numbers mean, leading to a conclusion being made.

    1. In the broadest sense, “statistics” refers to a range of techniques and procedures for analyzing, interpreting, displaying, and making decisions based on data.

      Very interesting. This really gets me questioning the reliability of the information and statistics we are provided on a daily basis. The validity of many statistical breakdowns we get from social and broadcast networks could very well be painting a false narrative, just like the examples we read about above.

    2. A major flaw is that ice cream consumption generally increases in the months of June, July, and August regardless of advertisements. This effect is called a history effect and leads people to interpret outcomes as the result of one variable when another variable (in this case, one having to do with the passage of time) is actually responsible.

      First and foremost, I'd believe that since these months land in the summertime season, ice cream consumption will always increase during this time of year, so I definitely understand how this history effect holds true. The increase in advertisements probably helps boost cravings for ice cream during the hot weather, but the history effect of increased summer ice cream consumption shows that the advertisements don't tell the full story of this 30% increase, falsely giving Ben & Jerry's credit for it. On a deeper level, Ben & Jerry's is obviously a money-driven company, but I don't think increased ice cream consumption will ever be a good thing or should be celebrated. Although I must say, ice cream is delicious.