1. Last 7 days
    1. (snRNA-seq) avoids the aggressive enzymatic digestion, preserving the information from most cell types (refs. 9, 10) in the brain TME

      While single-nucleus RNA sequencing (snRNA-seq) is often favored for its ability to process frozen tissues and reduce dissociation-induced artifacts, there is significant literature demonstrating that single-cell RNA sequencing (scRNA-seq) remains the gold standard for sensitivity and completeness in several key areas.

      1. Sensitivity and Transcript Coverage

      The most frequent argument for scRNA-seq is its higher sensitivity. Because snRNA-seq only captures transcripts within the nucleus—which can account for as little as 10–20% of a cell's total mRNA—it often results in fewer detected genes and lower Unique Molecular Identifier (UMI) counts per cell.

      • Specific Evidence: Research on the goat pancreas explicitly stated that scRNA-seq outperformed snRNA-seq in detecting a greater diversity of cell types and was more effective in profiling key functional genes, particularly those related to digestive enzymes (J. Cheng et al., 2025).

      2. Capturing the Immune Landscape

      In cancer research, scRNA-seq is often superior for mapping the immune microenvironment. Immune cells (like T-cells and B-cells) are relatively small and have a high ratio of cytoplasmic to nuclear RNA, making them easier to capture and more robustly represented in whole-cell data.

      • Lung Adenocarcinoma: Head-to-head comparisons in human lung samples revealed that scRNA-seq represented the immune landscape significantly better (finding 81.5% immune cells vs. much lower proportions in snRNA-seq). The study concluded that for research focusing on the immune environment of tumors, scRNA-seq of fresh samples is the preferred method (PMC11166281).

      3. Missing Biological Compartments

      By definition, snRNA-seq loses information from the cytoplasm, which contains critical biological indicators:

      • Mitochondrial and Ribosomal Genes: These are often used as quality control metrics or as markers of metabolic state. Since snRNA-seq excludes the cytoplasm, it essentially "blinds" the researcher to mitochondrial-driven processes.
      • Transcriptional Artifacts: While snRNA-seq avoids dissociation-induced stress, it cannot capture the full physiological state of a cell at the moment of capture as accurately as a whole cell can (Biocompare Technical Review).

      Summary of Performance Trade-offs

      | Feature | scRNA-seq (Whole Cell) | snRNA-seq (Nucleus Only) |
      | --- | --- | --- |
      | Total Transcriptome | Samples whole-cell mRNA (cytoplasmic + nuclear). | Captures only the nuclear portion (~10–20% of total mRNA). |
      | Gene Detection | More genes detected per cell (more sensitive). | Fewer genes per cell. |
      | Immune Cells | Superior representation of lymphocytes. | Often underrepresents immune subsets. |
      | Tissue State | Requires fresh tissue (limiting). | Works on frozen and archived tissue. |
      | Dissociation Bias | Can induce stress-response genes. | Avoids dissociation artifacts. |

      Ultimately, the claim that snRNA-seq "outperforms" scRNA-seq is usually limited to brain, heart, or frozen samples where cell dissociation is physically impossible or creates too many artifacts. For most other applications, scRNA-seq remains the superior choice for high-resolution, high-sensitivity mapping (PMC7289686).

    2. Summary

      This paper provides a comprehensive molecular characterization of the GL261-GSC (Glioblastoma Stem Cell) murine model. Using advanced transcriptomic techniques, the authors demonstrate that this model effectively recapitulates the tumor microenvironment (TME) of the most prevalent human glioblastoma (GBM) subtype.


      1. Study Objective and Methodology

      The primary goal was to bridge the gap between preclinical success and clinical failure by identifying which human GBM subtype is best represented by the GL261-GSC model.

      • Model: Intracranial implantation of 5,000 GL261-GSCs into immunocompetent C57BL/6 mice.
      • Technologies:
        • Single-nucleus RNA sequencing (snRNA-seq) to bypass enzymatic digestion artifacts.
        • Visium spatial transcriptomics to map the physical location of cell clusters.
        • Comparative analysis using Smart-Seq2 and 10x Genomics platforms for technical validation.

      2. Key Findings: Tumor Heterogeneity and Neural Integration

      The study revealed that the brain TME significantly alters the transcriptional state of implanted GSCs, driving them toward increased heterogeneity.

      • Neural Circuitry Integration: Implanted tumor cells upregulated genes related to synaptic activity and neuronal signaling, such as Grik2, Nlgn3, and Gap43. This suggests that the model is ideal for studying neuron-glioma synapses and the formation of tumor microtubes.
      • Cellular States: The model captures the four essential GBM states: neural-progenitor-like (NPC-like), oligodendrocyte-progenitor-like (OPC-like), astrocyte-like (AC-like), and mesenchymal-like (MES-like).

      3. The Immune Landscape and Evasion

      The authors identified a shift toward an immunosuppressive TME as the tumor progressed from early (7 days) to late (28 days) stages.

      • Immune Evasion: Tumor cells showed increased expression of Cd274 (PD-L1), Nt5e (CD73), and the master transcription factor Irf8 in response to the TME.
      • TAM Infiltration: The myeloid compartment was dominated by Tumor-Associated Microglia and Macrophages (TAMs), which expressed immunosuppressive markers like Mrc1 (CD206), Arg1, and Tgfb1.
      • Checkpoint Targets: High expression of TIM-3 (Havcr2) and B7-H3 (Cd276) was noted, highlighting these as viable immunotherapy targets in this model.

      4. Correlation with Human GBM

      A pivotal finding of the study is that the GL261-GSC model most closely resembles the TME Med human subtype.

      | Feature | GL261-GSC / TME Med Similarity |
      | --- | --- |
      | Immune Profile | Heterogeneous immune populations; low PD-1/CTLA-4 expression. |
      | Neural Signaling | Enrichment in pathways related to neuronal synaptic integration. |
      | Immunotherapy | Predicted low response to anti-PD1, but high potential for TIM-3 or B7-H3 inhibitors. |

      5. Treatment Impact

      The study evaluated Temozolomide (TMZ) and an experimental peptide, Tat-Cx43 266-283.

      • TMZ: Reduced tumor cell proliferation and downregulated immune-evasive genes, though it also triggered some genes associated with poor prognosis.
      • Tat-Cx43: Significantly altered the immune cluster's transcriptome early in development and reduced levels of the potent immunosuppressor TGF-β1.

      Conclusion

      The researchers conclude that the GL261-GSC model is a robust tool for studying TME Med glioblastoma. It provides a reliable framework for testing therapies targeting neural integration and specific myeloid-driven immune evasion, offering a higher probability of successful clinical translation.

    1. I am a generalist. I am interested in all aspects of coaching and performance. I often struggle because I want to be involved in everything. I want to problem solve.

      On the struggles of being a generalist as a coach and writer/speaker.

    2. I wrote, “Coaching is one of the last, great generalist professions,” in the mid ‘00s. Coaches understood the physical and the psychological, motivation and conditioning. They knew how to teach and instruct. Their knowledge was learned primarily through observation (assistant coaches watching a head coach) and experience (actually coaching). Some beliefs were more universal than others, some practices can be criticized, and some coaches were better or more knowledgeable than others, but coaches generally accrued significant implicit and explicit knowledge about conditioning, development, motivation, skill acquisition, teaching, and more. Coaches had extensive general knowledge across a wide range of domains, although few were experts in a single domain.

      On coaching (with reference to basketball) being a "generalist" profession, and a change toward specialists in recent years.

    1. lived experience

      in counselling circles has a specific meaning - though popular in non-academic circles, we are critical of people employing the 'lived experience' terminology when they cannot claim any clinical accreditation, and they can be dangerous if they are 'counselling' the vulnerable.

    2. construction-suicide-response

      Critical Incidents - it won't be construction-industry specific; it will be more wide-ranging across all industries, though some examples can focus on each industry.

    1. 251 parent-child dyads

      Now's a good moment to think about whether 251 is enough. The authors don't talk about how they decided on this sample size, but in well-designed experiments you'll often see a section called a power analysis that explains the reasoning behind sample size choices. This study doesn't include one — but you should know what it is because you'll see it in other papers and on the quiz. A power analysis is a calculation researchers do BEFORE collecting data to figure out how many participants they'll need. The calculation depends on three things:

      Effect size — how big is the relationship you expect to find? Small effects need bigger samples to detect; large effects can be detected with smaller samples. Researchers usually estimate effect size from prior research on similar topics.
      Significance threshold (alpha) — typically set at .05. This is your tolerance for false positives — concluding there's an effect when there isn't.
      Statistical power (1 − beta) — typically set at .80. This is your tolerance for false negatives — failing to detect a real effect when one exists. A power of .80 means you have an 80% chance of finding a real effect if one is truly there.

      Researchers plug these numbers into a tool like G*Power (a free software package) and it tells them the minimum sample size they need.

      Why does this matter? Because a study with too few participants can fail to detect real effects. When you see null results in a study (like H4, H6b in this article), one possible explanation is that the manipulation didn't work. Another is that the sample was too small to detect a small but real effect. Without a power analysis, we can't easily tell which is happening.

      In this study, dividing 251 participants across 6 conditions in the 3×2 factorial design gives roughly 42 participants per cell. A common rule of thumb for between-subjects experiments is at least 30 participants per cell — so this study clears that bar. But for detecting smaller effects, more would have been better. When you read a different experimental study on the quiz, look for whether the authors did a power analysis. If they did, it's a sign of methodological rigor. If they didn't, that's worth flagging as a limitation. Extra video on sample size in experiments: https://www.youtube.com/watch?v=v-dyn6tO5dQ
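      Here's a minimal sketch of the same calculation G*Power performs, done in Python with the statsmodels package instead. The inputs (a "medium" effect size f = 0.25, alpha = .05, power = .80, 6 groups) are illustrative assumptions, not values taken from this study.

      ```python
      # Solve for the minimum total N for a comparison across 6 cells.
      from statsmodels.stats.power import FTestAnovaPower

      n_total = FTestAnovaPower().solve_power(
          effect_size=0.25,  # Cohen's f: assumed "medium" effect (from prior research)
          alpha=0.05,        # tolerance for false positives
          power=0.80,        # 1 - beta: 80% chance of detecting a real effect
          k_groups=6,        # the 3x2 factorial design has 6 cells
      )
      print(f"Minimum total sample size: {n_total:.0f}")  # minimum total N across all 6 cells
      ```

      Notice how shrinking the assumed effect size pushes the required N up fast, which is exactly why 251 participants is fine for medium effects but thin for small ones.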

    2. randomly

      Make sure you're keeping two things straight that students mix up constantly: random assignment and random selection. Random selection (also called random sampling) is about who gets into the study in the first place. It happens at the recruitment stage. You start with a defined population (say, all U.S. tweens ages 8–13), and you use a random procedure to select some subset of them to invite. True random selection is rare in social science research because it requires a sampling frame — a complete list of every member of the population — which usually doesn't exist. Random selection is what supports external validity (generalizability of findings to the population). Random assignment is about what condition each participant ends up in once they're already in the study. It happens after recruitment. Every participant has an equal chance of being placed into any of the experimental conditions.

      Random assignment is what supports internal validity — specifically, the non-spuriousness criterion for causality — because it equalizes pre-existing differences between groups so that when you see a difference on the DV, you can be confident it's because of the manipulation rather than because of some baseline group difference. A study can have one without the other. This study has random assignment (Qualtrics randomly placed each tween into one of 6 conditions) but not random selection (parents were recruited from Dynata's online panel, which is a convenience sample of people who voluntarily signed up to take surveys for points). That's why the authors can make causal claims internally but worry about generalizability in their limitations section. So when you answer this question, be specific: how were participants assigned, and was that the same as how participants were recruited? Those are different stages. Extra video on random assignment vs. random selection: https://www.youtube.com/watch?v=wB9S2od1wo0
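      To make the two stages concrete, here's a toy Python sketch with made-up names. The point is simply that selection draws people from a population list, while assignment places already-recruited people into conditions.

      ```python
      import random

      # Random SELECTION (recruitment stage): draw a sample from a sampling
      # frame -- a complete population list, which rarely exists in practice.
      population = [f"tween_{i}" for i in range(10_000)]  # hypothetical frame
      sample = random.sample(population, 251)

      # Random ASSIGNMENT (after recruitment): every participant has an equal
      # chance of landing in any of the 3 x 2 = 6 conditions.
      conditions = [(cue, training)
                    for cue in ("sponsored", "non-sponsored", "unaddressed")
                    for training in ("training", "no training")]
      assignment = {person: random.choice(conditions) for person in sample}
      ```

      This study did the second step (via Qualtrics) but not the first (Dynata's panel is a convenience sample), which is exactly the internal-vs-external validity trade-off described above.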

    3. Disclosure statement

      The authors report no conflicts. What would be a hypothetical conflict of interest that would make you look more critically at the findings? For this study, examples might include:
      - Industry funding from a brand featured in the study. If Waterpik (or any other oral health product manufacturer) had funded this research, you'd want to look closely at whether the findings happen to favor their commercial interests — for example, did the authors find purchase intention was high across all conditions, suggesting unboxing videos are still effective marketing tools?
      - Funding from advertising or marketing trade groups. A grant from an industry group representing influencer marketers, advertising agencies, or social media platforms could create pressure to find that current practices are "fine" or that proposed regulations are unnecessary.
      - Funding from advocacy groups with strong prior positions. A grant from a children's media literacy advocacy organization could create pressure to find that training works well, since that aligns with their mission. Even well-intentioned advocacy funding can create subtle bias toward findings that support the funder's agenda.
      - Affiliations with platforms being studied. If an author had served as a paid consultant to YouTube, TikTok, or another platform that streams unboxing content, their interpretation of policy implications might be more sympathetic to platform-led solutions than to government regulation.
      - Authorship of competing measures or training programs. If an author had developed and licensed their own commercial advertising literacy curriculum, they'd have an incentive to find that THEIR approach works, or that brief in-app interventions (like the one tested here) are inadequate compared to fuller programs they sell.
      - Personal financial stakes in influencer content. An author who personally produces or monetizes children's online content would have a clear incentive in either direction depending on what they want the findings to show.

      These would create potential bias — not necessarily making the research wrong, but something to be aware of. Note: a conflict of interest is about financial or professional incentives that could bias research, not about the researcher's personal identity or positionality. The fact that an author is or isn't a parent, for example, is not a conflict of interest. Notice that the funding section of this article is not present — the authors don't mention any external grant support. Sometimes the absence of external funding is itself informative. It may mean the research was done with internal university resources, which typically carries fewer strings than industry or advocacy funding. The fact that one of the authors is affiliated with the Joan Ganz Cooney Center at Sesame Workshop (a nonprofit research center focused on children's media) is worth noticing — that's not a conflict of interest, but it does signal that the author works in an environment with strong opinions about children's media. Sesame Workshop's mission shapes what research questions seem worth asking, which is different from financial bias but still worth being aware of.

      This is the sort of question you'll have to answer on the quiz: In this study, the author(s) said that there are no conflicts of interest. Thinking about this study, can you think of any potential conflicts of interest that would cause you to look more critically at the findings? For example, think of particular organizations that the author(s) could be affiliated with or research funding sources that would lead you to think more about the credibility of the author and the findings. This is all hypothetical.

    4. content

      On the quiz you'll be asked to answer a question like this: Now that you've read the entire study, what TYPE of experiment is this? Please draw upon what you’ve learned about the different types and explain why this experiment exemplifies the type that it is.

      Don't just guess at this. Walk through the 5-step decision process from the Identifying Experiment Types infographic:

      Step 1: How many things are you testing? If it's one IV, it's a single-factor design. If it's multiple IVs manipulated together, it's a factorial design. Count your IVs and multiply their levels to get the number of conditions.

      Step 2: How are participants placed into groups?

      Random assignment → true experiment
      No random assignment, using existing groups (like classrooms) → quasi-experiment
      One group, no comparison → pre-experiment

      Step 3: When do you measure?

      Just once, after treatment → posttest only
      Before and after treatment → pretest-posttest
      Many times over time → time series

      Step 4: Where does it happen?

      Lab/controlled → lab experiment
      Real-world → field experiment
      Studying something that already happened in the world → natural experiment

      Step 5: Who experiences what?

      Same people experience all conditions → within-subjects
      Different people in each condition → between-subjects (most common)

      Now apply each step to this study and put together the full label. The authors actually tell you part of the answer in the abstract — "3 (sponsored; non-sponsored; sponsorship unaddressed cue) x 2 (advertising training; no advertising training) randomized experimental design" — but that doesn't capture everything. Add the missing pieces from steps 3, 4, and 5. A complete answer should sound something like: "this is a [factorial / single-factor] [true / quasi / pre] [posttest-only / pretest-posttest] [lab / field / natural / online] [between-subjects / within-subjects] experiment, because [walk through your reasoning]." The Experiment Types Overview infographic in Module 7 has the full taxonomy with definitions for each design type.
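      If it helps, here's a hypothetical little Python helper that just strings the five steps into one label; the function and its arguments are mine, not the authors'. The values passed in reflect what the abstract and the notes above state (3×2 factorial, random assignment, between-subjects); verify steps 3 and 4 against the article yourself.

      ```python
      def experiment_label(n_ivs, assignment, timing, setting, subjects):
          """Compose a full design label from the five decision steps."""
          factor = "factorial" if n_ivs > 1 else "single-factor"
          return f"{factor} {assignment} {timing} {setting} {subjects} experiment"

      print(experiment_label(
          n_ivs=2,                      # Step 1: sponsorship cue x training video
          assignment="true",            # Step 2: random assignment via Qualtrics
          timing="posttest-only",       # Step 3: check -- were there pre-measures?
          setting="online",             # Step 4: check -- run through Qualtrics/Dynata
          subjects="between-subjects",  # Step 5: each kid sees one condition
      ))
      ```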

    5. Perceived informative intent

      Like perceived selling intent, this is a measured variable, and it does double duty across the study — IV in some hypotheses, DV in others. Operationalization: Three survey items adapted from Shan et al. (2020), each on a 5-point response scale (1: definitely not to 5: definitely yes). The items asked whether the boy in the video (1) wants to help others by sharing useful information about healthy teeth, (2) wants to help others by sharing useful information about the water flosser, and (3) thinks the water flosser is good to use. The three items were averaged into a scale. Level of measurement: Interval. The 5-point Likert-type scale is conventionally treated as interval in social science research. Use: Has multiple roles across the study:

      Dependent variable in H4 (non-sponsored video → perceived informative intent), H5 (unaddressed video → perceived informative intent), and H6b (training → perceived informative intent in non-sponsored condition)
      Independent variable in H2 (perceived informative intent → educational recall) and RQ2a/RQ2b (perceived intent → recall, purchase intention)

      Was it manipulated? No — this is a key point. Perceived informative intent was measured via survey items, not manipulated. The researchers didn't make some kids perceive informative intent and others not; they just asked all kids how much informative intent they perceived. This means that whenever perceived informative intent is treated as an "IV" (in H2 and the RQs), the causality claims are weaker than they would be for a manipulated variable. Measurement validity/reliability: Cronbach's alpha was .82, which is good (above the .70 threshold for acceptable internal consistency, and just above the .80 threshold for good consistency). The three items reliably measure a single underlying construct. Compare this to perceived selling intent, which has only 2 items and reports a correlation (r = .53) instead of alpha. The 3-item informative intent scale is more reliable than the 2-item selling intent scale — both because more items generally produce more stable measurement, and because an alpha of .82 indicates stronger consistency than a between-item correlation of .53. Conceptual definition is on page 275, embedded in the schema theory discussion: "activation of an informative/educational schema likely directs the viewer's attention to educational information unrelated to the product's features." So informative intent is conceptualized as the recognition that content is meant to be objective or educational rather than commercial.
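      For reference, here's how Cronbach's alpha falls out of raw item responses: a minimal Python sketch with made-up data (rows are kids, columns are the three informative-intent items on the 1–5 scale); the numbers are invented, not the study's.

      ```python
      import numpy as np

      # Hypothetical responses: 5 kids x 3 items, each item scored 1-5.
      items = np.array([
          [4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [3, 3, 2],
          [4, 4, 4],
      ])

      k = items.shape[1]                         # number of items
      item_vars = items.var(axis=0, ddof=1)      # variance of each item
      total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
      alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
      print(f"Cronbach's alpha = {alpha:.2f}")
      ```

      When items move together, their individual variances are small relative to the total-score variance, which pushes alpha toward 1; that's part of why a 3-item scale tends to beat a 2-item one.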

    6. Content recall

      Two recall measures are bundled under this variable name — product recall and health recall — but they're scored separately and used as separate DVs in the analysis. Operationalization: Four multiple-choice items total, two for each recall type:

      Product recall (2 items): How much water could the device hold? How many water-pressure speed options does it have?
      Health recall (2 items): What is enamel? What is plaque?

      Each item was scored as correct (1) or incorrect (0). The two product items were summed for a 0–2 product recall score; the two health items were summed for a 0–2 health recall score. Level of measurement: Each item is nominal (correct or incorrect). The summed score (0, 1, or 2) is technically ordinal — there's order (more correct is better) but the intervals between scores aren't necessarily meaningful in a strict measurement sense. Some researchers would treat this as interval for analysis purposes. Use: Dependent variable in H1 (perceived selling intent → product recall), H2 (perceived informative intent → health recall), and RQ2a (perceived intent × training → recall). Measurement validity: Worth flagging that these are multiple-choice recognition measures, not free recall. Recognition is generally easier than recall — kids might pick the right answer from four options without genuinely having retained the information. A more rigorous measure would have asked open-ended questions ("How much water could the device hold?") and scored kids' written or spoken answers. The recognition format probably inflates scores compared to free recall, which connects to the ceiling effect the authors flag in their limitations. Also worth noting: with only TWO items per recall type, you can't compute meaningful internal consistency. The measures are short for practical reasons (kids' attention spans, survey length), but it's a measurement weakness worth acknowledging. Conceptual definition: The article doesn't provide an explicit conceptual definition of "content recall." The authors describe what they're measuring (memory of product attributes vs. memory of health information) but don't articulate the underlying theoretical construct. Worth noting as a #conceptualdefinition gap.

    7. Children’s product purchase intention

      This is arguably the most consequential DV in the study — purchase intention connects directly to real-world consumer behavior — but it's measured with the lightest possible touch. Operationalization: A single survey item: "Will you ask your parents to buy a [company] water flosser?" Response options ranged from 1 (definitely not) to 5 (definitely yes). Level of measurement: Interval (5-point Likert-type scale, treated conventionally as interval). Use: Dependent variable in RQ2b (perceived intent × training → purchase intention). It also appears in the path analysis as an outcome of perceived intent across multiple analyses. Measurement validity/reliability: This is where it gets concerning. Single-item measurement means:

      Reliability is unmeasurable. You need at least two items to compute internal consistency. With one item, we have no statistical evidence the measure is reliable.
      Validity rests entirely on this question. If the wording isn't quite right — and "ask your parents to buy" might capture something different from underlying purchase desire — there's no way to triangulate against a second item.
      Variability is constrained. Five options for 251 kids will produce clusters at certain values, limiting the analysis's power to detect subtle effects.

      Researchers sometimes use single-item measures for face-valid, simple constructs, especially when surveying children where survey length is a real constraint. But for a key DV in an experimental study, a multi-item scale would have been stronger. This is a measurement weakness the authors don't acknowledge in their limitations. Conceptual definition: The article doesn't provide an explicit conceptual definition of "purchase intention." The construct is implied — kids' interest in acquiring the depicted product — but never theoretically defined. Like content recall, this is a #conceptualdefinition gap worth flagging.

    8. Perceived selling intent

      This is a measured variable — and that's important to understand because perceived intent shows up in nearly every hypothesis but is never randomly assigned. Operationalization: Two survey items adapted from Rozendaal et al. (2016), each on a 5-point response scale (1: definitely not to 5: definitely yes). The items asked whether the boy in the video (1) is being paid by the company, and (2) is trying to get me to buy a water flosser. The items were averaged into a scale. Level of measurement: Interval. The 5-point Likert-type scale is conventionally treated as interval in social science research, even though strictly speaking it's debatable. Use: Has multiple roles across the study:

      Dependent variable in H3 (sponsored video → perceived selling intent) and H6a (training → perceived selling intent in sponsored condition)
      Independent variable in H1 (perceived selling intent → product recall) and RQ2a/RQ2b (perceived intent → recall, purchase intention)

      Was it manipulated? No — this is a critical point. Perceived selling intent was MEASURED via survey items, not manipulated. The researchers didn't make some kids perceive selling intent; they just asked all kids how much selling intent they perceived. This means that whenever perceived selling intent is treated as an "IV" (in H1 and the RQs), the causal claims are weaker than they would be for a manipulated variable. Measurement validity/reliability: The two items were significantly correlated (r = 0.53, p < .001), indicating moderate consistency between them. Note: with only two items, the authors report a correlation rather than Cronbach's alpha. Alpha is conventionally calculated for three or more items. A correlation of .53 isn't strong — it suggests the two items are tapping a common construct but capturing somewhat different aspects of it. Conceptual definition is on page 274, embedded in the description of the Persuasion Knowledge Model: "individuals will engage in cognitive coping strategies when they identify selling intent within a given message." So selling intent is conceptualized as the recognition that a message is trying to persuade.

    9. H2

      In this hypothesis/research question, you'd be asked to identify the independent variable and the dependent variable. And there is a moderating variable in this hypothesis/research question. A moderating variable is a variable that affects the direction and/or strength of the relationship between an independent variable and a dependent variable. [Example: In a study examining the relationship between study time and exam performance, motivation level acts as a moderating variable. For highly motivated students, more study time leads to significantly better exam performance, while for lowly motivated students, the effect of study time on performance is much weaker. Thus, motivation changes the strength of the relationship between study time and exam performance, demonstrating its role as a moderating variable.] So for this hypothesis or research question, you will need to list the independent variable, the moderating variable, and the dependent variable. Read closely to see what the author(s) argue is happening here.

      If you need more help with moderators/mediators, here’s an extra video: https://www.youtube.com/watch?v=FzM0_GC082A

    10. H1

      In this hypothesis/research question, you'd be asked to identify the independent variable and the dependent variable. And there is a moderating variable in this hypothesis/research question. A moderating variable is a variable that affects the direction and/or strength of the relationship between an independent variable and a dependent variable. [Example: In a study examining the relationship between study time and exam performance, motivation level acts as a moderating variable. For highly motivated students, more study time leads to significantly better exam performance, while for lowly motivated students, the effect of study time on performance is much weaker. Thus, motivation changes the strength of the relationship between study time and exam performance, demonstrating its role as a moderating variable.] So for this hypothesis or research question, you will need to list the independent variable, the moderating variable, and the dependent variable. Read closely to see what the author(s) argue is happening here.

      If you need more help with moderators/mediators, here’s an extra video: https://www.youtube.com/watch?v=FzM0_GC082A

    11. RQ1

      This is more of a descriptive RQ, so there's no IV and DV per se, but it could loosely be mapped as: perceived informative intent + perceived selling intent (both measured IVs, compared) → purchase intention (DV).

    12. Joan Ganz Cooney Center at Sesame Workshop

      This isn't a university or a college, so it should cause you to think more about the author's affiliation. Research what this affiliation is and if/how it impacts your credibility assessment of the author.

    13. b)

      In this hypothesis/research question, you'd be asked to identify the independent variable and the dependent variable. And there is a moderating variable in this hypothesis/research question. A moderating variable is a variable that affects the direction and/or strength of the relationship between an independent variable and a dependent variable. [Example: In a study examining the relationship between study time and exam performance, motivation level acts as a moderating variable. For highly motivated students, more study time leads to significantly better exam performance, while for lowly motivated students, the effect of study time on performance is much weaker. Thus, motivation changes the strength of the relationship between study time and exam performance, demonstrating its role as a moderating variable.] So for this hypothesis or research question, you will need to list the independent variable, the moderating variable, and the dependent variable. Read closely to see what the author(s) argue is happening here.

      If you need more help with moderators/mediators, here’s an extra video: https://www.youtube.com/watch?v=FzM0_GC082A

    14. sponsored

      Now that you’ve read the entire study, you should always make sure that you can state what were the primary findings of the research. You don’t have to give a bunch of details. This should be 1-4 sentences expressing the big takeaway that the researchers found answering their hypotheses/research questions.

    15. a) s

      H6a: Training video (manipulated IV: training vs. no training) → perceived selling intent (DV) — within the sponsored condition specifically. They're arguing for an interaction effect here.

    16. Limitations

      When you're working on these quizzes, you'll be asked to find limitations that the author(s) discusses. Remember that limitations are not the research findings, but rather things that the author(s) wishes that they had done differently or challenges with the sample, measurement, procedures, etc. that could impact the results.

    17. H1

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    18. a 90-second animated advertising training video

      This is the second manipulated independent variable, and it's also doing double duty as a moderator in the path analysis. Operationalization: A 90-second animated video featuring two cartoon spokespeople (a 12-year-old male and an undergraduate female) explaining that some YouTube videos are designed to sell products. The video describes "influencer" tactics, warns viewers they can be "tricked," and lists cues to watch for ("sponsor," "partnership"). Half of the participants saw this video before the unboxing video; the other half went straight to the unboxing video. Level of measurement: Nominal. Two categories: training vs. no training. Use: Independent variable in H6a (training → detection of selling intent in sponsored content) and H6b (training → detection of informative intent in non-sponsored content). Also serves as a moderator in RQ2a and RQ2b — the authors test whether training changes the relationship between perceived video intent and recall (RQ2a) or purchase intention (RQ2b). How was it manipulated? Qualtrics randomly assigned half of participants to view the training video before the experimental unboxing video. The other half saw the unboxing video alone. Random assignment is what allows the authors to make causal claims about the training's effects. Measurement validity: The training video plausibly teaches what advertising literacy is, but with no manipulation check, we can't verify whether kids absorbed the content. The authors' own discussion notes this ambiguity: they aren't sure whether the training "truly taught the advertising literacy content, or [cued] the use of advertising literacy skills they already had." Conceptual definition is on page 278: "'advertising literacy.' Advertising literacy skills help children filter the information they see in advertisements and view it through a critical lens to determine the intent behind it." The training video is meant to build or activate exactly these skills.

    19. RQ2a

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    20. b) i

      H6b: Training video (manipulated IV: training vs. no training) → perceived informative intent (DV) — within the non-sponsored condition specifically.

    21. (a)

      In this hypothesis/research question, you'd be asked to identify the independent variable and the dependent variable. And there is a moderating variable in this hypothesis/research question. A moderating variable is a variable that affects the direction and/or strength of the relationship between an independent variable and a dependent variable. [Example: In a study examining the relationship between study time and exam performance, motivation level acts as a moderating variable. For highly motivated students, more study time leads to significantly better exam performance, while for lowly motivated students, the effect of study time on performance is much weaker. Thus, motivation changes the strength of the relationship between study time and exam performance, demonstrating its role as a moderating variable.] So for this hypothesis or research question, you will need to list the independent variable, the moderating variable, and the dependent variable. Read closely to see what the author(s) argue is happening here.

      If you need more help with moderators/mediators, here’s an extra video: https://www.youtube.com/watch?v=FzM0_GC082A

    22. H2

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    23. RQ2b)

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    24. H3

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    25. H6b

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    26. 6a)

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    27. IMPACT SUMMARY

      Some journals — especially newer ones and those with applied audiences — include a section like this as a public scholarship outreach effort. It's a more accessible summary of the study, written in plain language, structured clearly (what we knew, what's new, why it matters), and stripped of jargon. Think of it as a built-in, author-approved version of what a science journalist would write. Notice the three-part structure here:

      Prior State of Knowledge: what the field knew before this study
      Novel Contributions: what THIS study adds
      Practical Implications: why anyone outside academia should care

      This is a really useful section to read first, before diving into the abstract. It tells you the gist of the study in language a non-specialist can follow. Now imagine a journalist covering this study. They might write a headline like:
      "YouTube unboxing videos confuse kids — but a 90-second warning helps"
      "Tweens see sponsored YouTube videos as both ads AND honest reviews, study finds"
      "Even with disclosures, kids struggle to spot YouTube ads — researchers say platforms should help"
      Notice what those headlines do: they translate the academic findings into something concrete and actionable. They emphasize the practical takeaway (kids need help, training works briefly) rather than the technical findings (path analysis showed moderation by training condition, p < .05). That translation is part of what makes research findings reach audiences beyond other researchers. Not every journal does this. Older and more traditional journals tend to skip the impact summary entirely. When you DO see one, take advantage — it's a free roadmap to the study.

    28. mediator

      Mediation analysis is used to understand how or why one thing influences another by introducing a middle factor that links them together. Think of it as a “go-between.”

    29. (H4

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?
      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.
      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment. For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    30. Non-sponsored video (objective content). The boy explains that the video has been created for a school project (“Mrs. Jenkins’ 6th grade class”), that he decided to do his health report on teeth, and that he will describe a product that can help keep teeth healthy. He states that he just bought the water flosser last week and does not indicate where it is available for purchase. After demonstrating the water flosser, the boy ends the video by describing the next peer who will give their report for the class.
      ● Sponsored video. The boy begins the video by saying “Jack here, back with another review!” He discloses sponsorship by saying things like “my friends at [company] sent me this water flosser to try out last week” and “my sponsors at [company] tell me it costs just $36 and is available at [major retailers].” After demonstrating the water flosser (i.e., identical content across videos), he ends the video with additional sponsorship cues. He says, for example, “thanks again to my friends at [company] for this great device!” and “see ya next time when I’ll talk about eyeglasses that block out blue light from screens from my friends at [company].”
      ● Sponsorship unaddressed video. The third video is vague about the purpose of the video, not giving cues that the content is objective (as the school report video) or direct sponsorship statements. The boy begins by saying “Jack here, back with another review.” He states that he saw the product at his friend’s house and decided to try it out. He explains that he got the product “in the mail” last week, and that it is available at many stores for $36. After demonstrating the water flosser, he ends the video by thanking everyone for watching and says “see you next time when I will talk about eyeglasses that block out blue light from screens and help make your eyes healthy.” Thus, there are subtle cues that the content may be sponsored (i.e., by having an ongoing review channel; by listing the price and availability of the product; by featuring positive attributes of the product), but he makes no explicit statements of a material relationship with the brand.

      This is one of the two manipulated independent variables in the study. Let's walk through the analysis. Operationalization: Three different versions of an unboxing video, each about three minutes long, all featuring the same 12-year-old male actor demonstrating the same water flosser. The body of each video is identical. What differs is the framing at the beginning and end:

      Non-sponsored: Boy explains it's a school project for "Mrs. Jenkins' 6th grade class."
      Sponsored: Boy explicitly thanks his "friends at [company]" who sent him the product and gives the price and retailers.
      Sponsorship unaddressed: Boy says he saw the product at a friend's house, mentions the price and availability, but never explicitly states a sponsor relationship.

      Level of measurement: Nominal. The three conditions are categories without inherent order. (You could argue ordinal — non-sponsored → unaddressed → sponsored represents increasing levels of commercial intent — but the authors treat these as nominal categories in their analysis, so nominal is the safer answer.) Use: Independent variable in H3 (sponsored vs. non-sponsored → perceived selling intent), H4 (non-sponsored vs. sponsored → perceived informative intent), and H5 (unaddressed vs. non-sponsored → perceived informative intent). How was it manipulated? Qualtrics randomly assigned each child to one of the three video conditions. The video was streamed within Qualtrics from a private YouTube channel "in order to boost children's beliefs that these were real videos on the YouTube platform." So the manipulation involves both (a) which version of the video the child saw, and (b) the framing that made it feel like real YouTube content rather than a study stimulus. Measurement validity: The three conditions are differentiated by clear scripted statements (sponsorship explicit, school report explicit, or ambiguous). Face validity is decent — the videos plausibly represent the three real-world scenarios the authors care about. The bigger concern is that the same actor and same body content appears in all three, which controls for confounds but reduces ecological validity (real influencer videos vary far more in style, polish, and host). Conceptual definition is on page 273: "'influencers,' paid or otherwise compensated by marketers in exchange for favorable reviews." That's the underlying construct the sponsored condition is operationalizing.

    31. SEM

      Structural Equation Modeling (SEM) is a statistical technique in social sciences that allows us to examine complex relationships between multiple variables in a single, integrated model. It combines measurement models to estimate hidden or “latent” variables, like intelligence or happiness, using indicators we can observe, and structural models that show how these variables influence each other. SEM enables us to see not only direct effects but also indirect and mutual relationships, all at once. By doing this, SEM helps us test if our theoretical model accurately represents the data, offering a more complete view of the relationships between observed and unobserved factors.

    32. Journal of Children and Media

      Let's start with journal credibility. Journal of Children and Media (which we've seen before) is published by Routledge/Taylor & Francis, which is a well-established academic publisher. But established publishers aren't ALWAYS publishing credible journals. To verify credibility, you'd want to check tools we discussed in Module 5: SCImago Journal Rank (SJR), Ulrich's Web (to confirm peer-review status), or the journal's own website. This journal is indexed in major databases and has an ISSN, which are good signs. On the quiz, you won't have internet access, but you should know WHICH tools you'd use and WHAT you'd look for.

    33. path analysis.

      Path analysis is a statistical tool in social sciences that allows us to map and measure both direct and indirect relationships between multiple variables to understand how they influence a particular outcome. In this approach, each variable is represented as a node, and arrows (or "paths") between nodes show how one variable affects another. Some variables have direct effects, with an arrow pointing straight to the outcome, while others work indirectly, influencing an intermediate variable first. By drawing and analyzing these paths, path analysis helps us test theories about how variables interact to produce an outcome, giving us a detailed view of the connections and the strength of each relationship in the model.
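      As a hedged illustration, here's the algebra behind the simplest possible path diagram, with one intermediate variable M sitting between X and Y (generic notation, not taken from the article):

      ```latex
      % Path a: X predicts the intermediate variable M
      M = aX + e_1
      % Path c': X's direct effect on Y; path b: M's effect on Y
      Y = c'X + bM + e_2
      % Indirect effect of X through M = a \cdot b; total effect = c' + a \cdot b
      ```

      The product a·b is the strength of the indirect route, and comparing it to c' tells you how much of X's influence flows through the intermediate variable.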

    34. moderator

      Moderation analysis helps you find out when or for whom a relationship between two things is stronger or weaker by introducing a third factor, called a moderator. This moderator changes how the two main things relate to each other.
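      In regression terms, moderation is usually tested with an interaction term. A minimal sketch (generic notation, not this article's model):

      ```latex
      % W is the moderator; the coefficient b_3 carries the moderation
      Y = b_0 + b_1 X + b_2 W + b_3 (X \times W) + e
      ```

      If b3 is significantly different from zero, the effect of X on Y depends on the level of W. That's moderation.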

    35. r

      r is the correlation coefficient. We will cover this more in Module 9. This is a number that tells us how strongly two variables are related and in which direction. It ranges from -1 to +1. A positive value (closer to +1) means that as one variable increases, the other tends to increase as well - this is called a positive correlation. A negative value (closer to -1) means that as one variable increases, the other tends to decrease - this is a negative correlation. When r is close to 0, it means there is little to no relationship between the two variables.
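      A minimal sketch of computing r, using NumPy and made-up toy numbers (hypothetical data, not from the article):

      ```python
      import numpy as np

      # Toy data: hours online per week vs. number of ads recalled (hypothetical)
      hours_online = np.array([1, 2, 3, 4, 5, 6])
      ads_recalled = np.array([2, 3, 3, 5, 6, 8])

      # np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r
      r = np.corrcoef(hours_online, ads_recalled)[0, 1]
      print(round(r, 2))  # close to +1 here: both variables rise together
      ```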

    36. mediator

      Mediation analysis is used to understand how or why one thing influences another by introducing a middle factor that links them together. Think of it as a “go-between.”

    37. path analysis

      Path analysis again. See note 33 above for the full definition: a statistical tool that maps direct and indirect relationships between variables, with arrows ("paths") showing how one variable affects another and how strong each connection is.

    38. between-subjects

      A 'between-subjects' design in social science experiments refers to a research setup where different groups of participants are exposed to different conditions or treatments. Each participant experiences only one condition, so comparisons are made between groups. For example, imagine you're testing the effectiveness of two teaching methods on student performance. In a between-subjects design, you would have one group of students using Method A and a different group using Method B. You'd then compare the results between these two groups to see which method works better. This type of design is useful because it helps to avoid carryover effects, where the experience of one condition might influence performance in another if the same participants experienced both. However, it also requires more participants because each person only provides data for one condition.

      This study is a between-subjects design — different participants are in each condition. With 251 participants and 6 conditions, that means roughly 42 different kids per condition (see the assignment sketch at the end of this note). Each kid sees ONE combination of sponsorship cue and training video. They don't see all 6. The alternative is a within-subjects design, where the same participants experience all (or several) conditions. If this study had been within-subjects, each kid would have watched all six versions of the video — sponsored with training, sponsored without training, non-sponsored with training, and so on — and the researchers would compare each kid's responses across the conditions they personally experienced. Why choose one over the other? Between-subjects advantages:

      Cleaner — each participant only sees one condition, so there's no carryover from one to another. Avoids order effects — a participant doesn't get tired or bored partway through. Avoids demand characteristics that come from comparing conditions — when a kid sees both sponsored and non-sponsored videos back-to-back, they might guess what the study is about.

      Between-subjects disadvantages:

      Requires more participants — you need a separate group for each condition. Pre-existing differences between groups can muddy the results, which is why you need random assignment.

      Within-subjects advantages:

      Statistically more efficient — each participant serves as their own comparison, which controls for individual differences automatically. Requires fewer total participants.

      Within-subjects disadvantages:

      Carryover effects — what you saw first might affect your reaction to what comes next. Order effects — fatigue, practice, or boredom can shape later responses. Demand characteristics — participants may guess the study's purpose by comparing conditions.

      For this study, between-subjects makes sense because: (1) the manipulations work better when kids encounter only one version (seeing all three sponsorship cues back-to-back would tip them off to the research question); (2) the kids are young and might struggle with multiple video viewings; (3) the authors had access to enough participants through Dynata to fill all 6 cells. When you read any experiment on the quiz, ask: did each participant experience all conditions (within-subjects), or just one (between-subjects)? You can tell by looking at how the conditions are described and how participants were assigned. Most experiments are between-subjects — it's the more common choice.
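      Here's the assignment sketch promised above: a minimal simulation of randomly assigning 251 kids to 6 conditions. This uses plain random assignment; survey platforms like Qualtrics can also balance cell sizes, which this sketch doesn't do.

      ```python
      import random

      N_PARTICIPANTS = 251
      N_CONDITIONS = 6  # 3 sponsorship cues x 2 training levels

      random.seed(42)  # fixed seed so the sketch is reproducible
      assignments = [random.randrange(N_CONDITIONS) for _ in range(N_PARTICIPANTS)]

      # Tally how many kids land in each cell: roughly 42 each, with some wobble
      counts = [assignments.count(c) for c in range(N_CONDITIONS)]
      print(counts)
      ```

      Notice that simple random assignment leaves the cell sizes slightly uneven. That's normal and fine; what matters is that assignment is unrelated to any participant characteristic.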

    39. to

      When you go through this question, work through the seven threats systematically rather than guessing. Here's the list, with how each applies — or doesn't — to a study like this one:

      History — events outside the study that happen between measurements and influence the DV. Less of a concern in a short, single-session online experiment like this one, since there's no extended time window. But it could matter if, say, a major news story about influencer scams broke during the data collection period (May 27 – June 3, 2022).

      Maturation — natural changes in participants over time. Mostly a concern for studies that span weeks or months. Tweens didn't age meaningfully during this 15-minute experiment.

      Testing — taking a pretest changes posttest responses. Not relevant here because there's no pretest.

      Instrumentation — the measurement tool changes between measurements. Not an issue with a posttest-only design.

      Regression to the mean — extreme scorers drift toward the average on retesting. Not relevant without repeated measurement.

      Selection bias — pre-existing differences between groups. This is where random assignment does its work. The authors used random assignment, which addresses selection — but only if the randomization actually produced equivalent groups. (Did the authors check for baseline equivalence on demographics? Look for that.)

      Attrition — participants dropping out, especially if they drop out non-randomly. The authors note 13 children quit early and 3 declined to participate. Were dropouts equally distributed across conditions? If kids in the sponsored condition were more likely to quit than kids in the non-sponsored condition, that's a problem.

      Then there's the broader category of demand characteristics / Hawthorne effect / experimenter effects, which threaten internal validity through participant behavior rather than through the seven classic threats. Worth asking: did the kids behave naturally, or were they performing for the researcher / their parent (who was likely nearby during this online study)?

      Random assignment is a powerful tool — it addresses several of these threats at once — but it's not a magic eraser. Some threats survive randomization. Your job is to figure out which ones the authors handled well, and which ones they didn't address.

      Extra video on mitigating threats to internal validity: https://youtu.be/3GW13A4-eSQ. The Threats to Internal Validity infographic in Module 7 has the full list with examples of each.

    40. In

      In these quizzes, you're always going to be asked about the bigger research question and purpose/goal of the study – not the specific hypotheses or research questions tied to particular variables – but the larger research question and objective. Ways that you can think about this: Why bother doing this study? How does conducting this study add to our understanding of the world? How do the researchers justify spending time and money studying this question? [You don’t have to answer these questions directly, but these are the sort of things you should think about with regard to the bigger research question and goal.] This will be in the introduction section of the study. [Please note that sometimes there is a heading "Introduction" and sometimes there is not - but the text after the abstract is the introduction.]

    41. H5

      This is a bit more of a descriptive hypothesis, so there isn't a strong causal IV/DV claim. But roughly, H5: sponsorship cue (manipulated IV: unaddressed compared to non-sponsored) → perceived informative intent (DV). It's a descriptive comparison of the unaddressed condition to the non-sponsored condition rather than a causal prediction.

    42. Discussion

      Remember that the discussion section summarizes what the author(s) found, how it is contextualized within the existing body of research, and what contribution this study's findings make to this area of research. Then authors usually speculate about some bigger picture questions related to the research. Authors then discuss limitations and opportunities for future research.

    43. Materials and methods

      When you encounter a methods section, one thing you'll need to do — both for this study and on the in-class quiz — is figure out what TYPE of experiment it is. Don't just guess. Walk through the 5-step decision process from the Identifying Experiment Types infographic in Module 7.

      Step 1: How many things are you testing? Count the IVs and their levels.

      One IV → single-factor design Multiple IVs → factorial design (multiply the levels: a 2×2 has 4 conditions, a 3×2 has 6, a 2×3 has 6, a 3×3 has 9, etc.)

      For this study: two IVs (sponsorship cue with 3 levels, training video with 2 levels). 3 × 2 = 6 conditions. Factorial. (The six cells are enumerated in the sketch at the end of this note.)

      Step 2: How are participants placed into groups?

      Random assignment → true experiment No random assignment, using existing groups (like classrooms) → quasi-experiment One group, no comparison → pre-experiment

      For this study: Qualtrics randomly assigned each child to one of the 6 conditions. True experiment.

      Step 3: When do you measure?

      Just once, after treatment → posttest only Before AND after treatment → pretest-posttest Many times over time → time series

      For this study: kids answered some demographic questions before viewing the video, but those don't count as a pretest because they're not measuring the same construct as the posttest. The DVs (perceived intent, recall, purchase intention) were measured only after the video. Posttest-only.

      Step 4: Where does it happen?

      Lab/controlled → lab experiment Real-world → field experiment Studying something that already happened in the world → natural experiment

      For this study: it's tricky because the standard categories don't fit cleanly. The study was conducted online, with kids on whatever device they had at home, possibly with parents nearby, possibly with siblings making noise. It's not a controlled lab environment, but it's not a traditional field experiment either. The authors flag this in their limitations: data collection was online, prohibiting verification of treatment fidelity. I'd call this an online or remote experiment — a hybrid that has some advantages for external validity (kids in real-world environments) but loses the controlled conditions of a lab.

      Step 5: Who experiences what?

      Same people experience all conditions → within-subjects Different people in each condition → between-subjects (most common)

      For this study: each kid saw only one combination of sponsorship cue and training. Between-subjects.

      Putting it together: This is a 3 × 2 factorial, true, posttest-only, online, between-subjects experiment. On the quiz, your answer should walk through each step and justify it with evidence from the article. The Experiment Types Overview infographic in Module 7 has the full taxonomy with definitions for each design type.
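      To make the cell-counting in Step 1 concrete (as flagged above), here's a minimal sketch that enumerates the six cells of this 3 × 2 design. The condition labels are mine, paraphrased from the article:

      ```python
      from itertools import product

      sponsorship_cue = ["non-sponsored", "sponsored", "sponsorship unaddressed"]
      training_video = ["training", "no training"]

      # Every combination of one level from each IV is one experimental condition
      cells = list(product(sponsorship_cue, training_video))
      print(len(cells))  # 3 x 2 = 6
      for cell in cells:
          print(cell)
      ```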

    44. High Point University, High Point, NC, USA

      You may be familiar with these authors' universities, but there will be times when you're not - especially when reading research from other countries. When you can't easily evaluate an author's institutional affiliation because you're unfamiliar with the university, here are some strategies:

      * First, check whether the university has a dedicated research profile for the author - most universities worldwide maintain faculty pages in English, even if the institution's primary language is different. Look for their publication record, research focus, and degree. That is the most credible source.
      * Second, use Google Scholar to look up the author directly. Google Scholar works across languages and countries. You can see how many publications they have, how often they're cited, and whether they publish in journals you recognize as credible. An author with dozens of publications in peer-reviewed journals indexed in major databases is credible regardless of whether you've heard of their university.
      * Third, remember that credibility doesn't require name recognition. There are thousands of legitimate research universities worldwide that produce high-quality peer-reviewed scholarship. Not recognizing a university doesn't make it less credible - it just means you need to do a bit more digging. The same tools work everywhere: Google Scholar for the author, SCImago or Ulrich's for the journal, and the article's own reference list and citation count for the study itself.
      * What WOULD be a red flag: an author with no findable academic profile anywhere, a university that doesn't appear to exist or has no research output, or a journal that isn't indexed in any major database. Those are substantive credibility concerns - not just unfamiliarity.

    45. Discussion

      Future research. Often comes directly from limitations. The authors suggest: One: examine non-health products. Two: vary video stimuli to look at affective reactions, not just cognitive. Three: compare across formats — online influencer versus TV commercial. Four: study reactions to known influencers, where parasocial relationships exist. Each of those addresses a specific limitation. That's the typical pattern.

    46. Some scholars believe that youth with greater conceptual persuasion understanding

      RELATE TO PRIOR WORK: The authors invoke a theoretical explanation from prior work (Opree & Rozendaal, 2015; Rozendaal et al., 2009; Sagarin et al., 2002) to interpret their own finding. Kids who feel they "know" advertising might paradoxically be MORE susceptible because they don't deploy persuasion resistance — overconfidence in their own ad-detection skills makes them lower their guard. This explains why training had to do affective work, not just cognitive work, to be effective.

    47. contributors

      Author and journal credibility. At the end of the article, there's a "Notes on contributors" section. Sarah Vaala has a Ph.D. from the Annenberg School at the University of Pennsylvania — top-tier communication program — and she's an Associate Professor at High Point University. Her research focuses on persuasive messages, youth, and family decisions about media. Directly in her area of expertise. She's also affiliated with the Joan Ganz Cooney Center at Sesame Workshop, which is a research institute focused on children's media. The other authors are more junior — Francesca Mauceri completed a master's, Olivia Connelly a bachelor's, both at High Point.

    48. Limitations

      Limitations: One: the sample isn't representative — recruited from an online panel, skews white, more educated, higher income. Two: 91.6% of the sample uses YouTube, so results may not apply to non-users. Three: data collection was online, so treatment fidelity wasn't verified. Four: only one set of stimuli was used. Five: the product was a health item, which differs from the toys, cosmetics, or food that dominate real youth-targeted unboxing. Six: ceiling effects — mean scores were high across measures. Seven: social desirability bias — kids may have answered the way they thought researchers wanted. Eight: dental health information may have been pre-known by older tweens.

    49. Wald test

      When researchers run a factorial design (like this 3×2 study), they don't just check whether each IV has its own effect — they also check whether the IVs interact with each other. An interaction effect tells you that the impact of one IV depends on the level of the other IV. Let's make this concrete. Imagine an experiment testing whether caffeine improves test performance, with two IVs: caffeine (caffeine vs. no caffeine) and time of day (morning vs. evening). Three different patterns could emerge:

      Main effect of caffeine only: caffeine helps everyone equally, no matter what time of day. The effect of caffeine doesn't depend on time of day. Main effect of time only: people perform better in the morning regardless of caffeine. The effect of time doesn't depend on caffeine. Interaction: caffeine helps in the morning but doesn't help in the evening. The effect of caffeine depends on the time of day. That's an interaction.

      Interactions are often the most interesting findings in a factorial study. They show that the world is more complex than a single IV's effect — that effects depend on context. In this study, the authors are looking for an interaction between sponsorship cue and training video. Their hypothesis is essentially: training video changes how kids respond to sponsorship cues. They use Wald tests to check whether the relationship between perceived intent and the outcome is different for participants who saw the training video versus those who didn't. When the Wald test is significant (p < .05), there's an interaction — the effect of one IV depends on the other. The actual findings include several significant interactions: training moderates the relationship between perceived informative intent and purchase intention, and training moderates the relationship between perceived informative intent and health recall. In both cases, training changes how perception predicts the outcome. When you read an experimental study on the quiz, look for:

      Main effects — does each IV have its own effect on the DV? Interaction effects — do the IVs jointly influence the DV in a way that's more than the sum of their separate effects?

      A significant interaction is often more theoretically interesting than the main effects, because it reveals a contingency — when the effect happens, when it doesn't. Extra video on factorial designs and interaction effects: https://www.youtube.com/watch?v=2wZAAQ6OdFw
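      If you want to see what a test like this looks like in practice, here's a hedged sketch of the hypothetical caffeine example above, using simulated data and statsmodels. This illustrates interaction testing in general; it is not a re-analysis of the article.

      ```python
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 200
      caffeine = rng.integers(0, 2, n)  # 0 = no caffeine, 1 = caffeine
      morning = rng.integers(0, 2, n)   # 0 = evening, 1 = morning

      # Simulate a pure interaction: caffeine boosts scores only in the morning
      score = 70 + 5 * caffeine * morning + rng.normal(0, 5, n)
      df = pd.DataFrame({"score": score, "caffeine": caffeine, "morning": morning})

      # 'caffeine * morning' expands to both main effects plus the interaction term
      model = smf.ols("score ~ caffeine * morning", data=df).fit()

      # The caffeine:morning row is the interaction; its Wald-type test (the
      # coefficient's t and p in the summary) checks whether the effect of
      # caffeine depends on time of day
      print(model.summary().tables[1])
      ```

      In the output, a significant caffeine:morning coefficient is the statistical signature of "the effect of one IV depends on the other."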

    50. Both videos were streamed within Qualtrics from a private YouTube channel, in order to boost children’s beliefs that these were real videos on the YouTube platform.

      This is ALMOST like a cover story, or at least does SOMETHING to reduce demand characteristics.

    51. H1

      Pause here. Before you analyze this hypothesis, you need to understand a concept that the Module 7 lecture didn't go deep on: in some experiments, an independent variable is measured rather than manipulated. This study is one of those cases — and it changes everything about how we evaluate causality.

      The Module 7 lecture defines an IV as "the thing that is manipulated or somehow introduced into the setting by the experimenter." That's the standard case, and it's true for the sponsorship cue and training video in this study — those are manipulated IVs. The researchers control them by random assignment.

      But this study also has IVs that are measured, not manipulated. Perceived selling intent and perceived informative intent are measured. The researchers didn't control these — they didn't make some kids perceive selling intent and other kids not. They asked all kids how much selling intent they perceived, and recorded what each kid reported.

      Why does a researcher use a measured IV in an experiment? Sometimes the construct can't be ethically or practically manipulated. You can't randomly assign kids to "perceive a video as sponsored" — perception is something that happens in their head based on what they see. The researchers can manipulate the cue (sponsored vs. non-sponsored video) and measure the resulting perception, but they can't directly control the perception itself.

      Why does this matter? Because random assignment is what gives experiments their causal power. When an IV is randomly assigned, you can rule out confounds — pre-existing differences between groups should be roughly equal. When an IV is measured, no random assignment happened, so confounds remain a real concern. Practical implications:

      For manipulated IVs (sponsorship cue, training video): random assignment supports strong claims about temporal order and non-spuriousness. Causality is well-supported by the design. For measured IVs (perceived selling intent, perceived informative intent): no random assignment. Temporal order is fuzzy because perception and outcomes are measured close together. Non-spuriousness is weak because confounds (advertising literacy, prior product interest, working memory) could explain both perception and outcome.

      So when you analyze each hypothesis in this study, the FIRST question to ask is: is the IV manipulated or measured? That answer shapes how strongly you can claim causality. Track the variables as you read:

      Manipulated: sponsorship cue (3 levels), training video (2 levels) Measured: perceived selling intent, perceived informative intent

      Any hypothesis with a measured IV will have weaker causal claims than a hypothesis with a manipulated IV — even though the overall study uses random assignment to conditions.

    52. Procedures

      Since this study DOESN'T have manipulation checks, let me show you what a manipulation check actually looks like in the wild, so you can recognize one if you see it on the quiz.

      Imagine a study testing whether watching a sad video makes people more likely to donate to charity. The IV is the video (sad vs. neutral), and the DV is donation behavior. A manipulation check would ask, right after the video: "How would you describe your current mood?" If participants in the sad video condition rated their mood as significantly more sad than participants in the neutral condition, the manipulation worked. If both groups reported similar moods, the manipulation failed — and any null result on donation behavior couldn't tell us whether sadness doesn't affect donations or whether the video just didn't make people sad.

      Or imagine a study where some participants watch a video featuring a "familiar" character and others watch one featuring an "unfamiliar" character. A real manipulation check would ask each participant if they recognized the character or could name them — if kids in the "familiar" condition all named the character correctly and kids in the "unfamiliar" condition couldn't, the familiarity manipulation worked.

      The pattern is always: a quick, direct question about the manipulation itself, asked separately from the dependent variable. It's not the same thing as the DV. It's a check on whether the IV did what it was supposed to do. When you encounter a different study on the quiz, look for manipulation checks in the methods or results sections. They might appear as a sentence like "Participants in Condition A reported significantly higher [thing manipulated] than participants in Condition B, t(X) = Y, p < .05." That's the manipulation working as intended.
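      Here's a minimal sketch of the statistical side of that pattern: an independent-samples t-test on the check question, with made-up mood ratings (hypothetical numbers, not from any study):

      ```python
      from scipy import stats

      # Mood ratings after each video (1 = very sad, 7 = very happy); toy data
      sad_video_mood = [2, 3, 2, 1, 3, 2, 2]
      neutral_video_mood = [5, 4, 5, 6, 4, 5, 5]

      # If p < .05 and the sad group reports sadder mood, the manipulation worked
      t, p = stats.ttest_ind(sad_video_mood, neutral_video_mood)
      print(f"t = {t:.2f}, p = {p:.4f}")
      ```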

    53. of interest is whether tweens would perceive similarly high rates of informative intent among more well-known child influencers.

      FUTURE RESEARCH: Real influencer testing. The unknown actor used in this study may have produced different reactions than a real influencer kids actually follow. Future research should test reactions to known influencers, where parasocial relationships are already established.

    54. Others have found that young viewers perceive peer hosts and influencers as similar to themselves, heightening their perceptions of host authenticity and their trust in the content

      RELATE TO PRIOR WORK: The authors connect their finding (that informative intent was perceived universally, even in sponsored conditions) to research by Naderer et al. (2021) on parasocial perceptions. The unknown peer-aged actor in this study may have triggered the same authenticity heuristic real influencers do — kids perceive sameness as honesty. This both validates their finding and points to where future research should go.

    55. information about plaque and tooth enamel may be common knowledge among older tweens

      LIMITATION: Pre-existing knowledge. Older participants may have already known the dental health information from school, which would inflate health recall scores independent of what they learned from the video. The recall measure can't distinguish "learned from this video" from "already knew."

    56. Some prior studies have also found positive relationships between understanding of selling and persuasive intent and adolescents’ desire for advertised products

      RELATE TO PRIOR RESEARCH: The authors connect their finding (that perceived selling intent positively predicts purchase intention without training) to a body of prior research showing the same surprising pattern in adolescents (Harms et al., 2022; Opree & Rozendaal, 2015; Vanwesenbeeck et al., 2016a, 2016b). This is counterintuitive — you'd expect detecting an ad to make kids LESS interested, but the literature consistently shows the opposite for some kids. Their finding fits this pattern.

    57. Further research should use varying video stimuli to examine additional components of tweens’ conceptual and affective reactions to sponsored content online and to determine effective ways to alert them to sponsored content

      FUTURE RESEARCH: Affective reactions. The authors note their study focused mostly on cognitive reactions (recall, perceived intent). Future research should also measure affective reactions — emotional responses, trust, liking — which might mediate or moderate purchase intention in ways the current study didn't capture.

    58. Participants may have responded in socially desirable ways

      LIMITATION: Social desirability bias. Kids might have answered the way they thought the researchers (or their parents nearby) wanted them to answer — saying they'd ask for the product when they wouldn't, or saying they perceived selling intent because that seems like the "smart" answer.

    59. future research should confirm these findings across additional classes of products

      FUTURE RESEARCH: Product category replication. The authors call for replication using toys, cosmetics, food, and other youth-targeted product categories rather than just health products. Different product types might trigger different schemas and different levels of skepticism.

    60. Future research should examine similar content across different formats (e.g., influencer-style online ad vs. child actor in a TV commercial) to examine the possibility and nature of format-specific advertising schemas

      FUTURE RESEARCH: Cross-format comparison. The authors suggest examining the same kind of content across different formats — for example, an unboxing video vs. a child actor in a traditional TV commercial — to see whether tweens have different mental schemas for different ad formats.

    61. The health focus of the target product also differs from many youth-focused unboxing videos, which often feature toy, cosmetic, or food products

      LIMITATION: Product type. The water flosser is a health product, while most real unboxing content features toys, cosmetics, or food. Tweens might respond differently — possibly with less skepticism — to the product categories they actually encounter on YouTube.

    62. only one set of stimuli was used to test hypotheses

      LIMITATION: Stimulus generalizability. The researchers tested ONE specific set of videos. Findings might not extend to videos with different hosts, different production styles, different products, or different platforms. Replication with varied stimuli is needed.

    63. mean scores were fairly high across key variables, suggesting a potential ceiling effect.

      LIMITATION: Ceiling effects. Most participants scored near the top of the scales for several measures, which limits how much variation the analysis can detect. Real differences might exist that the data can't reveal because everyone is clustered at the high end.

    64. The sample of parent-child dyads was recruited from an online participant panel and may not be fully representative of US tweens.

      LIMITATION: Sample representativeness. The authors recruited from Dynata's online panel, which skews toward more-educated, higher-income, mostly White households. Findings may not generalize to U.S. tweens from less represented demographic groups.

    65. Recruitment and data collection were also conducted online, prohibiting verification of treatment fidelity.

      LIMITATION: Treatment fidelity. The researchers couldn't directly verify that participants actually watched the videos as intended. Kids could have skipped sections, been distracted, or had a parent help them — and there's no way to tell from online data collection.

    66. Most of the tween sample (91.6%) reported that they use YouTube, so relationships may differ among tweens less familiar with YouTube

      LIMITATION: YouTube familiarity. Almost all participants are existing YouTube users, so they're already familiar with unboxing-style content. Tweens who don't use YouTube — possibly the most vulnerable population because they're newer to this content format — aren't represented.

    67. b

      Tables 4 and 5 are dense with statistics. Let me break down what you're looking at, because you'll see versions of these on the quiz. The b value (unstandardized regression coefficient). This tells you how much the dependent variable changes for each one-unit increase in the independent variable. So if b = 0.59 for perceived informative intent → purchase intention, that means: for each 1-point increase in perceived informative intent, purchase intention increases by 0.59 points (on its own scale). The sign of b matters too:

      Positive b: the IV and DV move in the same direction — when one goes up, the other goes up. Negative b: the IV and DV move in opposite directions — when one goes up, the other goes down.

      So in this study, b = 0.59 (positive) means more perceived informative intent leads to more purchase intention. If you see b = −0.25, it would mean more of one variable leads to less of the other. The confidence interval [CI b] in brackets after the b value. This is the range of plausible values for b. If the confidence interval includes zero, the relationship isn't statistically significant (you can't rule out the possibility that the true effect is zero). If the interval doesn't include zero, the relationship is statistically significant. Example from the article: b = 0.59 [0.38, 0.79] means: the best estimate for the effect is 0.59, and the true effect is plausibly anywhere between 0.38 and 0.79. Since this interval doesn't include zero, the effect is statistically significant. Compare to b = 0.10 [−0.03, 0.22] — this interval DOES include zero, so the effect is NOT statistically significant. Effect sizes more broadly. A b value tells you the size of an effect, but other studies report different metrics for effect size:

      Cohen's d — used for comparing two means; .2 is small, .5 is medium, .8 is large. Pearson's r — correlation coefficient; .1 is small, .3 is medium, .5 is large. Partial eta squared (η²p) — used in ANOVA; .01 is small, .06 is medium, .14 is large. You'll see this metric if a different study uses ANOVA instead of regression. Odds ratios, R², and others — different statistical methods produce different effect size metrics.

      Why effect sizes matter beyond p-values. A statistically significant result with a tiny effect size might be technically real but practically meaningless — especially with large samples, where even trivially small effects can reach statistical significance. Always look at effect size alongside p-values to assess whether a finding is practically important.
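      Here are the two quick checks from this note in code form. The helper names are mine; the CI values echo the article's examples, and the group data are toy numbers:

      ```python
      import statistics

      def significant_by_ci(lower, upper):
          """A 95% CI that excludes zero implies two-sided p < .05 for that estimate."""
          return not (lower <= 0 <= upper)

      print(significant_by_ci(0.38, 0.79))   # True  -> significant (b = 0.59 [0.38, 0.79])
      print(significant_by_ci(-0.03, 0.22))  # False -> not significant (b = 0.10)

      def cohens_d(group1, group2):
          """Cohen's d: difference between means in pooled-standard-deviation units."""
          n1, n2 = len(group1), len(group2)
          s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
          pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
          return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

      # Toy example: prints about 1.2, a large effect by Cohen's benchmarks
      print(round(cohens_d([5, 6, 7, 6, 5], [4, 5, 6, 5, 4]), 2))
      ```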

    68. the activation of an informative/educational schema likely directs the viewer’s attention to educational information unrelated to the product’s features.

      Perceived informative intent (linked to Variable 6). From schema theory: informative intent is recognized when content is identified as objective/educational, activating an educational schema that directs attention toward learning rather than consumer response.