    1. H1

      Working through causality questions can feel overwhelming, so here's a framework. For each hypothesis, walk through the three criteria one at a time:

      Association — Did the authors find a statistical relationship between the IV and DV in this specific hypothesis? Look at the results section. Was the hypothesis supported, partially supported, or unsupported?

      Temporal order — Did the IV come before the DV in time? In a true experiment with random assignment, the manipulation always comes first by design. But watch for cases where the "IV" was actually MEASURED rather than manipulated — those weaken the temporal order claim because measurement and outcome happen close together in the survey.

      Non-spuriousness — Did the design rule out third variables? Random assignment is the main tool here. If participants were randomly assigned to conditions, pre-existing differences should be roughly equal across groups. If the IV was measured (not manipulated), there's no random assignment to that variable, and confounds become a real concern.

      Watch out: this study has a mix of MANIPULATED variables (sponsorship cue, training video — both randomly assigned) and MEASURED variables (perceived selling intent, perceived informative intent). Some hypotheses test relationships involving measured variables, where causality claims are weaker even though the overall study uses random assignment.

      For a walk-through of how to apply these criteria to real experimental studies (3 examples): https://youtu.be/GaoRbiROyEw
      For a longer, more detailed in-class version: https://www.youtube.com/watch?v=xcItkXe9G6E

    2. a 90-second animated advertising training video

      This is the second manipulated independent variable, and it's also doing double duty as a moderator in the path analysis.

      Operationalization: A 90-second animated video featuring two cartoon spokespeople (a 12-year-old male and an undergraduate female) explaining that some YouTube videos are designed to sell products. The video describes "influencer" tactics, warns viewers they can be "tricked," and lists cues to watch for ("sponsor," "partnership"). Half of the participants saw this video before the unboxing video; the other half went straight to the unboxing video.

      Level of measurement: Nominal. Two categories: training vs. no training.

      Use: Independent variable in H6a (training → detection of selling intent in sponsored content) and H6b (training → detection of informative intent in non-sponsored content). Also serves as a moderator in RQ2a and RQ2b — the authors test whether training changes the relationship between perceived video intent and recall (RQ2a) or purchase intention (RQ2b).

      How was it manipulated? Qualtrics randomly assigned half of the participants to view the training video before the experimental unboxing video. The other half saw the unboxing video alone. Random assignment is what allows the authors to make causal claims about the training's effects.

      Measurement validity: The training video plausibly teaches what advertising literacy is, but with no manipulation check, we can't verify whether kids absorbed the content. The authors' own discussion notes this ambiguity: they aren't sure whether the training "truly taught the advertising literacy content, or [cued] the use of advertising literacy skills they already had."

      Conceptual definition (page 273): advertising literacy — "Advertising literacy skills help children filter the information they see in advertisements and view it through a critical lens to determine the intent behind it." The training video is meant to build or activate exactly these skills.

    3. RQ2a

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this hypothesis was manipulated or measured.

    4. b) i

      H6b: Training video (manipulated IV: training vs. no training) → perceived informative intent (DV) — within the non-sponsored condition specifically.

    5. (a)

      In this hypothesis/research question, you'd be asked to identify the independent variable and the dependent variable. There is also a moderating variable here. A moderating variable affects the direction and/or strength of the relationship between an independent variable and a dependent variable. [Example: In a study examining the relationship between study time and exam performance, motivation level acts as a moderating variable. For highly motivated students, more study time leads to significantly better exam performance, while for students with low motivation, the effect of study time on performance is much weaker. Thus, motivation changes the strength of the relationship between study time and exam performance, demonstrating its role as a moderating variable.] So for this hypothesis or research question, you will need to list the independent variable, the moderating variable, and the dependent variable. Read closely to see what the author(s) argue is happening here.

      If you need more help with moderators/mediators, here’s an extra video: https://www.youtube.com/watch?v=FzM0_GC082A

    6. H2

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this hypothesis was manipulated or measured.

    7. RQ2b)

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this research question was manipulated or measured.

    8. H3

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this hypothesis was manipulated or measured.

    9. H6b

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this hypothesis was manipulated or measured.

    10. 6a)

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this hypothesis was manipulated or measured.

    11. IMPACT SUMMARY

      Some journals — especially newer ones and those with applied audiences — include a section like this as a public scholarship outreach effort. It's a more accessible summary of the study, written in plain language, structured clearly (what we knew, what's new, why it matters), and stripped of jargon. Think of it as a built-in, author-approved version of what a science journalist would write. Notice the three-part structure here:

      Prior State of Knowledge: what the field knew before this study
      Novel Contributions: what THIS study adds
      Practical Implications: why anyone outside academia should care

      This is a really useful section to read first, before diving into the abstract. It tells you the gist of the study in language a non-specialist can follow. Now imagine a journalist covering this study. They might write a headline like:

      "YouTube unboxing videos confuse kids — but a 90-second warning helps"
      "Tweens see sponsored YouTube videos as both ads AND honest reviews, study finds"
      "Even with disclosures, kids struggle to spot YouTube ads — researchers say platforms should help"

      Notice what those headlines do: they translate the academic findings into something concrete and actionable. They emphasize the practical takeaway (kids need help, training works briefly) rather than the technical findings (path analysis showed moderation by training condition, p < .05). That translation is part of what makes research findings reach audiences beyond other researchers. Not every journal does this. Older and more traditional journals tend to skip the impact summary entirely. When you DO see one, take advantage — it's a free roadmap to the study.

    12. mediator

      Mediation analysis is used to understand how or why one thing influences another by introducing a middle factor that links them together. Think of it as a “go-between.”
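      If it helps to see the "go-between" idea numerically, here's a minimal Python sketch with made-up data (the variable names and effect sizes are invented for illustration, not taken from the study). X affects Y only through the mediator M, so once you control for M, the X → Y relationship should largely disappear:

```python
import numpy as np

# Made-up data: X -> M -> Y, with NO direct X -> Y path.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)               # "IV" (e.g., a manipulated condition)
m = 0.8 * x + rng.normal(size=n)     # mediator, caused by X
y = 0.7 * m + rng.normal(size=n)     # DV, caused only through M

def ols_slopes(predictors, outcome):
    """Least-squares slopes of the outcome on the predictor columns (plus intercept)."""
    X = np.column_stack([np.ones(len(outcome))] + predictors)
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coefs[1:]

total_effect = ols_slopes([x], y)[0]       # X predicts Y when M is ignored
direct_effect = ols_slopes([x, m], y)[0]   # X barely predicts Y once M is controlled

# total_effect lands near 0.8 * 0.7 = 0.56; direct_effect lands near 0.
print(f"total: {total_effect:.2f}, direct after controlling for M: {direct_effect:.2f}")
```

      The shrinkage of the X coefficient once M enters the model is the signature of mediation.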

    13. (H4

      Apply the three-criteria causality framework (association, temporal order, non-spuriousness) laid out under item 1 (H1) above, and pay attention to whether the IV in this hypothesis was manipulated or measured.

    14. Non-sponsored video (objective content). The boy explains that the video has been created for a school project ("Mrs. Jenkins' 6th grade class"), that he decided to do his health report on teeth, and that he will describe a product that can help keep teeth healthy. He states that he just bought the water flosser last week and does not indicate where it is available for purchase. After demonstrating the water flosser, the boy ends the video by describing the next peer who will give their report for the class.
      ● Sponsored video. The boy begins the video by saying "Jack here, back with another review!" He discloses sponsorship by saying things like "my friends at [company] sent me this water flosser to try out last week" and "my sponsors at [company] tell me it costs just $36 and is available at [major retailers]." After demonstrating the water flosser (i.e., identical content across videos), he ends the video with additional sponsorship cues. He says, for example, "thanks again to my friends at [company] for this great device!" and "see ya next time when I'll talk about eyeglasses that block out blue light from screens from my friends at [company]."
      ● Sponsorship unaddressed video. The third video is vague about the purpose of the video, not giving cues that the content is objective (as the school report video) or direct sponsorship statements. The boy begins by saying "Jack here, back with another review." He states that he saw the product at his friend's house and decided to try it out. He explains that he got the product "in the mail" last week, and that it is available at many stores for $36. After demonstrating the water flosser, he ends the video by thanking everyone for watching and says "see you next time when I will talk about eyeglasses that block out blue light from screens and help make your eyes healthy." Thus, there are subtle cues that the content may be sponsored (i.e., by having an ongoing review channel; by listing the price and availability of the product; by featuring positive attributes of the product), but he makes no explicit statements of a material relationship with the brand.

      This is one of the two manipulated independent variables in the study. Let's walk through the analysis. Operationalization: Three different versions of an unboxing video, each about three minutes long, all featuring the same 12-year-old male actor demonstrating the same water flosser. The body of each video is identical. What differs is the framing at the beginning and end:

      Non-sponsored: Boy explains it's a school project for "Mrs. Jenkins' 6th grade class."
      Sponsored: Boy explicitly thanks his "friends at [company]" who sent him the product and gives the price and retailers.
      Sponsorship unaddressed: Boy says he saw the product at a friend's house, mentions the price and availability, but never explicitly states a sponsor relationship.

      Level of measurement: Nominal. The three conditions are categories without inherent order. (You could argue ordinal — non-sponsored → unaddressed → sponsored represents increasing levels of commercial intent — but the authors treat these as nominal categories in their analysis, so nominal is the safer answer.)

      Use: Independent variable in H3 (sponsored vs. non-sponsored → perceived selling intent), H4 (non-sponsored vs. sponsored → perceived informative intent), and H5 (unaddressed vs. non-sponsored → perceived informative intent).

      How was it manipulated? Qualtrics randomly assigned each child to one of the three video conditions. The video was streamed within Qualtrics from a private YouTube channel "in order to boost children's beliefs that these were real videos on the YouTube platform." So the manipulation involves both (a) which version of the video the child saw, and (b) the framing that made it feel like real YouTube content rather than a study stimulus.

      Measurement validity: The three conditions are differentiated by clear scripted statements (sponsorship explicit, school report explicit, or ambiguous). Face validity is decent — the videos plausibly represent the three real-world scenarios the authors care about. The bigger concern is that the same actor and same body content appear in all three, which controls for confounds but reduces ecological validity (real influencer videos vary far more in style, polish, and host).

      Conceptual definition (page 273): "influencers" are "paid or otherwise compensated by marketers in exchange for favorable reviews." That's the underlying construct the sponsored condition is operationalizing.

    15. SEM

      Structural Equation Modeling (SEM) is a statistical technique in social sciences that allows us to examine complex relationships between multiple variables in a single, integrated model. It combines measurement models to estimate hidden or “latent” variables, like intelligence or happiness, using indicators we can observe, and structural models that show how these variables influence each other. SEM enables us to see not only direct effects but also indirect and mutual relationships, all at once. By doing this, SEM helps us test if our theoretical model accurately represents the data, offering a more complete view of the relationships between observed and unobserved factors.

    16. Journal of Children and Media

      Let's start with journal credibility. Journal of Children and Media (which we've seen before) is published by Routledge/Taylor & Francis, a well-established academic publisher. But established publishers don't ALWAYS publish credible journals. To verify credibility, you'd want to check the tools we discussed in Module 5: SCImago Journal Rank (SJR), Ulrichsweb (to confirm peer-review status), or the journal's own website. This journal is indexed in major databases and has an ISSN, which are good signs. On the quiz, you won't have internet access, but you should know WHICH tools you'd use and WHAT you'd look for.

    17. path analysis.

      Path analysis is a statistical tool in social sciences that allows us to map and measure both direct and indirect relationships between multiple variables to understand how they influence a particular outcome. In this approach, each variable is represented as a node, and arrows (or "paths") between nodes show how one variable affects another. Some variables have direct effects, with an arrow pointing straight to the outcome, while others work indirectly, influencing an intermediate variable first. By drawing and analyzing these paths, path analysis helps us test theories about how variables interact to produce an outcome, giving us a detailed view of the connections and the strength of each relationship in the model.
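      Here's a minimal Python sketch of the idea, using invented numbers rather than the study's data: a manipulated cue affects purchase intention both directly and indirectly through perceived intent. Estimating each path with ordinary regression, the direct and indirect pieces add up to the total effect:

```python
import numpy as np

# Invented path model: cue -> perceived intent -> purchase, plus a direct cue -> purchase path.
rng = np.random.default_rng(2)
n = 2000
cue = rng.integers(0, 2, size=n).astype(float)             # 0/1 condition
intent = 1.0 * cue + rng.normal(size=n)                    # path a
purchase = -0.5 * intent + 0.3 * cue + rng.normal(size=n)  # paths b and c' (direct)

def ols_slopes(predictors, outcome):
    """Least-squares slopes of the outcome on the predictor columns (plus intercept)."""
    X = np.column_stack([np.ones(len(outcome))] + predictors)
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coefs[1:]

a = ols_slopes([cue], intent)[0]
b, c_prime = ols_slopes([intent, cue], purchase)
indirect = a * b                         # effect flowing through the intermediate node
total = ols_slopes([cue], purchase)[0]   # effect ignoring the intermediate node

# For linear models estimated on the same sample, direct + indirect equals total exactly.
print(f"direct {c_prime:.2f} + indirect {indirect:.2f} = total {total:.2f}")
```

      That decomposition — total effect = direct path + (path a × path b) — is what a path diagram's arrows are quantifying.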

    18. moderator

      Moderation analysis helps you find out when or for whom a relationship between two things is stronger or weaker by introducing a third factor, called a moderator. This moderator changes how the two main things relate to each other.
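      A quick Python sketch of moderation, using the study-time/motivation example from item 5 with invented numbers: in a regression, the interaction term captures how the moderator changes the slope.

```python
import numpy as np

# Invented data: study time helps everyone a little, but much more when motivation is high.
rng = np.random.default_rng(1)
n = 500
study_time = rng.normal(size=n)
motivation = rng.integers(0, 2, size=n).astype(float)   # moderator: 0 = low, 1 = high
exam = 0.2 * study_time + 0.8 * study_time * motivation + rng.normal(size=n)

# Regression with an interaction term: exam ~ time + motivation + time*motivation
X = np.column_stack([np.ones(n), study_time, motivation, study_time * motivation])
coefs, *_ = np.linalg.lstsq(X, exam, rcond=None)
b_time, b_interaction = coefs[1], coefs[3]

# b_time is the study-time slope for low-motivation students (near 0.2);
# b_interaction is the EXTRA slope for high-motivation students (near 0.8).
print(f"slope (low motivation): {b_time:.2f}, extra slope when motivated: {b_interaction:.2f}")
```

      A sizable interaction coefficient is the statistical fingerprint of a moderator: the relationship's strength depends on the third variable.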

    19. r

      r is the correlation coefficient. We will cover this more in Module 9. This number tells us how strongly two variables are related and in which direction. It ranges from -1 to +1. A positive value (closer to +1) means that as one variable increases, the other tends to increase as well - this is called a positive correlation. A negative value (closer to -1) means that as one variable increases, the other tends to decrease - this is a negative correlation. When r is close to 0, there is little to no relationship between the two variables.
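      To make the -1 to +1 range concrete, here's a tiny Python function that computes r from its definition (the numbers are toy values, just for illustration):

```python
import math

def pearson_r(x, y):
    """Correlation coefficient r for two equal-length lists of numbers."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# A perfectly linear positive relationship gives r = +1;
# reversing one variable flips it to r = -1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

      Real data lands somewhere in between those extremes, which is what the r values in the article's tables report.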

    20. mediator

      See item 12 (mediator) above for the definition of mediation analysis.

    21. path analysis

      See item 17 (path analysis) above for the definition of path analysis.

    22. between-subjects

      A 'between-subjects' design in social science experiments refers to a research setup where different groups of participants are exposed to different conditions or treatments. Each participant experiences only one condition, so comparisons are made between groups. For example, imagine you're testing the effectiveness of two teaching methods on student performance. In a between-subjects design, you would have one group of students using Method A and a different group using Method B. You'd then compare the results between these two groups to see which method works better. This type of design is useful because it helps to avoid carryover effects, where the experience of one condition might influence performance in another if the same participants experienced both. However, it also requires more participants because each person only provides data for one condition.

      This study is a between-subjects design — different participants are in each condition. With 251 participants and 6 conditions, that means roughly 42 different kids per condition. Each kid sees ONE combination of sponsorship cue and training video. They don't see all 6.

      The alternative is a within-subjects design, where the same participants experience all (or several) conditions. If this study had been within-subjects, each kid would have watched all six versions of the video — sponsored with training, sponsored without training, non-sponsored with training, and so on — and the researchers would compare each kid's responses across the conditions they personally experienced.

      Why choose one over the other? Between-subjects advantages:

      Cleaner — each participant only sees one condition, so there's no carryover from one to another.
      Avoids order effects — a participant doesn't get tired or bored partway through.
      Avoids demand characteristics that come from comparing conditions — when a kid sees both sponsored and non-sponsored videos back-to-back, they might guess what the study is about.

      Between-subjects disadvantages:

      Requires more participants — you need a separate group for each condition.
      Pre-existing differences between groups can muddy the results, which is why you need random assignment.

      Within-subjects advantages:

      Statistically more efficient — each participant serves as their own comparison, which controls for individual differences automatically.
      Requires fewer total participants.

      Within-subjects disadvantages:

      Carryover effects — what you saw first might affect your reaction to what comes next.
      Order effects — fatigue, practice, or boredom can shape later responses.
      Demand characteristics — participants may guess the study's purpose by comparing conditions.

      For this study, between-subjects makes sense because: (1) the manipulations work better when kids encounter only one version (seeing all three sponsorship cues back-to-back would tip them off to the research question); (2) the kids are young and might struggle with multiple video viewings; (3) the authors had access to enough participants through Dynata to fill all 6 cells. When you read any experiment on the quiz, ask: did each participant experience all conditions (within-subjects), or just one (between-subjects)? You can tell by looking at how the conditions are described and how participants were assigned. Most experiments are between-subjects — it's the more common choice.
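      Here's a small Python sketch of between-subjects random assignment with balanced cells. The 3 × 2 = 6 conditions and the 251 participants mirror the study's design, but the procedure itself is a generic illustration, not Qualtrics' actual algorithm:

```python
import random
from collections import Counter

def assign_between_subjects(participant_ids, conditions, seed=42):
    """Give each participant exactly ONE condition (between-subjects).

    Shuffle, then deal round-robin so cell sizes stay as equal as possible.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: conditions[i % len(conditions)] for i, pid in enumerate(ids)}

# 3 sponsorship cues x 2 training conditions = 6 cells, as in the study.
conditions = [(cue, training)
              for cue in ("non-sponsored", "sponsored", "unaddressed")
              for training in ("training", "no training")]
assignment = assign_between_subjects(range(251), conditions)

cell_sizes = Counter(assignment.values())
print(sorted(cell_sizes.values()))  # [41, 42, 42, 42, 42, 42] -- roughly 42 per cell
```

      Because the shuffle is random, who lands in which cell is unpredictable, which is exactly what makes the groups roughly equivalent on pre-existing characteristics.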

    23. to

      When you go through this question, work through the seven threats systematically rather than guessing. Here's the list, with how each applies — or doesn't — to a study like this one:

      History — events outside the study that happen between measurements and influence the DV. Less of a concern in a short, single-session online experiment like this one, since there's no extended time window. But it could matter if, say, a major news story about influencer scams broke during the data collection period (May 27 – June 3, 2022).

      Maturation — natural changes in participants over time. Mostly a concern for studies that span weeks or months. Tweens didn't age meaningfully during this 15-minute experiment.

      Testing — taking a pretest changes posttest responses. Not relevant here because there's no pretest.

      Instrumentation — the measurement tool changes between measurements. Not an issue with a posttest-only design.

      Regression to the mean — extreme scorers drift toward the average on retesting. Not relevant without repeated measurement.

      Selection bias — pre-existing differences between groups. This is where random assignment does its work. The authors used random assignment, which addresses selection — but only if the randomization actually produced equivalent groups. (Did the authors check for baseline equivalence on demographics? Look for that.)

      Attrition — participants dropping out, especially if they drop out non-randomly. The authors note 13 children quit early and 3 declined to participate. Were dropouts equally distributed across conditions? If kids in the sponsored condition were more likely to quit than kids in the non-sponsored condition, that's a problem.

      Then there's the broader category of demand characteristics / Hawthorne effect / experimenter effects, which threaten internal validity through participant behavior rather than through the seven classic threats. Worth asking: did the kids behave naturally, or were they performing for the researcher / their parent (who was likely nearby during this online study)?

      Random assignment is a powerful tool — it addresses several of these threats at once — but it's not a magic eraser. Some threats survive randomization. Your job is to figure out which ones the authors handled well, and which ones they didn't address.

      Extra video on mitigating threats to internal validity: https://youtu.be/3GW13A4-eSQ. The Threats to Internal Validity infographic in Module 7 has the full list with examples of each.

    24. In

      In these quizzes, you're always going to be asked about the bigger research question and purpose/goal of the study – not the specific hypotheses or research questions tied to variables, but the larger research question and objective. Ways that you can think about this: why bother doing this study? How does conducting this study add to our understanding of the world? How do the researchers justify spending time and money studying this question? [You don’t have to answer these questions directly, but these are the sort of things that you should think about with regard to the bigger research question and goal.] This will be in the introduction section of the study. [Please note that sometimes there is a heading "Introduction" and sometimes there is not - but the text after the abstract is the introduction.]

    25. H5

      This is a bit more of a descriptive hypothesis, so there's no strong causal IV/DV framing. But roughly, H5: sponsorship cue (manipulated IV: unaddressed vs. non-sponsored) → perceived informative intent (DV). The hypothesis compares the unaddressed condition to the non-sponsored condition.

    26. Discussion

      Remember that the discussion section summarizes what the author(s) found, how it is contextualized within the existing body of research, and what contribution this study's findings make to this area of research. Then authors usually speculate about some bigger picture questions related to the research. Authors then discuss limitations and opportunities for future research.

    27. Materials and methods

      When you encounter a methods section, one thing you'll need to do — both for this study and on the in-class quiz — is figure out what TYPE of experiment it is. Don't just guess. Walk through the 5-step decision process from the Identifying Experiment Types infographic in Module 7. Step 1: How many things are you testing? Count the IVs and their levels.

      One IV → single-factor design
      Multiple IVs → factorial design (multiply the levels: a 2×2 has 4 conditions, a 3×2 has 6, a 2×3 has 6, a 3×3 has 9, etc.)

      For this study: two IVs (sponsorship cue with 3 levels, training video with 2 levels). 3 × 2 = 6 conditions. Factorial. Step 2: How are participants placed into groups?

      Random assignment → true experiment
      No random assignment, using existing groups (like classrooms) → quasi-experiment
      One group, no comparison → pre-experiment

      For this study: Qualtrics randomly assigned each child to one of the 6 conditions. True experiment. Step 3: When do you measure?

      Just once, after treatment → posttest only
      Before AND after treatment → pretest-posttest
      Many times over time → time series

      For this study: kids answered some demographic questions before viewing the video, but those don't count as a pretest because they're not measuring the same construct as the posttest. The DVs (perceived intent, recall, purchase intention) were measured only after the video. Posttest-only. Step 4: Where does it happen?

      Lab/controlled → lab experiment
      Real-world → field experiment
      Studying something that already happened in the world → natural experiment

      For this study: it's tricky because the standard categories don't fit cleanly. The study was conducted online, with kids on whatever device they had at home, possibly with parents nearby, possibly with siblings making noise. It's not a controlled lab environment, but it's not a traditional field experiment either. The authors flag this in their limitations: data collection was online, prohibiting verification of treatment fidelity. I'd call this an online or remote experiment — a hybrid that has some advantages for external validity (kids in real-world environments) but loses the controlled conditions of a lab. Step 5: Who experiences what?

      Same people experience all conditions → within-subjects
      Different people in each condition → between-subjects (most common)

      For this study: each kid saw only one combination of sponsorship cue and training. Between-subjects. Putting it together: This is a 3 × 2 factorial, true, posttest-only, online, between-subjects experiment. On the quiz, your answer should walk through each step and justify it with evidence from the article. The Experiment Types Overview infographic in Module 7 has the full taxonomy with definitions for each design type.
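Step 1's condition-count arithmetic can be sketched in a few lines of Python (a toy illustration; the function name is mine, not anything from the article or Module 7):

```python
# Number of cells in a factorial design = product of the levels of each IV.
from math import prod

def n_conditions(levels_per_iv):
    """Multiply the number of levels of each IV to get the cell count."""
    return prod(levels_per_iv)

# This study: sponsorship cue (3 levels) x training video (2 levels)
print(n_conditions([3, 2]))  # 6 conditions
print(n_conditions([2, 2]))  # a 2x2 has 4
print(n_conditions([3, 3]))  # a 3x3 has 9
```

Nothing deep here, just the multiplication rule from Step 1 made explicit: add an IV, and the number of cells (and the number of participants you need) multiplies rather than adds.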

    28. High Point University, High Point, NC, USA

      You may be familiar with these authors' universities, but there will be times when you're not - especially when reading research from other countries. When you can't easily evaluate an author's institutional affiliation because you're unfamiliar with the university, here are some strategies:

      * First, check whether the university has a dedicated research profile for the author - most universities worldwide maintain faculty pages in English, even if the institution's primary language is different. Look for their publication record, research focus, and degree. That is the most credible source.
      * Second, use Google Scholar to look up the author directly. Google Scholar works across languages and countries. You can see how many publications they have, how often they're cited, and whether they publish in journals you recognize as credible. An author with dozens of publications in peer-reviewed journals indexed in major databases is credible regardless of whether you've heard of their university.
      * Third, remember that credibility doesn't require name recognition. There are thousands of legitimate research universities worldwide that produce high-quality peer-reviewed scholarship. Not recognizing a university doesn't make it less credible - it just means you need to do a bit more digging. The same tools work everywhere: Google Scholar for the author, SCImago or Ulrich's for the journal, and the article's own reference list and citation count for the study itself.
      * What WOULD be a red flag: an author with no findable academic profile anywhere, a university that doesn't appear to exist or has no research output, or a journal that isn't indexed in any major database. Those are substantive credibility concerns - not just unfamiliarity.

    29. Discussion

      Future research. Often comes directly from limitations. The authors suggest: One: examine non-health products. Two: vary video stimuli to look at affective reactions, not just cognitive. Three: compare across formats — online influencer versus TV commercial. Four: study reactions to known influencers, where parasocial relationships exist. Each of those addresses a specific limitation. That's the typical pattern.

    30. Some scholars believe that youth with greater conceptual persuasion understanding

      RELATE TO PRIOR WORK: The authors invoke a theoretical explanation from prior work (Opree & Rozendaal, 2015; Rozendaal et al., 2009; Sagarin et al., 2002) to interpret their own finding. Kids who feel they "know" advertising might paradoxically be MORE susceptible because they don't deploy persuasion resistance — overconfidence in their own ad-detection skills makes them lower their guard. This explains why training had to do affective work, not just cognitive work, to be effective.

    31. contributors

      Author and journal credibility. At the end of the article, there's a "Notes on contributors" section. Sarah Vaala has a Ph.D. from the Annenberg School at the University of Pennsylvania — a top-tier communication program — and she's an Associate Professor at High Point University. Her research focuses on persuasive messages, youth, and family decisions about media. Directly in her area of expertise. She's also affiliated with the Joan Ganz Cooney Center at Sesame Workshop, a research institute focused on children's media. The other authors are early-career: Francesca Mauceri completed a master's degree and Olivia Connelly a bachelor's degree, both at High Point.

    32. Limitations

      Limitations: One: the sample isn't representative — recruited from an online panel, skews white, more educated, higher income. Two: 91.6% of the sample uses YouTube, so results may not apply to non-users. Three: data collection was online, so treatment fidelity wasn't verified. Four: only one set of stimuli was used. Five: the product was a health item, which differs from the toys, cosmetics, or food that dominate real youth-targeted unboxing. Six: ceiling effects — mean scores were high across measures. Seven: social desirability bias — kids may have answered the way they thought researchers wanted. Eight: dental health information may have been pre-known by older tweens.

    33. Wald tes

      When researchers run a factorial design (like this 3×2 study), they don't just check whether each IV has its own effect — they also check whether the IVs interact with each other. An interaction effect tells you that the impact of one IV depends on the level of the other IV. Let's make this concrete. Imagine an experiment testing whether caffeine improves test performance, with two IVs: caffeine (caffeine vs. no caffeine) and time of day (morning vs. evening). Three different patterns could emerge:

      Main effect of caffeine only: caffeine helps everyone equally, no matter what time of day. The effect of caffeine doesn't depend on time of day.
      Main effect of time only: people perform better in the morning regardless of caffeine. The effect of time doesn't depend on caffeine.
      Interaction: caffeine helps in the morning but doesn't help in the evening. The effect of caffeine depends on the time of day. That's an interaction.

      Interactions are often the most interesting findings in a factorial study. They show that the world is more complex than a single IV's effect — that effects depend on context. In this study, the authors are looking for an interaction between sponsorship cue and training video. Their hypothesis is essentially: training video changes how kids respond to sponsorship cues. They use Wald tests to check whether the relationship between perceived intent and the outcome is different for participants who saw the training video versus those who didn't. When the Wald test is significant (p < .05), there's an interaction — the effect of one IV depends on the other. The actual findings include several significant interactions: training moderates the relationship between perceived informative intent and purchase intention, and training moderates the relationship between perceived informative intent and health recall. In both cases, training changes how perception predicts the outcome. When you read an experimental study on the quiz, look for:

      Main effects — does each IV have its own effect on the DV?
      Interaction effects — do the IVs jointly influence the DV in a way that's more than the sum of their separate effects?

      A significant interaction is often more theoretically interesting than the main effects, because it reveals a contingency — when the effect happens, when it doesn't. Extra video on factorial designs and interaction effects: https://www.youtube.com/watch?v=2wZAAQ6OdFw
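To see the interaction idea in numbers, here's a toy Python sketch of the caffeine example. All cell means are invented purely for illustration (this is not data from any study):

```python
# Hypothetical cell means for the caffeine x time-of-day example.
# An interaction shows up when the effect of one IV differs across
# levels of the other IV.
means = {
    ("caffeine", "morning"): 85,
    ("caffeine", "evening"): 70,
    ("none",     "morning"): 75,
    ("none",     "evening"): 70,
}

# Simple effect of caffeine at each time of day
effect_morning = means[("caffeine", "morning")] - means[("none", "morning")]
effect_evening = means[("caffeine", "evening")] - means[("none", "evening")]

# Interaction contrast: does the caffeine effect depend on time of day?
# Zero would mean "no interaction"; nonzero means the effect is contingent.
interaction = effect_morning - effect_evening
print(effect_morning, effect_evening, interaction)  # 10 0 10
```

Here caffeine adds 10 points in the morning and 0 in the evening, so the interaction contrast is nonzero: the effect of one IV depends on the level of the other, which is exactly what a significant Wald test signals in this study's analysis.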

    34. Both videos were streamed within Qualtrics from a private YouTube channel, in order to boost children’s beliefs that these were real videos on the YouTube platform.

      This is ALMOST like a cover story, or at least does SOMETHING to reduce demand characteristics.

    35. H1

      Pause here. Before you analyze this hypothesis, you need to understand a concept that the Module 7 lecture didn't go deep on: in some experiments, an independent variable is measured rather than manipulated. This study is one of those cases — and it changes everything about how we evaluate causality.

      The Module 7 lecture defines an IV as "the thing that is manipulated or somehow introduced into the setting by the experimenter." That's the standard case, and it's true for the sponsorship cue and training video in this study — those are manipulated IVs. The researchers control them by random assignment.

      But this study also has IVs that are measured, not manipulated. Perceived selling intent and perceived informative intent are measured. The researchers didn't control these — they didn't make some kids perceive selling intent and other kids not. They asked all kids how much selling intent they perceived, and recorded what each kid reported.

      Why does a researcher use a measured IV in an experiment? Sometimes the construct can't be ethically or practically manipulated. You can't randomly assign kids to "perceive a video as sponsored" — perception is something that happens in their head based on what they see. The researchers can manipulate the cue (sponsored vs. non-sponsored video) and measure the resulting perception, but they can't directly control the perception itself.

      Why does this matter? Because random assignment is what gives experiments their causal power. When an IV is randomly assigned, you can rule out confounds — pre-existing differences between groups should be roughly equal. When an IV is measured, no random assignment happened, so confounds remain a real concern. Practical implications:

      For manipulated IVs (sponsorship cue, training video): random assignment supports strong claims about temporal order and non-spuriousness. Causality is well-supported by the design.
      For measured IVs (perceived selling intent, perceived informative intent): no random assignment. Temporal order is fuzzy because perception and outcomes are measured close together. Non-spuriousness is weak because confounds (advertising literacy, prior product interest, working memory) could explain both perception and outcome.

      So when you analyze each hypothesis in this study, the FIRST question to ask is: is the IV manipulated or measured? That answer shapes how strongly you can claim causality. Track the variables as you read:

      Manipulated: sponsorship cue (3 levels), training video (2 levels)
      Measured: perceived selling intent, perceived informative intent

      Any hypothesis with a measured IV will have weaker causal claims than a hypothesis with a manipulated IV — even though the overall study uses random assignment to conditions.

    36. Procedures

      Since this study DOESN'T have manipulation checks, let me show you what a manipulation check actually looks like in the wild, so you can recognize one if you see it on the quiz.

      Imagine a study testing whether watching a sad video makes people more likely to donate to charity. The IV is the video (sad vs. neutral), and the DV is donation behavior. A manipulation check would ask, right after the video: "How would you describe your current mood?" If participants in the sad video condition rated their mood as significantly more sad than participants in the neutral condition, the manipulation worked. If both groups reported similar moods, the manipulation failed — and any null result on donation behavior couldn't tell us whether sadness doesn't affect donations or whether the video just didn't make people sad.

      Or imagine a study where some participants watch a video featuring a "familiar" character and others watch one featuring an "unfamiliar" character. A real manipulation check would ask each participant if they recognized the character or could name them — if kids in the "familiar" condition all named the character correctly and kids in the "unfamiliar" condition couldn't, the familiarity manipulation worked.

      The pattern is always: a quick, direct question about the manipulation itself, asked separately from the dependent variable. It's not the same thing as the DV. It's a check on whether the IV did what it was supposed to do. When you encounter a different study on the quiz, look for manipulation checks in the methods or results sections. They might appear as a sentence like "Participants in Condition A reported significantly higher [thing manipulated] than participants in Condition B, t(X) = Y, p < .05." That's the manipulation working as intended.

    37. of interest is whether tweens would perceive similarly high rates of informative intent among more well-known child influencers.

      FUTURE RESEARCH: Real influencer testing. The unknown actor used in this study may have produced different reactions than a real influencer kids actually follow. Future research should test reactions to known influencers, where parasocial relationships are already established.

    38. Others have found that young viewers perceive peer hosts and influencers as similar to themselves, heightening their perceptions of host authenticity and their trust in the content

      RELATE TO PRIOR WORK: The authors connect their finding (that informative intent was perceived universally, even in sponsored conditions) to research by Naderer et al. (2021) on parasocial perceptions. The unknown peer-aged actor in this study may have triggered the same authenticity heuristic real influencers do — kids perceive sameness as honesty. This both validates their finding and points to where future research should go.

    39. information about plaque and tooth enamel may be common knowledge among older tweens

      LIMITATION: Pre-existing knowledge. Older participants may have already known the dental health information from school, which would inflate health recall scores independent of what they learned from the video. The recall measure can't distinguish "learned from this video" from "already knew."

    40. Some prior studies have also found positive relationships between understanding of selling and persuasive intent and adolescents’ desire for advertised products

      RELATE TO PRIOR RESEARCH: The authors connect their finding (that perceived selling intent positively predicts purchase intention without training) to a body of prior research showing the same surprising pattern in adolescents (Harms et al., 2022; Opree & Rozendaal, 2015; Vanwesenbeeck et al., 2016a, 2016b). This is counterintuitive — you'd expect detecting an ad to make kids LESS interested, but the literature consistently shows the opposite for some kids. Their finding fits this pattern.

    41. Further research should use varying video stimuli to examine additional components of tweens’ conceptual and affective reactions to sponsored content online and to determine effective ways to alert them to sponsored content

      FUTURE RESEARCH: Affective reactions. The authors note their study focused mostly on cognitive reactions (recall, perceived intent). Future research should also measure affective reactions — emotional responses, trust, liking — which might mediate or moderate purchase intention in ways the current study didn't capture.

    42. Participants may have responded in socially desirable ways

      LIMITATION: Social desirability bias. Kids might have answered the way they thought the researchers (or their parents nearby) wanted them to answer — saying they'd ask for the product when they wouldn't, or saying they perceived selling intent because that seems like the "smart" answer.

    43. future research should confirm these findings across additional classes of products

      FUTURE RESEARCH: Product category replication. The authors call for replication using toys, cosmetics, food, and other youth-targeted product categories rather than just health products. Different product types might trigger different schemas and different levels of skepticism.

    44. Future research should examine similar content across different formats (e.g., influencer-style online ad vs. child actor in a TV commercial) to examine the possibility and nature of format-specific advertising schemas

      FUTURE RESEARCH: Cross-format comparison. The authors suggest examining the same kind of content across different formats — for example, an unboxing video vs. a child actor in a traditional TV commercial — to see whether tweens have different mental schemas for different ad formats.

    45. The health focus of the target product also differs from many youth-focused unboxing videos, which often feature toy, cosmetic, or food products

      LIMITATION: Product type. The water flosser is a health product, while most real unboxing content features toys, cosmetics, or food. Tweens might respond differently — possibly with less skepticism — to the product categories they actually encounter on YouTube.

    46. only one set of stimuli was used to test hypotheses

      LIMITATION: Stimulus generalizability. The researchers tested ONE specific set of videos. Findings might not extend to videos with different hosts, different production styles, different products, or different platforms. Replication with varied stimuli is needed.

    47. mean scores were fairly high across key variables, suggesting a potential ceiling effect.

      LIMITATION: Ceiling effects. Most participants scored near the top of the scales for several measures, which limits how much variation the analysis can detect. Real differences might exist that the data can't reveal because everyone is clustered at the high end.

    48. The sample of parent-child dyads was recruited from an online participant panel and may not be fully representative of US tweens.

      LIMITATION: Sample representativeness. The authors recruited from Dynata's online panel, which skews toward more-educated, higher-income, mostly White households. Findings may not generalize to U.S. tweens from less represented demographic groups.

    49. Recruitment and data collection were also conducted online, prohibiting verification of treatment fidelity.

      LIMITATION: Treatment fidelity. The researchers couldn't directly verify that participants actually watched the videos as intended. Kids could have skipped sections, been distracted, or had a parent help them — and there's no way to tell from online data collection.

    50. Most of the tween sample (91.6%) reported that they use YouTube, so relationships may differ among tweens less familiar with YouTube

      LIMITATION: YouTube familiarity. Almost all participants are existing YouTube users, so they're already familiar with unboxing-style content. Tweens who don't use YouTube — possibly the most vulnerable population because they're newer to this content format — aren't represented.

    51. b

      Tables 4 and 5 are dense with statistics. Let me break down what you're looking at, because you'll see versions of these on the quiz. The b value (unstandardized regression coefficient). This tells you how much the dependent variable changes for each one-unit increase in the independent variable. So if b = 0.59 for perceived informative intent → purchase intention, that means: for each 1-point increase in perceived informative intent, purchase intention increases by 0.59 points (on its own scale). The sign of b matters too:

      Positive b: the IV and DV move in the same direction — when one goes up, the other goes up.
      Negative b: the IV and DV move in opposite directions — when one goes up, the other goes down.

      So in this study, b = 0.59 (positive) means more perceived informative intent leads to more purchase intention. If you see b = −0.25, it would mean more of one variable leads to less of the other. The confidence interval [CI b] in brackets after the b value. This is the range of plausible values for b. If the confidence interval includes zero, the relationship isn't statistically significant (you can't rule out the possibility that the true effect is zero). If the interval doesn't include zero, the relationship is statistically significant. Example from the article: b = 0.59 [0.38, 0.79] means: the best estimate for the effect is 0.59, and the true effect is plausibly anywhere between 0.38 and 0.79. Since this interval doesn't include zero, the effect is statistically significant. Compare to b = 0.10 [−0.03, 0.22] — this interval DOES include zero, so the effect is NOT statistically significant. Effect sizes more broadly. A b value tells you the size of an effect, but other studies report different metrics for effect size:

      Cohen's d — used for comparing two means; .2 is small, .5 is medium, .8 is large.
      Pearson's r — correlation coefficient; .1 is small, .3 is medium, .5 is large.
      Partial eta squared (η²p) — used in ANOVA; .01 is small, .06 is medium, .14 is large. You'll see this metric if a different study uses ANOVA instead of regression.
      Odds ratios, R², and others — different statistical methods produce different effect size metrics.

      Why effect sizes matter beyond p-values. A statistically significant result with a tiny effect size might be technically real but practically meaningless — especially with large samples, where even trivially small effects can reach statistical significance. Always look at effect size alongside p-values to assess whether a finding is practically important.
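To make the confidence-interval rule concrete, here's a tiny Python sketch (the function is my own shorthand; the b values and intervals are the ones quoted from the article above):

```python
# The CI decision rule: if the interval around b excludes zero,
# the relationship is statistically significant.
def significant(ci_low, ci_high):
    """True if the confidence interval does not contain zero."""
    return not (ci_low <= 0 <= ci_high)

# b = 0.59 [0.38, 0.79] -> interval excludes zero -> significant
print(significant(0.38, 0.79))   # True

# b = 0.10 [-0.03, 0.22] -> interval includes zero -> not significant
print(significant(-0.03, 0.22))  # False
```

Note this mirrors the p < .05 decision but adds information the p-value alone doesn't give you: the interval's width tells you how precisely the effect is estimated, and its location tells you the plausible size of the effect.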

    52. the activation of an informative/educational schema likely directs the viewer’s attention to educational information unrelated to the product’s features.

      Perceived informative intent (linked to Variable 6). From schema theory: informative intent is recognized when content is identified as objective/educational, activating an educational schema that directs attention toward learning rather than consumer response.

    53. The PKM contends that individuals will engage in cognitive coping strategies when they identify selling intent within a given message (i.e., activate an advertising schema).

      Perceived selling intent (linked to Variable 7). From the Persuasion Knowledge Model: selling intent is recognized when consumers identify that a message is trying to persuade them to buy something, prompting cognitive coping responses.

    54. The ability to understand advertising (including its forms, goals/biases, and production techniques) has been called “advertising literacy.” Advertising literacy skills help children filter the information they see in advertisements and view it through a critical lens to determine the intent behind it

      Training video / advertising literacy (linked to Variable 2: Training video / no training video). The authors define advertising literacy as the cognitive skills children use to filter and critically evaluate the intent behind advertising content. The training video manipulation is meant to teach or activate exactly these skills.

    55. They receive points for participating in research, which can later be converted to money, travel miles, or prizes from Dynata.

      Compensation. Participants are paid (in points convertible to money or prizes) for participating. Compensation is ethically standard but worth noting because it can create subtle pressure to participate — especially for people who depend on panel earnings as supplementary income.

    56. Three children indicated they did not want to be in the study, and 13 quit the survey early.

      Right to refuse and right to withdraw. Three children declined to participate even after their parents had consented — their decisions were respected. Thirteen others quit partway through and weren't forced to continue. Both reflect the ethical principle that participation must be voluntary and ongoing, not just at the start.

    57. well-compensated “influencers,” paid or otherwise compensated by marketers in exchange for favorable reviews

      Sponsored video / unboxing host (linked to Variable 1: Non-sponsored / Sponsored / Unaddressed sponsored conditions). The authors define an "influencer" — and by extension a sponsored video — as content where the host has a paid or otherwise compensated relationship with the manufacturer.

    58. The Institutional Review Board at High Point University approved this study.

      The study was reviewed and approved by an Institutional Review Board before data collection — required for any university-affiliated research with human participants. Research involving minors typically receives heightened IRB scrutiny because children are a "special population" with reduced capacity for consent.

    59. native advertising

      Key term to understand for this study. Native advertising is advertising that's designed to look like the regular content around it, so consumers don't immediately recognize it as an ad. Examples in different formats:

      In a magazine: a "sponsored article" that reads like editorial content but is actually paid promotion.
      On Instagram: a post by an influencer wearing a brand's clothes that looks like personal content but is sponsored.
      On YouTube: an unboxing video where the host reviews a product they were paid to feature, presented in their normal vlog style.

      The point of native advertising is to evade the consumer's "ad detection" defenses. When you see a TV commercial, you know it's an ad — there's a clear break, a jingle, a brand logo, a 30-second window. Your brain switches into "this is trying to sell me something" mode. With native advertising, those visual and structural cues are missing, so your defenses don't activate.

      This is exactly what the FTC sponsorship disclosure rules are trying to address. By forcing a clear "sponsored" label on native ads, the rules give consumers the cue they need to recognize the content as advertising and engage their critical processing.

      Why does this matter for kids? Their advertising literacy is still developing. Even when kids can spot traditional ads on TV, they often DON'T spot native ads embedded in entertainment content like unboxing videos. That's the puzzle this study is trying to address.

    60. Advertising schema and the Persuasion Knowledge Model

      This study uses TWO theoretical frameworks at once: Schema theory and the Persuasion Knowledge Model (PKM). That's worth understanding because theoretical frameworks do a lot of work in research articles, and being able to spot them is a meta-skill for reading any empirical paper. What theoretical frameworks do:

      They explain WHY the researchers expect specific relationships to exist. Without a framework, hypotheses are just guesses; with one, they're predictions grounded in existing theory.
      They organize the study's structure (which variables to include, what to measure, what to compare).
      They give meaning to the results. A finding "matters" because it supports, contradicts, or refines an existing theoretical understanding.

      What's happening in THIS study:

      Schema theory says people organize knowledge into mental schemas that direct attention and processing. The authors use schema theory to predict that kids who detect a SELLING intent will engage their advertising schema (focusing on product details), while kids who detect an INFORMATIVE intent will engage their educational schema (focusing on health info).
      PKM says when consumers detect persuasive intent, they engage cognitive coping strategies (often becoming more critical, more resistant). The authors use PKM to predict that kids who detect selling intent might develop defensive processing.

      Notice these two frameworks make slightly different predictions. Schema theory predicts MORE recall of relevant info; PKM predicts MORE critical processing. The authors test both possibilities, which is part of what makes RQ1 interesting (they're asking, essentially, "which framework wins out?"). When you read any empirical study, look for the theoretical framework(s) early on. They tell you what the authors expect and why. Findings that match the framework's predictions are theoretical wins; findings that don't push the framework to evolve.

    61. the informed consent information

      Informed consent for parents. Parents read the study description and explicitly consented before any data collection began. This is the standard ethical baseline for adult participation.

    62. Informed assent language explained the study in child-appropriate language and asked whether the child wished to participate.

      This sentence reflects an important ethical distinction in research with minors: parents give consent, children give assent. Consent is legally meaningful authorization that the participant fully understands what they're agreeing to. People under 18 are not considered legally capable of giving consent on their own behalf, so a parent or guardian gives it for them. Assent is the child's own agreement to participate, given in age-appropriate language they can understand. It's not legally binding the way consent is, but research ethics requires it because children deserve agency over their own participation. A parent's consent doesn't override a child's refusal — if the child says no, even after the parent says yes, the child doesn't participate. This study did this correctly:

      Parent reads the consent form and consents on the child's behalf. Parent brings child to the device. Child reads (or hears) the assent language in age-appropriate terms. Child decides for themselves whether to participate.

      The authors note three children declined to participate at the assent stage — their refusals were honored. That's research ethics in action. Without an assent process, those kids' parents could have effectively forced them into the study. This is one of the implicit ethical considerations you should be taking note of. The authors don't shout about it, but the practice is significant.

    63. Procedures

      Important note about this study: it does NOT include manipulation checks or attention checks. These are different things, and the absence of both is worth noticing. A manipulation check verifies that the IV manipulation actually worked — that participants noticed the manipulation and responded to it as the researchers intended. For this study, manipulation checks would have asked things like:

      Did the kid notice the sponsorship disclosure in the sponsored condition? Did kids correctly understand that the school report video was non-commercial? Did kids actually absorb the content of the training video?

      Without manipulation checks, we don't know whether the manipulations worked. If results are null (like H4, H6b), is that because there's truly no effect, or because the manipulation never landed? An attention check verifies that participants were paying attention generally — not that the manipulation worked, but that they were focused enough on the study for any of their responses to be meaningful. Common attention checks include items like "select 'strongly disagree' to show you're reading carefully" embedded in surveys. The authors actually flag the absence of attention checks in their limitations: they note "Recruitment and data collection were also conducted online, prohibiting verification of treatment fidelity." Treatment fidelity = whether the participants actually received the treatment as intended. Be careful not to confuse perceived selling intent and perceived informative intent with manipulation checks. They look similar (asking kids what they thought of the video), but in this study they're treated as mediators and DVs, not as checks. A real manipulation check would ask something like "Was this video sponsored, yes or no?" — a direct test of whether the manipulation registered. Extra video on manipulation checks: https://youtu.be/0mr2K9Pji7k

    64. One survey item assessed children’s water flosser purchase intention

      Worth pausing on this. Purchase intention — arguably the most important DV in the study, since it's about real-world consumer behavior — was measured with a SINGLE item: "Will you ask your parents to buy a [company] water flosser?" answered on a 5-point scale. Single-item measurement creates several measurement issues: Reliability is unmeasurable. You can't compute Cronbach's alpha with one item — there's nothing to compare it to internally. So we have no statistical evidence that this measure is reliable. Validity rests entirely on this one question. If the question doesn't quite capture what "purchase intention" really means — maybe kids interpret "ask my parents" as different from "want this product" — there's no second item to triangulate against. Variability is constrained. With only 5 response options and 251 kids, you'll get clusters at certain values, which limits your ability to detect subtle effects. Researchers sometimes use single-item measures because the construct is simple, the survey needs to be short (especially with kids), or face validity is high. But it's a real measurement weakness, and worth flagging when you assess the study's quality. Compare this to perceived informative intent (3 items, α = .82) and perceived selling intent (2 items, r = .53). Those are more reliable, even though selling intent is also fairly thin.
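      To see concretely why reliability is unmeasurable with one item, here's a minimal sketch of Cronbach's alpha computed from its formula, (k/(k-1)) × (1 − sum of item variances / variance of the total score). The response data below are made up for illustration (they are not the study's data); notice that with k = 1 the k−1 denominator makes alpha undefined:

      ```python
      # Cronbach's alpha for a k-item scale:
      #   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
      def cronbach_alpha(scores):
          k = len(scores[0])                  # number of items
          if k < 2:
              raise ValueError("alpha is undefined for a single item")
          def var(xs):                        # population variance
              m = sum(xs) / len(xs)
              return sum((x - m) ** 2 for x in xs) / len(xs)
          item_vars = [var([row[i] for row in scores]) for i in range(k)]
          total_var = var([sum(row) for row in scores])
          return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

      # Hypothetical responses: rows = respondents, columns = items on a 5-point scale.
      responses = [
          [4, 5, 4],
          [2, 2, 3],
          [5, 5, 5],
          [3, 4, 3],
          [1, 2, 2],
      ]
      print(round(cronbach_alpha(responses), 2))  # 0.96
      ```

      With a single-item measure like the purchase-intention question, there is only one column, so there are no item variances to compare and no alpha to report.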

    65. Figure 1. Conceptual model of effects.

      Take a minute with this figure — once you can read these diagrams, you can identify the variable structure of any study in seconds. Conceptual models like this one have a standard grammar:

      Boxes are constructs (variables, conditions, outcomes). Arrows show predicted relationships, usually IV → DV (cause → effect). Position shows the role: leftmost are IVs/conditions, rightmost are outcomes/DVs, anything in the middle is a mediator (something on the path between IV and DV). A box with an arrow pointing AT another arrow (like Pre-Roll Training pointing at the path between Perceived Intention and Outcomes) is a moderator. It changes the strength or direction of a relationship without sitting on the causal path itself.

      Read this specific figure: experimental conditions (sponsored, non-sponsored, unaddressed) → perceived intention (informative or selling) → outcomes (recall, purchase intention). And training video moderates the second arrow. So the variable structure of the whole study is:

      IV (manipulated): sponsorship cue (3 levels) Mediators (measured): perceived informative intent, perceived selling intent Moderator (manipulated): training video (2 levels) DVs (measured): product recall, health recall, purchase intention

      When you encounter a complex study like this one, look for the conceptual model first. It tells you what role each variable plays before you wade into the methods section.

    66. RQ2b

      RQ2b (perceived intent × training → purchase intention): Association: Supported. Perceived informative intent predicted purchase intention in BOTH conditions, but more strongly with training (b = 0.95) than without (b = 0.59, Wald test p < .05). Perceived selling intent predicted purchase intention only WITHOUT training (b = 0.29) — training eliminated this relationship.

      Temporal order: Partial. Same pattern as RQ2a — moderator's role is well-ordered (training came first), but the IV (perceived intent) is measured, blurring time order for the IV → DV link.

      Non-spuriousness: Partial. Training is randomly assigned (supports moderator inference), but perceived intent isn't (leaves the door open for confounds like product interest, advertising familiarity, general skepticism).

      Verdict: The moderating role of training is causally supported. The underlying IV → DV relationship between perceived intent and purchase intention is correlational.

    67. H1

      Pause here. This hypothesis is about a relationship between two variables — perceived selling intent (IV) and product recall (DV) — and it's worth understanding upfront that perceived selling intent is a measured variable, not a manipulated one. What's the difference, and why does it matter? A manipulated variable is one the researcher controls directly. They assign participants to different levels (training video vs. no training video, sponsored vs. non-sponsored) and watch what happens. Random assignment makes manipulated variables powerful for causal inference. A measured variable is one the researcher just observes — they survey participants and record what they say. Perceived selling intent is measured: kids watched a video, then answered survey questions about whether they thought the host was being paid. The researchers didn't make some kids perceive selling intent and other kids not perceive it. Why this matters: when an IV is measured (not manipulated), causality claims get much weaker. You can't establish strong temporal order (because perception and outcome are measured close together). You can't rule out confounds via random assignment (because you didn't assign anyone to anything). Pre-existing differences between high-perceivers and low-perceivers might explain BOTH their perception and their recall. When you read the hypotheses and results, keep track of which variables are manipulated and which are measured. The training video and sponsorship cue are manipulated. Perceived intent (both kinds) is measured. That distinction will shape every causality answer you give.

    68. RQ2a

      RQ2a (perceived intent × training → recall): Association: Supported. Significant moderation by training video on the relationship between perceived informative intent and health recall (Wald test p < .05). With training, perceived informative intent predicted health recall (b = 0.21); without training, it didn't.

      Temporal order: Moderate. The MODERATOR (training) was manipulated and came first by design. But the IV (perceived intent) was measured, not manipulated, and assessed alongside the DV. So time order is fuzzy for the IV → DV link, even though it's clean for the moderator's role.

      Non-spuriousness: Partial. Training was randomly assigned, supporting strong inference about the moderator's role. But perceived intent wasn't, so third-variable explanations remain possible for the core IV → DV link.

      Verdict: The moderating role of training is causally supported. The underlying IV → DV relationship is correlational, not causal.

    69. H4

      H4 (non-sponsored → perceived informative intent): Association: NOT supported. Tweens perceived similar informative intent for sponsored and non-sponsored videos, regardless of training condition. [FYI - even if association is unsupported, you still need to go through the rest of the steps!]

      Temporal order: Strong. Video condition was manipulated and presented before measurement.

      Non-spuriousness: Strong. Random assignment controls for pre-existing differences.

      Verdict: Causality cannot be claimed because there's no association to begin with. Despite a strong experimental design, the predicted relationship simply didn't appear in the data. Good design + no effect = no causal claim.

    70. H6b

      H6b (training video → detection of informative intent in non-sponsored content): Association: NOT supported. No significant difference in perceived informative intent between training and no-training groups, in any condition. [FYI - even if it was unsupported, still walk through the rest of the steps.]

      Temporal order: Strong (same reasoning as H6a — training came first by design).

      Non-spuriousness: Strong (random assignment).

      Verdict: Causality cannot be claimed because there's no association. Despite a strong experimental design, the predicted relationship didn't appear.

    71. H6a

      H6a (training video → detection of selling intent in sponsored content): Association: Partial. The training × sponsorship moderation wasn't significant overall, but within the sponsored condition specifically, training did increase perceived selling intent (b = 0.50, p < .01).

      Temporal order: Strong. Training video came first, sponsored video second, perception measure third.

      Non-spuriousness: Strong. Random assignment to both training/no-training and to sponsorship condition. Standardized stimulus exposure.

      Verdict: Causality conditionally supported within the sponsored condition. Strong experimental design supports the inference.

    72. a

      RQ2a: Perceived video intent — both informative and selling (measured IV) → recall of information, both product and health (DV), with the training video (manipulated) moderating that IV → DV path. Note the training video doesn't sit on the causal path itself; it changes the strength of the relationship.

    73. H2

      H2 (perceived informative intent → educational recall): Association: Partial — perceived informative intent predicted greater health recall, but ONLY among tweens who viewed the training video (b = 0.21, p < .05). Without training, no significant relationship.

      Temporal order: Moderate. Perception was measured before recall, so sequence exists. But the IV is measured (not manipulated), which weakens the strength of any temporal claim.

      Non-spuriousness: Weak. Same problem as H1 — perceived informative intent wasn't randomly assigned, so confounds (motivation, prior interest in health, engagement with the video) could shape both perception and recall.

      Verdict: Causality only weakly supported, and only conditional on training. The schema activation mechanism remains theoretical — it's not directly tested.

    74. H3

      H3 (sponsored video w/ disclosure → perceived selling intent): Association: Partial — tweens perceived greater selling intent in the sponsored vs. non-sponsored condition, but ONLY when they had first viewed the training video (b = 0.50, p < .01). Without training, no significant difference.

      Temporal order: Strong. Sponsorship was a manipulated IV, randomly assigned, presented BEFORE perceived selling intent was measured.

      Non-spuriousness: Strong. Random assignment to sponsorship condition controls for pre-existing differences. All participants saw the same video format with only the sponsorship disclosure varying.

      Verdict: Causality conditionally supported — the sponsorship disclosure DOES affect perceived selling intent, but only in the presence of advertising literacy training.

    75. H1

      H1 (perceived selling intent → product recall): Association: Partial — there IS a statistical relationship, but it runs OPPOSITE to predicted. Higher perceived selling intent was associated with LOWER product recall, especially after training (p < .05). This contradicts H1 but is consistent with the Persuasion Knowledge Model (defensive processing).

      Temporal order: Weak. Perceived selling intent was MEASURED, not manipulated. Both perception and recall are responses to the same video, so we can't strictly establish that perception came first in time.

      Non-spuriousness: Weak. Because the core IV wasn't randomly assigned, third variables (advertising literacy, skepticism, working memory capacity) could explain both perception and recall.

      Verdict: Causality NOT established. The key IV is measured, not manipulated, and the association ran opposite to predicted.

    76. Measures

      A few tips for working through these variable questions: On level of measurement: Don't just assume. Read the actual response options. For example, an income question that asks people to pick from ranges (under $50K, $50–99K, etc.) is ordinal, not interval/ratio, even though income itself is a continuous construct. Watch for this — students lose points on it constantly. On variable role (IV / DV / mediator / moderator / control): A variable can play DIFFERENT roles in different hypotheses. Don't pick one and call it good. Walk through each hypothesis (H1, H2, H3...) and ask: "in this specific hypothesis, what role does this variable play?" List them all, like "DV in H1, RQ2a; IV in H6b." On measurement validity/reliability: Look for Cronbach's alpha (α) for multi-item scales, and look for any mention of validation studies or pilot testing. If the authors are silent on this, note that — silence isn't always a problem (some scales are well-established), but it's worth flagging. On conceptual definition: This is the abstract, theoretical definition of the construct — usually in the introduction or literature review. It's NOT the operationalization (the specific items used to measure it). Look for a sentence that defines what the construct is, not how it's measured. Sometimes you'll need to read carefully — conceptual definitions can be embedded in the middle of paragraphs, not always called out explicitly.

    77. H2

      Heads up before you tackle this one: the moderator in this hypothesis is theoretical, not measured. The authors don't have a direct survey item for "schema activation" — it's an internal cognitive process that's inferred from the pattern of results, not assessed directly. So when you identify the moderator here, you're naming the theoretical mechanism the authors propose: schema activation (advertising schema for H1, educational schema for H2). The hypothesis says perceiving selling/informative intent should trigger the schema, which then directs attention to certain kinds of information, leading to better recall of that information. In the path analysis the authors actually run, schema activation isn't a separate variable. The study tests the IV → DV link directly and uses schema theory to explain WHY that link exists. So the moderator is theoretical, not statistical. This is a good lesson in how published research works. Authors often name theoretical moderators or mediators in their hypotheses but only test some of them directly with measured variables. Read carefully to figure out which constructs are measured vs. which are described conceptually as the "reason why."

    78. In

      Students sometimes confuse the "bigger research question" with the specific hypotheses. They're not the same. The specific hypotheses are narrow, testable predictions: "tweens will perceive greater selling intent in a sponsored unboxing video compared to a non-sponsored video." That's H3. It's specific. It involves named variables. It can be supported or unsupported by data. The bigger research question is the larger problem the study is trying to address — the reason any of those specific hypotheses are worth testing in the first place. It's the answer to "why bother?" It's what you'd tell your grandma if she asked what the study is about. For this study, the bigger picture might be something like: Kids spend tons of time watching online videos, and a lot of that content is secretly trying to sell them stuff. We know how kids handle traditional ads, but unboxing videos and influencer content are a newer beast — they don't look like ads, even when they are. Tweens are at a developmental moment where they're capable of detecting commercial intent but might not do it automatically. So can we figure out (1) whether kids perceive sponsored content differently than objective content, and (2) whether brief media literacy training helps them apply skills they already have? Notice what that does: it zooms out. It explains what's at stake (kids being manipulated by stealth advertising). It explains why this age group matters (developmental window). It explains why this study contributes (existing research focuses on traditional ads, not native digital content). And it doesn't get into the specific 6 hypotheses. A good "bigger research question" answer should be a paragraph or so. Write it in your own words, not paraphrasing the abstract. Imagine explaining the point of the study to someone who's never read it.

    79. b

      Once you've got the conceptual difference between mediators and moderators down, the next skill is spotting them in published studies — which is harder than it sounds, because authors don't always label their variables that way explicitly. Here are the textual cues to watch for: Moderator language: "the relationship between X and Y depends on Z," "Z moderated the effect of X on Y," "the effect of X on Y was stronger/weaker when…," "Z × X interaction," "for participants high in Z, the effect was…" Statistical tests for moderation usually involve interaction terms in regression or ANOVA. Mediator language: "X influences Y through Z," "Z explains the relationship between X and Y," "X has an indirect effect on Y via Z," "Z is the mechanism by which X affects Y." Statistical tests for mediation include path analysis (which this study uses), Baron and Kenny's steps, or bootstrapped indirect effects. In this specific study, the authors are very clear about the training video as a moderator — they say so explicitly in the conceptual model in Figure 1, and they test it using "Wald statistics tested whether model paths varied significantly between children who saw the advertising training video and those who did not." But they're also using perceived informative intent and perceived selling intent as mediators — variables that sit between the experimental condition (IV) and the outcomes (recall, purchase intention). The path analysis structure tells you this: the experimental condition affects perceived intent, which then affects the DVs. Perceived intent is the mechanism the authors think explains why sponsorship cues affect kids' responses. Notice this study uses both at once. That's common in published research — moderators and mediators do different work in the same model. Extra video on finding moderators and mediators in published studies: https://youtu.be/nNcrmLLR_Rc
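      If you want to see what "testing moderation with an interaction term" looks like mechanically, here's a sketch on synthetic data. This is NOT the study's data, and it uses ordinary least squares rather than the authors' path analysis with Wald tests; the outcome is simulated so the intent-to-outcome slope is 0.4 larger in the training group, and fitting a design matrix that includes the intent × training product recovers that interaction:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      n = 500
      intent = rng.normal(size=n)             # hypothetical "perceived intent" (measured)
      training = rng.integers(0, 2, size=n)   # hypothetical random assignment (0/1)

      # Simulated outcome: the intent slope is 0.2 without training,
      # and 0.2 + 0.4 with training (the 0.4 is the moderation effect).
      outcome = (1.0 + 0.2 * intent + 0.3 * training
                 + 0.4 * intent * training
                 + rng.normal(scale=0.5, size=n))

      # Moderation shows up as the coefficient on the product term.
      X = np.column_stack([np.ones(n), intent, training, intent * training])
      coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
      print(coefs.round(2))  # last coefficient estimates the interaction (~0.4)
      ```

      A Wald test in the study's path analysis is asking the same underlying question this interaction coefficient answers: does the path differ between the training and no-training groups?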

    80. H1

      Same heads-up as for H2 (section 77): the moderator in this hypothesis is theoretical, not measured. Schema activation (here, the advertising schema) is an internal cognitive process inferred from the pattern of results, not assessed directly. In the path analysis it isn't a separate variable; the study tests the IV → DV link directly and uses schema theory to explain WHY that link exists.

    81. Parents reported family demographic information, including race and ethnicity, household income, and parent education.

      These are in fact variables! Race/ethnicity, household income, and parental education are demographic variables collected from parents at the start of the survey. To figure out how they're being used, ask: does this variable show up in any of the hypotheses or RQs? Look back at H1–H6, RQ1, RQ2. None of them mention race/ethnicity. None mention income. None mention parental education. So these aren't IVs or DVs in any of the tested relationships. That tells you these are control variables (also called covariates). Researchers collect them for two main reasons:

      To describe the sample. That's what Table 1 is doing — telling you who actually participated so you can judge external validity. To statistically adjust for them in the analyses. This is when researchers want to make sure the effects they're seeing aren't driven by demographic differences across conditions. (In a true experiment with random assignment, demographics should be roughly equal across conditions anyway — but checking and statistically controlling is good practice.)

      For each of these demographic variables, you'll want to identify:

      Operationalization: what specific question was asked, and what response options were given? (The article is sparse on this — for some demographics, you may have to infer from Table 1 what the response options must have been.) Level of measurement: race/ethnicity is nominal (categories with no order). Education is typically ordinal (less than bachelor's < bachelor's < graduate). Income — be careful here. If they reported it as ranges (under $50K, $50–99K, etc.), that's actually ordinal, not interval, even though the underlying construct is continuous. Use: control variable, not part of any hypothesis.

      Extra video on finding IVs and DVs (and what's not an IV or DV): https://youtu.be/SlljkpUY4J4

    82. Non-sponsored video

      Quick refresher on what a manipulation check is and why it matters for this study. A manipulation check is a measurement researchers add to verify that their IV manipulation actually worked the way they intended — that participants noticed it, understood it, or were affected by it. It's not the dependent variable. It's a check on the integrity of the experimental setup itself. For example, in a study manipulating "fear appeals" in health messages, the researcher would want to verify that participants in the high-fear condition actually felt more afraid than participants in the low-fear condition. If the manipulation check shows no difference in self-reported fear between groups, the manipulation failed — and any downstream findings on the DV become hard to interpret. Maybe there's no effect because the IV doesn't matter. Or maybe there's no effect because the manipulation never actually worked. For each of the experimental conditions in this study (non-sponsored, sponsored, sponsorship unaddressed, training video), ask yourself:

      Did the authors verify that participants noticed the manipulation? If yes, what did they ask, and what did they find? If no, what could they have asked? And how does the absence of a check affect your confidence in the findings?

      Be careful here — perceived selling intent and perceived informative intent are dependent variables in this study (and mediators in the path analysis), not manipulation checks, even though they look similar to what a manipulation check might measure. A true manipulation check would ask something like "was this video sponsored?" — a direct test of whether kids registered the sponsorship cue itself. Extra video on manipulation checks: https://youtu.be/0mr2K9Pji7k

    83. The sample of parent-child dyads was recruited from an online participant panel and may not be fully representative of US tweens

      This is the authors flagging an external validity concern, and it's worth pausing on because the trade-off it represents is one of the most important tensions in experimental research. Internal validity is about whether you can be confident the IV actually caused changes in the DV — whether you've ruled out alternative explanations. Tightly controlled experiments tend to have strong internal validity because you can rule out lots of confounds. External validity is about whether your findings generalize beyond the specific sample, setting, time, and stimuli of your study — whether what you found would hold up with other people, in other places, with other materials. These two often pull in opposite directions. The more controlled your experimental setting (which boosts internal validity), the less it resembles the real world (which hurts external validity). The more "ecologically valid" your setup (real environments, real materials, real behaviors), the harder it gets to rule out confounds. This study leans toward internal validity. The authors created controlled video stimuli, used random assignment, and standardized exposure through Qualtrics. That's good for causal inference. But it costs them on external validity:

      The sample skews toward higher-income, more-educated, mostly White families on an online research panel — not the full diversity of U.S. tweens. The stimuli were custom-made videos with an unknown actor — not real influencer content with hosts kids actually follow. The product was a water flosser — not the toys, cosmetics, or food products that dominate real youth-targeted unboxing content. The viewing happened in a survey environment with a parent nearby — not a kid alone on YouTube doomscrolling at 9pm.

      None of this means the findings are wrong. It means we should be cautious about generalizing them. The next step in this research program would be replication with more diverse samples, real influencer content, and naturalistic viewing contexts. Extra video on internal vs. external validity: https://youtu.be/ehq62uzVAzM

    84. contributors

      Author credibility - remember that we Google authors and look for specific credibility markers: institutional affiliation, degree and where it's from, research focus alignment with the study topic, and publication record. You could also look authors up on Google Scholar to see their other publications and citation counts - but again, on a quiz you'd just need to know WHERE you'd look and what you'd look for.

    85. 3 (sponsored; non-sponsored; sponsorship unaddressed cue) x 2 (advertising training; no advertising training) randomized experimental design

      Let's unpack this notation, because you'll see it constantly in experimental research and it's worth being able to read at a glance. When researchers write "3 × 2," they're describing a factorial design — an experiment that manipulates more than one independent variable at the same time. The numbers tell you the levels:

      The first IV (sponsorship cue) has 3 levels: sponsored, non-sponsored, sponsorship unaddressed. The second IV (advertising training) has 2 levels: training, no training. Multiply those together and you get 6 total conditions. Each tween is randomly assigned to exactly one of those 6 cells.

      This is between-subjects, which means each kid only sees one combination — not all six. That's why factorial studies often need larger samples than single-IV experiments: you're filling 6 cells, not 2. Here's why a factorial design is more powerful than just running two separate experiments. It lets you test for interaction effects — does the effect of one IV depend on the level of the other? That's exactly what the authors are after here. They don't just want to know "does sponsorship disclosure matter?" and "does training matter?" separately. They want to know whether training changes how kids respond to the sponsorship cue. A factorial design is the only way to answer that question in a single study. You'll see this notation everywhere: a 2 × 2 has 4 conditions, a 2 × 3 has 6, a 3 × 4 has 12, and a 2 × 2 × 2 (three IVs) has 8. The pattern always holds. Extra video on factorial design: https://www.youtube.com/watch?v=r0tn9E0WPks. The Identifying Experiment Types infographic in Module 7 also walks through how to count factorial conditions.
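      The condition-counting arithmetic is easy to internalize by enumerating the cells yourself. A quick illustrative sketch (the condition labels come from the study; the code does not):

      ```python
      from itertools import product

      # The two manipulated IVs and their levels, per the 3 x 2 design.
      sponsorship = ["sponsored", "non-sponsored", "sponsorship unaddressed"]
      training = ["advertising training", "no advertising training"]

      # The factorial cells are the Cartesian product of the levels: 3 x 2 = 6.
      conditions = list(product(sponsorship, training))
      print(len(conditions))  # 6

      # The pattern generalizes: a 2 x 2 x 2 design (three two-level IVs) has 8 cells.
      three_ivs = list(product([0, 1], [0, 1], [0, 1]))
      print(len(three_ivs))  # 8
      ```

      In a between-subjects design, each participant is randomly assigned to exactly one tuple in that list.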

    86. Materials and methods

      Two related concepts here that students often blur together: cover stories and demand characteristics. A cover story is what the researchers tell participants the study is about — and it's often not the real research question. Why? Because if you tell a tween "we're studying whether you can detect sponsored content," they're going to perform — they'll scan for sponsorship cues much more carefully than they would normally, and you'll get inflated detection rates that don't reflect real-world behavior. Demand characteristics are the problem cover stories try to solve. They're cues in the experimental setup that tip participants off to what the researcher expects, leading participants to (consciously or unconsciously) provide responses that match those expectations rather than their genuine reactions. Now look closely at this study. Ask yourself:

      What did the kids think the study was about? (Look at the assent language and the procedures.) Were the videos presented as real YouTube content? (Yes — the authors note they streamed videos "from a private YouTube channel, in order to boost children's beliefs that these were real videos on the YouTube platform.") That framing is a deceptive element built into the stimulus, not just the study description. Did the authors explicitly mention using a cover story? Or were they relatively transparent about the topic?

      This study sits in an interesting middle ground: there's no elaborate cover story about the study purpose, but there is mild deception about the authenticity of the video stimuli. Could the authors have run this study without any deception? What would that have cost in terms of internal validity? Extra video on cover stories: https://youtu.be/DHfvtcMvKeA Extra video on demand characteristics: https://youtu.be/dSFIAoTKb0o

    87. Discussion

      On the quizzes, you'll need to find opportunities for future research that the author(s) discuss. Remember that these are ideas the author(s) include, sometimes tied to limitations, about what could be done next or differently to further expand our understanding of this area.

    88. Practical implications

      This is an example of translational research. The journal requires it, and it's a genuinely useful practice: the summary sits under the abstract on the main webpage so that anyone can read it, even without full access to the article.

    89. sent to panelists meeting study eligibility criteria (i.e., US parent of a child between 8–13)

      Who was studied: U.S. tweens between the ages of 8 and 13. Eligibility criteria were that participants had to be U.S. parents of a child in this age range — but notice the study is actually about the kids, not the parents. Parents were recruited as the gateway because the kids couldn't legally consent on their own. Out of 298 parents who initially clicked the survey link, 251 parent-child dyads completed the study (3 children declined assent, 13 children quit early, and 31 parents didn't complete the consent process). Notice the dropout funnel — that screening matters for the eventual sample. Let's break down the three layers:

      Population of interest: U.S. tweens ages 8–13 (especially those who watch online video content like unboxing videos). Sampling frame: Active members of Dynata's U.S. online research panel who happen to be parents of tweens. Dynata is a research panel where people sign up to take surveys in exchange for points convertible to cash, travel miles, or prizes. Sample: The 251 parent-child dyads who actually completed the study.

      Each step narrows the pool and potentially introduces bias. The population is all U.S. tweens; the sampling frame is parents who happen to be Dynata panelists (people who voluntarily sign up to take paid surveys); the sample is the subset of those whose kids agreed and completed the study. The further the sampling frame is from the actual population, the more bias creeps in — and Dynata panelists skew in particular demographic ways (whiter, more educated, higher-income than U.S. parents overall, as Table 1 shows). What's the sampling method? The authors don't explicitly name it, but this is a convenience sample (a type of non-probability sampling). They recruited from an existing pool of available panelists — not a random sample of all U.S. parents. You could also argue it has elements of purposive sampling because they specifically targeted parents of tweens (a defined characteristic), but the underlying recruitment is fundamentally convenience-based. Advantages of this approach: fast, inexpensive, access to a large pool, can screen for specific characteristics (like having a child in the right age range), can target by geography (U.S.-only). Disadvantages: not representative of all U.S. parents/tweens, Dynata demographics skew in particular ways (which the authors acknowledge in their limitations), self-selection bias (people who join paid research panels aren't random), and people who use Dynata regularly may not be typical parents. This matters for generalizability — findings may not apply to all U.S. tweens, especially those whose parents wouldn't sign up for an online survey panel. Extra video on figuring out the sampling method: https://www.youtube.com/watch?v=rstREj9jZdg
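
The dropout funnel described above is simple subtraction; here's a quick sketch using the numbers from the recruitment description:

```python
clicked = 298          # parents who initially clicked the survey link
no_consent = 31        # parents who didn't complete the consent process
declined_assent = 3    # children who declined assent
quit_early = 13        # children who quit early

completed = clicked - no_consent - declined_assent - quit_early
print(completed)                                # 251 parent-child dyads
print(f"{completed / clicked:.1%} completion")  # roughly 84% made it through
```

Each subtraction is a place where self-selection can creep in: the 251 who remain may differ systematically from the 47 who dropped out.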

    90. After checking reliability (α = .82)

      This is reporting the internal consistency reliability of the perceived informative intent scale. Cronbach's alpha (α) is what researchers use to check whether a multi-item scale is reliable — basically, do the items "hang together" the way they should if they're all measuring the same underlying construct? Alpha ranges from 0 to 1. Conventional thresholds:

      α ≥ .70 = acceptable
      α ≥ .80 = good
      α ≥ .90 = excellent

      An α of .82 here means the three perceived informative intent items are reliably measuring the same construct. If alpha had been low (.40, say), it would mean the items aren't really tapping a single thing — they'd need to be split into separate measures or revised. You'll also see Cronbach's alpha for the perceived selling intent scale, except wait — only TWO items measure selling intent, and the authors report a correlation (r = 0.53) instead of alpha. Why? Alpha can technically be computed for two items, but it adds nothing there: with only 2 items, the inter-item correlation carries the same information more directly, which is why researchers typically report r (or the Spearman-Brown coefficient) in that case.
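
If you want to see the mechanics, here's a minimal sketch of the alpha computation in Python — the ratings below are fabricated for illustration, NOT the study's data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for rows of respondent ratings (k items per row):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items[0])
    item_var_sum = sum(variance(col) for col in zip(*items))  # per-item spread
    total_var = variance([sum(row) for row in items])         # spread of totals
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Fabricated 1-5 ratings from six kids on three "informative intent" items:
ratings = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2], [4, 4, 5]]
print(round(cronbach_alpha(ratings), 2))  # ~0.95: these items hang together
```

Notice the logic: when items move together, the variance of the total score is large relative to the sum of the individual item variances, which pushes alpha toward 1.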

    91. 251 parent-child dyads.

      Is 251 enough for this study? Let's think it through. In a factorial design like this one, what matters isn't just the total N — it's the N per cell. With a 3 × 2 design and 251 participants, you're roughly looking at 251 / 6 ≈ 42 tweens per condition. (The actual breakdown is a little uneven, but that's the ballpark.) A common rule of thumb for between-subjects experiments is at least 30 participants per cell to get reasonably stable estimates, and more is better. So 42 per cell is decent — not generous, but workable. The authors were able to detect statistically significant effects in some of their hypotheses, which suggests they had enough power for the strongest relationships. But here's where it gets interesting: many of the hypotheses in this study were not supported (H1, H4, H6b were unsupported; H3, H5 were partially supported). When hypotheses fail, you have to ask: was there really no effect, or did the study just lack the statistical power to detect a small effect? With ~42 per cell, this study could detect medium-to-large effects but might miss small ones. That's a limitation worth thinking about when you read the discussion. Larger samples give you more power to detect smaller effects — but they can also make trivially small effects reach statistical significance. Sample size always involves a trade-off, and there's no single "right" number. It depends on your design, the effect size you expect, and what you're willing to risk missing. Extra video on sample size in experiments: https://www.youtube.com/watch?v=v-dyn6tO5dQ

    92. methods

      What research paradigm does this study align with? This is a post-positivist study.

      Evidence:
      - Hypothesis testing. The study advances 6 specific, falsifiable hypotheses (H1–H6, with sub-hypotheses) plus 2 research questions, all stated upfront before data collection. Findings are evaluated against these predictions, with hypotheses described as "supported," "partially supported," or "unsupported" — that's the language of post-positivism.
      - Quantification. All variables are measured numerically — Likert scales for perceived intent (1–5), correct/incorrect for recall items (0/1), summed scores for total recall (0–2), and a single rating for purchase intention (1–5). Statistical tests (path analysis, Wald tests, p-values) determine whether relationships exist.
      - Experimental control and random assignment. The authors manipulated two independent variables and used Qualtrics to randomly assign participants to conditions — both classic post-positivist tools for isolating cause-and-effect relationships.
      - Theoretical frameworks that predict relationships. The authors draw on schema theory and the Persuasion Knowledge Model to GENERATE testable predictions. Theory is used deductively (theory → hypothesis → test), not built up inductively from observations.
      - Search for generalizable patterns. The authors are interested in how tweens in general respond to unboxing videos — not how specific individuals interpret them.
      - Findings are reported as group-level statistics (means, regression coefficients), and the limitations focus on whether the sample is representative enough to generalize from.
      - Researcher distance from participants. Data collection happened through an online survey panel with no direct researcher-participant interaction. The setup deliberately minimizes the researcher's influence on responses — another post-positivist signature.
      - Acknowledgment of uncertainty. Note that this is POST-positivist, not strict positivism. The authors hedge appropriately — they discuss limitations, acknowledge that the manipulation might have worked through different mechanisms than predicted, and note ceiling effects and social desirability bias. They're seeking objective truth but acknowledge they can only approach it imperfectly.

    93. methods

      On the quizzes, you'll be asked to find where the author(s) discuss ethical considerations explicitly and implicitly. An implicit example: authors may not mention some ethical considerations because it is obvious to other researchers that standard protections were in place. In each annotation, use the hashtag #ethicalconsideration and explain what the author(s) did and why, using course concepts related to research ethics. These will be in the methods section and possibly in the discussion/conclusion section. AND because there are limited ethical discussions in this particular study, please reply to this annotation with some suggestions about what could have or should have been done, ethically.

    94. Wald test

      A Wald test asks whether two estimates differ reliably from each other. The authors use it here to test moderation: is the effect of perceived intent on the outcome significantly different in the training condition vs. the no-training condition? If the Wald test yields p < .05, the moderation is statistically significant. If it isn't, the apparent gap between conditions could just be noise. You don't need to compute it - just recognize that when authors report a Wald test in a moderation analysis, they're testing whether two coefficients differ, not whether either one is significant on its own.
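
Mechanically, a Wald test for the difference between two coefficients divides the gap by its pooled standard error. A minimal sketch with made-up numbers — the coefficients and SEs below are hypothetical, not from the article:

```python
from math import erf, sqrt

def wald_z(b1, se1, b2, se2):
    """Wald z-statistic for whether two independent coefficient estimates differ."""
    return (b1 - b2) / sqrt(se1 ** 2 + se2 ** 2)

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical: effect of perceived selling intent on recall, by condition.
z = wald_z(0.50, 0.12,   # training condition: b = .50, SE = .12
           0.10, 0.15)   # no-training condition: b = .10, SE = .15
print(round(z, 2), round(two_sided_p(z), 3))
```

The test is on the difference: even if both coefficients were individually significant, the Wald test could still say they don't differ from each other, and vice versa.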

    95. U.S.

      Article: Vaala, S. E., Mauceri, F., & Connelly, O. (2024). U.S. tweens' reactions to unboxing videos: Effects of sponsorship disclosure and advertising training. Journal of Children and Media, 18(2), 272–292. https://doi.org/10.1080/17482798.2024.2338541

      Purpose: This annotated example is designed to help you prepare for the in-class experiment study identification quiz. Read through the article and these annotations carefully. The annotations explain what to notice and why — the quiz will ask you to do similar analysis on a different experiment study.

      How to use this: Read the article section by section. When you encounter a highlighted passage, read the annotation. The annotations teach you what to notice and why it matters. This is NOT a graded assignment — it's a learning tool. When you take the in-class quiz on a different experiment study, you'll need to:
      - Identify the bigger research question and why the study matters
      - Identify hypotheses/RQs with their IVs, DVs, mediators, and moderators
      - Identify the sampling method and its strengths/weaknesses
      - Identify how participants were assigned to conditions, and the strengths/weaknesses of that approach (this is separate from sampling!)
      - Identify the type of experiment — true, quasi, or pre-experiment; lab, field, or natural; posttest-only, pretest-posttest, or time series; between-subjects or within-subjects; factorial or single-factor
      - Find and annotate ethical considerations, including those specific to research with minors (parental consent, child assent, IRB scrutiny of special populations)
      - Analyze variables: conceptual definition, operationalization, level of measurement, role in hypotheses, reliability, and whether each was manipulated or measured
      - Identify the research paradigm with supporting evidence
      - Find limitations and future research directions
      - Find places where authors connect their findings to previous research
      - Evaluate author and journal credibility
      - Apply experiment-specific concepts: manipulation checks, attention checks, cover stories, deception, demand characteristics, threats to internal validity, and the trade-off between internal and external validity
      - For each hypothesis, evaluate how the authors attempted to establish causality (association, temporal order, non-spuriousness) and whether they succeeded — paying particular attention to whether the IV was manipulated or measured

      Remember to use your strategic reading skills: skim the abstract and Impact Summary first, map the structure using headings, then read the Methods and Results carefully. The introduction/theory can be skimmed for main ideas. The Discussion needs careful reading for limitations, future research, and connections to previous work.

    96. YouTube

      Practice your strategic reading here. Before reading this article front-to-back, try this: Read the abstract (1 min). Skim the headings to see the structure (30 sec). Jump to the Methods - figure out: who was studied, how, and what was measured (5 min). Then read the Discussion for findings and limitations (5 min). THEN go back and read the Introduction/Lit Review for theory. This is more efficient than reading linearly, and it's the approach you should use on the quiz.

    98. ARTICLE HISTORY

      Here's a real example of what the peer review process looks like in practice. Let's trace the timeline for this article:

      March 27, 2023: The authors submitted the manuscript to the journal. Somewhere between March 2023 and March 2024: the editor sent it to peer reviewers (likely 2–3 experts in children's media, advertising literacy, or developmental psychology). Those reviewers read the full study and wrote detailed feedback — pointing out weaknesses, suggesting additional analyses, questioning interpretations, recommending additional literature, etc. The authors then revised the manuscript in response to that feedback. March 27, 2024: The revised manuscript was resubmitted (exactly one year after the original submission, which is on the longer end of the spectrum — it suggests substantial revisions). March 31, 2024: Accepted, just four days after the revision. That fast turnaround tells us the editor was satisfied that the revisions addressed reviewers' concerns and didn't need to send the manuscript back for another round.

      So the total process from submission to acceptance was about a year. This is fairly typical for communication journals — some are faster, some take over a year. The key point: this study went through expert scrutiny before it was published. That's what distinguishes peer-reviewed research from a blog post, a news article, or a preprint. On the quiz, you may or may not have these dates for every study, but when you DO see them, they tell you something about the rigor of the review process.

    1. 0101100450082 · COEBLALA

      Add the three account options; I will send account numbers for Baht and Lak.

      Baht: 0101100450073. Lak: 0101100450064.

    2. Signing of this Agreement — mobilisation of design team ·Stage 1 02 Conceptual design sign-off by Client ·Stage 2 03 Delivery of full engineered drawing set

      Is this a date issue or a money issue?

    1. By signing here you affirm authority to sign for Cameron Stovold & Sunny Stovold. No signature yet.

      Sam, can we test to make sure it works efficiently?

    2. Stage Milestone Amount (USD) 01 Signing of this Agreement (mobilisation of design team) $1,200.00 02 Conceptual design sign-off by Client $900.00 03 Delivery of full engineered drawing set $900.00 — Total $3,000.00

      One fee of 3000, eliminate the three draws

    1. One platform for all three sites. Currently split between Power Diary (personal) and Zanda Health (MHFA). I recommend consolidating on Zanda Health: its group-booking, course-iteration and class-capacity features map better to your MHFA delivery model, and clinical 1:1 booking is a feature subset Zanda handles cleanly. If there’s a strong reason to keep Power Diary, I’ll review with you in discovery.

      Only the individual counselling/supervision services use Zanda. It likely needs to remain separate from the corporate business. Reasons:
      - Medicare and AASW (Australian Assoc. of Social Workers) accreditations demand a dedicated, locked-down system; security/privacy regs apply, and Leigh is super sensitive to this aspect.
      - Leigh currently uses the invoicing/payment features of Zanda + Tyro, while the Corp business MHS&T migrated to Xero 12 months ago for Corp invoices. Meanwhile MHS&T training bookings are via WIX with a Stripe payment gateway on the backend. We presently have only 1x CBA business account (source of all financial truth).
      - In the long-term future the Counselling, MHS&T, or Training business may each be split/sold in part, an option made easier by maintaining separated database assets; if this occurred, online assets would need review.

      Ultimately I am open to updating/merging booking/payment systems where we can; this would require a deeper level of review/assessment.

    2. The eventual National MHFA Referral programme uses the storefront’s lead volume to feed paid referrals, which gives you another revenue line on top of the funnel.

      We will likely look at eventually promoting training courses in other states/cities and send Leigh to run them, while simultaneously recruiting a stable of locally-based MHFA trainers in each place.

    3. MHFA storefront

      Let's re-title this the Training Storefront, so as to capture all variations of face-to-face training, MHFA or other brands, and bespoke workshops, including online training, etc.

    1. Customers often remain loyal to a brand because of its appearance, functionality, or price. Companies want loyal customers like these and will go to great lengths to keep them.

      Loyalty can also span generations. Some people will solely utilize the brands their parents or grandparents were loyal to and not think twice about it.

    1. including a reputation for treating customers and employees fairly and for engaging in business honestly.

      Employee retention is a very important stat for businesses, in my opinion. Maintaining a positive work environment can go a long way.

    1. It may also mean treating our employees, customers, and clients with honesty and respect.

      I think many businesses undervalue this aspect of business. The term "internal customer" was brought up at one of the first company meetings I was in post-graduation, and that term has stuck with me since. Retaining employees is growing more and more difficult with job hopping increasing in popularity.

    1. However, there are common threads that run through it, and the Framework focuses on those, again with the goal of serving as a useful aid that is relatively easy to apply.

      It's interesting to think about how different cultures' ethics can vary. I remember one of my professors in college telling me that there are cultures that view handshakes as disrespectful. She taught me how important it is to be aware of key differences when meeting with people from other cultures.

    1. (b) If MLH is not appointed as the construction contractor: The Drawings remain the sole and exclusive property of MLH. The Client receives a non-transferable, single-use licence to use the Drawings to construct the residence on the Site only, as set out in Section 7.2. The Drawings may not be reused on any other site, project, or by any other party.

      If the client has paid in full and chooses a different contractor, the design becomes their property, without liability of any type to MLH, past, present or future.

      7.1(b) and 7.2 cover my above point, so no further wording is required? Your thoughts.

    2. Additional revisions beyond the included three shall be charged at USD 150.00 per round, payable in advance of work commencing on that round.

      Sam Question

      How do we keep this clause as irrelevant-but-necessary in the event we have a difficult client where changes are happening continuously? This clause is really for our protection. Your thoughts?

    3. Stage Milestone Amount (USD) 1 Signing of this Agreement (mobilisation of design team) $1,200.00 2 Conceptual design sign-off by Client $900.00 3 Delivery of full engineered drawing set $900.00 Total $3,000.00

      Eliminate the stages, one payment

    4. Site layout including driveway, fencing, gate, and landscaping zones (basic)

      add a bullet,

      The footprint of the home is based on a per-square-metre price. E.g., if your design is 120 m2 with a second storey of the same size, your buildable size is 240 m2. Driveways, gates, fencing, landscaping, and car park are priced separately.

    1. However, while some people have highly developed habits that make them feel bad when they do something wrong, others feel good even though they are doing something wrong.

      It's interesting how different people can react to the same thing. I wonder what the main causes of this are: whether it's primarily the environment they grew up in or something else.

    1. illusion

      I believe an appropriate substitute for this term would be misunderstanding. I think that social media and news outlets try to hide certain key aspects in reports, which causes drastic misunderstandings when people don't dive deeper into them.

    1. That terminal carbon atom (shown here in blue) is called the omega carbon atom. Thus a monounsaturated fatty acid with its single double bond after carbon #3 (counting from and including the omega carbon)

      Why is this blue carbon called omega when the 3rd carbon position is also called omega? It seems to me more reasonable that the blue carbon be called alpha, and then the 3rd carbon would be the omega, thus making its bond an omega one.

    1. If you decide to engage MLH for design, we sign a Design Contract. Six weeks of work: conceptual design, three rounds of revisions, full architectural and engineering drawings, and 3D rendered visualisations of your home before any construction begins.

      Add: You have indicated that you each have design ideas and would like two separate designs included, with the intent that upon reaching final design approval, MLH provides the completed technical and working drawings for the design you choose.

    2. If you have land in mind, we visit it together. We assess access, orientation, soil indicators, and any setback or zoning constraints. You leave the visit with an informed view on whether the plot suits your brief.

      Add: we have the plot plan and dimensions, but we have not visited the site as of this date.

    3. Thirty minutes, by video call or in person if you are in Vientiane. Gary, our Managing Director, will walk through your initial questions, hear what you have in mind, and tell you honestly whether MLH is the right fit. No fee.

      Gary completed the initial call with both Sunny and Cameron in early April.

    4. These figures are construction-only. Land, government fees, soil testing, topographical survey, and pool engineering (where applicable) are quoted separately.

      Add,

      Note: sizes indicated are general; we have built 80 m2 structures with a beautiful 40 m2 deck, but averages would be in the range of 160 m2.

    5. Final pricing is set only after design contract and site assessment. The ranges above are real — they reflect homes we have built. We will not quote you a number we cannot honour.

      Add, we want the design and costs to fit your budget.

    6. You have design ideas in mind. You have a sense of what you want to achieve, but are not yet sure of the square metres required or the realistic price range

      , and in our preliminary discussions we learned you want a small pool and a separate (if possible) mother-in-law suite. All details we will engage and work through in the design process.

    7. You are currently looking at land. The plot is not yet finalised, which gives us time to plan size and footprint before purchase.

      You have selected lot D2 and indicated you have made a deposit on that lot and will be completing the purchase contract shortly on your next return to Lao, now expected in a week or two.

    1. Here individuals are viewed not in isolation, but as members of communities that are partially responsible for the behavior of their member

      This reminds me of what my college coaches used to preach. They talked about how when you're wearing "our brand" you're representing more than yourself; you're representing our brand and our community.

    1. American cultural traditions, in fact, reinforce the individual who thinks that she should not have to contribute to the community's common good, but should be left free to pursue her own personal ends.

      I feel as though replying with "what's in it for me?" when asked to do something is becoming a more and more common response. Unfortunately, it seems people are starting to look past the greater-good perspective.

    1. At home, no matter how much I contribute, my parents alwaystook it for granted as I am supposed to be filial … but here the elderly say thank you, theyrespect me, and they don’t try to control my life, to have a filial heart is much easier here.

      Highlighting quote that reinforces a main point: Filial piety doesn't depend on familial relations.

    2. One key distinction is that,with the elderly they assist, they don’t feel burdened by a sense of indebtedness,unlike the perceived obligation they experience with their own parents.

      I wonder if this is an example of psychological reactance.

    3. being stingy with the time spent visiting one’s parents shows a lack of ‘filialheart’.

      Why is filial piety being judged by circumstantial conditions? Also, drawing back to the challenge of technology, I wonder if care workers see video chatting or messaging online as spending time.

    4. Sucha seemingly double standard in defining the ‘filial heart’ highlights its fluid, context-dependent nature, shaped by varying economic and social circumstances

      Highlighting the fact that the expression of filial piety is incredibly nuanced.

    5. Grandma Li’s son works inAmerica … She is so proud of him …

      Makes me wonder why generational advancement via opportunities doesn't count as a filial action to her.

    1. Retributive justice refers to the extent to which punishments are fair and just

      Location is a big factor in this. States have varying laws around just about everything. For example, someone who commits a crime in one state could suffer a far greater punishment than if they were to commit the same crime in another state.

    1. How would the action affect the basic well-being of those individuals?

      I feel as if this ideology is becoming harder and harder to find, with so many individuals wishing for fame and popularity.

    2. The right to privacy, for example, imposes on us the duty not to intrude into the private activities of a person.

      With all the recent technological advancements, I wonder how they have impacted this. News outlets typically report that all our moves, clicks, and purchases are tracked and used for targeted advertising.

    1. First, the utilitarian calculation requires that we assign values to the benefits and harms resulting from our actions and compare them with the benefits and harms that might result from other action

      Each benefit and harm is subjective, depending on the values of the individual making the decision. What one person deems extremely beneficial could be deemed harmful by another.

    1. FDR Practitioner Register status, what “Club Awesome” actually is, The Working Mind licensing timeline, current subcontractor roster, and which insurers actively send referrals.

      FDR Practitioner Register status: Leigh has qualifications in FDR but is not a registered practitioner. She is not, strictly speaking, attracted to this type of client work; however, the qualifications and earlier work in this realm feed into her current practice.

      “Club Awesome” is Leigh's 'club' concept of a gathering of clients around her, with shared attitudes toward positivity and self-development - special client member-only access to content, newsletters, workshops etc.

      The Working Mind licensing timeline - MHFA Australia are presently training more Trainers like Leigh and are talking about a primary launch end of year or early 2027. Their video content etc is Canadian and is being Australianised. Importantly - MHFA Australia have advised that we can soft-launch and run training sessions now, but need to keep any public marketing on the down low.

      Current subcontractor roster: - We have had a local woman delivering Aboriginal MHFA courses, but she has recently talked about moving away from this. Other MHFA trainers are widely approachable.

      Which insurers actively send referrals? We have an Insurance industry client who signed our 15pg service agreement. Services include: - MHFA training courses - Corporate Workshops - Consulting on Psycho-Social compliance - Staff individual Counselling (paid by client, no Medicare rebate) - Staff individual Professional Supervision - Annual Corporate Event speaking etc.

    2. lived-experience

      Remove any thought/reference to Leigh as a "lived-experience" service provider; these people are often untrained and, while good for speaker roles, are not qualified as mental health 'clinicians' running a 'clinical practice'.

    3. CRM

      Relationships require commitment and resources. CRM is not a priority, however segment-able outbound email/SMS communications, and a sales/opportunity funnel management tool are attractive - hence Zoho.

    4. build EAP infrastructure on top of i

      Not a focus; however, with the right 'chemistry' of counsellors working together, over time, a boutique level of EAP service would likely become marketable.