Content recall
No conceptual definition is provided in the article; the authors describe the operationalization but never define the construct theoretically.
purchase intention
No conceptual definition is provided in the article; the authors describe the operationalization but never define the construct theoretically.
The PKM contends that individuals will engage in cognitive coping strategies when they identify selling intent within a given message (i.e., activate an advertising schema).
Perceived selling intent (linked to Variable 7). From the Persuasion Knowledge Model: selling intent is recognized when consumers identify that a message is trying to persuade them to buy something, prompting cognitive coping responses.
The ability to understand advertising (including its forms, goals/biases, and production techniques) has been called “advertising literacy.” Advertising literacy skills help children filter the information they see in advertisements and view it through a critical lens to determine the intent behind it
Training video / advertising literacy (linked to Variable 2: Training video / no training video). The authors define advertising literacy as the cognitive skills children use to filter and critically evaluate the intent behind advertising content. The training video manipulation is meant to teach or activate exactly these skills.
They receive points for participating in research, which can later be converted to money, travel miles, or prizes from Dynata.
Compensation. Participants are paid (in points convertible to money or prizes) for participating. Compensation is ethically standard but worth noting because it can create subtle pressure to participate — especially for people who depend on panel earnings as supplementary income.
Three children indicated they did not want to be in the study, and 13 quit the survey early.
Right to refuse and right to withdraw. Three children declined to participate even after their parents had consented — their decisions were respected. Thirteen others quit partway through and weren't forced to continue. Both reflect the ethical principle that participation must be voluntary and ongoing, not just at the start.
well-compensated “influencers,” paid or otherwise compensated by marketers in exchange for favorable reviews
Sponsored video / unboxing host (linked to Variable 1: Non-sponsored / Sponsored / Unaddressed sponsored conditions). The authors define an "influencer" — and by extension a sponsored video — as content where the host has a paid or otherwise compensated relationship with the manufacturer.
The Institutional Review Board at High Point University approved this study.
The study was reviewed and approved by an Institutional Review Board before data collection — required for any university-affiliated research with human participants. Research involving minors typically receives heightened IRB scrutiny because children are a "special population" with reduced capacity for consent.
native advertising
Key term to understand for this study. Native advertising is advertising that's designed to look like the regular content around it, so consumers don't immediately recognize it as an ad. Examples in different formats:
- In a magazine: a "sponsored article" that reads like editorial content but is actually paid promotion.
- On Instagram: a post by an influencer wearing a brand's clothes that looks like personal content but is sponsored.
- On YouTube: an unboxing video where the host reviews a product they were paid to feature, presented in their normal vlog style.
The point of native advertising is to evade the consumer's "ad detection" defenses. When you see a TV commercial, you know it's an ad — there's a clear break, a jingle, a brand logo, a 30-second window. Your brain switches into "this is trying to sell me something" mode. With native advertising, those visual and structural cues are missing, so your defenses don't activate.

This is exactly what the FTC sponsorship disclosure rules are trying to address. By forcing a clear "sponsored" label on native ads, the rules give consumers the cue they need to recognize the content as advertising and engage their critical processing.

Why does this matter for kids? Their advertising literacy is still developing. Even when kids can spot traditional ads on TV, they often DON'T spot native ads embedded in entertainment content like unboxing videos. That's the puzzle this study is trying to address.
Advertising schema and the Persuasion Knowledge Model
This study uses TWO theoretical frameworks at once: Schema theory and the Persuasion Knowledge Model (PKM). That's worth understanding because theoretical frameworks do a lot of work in research articles, and being able to spot them is a meta-skill for reading any empirical paper. What theoretical frameworks do:
- They explain WHY the researchers expect specific relationships to exist. Without a framework, hypotheses are just guesses; with one, they're predictions grounded in existing theory.
- They organize the study's structure (which variables to include, what to measure, what to compare).
- They give meaning to the results. A finding "matters" because it supports, contradicts, or refines an existing theoretical understanding.
What's happening in THIS study:
Schema theory says people organize knowledge into mental schemas that direct attention and processing. The authors use schema theory to predict that kids who detect a SELLING intent will engage their advertising schema (focusing on product details), while kids who detect an INFORMATIVE intent will engage their educational schema (focusing on health info). PKM says when consumers detect persuasive intent, they engage cognitive coping strategies (often becoming more critical, more resistant). The authors use PKM to predict that kids who detect selling intent might develop defensive processing.
Notice these two frameworks make slightly different predictions. Schema theory predicts MORE recall of relevant info; PKM predicts MORE critical processing. The authors test both possibilities, which is part of what makes RQ1 interesting (they're asking, essentially, "which framework wins out?"). When you read any empirical study, look for the theoretical framework(s) early on. They tell you what the authors expect and why. Findings that match the framework's predictions are theoretical wins; findings that don't push the framework to evolve.
the informed consent information
Informed consent for parents. Parents read the study description and explicitly consented before any data collection began. This is the standard ethical baseline for adult participation.
Informed assent language explained the study in child-appropriate language and asked whether the child wished to participate.
This sentence reflects an important ethical distinction in research with minors: parents give consent, children give assent.

Consent is legally meaningful authorization, given by someone who fully understands what they're agreeing to. People under 18 are not considered legally capable of giving consent on their own behalf, so a parent or guardian gives it for them.

Assent is the child's own agreement to participate, given in age-appropriate language they can understand. It's not legally binding the way consent is, but research ethics requires it because children deserve agency over their own participation. A parent's consent doesn't override a child's refusal — if the child says no, even after the parent says yes, the child doesn't participate.

This study did this correctly:
- Parent reads the consent form and consents on the child's behalf.
- Parent brings child to the device.
- Child reads (or hears) the assent language in age-appropriate terms.
- Child decides for themselves whether to participate.
The authors note three children declined to participate at the assent stage — their refusals were honored. That's research ethics in action. Without an assent process, those kids' parents could have effectively forced them into the study. This is one of the implicit ethical considerations you should be taking note of. The authors don't shout about it, but the practice is significant.
One survey item assessed children’s water flosser purchase intention
Worth pausing on this. Purchase intention — arguably the most important DV in the study, since it's about real-world consumer behavior — was measured with a SINGLE item: "Will you ask your parents to buy a [company] water flosser?" answered on a 5-point scale. Single-item measurement creates several measurement issues:

- Reliability is unmeasurable. You can't compute Cronbach's alpha with one item — there's nothing to compare it to internally. So we have no statistical evidence that this measure is reliable.
- Validity rests entirely on this one question. If the question doesn't quite capture what "purchase intention" really means — maybe kids interpret "ask my parents" as different from "want this product" — there's no second item to triangulate against.
- Variability is constrained. With only 5 response options and 251 kids, you'll get clusters at certain values, which limits your ability to detect subtle effects.

Researchers sometimes use single-item measures because the construct is simple, the survey needs to be short (especially with kids), or face validity is high. But it's a real measurement weakness, and worth flagging when you assess the study's quality. Compare this to perceived informative intent (3 items, α = .82) and perceived selling intent (2 items, r = .53). Those are more reliable, even though selling intent is also fairly thin.
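If you want to see concretely why alpha needs at least two items, here's a minimal sketch of the formula. The data below are hypothetical, not the study's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals).

    items: one list of scores per scale item (all lists the same length).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Hypothetical 3-item scale, 5 respondents, items agreeing closely:
print(round(cronbach_alpha([[1, 2, 3, 4, 5],
                            [1, 2, 3, 4, 5],
                            [2, 2, 3, 4, 4]]), 2))  # → 0.97

# With a single item, k/(k-1) divides by zero — alpha simply isn't defined,
# which is the formal version of "reliability is unmeasurable."
```

Alpha depends on how consistently the items move together, which is exactly the internal comparison a single item can't provide.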
Procedures
Important note about this study: it does NOT include manipulation checks or attention checks. These are different things, and the absence of both is worth noticing. A manipulation check verifies that the IV manipulation actually worked — that participants noticed the manipulation and responded to it as the researchers intended. For this study, manipulation checks would have asked things like:
- Did the kid notice the sponsorship disclosure in the sponsored condition?
- Did kids correctly understand that the school report video was non-commercial?
- Did kids actually absorb the content of the training video?
Without manipulation checks, we don't know whether the manipulations worked. If results are null (like H4, H6b), is that because there's truly no effect, or because the manipulation never landed?

An attention check verifies that participants were paying attention generally — not that the manipulation worked, but that they were focused enough on the study for any of their responses to be meaningful. Common attention checks include items like "select 'strongly disagree' to show you're reading carefully" embedded in surveys.

The authors actually flag the absence of attention checks in their limitations: they note "Recruitment and data collection were also conducted online, prohibiting verification of treatment fidelity." Treatment fidelity = whether the participants actually received the treatment as intended.

Be careful not to confuse perceived selling intent and perceived informative intent with manipulation checks. They look similar (asking kids what they thought of the video), but in this study they're treated as mediators and DVs, not as checks. A real manipulation check would ask something like "Was this video sponsored, yes or no?" — a direct test of whether the manipulation registered.

Extra video on manipulation checks: https://youtu.be/0mr2K9Pji7k
Figure 1. Conceptual model of effects.
Take a minute with this figure — once you can read these diagrams, you can identify the variable structure of any study in seconds. Conceptual models like this one have a standard grammar:
- Boxes are constructs (variables, conditions, outcomes).
- Arrows show predicted relationships, usually IV → DV (cause → effect).
- Position shows the role: leftmost are IVs/conditions, rightmost are outcomes/DVs, anything in the middle is a mediator (something on the path between IV and DV).
- A box with an arrow pointing AT another arrow (like Pre-Roll Training pointing at the path between Perceived Intention and Outcomes) is a moderator. It changes the strength or direction of a relationship without sitting on the causal path itself.
Read this specific figure: experimental conditions (sponsored, non-sponsored, unaddressed) → perceived intention (informative or selling) → outcomes (recall, purchase intention). And the training video moderates the second arrow. So the variable structure of the whole study is:
- IV (manipulated): sponsorship cue (3 levels)
- Mediators (measured): perceived informative intent, perceived selling intent
- Moderator (manipulated): training video (2 levels)
- DVs (measured): product recall, health recall, purchase intention
When you encounter a complex study like this one, look for the conceptual model first. It tells you what role each variable plays before you wade into the methods section.
RQ2b
RQ2b (perceived intent × training → purchase intention): Association: Supported. Perceived informative intent predicted purchase intention in BOTH conditions, but more strongly with training (b = 0.95) than without (b = 0.59, Wald test p < .05). Perceived selling intent predicted purchase intention only WITHOUT training (b = 0.29) — training eliminated this relationship.
Temporal order: Partial. Same pattern as RQ2a — moderator's role is well-ordered (training came first), but the IV (perceived intent) is measured, blurring time order for the IV → DV link.
Non-spuriousness: Partial. Training is randomly assigned (supports moderator inference), but perceived intent isn't (leaves the door open for confounds like product interest, advertising familiarity, general skepticism).
Verdict: The moderating role of training is causally supported. The underlying IV → DV relationship between perceived intent and purchase intention is correlational.
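The Wald comparison the authors use boils down to a simple idea: the difference between two coefficients from independent groups, divided by the pooled standard error. Here's a minimal sketch — the b values (0.95 vs 0.59) are from RQ2b above, but the standard errors are invented for illustration, since the real test would need the article's actual SEs:

```python
import math

def wald_z(b1, se1, b2, se2):
    """z statistic for H0: two coefficients from independent groups are equal."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

def two_tailed_p(z):
    """Two-tailed p-value from a standard normal, via the error function."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# b = 0.95 (training) vs b = 0.59 (no training), with HYPOTHETICAL SEs:
z = wald_z(0.95, 0.12, 0.59, 0.13)
p = two_tailed_p(z)
print(round(z, 2), round(p, 3))  # with these made-up SEs, p lands below .05
```

The logic is the same whether the software reports it as a z or as a chi-square with 1 df (the chi-square is just z squared).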
H1
Pause here. This hypothesis is about a relationship between two variables — perceived selling intent (IV) and product recall (DV) — and it's worth understanding upfront that perceived selling intent is a measured variable, not a manipulated one. What's the difference, and why does it matter?

A manipulated variable is one the researcher controls directly. They assign participants to different levels (training video vs. no training video, sponsored vs. non-sponsored) and watch what happens. Random assignment makes manipulated variables powerful for causal inference.

A measured variable is one the researcher just observes — they survey participants and record what they say. Perceived selling intent is measured: kids watched a video, then answered survey questions about whether they thought the host was being paid. The researchers didn't make some kids perceive selling intent and other kids not perceive it.

Why this matters: when an IV is measured (not manipulated), causality claims get much weaker. You can't establish strong temporal order (because perception and outcome are measured close together). You can't rule out confounds via random assignment (because you didn't assign anyone to anything). Pre-existing differences between high-perceivers and low-perceivers might explain BOTH their perception and their recall.

When you read the hypotheses and results, keep track of which variables are manipulated and which are measured. The training video and sponsorship cue are manipulated. Perceived intent (both kinds) is measured. That distinction will shape every causality answer you give.
RQ2a
RQ2a (perceived intent × training → recall): Association: Supported. Significant moderation by training video on the relationship between perceived informative intent and health recall (Wald test p < .05). With training, perceived informative intent predicted health recall (b = 0.21); without training, it didn't.
Temporal order: Moderate. The MODERATOR (training) was manipulated and came first by design. But the IV (perceived intent) was measured, not manipulated, and assessed alongside the DV. So time order is fuzzy for the IV → DV link, even though it's clean for the moderator's role.
Non-spuriousness: Partial. Training was randomly assigned, supporting strong inference about the moderator's role. But perceived intent wasn't, so third-variable explanations remain possible for the core IV → DV link.
Verdict: The moderating role of training is causally supported. The underlying IV → DV relationship is correlational, not causal.
H4
H4 (non-sponsored → perceived informative intent): Association: NOT supported. Tweens perceived similar informative intent for sponsored and non-sponsored videos, regardless of training condition. [FYI - even if association is unsupported, you still need to go through the rest of the steps!]
Temporal order: Strong. Video condition was manipulated and presented before measurement.
Non-spuriousness: Strong. Random assignment controls for pre-existing differences.
Verdict: Causality cannot be claimed because there's no association to begin with. Despite a strong experimental design, the predicted relationship simply didn't appear in the data. Good design + no effect = no causal claim.
H6b
H6b (training video → detection of informative intent in non-sponsored content): Association: NOT supported. No significant difference in perceived informative intent between training and no-training groups, in any condition. [FYI - even if it was unsupported, still walk through the rest of the steps.]
Temporal order: Strong (same reasoning as H6a — training came first by design).
Non-spuriousness: Strong (random assignment).
Verdict: Causality cannot be claimed because there's no association. Despite a strong experimental design, the predicted relationship didn't appear.
H6a
H6a (training video → detection of selling intent in sponsored content): Association: Partial. The training × sponsorship moderation wasn't significant overall, but within the sponsored condition specifically, training did increase perceived selling intent (b = 0.50, p < .01).
Temporal order: Strong. Training video came first, sponsored video second, perception measure third.
Non-spuriousness: Strong. Random assignment to both training/no-training and to sponsorship condition. Standardized stimulus exposure.
Verdict: Causality conditionally supported within the sponsored condition. Strong experimental design supports the inference.
a
RQ2a: Perceived video intent — both informative and selling (measured IV) → recall of information, both product and health (DV), with the training video (manipulated moderator) moderating that link
H2
H2 (perceived informative intent → educational recall): Association: Partial — perceived informative intent predicted greater health recall, but ONLY among tweens who viewed the training video (b = 0.21, p < .05). Without training, no significant relationship.
Temporal order: Moderate. Perception was measured before recall, so sequence exists. But the IV is measured (not manipulated), which weakens the strength of any temporal claim.
Non-spuriousness: Weak. Same problem as H1 — perceived informative intent wasn't randomly assigned, so confounds (motivation, prior interest in health, engagement with the video) could shape both perception and recall.
Verdict: Causality only weakly supported, and only conditional on training. The schema activation mechanism remains theoretical — it's not directly tested.
H2
H2: Perceived informative intent (measured IV) → [theoretical mediator: educational schema activation] → health/educational recall (DV)
H3
H3 (sponsored video w/ disclosure → perceived selling intent): Association: Partial — tweens perceived greater selling intent in the sponsored vs. non-sponsored condition, but ONLY when they had first viewed the training video (b = 0.50, p < .01). Without training, no significant difference.
Temporal order: Strong. Sponsorship was a manipulated IV, randomly assigned, presented BEFORE perceived selling intent was measured.
Non-spuriousness: Strong. Random assignment to sponsorship condition controls for pre-existing differences. All participants saw the same video format with only the sponsorship disclosure varying.
Verdict: Causality conditionally supported — the sponsorship disclosure DOES affect perceived selling intent, but only in the presence of advertising literacy training.
H1
H1 (perceived selling intent → product recall): Association: Partial — there IS a statistical relationship, but it runs OPPOSITE to predicted. Higher perceived selling intent was associated with LOWER product recall, especially after training (p < .05). This contradicts H1 but is consistent with the Persuasion Knowledge Model (defensive processing).
Temporal order: Weak. Perceived selling intent was MEASURED, not manipulated. Both perception and recall are responses to the same video, so we can't strictly establish that perception came first in time.
Non-spuriousness: Weak. Because the core IV wasn't randomly assigned, third variables (advertising literacy, skepticism, working memory capacity) could explain both perception and recall.
Verdict: Causality NOT established. The key IV is measured, not manipulated, and the association ran opposite to predicted.
Measures
A few tips for working through these variable questions:

- On level of measurement: Don't just assume. Read the actual response options. For example, an income question that asks people to pick from ranges (under $50K, $50–99K, etc.) is ordinal, not interval/ratio, even though income itself is a continuous construct. Watch for this — students lose points on it constantly.
- On variable role (IV / DV / mediator / moderator / control): A variable can play DIFFERENT roles in different hypotheses. Don't pick one and call it good. Walk through each hypothesis (H1, H2, H3...) and ask: "in this specific hypothesis, what role does this variable play?" List them all, like "DV in H1, RQ2a; IV in H6b."
- On measurement validity/reliability: Look for Cronbach's alpha (α) for multi-item scales, and look for any mention of validation studies or pilot testing. If the authors are silent on this, note that — silence isn't always a problem (some scales are well-established), but it's worth flagging.
- On conceptual definition: This is the abstract, theoretical definition of the construct — usually in the introduction or literature review. It's NOT the operationalization (the specific items used to measure it). Look for a sentence that defines what the construct is, not how it's measured. Sometimes you'll need to read carefully — conceptual definitions can be embedded in the middle of paragraphs, not always called out explicitly.
b
RQ2b: Perceived video intent — both informative and selling (measured IV) → purchase intention (DV), with the training video (manipulated moderator) moderating that link
H2
Heads up before you tackle this one: the mediator in this hypothesis is theoretical, not measured. The authors don't have a direct survey item for "schema activation" — it's an internal cognitive process that's inferred from the pattern of results, not assessed directly.

So when you identify the mediator here, you're naming the theoretical mechanism the authors propose: schema activation (advertising schema for H1, educational schema for H2). The hypothesis says perceiving selling/informative intent should trigger the schema, which then directs attention to certain kinds of information, leading to better recall of that information.

In the path analysis the authors actually run, schema activation isn't a separate variable. The study tests the IV → DV link directly and uses schema theory to explain WHY that link exists. So the mediator is theoretical, not statistical.

This is a good lesson in how published research works. Authors often name theoretical moderators or mediators in their hypotheses but only test some of them directly with measured variables. Read carefully to figure out which constructs are measured vs. which are described conceptually as the "reason why."
In
Students sometimes confuse the "bigger research question" with the specific hypotheses. They're not the same.

The specific hypotheses are narrow, testable predictions: "tweens will perceive greater selling intent in a sponsored unboxing video compared to a non-sponsored video." That's H3. It's specific. It involves named variables. It can be supported or unsupported by data.

The bigger research question is the larger problem the study is trying to address — the reason any of those specific hypotheses are worth testing in the first place. It's the answer to "why bother?" It's what you'd tell your grandma if she asked what the study is about.

For this study, the bigger picture might be something like: Kids spend tons of time watching online videos, and a lot of that content is secretly trying to sell them stuff. We know how kids handle traditional ads, but unboxing videos and influencer content are a newer beast — they don't look like ads, even when they are. Tweens are at a developmental moment where they're capable of detecting commercial intent but might not do it automatically. So can we figure out (1) whether kids perceive sponsored content differently than objective content, and (2) whether brief media literacy training helps them apply skills they already have?

Notice what that does: it zooms out. It explains what's at stake (kids being manipulated by stealth advertising). It explains why this age group matters (developmental window). It explains why this study contributes (existing research focuses on traditional ads, not native digital content). And it doesn't get into the specific 6 hypotheses.

A good "bigger research question" answer should be a paragraph or so. Write it in your own words, not paraphrasing the abstract. Imagine explaining the point of the study to someone who's never read it.
H1
H1: Perceived selling intent (measured IV) → [theoretical mediator: advertising schema activation] → product recall (DV)
b
Once you've got the conceptual difference between mediators and moderators down, the next skill is spotting them in published studies — which is harder than it sounds, because authors don't always label their variables that way explicitly. Here are the textual cues to watch for:

- Moderator language: "the relationship between X and Y depends on Z," "Z moderated the effect of X on Y," "the effect of X on Y was stronger/weaker when…," "Z × X interaction," "for participants high in Z, the effect was…" Statistical tests for moderation usually involve interaction terms in regression or ANOVA.
- Mediator language: "X influences Y through Z," "Z explains the relationship between X and Y," "X has an indirect effect on Y via Z," "Z is the mechanism by which X affects Y." Statistical tests for mediation include path analysis (which this study uses), Baron and Kenny's steps, or bootstrapped indirect effects.

In this specific study, the authors are very clear about the training video as a moderator — they say so explicitly in the conceptual model in Figure 1, and they test it using "Wald statistics tested whether model paths varied significantly between children who saw the advertising training video and those who did not."

But they're also using perceived informative intent and perceived selling intent as mediators — variables that sit between the experimental condition (IV) and the outcomes (recall, purchase intention). The path analysis structure tells you this: the experimental condition affects perceived intent, which then affects the DVs. Perceived intent is the mechanism the authors think explains why sponsorship cues affect kids' responses.

Notice this study uses both at once. That's common in published research — moderators and mediators do different work in the same model.

Extra video on finding moderators and mediators in published studies: https://youtu.be/nNcrmLLR_Rc
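One way to make moderation concrete: fit the X → Y slope separately within each level of the moderator and compare. That "simple slopes" logic is behind the study's training vs. no-training comparisons (e.g., a slope that exists with training but not without). A toy sketch with made-up numbers, not the study's data:

```python
def ols_slope(xs, ys):
    """Slope of the least-squares line of y on x: cov(x, y) / var(x)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

x = [1, 2, 3, 4, 5]                        # perceived intent (hypothetical)
y_no_training = [3.0, 3.0, 3.0, 3.0, 3.0]  # flat: no X -> Y relationship
y_training = [1.2, 1.4, 1.6, 1.8, 2.0]     # positive: X predicts Y

b_no = ols_slope(x, y_no_training)   # slope of 0 in the no-training group
b_yes = ols_slope(x, y_training)     # positive slope in the training group
# The difference between the two slopes IS the moderation: the moderator
# changes the strength of the X -> Y relationship across its levels.
print(round(b_no, 2), round(b_yes, 2))
```

In a full regression this same comparison shows up as the X × Z interaction term; the group-by-group slopes are just an easier way to see what that term means.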
H1
Heads up before you tackle this one: the mediator in this hypothesis is theoretical, not measured. The authors don't have a direct survey item for "schema activation" — it's an internal cognitive process that's inferred from the pattern of results, not assessed directly.

So when you identify the mediator here, you're naming the theoretical mechanism the authors propose: schema activation (advertising schema for H1, educational schema for H2). The hypothesis says perceiving selling/informative intent should trigger the schema, which then directs attention to certain kinds of information, leading to better recall of that information.

In the path analysis the authors actually run, schema activation isn't a separate variable. The study tests the IV → DV link directly and uses schema theory to explain WHY that link exists. So the mediator is theoretical, not statistical.

This is a good lesson in how published research works. Authors often name theoretical moderators or mediators in their hypotheses but only test some of them directly with measured variables. Read carefully to figure out which constructs are measured vs. which are described conceptually as the "reason why."
Parents reported family demographic information, including race and ethnicity, house-hold income, and parent education.
These are in fact variables! Race/ethnicity, household income, and parental education are demographic variables collected from parents at the start of the survey. To figure out how they're being used, ask: does this variable show up in any of the hypotheses or RQs? Look back at H1–H6, RQ1, RQ2. None of them mention race/ethnicity. None mention income. None mention parental education. So these aren't IVs or DVs in any of the tested relationships. That tells you these are control variables (also called covariates). Researchers collect them for two main reasons:
- To describe the sample. That's what Table 1 is doing — telling you who actually participated so you can judge external validity.
- To statistically adjust for them in the analyses. This is when researchers want to make sure the effects they're seeing aren't driven by demographic differences across conditions. (In a true experiment with random assignment, demographics should be roughly equal across conditions anyway — but checking and statistically controlling is good practice.)
For each of these demographic variables, you'll want to identify:
- Operationalization: what specific question was asked, and what response options were given? (The article is sparse on this — for some demographics, you may have to infer from Table 1 what the response options must have been.)
- Level of measurement: race/ethnicity is nominal (categories with no order). Education is typically ordinal (less than bachelor's < bachelor's < graduate). Income — be careful here. If they reported it as ranges (under $50K, $50–99K, etc.), that's actually ordinal, not interval, even though the underlying construct is continuous.
- Use: control variable, not part of any hypothesis.
Extra video on finding IVs and DVs (and what's not an IV or DV): https://youtu.be/SlljkpUY4J4
Non-sponsored video
Quick refresher on what a manipulation check is and why it matters for this study.

A manipulation check is a measurement researchers add to verify that their IV manipulation actually worked the way they intended — that participants noticed it, understood it, or were affected by it. It's not the dependent variable. It's a check on the integrity of the experimental setup itself.

For example, in a study manipulating "fear appeals" in health messages, the researcher would want to verify that participants in the high-fear condition actually felt more afraid than participants in the low-fear condition. If the manipulation check shows no difference in self-reported fear between groups, the manipulation failed — and any downstream findings on the DV become hard to interpret. Maybe there's no effect because the IV doesn't matter. Or maybe there's no effect because the manipulation never actually worked.

For each of the experimental conditions in this study (non-sponsored, sponsored, sponsorship unaddressed, training video), ask yourself:
Did the authors verify that participants noticed the manipulation? If yes, what did they ask, and what did they find? If no, what could they have asked? And how does the absence of a check affect your confidence in the findings?
Be careful here — perceived selling intent and perceived informative intent are dependent variables in this study (and mediators in the path analysis), not manipulation checks, even though they look similar to what a manipulation check might measure. A true manipulation check would ask something like "was this video sponsored?" — a direct test of whether kids registered the sponsorship cue itself. Extra video on manipulation checks: https://youtu.be/0mr2K9Pji7k
The sample of parent-child dyads was recruited from an online participant panel and may not be fully representative of US tweens
This is the authors flagging an external validity concern, and it's worth pausing on because the trade-off it represents is one of the most important tensions in experimental research. Internal validity is about whether you can be confident the IV actually caused changes in the DV — whether you've ruled out alternative explanations. Tightly controlled experiments tend to have strong internal validity because you can rule out lots of confounds. External validity is about whether your findings generalize beyond the specific sample, setting, time, and stimuli of your study — whether what you found would hold up with other people, in other places, with other materials. These two often pull in opposite directions. The more controlled your experimental setting (which boosts internal validity), the less it resembles the real world (which hurts external validity). The more "ecologically valid" your setup (real environments, real materials, real behaviors), the harder it gets to rule out confounds. This study leans toward internal validity. The authors created controlled video stimuli, used random assignment, and standardized exposure through Qualtrics. That's good for causal inference. But it costs them on external validity:
- The sample skews toward higher-income, more-educated, mostly White families on an online research panel, not the full diversity of U.S. tweens.
- The stimuli were custom-made videos with an unknown actor, not real influencer content with hosts kids actually follow.
- The product was a water flosser, not the toys, cosmetics, or food products that dominate real youth-targeted unboxing content.
- The viewing happened in a survey environment with a parent nearby, not a kid alone on YouTube doomscrolling at 9pm.
None of this means the findings are wrong. It means we should be cautious about generalizing them. The next step in this research program would be replication with more diverse samples, real influencer content, and naturalistic viewing contexts. Extra video on internal vs. external validity: https://youtu.be/ehq62uzVAzM
contributors
Author credibility - remember that we Google authors and look for specific credibility markers. These are all indicators of credibility. When evaluating author credibility, look for: institutional affiliation, degree and where it's from, research focus alignment with the study topic, and publication record. You could also look them up on Google Scholar to see their other publications and citation counts - but again, on a quiz you'd just need to know WHERE you'd look and what you'd look for.
3 (sponsored; non-sponsored; sponsorship unaddressed cue) x 2 (advertising training; no advertising training) randomized experimental design
Let's unpack this notation, because you'll see it constantly in experimental research and it's worth being able to read at a glance. When researchers write "3 × 2," they're describing a factorial design — an experiment that manipulates more than one independent variable at the same time. The numbers tell you the levels:
- The first IV (sponsorship cue) has 3 levels: sponsored, non-sponsored, sponsorship unaddressed.
- The second IV (advertising training) has 2 levels: training, no training.
Multiply those together and you get 6 total conditions. Each tween is randomly assigned to exactly one of those 6 cells.
This is between-subjects, which means each kid only sees one combination — not all six. That's why factorial studies often need larger samples than single-IV experiments: you're filling 6 cells, not 2. Here's why a factorial design is more powerful than just running two separate experiments. It lets you test for interaction effects — does the effect of one IV depend on the level of the other? That's exactly what the authors are after here. They don't just want to know "does sponsorship disclosure matter?" and "does training matter?" separately. They want to know whether training changes how kids respond to the sponsorship cue. A factorial design is the only way to answer that question in a single study. You'll see this notation everywhere: a 2 × 2 has 4 conditions, a 2 × 3 has 6, a 3 × 4 has 12, and a 2 × 2 × 2 (three IVs) has 8. The pattern always holds. Extra video on factorial design: https://www.youtube.com/watch?v=r0tn9E0WPks. The Identifying Experiment Types infographic in Module 7 also walks through how to count factorial conditions.
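The condition-counting logic is easy to see in code. This is purely an illustration (not from the article), using the study's own factor labels for the 3 × 2 design:

```python
from itertools import product
import random

# Levels of each manipulated factor in the 3 x 2 design
sponsorship = ["sponsored", "non-sponsored", "sponsorship unaddressed"]
training = ["advertising training", "no advertising training"]

# Crossing the factors gives every cell of the factorial design
conditions = list(product(sponsorship, training))
print(len(conditions))  # 3 x 2 = 6 cells

# Between-subjects random assignment: each participant lands in exactly one cell
participant_condition = random.choice(conditions)

# The pattern generalizes: a 2 x 2 x 2 design (three IVs) has 8 cells
assert len(list(product([0, 1], [0, 1], [0, 1]))) == 8
```

The same crossing logic is why cell counts multiply: every level of one factor is paired with every level of the other.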
Materials and methods
Two related concepts here that students often blur together: cover stories and demand characteristics. A cover story is what the researchers tell participants the study is about — and it's often not the real research question. Why? Because if you tell a tween "we're studying whether you can detect sponsored content," they're going to perform — they'll scan for sponsorship cues much more carefully than they would normally, and you'll get inflated detection rates that don't reflect real-world behavior. Demand characteristics are the problem cover stories try to solve. They're cues in the experimental setup that tip participants off to what the researcher expects, leading participants to (consciously or unconsciously) provide responses that match those expectations rather than their genuine reactions. Now look closely at this study. Ask yourself:
- What did the kids think the study was about? (Look at the assent language and the procedures.)
- Were the videos presented as real YouTube content? (Yes: the authors note they streamed videos "from a private YouTube channel, in order to boost children's beliefs that these were real videos on the YouTube platform.") That framing is a deceptive element built into the stimulus, not just the study description.
- Did the authors explicitly mention using a cover story? Or were they relatively transparent about the topic?
This study sits in an interesting middle ground: there's no elaborate cover story about the study purpose, but there is mild deception about the authenticity of the video stimuli. Could the authors have run this study without any deception? What would that have cost in terms of internal validity? Extra video on cover stories: https://youtu.be/DHfvtcMvKeA Extra video on demand characteristics: https://youtu.be/dSFIAoTKb0o
Limitations
Extra video on finding limitations: https://www.youtube.com/watch?v=A4CzgLntcOc
Discussion
Extra Video: Looking for "Future Directions" in a Discussion Section https://youtu.be/2mTb6ydK3I4
content
Extra Video: Summarizing the Findings of an Empirical Study https://youtu.be/9UbLjwJRVNM
Discussion
On the quizzes, you'll need to find opportunities for future research that the author(s) discusses. Remember that these are ideas that the author includes, sometimes tied to limitations, for what could be done next or differently to further expand our understanding of this area.
Practical implications
This is an example of translational research. This is a requirement for this journal, but it is a cool thing that they do. They put it under the abstract on the main webpage so that anyone can read it, even without full access to the article.
Discussion
Extra video on contextualizing findings: https://youtu.be/UT61MWs5M4w
α
Extra video on Cronbach's alpha: https://www.youtube.com/watch?v=5gA2-MtWr8I
sent to panelists meeting study eligibility criteria (i.e., US parent of a child between 8–13)
Who was studied: U.S. tweens between the ages of 8 and 13. Eligibility criteria were that participants had to be U.S. parents of a child in this age range — but notice the study is actually about the kids, not the parents. Parents were recruited as the gateway because the kids couldn't legally consent on their own. Out of 298 parents who initially clicked the survey link, 251 parent-child dyads completed the study (3 children declined assent, 13 children quit early, and 31 parents didn't complete the consent process). Notice the dropout funnel — that screening matters for the eventual sample. Let's break down the three layers:
- Population of interest: U.S. tweens ages 8–13 (especially those who watch online video content like unboxing videos).
- Sampling frame: active members of Dynata's U.S. online research panel who happen to be parents of tweens. Dynata is a research panel where people sign up to take surveys in exchange for points convertible to cash, travel miles, or prizes.
- Sample: the 251 parent-child dyads who actually completed the study.
Each step narrows the pool and potentially introduces bias. The population is all U.S. tweens; the sampling frame is parents who happen to be Dynata panelists (people who voluntarily sign up to take paid surveys); the sample is the subset of those whose kids agreed and completed the study. The further the sampling frame is from the actual population, the more bias creeps in — and Dynata panelists skew in particular demographic ways (whiter, more educated, higher-income than U.S. parents overall, as Table 1 shows). What's the sampling method? The authors don't explicitly name it, but this is a convenience sample (a type of non-probability sampling). They recruited from an existing pool of available panelists — not a random sample of all U.S. parents. You could also argue it has elements of purposive sampling because they specifically targeted parents of tweens (a defined characteristic), but the underlying recruitment is fundamentally convenience-based. Advantages of this approach: fast, inexpensive, access to a large pool, can screen for specific characteristics (like having a child in the right age range), can target by geography (U.S.-only). Disadvantages: not representative of all U.S. parents/tweens, Dynata demographics skew in particular ways (which the authors acknowledge in their limitations), self-selection bias (people who join paid research panels aren't random), and people who use Dynata regularly may not be typical parents. This matters for generalizability — findings may not apply to all U.S. tweens, especially those whose parents wouldn't sign up for an online survey panel. Extra video on figuring out the sampling method: https://www.youtube.com/watch?v=rstREj9jZdg
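The dropout funnel described above is simple arithmetic, and it's worth checking that the numbers reported in the Methods actually reconcile:

```python
# Recruitment funnel reported in the Methods section
clicked = 298              # parents who initially clicked the survey link
parents_no_consent = 31    # parents who didn't complete the consent process
children_declined = 3      # children who declined assent
children_quit = 13         # children who quit the survey early

completed_dyads = clicked - parents_no_consent - children_declined - children_quit
print(completed_dyads)  # 251 parent-child dyads, matching the reported sample
```

They do reconcile here, which is a small but real credibility check you can run on any study's participant flow.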
After checking reliability (α = .82)
This is reporting the internal consistency reliability of the perceived informative intent scale. Cronbach's alpha (α) is what researchers use to check whether a multi-item scale is reliable — basically, do the items "hang together" the way they should if they're all measuring the same underlying construct? Alpha ranges from 0 to 1. Conventional thresholds:
- α ≥ .70 = acceptable
- α ≥ .80 = good
- α ≥ .90 = excellent
An α of .82 here means the three perceived informative intent items are reliably measuring the same construct. If alpha had been low (.40, say), it would mean the items aren't really tapping a single thing; they'd need to be split into separate measures or revised. You'd expect Cronbach's alpha for the perceived selling intent scale too, except wait — only TWO items measure selling intent, and the authors report a correlation (r = 0.53) instead of alpha. Why? With only two items, alpha adds little beyond the inter-item correlation, so reporting the correlation conveys the same information more cleanly.
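If you want to see what alpha is actually computing, here's a minimal sketch of the standard formula, α = k/(k−1) × (1 − Σ item variances / variance of total scores), run on made-up toy data (not the study's data):

```python
def cronbach_alpha(items):
    """items: one list of scores per scale item, all the same length."""
    k = len(items)                       # number of items in the scale
    n = len(items[0])                    # number of respondents

    def variance(xs):                    # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across the k items
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Toy data: three items that clearly "hang together" -> high alpha
item1 = [1, 2, 3, 4, 5]
item2 = [1, 2, 3, 4, 5]
item3 = [2, 2, 3, 4, 4]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # 0.97
```

Intuition: when items rise and fall together, the variance of the total scores swamps the summed item variances, pushing alpha toward 1.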
251 parent-child dyads.
Is 251 enough for this study? Let's think it through. In a factorial design like this one, what matters isn't just the total N — it's the N per cell. With a 3 × 2 design and 251 participants, you're roughly looking at 251 / 6 ≈ 42 tweens per condition. (The actual breakdown is a little uneven, but that's the ballpark.) A common rule of thumb for between-subjects experiments is at least 30 participants per cell to get reasonably stable estimates, and more is better. So 42 per cell is decent — not generous, but workable. The authors were able to detect statistically significant effects in some of their hypotheses, which suggests they had enough power for the strongest relationships. But here's where it gets interesting: many of the hypotheses in this study were not supported (H1, H4, H6b were unsupported; H3, H5 were partially supported). When hypotheses fail, you have to ask: was there really no effect, or did the study just lack the statistical power to detect a small effect? With ~42 per cell, this study could detect medium-to-large effects but might miss small ones. That's a limitation worth thinking about when you read the discussion. Larger samples give you more power to detect smaller effects — but they can also make trivially small effects reach statistical significance. Sample size always involves a trade-off, and there's no single "right" number. It depends on your design, the effect size you expect, and what you're willing to risk missing. Extra video on sample size in experiments: https://www.youtube.com/watch?v=v-dyn6tO5dQ
Dynata’s US online panel
Extra video on figuring out the sampling method: https://www.youtube.com/watch?v=rstREj9jZdg
methods
What research paradigm does this study align with? This is a post-positivist study.
Evidence:
- Hypothesis testing. The study advances 6 specific, falsifiable hypotheses (H1–H6, with sub-hypotheses) plus 2 research questions, all stated upfront before data collection. Findings are evaluated against these predictions, with hypotheses described as "supported," "partially supported," or "unsupported" — that's the language of post-positivism.
- Quantification. All variables are measured numerically — Likert scales for perceived intent (1–5), correct/incorrect for recall items (0/1), summed scores for total recall (0–2), and a single rating for purchase intention (1–5). Statistical tests (path analysis, Wald tests, p-values) determine whether relationships exist.
- Experimental control and random assignment. The authors manipulated two independent variables and used Qualtrics to randomly assign participants to conditions — both classic post-positivist tools for isolating cause-and-effect relationships.
- Theoretical frameworks that predict relationships. The authors draw on schema theory and the Persuasion Knowledge Model to GENERATE testable predictions. Theory is used deductively (theory → hypothesis → test), not built up inductively from observations.
- Search for generalizable patterns. The authors are interested in how tweens in general respond to unboxing videos — not how specific individuals interpret them. Findings are reported as group-level statistics (means, regression coefficients), and the limitations focus on whether the sample is representative enough to generalize from.
- Researcher distance from participants. Data collection happened through an online survey panel with no direct researcher-participant interaction. The setup deliberately minimizes the researcher's influence on responses — another post-positivist signature.
- Acknowledgment of uncertainty. Note that this is POST-positivist, not strict positivism.
The authors hedge appropriately — they discuss limitations, acknowledge that the manipulation might have worked through different mechanisms than predicted, and note ceiling effects and social desirability bias. They're seeking objective truth but acknowledge they can only approach it imperfectly.
methods
On the quizzes, you'll be asked to find when the author(s) discuss ethical considerations explicitly and implicitly. For implicit, for example, authors may not mention some ethical considerations because it is obvious to other researchers that common ethical considerations were taken. In each annotation, use the hashtag #ethicalconsideration and explain what the author(s) did and why, using course concepts related to research ethics. These will be in the methods section and possibly in the discussion/conclusion section. AND because there are limited ethical discussions in this particular study, please reply to this annotation with some suggestions about what could have or should have been done, ethically.
H1
Extra video on finding IVs and DVs: https://youtu.be/SlljkpUY4J4
In
Extra video on finding the bigger goal in a study https://www.youtube.com/watch?v=9e-pcABSu2Q
Wald test
A Wald test asks whether two estimates differ reliably from each other. The authors use it here to test moderation: is the effect of perceived intent on the outcome significantly different in the training condition vs. the no-training condition? If the Wald test is p < .05, the moderation is real. If it isn't, the apparent gap between conditions could just be noise. You don't need to compute it - just recognize that when authors report a Wald test in a moderation analysis, they're testing whether two coefficients differ, not whether either one is significant on its own.
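A common form of this comparison, for two independent coefficient estimates, is z = (b1 − b2) / √(se1² + se2²). Here's a sketch with made-up numbers (these are not the article's estimates):

```python
import math

def wald_difference(b1, se1, b2, se2):
    """Wald z-test for H0: b1 == b2, assuming independent estimates."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal distribution
    return z, p

# Hypothetical: effect of perceived selling intent on purchase intention,
# estimated separately in the training vs. no-training condition
z, p = wald_difference(b1=0.50, se1=0.10, b2=0.20, se2=0.10)
print(round(z, 2), round(p, 3))  # z = 2.12, p = 0.034 -> the two slopes differ
```

Note what's being tested: not whether either coefficient differs from zero, but whether the two coefficients differ from each other — that's what makes it a moderation test.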
U.S.
Article: Vaala, S. E., Mauceri, F., & Connelly, O. (2024). U.S. tweens' reactions to unboxing videos: Effects of sponsorship disclosure and advertising training. Journal of Children and Media, 18(2), 272–292. https://doi.org/10.1080/17482798.2024.2338541
Purpose: This annotated example is designed to help you prepare for the in-class experiment study identification quiz. Read through the article and these annotations carefully. The annotations explain what to notice and why — the quiz will ask you to do similar analysis on a different experiment study.
How to use this: Read the article section by section. When you encounter a highlighted passage, read the annotation. The annotations teach you what to notice and why it matters. This is NOT a graded assignment — it's a learning tool. When you take the in-class quiz on a different experiment study, you'll need to:
- Identify the bigger research question and why the study matters
- Identify hypotheses/RQs with their IVs, DVs, mediators, and moderators
- Identify the sampling method and its strengths/weaknesses
- Identify how participants were assigned to conditions, and the strengths/weaknesses of that approach (this is separate from sampling!)
- Identify the type of experiment — true, quasi, or pre-experiment; lab, field, or natural; posttest-only, pretest-posttest, or time series; between-subjects or within-subjects; factorial or single-factor
- Find and annotate ethical considerations, including those specific to research with minors (parental consent, child assent, IRB scrutiny of special populations)
- Analyze variables: conceptual definition, operationalization, level of measurement, role in hypotheses, reliability, and whether each was manipulated or measured
- Identify the research paradigm with supporting evidence
- Find limitations and future research directions
- Find places where authors connect their findings to previous research
- Evaluate author and journal credibility
- Apply experiment-specific concepts: manipulation checks, attention checks, cover stories, deception, demand characteristics, threats to internal validity, and the trade-off between internal and external validity
- For each hypothesis, evaluate how the authors attempted to establish causality (association, temporal order, non-spuriousness) and whether they succeeded — paying particular attention to whether the IV was manipulated or measured
Remember to use your strategic reading skills: skim the abstract and Impact Summary first, map the structure using headings, then read the Methods and Results carefully. The introduction/theory can be skimmed for main ideas. The Discussion needs careful reading for limitations, future research, and connections to previous work.
YouTube
Practice your strategic reading here. Before reading this article front-to-back, try this: Read the abstract (1 min). Skim the headings to see the structure (30 sec). Jump to the Methods - figure out: who was studied, how, and what was measured (5 min). Then read the Discussion for findings and limitations (5 min). THEN go back and read the Introduction/Lit Review for theory. This is more efficient than reading linearly, and it's the approach you should use on the quiz.
ARTICLE HISTORY
Here's a real example of what the peer review process looks like in practice. Let's trace the timeline for this article:
- March 27, 2023: The authors submitted the manuscript to the journal.
- Somewhere between March 2023 and March 2024: The editor sent it to peer reviewers (likely 2–3 experts in children's media, advertising literacy, or developmental psychology). Those reviewers read the full study and wrote detailed feedback — pointing out weaknesses, suggesting additional analyses, questioning interpretations, recommending additional literature, etc. The authors then revised the manuscript in response to that feedback.
- March 27, 2024: The revised manuscript was resubmitted (exactly one year after the original submission, which is on the longer end of the spectrum — it suggests substantial revisions).
- March 31, 2024: Accepted, just four days after the revision. That fast turnaround tells us the editor was satisfied that the revisions addressed reviewers' concerns and didn't need to send it back for another round.
So the total process from submission to acceptance was about a year. This is fairly typical for communication journals — some are faster, some take over a year. The key point: this study went through expert scrutiny before it was published. That's what distinguishes peer-reviewed research from a blog post, a news article, or a preprint. On the quiz, you may or may not have these dates for every study, but when you DO see them, they tell you something about the rigor of the review process.
0101100450082 · COEBLALA
add the three account options, I will send account numbers for Baht and Lak
Baht: 0101100450073 · Lak: 0101100450064
Stage 01: Signing of this Agreement — mobilisation of design team · Stage 02: Conceptual design sign-off by Client · Stage 03: Delivery of full engineered drawing set
is this a date issue or a money issue?
Three design milestones.
Changed
OTP auto-verified (preview).
test this as well
By signing here you affirm authority to sign for Cameron Stovold & Sunny Stovold. No signature yet.
Sam, can we test to make sure it works efficiently?
Stage | Milestone | Amount (USD)
01 | Signing of this Agreement (mobilisation of design team) | $1,200.00
02 | Conceptual design sign-off by Client | $900.00
03 | Delivery of full engineered drawing set | $900.00
— | Total | $3,000.00
One fee of 3000, eliminate the three draws
Conversion events tagged consistently so the funnel is visible end to end.
Yay !
I design the website forms and capture flows to drop cleanly into whatever CRM you stand up.
Cool.
One platform for all three sites. Currently split between Power Diary (personal) and Zanda Health (MHFA). I recommend consolidating on Zanda Health: its group-booking, course-iteration and class-capacity features map better to your MHFA delivery model, and clinical 1:1 booking is a feature subset Zanda handles cleanly. If there’s a strong reason to keep Power Diary, I’ll review with you in discovery.
Only the individual counselling/supervision services use Zanda. It likely needs to remain separate from the corporate business. Reasons:
- Medicare and AASW (Australian Assoc. of Social Workers) accreditations demand a dedicated, locked system; security/privacy regulations apply, and Leigh is super sensitive to this aspect.
- Leigh currently uses the invoicing/payment features of Zanda + Tyro, while the corporate business MHS&T migrated to Xero 12 months ago for corporate invoices. Meanwhile MHS&T training bookings are via WIX with a Stripe payment gateway on the backend. We presently have only 1x CBA business account (source of all financial truth).
- In the long-term future the Counselling, MHS&T, or Training business may each be split/sold in part, an option made easier by maintaining separated database assets; if this occurred, online assets would need review.
Ultimately am open to update/merge booking/payment systems where we can, this would require a deeper level of review/assessment.
They become the spine of the IA
This is my intention, however occasional tinkering by others waters down the effect.
IA = ?
pricing
we are overdue a look at our services price list
The eventual National MHFA Referral programme uses the storefront’s lead volume to feed paid referrals, which gives you another revenue line on top of the funnel.
We will likely look at eventually promoting training courses in other states/cities and send Leigh to run them, while simultaneously recruiting a stable of locally-based MHFA trainers in each place.
annual programme
annual contract to provide the full suite of training and support services.
MHFA
"MH Training Storefront"
different email sequence
Yep yep yep
clinicians
unclear what 'clinicians' this relates to; do you mean GPs and psychiatrists?
MHFA storefront
Let's re-title this the Training Storefront, so as to capture all variations of face-to-face training (MHFA or other brands), bespoke workshops, online training, etc.
Aboriginal & Torres Strait Islander MHFA
the ATSI training course may be on hold if our specialist trainer is not available - enquiries still welcome
MHFA
Change to: Book mental health related training courses and workshops, or generate leads for bespoke corporate training enquiries.
Prevention · Support · Supervision
note my email logo is outdated, the site logo was changed last year to "SUPPORT - SUPERVISION - TRAINING"
supervision contracts
Service Agreements for ongoing provision of services to each corporate client.
boutique EAP retainers
one day down the track ;-)
construction
construction, mining and energy industry
insurance
insurance and finance service...
insurance
Replace with Corporate
Customers often remain loyal to a brand because of its appearance, functionality, or price. Companies want loyal customers like these and will go to great lengths to keep them.
Loyalty can also span generations. Some people will solely utilize the brands their parents or grandparents were loyal to and not think twice about it.
including a reputation for treating customers and employees fairly and for engaging in business honestly.
Employee retention is a very important stat for businesses in my opinion. Maintaining a positive work environment can go a long way
It may also mean treating our employees, customers, and clients with honesty and respect.
I think many businesses undervalue this aspect of business. The term "internal customer" was brought up at one of the first company meetings I was in after graduation, and that term has stuck with me since. Retaining employees is growing more and more difficult as job hopping increases in popularity.
However, there are common threads that run through it, and the Framework focuses on those, again with the goal of serving as a useful aid that is relatively easy to apply.
It's interesting to think about how different cultures' ethics can vary. I remember one of my professors telling me in college that there are cultures that view handshakes as disrespectful. She taught me how important it is to be aware of key differences when meeting with people from other cultures.
(b) If MLH is not appointed as the construction contractor: The Drawings remain the sole and exclusive property of MLH. The Client receives a non-transferable, single-use licence to use the Drawings to construct the residence on the Site only, as set out in Section 7.2. The Drawings may not be reused on any other site, project, or by any other party.
if the client has paid in full and they choose a different contractor, the design becomes their property without liability of any type to MLH, past, present or future
7.1(b) and 7.2 cover my above point, so no rewriting is required? Your thoughts?
Additional revisions beyond the included three shall be charged at USD 150.00 per round, payable in advance of work commencing on that round.
Sam Question
how do we make this clause as unobtrusive as possible while keeping it available in the event we have a difficult client where changes are happening continuously? This clause is really for our protection. Your thoughts?
Stage | Milestone | Amount (USD)
1 | Signing of this Agreement (mobilisation of design team) | $1,200.00
2 | Conceptual design sign-off by Client | $900.00
3 | Delivery of full engineered drawing set | $900.00
 | Total | $3,000.00
Eliminate the stages, one payment
Site layout including driveway, fencing, gate, and landscaping zones (basic)
add a bullet,
The footprint of the home is based on a per-square-metre price. E.g. if your design is 120 m2 with a second storey of the same size, your buildable size is 240 m2. Driveways, gates, fencing, landscaping and car parking are priced separately.
However, while some people have highly developed habits that make them feel bad when they do something wrong, others feel good even though they are doing something wrong.
It's interesting how differently people can react to the same thing. I wonder what the main causes of this are: is it primarily based on the environment they grew up in, or something else?
illusion
I believe an appropriate substitute for this term would be misunderstanding. I think that social media and news outlets try to hide certain key aspects in reports, which causes drastic misunderstandings when people don't dive deeper into them.
That terminal carbon atom (shown here in blue) is called the omega carbon atom. Thus a monounsaturated fatty acid with its single double bond after carbon #3 (counting from and including the omega carbon)
Why is this blue carbon called omega when the 3rd carbon position is also called omega? It seems more reasonable to me that the blue carbon would be called alpha, and then the 3rd carbon would be the omega one, thus making its bond an omega bond.
call with Gary,
or Sam, our managing directors.
contract
and will receive a credit of USD 1,200 toward the construction contract.
If you decide to engage MLH for design, we sign a Design Contract. Six weeks of work: conceptual design, three rounds of revisions, full architectural and engineering drawings, and 3D rendered visualisations of your home before any construction begins.
add: You have indicated that you each have design ideas and would like two separate designs included, with the intent that upon reaching final design approval MLH provides the completed technical and working drawings for the design you choose.
If you have land in mind, we visit it together. We assess access, orientation, soil indicators, and any setback or zoning constraints. You leave the visit with an informed view on whether the plot suits your brief.
add: we have the plot plan and dimensions, but we have not visited the site as of this date.
Thirty minutes, by video call or in person if you are in Vientiane. Gary, our Managing Director, will walk through your initial questions, hear what you have in mind, and tell you honestly whether MLH is the right fit. No fee.
Gary completed initial calls with both Sunny and Cameron in early April.
These figures are construction-only. Land, government fees, soil testing, topographical survey, and pool engineering (where applicable) are quoted separately.
Add,
Note: the sizes indicated are general; we have built 80 m² structures with a beautiful 40 m² deck, but averages fall around this 160 m² range.
Final pricing is set only after design contract and site assessment. The ranges above are real — they reflect homes we have built. We will not quote you a number we cannot honour.
Add: we want the design and costs to fit your budget.
– 1,000
USD 750+ / m², depending on your desires
50
700
USD 400 – 550 / m²
USD 480 – 550 / m²
into
and note that these are general numbers for discussion purposes only
or around Vientiane.
in the village of Don Dok, Vientiane.
You have design ideas in mind. You have a sense of what you want to achieve, but are not yet sure of the square metres required or the realistic price range
, and from our preliminary discussions we know you want a small pool and, if possible, a separate mother-in-law suite. We will work through all details in the design process
You are currently looking at land. The plot is not yet finalised, which gives us time to plan size and footprint before purchase.
You have selected lot D2 and indicated you have made a deposit on that lot; you will be completing the purchase contract shortly on your next return to Laos, now expected in a week or two.
Here individuals are viewed not in isolation, but as members of communities that are partially responsible for the behavior of their members
This reminds me of what my college coaches used to preach. They talked about how, when you're wearing "our brand," you're representing more than yourself: you're representing our brand and our community.
American cultural traditions, in fact, reinforce the individual who thinks that she should not have to contribute to the community's common good, but should be left free to pursue her own personal ends.
I feel as though replying with "what's in it for me?" when asked to do something is becoming more and more common. Unfortunately, it seems people are starting to look past the greater-good perspective.
At home, no matter how much I contribute, my parents always took it for granted as I am supposed to be filial … but here the elderly say thank you, they respect me, and they don’t try to control my life, to have a filial heart is much easier here.
Highlighting quote that reinforces a main point: Filial piety doesn't depend on familial relations.
One key distinction is that, with the elderly they assist, they don’t feel burdened by a sense of indebtedness, unlike the perceived obligation they experience with their own parents.
I wonder if this is an example of psychological reactance.
being stingy with the time spent visiting one’s parents shows a lack of ‘filial heart’.
Why is filial piety being judged by circumstantial conditions? Also, drawing back to the challenge of technology, I wonder if care workers see video chatting or messaging online as spending time.
Such a seemingly double standard in defining the ‘filial heart’ highlights its fluid, context-dependent nature, shaped by varying economic and social circumstances
Highlighting the fact that the expression of filial piety is incredibly nuanced.
Grandma Li’s son works in America … She is so proud of him …
Makes me wonder why generational advancement via opportunities doesn't count as a filial action to her.
As such, the sense of ‘limitless indebtedness’ towards parents has diminished, replaced by a focus on mutual gratitude and support, creating a two-way exchange of care
Example of power dynamics changing within a culture, moving toward a more egalitarian outlook on the "filial piety" value.
the disparities between rural and urban life that had become ingrained in her during her years in the city.
Exemplifies cultures within cultures. Danlu seemingly assimilated into a "city" culture during her time there, but after returning, she may be considered an outsider to the "rural" culture.
Retributive justice refers to the extent to which punishments are fair and just
Location is a big factor in this. States have varying laws around just about everything. For example, someone who commits a crime in one state could suffer a far greater punishment than if they were to commit the same crime in another state.
How would the action affect the basic well-being of those individuals?
I feel as though this ideology is becoming harder and harder to find, with so many individuals wishing for fame and popularity.
The right to privacy, for example, imposes on us the duty not to intrude into the private activities of a person.
With all the recent technological advancements, I wonder how they have impacted this. News outlets typically report that all our moves, clicks, and purchases are tracked and used for targeted advertising.
First, the utilitarian calculation requires that we assign values to the benefits and harms resulting from our actions and compare them with the benefits and harms that might result from other action
Each benefit and harm is subjective, depending on the values of the individual making the decision. What one person deems extremely beneficial another could deem harmful.
n for 80% power
What about other powers?
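The annotation's question about other power levels can be explored with a quick calculation. A minimal sketch, assuming a two-sided two-sample z-test at α = 0.05 with a standardized effect size d = 0.5 (both assumptions for illustration, not figures from the article), using the standard normal-approximation sample-size formula:

```python
import math

def norm_ppf(p):
    """Inverse standard-normal CDF via bisection on math.erf."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def n_per_group(power, d=0.5, alpha=0.05):
    """Normal-approximation sample size per group for a two-sided
    two-sample test: n = 2 * (z_{1-alpha/2} + z_{power})**2 / d**2."""
    z_a = norm_ppf(1.0 - alpha / 2.0)
    z_b = norm_ppf(power)
    return math.ceil(2.0 * (z_a + z_b) ** 2 / d ** 2)

# Required n per group grows steeply with the desired power,
# e.g. 80% power -> 63 per group under these assumptions.
for p in (0.80, 0.90, 0.95):
    print(p, n_per_group(p))
```

So, under these illustrative assumptions, moving from 80% to 95% power roughly 1.7×'s the required sample size, which is why 80% is such a common compromise.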
FDR Practitioner Register status, what “Club Awesome” actually is, The Working Mind licensing timeline, current subcontractor roster, and which insurers actively send referrals.
FDR Practitioner Register status: Leigh has qualifications in FDR but is not a registered practitioner and is not, strictly speaking, attracted to this type of client work; however, the qualifications and earlier work in this realm feed into her current practice.
“Club Awesome” is Leigh's 'club' concept of a gathering of clients around her, with shared attitudes toward positivity and self-development - special client member-only access to content, newsletters, workshops etc.
The Working Mind licensing timeline: MHFA Australia are presently training more trainers like Leigh and are talking about a primary launch at the end of this year or early 2027. Their video content etc. is Canadian and is being Australianised. Importantly, MHFA Australia have advised that we can soft-launch and run training sessions now, but need to keep any public marketing on the down low.
Current subcontractor roster: we have had a local woman delivering Aboriginal MHFA courses, but she has recently talked about moving away from this. Other MHFA trainers are widely approachable.
Which insurers actively send referrals? We have an Insurance industry client who signed our 15-page service agreement. Services include:
- MHFA training courses
- Corporate Workshops
- Consulting on Psycho-Social compliance
- Staff individual Counselling (paid by client, no Medicare rebate)
- Staff individual Professional Supervision
- Annual Corporate Event speaking etc.
draw the lines between them sharply
Excellent pickup
lived-experience
remove any thought/reference to Leigh as a "lived-experience" service provider; these people are often untrained and, while good for speaker roles, are not qualified as mental health 'clinicians' running a 'clinical practice'.
CRM
Relationships require commitment and resources. CRM is not a priority; however, segmentable outbound email/SMS communications and a sales/opportunity funnel-management tool are attractive, hence Zoho.
Power Diary on the personal site, Zanda Health
Power Diary appears to have been taken over/re-branded by Zanda Health.
build EAP infrastructure on top of it
Not a focus; however, with the right 'chemistry' of counsellors working together, over time, a boutique level of EAP service would likely become marketable.
invisible
True dat
15-pax training plus ~4 counsellor rooms in the CBD.
15-pax group training tables, up to 40-pax lecture capacity. ~2 formal counsellor rooms (capacity for 4 counsellors) located in the CBD, 400m from Central Station.
Speaking: a smaller current line, but one you’ve told me you’d like to grow.
Public Speaking: ongoing positioning of Leigh as a thought leader on organisational responses to individuals and their mental health amid the challenges of the modern corporate environment.
Disability sector training and placements for NDIS providers, both staff and clients.
Disability sector staff MHFA training and on-site counselling for NDIS participants and their support team members.
Schools training.
(this is just one sector of clientele especially attracted to MHFA training/courses, not meaningful to mention here)
Critical incident response, construction sites especially, post-suicide / accidental death / injury.
Critical Incident responses, supporting workgroups and individuals in the aftermath of traumatic events, especially post-suicide / accidental death / injury.
Corporate workshops in the Brené Brown vein: vulnerability, communication, breaking down team barriers.
Bespoke Corporate Workshops that develop individual performance within team settings, where Leigh's professional expertise, based on decades of experience, is offered at a premium.
A six-hour corporate executive course, workplace MHFA-adjacent. The Working Mind, when licensing lets you market it.
The Working Mind - A six-hour corporate executive course, workplace MHFA-adjacent. First-mover as accredited trainer, soon to be rolled-out nation-wide.
Insurance company supervision for staff in claims-heavy roles
Insurance sector support, including MHFA training, professional supervision, and Critical Incident responses for staff working across high-stress environments.
Professional supervision for social workers, psychologists, allied health.
Professional Supervision for human services workers, members of the public service and education sectors, and private-sector managers.
at the Collective Wellbeing Hub and online
Collective Wellbeing Hub is defunct since 2023, was a failed partnership between Leigh and another school mum, lots of bitterness, still healing, best to avoid all reference!
post-incident response on construction sites
support and responses to Critical Incidents
in the vein of Brené Brown
executive workshops where your professional expertise attracts a premium (Leigh is sensitive about/critical of Brené)
l
I believe this should be uppercase 'L'
M=1
I believe this should be M_L = 1 or M = 2.
M=1,
I believe this should be M_L = 0 or M = 2.
l
I believe this should be uppercase 'L'
h l v
I believe this should be uppercase 'L'
f l
I believe this should be uppercase 'L'
(l)
I believe this should be uppercase 'L'
l is
I believe this should be uppercase 'L'
González-Bustamante, B., & Olivares, A. (2016). Cambios de gabinete y supervivencia de los ministros en Chile durante los gobiernos de la Concertación (1990-2010). Colombia internacional, (87), 81-108.
Citation for work 1
URL slug
URL slug rules: * Only a fragment * Key trigger words only, separated by hyphens * Only change slugs when the article is not live; then you can republish (changing a live slug may break links in beacon)
Title rules
Title Rules:
Start it with an action word that describes the action the client would be doing, or would search for how to do.
Never match the title of a previous article.
Content principles
Content Guidelines:
Structure
Structure:
personal
Put your personal touch on the saved replies so they sound more genuine, authentic, and personalized to the client.
mood
Tone: * Mirror the mood of the client and try to match their emotion * Do as much as you can to resolve the issue; do all the heavy lifting for the client * Switch it up so you don't sound repetitive and monotone.
Browser tools are different from adding elements to AI chat. Element selection lets you manually pick page elements as context for a chat prompt. Browser tools let agents autonomously interact with web pages to complete tasks.
I took a look at the article about trolling slang, and I thought it was interesting that this source explains how the meaning of "troll" has changed over time. Originally, trolling online was sometimes seen as more of an inside joke or prank, but now it is often connected to harassment and cyberbullying. I found it interesting how its severity has reached new levels more recently. I also found it surprising how the article connected trolling to psychology and online anonymity, because people often act differently online when they feel anonymous.
While ’98 was the top season of McGwire’s career, Sosa would go on to have a much better — dare I even say all-time great caliber? — year in 2001, with 10.1 WAR. But both players were extremely productive during the period in and around 1998.
Noting that Sammy Sosa's best season was 2001 with 10.1 WAR, much better than his more famous 6.8 WAR season in 1998.
I am advocating for writers to prevent themselves from becoming AI.
Encouraging book reviewers to bring some originality to their reviews.
In what would be his final postgame press conference, he chided critical fans by comparing them to passengers who fled the Titanic, a literal sinking ship.
On a poor analogy by former Louisville basketball coach Kenny Payne.
While the Ketogenic Diet (KD) has emerged as a potential therapeutic strategy for glioma (the most common neuroepithelial brain tumor), its underlying mechanisms have remained elusive. This study investigates the "gut-brain axis"—specifically the "microbiota-SCFAs-microglia" signaling pathway—to determine how gut microbiota and microbial metabolites mediate KD’s anti-glioma effects.
This research delineates a novel neuro-immune-metabolic mechanism where KD exerts its anti-cancer efficacy by modulating the gut microbiome. The findings strongly suggest that microbiome-targeted interventions—whether through strict dietary regimens like KD to enrich A. muciniphila, direct probiotic supplementation of R. faecis, or exogenous administration of butyrate—represent highly promising and actionable strategies for personalized glioma therapy.
Zeisel et al. (2018), published in Cell, presents a comprehensive transcriptomic census of the adolescent mouse nervous system. By analyzing approximately 500,000 single cells, the researchers established a high-resolution molecular atlas and a data-driven taxonomy for the mammalian nervous system.
To manage the scale and complexity of the data, the authors developed Cytograph, an automated analysis pipeline:
The study organized the nervous system into a hierarchy based on three interacting principles:
The researchers identified four primary categories of genes that distinguish neuronal types:
Conclusion: This resource provides a foundational map for understanding the molecular logic of the brain. The full dataset, taxonomy, and "report cards" for each cell type are interactively available at mousebrain.org.
These markers were used to distinguish the GL261-GSC tumor cells from the healthy brain environment and define their stemness or malignancy.
The study highlights these markers to prove the model's ability to simulate how GBM integrates into the host brain's neural circuitry.
Kainate type: Grik2.
Neuron-Glioma Synapsis Mediators: Dlg4 (PSD95), Homer1, and Nlgn3 (Neuroligin-3). Nlgn3 was explicitly found to be upregulated by the TME interaction.
These markers defined the various immune populations and their functional state within the tumor microenvironment.
The paper identifies these as potential targets for immunotherapy within this specific mouse model.
Used in spatial transcriptomics to define "healthy brain parenchyma" vs. the tumor.
based on the average expression of 250 genes in each chromosomal region (refs. 4, 31)
They seem to use the moving-average window as the reference, which means the inferCNV tool calculates the mean expression across all cells in the sample and subtracts it. If the sample is 80% tumor, the "baseline" is essentially the tumor itself, making it impossible to see the actual CNVs.
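The concern can be illustrated with a toy numerical sketch (hypothetical numbers and a deliberately simplified model, not the actual inferCNV pipeline, which can use a separate normal-cell reference when one is provided):

```python
# Toy illustration of the baseline concern: when the reference is the
# all-cell average, a sample dominated by tumor cells subtracts away
# most of the tumor's own CNV signal. Numbers are hypothetical.

def residual_cnv(tumor_fraction, n_cells=100, gain=0.5):
    """Apparent CNV signal left in a tumor cell after subtracting the
    all-cell mean expression, for a region amplified by `gain`."""
    n_tumor = int(n_cells * tumor_fraction)
    expr = [1.0 + gain] * n_tumor + [1.0] * (n_cells - n_tumor)
    baseline = sum(expr) / len(expr)   # mean over ALL cells
    return (1.0 + gain) - baseline     # tumor cell minus baseline

# With 20% tumor, most of the gain survives; with 80% tumor,
# the baseline is mostly tumor and the signal is largely erased.
low = residual_cnv(0.2)
high = residual_cnv(0.8)
print(low, high)
```

Under this toy model the apparent signal at 80% tumor purity is a quarter of the signal at 20% purity, which is the annotator's point about the baseline being "essentially the tumor itself."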
astrocyte markers (GFAP, Aqp4 and Aldh1l1)
In Zeisel 2018, the cluster ACMB corresponds to "Dorsal midbrain Myoc-expressing astrocyte-like," with marker set [Myoc, Gfap, Slc36a2, Aqp4, C4b], and there is no Aldh1l1 in any of the marker sets.
I'm highly skeptical that this paper didn't use Zeisel 2018 marker sets.
neuron markers (Calb1, Slc17a7 and Gabra1)
In Zeisel et al. (2018), the neuron markers (cluster TEGLU7, "Excitatory neurons, cerebral cortex") are: A830009L08Rik, Gm12371, Lamp5, Calb1, Dact2
Marriage of adolescent girls in Nigeria reduced by 80% by ‘big push’ intervention
GM’s failure to consider its stakeholders
This can make or break your business. If you make a mistake like this and lose your consumers' trust, things can go downhill fast.
There are no shortcuts. Imperfection, self-doubt, and mistakes are part of the process.
You must learn from your ethical failures over time if you want to create success for yourself and others within your organization.
Ethical professionals work for companies whose values align with their own. How
This reminds me of culture on any team or workplace. You must surround yourself with people who are aligned and focused on the same goals.
At the RBA’s press conference on Tuesday announcing the Monetary Policy Decision, the Governor said that: … when governments are spending a lot of money and we’re running up against capacity constraints, then they do need to think about whether or not there’s ways they can help the inflation problem by looking for ways to constrain demand. Next week, the Treasurer will deliver his annual fiscal statement outlining spending and tax initiatives for 2026-27.
Jesus
“hedonic calculus”
This is very interesting. Almost an "analytical" way of measuring how ethical you are.
greatest happiness for the greatest number
Key to utilitarianism. Focal point of the reading.
the means become a way of life
It is not just about reaching goals or the outcome, it is about discipline and building character along the way.
practices. A single standard of business behavior that emphasizes respect and good service appeals to all.
Changing your moral compass for certain situations is detrimental to respecting others and doing good service.
But analogy can also operate in mutual alignment analogies to reveal commonalities that were previously not obvious in either analog.
Projecting information from a well-understood domain can lend structure to an unfamiliar domain, as in: The mitochondria are the power supply for a cell.
Analogy is often thought of chiefly as a way to transfer knowledge from one situation to another, and indeed, it often serves that function.
We illustrate our points with examples from adults and children, including examples from language evolution, and across both perceptual and conceptual domains.
We propose that—both in the history of language and in children’s learning—analogical processes are a major way in which new relational abstractions are acquired
But the earlier we go in development, the less able children are to comprehend verbal explanations of abstract ideas. In contrast, there is evidence that analogical comparison and abstraction processes are present in 7–9-month-old infants, and even earlier (Anderson, Chang, Hespos, & Gentner, under review; Ferry, Hespos, & Gentner, 2015).
Relational categories have been the focus of much recent research (Asmuth & Gentner, 2017; Gentner, 2005; Gentner & Kurtz, 2005; Goldwater & Markman, 2011; Markman & Stilwell, 2001; Ross & Murphy, 1999), in part because of their important role in conceptual learning and education (Goldwater & Schalk, 2016).
For example, carnivore and herbivore are abstract relational categories, while canine and feline are abstract entity categories.
Relational categories are categories for which the basis for membership is participation in a common relational structure; thus, they differ from the more studied entity categories, such as tulip and spoon, whose members share many intrinsic properties.
Our main focus is on relational abstractions, including principles, rules, and schemas, as well as abstract relational categories.
For example, causal system is more abstract than positive feedback system, which in turn is more abstract than the specific positive feedback system by which the melting of polar ice causes lower reflectance of the sun’s heat, leading in turn to more rapid melting.
We take the process of abstraction to be one of decreasing the specificity (and thereby increasing the scope) of a concept.
Many such abstractions are expressed as relational categories—categories like evidence, counterfactual, and proportion, and on a more mundane level, bargain, ally, and rescue.
The assertions that make up abstract knowledge are variously referred to as schemas, rules, abstractions, principles, or overhypotheses
Abstract structured knowledge is a key feature of higher order cognition (Gentner & Medina, 1998; Hummel, 2011; Markman, 1999; Tenenbaum & Griffiths, 2001).
it is not enough to consider the distribution of examples given to learners; one must consider the processes learners are applying
We propose that analogical generalization drives much of this early learning and allows children to generate new abstractions from experience
contrary to the general assumption, maximizing variability is not always the best route for maximizing generalization and transfer
The Mystery of Patrick Moore's Woodstock Typewriter, by [[Robert Messenger]] for oz.Typewriter, accessed on 2026-05-08T13:19:49
Users of Woodstock typewriters included:
- Robert Bloch
- Howard Fast
- Alger Hiss (1929 standard #230099)
- Sir Patrick Moore
- J.C. Oldfield (editor of the Associated Press's London bureau, 1930s)
- Gordon Parks