9 Matching Annotations
  1. Feb 2022
    1. Research is needed to determine the situations in which the redundancy principle does not hold

      p. 144-145

The authors describe limits to the research (circa 2016) as follows: 1. kinds of learners, 2. kinds of material, and 3. kinds of presentation methods. Each of these situations presents interesting possibilities for research related to the use of closed captions by first-year law students while watching course-related videos.

When considering how "kinds of learners" might be relevant, the authors ask how redundant on-screen text might hurt or help non-native speakers of a language or learners with very low prior knowledge. It is probably reasonable to consider first-year law students as having "very low prior knowledge". Is there any sense in which those same students could be understood as having overlapping characteristics with TBD

    2. Principle 2: Consider Adding On‐Screen Text to Narration in Special Situations

      p. 139-141

Clark and Mayer describe a key exception to the first principle. One of the special situations they describe arises when a learner must "exert greater cognitive effort to comprehend spoken text rather than printed text" (p. 140). This can occur when the verbal material is complex and challenging, such as when learners are studying another language, or when the terminology is difficult, as might be encountered in scientific, technical, or legal(?) domains (p. 141).

      [P]rinting unfamiliar technical terms on the screen may actually reduce cognitive processing because the learner does not need to grapple with decoding the spoken words.

      However, it may be necessary to ensure that video is slow-paced or learner-controlled under circumstances where both audio narration and on-screen text are provided. Mayer, Lee, and Peebles (2014) found that when video is fast-paced, redundant text can cause cognitive overload, even when learners are non-native speakers.

      Mayer, R. E., Lee, H., & Peebles, A. (2014). Multimedia Learning in a Second Language: A Cognitive Load Perspective. Applied Cognitive Psychology, 28(5), 653–660. https://doi.org/10.1002/acp.3050

    3. boundary conditions.

      p. 131-132

Clark and Mayer provide a brief summary of the boundary conditions, the situations in which learners benefit from the use of redundant on-screen text. These situations include adding printed text when 1) there are no graphics, 2) the presentation rate of the on-screen text is slow or learner-controlled, 3) the narration includes technical or unfamiliar words, and 4) the on-screen text is shorter than the audio narration.

The first three conditions bear some similarity to closed caption use by students in legal education watching class lecture videos, especially students in first-year courses. Typically, these students view videos with very few detailed graphics, they have control over the video player's speed, pause, review, and advance features, and the narration includes numerous legal terms.

Although closed captions are intended for deaf and hard-of-hearing viewers, they may also benefit other learners if the boundary conditions described by the authors hold. Dello Stritto and Linder (2017) reported findings from a large survey of post-secondary students showing that a wide range of students found closed captions helpful.

Dello Stritto, M. E., & Linder, K. (2017). A Rising Tide: How Closed Captions Can Benefit All Students. Educause Review Online. https://er.educause.edu/articles/2017/8/a-rising-tide-how-closed-captions-can-benefit-all-students

    4. graphics using words in both on‐screen text and audio narration in which the audio repeats the text. We call this technique redundant on‐screen text because the printed text (on‐screen text) is redundant with the spoken text (narration or audio).

Clark and Mayer provide a definition of redundant on-screen text: graphics accompanied by words in both on-screen text and audio narration, in which the audio repeats the text (p. 131).

The authors go on to provide guidance about concurrent graphics, audio, and on-screen text. Based upon the research that they summarize in Chapter Seven, they advise instructors not to add printed text to an on-screen graphic (p. 131).

  2. Feb 2020
    1. TABLE 1. Practices to maximize student learning from educational videos

      Table 1. resource for planning/making effective videos

Finally, the utility of video lessons can be maximized by matching modality to content. By using both the audio/verbal channel and the visual/pictorial channel to convey new information, and by fitting the particular type of information to the most appropriate channel, instructors can enhance the germane cognitive load of a learning experience.

matching modality to content. So if you want to talk about history, a book, or just some reflection, it makes less sense to do it over video; but if you want to talk about something visual like art history, it may make sense to include a video component or make it primarily video.

Weeding, or the elimination of interesting but extraneous information that does not contribute to the learning goal, can provide further benefits. For example, music, complex backgrounds, or extra features within an animation require the learner to judge whether he or she should be paying attention to them, which increases extraneous load and can reduce learning.

Weeding + definition: removing flash and bells and whistles that might distract the student

    4. The benefits of signaling are complemented by segmenting, or the chunking of information in a video lesson. Segmenting allows learners to engage with small pieces of new information and gives them control over the flow of new information.

      Segmenting or chunking

Signaling, which is also known as cueing (deKoning et al., 2009), is the use of on-screen text or symbols to highlight important information. For example, signaling may be provided by the appearance of two or three key words (Mayer and Johnson, 2008; Ibrahim et al., 2012), a change in color or contrast (deKoning et al., 2009), or a symbol that draws attention to a region of a screen (e.g., an arrow; deKoning et al., 2009).

      Signaling definition + examples