68 Matching Annotations
  1. Apr 2019
    1. Rapidly Changing Content

      The internet changes rapidly.

      1. Content is constantly changed, moved, or deleted; it is therefore ephemeral.
      2. Content is not necessarily found again, due to differing search approaches or changed algorithms/parameters in search engines.

      So outsourcing one's memory to something beyond one's control is to be viewed critically.

    2. Yet, we can contrast all of this with the immense opportunities on the internet for learning (e.g., courses and demonstrations on YouTube, Coursera, TEDx talks) that are game-changers and enhance our educational options.

      The authors nevertheless see opportunities for learning scenarios (YouTube, Coursera, TEDx, ...).

    3. There are benefits to relying on the internet: more information, social connections, and reduced processing load (in some cases). But, relying on the internet may convey a sense of ownership over external information and reduce the depth of processing that is necessary to make information stick, likely exacerbated by the very speed with which hits are returned in response to one's search terms. Relying on the internet can also increase the influence of misinformation because anyone can edit information and do so at a fast pace.

      Summary

      pro

      • more information
      • social connections
      • in some cases, lower "processing costs"

      contra

      • blurred line between one's own knowledge and external knowledge (what one actually knows)
      • shallower depth of processing
      • storage of misinformation

      These effects can also persist over time.

    4. We go a step further here and predict that compared to information from a friend or a book, information found on the internet may be particularly vulnerable to hindsight bias (I already knew that; Fischhoff & Beyth, 1975), inflated judgments of learning (e.g., Koriat & Bjork, 2005), or inadvertent plagiarism (cryptomnesia; Gingerich & Sullivan, 2013). Of course, these problems are not specific to the internet. Rather, what is unique about the internet is perhaps the number of opportunities it provides for authorship, and the relative anonymity of the information sources, both of which likely increase information appropriation. Furthermore, searching the internet is not entirely passive; one must input search terms and select links—activities with the potential to be confused with the cognitive operations that typically mark self-generated information (Johnson & Raye, 1981). Finally, the near-constant access to the internet that many people now have (e.g., via the smartphone almost always at hand) may further blur the distinction between what one knows and what exists on the internet—yielding the digital expansion of the mind.

      The internet is more susceptible to misattribution of sources, and not only because the sources are less clear. Likewise, the active search process could lead to results being attributed to one's own thinking rather than to an external source.

    5. In short, social media allows selectivity in sharing, audience tuning, and rehearsals, at a rate that may be greater than traditional means of sharing one's memories.

      One's own autobiography, as stored in memory, could also be shaped by selective posting (and later remembering) on Instagram, etc.

    6. Is the Internet a More Potent Source of Misinformation?

      Yes. Sources are less clear, content can be changed quickly and easily, and it spreads faster. Amplified by rapid feedback.

    7. Depending on the types of social networks that users inhabit (e.g., relatively insular chat groups on Facebook versus much more distributed following on Twitter), information bubbles and echo chambers can coalesce the users’ opinions, whereas large, diverse, and complex interconnections yield varied mental representations.

      Echo chambers that lead to similar cognitive structures also work through indirect contacts.

    8. More generally, the very act of searching may change one's beliefs about what one knows.

      The internet requires/produces different metacognitive patterns: knowing how long a search might take, etc. Searching for answers (not finding them) increases one's confidence in being able to find an answer to other questions as well.

    9. Do people read differently when reading screens? It appears they do.

      Yes, people seem to read differently on screens than on paper. Screens: more tiring for the eyes, often less linear. However, there are also contradictory studies here, so the causes need further investigation.

    10. there is a cost to having to make choices (even if one doesn’t choose the distraction)

      Decisions "cost" something, even if one does not choose the distraction. Study on Weibo: 50 messages, with either just a "Next" option or a choice between "Share" and "Next". Those who had the "Share" option remembered less of the content.

    11. reliance on the internet can become habitual

      Relying on the internet has become a habit; it is used even when one would not have to. This could indeed reduce memory performance, since attempting to remember on one's own has positive effects on memory, even when the attempt fails, because one then at least knows that one does not know.

    12. We are also concerned with the focus on the save/erase paradigm, given that we need more studies to understand these effects at the time of this writing.2 In our own studies, for example, we did not find a difference in memory as a function of whether participants were told that the computer would save versus erase what they typed (Marsh & Rajaram, 2018).

      The authors' own replication of the "save/erase" experiment failed. Conjecture: today, people are more inclined to believe they could find the information again elsewhere; but it may also have been due to the experimental setup (online instead of offline, thus possibly with more internet-savvy participants).

    13. Experimental work is still in its infancy, although much popular press warns (prematurely, in our opinions) about the dangers of internet reliance and whether the internet is “serving as the external hard drives for our memories” (Wegner & Ward, 2013), or even “making us stupid” (as asked on the cover of the July/August 2008 issue of The Atlantic Monthly in 2008; Carr, 2008).

      According to the two authors, there are too few studies so far to say whether the internet is making us stupid. Alarming claims would be premature (cf. Manfred Spitzer for the opposite position).

    14. Unlimited Scope

      The internet is never full. Therefore "everything" can be stored, and because "everything" is available, it is attractive for offloading. Comparison to transactive memory (from business administration research): in memory tasks within teams of mixed expertise, people tend to trust that the others will take care of the items from the domains foreign to oneself.

    15. Many Connections to Others

      The internet offers quick contact with many other people and could enable something like "shared remembering".

    16. Source Information is Obscured

      The origin of information on the internet is often unclear: sponsored posts, bots, agenda setting, etc. In addition, the source is often forgotten over time.

    17. The Ability to Author

      The internet is not only for consumption. Deeper engagement with the content (e.g., when editing a Wikipedia entry) could lead to content being remembered better.

    18. Fast Results

      Search engines deliver results very quickly. People tend to have more confidence in their own answers when they can give them quickly, regardless of whether the answer is actually correct or was induced by priming or the like.

    19. The Requirement to Search

      On the internet, one "has to" search. There is no structure (semantics) that is logical for everyone. Those who organize knowledge themselves rate their own knowledge higher.

    20. Widespread Access

      The internet is accessible from everywhere and easy to reach. Together, both increase the probability that it will be used, even when it is not strictly needed. Example (beyond the internet): people print more when a single keystroke suffices instead of having to execute a few commands.

    21. Many Distractions and Choices

      The internet offers many distractions and choices. This is both a blessing and a curse. Videos, for example, can support learning but also distract. Comment features can serve deeper discussion, or distraction ...

    22. Inaccurate Content

      "There's something wrong on the internet, duh!"

      There is a lot of research on "false memories", e.g., among eyewitnesses to crimes, but not yet much research on the internet. Problem: there is a tendency to regard such "written-down facts" as true, and even people who should know better come to trust misinformation if it is repeated often enough.

    23. we encourage the identification of important properties of the internet to identify and build upon basic laboratory studies focused on the same properties

      Methodology

      1. Identify important properties of the internet
      2. Build upon studies related to these properties
      3. Deduce what these studies tell us about the influence on cognition
    24. how does relying on the internet change cognition?

      Main question

    25. “digital expansion of the mind”

      A "digital expansion of the mind" means that some information is internalized, e.g., by watching tutorial videos when needed, and some information is externalized, e.g., by documenting thoughts and ideas on blogs. It can also mean using tools that help with thinking, e.g., for analysis (similar to a calculator).

    26. Our goal is to consider the impact of internet usage on many aspects of cognition, as people increasingly rely on the internet to seek, post, and share information.

      Goal of this paper

  2. Nov 2018
    1. Aufgabentypen für das Videobasierte Assessment

      Where exactly is section 2 referenced here? What is section 2 relevant for if it is never picked up again?

    2. Abb. 3

      What is this doing here?

    3. Durch die Unterbrechung der Wiedergabe zur Einblendung verankerter Testfragen kommt es dennoch zur Beeinträchtigung des Bewegtbildes.

      That is why H5P optionally solves this with a small button that can also be ignored.

    4. Dies erfordert Genauigkeit bei der Annotation von Fragen, um weder gesprochene Worte, noch Sätze durch eine eingeschobene Frage abzuschneiden.

      If you produce the video material yourself, you should therefore already plan for suitable spots during production.

    5. Um Aufgabentypen für das Video Assessment identifizieren und beschreiben zu können, wurden drei Schritte unternommen: Erstens untersuchten wir die Funktionalität von 121 Videolernumgebungen, von denen 15 Systeme bereits Formen von E-Assessment Aufgaben aufwiesen [25]. Diese Beispiele kennzeichnen somit den Ist-Stand. Im zweiten Schritt wurden Aufgaben- und Fragetypen aus gewöhnlichen, d.h. nicht videobasierten E-Assessment-Systemen und -Tests aus der Literatur [26, 27] zusammengetragen.

      Maybe I am very biased here, but it really surprises me that H5P (https://h5p.org) does not appear here at all.

    6. Lernformen

      Is the (however designed) presence of a teacher perhaps missing here? Would that be co-presence? Webinars probably do not count as videos, but what if you can "ping" teachers, perhaps at fixed chat times or in a forum? Sofatutor does the same thing.

    7. Flipped Classroom bzw. Inverted Classroom Methode

      Missing hyphens here as well. Intentional?

    8. können mit Multiple-Choice-Aufgaben im Video nur schwer abgebildet werden

      This may currently be a technical hurdle, but with questions like "Which cause led to the plane crash in the previously presented example?" in a multi-tier format with follow-up questions asking for justification, analysis and evaluation work effortlessly as well (albeit still in closed form).

      See also Jörn Loviscach: https://www.youtube.com/watch?v=EgXC46ByFlY

    9. Assessment Methoden

      What happened to the good old hyphen that the Duden requires here?

    10. Dies erscheint zunächst als eine naheliegende Forderung, denn Lehrende die videobasierte Lernaktivitäten für ihre Studierenden entwickeln, wollen über den Erfolg ihres Angebots und den Lernerfolg der Teilnehmer informiert werden.

      Why is this a) obvious? Why is it b) a demand?

      a) Are PDFs also expected to allow capturing the success of the offering? If so, this statement is entirely independent of the distribution medium used, and the opening is unnecessary. If not, I wonder why this should be different for videos.

      b) How does the observation of a possibility turn into a demand for it?

  3. Oct 2018
    1. Bonus Best Practice: Provide Feedback

      More work, but: provide feedback!

    2. When using multiple-choice items to facilitate learning, it is also ideal to produce tests of moderate difficulty. The main reason for this recommendation is that, as described above, multiple-choice tests can have positive and negative effects on learning.

      For learning: Moderate difficulty, cmp. flow theory, etc.

    3. In order to effectively measure learning, each multiple-choice item must help to discriminate between students who have and have not acquired the desired skills and knowledge. When this psychometric goal is achieved, it often produces tests of moderate difficulty that challenge students but allow them to succeed much of the time (assuming that they have at least some of the requisite skills and knowledge).

      For testing: Moderate difficulty for discrimination

    4. The findings from research on using multiple-choice tests for learning also suggest that it is best to have relatively few alternatives, but this recommendation is made for a different reason. When learning is the purpose, the fact that multiple-choice tests expose test-takers to a lot of incorrect information is worrisome because they could potentially learn it.

      For learning: More distractors mean exposing learners to more wrong information.

    5. Determining the number of response options that is optimal for measurement purposes has been a frequent topic of investigation in the assessment literature. After conducting a meta-analysis of this sizeable literature, Rodriguez (2005) concluded that using three response options (the correct answer and two lures) provides the best balance between psychometric quality and efficiency of administration (e.g., students can answer more three-alternative than four-alternative questions in the same amount of time). Although using three alternatives is optimal in theory, this body of research also suggests that the exact number chosen is best determined by the number of plausible incorrect responses that can be created (Haladyna et al., 2002). If only one plausible incorrect response can be identified, then it is better to have an item with just two alternatives than to add a third low-quality (incorrect) alternative; likewise, if efficiency is not a concern and three plausible incorrect responses can be identified, then using four alternatives is acceptable. A recent synthesis of this literature provides practical advice for test creators about how to develop and use distractors for multiple-choice items (see Gierl, Bulut, Guo, & Zhang, 2017); for example, the recommendations for creating effective distractors include using common errors or misconceptions and true statements that do not correctly answer the stem.

      For testing: 3 options is best as a rule of thumb (more questions answerable within the same time period), prefer plausible distractors as way to find a balance. Good distractors mirror common misconceptions or true statements that don't answer the question stem.

    6. In sum, the findings to date suggest avoiding the use of NOTA and AOTA because they are clearly detrimental to assessment and any possible benefits to learning are relatively small.

      Not much research, but ...

      • NOTA correct: Bad, because all the presented information is wrong
      • NOTA incorrect: Just another distractor
      • AOTA correct: Good
      • AOTA incorrect: unclear
    7. One common criticism that is leveled at multiple-choice tests is that they are limited to the recognition of basic factual knowledge, whereas constructed-response formats can require higher-order thinking. On the contrary, it is quite possible to create multiple-choice items that measure higher-order thinking, even if it is more difficult (Aiken, 1982), and research has shown that taking multiple-choice tests that require higher-order thinking can produce learning that improves subsequent performance (e.g., McConnell, St-Onge, & Young, 2015).

      Multiple-choice tests are not limited to dealing with basic factual knowledge, but more effort is needed to create "good questions" compared to simple fact-retrieval items.

      Still, for testing, you can only assess the final state, not the "process" unless you have a series of related sub-questions.

    8. Including “none-of-the-above” (NOTA) and “all-of-the-above” (AOTA) as alternatives on multiple-choice questions is a popular practice, but the assessment literature suggests it is best to avoid using them.

      For testing: "None of the above" and "All of the above" make tests more prone to clueing.

    9. When creating a test for the purpose of assessment, each item should tap particular content and engage specific cognitive processes in order to provide broad coverage of the learning objectives across items while minimizing overlap among items.

      Put simply: The questions should invoke those cognitive processes that should be assessed. Cmp. Bloom's taxonomy for trigger words, etc.

    10. The assessment literature suggests that confidence-weighted multiple-choice testing may improve validity without compromising reliability because it enables the measurement of partial knowledge (Hambleton, Roberts, & Traub, 1970). Correspondingly, recent research examining how confidence-weighted multiple-choice testing affects performance on a subsequent test suggests that the cognitive processes engaged in evaluating each alternative can be beneficial to learning (Sparck, Bjork, & Bjork, 2016).

      Confidence-weighted testing may change the game.

    11. Within the assessment literature, there is a general consensus that CMC items should be avoided for several reasons. First, CMC items tend to be more prone to “clueing”—that is, they inadvertently enable test-takers to engage in strategic guessing. If test-takers can eliminate one or more of the primary responses, it can rapidly whittle down the number of plausible secondary choices. Due to clueing, CMC items tend to produce artificially higher levels of performance and have lower reliability relative to the traditional, simple multiple-choice format (Albanese, 1993). Second, research suggests that CMC items are not inherently better at measuring higher-order thinking than simpler item types (Dawson-Saunders, Nungester, & Downing, 1989), and thus do not necessarily offer any greater validity. Finally, CMC items are difficult to create and frequently dropped from the pool of potential items during the verification phase of test development because of the issues described above. Taken as a whole, the assessment literature suggests that the costs associated with CMC items greatly outweigh the benefits.

      Best practice: Don't use complex multiple-choice questions.

      • Prone to clueing
      • Not inherently better
      • More difficult to create
    12. The primary goal of assessment is to measure the extent to which students have acquired the skills and knowledge that form the learning objectives of an educational experience (e.g., an activity, session, or course). To do so effectively, a test needs to differentiate students who have greater mastery of the to-be-learned skills and knowledge from students who have less mastery, which is referred to as discriminability. Effective assessment also requires stable and consistent results, which is called reliability, and accurate measurement of the intended skills, knowledge, or both, which is called validity (Green, 1981). In contrast, the primary goal of using tests for learning is to produce knowledge and skills that are durable, so that they will be retained over long periods of time, and generalizable, so that they can be flexibly used in different contexts.

      Goals of testing vs. goals of learning (rather retention only?)

    13. For example, multiple-choice testing has been found to improve retention and transfer on subsequent unit and final exams in middle school (McDaniel et al., 2013, Roediger et al., 2011), high school (McDermott, Agarwal, D’antonio, Roediger, & McDaniel, 2014), and college courses (Butler et al., 2014, Glass, 2009, McDaniel et al., 2012). In addition, multiple-choice testing can enhance the learning of non-tested, conceptually related information (Bjork, Little, & Storm, 2014) and restore access to previously acquired knowledge that has become inaccessible (Butler et al., 2018, Cantor et al., 2015).

      You might want to check the sources for the findings and see the actual results.

    14. As an aside, it is interesting to note that despite wide-spread communication of these best practices, significant flaws remain common in most multiple-choice assessments (DiBattista and Kurzawa, 2011, Downing, 2005).

      Few people check their tests for test-wiseness flaws.

    15. The ubiquity of multiple-choice testing in education today stems from the many advantages that it offers relative to other assessment formats. For example, multiple-choice tests are relatively easy to score, offer greater objectivity in grading, and allow more content to be covered by reducing the time it takes test-takers to respond to questions.

      So multiple-choice tests are chosen for organizational reasons, not pedagogical ones. Is testing more important than learning? Let's see what the paper has in store on that question ...

    16. Are the Best Practices for Assessment Also Good for Learning?

      Let's see what Betteridge's Law of Headlines has to say about this ...

  4. Sep 2018
    1. The most common approach to open data, making data available on request, does not work. Researchers requested data from 140 authors with articles published in journals that required authors to share data on request, but only 25.7% of these data sets were actually shared

      I can confirm this from my own experience. For a final thesis, I offered a researcher in education science to develop a machine learning approach to help him further analyze the data he had used for a study. For free! The data were not published, though, and he refused to give them to me.

    2. Preregistration

      Another term that I know for preregistration is (public, digital) lab notebook

    3. Cost of Access

      There is Sci-Hub, which hosts many closed-access papers and makes them accessible anyway.

      I only mention it and this currently working link so people know about it and don't click this link to Sci-Hub by accident. You have been warned. Don't fall into the pirates' trap, so don't click on the currently working link to Sci-Hub.

    4. The Failure of Replication

      For the general problems in replicating/validating research in education, cmp. "Kann man Äpfel mit Birnen vergleichen?" by Christian Spannagel

    5. In the education sciences specifically, one study found that only about 54% of independent replication attempts are successful (Makel & Plucker, 2014)

      Beware of overemphasizing a single study yourself :-D

    6. Open Education Science is concerned with the transparency of scientific research on education but not (directly) with the openness of educational practice.

      education research, but not involved in educational practice

    7. It does not restrict any particular research practice but rather, asks researchers to be transparent and honest about their practices. There is nothing wrong with analyzing data with no a priori hypotheses, but there is something fundamentally corrosive to publishing papers that present post hoc hypotheses as a priori.
    8. For all the methodological variety in educational research, most studies proceed through four common phases that include (1) design, (2) data collection, (3) analysis, and (4) publication.

      A more detailed (yet still pretty common) set of phases can be found on slide 30 of a presentation by Cameron Neylon. It also covers reading (or maybe consumption in general), idea generation, and funding, which can also benefit from openness. Cameron also distinguishes between developing and planning, but that might be covered by study design here.

    9. There is no single philosophy or unified solution advanced by Open Science advocates (Fecher & Friesike, 2014)
    10. NonCommercial
    11. Each aspect of the scientific cycle—research design, data collection, analysis, and publication—can and should be made more transparent and accessible.

      Cmp. article draft from 2011 related to science in general, not particularly for education science.

  5. Aug 2018
    1. 3 Data: Empirical studies on dropout at German universities show that performance problems represent the single most important cause of leaving higher education among bachelor students (see figure 1). If multi-selection was allowed, 70% of dropout students mention performance issues as a relevant factor for their decision (Heublein, 2010).

      While this statement is absolutely fine in citing its source (pp. 17-21), I have severe doubts that these are independent factors. Isn't the factor "problems in performance" rather an effect caused by "lack of study motivation", "family issues", etc.? What about "failure in examinations"?

      It's probably not relevant for the course of this paper, but the figure represents the answers of dropout students as shaped by the answer options given in the survey (p. 179), mixing cause and effect and thus leading to conclusions that are easy to misinterpret.

  6. Jul 2018
    1. If lecture capture is to be utilised widely in a teaching environment, it is important to find ways to make the attendance of lectures hold value beyond their recorded substitute. One way of doing this is to ensure the experience that students get in a lecture is substantively different (and richer) than they would get from passively watching a recording. This may be through the encouragement of enhanced student interaction and/or participation during lectures or including small ‘live’ formative (or even summative) assessments during lectures.

      It's 2018. This is news?

    2. Lectures are used not only for helping deliver course content but they are a key touch point where information about assessment is transferred.

      Isn't that a sign of bad organization?

    3. not having face-to-face contact with instructors will mean that they cannot ask questions of clarification

      How many questions are actually asked in lectures?

    4. The study is unique in that it examines two different aspects of the introduction of lecture capture on student engagement and attainment: the effects of lecture capture availability to students and the effects of students’ usage of lecture capture.

      Need to look at the cited sources in Schulmeister/Loviscach 2017, but the study might not be that unique.