88 Matching Annotations
  1. Oct 2016
    1. Knowledge is not just ‘stuff’, or fixed content, but it is dynamic. Knowledge is also not just ‘flow’.

      Is knowledge matter or energy?

  2. Feb 2016
    1. The effect sizes are mainly clustered between –.25 and .75 standard deviations, indicating that in most studies the Intelligent Tutoring Systems groups outperformed their respective control groups.

      There's quite a lot of variability in the scores.

    2. Connections between variables are specified to form a network, and inferencing about the value of a variable in the network is accomplished by Bayesian calculations on other variables connected to it (Millán, Loboda, & Pérez-de-la-Cruz, 2010).

      Does technology dictate pedagogy? If so, do certain ITSs lock in a specific pedagogy, and if so, which one?

    3. Under both the fixed- and random-effects models, overlap in confidence intervals indicates that effect sizes were not moderated by whether or not the ITS provided feedback.

      Contrary to VanLehn.

    4. Post hoc analyses found that studies which used ITS for separate, in-class activities and homework had significantly higher weighted mean effect sizes than those which used ITS for other purposes such as principal instruction.

      Why don't we see larger gains when using ITS to replace instruction?

    5. Individual human instruction (i.e., human tutoring) appeared to offer a small, non-significant advantage over the use of ITS.

      Didn't VanLehn find that ITS were almost as effective as human tutoring?

    6. Bayesian networking is a flexible method that can be used to implement many different types of student models, including aspects of CBM and knowledge tracing

      Bayesian models predict whether students have a concept based on their performance on assessments.
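
      A minimal sketch (my own, in Python) of what such a Bayesian update can look like: the standard knowledge-tracing step, where P(mastery) is revised after each answer. Parameter names and values are invented for illustration, not taken from any particular ITS.

      ```python
      def update_mastery(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
          """Bayesian update of P(student has mastered the skill) after one answer."""
          if correct:
              evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
              posterior = p_mastery * (1 - p_slip) / evidence
          else:
              evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
              posterior = p_mastery * p_slip / evidence
          # Allow for learning between practice opportunities.
          return posterior + (1 - posterior) * p_learn

      p = 0.3  # prior belief that the student knows the skill
      for answer in [True, True, False, True]:
          p = update_mastery(p, answer)
          print(round(p, 3))
      ```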

    7. Based on Ohlsson’s theory of learning from performance errors (Ohlsson, 1994),

      Learning requires making mistakes (Piaget).

    8. Specifically, when there was more than one control group in a study, any control group that received no instructional treatment was dropped from the main meta-analysis according to selection criterion c and other control groups that did receive instructional treatment were combined by calculating their weighted mean. Similarly, when there was more than one group in a study that learned from an ITS, we combined them by calculating their weighted mean.

      Loss of power?
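
      For my own reference, a sketch of what "combined by calculating their weighted mean" can amount to: merging group statistics weighted by sample size. The group sizes, means, and SDs below are invented; the paper may have pooled differently.

      ```python
      import math

      def pool_groups(groups):
          """groups: list of (n, mean, sd); returns (n, mean, sd) for the merged group."""
          n_total = sum(n for n, _, _ in groups)
          mean = sum(n * m for n, m, _ in groups) / n_total
          # Total sum of squares = within-group SS + between-group SS.
          ss = sum((n - 1) * sd ** 2 + n * (m - mean) ** 2 for n, m, sd in groups)
          return n_total, mean, math.sqrt(ss / (n_total - 1))

      print(pool_groups([(30, 72.0, 10.0), (25, 68.0, 12.0)]))
      ```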

    9. Specifically, they have attributed the effectiveness of CBI to greater immediacy of feedback (Azevedo & Bernard, 1995), feedback that is more response-specific (Sosa, Berger, Saw, & Mary, 2011), greater cognitive engagement (Cohen & Dacanay, 1992), more opportunity for practice and feedback (Martin, Klein, & Sullivan, 2007), increased learner control (Hughes et al., 2013), and individualized task selection (Corbalan, Kester, & Van Merriënboer, 2006).

      Good summary of the advantages of CBI.


    1. With what they have accomplished so far, ITS certainly add one more good choice to the array of educational technologies available to educators and students.

      It's not about replacement - it's about augmentation.

    2. How can ITS determine if a learner is a high or low self-regulating learner, and what effects will this determination have on learners’ subsequent self-regulation?

      Thesis

    3. For example, Chi and VanLehn (2010) studied metacognitive strategy instruction with ITS (i.e., target variable strategy, a domain-independent problem-solving strategy) for college students

      To Do

    4. However, relatively little has been done on open-ended or ill-defined domains, such as history.

      !

    5. expanding ITS to open-ended or ill-defined domains deserves particular attention.

      Thesis.

    6. Last, it is possible that ITS’s effectiveness differs as the users’ age or educational levels differ. The current meta-analysis focused on studies of ITS’s impact on college students’ learning, while the latter focused on ITS’s influence on K-12 students’ mathematical learning. It is likely that ITS may function better for more mature students who have sufficient prior knowledge, self-regulation skills, learning motivation, and experiences with computers than for younger students who may still need to develop the above characteristics and need more human inputs to learn. This hypothesis needs to be tested in future research.

      This is very interesting.

    7. For example, 10 of the 26 adjusted effect sizes were associated with laboratories, 15 were associated with real environments, and one was from both. In contrast, in the latter, the majority of the ITS were widely used in real educational settings

      Why are the effect sizes larger in college vs K12?

    8. For adjusted effect sizes, under a fixed-effect model, the average effect size was .26 for the category of ITS as principal instruction (14 studies), .29 for ITS-integrated class instruction (four studies), .43 for ITS-assisted activities (two studies), .71 for ITS-supplemented class instruction (three studies), and .21 for ITS-assisted homework (three studies).

      Effect sizes:

      ITS-supplemented class instruction (.71) > ITS-assisted activities (.43) > ITS-integrated instruction (.29) > ITS principal instruction (.26) > ITS-assisted homework (.21)

    9. ITS-assisted learning

      ITS can be used in a number of ways:

      1. ITS as principal instruction
      2. ITS integrated class instruction
      3. ITS supplemented class instruction
      4. ITS-assisted activities
      5. ITS-assisted homework
    10. For example, the team of Anderson, Koedinger, and Ritter specializes in Cognitive Tutors; Graesser and his team focus on AutoTutor, a type of intelligent conversational agent; VanLehn and his team focus on dialogue-based ITS (e.g., Andes Physics Tutoring System); and Mitrovic, Ohlsson, and their colleagues have worked on constraint-based tutors (e.g., Structured Query Language-Tutor) for more than a decade

      The different flavours of ITS

    11. For example, ITS studies often cited Sleeman and Brown (1982) as the original source of ITS terminology, and many ITS studies mentioned artificial intelligence as the precursor of the ITS field.

      Important historical document for ITS.

    12. described ITS as tutoring systems that have both an outer loop and an inner loop. The outer loop selects learning tasks; it may do so in an adaptive manner (i.e., select different problem sequences for different students) based on the system’s assessment of each individual student’s strengths and weaknesses with respect to the targeted learning objectives. The inner loop elicits steps within each task (e.g., problem-solving steps) and provides guidance with respect to these steps, typically in the form of feedback, hints, or error messages. In this regard, CAI, CBT, and Web-based homework are different from ITS in that they lack an inner loop (VanLehn, 2006).

      ITS vs CAI, CBT & eLearning
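
      A toy Python sketch of the distinction, entirely my own invention and not any real system: the outer loop adaptively picks the next task from a student model, while the inner loop works through steps and responds with hints; CAI/CBT-style software would stop at the outer loop.

      ```python
      import random

      student_model = {"fractions": 0.3, "decimals": 0.7}          # P(mastery) per skill
      tasks = {"fractions": ["1/2 + 1/4", "3/4 - 1/8"], "decimals": ["0.2 + 0.35"]}

      def outer_loop(n_tasks=3):
          for _ in range(n_tasks):
              skill = min(student_model, key=student_model.get)    # pick the weakest skill
              inner_loop(skill, random.choice(tasks[skill]))

      def inner_loop(skill, task, n_steps=2):
          # Steps are simulated here; a real inner loop would check the student's input
          # at each step and respond with feedback, hints, or error messages.
          for step in range(n_steps):
              correct = random.random() < student_model[skill]
              if correct:
                  student_model[skill] = min(1.0, student_model[skill] + 0.1)
              else:
                  print(f"hint for step {step + 1} of '{task}'")
                  student_model[skill] = max(0.0, student_model[skill] - 0.05)

      outer_loop()
      print(student_model)
      ```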


    1. The second finding was that ITS helped general students learn mathematical subjects more than it helped low achievers. One possible explanation is that ITS may function best when students have sufficient prior knowledge, self-regulation skills, learning motivation, and familiarity with computers. It is possible that general students have more of the characteristics needed to navigate ITS than low achievers do. Therefore, they benefited more from using ITS. For low achievers, classroom teachers, rather than ITS, might be better leaders, motivators, and regulators to help them learn
    2. Last, VanLehn (2011) selected the outcome with the largest effect size in each primary study.

      I'm not sure that's true.

    3. After controlling for the influence of other variables (e.g., pretest scores), the average adjusted effect size was .01 under both a fixed-effect model and random-effects model also favoring the ITS condition. However, the average relative effectiveness of ITS did not appear to be significantly different from 0 except when effect sizes were unadjusted and a fixed-effect analysis model was used. Also, whether controlling for other factors or not, there was a high degree of heterogeneity among the effect sizes

      So once you control for other effects, any advantage disappears.

      But what about savings in terms of teachers' time? What about whole-class average improvements? Does a teacher freed from content delivery and able to engage in more one-on-one interventions produce whole-class gains?

    4. Findings of this meta-analysis suggest that, overall, ITS had no negative and perhaps a very small positive effect on K–12 students’ mathematical learning relative to regular classroom instruction.

      No significant difference.

    5. First, the effects appeared to be greater when the ITS intervention lasted for less than a school year than when it lasted for one school year or longer. This effect appeared regardless of whether the moderator analyses were conducted on unadjusted or adjusted effect sizes with a fixed-effect or random-effects model. Second, the effects of ITS appeared to be greater when the study samples were general students than when the samples were low achievers.

      Again... novelty. Again... unsuitable for everybody.

    6. When the effectiveness was measured by posttest outcomes and without taking into account the potential influence of other factors, the average unadjusted effect size was .05 under a fixed-effect model and .09 under a random-effects model favoring ITS over regular classroom instruction

      Small effect: .05 under fixed-effect, .09 under a random-effects model. Does this suggest a great deal of variability? Why else would the random effects model posit a greater effect?
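
      My own sketch of why the two models can disagree: random effects adds a between-study variance (DerSimonian-Laird tau²) to each study's variance, which evens out the weights, so small studies with large effects pull the mean more. The effect sizes and variances below are invented, not the paper's data.

      ```python
      def fixed_effect_mean(effects, variances):
          w = [1 / v for v in variances]
          return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

      def random_effects_mean(effects, variances):
          w = [1 / v for v in variances]
          mean_fe = fixed_effect_mean(effects, variances)
          q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))
          c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
          tau_sq = max(0.0, (q - (len(effects) - 1)) / c)      # DerSimonian-Laird estimate
          w_star = [1 / (v + tau_sq) for v in variances]
          return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)

      effects, variances = [0.40, -0.10, 0.05, 0.60], [0.01, 0.04, 0.02, 0.09]
      print(fixed_effect_mean(effects, variances), random_effects_mean(effects, variances))
      ```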

    7. Outcome type

      Possible Ways of Measuring Outcome

      • Grades
      • Passing Rates
      • Scores from Tests Developed to Test Effect of Treatment
      • Scores from Standardized / Modded Standardized Tests
    8. Overall, ITS appeared to have a positive impact on general students. For low achievers, the average effect was negative under both fixed-effect and random-effects models.

      ITS may not be the best for low achievers.

    9. QT(25) = 180.80, p = .000. This indicates that it was unlikely that sampling error alone was responsible for the variance among the effect sizes; instead, some other factors likely played a role in creating variability as well.

      Heterogeneity tests determine whether sampling error alone could create such a broad range of effect sizes.
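
      A sketch of the test itself (my numbers, not the paper's 26 effect sizes): Q is the weighted sum of squared deviations of each effect size from the pooled mean, compared against a chi-square distribution with k − 1 degrees of freedom.

      ```python
      from scipy.stats import chi2

      def q_test(effects, variances):
          w = [1 / v for v in variances]
          mean = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
          q = sum(wi * (e - mean) ** 2 for wi, e in zip(w, effects))
          df = len(effects) - 1
          return q, df, chi2.sf(q, df)          # p-value: P(chi2_df >= Q)

      q, df, p = q_test([0.40, -0.10, 0.05, 0.60], [0.01, 0.04, 0.02, 0.09])
      print(f"Q({df}) = {q:.2f}, p = {p:.4f}")
      ```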

    10. Of the 26 unadjusted overall effect sizes, 17 were in a positive direction, eight were in a negative direction, and one was exactly 0.

      Quite a lot of variation - due to the many different types of treatments studied?

    11. single clear comparison

      not that singular nor that clear

      Worry that pedagogical agents are being confused with tutoring systems.

    12. Because the comparison conditions in all four types of situations above involved either regular classroom instruction or teachers’ efforts, we grouped them together as ITS being compared with regular classroom instruction.

      Very different types of studies grouped together:

      • part classroom / part ITS vs all classroom
      • all ITS vs all classroom
      • ITS for homework / seatwork vs Teacher feedback & support
      • classroom + ITS vs just classroom

      Also, the duration of some studies is days while in others it is months.

    13. Two approaches were used to assess publication bias

      Good

    14. we conducted Grubbs (1950) tests to examine whether there were statistical outliers among the effect sizes and sample sizes.

      Good
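
      For reference, a rough sketch of a two-sided Grubbs test for a single outlier, using the usual t-based critical value; the effect sizes below are invented.

      ```python
      import math
      from scipy.stats import t

      def grubbs_statistic(values):
          n = len(values)
          mean = sum(values) / n
          sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
          return max(abs(v - mean) for v in values) / sd

      def grubbs_critical(n, alpha=0.05):
          tc = t.ppf(1 - alpha / (2 * n), n - 2)
          return (n - 1) / math.sqrt(n) * math.sqrt(tc ** 2 / (n - 2 + tc ** 2))

      data = [0.10, 0.20, 0.15, 0.30, 0.25, 1.90]   # last value looks suspicious
      g = grubbs_statistic(data)
      print(g > grubbs_critical(len(data)))          # True -> flag as a statistical outlier
      ```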

    15. We used a shifting unit of analysis approach (Cooper, 2010) to further address possible dependencies among effect sizes. The benefits of the shifting unit of analysis approach are that it allows us to retain as much data as possible while ensuring a reasonable degree of independence among effect sizes. With this approach, effect sizes were first extracted for each outcome as if they were independent. For example, if a study with one independent sample used both a standardized test and a course grade to measure students’ learning, two separate effect sizes were calculated. When estimating the overall average effect of ITS, these two effect sizes were averaged so that the sample contributed only one effect size to the analysis. However, when conducting a moderator analysis to investigate whether the effects of ITS vary as a function of the type of outcome measures, this sample contributed one effect size to the category of standardized test and one to that of course grade

      Methods for ensuring independence.

    16. Hedges’ g was chosen for this meta-analysis because the samples in many ITS studies are small.

      Implications for future research: Larger sample sizes!
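
      A quick sketch of why this matters, with illustrative numbers only: Hedges' g is Cohen's d multiplied by the small-sample correction J ≈ 1 − 3/(4·df − 1), which shrinks the estimate noticeably when groups are tiny.

      ```python
      import math

      def hedges_g(m1, sd1, n1, m2, sd2, n2):
          sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
          d = (m1 - m2) / sd_pooled
          j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)        # small-sample correction factor
          return j * d

      print(hedges_g(80, 10, 12, 74, 10, 12))        # ~0.58, vs d = 0.60 uncorrected
      ```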

    17. Second, much research on the effectiveness of math ITS has accumulated over the last two decades. Without rigorous summarization, this literature appears confusing in its findings. For example, Koedinger et al. (1997) found that students tutored by Cognitive Tutor showed extremely high learning gains in algebra compared with students who learned algebra through regular classroom instruction. Shneyderman (2001) found that, on average, students who learned algebra through Cognitive Tutor scored 0.22 standard deviations above their peers who learned algebra in traditional classrooms but only scored 0.02 standard deviations better than their comparison peers on the statewide standardized test. However, Campuzano, Dynarski, Agodini, and Rall (2009) found that sixth grade students who were taught math with regular classroom instruction throughout a school year outperformed those who were in regular class 60% of the school year and spent the other 40% of class time learning math with ITS, indicated by an effect size of −0.15. Thus, there is a need to gather, summarize, and integrate the empirical research on math ITS, to quantify their effectiveness and to search for influences on their impact.

      Data on the effectiveness of math ITS are mixed, ranging from greater effectiveness compared to regular classroom instruction, to modest gains that didn't translate into improved performance on statewide standardized tests, to reduced learning.

    18. ITS are one type of e-learning that can be self-paced or instructor directed, encompassing all forms of teaching and learning that are electronically supported, through the Internet or not, in the form of texts, images, animations, audios, or videos.

      Conflating tutoring & teaching here.

    19. ITS have been developed for mathematically grounded academic subjects

      ITS Systems:

      • Cognitive Tutor
      • AnimalWatch
      • ALEKS
      • Andes, Atlas & Why/Atlas
      • Dialogue-based ITS, ACT Programming Tutor
      • READ180, iSTART, R-WISE
      • Smithtown
      • Research Methods Tutor
    20. First, the effects of ITS appeared to be greater when the interventions lasted for less than a school year than when they lasted for 1 school year or longer.

      Novelty effects. Not good.

    21. (a) overall, ITS had no negative and perhaps a small positive effect on K–12 students’ mathematical learning, as indicated by the average effect sizes ranging from g = 0.01 to g = 0.09, and

      Small positive effect? Minuscule positive effect.


    1. Since all three approaches were consistent in suggesting the absence of publication bias, one can infer that publication bias was not present at a level that would pose a threat to the validity of the findings of this meta-analysis

      Very thorough. Just as the authors outlined steps for exploring heterogeneity, they have a concise but thorough section on publication bias, and conduct three tests.

    2. Pedagogical agents which communicated through on-screen text produced a moderate effect size of g = .51 (p < .05). Studies in which the pedagogical agents provided narration produced a small but statistically significant mean effect size (g = .12, p < .05).

      Text is more effective than voice? This is surprising, like the step <= substep findings that contradict the interaction granularity hypothesis.

    3. Statistically significant differences between groups were found (Q = 7.70, p < .05), with the highest effect size (g = .28, p < .05) produced from studies which used agents to learn science materials (k = 19). Similarly, studies that investigated the use of agents in mathematics (k = 8) produced an effect size of g = .27, p < .05, while studies which utilized learning materials from the humanities (k = 16) did not yield a significant effect size (g = .06, p > .05).

      More effective for STEM than arts!

    4. Conversely, the only system-paced study in this meta-analysis produced a non-significant (p > .05) effect of g = –.02.

      Interesting. Why use the system if it's not learner-paced? That's really programming the students.

    5. Laboratory studies yielded a low effect size of g = .16 (p < .05). Interestingly, only four studies evaluated pedagogical agent software in a classroom setting, yet these studies yielded the highest effect size (g = .68, p < .05).

      More effective in the classroom. Pressure to perform b/c of grades? Novelty.

    6. The prior knowledge levels of the participants, as shown in Table 3, produced significant differences between groups (Q = 40.54, p < .05). The use of pedagogical agents with moderate prior knowledge participants produced an effect size of g = .31 (p < .05). Since the use of pedagogical agents with learners of low prior knowledge did not produce a statistically significant effect size (g = –0.01, p > .05), we question if the agents made learning the material more difficult for these participants. However, these results must be interpreted with caution as 29 studies did not report the prior knowledge level of the participants.

      Pedagogical agents may be less effective when students have less prior knowledge. But doesn't this conflict with the K12 assertion? Are ped. agents unsuitable for non-traditional / inexperienced learners?

    7. Studies with participants in grades four through seven produced a moderate statistically significant effect size of g = .56 (p < .05). Results from Table 3 also show that agent studies with post-secondary students produced a low effect size of g = .12 (p < .05).

      There is a difference in effect size between K12 & PSE. Novelty? I'd imagine PS students could get annoyed.

    8. Perhaps pedagogical agents were able to motivate the learners to work at a level higher than normal? Or did agents engage the students by taking abstract scientific and mathematical constructs and demonstrating them in a fashion which the students were able to visualize what they could not from other resources?

      Why are pedagogical agents more effective in STEM?

      • motivation?
      • concretizing / visualization of abstract concepts?
    9. mathematics, science, and humanities

      Good to see humanities included. Often excluded b/c of lack of hard answers (as in VanLehn's ITS review).

    10. Research Question 3: How do the effect sizes of learning with pedagogical agents vary by subject domain, educational level, prior domain knowledge, the study setting, and the pacing of the learning system?

      Is there something else we should be controlling for?

    11. effects of gestures in human communication found a moderate effect, indicating that the act of gesturing is beneficial to communication (Hostetter, 2011). Thus, in the reporting of future work, researchers should more thoroughly describe what animations the agents embody (i.e., signaling, non-signaling gesturing, or facial expressions) as the research examined did not provide sufficient details to draw this conclusion

      Is there a difference between a pedagogical agent gesturing, and user-interface signalling? I.e. bolding appropriate text.

    12. Table 2 also shows statistically significant differences were found between groups when examining the agent’s level of animation (Q = 10.73, p < .05). Animated pedagogical agents produced a small but statistically significant effect (g = .15, p < .05). Conversely, the two studies that investigated static pedagogical agents neither produced a positive nor negative effect on learning (g = .00).

      So does animation matter? Is there a benefit to interactivity?

    13. As such, these findings suggest that fully anthropomorphizing the agents to appear as human-like may not be necessary to create the illusion and benefits of a social interaction.

      What benefits?

    14. These findings contradict the modality principle of multimedia learning (Ginns, 2005; Low & Sweller, 2005; Mayer, 2005c; Moreno, 2005);

      What about voice w/o a pedagogical agent? Possible?

    15. lower value (g = 1.00) which was slightly greater than the next-largest effect size (g = .87) as suggested by Tabachnick and Fidell (2007)

      Strategies for dealing with outliers.

    16. Data were analyzed using Comprehensive Meta-Analysis version 2.2.048 (Borenstein, Hedges, Higgins, & Rothstein, 2008) and IBM® SPSS® Statistics software (version 18). The Q statistic was used to determine heterogeneity amid the sampled study properties. Borenstein, Hedges, Higgins, and Rothstein (2009) made the important distinction that the Q statistic and p-value is utilized for testing the null hypothesis, and should not be used to estimate the true variance. In other words, if there is a significant p value that is very low, for example p < .001, it does not indicate greater heterogeneity than a p value of p < .049. If the p value delineates that the Q statistic is statistically significant (p < .05), it indicates that heterogeneity exists (Borenstein et al., 2009; Lipsey & Wilson, 2001) and moderator analysis is needed (Lipsey & Wilson, 2001). Moderator analysis allowed for the determination of how different features of pedagogical agents benefited or inhibited learning. Finally, the I² statistic shows the variation that is not due to chance, but rather the heterogeneity of the sample (Higgins, Thompson, Deeks, & Altman, 2003). Higgins et al. suggested that a low I² value indicates that the variance is insignificant, while increasing I² values indicate heterogeneity. Higgins et al. delineated that a value of 25% represents a low I² statistic, 50% represents a medium I² statistic, and 75% represents a high I² statistic.

      Good form - outlining how heterogeneity was assessed in the meta-analysis.
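
      A one-liner for the I² definition given above, with made-up inputs: the share of variation in effect sizes attributable to heterogeneity rather than chance.

      ```python
      def i_squared(q, df):
          """I^2 = max(0, (Q - df) / Q) * 100, per Higgins et al. (2003)."""
          return max(0.0, (q - df) / q) * 100.0

      print(round(i_squared(30.0, 10), 1))   # 66.7 -> medium-to-high heterogeneity
      ```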

    17. Doing so allows the most accurate comparison possible to isolate only the effect of the agent’s image, rather than other features of the software.

      But what were the effect sizes in Atkinson's? Are voice + agent ped. agents more effective? Could the lack of widely agreed upon support for ped. agents reflect their relative immaturity?

    18. To maintain statistical independence, each individual participant’s score was only considered once during the analysis. For example, Craig, Gholson, and Driscoll (2002) utilized three groups of participants in experiment one (agent with gesture, agent only, and no agent groups). The agent only group’s scores were averaged with the agent with gestures group scores, and then compared to the no agent group. While we acknowledge that learners may perceive and interact with the two agent groups differently, we feel as though this method allowed for the most informed analysis of the agent groups while also ensuring that the no agent group’s participants were not considered twice. Furthermore, where possible, comparisons were made between similar groups to determine if the agent’s image affected learning.

      Good strategy for maintaining statistical independence.

    19. measure cognitive learning outcomes such as retention, transfer, or free recall;

      Retention, transfer & free recall - Looking at this I'm thinking I should read up on how learning is measured.

    20. However, the obvious contradiction between principles still remains: can pedagogical agents support the modality principle without creating a split-attention effect?

      Research problem?

    21. The modality principle states that working memory capacity can be expanded in certain situations by presenting some information visually and other information through auditory means (Low & Sweller, 2005).

      Modality principle vs dual-coding theory?

      Design content for ears & eyes.

    22. Cognitive load theorists have designated three types of cognitive load: germane, intrinsic and extraneous (Paas, Renkl, & Sweller, 2003; Sweller, 2005, 2010). Germane cognitive load can be thought of as “effective cognitive load” (Paas, Renkl, & Sweller, 2003; Paas & Van Merriënboer, 1994; Sweller, 2005, p. 27) which is the result of schemas being constructed and new information being acquired (Sweller, 2005). Intrinsic cognitive load is due to the inherent complexity of the material being learned (Paas et al., 2003; Sweller, 2005). Intrinsic cognitive load can either be low or high depending on the material being learned and its interaction with prior knowledge (Sweller, 2005, 2010). Thus, intrinsic cognitive load may vary for each individual. Finally, Sweller (2005, 2010) suggested that extraneous cognitive load is caused by poor instructional design, and thus is the cognitive load that is not related to the actual material to be learned, but rather its presentation to the learner.

      Types of Cognitive Load

      • germane - "effective cognitive load - making schemas, acquiring new information"
      • intrinsic - "the inherent complexity of the material being learned"
      • extraneous - "caused by poor instructional design & presentation"
    23. Moreover, multiple names for such effects exist, such as “the persona effect (Lester, Towns, & Fitzgerald, 1999), personal agent effect (Moreno et al., 2001), or embodied agent effect” (Atkinson, 2002, p. 508).

      persona effect / personal agent effect / embodied agent effect

      Is the science behind Captivate BS? (disclaimer: I'm hoping it is)

    24. Pedagogical agents that communicated with students using on-screen text facilitated learning more effectively than agents that communicated using narration.

      So why the obsession with video!

    25. The overall mean effect size was moderated by the contextual and methodological features of the studies

      Design of the study dictates the results.

    26. Pedagogical agents are on-screen characters that facilitate instruction.

      So... contrasted with ITS - pedagogical agents teach & ITS tutor. Teaching vs Tutoring.

    27. characters

      As in avatars??

    28. Research on the use of software programs and tools such as pedagogical agents has peaked over the last decade.

      As in they're declining?


    1. The Take-Home Points

      Very conversational and readable paper.

    2. Within the limitations of this article, one recommendation is that the usage of step-based tutoring systems should be increased. Although such tutoring systems are not cheap to develop and maintain, those costs do not depend on the number of tutees. Thus, when a tutoring system is used by a large number of students, its cost per hour of tutoring can be much less than adult one-on-one human tutoring. One implication of this review, again subject to its limitations, is that step-based tutoring systems should be used (typically for homework) in frequently offered or large enrollment STEM courses.

      Scale. First years + APs.

    3. self-generate

      But could the ITS act as a crutch?

    4. tor experiments of Evens and Michael (2006, Table 10.3). The experiment found an effect size of 1.95 comparing human tutors to students who read the textbook instead. The number of subjects in this study was small, so the researchers repeated the experiment a few years later with more subjects and found an effect size of 0.52 (Evens & Michael, 2006, Table 10.4).

      So much noise. Happened on a (small) group of particularly smart students?

    5. Basically, once tutoring has achieved a certain interactive granularity (roughly, step-based tutoring), decreases in interaction granularity apparently provide diminishing and sometimes even negligible returns.

      Assuming you have successfully controlled for everything - user interface complexity, cognitive load, etc.

    6. These findings are consistent with the interaction plateau hypothesis

      But inconsistent with the ICAP (M. T. H. Chi, 2011) hypothesis discussed earlier. Does interaction not actually improve instruction?

    7. interaction plateau hypothesis

      Is it possible that the increased complexity of the more interactive systems results in an increased cognitive load and thus less learning?

    8. To interpret the effect sizes in Table 1, let us first find a meaningful way to display them graphically

      I like this conversational tone as the author works through the ups and downs of various options for visualization.

    9. comparisons

      Comparisons not studies because multiple comparisons within a single study?

    10. Surprised by the failure of the interaction granularity hypothesis, VanLehn et al. (2007) conducted five more experiments.

      Looking for a specific answer.

    11. no-tutoring (reading)

      I bet students have wildly different reading styles - meaning some use better metacog. strategies. I wonder if there's greater variability within scores on no-tutoring versus the tutored treatments.

    12. Nonetheless, it is likely that nearly all the relevant studies have been located and correctly classified, in part because the author has worked for more than three decades in the tutoring research community.

      What's VanLehn's opinion on ITS & the future of edtech, as well as research required?

    13. tutors are significantly more effective than inexperienced tutors, given that they are all subject-matter experts (Chae, Kim, & Glass, 2005; di Eugenio, Kershaw, Lu, Corrigan-Halpern, & Ohlsson, 2006; Fossati, 2008). The Cohen et al. (1982) meta-analysis found no relationship between tutor’s experience and their effectiveness. Clark, Snow, and Shavelson (1976) found that giving subject-matter experts training and experience as tutors did not make them more effective.

      So which is it? Are trained tutors more effective? Are experts better tutors than novices?

    14. , all the studies included here used tasks that have distinguishable right and wrong solutions

      Will these methods be useful for "soft" subjects?

    15. In particular, a GUI user interface should produce the same learning gains as a dialogue-based user interface if they have the same granularity.

      The media don't matter.

    16. In short, learners’ greater control over the dialogue is not a plausible explanation for why human tutors are more effective than computer tutors.

      b/c learners seldom take advantage of asking off-script questions

    17. individualized task selection

      read: personalized learning
