81 Matching Annotations
  1. Mar 2021
  2. Apr 2019
    1. Annotation Profile Follow learners as they bookmark content, highlight selected text, and tag digital resources. Analyze annotations to better assess learner engagement, comprehension and satisfaction with the materials assigned.

      There is already a Caliper profile for "annotation." Do we have any suggestions about the model?
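
      As a concrete sketch, a Caliper AnnotationEvent for a highlight might be serialized roughly as below. This is an illustrative guess at the shape of the v1.1 Annotation Profile; the actor, document, and annotation IDs are invented placeholders, not real Hypothesis identifiers.

```python
# Hypothetical Caliper v1.1 AnnotationEvent for a text highlight.
# All IDs and URLs are made-up placeholders for illustration.
import json

event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
    "type": "AnnotationEvent",
    "actor": {"id": "https://example.edu/users/554433", "type": "Person"},
    "action": "Highlighted",
    "object": {
        "id": "https://example.edu/etexts/201",
        "type": "Document",
        "name": "Assigned Reading, Chapter 3",
    },
    "generated": {
        "id": "https://example.edu/annotations/888",
        "type": "HighlightAnnotation",
        # which segment of text was selected, as character offsets
        "selection": {"type": "TextPositionSelector", "start": 455, "end": 489},
        "selectionText": "the highlighted passage itself",
    },
    "eventTime": "2019-04-01T11:05:01.000Z",
}

print(json.dumps(event, indent=2))
```

      The action could presumably be "Bookmarked" or "Tagged" instead, matching the bookmark/highlight/tag trio the profile description mentions.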

  3. Mar 2019
  4. Feb 2019
    1. Which segments of text are being highlighted?

      Do we capture this data? Can we?

    2. What types of annotations are being created?

      How is this defined?

    3. Who is posting most often? Which posts create the most replies?

      These apply to social annotation as well.

    4. Session Profile

      Are we capturing the right data/how can Hypothesis contribute to this profile?

    5. Does overall time spent reading correlate with assessment scores? Are particular viewing patterns/habits predictive of student success? What are the average viewing patterns of students? Do they differ between courses, course sections, instructors, or student demographics?

      Can H itself capture some of this data? Through the LMS?

  5. Jan 2018
  6. Nov 2017
    1. Mount St. Mary’s use of predictive analytics to encourage at-risk students to drop out to elevate the retention rate reveals how analytics can be abused without student knowledge and consent

      Wow. Not that we need such an extreme case to shed light on the perverse incentives at stake in Learning Analytics, but this surely made readers react.

      On the other hand, there’s a lot more to be said about retention policies. People often act as though they were essential to learning. Retention is important to the institution, but are we treating drop-outs as escapees? One learner in my class (whose major is criminology) was describing the similarities between schools and prisons. It can be hard to dissipate this notion when leaving an institution is perceived as a big failure of that institution. (Plus, Learning Analytics can really feel like the Panopticon.)

      Some comments about drop-outs make it sound like they got no learning done. Meanwhile, some entrepreneurs are encouraging students to leave institutions or to not enroll in the first place. Going back to that important question by @sarahfr: why do people go to university?

    1. Information from this will be used to develop learning analytics software features, which will have these functions: Description of learning engagement and progress, Diagnosis of learning engagement and progress, Prediction of learning progress, and Prescription (recommendations) for improvement of learning progress.

      As good a summary of Learning Analytics as any.

  7. Oct 2017
    1. By giving student data to the students themselves, and encouraging active reflection on the relationship between behavior and outcomes, colleges and universities can encourage students to take active responsibility for their education in a way that not only affects their chances of academic success, but also cultivates the kind of mindset that will increase their chances of success in life and career after graduation.
    2. If students do not complete the courses they need to graduate, they can’t progress.

      The #retention perspective in Learning Analytics: learners succeed by completing courses. Can we think of learning success in other ways? Maybe through other forms of recognition than passing grades?

  8. Aug 2017
    1. This has much in common with a customer relationship management system and facilitates the workflow around interventions as well as various visualisations.  It’s unclear how the at risk metric is calculated but a more sophisticated predictive analytics engine might help in this regard.

      Have yet to notice much discussion of the relationships between SIS (Student Information Systems), CRM (Customer Relationship Management), ERP (Enterprise Resource Planning), and LMS (Learning Management Systems).

  9. Feb 2017
    1. this kind of assessment

      Which assessment? Analytics aren't measures. We need to be more forthcoming with faculty about their role in measuring student learning. See, for example, http://www.sheeo.org/msc

  10. Nov 2016
  11. Oct 2016
    1. Devices connected to the cloud allow professors to gather data on their students and then determine which ones need the most individual attention and care.
    1. For G Suite users in primary/secondary (K-12) schools, Google does not use any user personal information (or any information associated with a Google Account) to target ads.

      In other words, Google does use everyone’s information (Data as New Oil) and can use such things to target ads in Higher Education.

  12. Sep 2016
    1. Data sharing over open-source platforms can create ambiguous rules about data ownership and publication authorship, or raise concerns about data misuse by others, thus discouraging liberal sharing of data.

      Surprising mention of “open-source platforms” here. These issues hardly seem absent from proprietary platforms. Maybe they mean non-institutional platforms (say, social media), where these issues are really pressing. But the wording is quite strange if that is the case.

    2. Activities such as time spent on task and discussion board interactions are at the forefront of research.

      Really? These are far from uncontroversial. For instance, discussion board interactions often call for careful, mixed-method work with an eye to preventing instructor effect and confirmation bias. “Time on task” is almost a codeword for distinctions between models of learning. Research in cognitive science gives very nuanced value to “time spent on task”, while the Malcolm Gladwells of the world misappropriate some research results. A major insight behind Competency-Based Education is that it can allow for some variance in “time on task”. So it’s surprising that this summary puts those two things to the fore.

    3. Research: Student data are used to conduct empirical studies designed primarily to advance knowledge in the field, though with the potential to influence institutional practices and interventions. Application: Student data are used to inform changes in institutional practices, programs, or policies, in order to improve student learning and support. Representation: Student data are used to report on the educational experiences and achievements of students to internal and external audiences, in ways that are more extensive and nuanced than the traditional transcript.

      Ha! The Chronicle’s summary framed these categories somewhat differently. Interesting. To me, the “application” part is really about student retention. But maybe that’s a bit of a cynical reading, based on an over-emphasis in the Learning Analytics sphere towards teleological, linear, and insular models of learning. Then, the “representation” part sounds closer to UDL than to learner-driven microcredentials. Both approaches are really interesting and chances are that the report brings them together. Finally, the Chronicle made it sound as though the research implied here were less directed. The mention that it has “the potential to influence institutional practices and interventions” may be strategic, as applied research meant to influence “decision-makers” is more likely to sway them than the type of exploratory research we so badly need.

    1. often private companies whose technologies power the systems universities use for predictive analytics and adaptive courseware
    2. the use of data in scholarly research about student learning; the use of data in systems like the admissions process or predictive-analytics programs that colleges use to spot students who should be referred to an academic counselor; and the ways colleges should treat nontraditional transcript data, alternative credentials, and other forms of documentation about students’ activities, such as badges, that recognize them for nonacademic skills.

      Useful breakdown. Research, predictive models, and recognition are quite distinct from one another, and the approaches to data they imply are quite different. In a way, the “personalized learning” model at the core of the second topic is close to the Big Data attitude (collect all the things and sense will come through eventually), with corresponding ethical problems. Though projects vary greatly, research has a much more solid base in both ethics and epistemology than the kind of Big Data approach used by technocentric outlets.

      The part about recognition, though, opens the most interesting door. Microcredentials and badges are part of a broader picture. The data shared in those cases need not be so comprehensive, and learners have a lot of agency in the matter. In fact, when Charles Tsai, then at Ashoka, interviewed Mozilla executive director Mark Surman about badges, the message was quite clear: badges are a way to rethink education as a learner-driven “create your own path” adventure.

      The contrast between the three models reveals a lot: from the abstract world of research, to the top-down models of Minority Report-style predictive educating, all the way to a form of heutagogy. Lots to chew on.

  13. Jul 2016
    1. what do we do with that information?

      Interestingly enough, a lot of teachers either don’t know that such data might be available or perceive very little value in monitoring learners in such a way. But a lot of this can be negotiated with learners themselves.

    2. E-texts could record how much time is spent in textbook study. All such data could be accessed by the LMS or various other applications for use in analytics for faculty and students.”
    3. not as a way to monitor and regulate
    1. which applicants are most likely to matriculate
    2. Data collection on students should be considered a joint venture, with all parties — students, parents, instructors, administrators — on the same page about how the information is being used.
    3. "We know the day before the course starts which students are highly unlikely to succeed,"

      Easier to do with a strict model for success.

  14. Jun 2016
    1. nothing we did is visible to our analytics systems

      If it’s not counted, does it count?

    1. It shifted its work to faculty-driven initiatives.

      DIY, grassroots, bottom-up… but not learner-driven.

    2. learning agenda on learning analytics
    3. Learning analytics cannot be left to the researchers, IT leadership, the faculty, the provost or any other single sector alone.
    4. An executive at a large provider of digital learning tools pushed back against what he saw as Thille’s “complaint about capitalism.”

      Why so coy?

      R.G. Wilmot Lampros, chief product officer for Aleks, says the underlying ideas, referred to as Knowledge Space Theory, were developed by professors at the University of California at Irvine and are in the public domain. It's "there for anybody to vet," he says. But McGraw-Hill has no more plans to make its analytics algorithms public than Google would for its latest search algorithm.

      "I know that there are a few results that our customers have found counterintuitive," Mr. Lampros says, but the company's own analyses of its algebra products have found they are 97 percent accurate in predicting when a student is ready to learn the next topic.

      As for Ms. Thille's broader critique, he is unpersuaded. "It's a complaint about capitalism," he says. The original theoretical work behind Aleks was financed by the National Science Foundation, but after that, he says, "it would have been dead without business revenues."

      Ms. Thille stops short of decrying capitalism. But she does say that letting the market alone shape the future of learning analytics would be a mistake.

    5. a debate over who should control the field of learning analytics

      Who Decides?

    1. What teachers want in a data dashboard

      Though much of it may sound trite and the writeup is somewhat awkward (diverse opinions strung together haphazardly), there’s something here that can help us focus on distinct attitudes towards Learning Analytics. Much of it hinges on what may or may not be measured. One might argue that learning happens outside the measurement parameters.

    2. timely

      Time-sensitive, mission-critical, just-in-time, realtime, 24/7…

    3. Data “was something you would use as an autopsy when everything was over,” she said.

      The autopsy/biopsy distinction can indeed be useful here, leading to insight, especially if it’s not about which one is better. A biopsy can help prevent something in an individual patient, but it’s also an invasive, potentially risky procedure. An autopsy can famously identify a “cause of death” but, more broadly, it’s how we’ve learnt a lot about health in general, not just about individual patients. So, while Teamann frames it as a severe limitation, the “autopsy” part of Learning Analytics could do a lot to bring us beyond the individual focus.

    1. While generally misused today, analytics can (theoretically) be used to predict and personalize many facets of teaching & learning, inc. pace, complexity, content, and more.
  15. Apr 2016
  16. Mar 2016
  17. Dec 2015
    1. focus groups where students self-report the effectiveness of the materials are common, particularly among textbook publishers

      Paving the way for learning analytics.

    2. It’s educators who come up with hypotheses and test them using a large data set.

      And we need an ever-larger data set, right?

    3. a good example of the kind of insight that big data is completely blind to

      Not sure it follows directly, but also important to point out.

    1. I will investigate the details on this, including the relevant contractual clauses, when I get the chance.
    2. taking a swipe at Knewton
    3. they are making a bet against the software as a replacement for the teacher and against big data
    4. a set of algorithms designed to optimize the commitment of knowledge to long-term memory
    1. The most popular project for the MUA to tackle was Learning Analytics

      Although Dougiamas claims Moodle already has what is needed in the form of logs and reports: no need for Caliper or xAPI.

    1. Lambda Solutions [Corrected.]

      Oh? They were quite present at MoodleMoot. Wonder what their ties are. Clearly, their solution isn’t free software. Nor is it pushing Open Standards for Learning Analytics.

  18. Nov 2015
    1. grows exponentially.

      As we get into “Big Data”, individual datapoints become less important.

    2. What is the correlation between levels of student responses to each other and outcomes?

      Levels and types of responses. Just read such an analysis, based on Brookfield and Preskill’s “Conversational Moves”.

    3. read by any Caliper-compliant system

      Or any Learning Record Store.
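
      For comparison, what a Learning Record Store ingests is an xAPI “statement”: a small actor/verb/object JSON document. A minimal sketch follows; the verb IRI is ADL’s standard “commented” verb, but the actor, activity, and annotation text are invented placeholders.

```python
# Hypothetical xAPI statement for an annotation, shaped after the
# xAPI 1.0.3 statement structure. Names and URLs are placeholders.
import json

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.edu",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/commented",
        "display": {"en-US": "commented"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.edu/readings/chapter-3",
        "definition": {"name": {"en-US": "Assigned Reading, Chapter 3"}},
    },
    "result": {"response": "The annotation text itself goes here."},
    "timestamp": "2015-11-20T15:00:00Z",
}

# An LRS would typically receive this as a POST to its /statements endpoint.
print(json.dumps(statement, indent=2))
```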

    4. Caliper WordPress plugin

      How long before we get such a thing?

    5. most blogs have a feature called “pingbacks,”

      Annotations should have “pingbacks”, too. But the most important thing is how to process those later on. We do get into the Activity Streams behind much Learning Analytics.
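
      One way an annotation “pingback” could travel is as an Activity Streams 2.0 activity, the format underlying much of this Learning Analytics plumbing. A sketch using the W3C AS2 vocabulary; every URL and name here is an invented placeholder, not an existing Hypothesis or WordPress feature.

```python
# Hypothetical Activity Streams 2.0 payload notifying a blog that one
# of its posts has been annotated. Placeholders throughout.
import json

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": {"type": "Person", "name": "Example Annotator"},
    "object": {
        "type": "Note",
        "content": "My annotation on this post.",
        # points back at the annotated post, pingback-style
        "inReplyTo": "https://blog.example.org/2015/11/some-post",
    },
    "published": "2015-11-20T15:00:00Z",
}

print(json.dumps(activity, indent=2))
```

      Processing these later is then a matter of consuming a stream of such activities, which is exactly what an Activity Streams-based analytics pipeline does.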

    1. Personal Learning Record will define how to represent, capture and leverage user activity, including ratings, test results and performance measures in a distributed learning and work environment.
  19. Sep 2015
    1. Commercial publishers and content producers say there's reason to doubt the quality of open resources

      Have they demonstrated so clearly that their textbooks have enhanced learning? Oh, wait. They set the criteria by which we assess learning and push for their own brand of Learning Analytics, so…

  20. Aug 2015