203 Matching Annotations
  1. Jul 2022
  2. May 2022
    1. facilitate access to data from MES systems, particularly with respect to the success of targeted groups of students (for example, students with disabilities, Indigenous students, students from immigrant backgrounds, first-generation students, international students)
  3. Apr 2022
    1. Adam Kucharski. (2021, February 6). COVID outlasts another dashboard... Https://t.co/S9kLCva3WQ Illustrates the importance of incentivising sustainable outbreak analytics—If a tool is useful, people will come to rely on it, which creates a dilemma if it can’t be maintained. [Tweet]. @AdamJKucharski. https://twitter.com/AdamJKucharski/status/1357970753199763457

  4. Feb 2022
    1. Contemporary digital learning technologies generate, store, and share terabytes of learner data—which must flow seamlessly and securely across systems. To enable interoperability and ensure systems can perform at-scale, the ADL Initiative is developing the Data and Training Analytics Simulated Input Modeler (DATASIM), a tool for producing simulated learner data that can mimic millions of diverse user interactions. [Image: DATASIM application screen capture.] DATASIM is an open-source platform for generating realistic Experience Application Programming Interface (xAPI) data at very large scale. The xAPI statements model realistic behaviors for a cohort of simulated learner/users, producing tailorable streams of data that can be used to benchmark and stress-test systems. DATASIM requires no specialized hardware, and it includes a user-friendly graphical interface that allows precise control over the simulation parameters and learner attributes.
    1. The video profile of the xAPI was created to identify and standardize the common types of interactions that can be tracked in any video player.
  5. Jan 2022
    1. xAPI Wrapper Tutorial Introduction This tutorial will demonstrate how to integrate xAPI Wrapper with existing content to capture and dispatch learning records to an LRS.

      roll your own JSON rather than using a service like xapi.ly
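      A minimal sketch of the pattern such a tutorial covers, assuming the ADL xapiwrapper script is already loaded on the page; the endpoint, credentials, verb, and activity IDs below are placeholders, not values from the tutorial:

      ```js
      // Configure the wrapper to point at an LRS (placeholder endpoint/credentials).
      ADL.XAPIWrapper.changeConfig({
        "endpoint": "https://lrs.example.com/xapi/",
        "user": "apiKey",
        "password": "apiSecret"
      });

      // Build and dispatch a simple statement.
      var stmt = {
        "actor":  { "mbox": "mailto:learner@example.com", "name": "Example Learner" },
        "verb":   { "id": "http://adlnet.gov/expapi/verbs/launched",
                    "display": { "en-US": "launched" } },
        "object": { "id": "https://example.com/activities/intro-module",
                    "definition": { "name": { "en-US": "Intro Module" } } }
      };

      ADL.XAPIWrapper.sendStatement(stmt, function (resp) {
        console.log("LRS responded with status " + resp.status);
      });
      ```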

    1. Storyline 360 xAPI Updates (Winter 2021): Exciting xAPI update for Storyline users! Articulate has updated Storyline 360 to support custom xAPI statements alongside a few other xAPI-related updates. (These changes will likely come to Storyline 3 soon, though not as of November 30, 2021.)
    1. Making xAPI Easier Use the xapi.ly® Statement Builder to get more and better xAPI data from elearning created in common authoring platforms. xapi.ly helps you create the JavaScript triggers to send a wide variety of rich xAPI statements to the Learning Record Store (LRS) of your choice.

      criteria for use and pricing listed on site

    1. Here you will find a well curated list of activities, activity types, attachments types, extensions, and verbs. You can also add to the registry and we will give you a permanently resolvable URL - one less thing you have to worry about. The registry is a community resource, so that we can build together towards a working Tin Can data ecosystem.

      A participant in the Spring 2022 xAPI cohort suggested that 'Registry is not maintained, and they generally suggest using the Vocab Server (which is also the data source for components in the Profile Server).'

  6. www.json.org
    1. JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate. It is based on a subset of the JavaScript Programming Language Standard ECMA-262 3rd Edition - December 1999. JSON is a text format that is completely language independent but uses conventions that are familiar to programmers of the C-family of languages, including C, C++, C#, Java, JavaScript, Perl, Python, and many others. These properties make JSON an ideal data-interchange language.
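      A tiny illustration of the claim above, written in plain JavaScript; the field names are arbitrary examples:

      ```js
      // The same JSON text is easy for a person to write and for a machine to parse.
      const text = '{"name": "Ada", "skills": ["xAPI", "SQL"], "active": true}';

      const obj = JSON.parse(text);                   // JSON text -> JavaScript object
      console.log(obj.skills[0]);                     // "xAPI"

      console.log(JSON.stringify(obj, null, 2));      // object -> formatted JSON text
      ```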
    1. The xAPI Vocabulary and Profile Server is a curated list of xAPI vocabulary concepts and profiles maintained by the xAPI community.
    1. xAPI Foundations Leverage xAPI to develop more comprehensive learning experiences. This on-demand e-learning course is available online immediately after purchase. Within the course, you will have the opportunity to personalize your learning by viewing videos, interacting with content, hearing from experts, and planning for your future. You will have access to the course(s) for 12 months from your registration date.
    1. Learning program analytics seek to understand how an overall learning program is performing. A learning program typically encompasses many learners and many learning experiences (although it could easily contain just a few).
    2. Learning experience analytics seek to understand more about a specific learning activity. 
    3. Learner analytics seek to understand more about a specific person or group of people engaged in activities where learning is one of the outputs.
    4. There are many types of learning analytics and things you can measure and analyze. We segment these analytics into three categories: learning experience analytics, learner analytics, and learning program analytics.
    5. Learning analytics is the measurement, collection, analysis, and reporting of data about learners, learning experiences, and learning programs for purposes of understanding and optimizing learning and its impact on an organization’s performance.
    1. Social learning: This is a feature the LXP has really expanded. Although some of the more advanced LMSs boast social features, the Learning Experience Platform is better formatted for them and far more likely to provide them. Firstly, the LXP caters for a broader range of learning options than the LMS. It's usually not difficult to use your LXP to set up an online class or webinar. LXPs also provide a chance for learners to share their opinions on content: liking, sharing, or commenting on an article or online class. Users can follow and interact with others, above or below them in the organisation. Sometimes LXPs even provide people curation, matching learners and mentors. Users also have a chance to make the LXP their own by setting up a personalised profile page. It might seem low-priority, but a sense of ownership usually corresponds with a boost in engagement. As well prepared as Learning & Development leaders are, there'll be things that people doing a job every day will know that you won't. They can use their personal experience to recommend or create learning content in an LXP. This helps on-the-job learning and gives employees a greater chance of picking up the skills they need to progress in their role.
    1. How, exactly, can we design for engagement and conversation? In comparison to content-focused educational technology such as the Learning Management System (LMS), our (not so secret) recipe is this:
       1. Eliminate the noise
       2. Bring people into the same room
       3. Make conversation easy and meaningful
       4. Create modularity and flexibility

      Spring 2022 #xAPICohort resource

    1. To learn more, there are two books I highly recommend. "Digital Body Language," by Steve Woods, and "Big Data: Does Size Matter?" by Timandra Harkness. If you would like a deeper dive into data-driven learning design, there's a free e-book and toolkit you can download from my blog. You can also reach me there at loriniles.com. Remember, start with the data you have readily available. Data does not have to be intimidating Excel spreadsheets. Be prepared with data every single time you meet with your stakeholders. And before you design any strategy, ask what data you have to support every decision. You're on an exciting journey to becoming a more well-rounded HR leader. Get started, and good luck.

      Spring 2022 #xAPICohort resource

    1. LXPs and LMSs accomplish two different objectives. An LMS enables administrators to manage learning, while an LXP enables learners to explore learning. Organizations may have an LXP, an LMS or both. If they have both, they may use the LXP as the delivery platform and the LMS to handle the administrative work.

      Spring 2022 #xAPICohort resource

    2. 4. Highly intuitive interfaces

      Spring 2022 #xAPICohort resource

    3. 3. Supports various types of learning

      Spring 2022 #xAPICohort resource

    4. 2. Rich learning experience through deeper personalization

      Spring 2022 #xAPICohort resource

    5. Here are some other characteristics that set LXPs apart from LMS’s: 1. Extensive integration capabilities

      Spring 2022 #xAPICohort resource

    6. The gradual shift from one-time payment to a cloud-based, subscription-based business has led learning platforms to also offer Software-as-a-Service (SaaS) models to their clients. As such content becomes part of digital learning networks, it is integrated into commercial learning solutions and then becomes part of broader LXPs. Looking back at all these developments, from how new data consumption platforms evolved, to the emergence of newer content development approaches and publishing channels, it's easy to understand why LXPs naturally evolved as a result of DXPs.

      Spring 2022 #xAPICohort resource

    7. The growth of social learning has also created multiple learning opportunities for people to share their knowledge and expertise. As they socialize on these platforms (Facebook, LinkedIn, YouTube, Instagram and many others), individuals and groups learn from each other through various types of social interactions – sharing content, exchanging mutually-liked links to external content. LXPs leverage similar approaches in corporate learning environments, and scale learning experiences and opportunities with the kind of user-generated content found in social and community-based learning.

      Spring 2022 #xAPICohort resource

    8. Integrations are also possible with AI. If you integrate LXP and your Human Resource Management (HRM) system, the corporate intranet, your Learning Record Store (LRS) or the enterprise Customer Relationship Management (CRM) system, and collect the data from all of them, you can identify many different trends and patterns. And based on those patterns, all stakeholders can make informed training and learning decisions. Standard LMS’s cannot do any of that. And though LMS developers are trying to get there, they’ve still got a long way to go to bridge the functionality gap with LXPs. As a result, there was an even greater impetus to the emergence of LXPs.

      Spring 2022 #xAPICohort resource

    9. Another driver for the emergence of LXPs is the standards adopted by modern-day LMSs – which are SCORM-based. While SCORM does “get results”, it is limited in what it can do. One of the main goals of any corporate learning platform is to connect learning with on-the-job performance. And SCORM makes it very difficult to determine how effective the courses really are, or how learners benefit from these courses. Experience API (xAPI) on the other hand – the standard embraced by LXPs – offers significantly enhanced capabilities to the platform. When you use xAPI, you can follow different parameters both while you learn and while you perform on-the-job tasks. And, what's even better, is that you can do that on a variety of digital devices.

      Spring 2022 #xAPICohort resource

    10. LMS’s primarily served as a centralized catalog of corporate digital learning assets. Users of those platforms often found it hard to navigate through vast amounts of content to find an appropriate piece of learning. LMS providers sought to bridge that gap by introducing smart searches and innovative querying features – but that didn’t entirely address the core challenge: LMS’s were still like huge libraries where you should only go to when you have an idea of what you need, and then spend inordinate amounts of time searching for what you specifically want!

      Spring 2022 #xAPICohort resource

    1. Experience API (xAPI) is a tool for gaining insight into how learners are using, navigating, consuming, and completing learning activities. In this course, Anthony Altieri provides an in-depth look at using xAPI for learning projects, including practical examples that show xAPI in action.

      Spring 2022 #xAPICohort resource

    1. The xAPI Learning Cohort is a free, vendor-neutral, 12-week learning-by-doing project-based team learning experience about the Experience API. (Yep, you read that right – free!) It’s an opportunity for those who are brand new to xAPI and those who are looking to experiment with it to learn from each other and from the work itself.

      Spring 2022 #xAPICohort resource

    1. If your current course development tools don't create the activity statements you need, keep in mind that sending xAPI statements requires only simple JavaScript, so many developers are coding their own form of statements from scratch.

      Spring 2022 #xAPICohort resource
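      A minimal sketch of that roll-your-own approach in plain JavaScript, assuming a browser with fetch(); the LRS endpoint and Basic-auth credentials are placeholders:

      ```js
      // A hand-rolled statement: actor / verb / object, nothing more.
      const statement = {
        actor:  { mbox: "mailto:learner@example.com", name: "Example Learner" },
        verb:   { id: "http://adlnet.gov/expapi/verbs/completed",
                  display: { "en-US": "completed" } },
        object: { id: "https://example.com/activities/safety-quiz" }
      };

      // POST it to the LRS statements resource (placeholder endpoint/credentials).
      fetch("https://lrs.example.com/xapi/statements", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Experience-API-Version": "1.0.3",                  // required by the xAPI spec
          "Authorization": "Basic " + btoa("apiKey:apiSecret")  // placeholder credentials
        },
        body: JSON.stringify(statement)
      })
        .then(resp => resp.json())
        .then(ids => console.log("Stored statement id(s):", ids));
      ```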

    2. An xAPI activity statement records experiences in an “I did this” format: an actor (who did it), a verb (what was done), an object (what it was done to), and a variety of contextual data, including score, rating, language, and almost anything else you want to track. Some learning experiences are tracked with a single activity statement. In other instances, dozens, if not hundreds, of activity statements can be generated during the course of a learning experience. Activity statements are up to the instructional designer and are driven by the need for granularity in reporting.

      Spring 2022 #xAPICohort resource
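      A sketch of the actor/verb/object structure described above, with optional result and context data added; all identifiers and scores are made-up placeholders:

      ```js
      // "I did this": actor (who), verb (what was done), object (done to what),
      // plus optional result and context; everything here is a placeholder.
      const statement = {
        actor:  { name: "Example Learner", mbox: "mailto:learner@example.com" },
        verb:   { id: "http://adlnet.gov/expapi/verbs/answered",
                  display: { "en-US": "answered" } },
        object: { id: "https://example.com/activities/quiz-1/question-3",
                  definition: { name: { "en-US": "Question 3" },
                                type: "http://adlnet.gov/expapi/activities/cmi.interaction" } },
        result:  { score: { scaled: 0.8 }, success: true, completion: true },
        context: { contextActivities: { parent: [{ id: "https://example.com/activities/quiz-1" }] },
                   language: "en-US" }
      };
      ```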

    3. xAPI is a simple, lightweight way to store and retrieve records about learners and share these data across platforms. These records (known as activity statements) can be captured in a consistent format from any number of sources (known as activity providers) and they are aggregated in a learning record store (LRS). The LRS is analogous to the SCORM database in an LMS. The x in xAPI is short for "experience," and implies that these activity providers are not just limited to traditional AICC- and SCORM-based e-learning. With experience API or xAPI you can track classroom activities, usage of performance support tools, participation in online communities, mentoring discussions, performance assessment, and actual business results. The goal is to create a full picture of an individual's learning experience and how that relates to her performance.

      Spring 2022 #xAPICohort resource

    1. For any xAPI implementation, these five things need to happen:
       1. A person does something (e.g., watches a video).
       2. That interaction is tracked by an application.
       3. Data about the interaction is sent to an LRS.
       4. The data is stored in the LRS and made available for use.
       5. The data is used for reporting and personalizing a learning experience.

       In most implementations, multiple learner actions are tracked by multiple applications, and data may be used in a number of ways. In all cases, there's an LRS at the center receiving, storing, and returning the data as required.

      Spring 2022 #xAPICohort resource
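      A sketch of step 5 (using the stored data), assuming the same placeholder LRS and credentials as in the earlier sketches; the xAPI statements resource can be filtered by agent, verb, activity, and time when pulling data back out for reporting:

      ```js
      // Query the LRS for one learner's "completed" statements since a given date.
      const params = new URLSearchParams({
        agent: JSON.stringify({ mbox: "mailto:learner@example.com" }),
        verb: "http://adlnet.gov/expapi/verbs/completed",
        since: "2022-01-01T00:00:00Z"
      });

      fetch("https://lrs.example.com/xapi/statements?" + params, {
        headers: {
          "X-Experience-API-Version": "1.0.3",
          "Authorization": "Basic " + btoa("apiKey:apiSecret")  // placeholder credentials
        }
      })
        .then(resp => resp.json())
        .then(data => console.log(data.statements.length + " matching statements"));
      ```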

    2. Experience API (also xAPI or Tin Can API) is a learning technology interoperability specification that makes it easier for learning technology products to communicate and work with one another.

      Spring 2022 #xAPICohort resource

    1. Instructional DesignerWhen implementing xAPI across an organization, there isn’t usually a need for instructional designers to take on new roles or duties. However, they may experience a learning curve that presents an opportunity to understand how to best package and effectively deploy xAPI in newly created content. Your learning designer(s) is a key partner in getting good data, so keep them in the loop regarding your strategy, goals, and expected outcomes.

      Spring 2022 #xAPICohort resource

  7. Nov 2021
  8. Oct 2021
    1. Analytics is the key to understanding your app's users: Where are they spending the most time in your app? When do they churn? What actions are they taking?
    1. How to Install the DigitalOcean Metrics Agent

      DigitalOcean Monitoring

      DigitalOcean Monitoring is a free, opt-in service that gathers metrics about Droplet-level resource utilization. It provides additional Droplet graphs and supports configurable metrics alert policies with integrated email and Slack notifications to help you track the operational health of your infrastructure.

  9. Aug 2021
    1. The Recorded Future system contains many components, which are summarized in the following diagram: The system is centered around the database, which contains information about all canonical events and entities, together with information about event and entity references, documents containing these references, and the sources from which these documents were obtained.
    2. We have decided on the term “temporal analytics” to describe the time oriented analysis tasks supported by our systems

      RF have decided on the term “temporal analytics” to describe the time oriented analysis tasks supported by our systems

  10. Jul 2021
    1. https://blog.jonudell.net/2021/07/21/a-virtuous-cycle-for-analytics/

      Some basic data patterns and questions occur in almost any business setting and having a toolset to handle them efficiently for both the end users and the programmers is an incredibly important function.

      Too often I see businesses that don't own their own data or are contracting out the programming portion (or both).

  11. Mar 2021
    1. Plausible is a lightweight, self-hostable, and open-source website analytics tool. No cookies and fully compliant with GDPR, CCPA and PECR. Made and hosted in the EU 🇪🇺

      Built by

      Introducing https://t.co/mccxgAHIWo 🎉
      📊 Simple, privacy-focused web analytics
      👨‍💻 Stop big corporations from collecting data on your users
      👉 Time to ditch Google Analytics for a more ethical alternative #indiehackers #myelixirstatus #privacy

      — Uku Täht (@ukutaht) April 29, 2019

  12. Feb 2021
  13. Nov 2020
    1. Spotting a gem in it takes something more. Without domain knowledge, business acumen, and strong intuition about the practical value of discoveries—as well as the communication skills to convey them to decision-makers effectively—analysts will struggle to be useful. It takes time for them to learn to judge what’s important in addition to what’s interesting. You can’t expect them to be an instant solution to charting a course through your latest crisis. Instead, see them as an investment in your future nimbleness.

      This is where the expectations of today's organizations differ, leading to a big gap between what is expected of analysts and of analytics as a function.

  14. Oct 2020
    1. Conclusions

      Trying to find a real, practical use for HR analytics, the following could be considered:

      1. knowing the average revenue generated per employee, as a measure of an organization's efficiency.
      2. knowing the offer acceptance rate, that is, the number of formal job offers accepted out of the total number of job offers made, in order to redefine the company's talent acquisition strategy.
      3. knowing training expenses per employee, in order to re-evaluate training spend per employee.
      4. knowing training effectiveness, by analyzing performance improvement, in order to evaluate how effective a training program is.
      5. knowing the voluntary and involuntary turnover rates, in order to identify the employee experiences that lead to voluntary attrition, or to develop a plan to improve the quality of hires and avoid involuntary turnover.
      6. knowing time-to-recruit and time-to-hire, in order to reduce them.
      7. knowing absenteeism, as a productivity metric or as an indicator of employees' job satisfaction.
      8. knowing human capital risk, in order to identify the absence of a specific skill needed to fill a new type of job, or the lack of qualified employees to fill leadership positions.
    2. HR Analytics Models

      There is a wide variety of software for HR analytics, including Sisense, Domo, ClicData, and ActivTrak. But to use it, an HR department must be specifically trained to do so.

    3. Lessons learned

      Using this tool requires the participation of people who know the company's organizational processes, as well as experts in psychometrics and statistical analytics.

    4. Human Resources Analytics (HR Analytics

      HR Analytics is a methodology for gathering employee data in order to analyze it and find evidence for strategic decision-making.

    1. Om Malik writes about a renewed focus on his own blog: My first decree was to eschew any and all analytics. I don’t want to be driven by “views,” or what Google deems worthy of rank. I write what pleases me, not some algorithm. Walking away from quantification of my creativity was an act of taking back control.

      I love this quote.

    2. What I dwell on the most regarding syndication is the Twitter stuff. I look back at the analytics on this site at the end of every year and look at where the traffic came from — every year, Twitter is a teeny-weeny itty-bitty slice of the pie. Measuring traffic alone, that’s nowhere near the amount of effort we put into making the stuff we’re tweeting there. I always rationalize it to myself in other ways. I feel like Twitter is one of the major ways I stay updated with the industry and it’s a major source of ideas for articles.

      So it sounds like Twitter isn't driving traffic to his website, but it is providing ideas and news. Given this I would syndicate content to Twitter as easily and quickly as possible, use webmentions to deal with the interactions and then just use the Twitter timeline for reading and consuming and nothing else.

  15. Jul 2020
  16. Jun 2020
    1. The bit.ly links that are created are also very diverse. It's harder to summarise this without offering a list of 100,000 URLs — but suffice it to say that there are a lot of pages from the major web publishers, lots of YouTube links, lots of Amazon and eBay product pages, and lots of maps. And then there is a long, long tail of other URLs. When a pile-up happens in the social web it is invariably triggered by link-sharing, and so bit.ly usually sees it in the seconds before it happens.

      link shortener: rich insight into web activity...

  17. May 2020
    1. You should then also create a new View and apply the following filter so as to be able to tell apart which domain a particular pageview occurred on:
       Filter Type: Custom filter > Advanced
       Field A --> Extract A: Hostname = (.*)
       Field B --> Extract B: Request URI = (.*)
       Output To --> Constructor: Request URI = $A1$B1
  18. Mar 2020
    1. If you want to disable Google Analytics-tracking for this site, please click here: [delete_cookies]. The cookie which enabled tracking on Google Analytics is immediately removed.

      This is incomplete. The button is missing.
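      For context, a hedged sketch of the opt-out mechanism such a button would normally trigger; 'UA-XXXXXXX-Y' is a placeholder for the site's actual property ID:

      ```js
      // Opt the current visitor out of Google Analytics (Universal Analytics).
      function disableGoogleAnalytics() {
        var gaProperty = "UA-XXXXXXX-Y";                          // placeholder property ID
        document.cookie = "ga-disable-" + gaProperty +
          "=true; expires=Thu, 31 Dec 2099 23:59:59 UTC; path=/"; // remember the choice
        window["ga-disable-" + gaProperty] = true;                // documented opt-out flag
      }
      ```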

    1. Google Analytics created an option to remove the last octet (the last group of 3 numbers) from your visitor's IP-address. This is called 'IP Anonymization'. Although this isn't complete anonymization, the GDPR demands you use this option if you want to use Analytics without prior consent from your visitors. Some countries (e.g. Germany) demand this setting to be enabled at all times.
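      A minimal sketch of enabling this setting, assuming the Universal Analytics gtag.js snippet (or the older analytics.js library); 'UA-XXXXXXX-Y' is a placeholder property ID:

      ```js
      // gtag.js (Universal Analytics): anonymize the IP for every hit on this property.
      gtag('config', 'UA-XXXXXXX-Y', { 'anonymize_ip': true });

      // analytics.js equivalent:
      // ga('set', 'anonymizeIp', true);
      ```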
    1. Do you consider visitor interaction with the home page video an important engagement signal? If so, you would want interaction with the video to be included in the bounce rate calculation, so that sessions including only your home page with clicks on the video are not calculated as bounces. On the other hand, you might prefer a more strict calculation of bounce rate for your home page, in which you want to know the percentage of sessions including only your home page regardless of clicks on the video.
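      A sketch of that trade-off, assuming gtag.js (Universal Analytics) and a hypothetical #home-video element: sending the click as a regular event keeps such sessions out of the bounce count, while non_interaction: true preserves the stricter bounce definition:

      ```js
      // Track plays of a (hypothetical) home-page video element.
      document.querySelector('#home-video').addEventListener('play', function () {
        gtag('event', 'video_play', {
          event_category: 'home-page-video',
          non_interaction: true   // true = video plays do NOT prevent a bounce
        });
      });
      ```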
    1. Here you need to decide if you want to take a cautious road and put it into an “anonymous” mode, or go all out and collect user-identifiable data. If you go with anonymous, you may not need consent.
  19. Feb 2020
    1. One important aspect of critical social media research is the study of not just ideologies of the Internet but also ideologies on the Internet. Critical discourse analysis and ideology critique as research method have only been applied in a limited manner to social media data. Majid KhosraviNik (2013) argues in this context that ‘critical discourse analysis appears to have shied away from new media research in the bulk of its research’ (p. 292). Critical social media discourse analysis is a critical digital method for the study of how ideologies are expressed on social media in light of society’s power structures and contradictions that form the texts’ contexts.
    2. It has, for example, been common to study contemporary revolutions and protests (such as the 2011 Arab Spring) by collecting large amounts of tweets and analysing them. Such analyses can, however, tell us nothing about the degree to which activists use social and other media in protest communication, what their motivations are to use or not use social media, what their experiences have been, what problems they encounter in such uses and so on. If we only analyse big data, then the one-sided conclusion that contemporary rebellions are Facebook and Twitter revolutions is often the logical consequence (see Aouragh, 2016; Gerbaudo, 2012). Digital methods do not outdate but require traditional methods in order to avoid the pitfall of digital positivism. Traditional sociological methods, such as semi-structured interviews, participant observation, surveys, content and critical discourse analysis, focus groups, experiments, creative methods, participatory action research, statistical analysis of secondary data and so on, have not lost importance. We do not just have to understand what people do on the Internet but also why they do it, what the broader implications are, and how power structures frame and shape online activities.
    3. Challenging big data analytics as the mainstream of digital media studies requires us to think about theoretical (ontological), methodological (epistemological) and ethical dimensions of an alternative paradigm

      Making the case for the need for digitally native research methodologies.

    4. Who communicates what to whom on social media with what effects? It forgets users’ subjectivity, experiences, norms, values and interpretations, as well as the embeddedness of the media into society’s power structures and social struggles. We need a paradigm shift from administrative digital positivist big data analytics towards critical social media research. Critical social media research combines critical social media theory, critical digital methods and critical-realist social media research ethics.
    5. de-emphasis of philosophy, theory, critique and qualitative analysis advances what Paul Lazarsfeld (2004 [1941]) termed administrative research, research that is predominantly concerned with how to make technologies and administration more efficient and effective.
    6. Big data analytics’ trouble is that it often does not connect statistical and computational research results to a broader analysis of human meanings, interpretations, experiences, attitudes, moral values, ethical dilemmas, uses, contradictions and macro-sociological implications of social media.
    7. Such funding initiatives privilege quantitative, computational approaches over qualitative, interpretative ones.
  20. Jan 2020
  21. Nov 2019
  22. Sep 2019
    1. “But then again,” a person who used information in this way might say, “it’s not like I would be deliberately discriminating against anyone. It’s just an unfortunate proxy variable for lack of privilege and proximity to state violence.”

      In the current universe, Twitter also makes a number of predictions about users that could be used as proxy variables for economic and cultural characteristics. It can display things like your audience's net worth as well as indicators commonly linked to political orientation. Triangulating some of this data could allow for other forms of intended or unintended discrimination.

      I've already been able to view a wide range of (possibly spurious) information about my own reading audience through these analytics. On September 9th, 2019, I started a Twitter account for my 19th Century Open Pedagogy project and began serializing installments of a critical edition, The Woman in White: Grangerized. The @OPP19c Twitter account has 62 followers as of September 17th.

      Having followers means I have access to an audience analytics toolbar. Some of the account's followers are nineteenth-century studies or pedagogy organizations rather than individuals. Twitter tracks each account as an individual, however, and I was surprised to see some of the demographics Twitter broke them down into. (If you're one of these followers: thank you and sorry. I find this data a bit uncomfortable.)

      Within this dashboard, I have a "Consumer Buying Styles" display that identifies categories such as "quick and easy," "ethnic explorers," "value conscious," and "weight conscious." These categories strike me as equal parts confusing and problematic.

      I have a "Marital Status" toolbar alleging that 52% of my audience is married and 49% single.

      I also have a "Home Ownership" chart. (I'm presuming that the Elizabeth Gaskell House Museum's Twitter is counted as an owner...)

      ....and more

  23. Jul 2019
    1. We translate all patient measurements into statistics that are predictive of unsuccessful discharge

      An analytics pipeline, roughly what we would also need to put together by the end.

  24. Apr 2019
    1. Annotation Profile Follow learners as they bookmark content, highlight selected text, and tag digital resources. Analyze annotations to better assess learner engagement, comprehension and satisfaction with the materials assigned.

      There is already a Caliper profile for "annotation." Do we have any suggestions about the model?

  25. Mar 2019
  26. Feb 2019
    1. Which segments of text are being highlighted?

      Do we capture this data? Can we?

    2. What types of annotations are being created?

      How is this defined?

    3. Who is posting most often? Which posts create the most replies?

      These apply to social annotation as well.

    4. Session Profile

      Are we capturing the right data/how can Hypothesis contribute to this profile?

    5. Does overall time spent reading correlate with assessment scores? Are particular viewing patterns/habits predictive of student success? What are the average viewing patterns of students? Do they differ between courses, course sections, instructors, or student demographics?

      Can H itself capture some of this data? Through the LMS?

  27. Dec 2018
    1. And while content analytics tools (e.g., Chartbeat, Parsely, Content Insights) and feedback platforms (e.g., Hearken, GroundSource) have thankfully helped close the gap, the core content management experience remains, for most of us, little improved when it comes to including the audience in the process.
  28. Jul 2018
  29. May 2018
    1. Hi there, check out the SAS training and tutorial for better analysis of data and forecasting methods, and their implications for business analytics.

      https://www.youtube.com/watch?v=1QPRhVGCTRE

  30. Mar 2018
  31. Jan 2018
  32. Nov 2017
    1. Mount St. Mary’s use of predictive analytics to encourage at-risk students to drop out in order to elevate the retention rate reveals how analytics can be abused without student knowledge and consent

      Wow. Not that we need such an extreme case to shed light on the perverse incentives at stake in Learning Analytics, but this surely made readers react. On the other hand, there’s a lot more to be said about retention policies. People often act as though they were essential to learning. Retention is important to the institution but are we treating drop-outs as escapees? One learner in my class (whose major is criminology) was describing the similarities between schools and prisons. It can be hard to dissipate this notion when leaving an institution is perceived as a big failure of that institution. (Plus, Learning Analytics can really feel like the Panopticon.) Some comments about drop-outs make it sound like they got no learning done. Meanwhile, some entrepreneurs are encouraging students to leave institutions or to not enroll in the first place. Going back to that important question by @sarahfr: why do people go to university?

    1. Information from this will be used to develop learning analytics software features, which will have these functions: Description of learning engagement and progress, Diagnosis of learning engagement and progress, Prediction of learning progress, and Prescription (recommendations) for improvement of learning progress.

      As good a summary of Learning Analytics as any.

  33. Oct 2017
    1. Examples of such violence can be seen in the forms of an audit culture and empirically-driven teaching that dominates higher education. These educational projects amount to pedagogies of repression and serve primarily to numb the mind and produce what might be called dead zones of the imagination. These are pedagogies that are largely disciplinary and have little regard for contexts, history, making knowledge meaningful, or expanding what it means for students to be critically engaged agents.

      On audit culture in education. How do personalized/adaptive/competency-based learning and learning analytics support it?

    1. By giving student data to the students themselves, and encouraging active reflection on the relationship between behavior and outcomes, colleges and universities can encourage students to take active responsibility for their education in a way that not only affects their chances of academic success, but also cultivates the kind of mindset that will increase their chances of success in life and career after graduation.
    2. If students do not complete the courses they need to graduate, they can’t progress.

      The #retention perspective in Learning Analytics: learners succeed by completing courses. Can we think of learning success in other ways? Maybe through other forms of recognition than passing grades?

  34. Sep 2017
  35. Aug 2017
    1. This has much in common with a customer relationship management system and facilitates the workflow around interventions as well as various visualisations.  It’s unclear how the at risk metric is calculated but a more sophisticated predictive analytics engine might help in this regard.

      Have yet to notice much discussion of the relationships between SIS (Student Information Systems), CRM (Customer Relationship Management), ERP (Enterprise Resource Planning), and LMS (Learning Management Systems).

  36. Mar 2017
    1. The plan should also include a discussion about any possible unintended consequences and steps your institution and its partners (such as third-party vendors) can take to mitigate them.

      Need to create a risk management plan associated with the use of predictive analytics. Talking as an organization about the risks is important - that way we can help keep each other responsible for using analytics in a responsible way.

    1. we think analytics is just now trying to get past the trough of disillusionment.
    2. Analytics isn’t a thing. Analytics help solve problems like retention, student success, operational efficiency, or engagement.
    3. Analytics doesn’t solve a problem. Analytics provides data and insight that can be leveraged to solve problems.
  37. Feb 2017
    1. this kind of assessment

      Which assessment? Analytics aren't measures. We need to be more forthcoming with faculty about their role in measuring student learning. Such as, http://www.sheeo.org/msc

  38. Dec 2016
    1. your own website remains your single greatest advantage in convincing donors to stay loyal or in drawing new supporters to your cause. Do you know what donors are looking for when they land on your site? Are you talking about it?

      This underscores the importance of using Google Analytics, as well as website user surveys.

  39. Nov 2016
  40. Oct 2016
    1. Devices connected to the cloud allow professors to gather data on their students and then determine which ones need the most individual attention and care.
    1. For G Suite users in primary/secondary (K-12) schools, Google does not use any user personal information (or any information associated with a Google Account) to target ads.

      In other words, Google does use everyone’s information (Data as New Oil) and can use such things to target ads in Higher Education.

  41. Sep 2016
    1. Data sharing over open-source platforms can create ambiguous rules about data ownership and publication authorship, or raise concerns about data misuse by others, thus discouraging liberal sharing of data.

      Surprising mention of “open-source platforms”, here. Doesn’t sound like these issues are absent from proprietary platforms. Maybe they mean non-institutional platforms (say, social media), where these issues are really pressing. But the wording is quite strange if that is the case.

    2. Activities such as time spent on task and discussion board interactions are at the forefront of research.

      Really? These aren’t uncontroversial, to say the least. For instance, discussion board interactions often call for careful, mixed-method work with an eye to preventing instructor effect and confirmation bias. “Time on task” is almost a codeword for distinctions between models of learning. Research in cognitive science gives very nuanced value to “time spent on task” while the Malcolm Gladwells of the world usurp some research results. A major insight behind Competency-Based Education is that it can allow for some variance in terms of “time on task”. So it’s kind of surprising that this summary puts those two things to the fore.

    3. Research: Student data are used to conduct empirical studies designed primarily to advance knowledge in the field, though with the potential to influence institutional practices and interventions. Application: Student data are used to inform changes in institutional practices, programs, or policies, in order to improve student learning and support. Representation: Student data are used to report on the educational experiences and achievements of students to internal and external audiences, in ways that are more extensive and nuanced than the traditional transcript.

      Ha! The Chronicle’s summary framed these categories somewhat differently. Interesting. To me, the “application” part is really about student retention. But maybe that’s a bit of a cynical reading, based on an over-emphasis in the Learning Analytics sphere towards teleological, linear, and insular models of learning. Then, the “representation” part sounds closer to UDL than to learner-driven microcredentials. Both approaches are really interesting and chances are that the report brings them together. Finally, the Chronicle made it sound as though the research implied here were less directed. The mention that it has “the potential to influence institutional practices and interventions” may be strategic, as applied research meant to influence “decision-makers” is more likely to sway them than the type of exploratory research we so badly need.

    1. often private companies whose technologies power the systems universities use for predictive analytics and adaptive courseware
    2. the use of data in scholarly research about student learning; the use of data in systems like the admissions process or predictive-analytics programs that colleges use to spot students who should be referred to an academic counselor; and the ways colleges should treat nontraditional transcript data, alternative credentials, and other forms of documentation about students’ activities, such as badges, that recognize them for nonacademic skills.

      Useful breakdown. Research, predictive models, and recognition are quite distinct from one another and the approaches to data that they imply are quite different. In a way, the “personalized learning” model at the core of the second topic is close to the Big Data attitude (collect all the things and sense will come through eventually) with corresponding ethical problems. Though projects vary greatly, research has a much more solid base in both ethics and epistemology than the kind of Big Data approach used by technocentric outlets. The part about recognition, though, opens the most interesting door. Microcredentials and badges are a part of a broader picture. The data shared in those cases need not be so comprehensive and learners have a lot of agency in the matter. In fact, when Charles Tsai (then with Ashoka) interviewed Mozilla executive director Mark Surman about badges, the message was quite clear: badges are a way to rethink education as a learner-driven “create your own path” adventure. The contrast between the three models reveals a lot. From the abstract world of research, to the top-down models of Minority Report-style predictive educating, all the way to a form of heutagogy. Lots to chew on.

  42. Jul 2016