19 Matching Annotations
  1. Sep 2022
    1. thought to be a potential approach to creating better consensus in a world where multiple truths sometimes seem to coexist. Today each side argues that only its "truth" is true and the other's is a lie, which has made it difficult to find agreement. The bridging algorithm looks for areas where both sides agree. Ideally, platforms would then reward behavior that "bridges divides" rather than posts that create further division.

      Bridging-based Ranking definition

      Ranking comments higher when multiple, otherwise-divided groups can agree on them (a rough sketch follows below).
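
      As a loose illustration of that definition, here is a minimal Python sketch. It is not any platform's actual algorithm; the group labels and per-group approval rates are assumed inputs. A comment is scored by the lowest approval it receives from any group, so only comments that multiple groups endorse rank highly.

      ```python
      from dataclasses import dataclass, field

      @dataclass
      class Comment:
          text: str
          # Hypothetical input: fraction of each group that upvoted the comment,
          # e.g. {"group_a": 0.8, "group_b": 0.7}.
          approval_by_group: dict[str, float] = field(default_factory=dict)

      def bridging_score(comment: Comment) -> float:
          """Score a comment by the lowest approval any single group gives it.

          A comment endorsed by only one faction scores poorly; a comment that
          several otherwise-divided groups endorse scores well.
          """
          if not comment.approval_by_group:
              return 0.0
          return min(comment.approval_by_group.values())

      def rank_by_bridging(comments: list[Comment]) -> list[Comment]:
          return sorted(comments, key=bridging_score, reverse=True)

      if __name__ == "__main__":
          comments = [
              Comment("partisan zinger", {"group_a": 0.9, "group_b": 0.1}),
              Comment("common-ground observation", {"group_a": 0.7, "group_b": 0.6}),
          ]
          for c in rank_by_bridging(comments):
              print(f"{bridging_score(c):.2f}  {c.text}")
      ```

      Here the common-ground comment ranks first (score 0.60) even though the partisan one has the higher peak approval within a single group.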

  2. Jul 2022
    1. The most common way is to log the number of upvotes (or likes/downvotes/angry-faces/retweets/poop-emojis/etc) and algorithmically determine the quality of a post by consensus.

      When thinking about algorithmic feeds, one probably ought not to include simple likes/favorites/bookmarks, as they're such low-hanging fruit. Better indicators are interactions that take time, effort, and work to post.

      Using various forms of webmention as indicators could be interesting, as one can parse responses and make an actual comment worth more than a dozen "likes", for example (a rough sketch of this sort of effort weighting appears at the end of this note).

      Curating people (who respond) as well as curating the responses themselves could be useful.

      Time-windowing the curation of people and curators could also be a useful metric.

      Attempting to be "democratic" in these processes may often lead to the Harry and Mary Beercan effect and to the gaming issues seen in spaces like Digg or Twitter, and can have dramatic consequences for the broader readership and community. Democracy in these spaces is more likely to get you cat videos and vitriol with a soupçon of listicles and clickbait.
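
      To make that effort weighting concrete, here is a small Python sketch; the interaction types and the weights are purely illustrative assumptions, with a parsed webmention reply counted as roughly a dozen one-click likes.

      ```python
      from collections import Counter

      # Illustrative weights only: interactions that take time and work to post
      # (a written reply arriving as a webmention) count for far more than
      # one-click reactions such as likes or reposts.
      INTERACTION_WEIGHTS = {
          "like": 1.0,
          "repost": 2.0,
          "reply": 12.0,  # an actual comment treated as worth about a dozen likes
      }

      def effort_score(interactions: list[str]) -> float:
          """Sum the effort-weighted interactions received by a single post."""
          counts = Counter(interactions)
          return sum(INTERACTION_WEIGHTS.get(kind, 0.0) * n for kind, n in counts.items())

      if __name__ == "__main__":
          post_a = ["like"] * 40                 # many low-effort reactions
          post_b = ["like"] * 5 + ["reply"] * 4  # fewer, higher-effort responses
          print(effort_score(post_a))  # 40.0
          print(effort_score(post_b))  # 53.0
      ```

      Under this weighting, the post with four real replies outranks the post with forty bare likes, which is the behavior the note argues for.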

  3. Jan 2021
  4. Oct 2020
  5. Sep 2020
  6. Aug 2020
  7. Jun 2020
  8. Apr 2020
    1. The new and improved Times Higher Education (THE) Impact Rankings 2020 were published this week with as much online fanfare as THE could muster. Unfortunately, they are not improved enough.
    2. “There are limits to what universities can do and the SDGs don’t capture everything about the impact of our research.”

      Plus, the measurement is based on journal articles from commercial databases (e.g. Scopus), whose indexes have a language bias. At the same time, we lack national-level scientific databases that could provide datasets for those rankings to process.

      All rankings measure the following components, each of which carries some level of bias:

      • Teaching (the learning environment): international students vs the large number of domestic high-school graduates
      • Research (volume, income and reputation): high-profile research vs "low-level" research that addresses domestic national problems
      • Citations (research influence): based only on commercial databases with a language bias
      • International outlook (staff, students and research): lack of national data (e.g. tracer studies) to share with the rankings; international vs national issues
      • Industry income (knowledge transfer): mostly determined by the economic situation, over which universities have no control.
    3. These are the rankings that increasingly drive institutional behaviour – and competition between them.

      and, not to mention, it also drives the external economic and social setting, e.g. the labour market, "top university" labelling in the minds of parents, etc.

    4. As a result, the THE clings to a methodology that despite taking insufficient account of the false precision and the uncertainties introduced by the proxy nature of the indicators used to ‘measure’ actual performance, still claims to be able to distinguish universities on scores that differ by 0.1%. It is laughable to claim this level of precision. It is to universities’ discredit that they go along.

      For less economically stable countries (e.g. Indonesia), many indicators are largely determined by the national situation (regulations, funding), the geographical setting, and the large number of high-school graduates entering undergraduate degrees. By contrast, the rankings are really only relevant to graduate research.

  9. Mar 2019
    1. This page, Top Tools for Learning, is updated every year. It lists and briefly describes the top tech tools for adult learning. In the current (2018) list, the top three are YouTube, PowerPoint, and Google Search. The list continues through the top 200, with links to each tool. The purpose of this page is to list them; tutorials, etc. are not offered. Rating: 4/5

    2. New Media Consortium Horizon Report. This page provides a link to the annual Horizon Report, which becomes available late in the year. The report identifies emerging technologies that are likely to be influential and describes the timeline and prospective impact of each. Unlike the top learning tools, which anyone can use, the technologies listed here may be beyond the ability of the average trainer to implement. While it is informative and perhaps a good idea to stay abreast of these listings, it is not necessarily something the average instructional designer can apply. Rating: 3/5

  10. Aug 2017
    1. Among academics it is considered good form to play down the value of these rankings, but anyone who ranks near the top, as the two federal institutes of technology in Zürich and Lausanne and practically all of the large Swiss universities do, is only too willing, despite all reservations, to make the most of their own good showing.
  11. Jun 2016
    1. As Rennie and Flanagin (1994) remind us, there is no standard method for determining order, nor any universalistic criteria for conferring authorship status:

      bibliography on authorship practices

    2. Alphabetization through weighted listing to reverse seniority (e.g., Spiegel & Keith-Spiegel, 1970; Riesenberg & Lundberg, 1990).

      bibliography on authorship ranking and practices