276 Matching Annotations
  1. Aug 2023
  2. Jul 2023
    1. implementation

      like the distinction made earlier on implementation vs. use in practice.

    2. addresses a lack of transparency around processes of participatory design in learning analytics (Sarmiento & Wise, 2022).

      what can another team learn from this?

    3. a comprehensive account of the design and development processes

      a big PLUS

    4. lack of contextual information, resulting in difficulties in decoding

      the choice of Email may have led to this limitation. what if it's in a web app?

    5. quantity

      volume?

    6. emergent

      edit: that emerged?

    7. 4.4

      Q about the design strategies: Any ideas or strategies related to "collaborative" annotation?

    8. 3.2 Study Design: Participatory Design in Three Phases

      very thorough and interesting

    9. examine how analytics make a difference in educational practices

      enough progress?

    10. Taking an open-box design narrative approach

      how is this defined?

    11. collaborative annotation

      How do you define it?


  3. Feb 2021
    1. Graphlet analysis

      hopefully we can see each other's comments..


  4. Jan 2021
    1. Table 1: Summary statistics of datasets.

      these are big networks

    1. Hence, in the later stage of a discussion, e-authors were more likely to find solutions by reading through the posts in the discussion (Wise, Speer, Marbouti, & Hsiao, 2013), which reduces the incentive to further reply to posts. Delayed replies indicated longer wait times in responding to an earlier message, which might increase the e-authors’ feeling of social distance within the online learning environment (Aragon, 2003; Sung & Mayer, 2012) and reduce their likelihood of participating further.

      useful [[references]]


    1. we have not yet developed a cadre of metadata workers who could effectively address the issues, and we have not yet fully faced the implications of the basic infrastructural problem of maintenance.

      do we need a strong metadata mechanism in [[Knowledge building]] infrastructure as well?

    2. Key to the infrastructure perspective is their modular, multi-layered, rough-cut character. Infrastructures are not systems, in the sense of fully coherent, deliberately engineered, end-to-end processes. Rather, infrastructures are ecologies or complex adaptive systems; they consist of numerous systems, each with unique origins and goals, which are made to inter-operate by means of standards, socket layers, social practices, norms, and individual behaviors that smooth out the connections among them. This adaptive process is continuous, as individual elements change and new ones are introduced — and it is not necessarily always successful.

      My thinking on [[Knowledge Building]] infrastructure is aligned with this view.


  5. Dec 2020
    1. • The growth of creative co-presence. Today’s creative and collaborative tools are ones that users open, play with, then close. They’re a discrete, defined presence in users’ lives. The tools of the future will be more like artists’ sketchbooks: always-at-hand tools that people will use to think with, and to record ideas. But because they’re also shareable, they can change group processes as well. Imagine, for example, an infinitely-large digital whiteboard or notebook that everyone in a group can access, write on, annotate, or import ideas into. In such an environment, the concept of the “brainstorming session”—a discrete time set aside to be creative—will seem quaint and old-fashioned.

      This depiction is so vivid.

      constantly available, social (per user choice), and endless

    2. Simple, sociable, and symbiotic: these are the cornerstones of lightweight knowledge tools, and are the characteristics that will define their importance in the future.

      three cornerstones of [[lightweight knowledge tools]]

    3. In contrast, creative work requires attention to social and cultural context, the ability to make unexpected combinations, and a drive to focus on meaning rather than means. It is, in other words, a fundamentally human activity. The knowledge that informs innovative activity isn’t just the product of combinatorics; it’s defined by the creation of new insight out of diverse pieces of information. Creative knowledge is more like a verb than a noun: it is activity, not information; and it requires people—usually groups of people and often diverse groups—to bring it about. Using machines to take on the burden of processing tasks frees up time and energy for the human partner to provide insight and creative thinking that will in turn help the machines better predict and provide the information a user needs.

      well said!!

    4. One method that has emerged is to combine mass quantities of small human judgments along with the automated weightings of those judgments, as in Google’s PageRank algorithm.

      human judgments get combined to form a collective opinion. quite similar to [[Promising Ideas tool]], even if for different purposes.
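      As a hedged aside on the mechanism the quote names: PageRank-style aggregation of many small judgments can be sketched with a few lines of power iteration. The `web` graph and all values below are invented for illustration; this is a minimal sketch, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iters=50):
    # links: dict mapping node -> list of nodes it links to.
    # Each link is a small "human judgment"; iteration weights each
    # judgment by the evolving authority of the judge.
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:
                # Dangling node: spread its rank evenly across all nodes.
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
        rank = new
    return rank

# Toy graph: three pages endorse each other in a cycle; "d" endorses "c"
# but receives no endorsements itself.
web = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
rank = pagerank(web)
```

      The parallel to promisingness judgments: no single link decides anything, but the aggregate, recursively weighted, yields a collective assessment.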

    5. these emerging lightweight knowledge tools will be used in different ways by disparate collaborators and in the process will reshape traditional knowledge chains. In their purest form, knowledge chains consist of a clear progression of discrete steps—creation, publication, organization, filtering, and consumption—each produced, more or less, by its own set of people, tools, and systems. These kinds of knowledge chains are being replaced by knowledge ecologies, in which all of the steps involved in knowledge creation are happening all at once, and any given user, with the help of emerging knowledge tools, may be at once the creator and the consumer of knowledge, and indeed the publisher and organizer as well. Such knowledge ecologies allow us to move from tool to tool and weave together different parts of the previously linear knowledge process in new and interesting ways.

      Moving from knowledge chains to knowledge ecologies


  6. Nov 2020
    1. An alternative flexible spatial practice is that of sociality (Patton 2000). Sociality conveys student empowerment through active participation in structuring, with others, the conditions of his or her life. Where pedagogies of self-actualization imply an essential individualist core that can flourish or come-into-being under the proper conditions, sociality recognizes that identity construction is inseparable from the relationship between individuals, spaces, and practices. Sociality is a state of collective negotiation of the built world for the purpose of mutual empowerment. Sociality pedagogies and spaces stress student flexibility for collective empowerment and learning rather than individual empowerment through market adaptation and acquiescence.

      so interesting. very relevant to [[knowledge building]]

    2. The analysis of flexibility within material/virtual hybrid school spaces reveals another dimension of built pedagogy. The convergence of educational space and information technology embeds educational practice within larger global flows of information and capital.

      what does this mean for COVID teaching?

      What is "built pedagogy"?

    1. SRL variables such as annotation, highlighting, and vocabulary lookup were not statistically significant

      major

      the mapping between these features and SRL is rough

    2. our study focuses on teacher-provided feedback instead of automated feedback

      major

      how relevant is this study to learning analytics?

      seems RQ1, which uses NLP to measure similarity of answers, is relevant to LA, but the other ones not so much.

    3. This phenomenon can be explained by the “feedback gap” as described by Evans [23], where students do not address teachers' feedback. Another reason could be that students found feedback difficult to decipher [9]. Moreover, receiving feedback and acting upon it depends on students’ self-beliefs [27] and study habits [19]. As we do not have access to students’ demographic information or perception data, we do not know the exact reasons they were unresponsive

      major

      is this study missing the point?

    4. Any student actions exceeding 30 minutes were defined as a new session.

      why?

    5. To evaluate how the answer modifications were connected to score differences, we calculated Spearman correlation between mean cosine similarities and score difference.

      ?
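      To unpack the method the quote describes, here is a minimal sketch of that correlation step (all numbers are hypothetical; real similarity values would come from comparing successive answer versions, e.g. via cosine similarity of text vectors):

```python
def ranks(xs):
    # Rank values 1..n, smallest first (no tie handling, for illustration only).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # Spearman rho via the no-ties formula: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: per student, the mean cosine similarity between
# successive answer versions, and the score difference after revision.
mean_similarities = [0.95, 0.80, 0.60, 0.88, 0.40]
score_differences = [1, 3, 5, 2, 6]

rho = spearman(mean_similarities, score_differences)
print(rho)  # → -1.0 (here, larger modifications track larger score gains)
```

      With this toy data the rank correlation is perfectly negative, i.e. the more an answer was modified (lower similarity), the larger the score gain, which is presumably the kind of relationship the authors were testing for.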

    6. Researchers have used sequence mining techniques to identify students’ learning behaviors

      quite abrupt transition

    7. AL

      what?

    8. Feedback

      Is this paper written using the official template?


    1. FUTURE WORK

      minor

      discuss the limitations of this study, especially how the task design affects the transferability of the method.

    2. The task metric statistics of stu8 and stu28 are shown in Table 5 and Table 6 respectively.

      minor

      it is unclear how cases of students were selected. for example, the authors write "we select stu5 from the high-performance group and stu28 from the low-performance group for comparative analysis". Why these two students? It needs to be further explained.

    3. 4.3 Data Collection and Analysis

      major

      both methods and results are presented in Sec 4.3. The authors need to meaningfully separate them into different sections to help the reader.

    4. Task Sequence Model

      minor

      it is a missed opportunity that the task sequence model is not pushing hard enough on sequential information to represent a more detailed picture of debugging. the derived measures are very reductionist.

    5. the test point

      knowledge component? hard to grasp

      minor

      terms are quite messed up -- hit time, test point, solving duration. hard to understand.

    6. 2 EYE MOVEMENTS AND BASIC BEHAVIOR DATA

      major

      a major limitation of this paper is the lack of conceptual grounding. the second section is dedicated to an explanation of multimodal data but without any exploration of what debugging means from various aspects - cognitively, behaviorally, etc.

    7. It can be seen from the figure that there is a big difference in the ability of students to discover and solve errors.

      results section?

    8. However, they can neither change the main function, nor delete lines or change theprogram structure. In addition, students are not allowed to open other windows and can only compile or run in the IDE.

      pretty limiting. wondering how this method can be used in a real-world context?

    9. Case Analysis

      is this section really necessary?

    10. Stu4 has the ability to think globally and is a thoughtful student

      for this paper, needs to explain the theoretical framework of debugging

    11. Fig. 2. Debugging Behavior of stu1 when Correcting the Error in Line 76

      can students scroll?

    12. In terms of measurement methods, eye tracking is the most commonly used measurement method[11].

      really?

    13. quantitative

      typo


    1. Schwendimann et al. suggests the definition of dashboard to be: a single display that aggregates different indicators about learner(s), learning process(es) and/or learning context(s) into one or multiple visualizations

      seems to draw from a limited number of sources


    1. Recently, [19] proposed using a Glicko-based learner model for modelling the mastery of students studying within the Coursera platform. However, similar to the standard Elo-based learner model, their proposed model considers only one global parameter for estimating a student’s knowledge state on the entire domain and that their model is also not sensitive to the lag time between a student’s interactions.

      what does a student interaction mean?

    2. The goal of the present article is to overcome the presented limitations of the existing Elo-based learner models by proposing a new learner model called MV-Glicko that is built based on the principles of Glicko rating system [10].

      hard to understand what they do


    1. Other important elements of an epistemically rich environment are knowledge-building concepts such as building on, rising above, explaining, and idea improvement. Young students initially understand these in very limited ways, but gradually, through their own knowledge building efforts and with help from their teachers and peers, the concepts take on fuller meaning and become not just supports for action but part of how they see the world, part of their culture.

      culture is mentioned. but it's less clear how to get students to adjust to a culture.

    2. to what extent flexibly adaptive and adaptable scaffolding could strengthen Knowledge Building.

      important question

    3. Regarding the SToG perspective notion of creating external guidance that is perfectly adapted to the learners’ internal scripts, Knowledge Building is in part “under-scripted”, meaning that too little external guidance is provided for beginning learners who do not yet have access to productive internal script components because the CSCL activity of Knowledge Building is radically new for many of them. In other parts, Knowledge Building is possibly sometimes “over-scripted” when scaffolding like semantic pointers remain although they are no longer needed.

      interesting view of KB from the script theory perspective


    1. Those who criticize it only show that they are ‘bad pupils’ who have not learned the game (Bartley, 1983).

      are those criticizing KB seen as bad pupils?

  7. Oct 2020
    1. A Java package was implemented to generate nested case–control data from MOOC event streams, which are then fed into the Cox proportional hazard procedure PHREG in SAS for the estimation of network effects. In the estimation of the proportional odds model for quiz scores, sampling is not necessary and all data are fitted using the procedure polr in R

      tools used for data analysis


  8. Sep 2020
    1. The two tools that we will use—Tinkerplots and CODAP—have been designed specifically as data education tools, rather than as approximations to professional tools. As such, they make extensive use of direct manipulation in the construction of graphs, allow for rapid transitions from one type of representation to the next, and do not require coding knowledge (as does, for example, R). While they are simple to use, however, these two tools have considerable power and give novice users the ability to carry out relatively complex data analyses. In particular, both tools feature linked representations, in which data selected in one graph is highlighted in all other graphs on the screen, thus allowing a user to see complex relationships among variables. Both tools also support sophisticated filtering and subsetting by direct manipulation, rather than only through formulas (although selection by formula is also supported).

      important features of CODAP and Tinkerplots

    2. We are inspired by a perspective on curriculum dating back several decades and expounded upon in a chapter of the 1999 book Educating Minds and Hearts (McIntosh & Style, 1999). The general idea is that curriculum is a structure that provides “windows out into the experiences of others, as well as mirrors of the students’ own reality.” For students from under-represented groups, many mainstream curriculum activities skew toward being windows more than mirrors, focusing on experiences for which they are onlookers, rather than experiences in which they can see themselves. We want the topics for our modules to both reflect the lives of participants and expand their horizons by giving them insights into other people’s lives. In order to assure that topics we are considering are “mirrors” for participants, we have been working with middle school advisors who weigh in on topics in terms of their relevance and interest to others like themselves.

      such an interesting approach

    1. To create them, the Event Horizon Telescope (EHT) collaboration — which harnesses a planet-wide network of observatories — exhumed old data on the black hole and combined these with a mathematical model based on the image released in April 2019, to show how the surroundings have evolved over eight years. Although it relies partly on guesswork, the result gives astronomers rich insights into the behaviour of black holes, the intense gravity of which sucks in matter and light around them.

      Super important to understand the nature of science.

  9. Jun 2020
    1. The questions that frame the contributions of this collection are: In what ways are reasoning with data influenced by learners’ situatedness—relative to data, data contexts, and data science as a field? And, given that, how can educators build on learners’ existing relationships with data to develop a more ethical and effective Data Science Education?

      Framing question for the special issue

    1. Would you be interested in presenting your own research at this session? This would be a short introduction to your current research. 

      maybe adding "presenting in small break-out rooms" to make it sound less risky?

      also, change 'a short introduction' to be more accurate. maybe something like 'an elevator pitch (1-3 min)' to be more accurate while being flexible?

      a typo in the first item

    2. would you prefer it to be synchronous via Zoom (we are online at the same time) or asynchronous via webpage (we are not online at same time but have a space to interact)? 

      would it make sense to add a third option of 'either is fine'

    3. an ISLS

      remove 'ISLS' (change an to a).

      it may be confusing for new members to see ICLS and ISLS. some may think they are the same thing. since we have the logo and 'the field of learning sciences' below.

      same comment applies to the final words of this paragraph.

    1. Discovery 5 and Brainspace for Enterprise provide what Brainspace calls “augmented intelligence,” an evolution of artificial intelligence, or AI. Brainspace incorporates machine learning to supplement and support human analysis, Sathyanna explains.  “While the system can learn without human intervention, we are also augmenting the users’ decision-making process and capabilities by providing deep insights which are otherwise hidden or inaccessible.”

      Augmented human intelligence is the next step in AI.

    2. Copps describes it as bringing the best of both worlds—machine and human capabilities—together. “A machine’s ability to ingest, connect, and recall information goes far beyond what is possible for humans,” he says. On the other hand, a person’s ability to use information to reason, judge, and strategize far surpasses the capabilities of machines today, he adds: “By combining these abilities inside of an augmented intelligence environment, we are able to accelerate productivity beyond what is possible with other, more traditional tools.”

      Why and how we harness machine and human capabilities.

  10. May 2020
    1. Theoretical publications do therefore not result from simply copying what can already be found in the slip box. The communication with the slip box becomes fruitful only at a high level of generalization, namely that of establishing communicative relations of relations. And it becomes productive only at the moment of evaluation, and is thus bound to a certain time and is to a high degree accidental.

      creative work does not already exist in the box, but happens at the moment of evaluation -- when "establishing communicative relations of relations".

    2. In any case, communication becomes more fruitful when we succeed to activate the internal network of links at the occasion of writing notes or making queries. Memory does not function as the sum of point by point accesses, but rather utilizes internal relationships and becomes fruitful only at this level of the reduction of its own complexity. In this way, more information becomes available at this isolated moment of a search impulse than one had in mind.

      links aid memory.

    3. If, however, we seek communication with the slip box, we must seek internal possibilities for linkings which result in the unexpected (i.e. information).

      linkings, unexpectedness.

      Usually it is more fruitful to look for formulations of problems that relate heterogeneous things with each other.

    4. The slip box needs a number of years in order to reach critical mass. Until then, it functions as a mere container from which we can retrieve what we put in. This changes with its growth in size and complexity.

      Needs a few years to evolve into a critical mass.

    5. As a result of extensive work with this technique a kind of secondary memory will arise, an alter ego with whom we can constantly communicate. It proves to be similar to our own memory in that it does not have a thoroughly constructed order of its entirety, no hierarchy, and most certainly no linear structure like a book. Just because of this, it gets its own life, independent of its author. The entirety of these notes can only be described as a disorder, but at the very least it is a disorder with non-arbitrary internal structure.

      A beautiful, affective description of the system.

    6. Bibliographical notes which we extract from the literature, should be captured inside the card index. Books, articles, etc., which we have actually read, should be put on a separate slip with bibliographical information in a separate box.

      Bib notes

    7. 2. Possibility of linking (Verweisungsmöglichkeiten). Since all papers have fixed numbers, you can add as many references to them as you may want. Central concepts can have many links which show on which other contexts we can find materials relevant for them.

      The power of linking.

    8. For the inner life of the card index, for the arrangement of notes or its mental history, it is most important that we decide against the systematic ordering in accordance with topics and sub-topics and choose instead a firm fixed place (Stellordnung). A system based on content (like the outline of a book) would mean that we make a decision that would bind us to a certain order for decades in advance! This necessarily leads very quickly to problems of placement, if we consider the system of communication and ourselves as capable of development. The fixed filing place needs no system. It is sufficient that we give every slip a number which is easily seen (in our case on the left of the first line) and that we never change this number and thus the fixed place of the slip. This decision about structure is that reduction of the complexity of possible arrangements, which makes possible the creation of high complexity in the card file and thus makes possible its ability to communicate in the first place.

      The system is "no system." Reduce the complexity of arrangements to give rise to high complexity.

    9. One of the most basic presuppositions of communication is that the partners can mutually surprise each other.

      Surprise - another interesting element of the human-tool relation. Also much affection attached to it.

      It feels good to be surprised by your own notes a while ago. To be more precise, to be surprised by your communicative partner that has remembered your notes from a while ago.

    10. That slip boxes can be recommended as partners of communication is first of all due to a simple problem about technical and economic theoretical research. It is impossible to think without writing; at least it is impossible in any sophisticated or networked (anschlußfähig) fashion.

      Think about slip boxes as a partner -- a partner in communication, to be more precise. The affection is quite clear in the passages.

    11. If a communicative system is to hold together for a longer period, we must choose either the route of highly technical specialization or that of incorporating randomness and information generated ad hoc. Applied to collections of notes, we can choose the route of thematic specialization (such as notes about governmental liability) or we can choose the route of an open organization. We decided for the latter. After more than twenty-six years of successful and only occasionally difficult co-operation, we can now vouch for the success or at least the viability of this approach.

      Some good thoughts here. Incorporating randomness and choosing the route of an open organization when building the external memory -- the partner.

    1. Reading and Connecting: Using Social Annotation in Online Classes

      Test annotation - let me know if you see it if you're annotating a downloaded copy of the PDF.


    1. Community is a special case of networks and refers to the development of a shared identity around a topic or set of challenges. It represents a collective intention–however tacit and distributed–to steward a domain of knowledge and to sustain learning about it. Networked learning differs from other related fields because of its research focus on networks, critical pedagogy, and learning

      distinction between networked learning and social learning & communities of practice

    2. For networked learning the connectivity enabled by digital networks and the potential for interactions between people, and between people and their resources, is absolutely central. Learning technologies are a means to this end rather than the primary focus of research.

      important feature of networked learning

  11. Mar 2020
    1. We recommend that beginning doctoral students take at least one course focused on the philosophy of science and alternative research paradigms (Phillips 2000) so that they can make a much more mindful choice when it comes to defining their research goals.

      good suggestion

    2. There were no papers published in ETRD in either time period with critical/postmodern goals, a finding that may reflect an insufficient coverage of critical perspectives in the curriculum of educational technology doctoral programs.

      still the case? an important gap in here. something to discuss given the rise of 'critical pedagogy' and critical ID.

    3. Interest in design-based research has grown across many fields of educational inquiry (Anderson and Shattuck 2012; McKenney and Reeves 2013; Plomp and Nieveen 2013), but the representation of papers with design/development goals in ETRD has remained marginal with 6 % in 1989–1994 and 5 % in 2009–2014. One challenge for researchers pursuing design/development goals may be that reports on their studies are often quite lengthy, far exceeding the 6000–8000 word limits of many educational technology journals.

      this is a little surprising.

    4. Over the past quarter century, there has been a major increase in the representation of papers with descriptive/interpretivist goals from 1 to 16 %. This likely reflects the fact that research with descriptive/interpretivist goals has become more acceptable in the field of educational research as a whole.

      important trend. makes sense as the field matures -- tech not a silver bullet.

    5. From our perspective, research methods should not be identified until the goals of the researchers and the specific research questions they wish to address are understood. Although it seems legitimate for educational researchers to self-identify as design researchers or as interpretivists in reference to the types of goals they pursue in their research, it appears to us to be less straightforward for researchers to describe themselves as “qualitative researchers” or “quantitative researchers” in reference to the methods they might prefer to use.

      important argument against the Qual vs Quan dichotomy.

    1. The co-mediation of space and ideas

      what does co-mediation mean here?

    2. We found four types of actions that had unique additional criteria: “Post new” actions included creating an element that did not previously exist; “Organize” actions included relocating or resizing an existing element on the space; “Decorate with” included actions whose sole purpose was esthetic; and the “delete” action included the removing of existing elements from the space. Additionally, these actions occurred using four KF tools (headline, graphic, note, link). Putting these together created 16 possible action-tool combinations in a spatial responsibility-taking framework that can be used to systematically count and characterize how communities take responsibility over their online spaces.

      core findings

    3. Using the constant-comparative method (Straus, 1987), after creating a list of all occurrences in the 42 views, we started analyzing them by applying notes that both described the actions as well as the researchers’ interpretation of what they could mean.

      interesting way to analyze the changes

    4. FLSs and knowledge building as co-mediational structures. FLS is a theoretical concept reflecting and building on research that suggests “space matters” in the learning process (Brooks, 2011; Sutherland & Fischer, 2014). To be considered an FLS, a learning environment must use emerging technologies, be designed based on principles of learning, and include opportunities for lived-in experiences (Hod et al., 2019). The space itself, either virtual or physical, must be emphasized to take advantage of new technologies and spatial configurations to rethink education (Hod et al., 2016)

      what future learning spaces mean

    5. Widening the lens of collective responsibility

      interesting take! needed.


  12. Jun 2019
    1. As a reminder, the two aspects of today’s notebooks (Mathematica, Jupyter, R markdown, Emacs/OrgMode) that I consider harmful for scientific communication are: The linear structure of a notebook that forces the narrative to follow the order of the computation. The impossibility to refer to data and code in a notebook from the outside, and in particular from another notebook, making reuse of code and data impossible.

      interesting take on two limitations with current computational notebooks.

  13. May 2019
    1. Mediation does not simply take place between a subject and an object, but rather co-shapes subjectivity and objectivity. Humans and the world they experience are the products of technical mediation, and not just the poles between which the mediation plays itself out.

      interesting points on mediation

    2. Classical philosophy of technology tended to reify ‘Technology’, treating it as a monolithic force. Ihde, by contrast, shuns general pronouncements about ‘Technology,’ fearing to lose contact with the role concrete technologies play in our culture and in people’s everyday lives.

      Appreciate this work

    1. In other words, data analytics involves the actual analysis of the data, and informatics is the application of that information. Health informatics professionals use their knowledge of information systems, databases, and information technology to help design effective technology systems that gather, store, interpret, and manage the data that is generated in the provision of healthcare to patients.

      informatics vs. analytics

  14. Apr 2019
    1. In light of the intense efforts by both public and private sectors to improve the outcomes for all children in K-12 classrooms, there is an urgent need to know more about how teachers think

      good


    1. the company doesn’t know what “health” exactly means in this context. What would a healthy Twitter look like? What even is health for a non-sentient website?

      defining health

    1. ConceptNet is a freely-available semantic network, designed to help computers understand the meanings of words that people use.

      this is super cool

    1. Tools for problem-finding The tools discussed so far are for scientists and engineers working on a problem. But let’s back up. How can someone find the right problem to work on in the first place? And how can they evaluate whether they have the right ideas to solve it?

      no good tools for this.

    2. What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.

      promisingness judgments

    3. The very concept of a “programming language” originated with languages for scientists — now such languages aren’t even part of the discussion! Yet they remain the tools by which humanity understands the world and builds a better one.

      A powerful recognition based on historical perspectives.

    4. Climate change is too important for us to operate on faith.

      Agreed. And yet much public discourse operates this way.

  15. Mar 2019
    1. We believe it’s because most rely on a narrow approach to data analysis: They use data only about individual people, when data about the interplay among people is equally or more important.

      from attribute data to relational data!

  16. Feb 2019
    1. This allows us to use network information to distinguish three stages of learning: (i) a very early learning stage where all but the phonological layer contribute substantially to prediction, (ii) an early learning stage which marks a transition period, and (iii) a late learning stage in which contribution from word associations dominates word learning.

      3 stages of word learning

    1. However, the computer has many other capabilities for manipulating and displaying information that can be of significant benefit to the human in nonmathematical processes of planning, organizing, studying, etc. Every person who does his thinking with symbolized concepts (whether in the form of the English language, pictographs, formal logic, or mathematics) should be able to benefit significantly.

      Rich representations -- besides computational power -- are absolutely key for the described scenario.

      This description, written half a century ago, is more or less realized. It reminded me of the Microsoft Vision 2019 created around 2009 that is yet to be realized.

      https://www.youtube.com/watch?v=P2PMbvVGS-o

    2. the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity.

      A situation characterized as The Ingenuity Gap by Homer-Dixon.

    3. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human "feel for a situation" usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.

      I appreciate the recognition of both the "arts" and "science" of problem solving.

    1. Engelbart’s dream was different. He believed that networked computing could empower collective intelligence, offering humanity a way to address complex problems together.

      Nice to see this link to Bret Victor's "annotation" of Engelbart's ideas. Check out Victor's work on dynamic representations and Dynamicland if you haven't yet.

  17. Jan 2019
    1. The National Science Foundation continued to exist as a basic-science funding agency. But unlike ARPA, the NSF funds projects, not people, and project proposals must be accepted by a peer review board. Any sufficiently-revolutionary project, especially at the early stages, will sound too crazy for a board to accept. Worse, requiring a detailed project proposal means that the NSF simply can't fund truly exploratory research, where the goal is not to solve a problem, but to discover and understand the problem in the first place.

      a problem with research funding

    1. What is Informatics? The study of the structure, the behaviour, and the interactions of natural and engineered computational systems. The central focus of Informatics is the transformation of information - whether by computation or communication, whether by organisms or artefacts. Understanding informational phenomena - such as computation, cognition, and communication - enables technological advances.

      I like this definition

    1. The term “informatics” broadly describes the study, design, and development of information technology for the good of people, organizations, and society.
    2. The Classification of Instructional Programs (CIP) describes Informatics as: "A program that focuses on computer systems from a user-centered perspective and studies the structure, behavior and interactions of natural and artificial systems that store, process and communicate information. Includes instruction in information sciences, human computer interaction, information system analysis and design, telecommunications structure and information architecture and management."
    1. The calendar, moreover, embodies cyclical re-enactments, retrievals, or renewals of our commitment to and engagement with the sacred, through the annual feasts and celebrations of one's religious community. Modernity attacks this traditional temporality by transforming time into a resource that is subject to the imperatives of efficiency and the logic of commodification. This much is convincing and powerful. But if sacred and traditional time is compromised in this way, are we left only with meaningless clock-time?

      good summary & q

    2. Hammer tackles the third class of strategies in his dense final chapter. He draws on the writings of Bloch and Benjamin for a vision of aesthetic experience that can fit into Adorno's conception of how experience might resist the totalizing logic of modernity. Hammer draws on central aspects of Adorno's aesthetic theory, according to which in the experience of art authoritative contents present themselves in such a way that they normatively call to the subject, but yet resist being absorbed into the logic of efficiency and commodification. The invitations these contents present are not publicly articulable in the normal way. They take the form of hopes, rather than imperatives. By holding on to hope, as it bodies forth in art that resists commodification, the subject can encounter a glimmer of redemptive experience.

      aesthetic theory. hopes rather than imperatives

    3. In chapter 3 he examines Kant's and Habermas's notions of autonomy, as well as a neo-Aristotelian attempt (in Lorenzo Simpson) to return to a pre-modern form of temporality. He then leads his reader through a chapter-by-chapter examination of Hegel, Schopenhauer, Nietzsche (both early and late, in two chapters), Heidegger, and finally in a single very dense chapter, Post-Modernism (Lyotard), Marxism (Jameson), and Critical theory (Bloch, Benjamin, and Adorno). He argues that only Adorno's appropriation of Bloch and Benjamin promises any hope of a successful response to modern temporality.

      fascinating!

    1. Blogging highlights the process, not the output – one of my early blogging chums was Tony Hirst here at the OU. He has commented that blogging reveals an ongoing process of research, but that much of our formal systems (promotion, REF, research funding) are focused on outputs. That’s not to say outputs aren’t important, but the longitudinal picture that a blog gives you allows for a better representation of developing ideas.

      Totally agree. Hope to blog more in 2019! #TrustTheProcess

  18. Dec 2018
    1. The “Big Idea” proposes to advance the educational, research and public service mission of the University of Michigan by: Offering an undergraduate experience that has real-world problem solving and engaged scholarship at its core; situating undergraduate education at the heart of the scholarly enterprise; Enhancing collaboration across disciplinary boundaries; and Amplifying the relationship of a public university to its constituencies through projects that work in collaborative partnerships with a range of communities and sectors to advance progress on significant problems; To accomplish these goals, we envision a program that is unconstrained by some of the most common operating assumptions in current higher education: grades, credit hours, and disciplinary majors.

      Very exciting!! No surprise U-Mich is doing this.

    1. "Many of the personalized learning systems now available begin with an articulation of the knowledge space – i.e. what the learner needs to know. What the learner knows is somewhat peripheral and is only a focal point after the learner has started interacting with content. Additionally, the data that is built around learner profiles is owned by either the educational institution or the software company. This isn’t a good idea. Learners should own the representation of what they know."

      Things learning analytics / AI in education need to keep in mind. So many products are based on this (old) model.

    1. We’ve studied student learning with data and have identified six different modes through which students can work with data. Each mode has the potential to bring simplicity or sophistication to the study of data

      A great list of modes. I wonder what more fundamental modes lie behind them.

    2. As complex datasets begin to underpin every aspect of modern life, data scientists are everywhere, applying their advanced programming and statistics knowledge, disciplinary understanding, and data wrangling skills.

      This is real!

  19. Nov 2018
    1. analytics, advising, and learning assessment, encompassing course-level learning analytics, as well as planning and advising systems that focus on overall student success

      Also analytics for different stakeholders, such as instructors, students, admins, to make informed decisions.

  20. Oct 2018
    1. We, the Architects. I've made this point elsewhere, but what is both exciting and daunting is that the shift to a component-based approach provides an unprecedented opportunity to shape, rethink, plan, and design our digital learning environments. An architect is a proactive agent who looks to plan structures and environments to accommodate future usage. By taking the component approach, we can all adopt an architect's perspective and work to design the learning environments we want and need.

      <3 this!

      Still remember I used the word architect in the first draft of the 'unLMS' paper, but it was rejected by one reviewer.

      The component-based approach is probably urging us to take an architect perspective. My intuition is that working as architects requires awareness of many cross-cutting ideas -- components and 'the whole', design and engineering, history and human values, and so forth.

    1. This is a CS/Data Science paper about an interesting web phenomenon. My key takeaway is the introduced method of detecting collaboration based on user actions in a well structured environment. Another takeaway is the use of Complexity Science and emergence in studying massive-scale online interactions.

      A muddy point is how collaboration is defined in this case. In this r/place case, conflicts and collaboration co-exist and cannot be separated from each other. A cool direction is to think about conflicts as well in such a contested environment. (Not sure how useful it is though; maybe just producing useless knowledge that could be useful in the future or in another scenario.)

    2. We want to make our model temporally-aware, as further insights can be gathered by analyzing the temporal dynamics of the user interactions.

      sounds exciting

    3. We introduce a generic method to infer collaboration patterns in environments where only user interactions are observable. We show, through experiments, that the local proximity of users’ actions represents a sufficiently expressive signal for the study of collaboration. Indeed, we report it to be more predictive than the modeling of the interactions between users and their environment. This finding reinforces previous results in the domain, that suggest the study of emergent phenomenons requiring the modeling of interrelationships between the parts of a system, rather than modeling their individual behaviors. Being able to capture rich social signals, such as collaboration patterns, represents a unique opportunity to study complex social phenomenons.

      modeling collaboration patterns in a particular environment is the focus here. in this case, the environment is well defined. will be totally different if it's another environment.

      what's missing in this paper (not sure whether the authors are aware of it) is the coordination carried out in reddit sub communities. it is mentioned in this paper that user ids are hashed. curious whether there is a way to map artworks to sub communities. guess not hard for some artworks (like Ubuntu).

    4. We therefore conclude that, from the considered models, the parametrization of user interrelationships is the most predictive method of user actions in a sandbox environment.

      user-user relationships more predictive

    5. the locality of user actions being a critical aspect in the design of a method to predict collaborations.

      locality of user actions - tied to the structure of the environment -- pixels being clearly defined.
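      A rough sketch of how such a locality signal could be computed from an action log. The data, the `radius` threshold, and the pairing scheme are all hypothetical illustrations, not the paper's actual method:

```python
from collections import Counter
from itertools import combinations

# Hypothetical action log: (user, x, y) clicks on the canvas.
# The real r/place log also carries timestamps, omitted here.
actions = [
    ("u1", 10, 10), ("u2", 11, 10), ("u1", 12, 11),
    ("u2", 12, 12), ("u3", 500, 500),
]

def local_coactions(actions, radius=2):
    """Count, per user pair, how often their clicks fall within
    `radius` pixels of each other (Chebyshev distance) -- a crude
    proxy for the locality-of-actions signal."""
    counts = Counter()
    for (ua, xa, ya), (ub, xb, yb) in combinations(actions, 2):
        if ua != ub and max(abs(xa - xb), abs(ya - yb)) <= radius:
            counts[tuple(sorted((ua, ub)))] += 1
    return counts

# u1 and u2 repeatedly edit neighboring pixels; u3 works far away.
print(local_coactions(actions))  # Counter({('u1', 'u2'): 4})
```

      Users whose edits cluster on the same patch of canvas (here u1 and u2) accumulate co-action counts, while distant users never pair up.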

    6. Reproducibility: We ran our experiment on a single computer, running a 3.2 GHz Intel Core i7 CPU, using PyTorch version 0.2.0.45. We run the optimization on GPU NVIDIA GTX 670. We trained our model with the following parameters: = 0.04, = 0.01, K = 120. All code will be made available at publication time.

      reproducibility

    7. We therefore represent every user in the system by a latent representation: a real-valued vector p_ui of size K where K is the chosen dimensionality of the latent space. We define a notion of distance between any pair of users in the considered population, where the distance metric represents the strength of collaboration between users. If two users are actively collaborating, the response produced by the combination of their respective vectors (typically by using dot product) should be high.

      collaboration is modeled by vector similarity, which represents the proximity of their actions.
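      A minimal sketch of that idea. The vectors here are hand-crafted for illustration; in the paper they are learned embeddings, not fixed by hand:

```python
# Hypothetical latent vectors p_u (K = 8 here; the paper uses K = 120),
# crafted so that u1 and u2 share a direction while u3 points elsewhere.
K = 8
p = {
    "u1": [1.0, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "u2": [0.9, 1.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0],
    "u3": [0.0, 0.0, 0.0, 0.0, 1.0, 0.8, 0.0, 0.0],
}

def collaboration_score(a, b):
    """Strength of collaboration as the dot product of two user vectors."""
    return sum(x * y for x, y in zip(p[a], p[b]))

print(collaboration_score("u1", "u2"))  # 1.8  (collaborating pair)
print(collaboration_score("u1", "u3"))  # 0.0  (unrelated users)
```

      The dot product is high when two users' vectors point the same way, which the model takes as evidence of coordinated action.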

    8. We opt for an embedding method, since we hypothesize less independent behaviors than individuals in the system. Embedding methods are especially adapted to produce personalized predictions (e.g. collaborative filtering applications), by making the assumption that the behavior from an individual can be predicted by collecting data from many users

      choice of embedding methods

    9. In this section, we introduce a predictive method that models collaboration between users in order to predict future user actions. In this regard, we train a model to evaluate the likelihood of a user u_i to perform a particular action at a given moment in time.

      model

    10. We first observe, in figure 2 (left), the activity distribution of the users. This distribution highlights the presence of few power-users and a vast majority of users performing a moderate number of clicks. In figure 2 (middle), we observe the same type of distribution for the number of updates performed on every pixel. As few pixels have been highly disputed, the large majority of them have only been updated a few times.

      actors & place

    11. In April 2017, the discussion platform Reddit launched Place, an online canvas of 1000-by-1000 pixels, designed as a social experiment. Reddit users were allowed to change the color of one pixel at every fixed time interval (the interval varied from 5 to 20 minutes during the events). The event lasted 72 hours and received a massive engagement from more than 1.2M unique users. Users collaborated to create various artworks by either directly interacting with the canvas or by coordinating their actions from the discussion platform.

      r/place context

    12. The most relevant line of research is probably the task of detecting overlapping communities, whose members can be part of multiple groups. Those lines of research have made use of Matrix Factorization methods in order to relax the assumption of communities being disjoint (Zhang and Yeung 2012) (Yang and Leskovec 2013).

      community detection - esp. community overlap.

    13. High-level social behaviors, such as the bystander effect, have been observed inside a simple video-game based virtual environment (Kozlov and Johansen 2010). Social interactions in Massively Multiplayer Online games have been studied by Cole et al. (Cole and Griffiths 2007).

      games - another interesting mass collaboration context

    14. The term emergence has various definitions across fields (Kub 2003), alike complexity (Gershenson and Fernández 2012) from which emergence has been suggested to arise from. Emergence generally refers to system-wide behaviors that cannot be explained by the sum of individual behaviors.

      cool - useful references to emergence - a concept 'collaborative learning at scale' cannot miss.

    15. The exploration-exploitation trade-off in a collaborative problem solving task has been discussed by Mason and Watts (Mason and Watts 2012). Kittur and Kraut (Kittur and Kraut 2008) studied various types of collaboration taking place between Wikipedia editors and measured the impact on quality of the resulting articles.

      useful references

    16. In order to establish a predictive model of user behavior, we consider the sandbox as a complex social system, i.e., a system inherently difficult to model due to the large amount of interdependencies between its parts. Previous research in the field of Complexity Science (Bar-Yam 2002) hypothesized that the nature of such systems is favorable to the emergence of global behaviors, arising from the local interactions of the actors. Following this evidence, we propose a model that assesses the likelihood of a user interaction by observing its social context. In other terms, we propose a predictive model that captures inter-user relationships instead of modeling independent behaviors.

      conceptualizing the canvas as a 'complex social system' makes sense. Need to look into Complexity Science.

    17. Users were not grouped in teams nor were given any specific goals, yet they organized themselves into a cohesive social fabric and collaborated to the creation of a multitude of artworks.

      note: reddit sub communities did play a role.

    18. Rather than modeling the users as independent actors in the system, we capture their coordinated actions with embedding methods which can, in turn, identify shared objectives and predict future user actions.

      this sounds cool -- focused on the identification of coordinated actions instead of actors (drawing closer to the definition of collaboration); using embedding methods

    19. Latent Structure in Collaboration: the Case of Reddit r/place

      The first research paper I've seen on /r/place.

    1. Peer interaction may be able to improve the isolation of online learning, as well as improving the learning. • Potential to automatically group people based on what misperceptions they currently have.

      One benefit of connecting MOOC learners: reduce isolation. One method of harnessing the scale: auto group learners based on their attributes.

    2. Reputation Systems in MOOC Forums

      One tool to harness the scale

    3. All hypotheses confirmed: • Engaging in discussion leads to more correct answers. • The bonus incentive leads to more correct changed answers. • The participants have substantive discussions.

      Interesting finding based on MTurk experiments. Discussion and incentive matter.

    4. MOOC Collaboration Today • Forums: really Q&A tools; low participation; participants do well: correlation or causation? • Informally Organized Groups: Google Hangouts, Facebook groups, in-person meetings • Formal Project Groups: NovoEd; Peer Assessment (anonymous, asynchronous); Kulkarni, Klemmer et al. TOCHI 2

      These activities are arguably cooperative. Also, they are mostly defined by the instructor.

    1. The study assessed the behavior of 4,500 children, ages 8 to 11, by looking at their sleep schedules, how much time they spent on screens and their amount of exercise, and analyzed how those factors affected the children’s mental abilities.

      The study examined associations rather than cause-and-effect.

  21. Sep 2018
    1. 1. Technical understanding Designers, or whatever they’re called now or in the future, will need to know far more about what the software does, its functionality, strengths and weaknesses. In some large projects we have found that a knowledge of how the NLP works has been an invaluable skill, along with an ability to troubleshoot by diagnosing what the software can, or cannot do.

      Knowing NLP is always helpful. Wondering how much a shift has happened after the rise of AI?

  22. Aug 2018
    1. Computer-supported collaborative learning (CSCL) environments provide learners with multiple representational tools for storing, sharing, and constructing knowledge. However, little is known about how learners organize knowledge through multiple representations about complex socioscientific issues.

      gg

  23. Jul 2018
    1. Institutionalised demands for academic hyper-performativity can also be part of formal academic workload models.

      Very true. The pressure to produce quality 'outputs' quickly is real in many places. Time with family and for self-care is often the first to be sacrificed.

    1. The broadest (but unsatisfactory) definition of 'collaborative learning' is that it is a situation in which two or more people learn or attempt to learn something together.
    2. our group did not agree on any definition of collaborative learning. We did not even try. There is such a wide variety of uses of this term inside each academic field

      Diverse perspectives of 'collaborative learning'

    1. My goal is to show that you needn’t be a hotshot web developer to create a custom annotation-powered app. If the pattern embodied by this demo app is of interest to you, and if you’re at all familiar with basic web programming, you should be able to cook up your own implementations of the pattern pretty easily.

      could be fun

    1. The distinction between openness in practice and openness in content is significant in cost as well. Creating content requires time, effort, and resources and opens up numerous discussions around intellectual property rights. However, openness in practice requires little additional investment, since it essentially concerns transparency of already planned course activities on the part of the educator.

      I appreciate the distinction -- between openness in content and openness in practice. But I may disagree on the assessment of their associated costs. I bet the authors' thinking on this has also evolved after the MOOC movement.

      In open science, both kinds of openness will incur burden and cost.

    2. Openness as Transparent Practice. The word open is in constant negotiation. When learners step through our open door, they are invited to enter our place of work, to join the research, to join the discussion, and to contribute in the growth of knowledge within a certain field. The openness of the academy refers to openness as a sense of practice. Openness of this sort is best seen as transparency of activity.

      "Openness as a sense of practice"

    1. During the Ideation phase, researchers and their collaborators develop and revise their research plans. During this phase they may collect preliminary data from publicly available data repositories and conduct a pilot study to test their new methods on the existing data. When applying for research funding, they develop the required data management plans, stating where data, workflow, and software code will be archived for use by other researchers. In addition, in some cases, they may decide to preregister their research plans and protocols in an open repository, as has, for example, become common practice in clinical research.

      Annotation remains in 'the dark' in the description of Provocation and Ideation here.

    2. A related principle is that integrating open practices at all points in the research process eases the task for the researcher who is committed to open science. Making research results openly available is not an afterthought when the project is over, but, rather, it is an effective way of doing the research itself. That is, in this way of doing science, making research results open is a by-product of the research process, and not a task that needs to be done when the researcher has already turned to the next project. Researchers can take advantage of robust infrastructure and tools to conduct their experiments, and they can use open data techniques to analyze, interpret, validate, and disseminate their findings. Indeed, many researchers have come to believe that open science practices help them succeed.

      Principle 2 of Open Science by Design. I would fully abide by it. It applies to the argument for open scholarly annotation. It may sound crazy, but it's about making scholarly work easier by creating a linked system for researchers themselves. The infrastructure is not there, not to mention the culture. But it was the same with open data.

    3. Theoverarching principle of open science by design is that research conducted openly and transparently leads to better science

      Principle 1 of Open Science by Design

    4. What is needed to address complex problems is the ability to find and integrate results not only within communities, but also across communities—without paywalls or subscription barriers. Utilizing advanced machine learning tools in analyzing datasets or literature, for example, will facilitate new insights and discoveries.

      Where machine learning kicks in. It could lead to machine-generated annotations of scholarly articles to aid human annotation -- something a CMU group is already doing.

    5. Greater transparency is a major focus of those working to increase reproducibility and replicability in science (e.g., Munafò et al., 2017).

      Yes, transparency would be an overarching term over reliability and reproducibility.

    6. Ensuring the reliability of knowledge and reported results constitutes the heart of science and the scientific method.

      The key term here -- reliability -- is also ripe for rethinking. So far (and in this report) it's mostly about how we get from data to results. But given known problems with 'the grant cycle', reliability should be broader. Another argument for covering scholarly annotations, which are also data but generated by researchers themselves (sort of like meta science).

    7. The specific ways in which cultural barriers to open science operate vary significantly by field or discipline. Overuse and misuse of bibliographic metrics such as the Journal Impact Factor in the evaluation of research and researchers is one important “bug” in the operation of the research enterprise that has a detrimental effect across disciplines. The perception and/or reality that researchers need to publish in certain venues in order to secure funding and career advancement may lock researchers into traditional, closed mechanisms for reporting results and sharing research products. These pressures are particularly strong for early career researchers.

      Applause: "Building a supportive culture" is the first item suggested by the committee to accelerate progress in open science by design.

    8. •Provocation: explore or mine open research resources and use open tools to network with colleagues. Researchers have immediate access to the most recent publications and have the freedom to search archives of papers, including preprints, research software code, and other open publications, as well as databases of research results, all without charge or other barriers. Researchers use the latest database and text mining tools to explore these resources, to identify new concepts embedded in the research, and to identify where novel contributions can be made. Robust collaborative tools are available to network with colleagues. •Ideation: develop and revise research plans and prepare to share research results and tools under FAIR principles. Researchers and their collaborators develop and revise their research plans, collect preliminary data from publicly available data repositories, and conduct a pilot study to test their new methods on the existing data. When applying for research funding, they develop the required data management plans, stating where data, workflow, and software code will be available for use by other researchers under FAIR (Findable-Accessible-Interoperable-Reusable) principles. In addition, in some cases, they may decide to pre-register their research plans and protocols in an open repository.

      These two components -- provocation and ideation -- are probably most relevant to the public scholarly annotation that I am interested in. But they barely touch upon it, because of this document's emphasis on data sharing. Again, this reflects a neglect of the value represented in annotations.

    9. In order to frame the issues and possible actions, the committee developed the concept of open science by design, defined as a set of principles and practices that fosters openness throughout the entire research life cycle (Figure S-1).

      This is a useful framework, accompanied by a useful visual that does not convey a linear lifecycle.

    10. To evaluate more fully the benefits and challenges of broadening access to the results of scientific research, described as “open science,” the National Academies of Sciences, Engineering, and Medicine appointed an expert committee in March 2017. Brief biographies of the individual committee members are provided in Appendix A. The committee was charged with focusing on how to move toward open science as the default for scientific research results, and to indicate both the benefits of moving toward open science and the barriers to doing so. This report presents the findings and recommendations of the committee, with the majority of the focus on solutions that move the research enterprise toward open science.

      Background of this report compiled by the National Academies.

    Tags

    Annotators

    1. Before reading this report, I happened to read a much newer article titled Administrative social science data: The challenge of reproducible research published in Big Data & Society — a journal I reviewed for. There are some clear advancements the social science communities (and scholarly communities in general) have made since 1985. For instance, we have various research tools and platforms available these days to facilitate data management, sharing, and publishing. Git — a version control system highly recommended by this article — was nonexistent when the National Academies Report came out; neither were platforms and initiatives such as Open Science Framework, Harvard Dataverse Network, and Figshare. However, when juxtaposing challenges discussed in both pieces, what struck me — again — was how slow it has been to shift academic cultures to promote data sharing. Indeed, developing tools is easier, whereas changing cultures at many levels — e.g., in research labs, departments and colleges, institutions, associations, funding agencies — is much, much more difficult.

      A blog post when I was attending the Data Sharing workshop organized by AERA and NSF. Another reminder that it's hard work to change culture.

    1. Recommendation 16. Institutions and organizations through which scientists are rewarded should recognize the contributions of appropriate data-sharing practices.

      Oh man - kinda depressing to see these recommendations put forward in 1985 -- before I was even born. It must have been so hard to bring about cultural changes in the academy.

    2. But there are potential costs for an investigator who provides data to others: costs of time, money, and inconvenience; fears of possible criticism, whether justified or not; possible violations of trust by a breach of confidentiality; and forgoing recognition or profit from possible further discoveries.

      These potential costs of data sharing also apply to the sharing of annotations -- another type of data generated in scholarly processes.

    Tags

    Annotators

    1. National Research Council. 1985. Sharing Research Data. Washington, DC: The National Academies Press. https://doi.org/10.17226/2033.

      This report published by National Research Council in 1985.

    1. 4. Use Cases. In order to evaluate and demonstrate the feasibility of the OAC Data Model, an initial set of use cases has been developed that are representative of a range of common scholarly practices involving annotation. This preliminary set is available from the OAC Wiki as OAC User Narratives/Use Cases and includes: Citation of Non Printed Media; Commentary on Remote Resources; Shared Annotations Across Interfaces; Harvesting, Aggregating, Ranking and Presenting Annotations from Multiple Sites; Annotating Relationships Between Multiple Mixed-Media Resources; Annotations which Capture Netchaining Practices; Annotations with Compound Targets.

      Use cases that are quite brief. But useful.

    2. In the OAC model, an Annotation is an Event initiated at a date/time by an author (human or software agent). Other entities involved in the event are the Content of the Annotation (aka Source) and the Target of the Annotation. The model assumes that the core entities (Annotation, Content and Target) are independent Web resources that are URI-addressable. This approach simplifies and decouples implementation from the repository. An essential aspect of an annotation is the (implicit or explicit) expression of an “annotates” relationship between the Content and the Target.

      The OAC data model of annotation. Graph-based, interestingly.
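
      The core triple described in this excerpt -- Annotation, Content, and Target as URI-addressable resources tied together in a small graph -- can be sketched in Python. This is my own illustration, not code from the paper; the predicate names (`hasContent`, `hasTarget`, `annotates`) are simplified stand-ins, not the exact OAC vocabulary terms.

```python
# Illustrative sketch of the OAC core model: an Annotation event linking
# URI-addressable Content (aka Source) and Target resources. Predicate
# names are simplified stand-ins for the OAC vocabulary.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Annotation:
    uri: str                 # the annotation itself is a Web resource
    content_uri: str         # the Content (Source) of the annotation
    target_uri: str          # the Target being annotated
    author: str = "unknown"  # human or software agent
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def triples(self):
        """Emit the annotation graph as (subject, predicate, object) triples."""
        return [
            (self.uri, "hasContent", self.content_uri),
            (self.uri, "hasTarget", self.target_uri),
            # the essential (implicit or explicit) "annotates" relationship
            (self.content_uri, "annotates", self.target_uri),
        ]

anno = Annotation(
    uri="http://example.org/anno/1",
    content_uri="http://example.org/notes/42",
    target_uri="http://example.org/poems/lyrical-ballads",
    author="agent:alice",
)
for s, p, o in anno.triples():
    print(s, p, o)
```

      Because all three entities are plain URIs, the annotation can live in a different repository than either the note or the annotated page, which is exactly the decoupling the excerpt emphasizes.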

    3. The OAC approach is based on the assumption that clients publish annotations on the Web and that the target, content and the annotation itself are all URI-addressable Web resources. By basing the OAC model on Semantic Web and Linked Data practices, we hope to provide the optimum approach for the publishing, sharing and interoperability of annotations and annotation applications. In this paper, we describe the principles and components of the OAC data model, together with a number of scholarly use cases that demonstrate and evaluate the capabilities of the model in different scenarios.

      This paper introduces the Open Annotation Collaboration (OAC), which preceded the W3C Open Annotation working group that led the development of web annotation standards.

    1. My main point, then, is simple: extensive annotation can work in print, primarily because the organizational principles of the medium are firmly established and implicitly understood by most readers. Extensive annotation in the electronic medium, however, is more problematic. On the one hand, it is extremely tempting to create superannotated editions, bringing a given text together with all its sources, all its commentary, all its reviews, all its illustrations, even all its parodies and film adaptations. But until the conventions of the electronic edition are securely established, too many potential users will find these editions too difficult to navigate.

      A fair point about the difficulty with over-annotated electronic documents. This is from a publishing point of view, however: a publisher does not want to have cluttered texts. But has tech evolved far enough to mitigate this difficulty? How would scholars (moving from the poetry-reading scenario) respond to the clutter problem? Time to revisit.

    2. But, even more, we realize the necessity of convincing skeptical, technophobic colleagues of the usefulness of the electronic medium. These are people, on the whole, for whom "nonlinear" modes of thought have little appeal; they sneer at all the hype about hypertext and return to their studies or their library carrels to hold in their hands the objects they revere. Such scholars are not simply going to retire or disappear, and we need them, if a market for electronic editions is to develop. They can, with only slight difficulty, navigate a complex scholarly book like my Cornell volume, because they understand the organizational principles of such objects, principles that have gradually developed over a half millennium of print-based scholarly editing. But turn them loose in an electronic environment, and they tend to get lost: the conventions of organizing electronic books have yet to be established.

      This interesting commentary touches upon how an established culture shapes how we interact with text.

    3. This edition will of course have many hypertextual features: the ability to move directly from the text to an image of the printed page or from the text to a critical apparatus, the ability to set different versions of poems side by side for purposes of comparison, and even (we are told) simultaneous scrolling of open text windows. But it would be a mistake, I believe, to regard it simply as a "hypertext," at least in the sense in which promoters and theorists of hypertext have intended the term. We are not interested in "nonlinear" modes of thought; rather, we are intent on providing scholars with evidence that will allow them to draw very "linear" conclusions about this collection of poems. We are not interested in creating a vast, complex web of documents, at the center of which is a Lyrical Ballads poem, but which is so rich in annotation that the poem is buried beneath the weight of its associated texts.
    4. The enthusiasm has not subsided -- much -- but the giddiness has, as we have confronted the practical realities of delivering an actual product. The limitations of software, the awkwardness of SGML markup (not to mention learning how to do it), the difficulties and costs of digital reproductions of manuscripts, and the simple fact that Lyrical Ballads is a very well edited text forced us over and over again to rethink our project and change its scope.

      Obstacles introduced by tech as well.

    5. Various layers of annotation enforce a special discipline on those who attempt to read the volume: readers are constantly led away from the text and back to it again; they are forced to keep track of different kinds of annotation, sometimes in different parts of the volume. In short, they cannot lightly skim. Annotation, in this respect, is a rhetorical means of impressing readers with the significance of the poetry, and it is also a means of rehabilitating the poetry, by forcing readers to scrutinize Wordsworth's efforts as a translator in unprecedented ways.

      This is quite remarkable. Multiple layers of annotations "forced" on top of poems make the reader engage with poetry in a fresh way. In this case, the value of generated annotations is apparent, at least to the editor.

    6. My challenge, then, was to find reasons why these poems are interesting and to make those reasons apparent. My means for doing so was annotation. A typical page of my edition of Wordsworth's Aeneid has four bands of text: one containing the reading text of the translation and three in smaller type underneath. The top band in smaller type gives Coleridge's unpublished notes to the translation, a wonderful find that, to my knowledge, only Robert Woof and Stephen Parrish had examined before I did. The middle band provides the critical apparatus of verbal variants, such as one would find in any variorum edition, and the bottom band contains extensive annotations about Wordsworth's methods of translation -- comparisons between the translation and the Latin, suggestions about ways in which his translation may have been influenced by prose paraphrases and scholarly commentaries, passages in his original poems that allude to the Aeneid, and, of course, obligatory attempts to explain Coleridge's comments. In addition, after the reading text, a lengthy set of editorial notes records Wordsworth's borrowings from four earlier translations of the Aeneid: the translations of John Ogilby (the 1650 edition that Wordsworth owned, which is now in the Wordsworth Library, Grasmere), John Dryden, Joseph Trapp, and Christopher Pitt.

      Four bands of text to help people see (translated) poems as interesting. A very different goal than annotating genes.

    7. its editors have maintained that annotations not concerned with textual matters add an undesirable layer of clutter to volumes that are already very large and very full.

      Undesired clutter introduced by annotations

    8. It is by doing this public annotation that I realize (again) how privileged I am as an academic affiliated with a big university. There are certain articles that I can access through my university libraries. But when I annotate them with Hypothesis, my annotations become 'orphans' because the articles are not accessible to the general public. This raises questions about what this space is, and for whom it exists.

    1. 3.1.3. Workflow Components. One of Taverna's key values for example is the availability of services to the core system; current figures estimate this to be around 3500, mainly concentrated in the bioinformatics problem domain. Taverna has also began to share workflows through the myExperiment project (21) in order to make such workflows available to the community as a whole. Taverna has a GUI-based desktop application that uses semantic annotations associated with services. It employs the use of semantic-enabled helper functions which will be made available in the next public release of the software. Developers can incorporate new services through simple means and can load a pre-existing workflow as a service definition within the service palette, which can then be used as a service instance within the current workflow (i.e. to support grouping). Services within the pre-existing workflow can also be instantiated individually within the current workflow and a developer can create user-defined perspectives that allow a panel of pre-existing components to be specified.

      This is an important paper (based on citation numbers).

      It provides a systematic intro to workflows in e-science. Similar to another paper I just annotated, it's coming from an engineering perspective. Annotation here plays a lesser role (conceptually) than the annotation I am making right now. Specifically, annotations discussed in such e-science workflows serve more of a mechanical role (e.g., for preservation), instead of a more epistemic role.

    1. Scientific workflows are used by scientists not only as computational units that encode scientific methods that can be shared among scientists, but also to specify their experiments. In this paper we presented a research object model to capture all the needed information and data including the methods (workflows) and other elements: namely annotations, datasets, provenance of the workflow results, etc.

      This is interesting and valuable work, focusing on the design and engineering aspects of a (computation-centric) workflow in the sciences. There is a light discussion about the value generated by maintaining such a workflow and workflow-centric research objects. I would also appreciate more explanation of annotation activities in the workflow.

    2. Fig. 4: A sample research object lifecycle.

      Em.. a gendered analysis could be done on this figure.

    3. A research object normally starts its life as an empty Live Research Object, with a first design of the experiments to be performed (which determines what workflows and resources will be added, by either retrieving them from an existing platform or creating them from scratch). Then the research object is filled incrementally by aggregating such workflows that are being created, reused or re-purposed, datasets, documents, etc. Any of these components can be changed at any point in time, removed, etc.

      Lifecycle of a workflow-centric research object.
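
      The lifecycle described in this excerpt can be mimicked with a toy model: a research object starts empty, is filled incrementally with workflows, datasets, and documents, and allows any component to be removed later. This is my own simplification; the class and resource names are made up for illustration.

```python
# Toy model of the lifecycle sketched above: a Live Research Object starts
# empty and aggregates resources incrementally; components can later be
# removed or replaced.
class LiveResearchObject:
    def __init__(self, title):
        self.title = title
        self.resources = {}  # name -> kind ("workflow", "dataset", "document", ...)

    def aggregate(self, name, kind):
        self.resources[name] = kind

    def remove(self, name):
        self.resources.pop(name, None)

ro = LiveResearchObject("My experiment")
assert not ro.resources                # starts its life empty
ro.aggregate("genome-wf", "workflow")  # first design of the experiment
ro.aggregate("pilot.csv", "dataset")
ro.aggregate("notes.md", "document")
ro.remove("pilot.csv")                 # any component can be changed or removed
print(sorted(ro.resources))
```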

    4. Figure 2 provides a more detailed view of the resources that compose workflow templates and workflow runs. A workflow template is a graph in which the nodes are processes and the edges represent data links that connect the output of a given process to the input of another process, specifying that the artifacts produced by the former are used to feed the latter. A process is used to describe a class of actions that when enacted give rise to process runs. The process specifies the software component (e.g., web service) responsible for undertaking the action. Note that some workflow systems may specify in addition to the data flow, the control flow, which specifies temporal dependencies and conditional flows between processes. We chose to confine the workflow research object model to data-driven workflows, as in Taverna [16], Triana [2], the process run Network Director supplied by Kepler [4], Galaxy, Wings [7], etc.

      This is getting clearer: A workflow template is a graph whose nodes are processes and edges are data links/moves.

      The example from bioinformatics shows that understanding/constructing such a model requires much domain knowledge (e.g., gene stuff). So annotations made in such pathways -- like annotating a gene in a publication -- have domain-specific value not shared by other disciplines.

      This domain specificity is linked to an annotation I made on 'dark data' about the credit system. In bioinformatics, annotating a gene has already been recognized as an important scientific act with value to the field, while in educational research the value of annotation is still to be discovered, debated, and agreed upon.

    5. Figure 1 illustrates a coarse-grained view of a workflow-centric research object, which aggregates a number of resources.

      A sort of UML diagram illustrating relations among different objects in a workflow

    6. our model is built on earlier work on myExperiment packs [15], which aggregate elements such as workflows, documents and datasets together, following Web 2.0 and Linked Data principles [18, 17]. The myExperiment ontology [14], which forms the basis for our research object model, has been designed such that it can be easily aligned with existing ontologies. For instance, their elements can be assigned annotations comparable to those defined by Open Annotation Collaboration (OAC).

      Important information about the myExperiment ontology framework.

    7. To overcome these issues, additional information may be needed. This includes annotations to describe the operations performed by the workflow; annotations to provide details like authors, versions, citations, etc.; links to other resources, such as the provenance of the results obtained by executing the workflow, datasets used as input, etc. Such additional annotations enable a comprehensive view of the experiment, and encourage inspection of the different elements of that experiment, providing the scientist with a picture of the strengths and weaknesses of the digital experiment in relation to decay, adaptability, stability, etc.

      Annotation--of various types of objects--plays an important role in scientific workflows, to support reproducibility for instance.

    8. These richly annotated objects are what we call workflow-centric research objects. The notion of Research Object has been introduced in previous work [20, 19, 1] -- here we focus on Research Objects that encapsulate scientific workflows (hence workflow-centric).
    9. Scientific workflows are used to describe series of structured activities and computations that arise in scientific problem-solving, providing scientists from virtually any discipline with a means to specify and enact their experiments [3]. From a computational perspective, such experiments (workflows) can be defined as directed acyclic graphs where the nodes correspond to analysis operations, which can be supplied locally or by third party web services, and where the edges specify the flow of data between those operations.

      A definition of scientific workflow, and an operationalization from a computational perspective. It reminds me of work on orchestration graphs in CSCL. Wondering how much standardization there is and whether standardization of workflows is meaningful at all.
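
      The computational definition quoted here -- a directed acyclic graph whose nodes are analysis operations and whose edges carry data from one operation's output to another's input -- can be sketched minimally in Python. This is my own illustration of the general idea, not how Taverna, Kepler, or any particular workflow system actually works; the operation names are made up.

```python
# Minimal data-driven workflow DAG: nodes are operations, edges are data
# links feeding one operation's output into another's input. Operations
# run in topological order so every input exists before it is consumed.
from graphlib import TopologicalSorter

def run_workflow(operations, edges, seed):
    """operations: name -> callable(list_of_inputs); edges: (src, dst) data links."""
    deps = {name: set() for name in operations}
    for src, dst in edges:
        deps[dst].add(src)
    results = {}
    for name in TopologicalSorter(deps).static_order():
        # gather outputs of upstream operations; source nodes get the seed data
        inputs = [results[src] for src, dst in edges if dst == name] or [seed]
        results[name] = operations[name](inputs)
    return results

ops = {
    "load":      lambda xs: xs[0],                      # pass raw data through
    "normalize": lambda xs: [x / 10 for x in xs[0]],    # scale each value
    "stats":     lambda xs: sum(xs[0]) / len(xs[0]),    # summarize
}
out = run_workflow(ops, [("load", "normalize"), ("normalize", "stats")],
                   seed=[10, 20, 30])
print(out["stats"])  # → 2.0
```

      The acyclicity requirement is what makes the topological ordering (and hence a single deterministic run) possible; a cyclic data link would make the schedule undefined.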

    1. Brooks Hanson, Director of Publications for the American Geophysical Union, summed up the day with a list of goals for a scholarly annotation layer: It must be built on an open but standard framework that enables global discovery and community enrichment. It must support granular annotation of elements in all key formats, and across different representations of the same content (e.g. PDF vs HTML). There must be a diversity of interoperable annotation systems. These systems must be fully accessible to humans, who may need assistance to use them, and machines that will use APIs to create and mine annotations. It must be possible to identify people, groups, and resources in global ways, so that sharing, discovery, and interconnection can span repositories and annotation services. These are lofty goals.

      Quite insightful ideas about a "scholarly annotation layer." Can we claim the existence of such a layer yet? Right now it seems such a layer still operates at the individual level. When there are public ones, they don't talk with other public annotation layers. The goals are quite lofty indeed. That's why it's hard and fascinating.

    2. The goals of the workshop were to review existing uses of annotation, discuss anticipated uses, consider opportunities and challenges from the perspective of both publishers and implementers, converge on a definition of interoperability, and identify next steps. The survey of existing uses began with UCSD’s Anita Bandrowski who presented an overview of SciBot, a tool that’s being used today to validate Research Resource Identifiers in scientific papers. Sebastian Karcher, who works with the Qualitative Data Repository at Syracuse, discussed an annotation-enhanced workflow for sharing, reusing, and citing qualitative data. GigaScience’s Nicole Nigoy presented the results of the Giga-Curation Challenge at Biocuration 2016. Saman Ehsan, from the Center for Open Science, highlighted the role annotation can play when researchers work together to reproduce studies in psychology. Mendeley’s William Gunn described annotation of research data as not merely a supplement to scholarly work, but potentially a primary activity. John Inglis, executive director of Cold Spring Harbor Laboratory Press, envisioned an annotation layer for bioRxiv. And Europe PMC’s Jo McEntyre showed an experimental system that mines text for entities (e.g. biomolecules) and automatically creates annotation layers that explain and interconnect them.

      Diverse usages of annotations by key stakeholders.

    3. As an annotator, I want to be able to assign DOIs to individual contributions, or to sets of them. As an author, I want annotation to integrate with my preferred writing tool so that annotations can flow back into the text.

      As an author, the 2nd user story is natural to me. But it's refreshing to see the 1st user story -- an annotator claiming DOIs for individual annotations. I was like: Why? Why not?

    1. Scientists currently get credit for the citation of their published papers. Similar credit for data use will require a change in the sociology of science where data citation is given scholarly value. The publishing industry including, for example, Nature and Science is already beginning to provide a solution by allowing data to be connected with publications. However, space limits, format control, and indexing of data remain a major problem. Institutional and disciplinary repositories need to provide facilities so that citations can return the same data set that was used in the citation without adding or deleting records. Standards bodies for the sciences can set up methods to cite data in databases and not just data in publications (Altman & King, 2007).

      Reward and valuation systems are needed to give shared data more credit.

    2. Data becomes dark because no one is paying attention. There is little professional reward structure for scientists to preserve and disseminate raw data. Scientists are rewarded for creating high-density versions of their data in statistics, tables, and graphs in scholarly journals and at conferences. These publications in some ways are the sole end product of scientific inquiry. These products, while valuable, may not be as useful as some authors hope.

      Reward system in place is not rewarding the preservation of dark data.

    3. The data itself is often too voluminous or varied for humans to understand by looking at the data in its raw unprocessed form, so scientists use graphs, charts, mathematical equations, and statistics to “explain,” “describe,” or “summarize” the data. These representational tools help us to understand the world around us. The use of data simplification and data reduction methods in science is repeated at all scales of natural phenomena from the subatomic to the physics of our human scale world, to the function of a cell, a mating behavior of birds, or [End Page 286] the functioning of ecosystems. But these summary representations of data rely on the underlying data, and the published papers do not capture the richness of the original data and are in fact an interpretation of the data. If the dark data in the tail is not selectively encoded and preserved, then the underpinning of the majority of science research is lost.

      Here the article is actually getting into the scholarly workflow, i.e., data representations generated for publications are more visible and accessible than the raw data used to generate them.

    4. We can organize science projects along an axis from large to small. The very large projects supporting dozens or more scientists would be on the left side of the axis and generate large amounts of data, with smaller projects sorted by decreasing size trailing off to the right. The major area under the right side of the curve is the long tail of science data. This data is more difficult to find and less frequently reused or preserved. In this paper we will use the term dark data to refer to any data that is not easily found by potential users. Dark data may be positive or negative research findings or from either “large” or “small” science. Like dark matter, this dark data on the basis of volume may be more important than that which can be easily seen. The challenge for science policy is to develop institutions and practices such as institutional repositories, which make this data useful for society.

      Dark data--an interesting take on the "long tail" of scientific research, which includes those studies conducted by a single or a few scientists without funding.

      If data here is defined more generally--not only as data generated from empirical studies but the actual scholarly process--the idea of dark data would have new meanings. It is not only about the size of a project, but different parts of a project that get more or less recognition. For example, an opaque practice will only reveal the final publication, whereas a more transparent practice would share data, algorithms, etc. But rarely do scientists share how their ideas developed from a mere hunch to a grant proposal and then to a substantial study. Here the idea of dark data could contain data related to processes of scholarly production that do not get talked about, like how I am now annotating this article to develop an idea that's still fuzzy to myself but may (if I'm lucky) grow to something I cannot imagine. To me, this is the darker data in scholarly production, beyond empirical data generated by smaller projects.

    1. In this chapter, the authors reflect on the reasons for such hybrids, specifi-cally through an exploration of eLaborate. As a virtual research environment, eLaborate targets both professional scholars and volunteers working with textual resources. The environment offers tools to transcribe textual sources, to annotate these transcriptions, and to publish them as digital scholarly editions. The majority of content currently comprises texts from the cultural heritage of Dutch history and literary history, although eLaborate does not put limits on the kind of text or language. Nor does the system impose limits on the openness of contribution to any edition project. Levels of openness and access are solely determined by the groups of users working on specific texts or editions. This Web 2.0 technology-based software is now used by several groups of teachers and students, and by scholarly, educated, and interested volunteers.

      This chapter describes a tool named eLaborate, "in which scholars can upload scans, transcribe and annotate text, and publish the results as on online text edition which is freely available to all users." On p. 123, there is an interesting critique of how the scholarly workflow has remained static for almost 2000 years despite tech advancements.