190 Matching Annotations
  1. Jun 2019
    1. As a reminder, the two aspects of today’s notebooks (Mathematica, Jupyter, R markdown, Emacs/OrgMode) that I consider harmful for scientific communication are: The linear structure of a notebook that forces the narrative to follow the order of the computation. The impossibility to refer to data and code in a notebook from the outside, and in particular from another notebook, making reuse of code and data impossible.

      interesting take on two limitations of current computational notebooks.

  2. May 2019
    1. Mediation does not simply take place between a subject and an object, but rather co-shapes subjectivity and objectivity. Humans and the world they experience are the products of technical mediation, and not just the poles between which the mediation plays itself out.

      interesting points on mediation

    2. Classical philosophy of technology tended to reify ‘Technology’, treating it as a monolithic force. Ihde, by contrast, shuns general pronouncements about ‘Technology,’ fearing to lose contact with the role concrete technologies play in our culture and in people’s everyday lives.

      Appreciate this work

    1. In other words, data analytics involves the actual analysis of the data, and informatics is the application of that information. Health informatics professionals use their knowledge of information systems, databases, and information technology to help design effective technology systems that gather, store, interpret, and manage the data that is generated in the provision of healthcare to patients.

      informatics vs. analytics

  3. Apr 2019
    1. In light of the intense efforts by both public and private sectors to improve the outcomes for all children in K-12 classrooms, there is an urgent need to know more about how teachers think

      good

    1. the company doesn’t know what “health” exactly means in this context. What would a healthy Twitter look like? What even is health for a non-sentient website?

      defining health

    1. ConceptNet is a freely-available semantic network, designed to help computers understand the meanings of words that people use.

      this is super cool
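
      A quick way to get a feel for it: ConceptNet exposes a public lookup API at api.conceptnet.io. A minimal sketch (the endpoint shape and JSON field names are recalled from the public docs, so treat them as assumptions to verify):

      ```python
      # Minimal sketch: list the edges ConceptNet knows about a concept.
      # Endpoint and field names are assumed from the public docs; verify them.
      import requests

      def related_edges(concept, lang="en", limit=10):
          url = f"http://api.conceptnet.io/c/{lang}/{concept}"
          resp = requests.get(url, params={"limit": limit}, timeout=10)
          resp.raise_for_status()
          for edge in resp.json().get("edges", []):
              # Each edge links a start node to an end node via a labeled relation.
              yield (edge["rel"]["label"], edge["start"]["label"],
                     edge["end"]["label"], edge.get("weight"))

      for rel, start, end, weight in related_edges("notebook"):
          print(f"{start} --{rel}--> {end} (weight {weight})")
      ```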

    1. Tools for problem-finding The tools discussed so far are for scientists and engineers working on a problem. But let’s back up. How can someone find the right problem to work on in the first place? And how can they evaluate whether they have the right ideas to solve it?

      no good tools for this.

    2. What she’d like are tools for quickly estimating the answers to these questions, so she can fluidly explore the space of possibilities and identify ideas that have some hope of being important, feasible, and viable.

      promisingness judgments

    3. The very concept of a “programming language” originated with languages for scientists — now such languages aren’t even part of the discussion! Yet they remain the tools by which humanity understands the world and builds a better one.

      A powerful recognition based on historical perspectives.

    4. Climate change is too important for us to operate on faith.

      Agreed. And yet much public discourse operates this way.

  4. Mar 2019
    1. We believe it’s because most rely on a narrow approach to data analysis: They use data only about individual people, when data about the interplay among people is equally or more important.

      from attribute data to relational data!

  5. Feb 2019
    1. This allows us to use network information to distinguish three stages of learning: (i) a very early learning stage where all but the phonological layer contribute substantially to prediction, (ii) an early learning stage which marks a transition period, and (iii) a late learning stage in which contribution from word associations dominates word learning.

      3 stages of word learning

    1. However, the computer has many other capabilities for manipulating and displaying information that can be of significant benefit to the human in nonmathematical processes of planning, organizing, studying, etc. Every person who does his thinking with symbolized concepts (whether in the form of the English language, pictographs, formal logic, or mathematics) should be able to benefit significantly.

      Rich representations -- besides computational power -- are absolutely key for the described scenario.

      This description, written half a century ago, is more or less realized. It reminded me of the Microsoft Vision 2019 created around 2009 that is yet to be realized.

      https://www.youtube.com/watch?v=P2PMbvVGS-o

    2. the complexity of his problems grows still faster, and the urgency with which solutions must be found becomes steadily greater in response to the increased rate of activity and the increasingly global nature of that activity.

      A situation characterized as The Ingenuity Gap by Homer-Dixon.

    3. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human "feel for a situation" usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.

      I appreciate the recognition of both the "arts" and "science" of problem solving.

    1. Engelbart’s dream was different. He believed that networked computing could empower collective intelligence, offering humanity a way to address complex problems together.

      Nice to see this link to Bret Victor's "annotation" of Engelbart's ideas. Check out Victor's work on dynamic representations and Dynamicland if you haven't yet.

  6. Jan 2019
    1. The National Science Foundation continued to exist as a basic-science funding agency. But unlike ARPA, the NSF funds projects, not people, and project proposals must be accepted by a peer review board. Any sufficiently-revolutionary project, especially at the early stages, will sound too crazy for a board to accept. Worse, requiring a detailed project proposal means that the NSF simply can't fund truly exploratory research, where the goal is not to solve a problem, but to discover and understand the problem in the first place.

      a problem with research funding

    1. What is Informatics? The study of the structure, the behaviour, and the interactions of natural and engineered computational systems. The central focus of Informatics is the transformation of information - whether by computation or communication, whether by organisms or artefacts. Understanding informational phenomena - such as computation, cognition, and communication - enables technological advances.

      I like this definition

    1. The term “informatics” broadly describes the study, design, and development of information technology for the good of people, organizations, and society.
    2. The Classification of Instructional Programs (CIP) describes Informatics as: "A program that focuses on computer systems from a user-centered perspective and studies the structure, behavior and interactions of natural and artificial systems that store, process and communicate information. Includes instruction in information sciences, human computer interaction, information system analysis and design, telecommunications structure and information architecture and management."
    1. The calendar, moreover, embodies cyclical re-enactments, retrievals, or renewals of our commitment to and engagement with the sacred, through the annual feasts and celebrations of one's religious community. Modernity attacks this traditional temporality by transforming time into a resource that is subject to the imperatives of efficiency and the logic of commodification. This much is convincing and powerful. But if sacred and traditional time is compromised in this way, are we left only with meaningless clock-time?

      good summary & q

    2. Hammer tackles the third class of strategies in his dense final chapter. He draws on the writings of Bloch and Benjamin for a vision of aesthetic experience that can fit into Adorno's conception of how experience might resist the totalizing logic of modernity. Hammer draws on central aspects of Adorno's aesthetic theory, according to which in the experience of art authoritative contents present themselves in such a way that they normatively call to the subject, but yet resist being absorbed into the logic of efficiency and commodification. The invitations these contents present are not publicly articulable in the normal way. They take the form of hopes, rather than imperatives. By holding on to hope, as it bodies forth in art that resists commodification, the subject can encounter a glimmer of redemptive experience.

      aesthetic theory. hopes rather than imperatives

    3. In chapter 3 he examines Kant's and Habermas's notions of autonomy, as well as a neo-Aristotelian attempt (in Lorenzo Simpson) to return to a pre-modern form of temporality. He then leads his reader through a chapter-by-chapter examination of Hegel, Schopenhauer, Nietzsche (both early and late, in two chapters), Heidegger, and finally in a single very dense chapter, Post-Modernism (Lyotard), Marxism (Jameson), and Critical theory (Bloch, Benjamin, and Adorno). He argues that only Adorno's appropriation of Bloch and Benjamin promises any hope of a successful response to modern temporality.

      fascinating!

    1. Blogging highlights the process, not the output – one of my early blogging chums was Tony Hirst here at the OU. He has commented that blogging reveals an ongoing process of research, but that much of our formal systems (promotion, REF, research funding) are focused on outputs. That’s not to say outputs aren’t important, but the longitudinal picture that a blog gives you allows for a better representation of developing ideas.

      Totally agree. Hope to blog more in 2019! #TrustTheProcess

  7. Dec 2018
    1. The “Big Idea” proposes to advance the educational, research and public service mission of the University of Michigan by: Offering an undergraduate experience that has real-world problem solving and engaged scholarship at its core; situating undergraduate education at the heart of the scholarly enterprise; Enhancing collaboration across disciplinary boundaries; and Amplifying the relationship of a public university to its constituencies through projects that work in collaborative partnerships with a range of communities and sectors to advance progress on significant problems; To accomplish these goals, we envision a program that is unconstrained by some of the most common operating assumptions in current higher education: grades, credit hours, and disciplinary majors.

      Very exciting!! No surprise U-Mich is doing this.

    1. "Many of the personalized learning systems now available begin with an articulation of the knowledge space – i.e. what the learner needs to know. What the learner knows is somewhat peripheral and is only a focal point after the learner has started interacting with content. Additionally, the data that is built around learner profiles is owned by either the educational institution or the software company. This isn’t a good idea. Learners should own the representation of what they know."

      Things learning analytics / AI in education needs to keep in mind. So many products are based on this (old) model.

    1. We’ve studied student learning with data and have identified six different modes through which students can work with data. Each mode has the potential to bring simplicity or sophistication to the study of data

      A great list of modes. I wonder what more fundamental modes lie behind them.

    2. As complex datasets begin to underpin every aspect of modern life, data scientists are everywhere, applying their advanced programming and statistics knowledge, disciplinary understanding, and data wrangling skills.

      This is real!

  8. Nov 2018
    1. analytics, advising, and learning assessment, encompassing course-level learning analytics, as well as planning and advising systems that focus on overall student success

      Also analytics for different stakeholders, such as instructors, students, and admins, to make informed decisions.

  9. Oct 2018
    1. We, the Architects. I've made this point elsewhere, but what is both exciting and daunting is that the shift to a component-based approach provides an unprecedented opportunity to shape, rethink, plan, and design our digital learning environments.7 An architect is a proactive agent who looks to plan structures and environments to accommodate future usage. By taking the component approach, we can all adopt an architect's perspective and work to design the learning environments we want and need.

      <3 this!

      Still remember using the word architect in the first draft of the 'unLMS' paper, only to have it rejected by one reviewer.

      The component-based approach is probably urging us to take an architect's perspective. My intuition is that working as architects requires awareness of many cross-cutting ideas -- components and 'the whole', design and engineering, history and human values, and so forth.

    1. This is a CS/Data Science paper about an interesting web phenomenon. My key takeaway is the introduced method of detecting collaboration based on user actions in a well structured environment. Another takeaway is the use of Complexity Science and emergence in studying massive-scale online interactions.

      A muddy point is how collaboration is defined here: in r/place, conflict and collaboration co-exist and cannot be separated from each other. A cool direction would be to study conflict as well in such a contested environment. (Not sure how useful that would be; maybe just producing knowledge that seems useless now but becomes useful later or in another scenario.)

    2. We want to make our model temporally-aware, as further insights can be gathered by analyzing the temporal dynamics of the user interactions.

      sounds exciting

    3. We introduce a generic method to infer collaboration patterns in environments where only user interactions are observable. We show, through experiments, that the local proximity of users’ actions represents a sufficiently expressive signal for the study of collaboration. Indeed, we report it to be more predictive than the modeling of the interactions between users and their environment. This finding reinforces previous results in the domain, that suggest the study of emergent phenomenons requiring the modeling of interrelationships between the parts of a system, rather than modeling their individual behaviors. Being able to capture rich social signals, such as collaboration patterns, represents a unique opportunity to study complex social phenomenons.

      Modeling collaboration patterns in a particular environment is the focus here; in this case the environment is well defined, and things would look totally different in another environment.

      What's missing in this paper (not sure whether the authors are aware of it) is the coordination carried out in reddit sub communities. It is mentioned in this paper that user ids are hashed. Curious whether there is a way to map artworks to sub communities; guess it's not hard for some artworks (like Ubuntu).

    4. We therefore conclude that, from the considered models, the parametrization of user interrelationships is the most predictive method of user actions in a sandbox environment.

      user-user relationships more predictive

    5. the locality of user actions being a critical aspect in the design of a method to predict collaborations.

      locality of user actions - tied to the structure of the environment -- pixels being clearly defined.

    6. Reproducibility: We ran our experiment on a single computer, running a 3.2 GHz Intel Core i7 CPU, using PyTorch version 0.2.0.4. We run the optimization on GPU NVIDIA GTX 670. We trained our model with the following parameters: 0.04, 0.01, K = 120. All code will be made available at publication time.

      reproducibility

    7. We therefore represent every user in the system by a latent representation: a real-valued vector p_{u_i} of size K where K is the chosen dimensionality of the latent space. We define a notion of distance between any pair of users in the considered population, where the distance metric represents the strength of collaboration between users. If two users are actively collaborating, the response produced by the combination of their respective vectors (typically by using dot product) should be high.

      Collaboration is modeled by vector similarity, which represents the proximity of users' actions.
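
      To make the quoted setup concrete, here is a tiny numpy sketch of the idea (my own illustration, not the authors' code): each user gets a K-dimensional latent vector, and the collaboration strength between two users is the dot product of their vectors. The vector values below are random placeholders; in the paper they are learned from the locality of user actions.

      ```python
      import numpy as np

      K = 120                              # latent dimensionality (value quoted above)
      n_users = 1000                       # hypothetical population size
      rng = np.random.default_rng(0)
      P = rng.normal(size=(n_users, K))    # one latent vector p_{u_i} per user (placeholders)

      def collaboration_score(i, j):
          # Dot-product "response" of two users' vectors; higher = stronger collaboration.
          return float(P[i] @ P[j])

      print(collaboration_score(3, 17))
      ```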

    8. We opt for an embedding method, since we hypothesize less independent behaviors than individuals in the system. Embedding methods are especially adapted to produce personalized predictions (e.g. collaborative filtering applications), by making the assumption that the behavior from an individual can be predicted by collecting data from many users

      choice of embedding methods

    9. In this section, we introduce a predictive method that models collaboration between users in order to predict future user actions. In this regard, we train a model to evaluate the likelihood of a user u_i to perform a particular action at a given moment in time.

      model
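
      One plausible reading of this, sketched below (my guess at the shape of such a scoring function, not the paper's actual architecture): the likelihood that user u_i acts at a given moment is scored from the dot products between u_i's vector and the vectors of users who recently acted nearby, squashed through a sigmoid.

      ```python
      import numpy as np

      def action_likelihood(P, i, nearby_users):
          """P: (n_users, K) embedding matrix; nearby_users: users with recent local actions."""
          if not nearby_users:
              return 0.5                                   # no social context: uninformative prior
          score = np.mean([P[i] @ P[j] for j in nearby_users])
          return float(1.0 / (1.0 + np.exp(-score)))       # logistic squashing to a likelihood
      ```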

    10. We first observe, in figure 2 (left), the activity distribution of the users. This distribution highlight the presence of few power-users and a vast majority of users performing a moderate number of clicks. In figure 2 (middle), we observe the same type of distribution for the number of updates performed on every pixel. As few pixels have been highly disputed, the large majority of them have only been updated a few times.

      actors & place

    11. In April 2017, the discussion platform Reddit launched Place, an online canvas of 1000-by-1000 pixels, designed as a social experiment. Reddit users were allowed to change the color of one pixel at every fixed time interval (the interval varied from 5 to 20 minutes during the events). The event lasted 72 hours and received a massive engagement from more than 1.2M unique users. Users collaborated to create various artworks by either directly interacting with the canvas or by coordinating their actions from the discussion platform.

      r/place context

    12. The most relevant line of research is probably the task of detecting overlapping communities, whose members can be part of multiple groups. Those lines of research have made use of Matrix Factorization methods in order to relax the assumption of communities being disjoint (Zhang and Yeung 2012) (Yang and Leskovec 2013).

      community detection - esp. community overlap.
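
      As a rough illustration of the factorization idea (plain sklearn NMF as a stand-in; the cited works use their own objectives), each user gets a nonnegative membership vector over k communities, and thresholding it lets a user belong to several communities at once:

      ```python
      import numpy as np
      from sklearn.decomposition import NMF

      def soft_memberships(adjacency, k=4, threshold=0.1):
          # adjacency: symmetric, nonnegative user-user interaction matrix
          W = NMF(n_components=k, init="nndsvda", max_iter=500).fit_transform(adjacency)
          return W > threshold                 # rows may have several True entries (overlap)

      A = np.random.default_rng(1).random((50, 50))
      A = (A + A.T) / 2                        # toy symmetric matrix for illustration
      print(soft_memberships(A).sum(axis=1))   # number of communities each user belongs to
      ```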

    13. High-level social behaviors, such as the bystander effect, have been observed inside a simple video-game based virtual environment (Kozlov and Johansen 2010). Social interactions in Massively Multiplayer Online games have been studied by Cole et al. (Cole and Griffiths 2007).

      games - another interesting mass collaboration context

    14. The term emergence has various definition across fields (Kub 2003), alike complexity (Gershenson and Fernández 2012) from which emergence has been suggested to arise from. Emergence generally refers to system-wide behaviors that cannot be explained by the sum of individual behaviors.

      cool - useful references to emergence - a concept 'collaborative learning at scale' cannot miss.

    15. The exploration-exploitation trade-off in a collaborative problem solving task has been discussed by Mason and Watts (Mason and Watts 2012). Kittur and Kraut (Kittur and Kraut 2008) studied various types of collaboration taking place between Wikipedia editors and measured the impact on quality of the resulting articles.

      useful references

    16. In order to establish a predictive model of user behavior, we consider the sandbox as a complex social system, i.e., a system inherently difficult to model due to the large amount of interdependencies between its parts. Previous research in the field of Complexity Science (Bar-Yam 2002) hypothesized that the nature of such systems is favorable to the emergence of global behaviors, arising from the local interactions of the actors. Following this evidence, we propose a model that assesses the likelihood of a user interaction by observing its social context. In other terms, we propose a predictive model that captures inter-user relationships instead of modeling independent behaviors.

      conceptualizing the canvas as a 'complex social system' makes sense. Need to look into Complexity Science.

    17. Users were not grouped in teams nor were given any specific goals, yet they organized themselves into a cohesive social fabric and collaborated to the creation of a multitude of artworks.

      note: reddit sub communities did play a role.

    18. Rather than modeling the users as independent actors in the system, we capture their coordinated actions with embedding methods which can, in turn, identify shared objectives and predict future user actions.

      This sounds cool -- focused on identifying coordinated actions rather than actors (drawing closer to the definition of collaboration), using embedding methods.

    19. Latent Structure in Collaboration: the Case of Reddit r/place

      The first research paper I've seen on /r/place.

    1. Peer interaction may be able to improve the isolation of online learning, as well as improving the learning. • Potential to automatically group people based on what misperceptions they currently have.

      One benefit of connecting MOOC learners: reduce isolation. One method of harnessing the scale: auto group learners based on their attributes.

    2. Reputation Systems in MOOC Forums

      One tool to harness the scale

    3. All hypotheses confirmed: • Engaging in discussion leads to more correct answers. • The bonus incentive leads to more correct changed answers. • The participants have substantive discussion.

      Interesting finding based on MTurk experiments. Discussion and incentive matter.

    4. MOOC Collaboration Today: • Forums • Really Q&A Tools • Low participation • Participants do well: correlation or causation? • Informally Organized Groups • Google Hangouts, Facebook groups, in-person meetings • Formal Project Groups • NovoEd • Peer Assessment (anonymous, asynchronous) • Kulkarni, Klemmer et al. TOCHI 2

      These activities are arguably cooperative. Also, they are mostly defined by the instructor.

    1. The study assessed the behavior of 4,500 children, ages 8 to 11, by looking at their sleep schedules, how much time they spent on screens and their amount of exercise, and analyzed how those factors affected the children’s mental abilities.

      The study examined associations rather than cause and effect.

  10. Sep 2018
    1. 1. Technical understanding: Designers, or whatever they’re called now or in the future, will need to know far more about what the software does, its functionality, strengths and weaknesses. In some large projects we have found that a knowledge of how the NLP works has been an invaluable skill, along with an ability to troubleshoot by diagnosing what the software can, or cannot do.

      Knowing NLP is always helpful. Wondering how much of a shift has happened since the rise of AI.

  11. Aug 2018
    1. Computer-supported collaborative learning (CSCL) environments provide learners with multiple representational tools for storing, sharing, and constructing knowledge. However, little is known about how learners organize knowledge through multiple representations about complex socioscientific issues.

      gg

  12. Jul 2018
    1. Institutionalised demands for academic hyper-performativity can also be part of formal academic workload models.

      Very true. The pressure to produce quality 'outputs' quickly is real in many places. Time with family and for self-care is often the first thing sacrificed.

    1. The broadest (but unsatisfactory) definition of 'collaborative learning' is that it is a situation in which two or more people learn or attempt to learn something together.
    2. our group did not agree on any definition of collaborative learning. We did not even try. There is such a wide variety of uses of this term inside each academic field

      Diverse perspectives of 'collaborative learning'

    1. My goal is to show that you needn’t be a hotshot web developer to create a custom annotation-powered app. If the pattern embodied by this demo app is of interest to you, and if you’re at all familiar with basic web programming, you should be able to cook up your own implementations of the pattern pretty easily.

      could be fun

    1. The distinction between openness in practice and openness in content is significant in cost as well. Creating content requires time, effort, and resources and opens up numerous discussions around intellectual property rights. However, openness in practice requires little additional investment, since it essentially concerns transparency of already planned course activities on the part of the educator.

      I appreciate the distinction between openness in content and openness in practice, but I may disagree with the assessment of their associated costs. I bet the authors' thinking on this has also evolved since the MOOC movement.

      In open science, both kinds of openness incur burden and cost.

    2. Openness as Transparent Practice: The word open is in constant negotiation. When learners step through our open door, they are invited to enter our place of work, to join the research, to join the discussion, and to contribute in the growth of knowledge within a certain field. The openness of the academy refers to openness as a sense of practice. Openness of this sort is best seen as transparency of activity.

      "Openness as a sense of practice"

    1. During the Ideation phase, researchers and their collaborators develop and revise their research plans. During this phase they may collect preliminary data from publicly available data repositories and conduct a pilot study to test their new methods on the existing data. When applying for research funding, they develop the required data management plans, stating where data, workflow, and software code will be archived for use by other researchers. In addition, in some cases, they may decide to preregister their research plans and protocols in an open repository, as has, for example, become common practice in clinical research.

      Annotation remains in 'the dark' in the description of Provocation and Ideation here.

    2. A related principle is that integrating open practices at all points in the research process eases the task for the researcher who is committed to open science. Making research results openly available is not an afterthought when the project is over, but, rather, it is an effective way of doing the research itself. That is, in this way of doing science, making research results open is a by-product of the research process, and not a task that needs to be done when the researcher has already turned to the next project. Researchers can take advantage of robust infrastructure and tools to conduct their experiments, and they can use open data techniques to analyze, interpret, validate, and disseminate their findings. Indeed, many researchers have come to believe that open science practices help them succeed.

      Principle 2 of Open Science by Design. I would fully abide by it. It applies to the argument for open scholarly annotation. It may sound crazy, but the point is to make scholarly work easier by creating a linked system for researchers themselves. The infrastructure is not there yet, not to mention the culture. But the same was true of open data.

    3. The overarching principle of open science by design is that research conducted openly and transparently leads to better science

      Principle 1 of Open Science by Design

    4. What is needed to address complex problems is the ability to find and integrate results not only within communities, but also across communities—without paywalls or subscription barriers. Utilizing advanced machine learning tools in analyzing datasets or literature, for example, will facilitate new insights and discoveries.

      Where machine learning kicks in. It could lead to machine-generated annotations of scholarly articles that aid human annotation -- something a CMU group is already doing.

    5. Greater transparency is a major focus of those working to increase reproducibility and replicability in science (e.g., Munafò et al., 2017).

      Yes, transparency would be an overarching term over reliability and reproducibility.

    6. Ensuring the reliability of knowledge and reported results constitutes the heart of science and the scientific method.

      The key term here -- reliability -- is also ripe for rethinking. So far (and in this report) it's mostly about how we get from data to results. But given known problems with 'the grant cycle', reliability should be construed more broadly. This is another argument for covering scholarly annotations, which are also data but generated by researchers themselves (sort of like metascience).

    7. The specific ways in which cultural barriers to open science operate vary significantly by field or discipline. Overuse and misuse of bibliographic metrics such as the Journal Impact Factor in the evaluation of research and researchers is one important “bug” in the operation of the research enterprise that has a detrimental effect across disciplines. The perception and/or reality that researchers need to publish in certain venues in order to secure funding and career advancement may lock researchers into traditional, closed mechanisms for reporting results and sharing research products. These pressures are particularly strong for early career researchers.

      Applause: "Building a supportive culture" is the first item suggested by the committee to accelerate progress in open science by design.

    8. •Provocation: explore or mine open research resources and use open tools to network with colleagues. Researchers have immediate access to the most recent publications and have the freedom to search archives of papers, including preprints, research software code, and other open publications, as well as databases of research results, all without charge or other barriers. Researchers use the latest database and text mining tools to explore these resources, to identify new concepts embedded in the research, and to identify where novel contributions can be made. Robust collaborative tools are available to network with colleagues. •Ideation: develop and revise research plans and prepare to share research results and tools under FAIR principles. Researchers and their collaborators develop and revise their research plans, collect preliminary data from publicly available data repositories, and conduct a pilot study to test their new methods on the existing data. When applying for research funding, they develop the required data management plans, stating where data, workflow, and software code will be available for use by other researchers under FAIR (Findable-Accessible-Interoperable-Reusable) principles. In addition, in some cases, they may decide to pre-register their research plans and protocols in an open repository.

      These two components -- provocation and ideation -- are probably the most relevant to the public scholarly annotation I am interested in. But they barely touch upon it, because of this document's emphasis on data sharing. Again, this reflects a neglect of the value represented in annotations.

    9. In order to frame the issues and possible actions, the committee developed the concept of open science by design, defined as a set of principles and practices that fosters openness throughout the entire research life cycle (Figure S-1).

      This is a useful framework, accompanied by a useful visual that does not convey a linear lifecycle.

    10. To evaluate more fully the benefits and challenges of broadening access to the results of scientific research, described as “open science,” the National Academies of Sciences, Engineering, and Medicine appointed an expert committee in March 2017. Brief biographies of the individual committee members are provided in Appendix A. The committee was charged with focusing on how to move toward open science as the default for scientific research results, and to indicate both the benefits of moving toward open science and the barriers to doing so. This report presents the findings and recommendations of the committee, with the majority of the focus on solutions that move the research enterprise toward open science.

      Background of this report compiled by the National Academies.

    1. Before reading this report, I happened to read a much newer article titled Administrative social science data: The challenge of reproducible research published in Big Data & Society — a journal I reviewed for. There are some clear advancements the social science communities (and scholarly communities in general) have made since 1985. For instance, we have various research tools and platforms available these days to facilitate data management, sharing, and publishing. Git — a version control system highly recommended by this article — was nonexistent when the National Academies Report came out; neither were platforms and initiatives such as Open Science Framework, Harvard Dataverse Network, and Figshare. However, when juxtaposing challenges discussed in both pieces, what struck me — again — was how slow it has been to shift academic cultures to promote data sharing. Indeed, developing tools is easier, whereas changing cultures at many levels — e.g., in research labs, departments and colleges, institutions, associations, funding agencies — is much, much more difficult.

      A blog post I wrote while attending the Data Sharing workshop organized by AERA and NSF. Another reminder that it's hard work to change culture.

    1. Recommendation 16. Institutions and organizations through which scientists are rewarded should recognize the contributions of appropriate data-sharing practices.

      Oh man - kinda depressing to see these recommendations put forward in 1985 -- before I was even born. It must have been so hard to bring about cultural changes in the academy.

    2. But there are potential costs for an investigator who provides data to others: costs of time, money, and inconvenience; fears of possible criticism, whether justified or not; possible violations of trust by a breach of confidentiality; and forgoing recognition or profit from possible further discoveries

      These potential costs of data sharing also apply to the sharing of annotations -- another type of data generated in scholarly processes.

    1. National Research Council. 1985. Sharing Research Data. Washington, DC: The National Academies Press. https://doi.org/10.17226/2033.

      This report was published by the National Research Council in 1985.

    1. 4. Use Cases. In order to evaluate and demonstrate the feasibility of the OAC Data Model, an initial set of use cases has been developed that are representative of a range of common scholarly practices involving annotation. This preliminary set is available from the OAC Wiki as OAC User Narratives/Use Cases and includes: Citation of Non Printed Media; Commentary on Remote Resources; Shared Annotations Across Interfaces; Harvesting, Aggregating, Ranking and Presenting Annotations from Multiple Sites; Annotating Relationships Between Multiple Mixed-Media Resources; Annotations which Capture Netchaining Practices; Annotations with Compound Targets

      The use cases are quite brief, but useful.

    2. In the OAC model, an Annotation is an Event initiated at a date/time by an author (human or software agent). Other entities involved in the event are the Content of the Annotation (aka Source) and the Target of the Annotation. The model assumes that the core entities (Annotation, Content and Target) are independent Web resources that are URI-addressable. This approach simplifies and decouples implementation from the repository. An essential aspect of an annotation is the (implicit or explicit) expression of “annotates” relationship between the Content and the Target.

      The OAC data model of annotation. Graph-based, interestingly.
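
      A toy rendering of that triple, just to make the structure visible (the URIs and key names below are made up for illustration; the real OAC serialization, like the later W3C Web Annotation model, is RDF/JSON-LD):

      ```python
      # Illustrative only: the Annotation, its Content, and its Target are separate
      # URI-addressable resources tied together by an "annotates" relation.
      annotation = {
          "id": "http://example.org/annotations/42",                     # the Annotation resource
          "created": "2011-06-01T12:00:00Z",
          "creator": "http://example.org/agents/editor-1",               # human or software agent
          "content": "http://example.org/notes/42",                      # Content (aka Source)
          "target": "http://example.org/texts/lyrical-ballads#line-12",  # what is being annotated
      }

      # Because every part has a URI, other systems (or other annotations) can point
      # at this annotation itself -- which is what makes the model graph-based.
      print(annotation["content"], "annotates", annotation["target"])
      ```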

    3. The OAC approach is based on the assumption that clients publish annotations on the Web and that the target, content and the annotation itself are all URI-addressable Web resources. By basing the OAC model on Semantic Web and Linked Data practices, we hope to provide the optimum approach for the publishing, sharing and interoperability of annotations and annotation applications. In this paper, we describe the principles and components of the OAC data model, together with a number of scholarly use cases that demonstrate and evaluate the capabilities of the model in different scenarios.

      This paper introduces the Open Annotation Collaboration (OAC), which preceded the W3C Open Annotation working group that led to the development of the web annotation standards.

    1. My main point, then, is simple: extensive annotation can work in print, primarily because the organizational principles of the medium are firmly established and implicitly understood by most readers. Extensive annotation in the electronic medium, however, is more problematic. On the one hand, it is extremely tempting to create superannotated editions, bringing a given text together with all its sources, all its commentary, all its reviews, all its illustrations, even all its parodies and film adaptations. But until the conventions of the electronic edition are securely established, too many potential users will find these editions too difficult to navigate

      A fair point about the difficulty with over-annotated electronic documents. This is from a publishing point of view, however: a publisher does not want cluttered texts. But has technology evolved far enough to mitigate this difficulty? How would scholars (moving beyond the poetry reading scenario) respond to the clutter problem? Time to revisit.

    2. But, even more, we realize the necessity of convincing skeptical, technophobic colleagues of the usefulness of the electronic medium. These are people, on the whole, for whom "nonlinear" modes of thought have little appeal; they sneer at all the hype about hypertext and return to their studies or their library carrels to hold in their hands the objects they revere. Such scholars are not simply going to retire or disappear, and we need them, if a market for electronic editions is to develop. They can, with only slight difficulty, navigate a complex scholarly book like my Cornell volume, because they understand the organizational principles of such objects, principles that have gradually developed over a half millennium of print-based scholarly editing. But turn them loose in an electronic environment, and they tend to get lost: the conventions of organizing electronic books have yet to be established

      This interesting commentary touches upon how an established culture shapes how we interact with text.

    3. This edition will of course have many hypertextual features: the ability to move directly from the text to an image of the printed page or from the text to a critical apparatus, the ability to set different versions of poems side by side for purposes of comparison, and even (we are told) simultaneous scrolling of open text windows. But it would be a mistake, I believe, to regard it simply as a "hypertext," at least in the sense in which promoters and theorists of hypertext have intended the term. We are not interested in "nonlinear" modes of thought; rather, we are intent on providing scholars with evidence that will allow them to draw very "linear" conclusions about this collection of poems. We are not interested in creating a vast, complex web of documents, at the center of which is a Lyrical Ballads poem, but which is so rich in annotation that the poem is buried beneath the weight of its associated texts
    4. The enthusiasm has not subsided -- much -- but the giddiness has, as we have confronted the practical realities of delivering an actual product. The limitations of software, the awkwardness of SGML markup (not to mention learning how to do it), the difficulties and costs of digital reproductions of manuscripts, and the simple fact that Lyrical Ballads is a very well edited text forced us over and over again to rethink our project and change its scope

      Obstacles introduced by tech as well.

    5. various layers of annotation, enforces a special discipline on those who attempt to read the volume: readers are constantly led away from the text and back to it again; they are forced to keep track of different kinds of annotation, sometimes in different parts of the volume. In short, they cannot lightly skim. Annotation, in this respect, is a rhetorical means of impressing readers with the significance of the poetry, and it is also a means of rehabilitating the poetry, by forcing readers to scrutinize Wordsworth's efforts as a translator in unprecedented ways.

      This is quite remarkable. Multiple layers of annotation "forced" on top of poems make the reader engage with poetry in a fresh way. In this case, the value of generating annotations is apparent, at least to the editor.

    6. My challenge, then, was to find reasons why these poems are interesting and to make those reasons apparent. My means for doing so was annotation. A typical page of my edition of Wordsworth's Aeneid has four bands of text: one containing the reading text of the translation and three in smaller type underneath. The top band in smaller type gives Coleridge's unpublished notes to the translation, a wonderful find that, to my knowledge, only Robert Woof and Stephen Parrish had examined before I did. The middle band provides the critical apparatus of verbal variants, such as one would find in any variorum edition, and the bottom band contains extensive annotations about Wordsworth's methods of translation -- comparisons between the translation and the Latin, suggestions about ways in which his translation may have been influenced by prose paraphrases and scholarly commentaries, passages in his original poems that allude to the Aeneid, and, of course, obligatory attempts to explain Coleridge's comments. In addition, after the reading text, a lengthy set of editorial notes records Wordsworth's borrowings from four earlier translations of the Aeneid: the translations of John Ogilby (the 1650 edition that Wordsworth owned, which is now in the Wordsworth Library, Grasmere), John Dryden, Joseph Trapp, and Christopher Pitt.

      Four bands of text to help people see the (translated) poems as interesting. A very different goal than annotating genes.

    7. its editors have maintained that annotations not concerned with textual matters add an undesirable layer of clutter to volumes that are already very large and very full.

      Undesired clutter introduced by annotations

    8. It is by doing this public annotation that I realize (again) how privileged I am as an academic affiliated with a big university. There are certain articles that I can access through my university libraries. But when I annotate them with Hypothesis, my annotations become 'orphans' because the articles are not accessible to the general public. This raises questions about what this space is and whom it exists for.

    1. 3.1.3. Workflow Components. One of Taverna’s key values for example is the availability of services to the core system; current figures estimate this to be around 3500, mainly concentrated in the bioinformatics problem domain. Taverna has also began to share workflows through the myExperiment project (21) in order to make such workflows available to the community as a whole. Taverna has a GUI-based desktop application that uses semantic annotations associated with services. It employs the use of semantic-enabled helper functions which will be made available in the next public release of the software. Developers can incorporate new services through simple means and can load a pre-existing workflow as a service definition within the service palette, which can then be used as a service instance within the current workflow (i.e. to support grouping). Services within the pre-existing workflow can also be instantiated individually within the current workflow and a developer can create user-defined perspectives that allow a panel of pre-existing components to be specified

      This is an important paper (based on citation numbers).

      It provides a systematic intro to workflows in e-science. Similar to another paper I just annotated, it comes from an engineering perspective. Annotation here plays a lesser role (conceptually) than the annotation I am making right now. Specifically, annotations discussed in such e-science workflows serve more of a mechanical role (e.g., for preservation), rather than a more epistemic role.

    1. Scientific workflows are used by scientists not only as computational units that encode scientific methods that can be shared among scientists, but also to specify their experiments. In this paper we presented a research object model to capture all the needed information and data including the methods (workflows) and other elements: namely annotations, datasets, provenance of the workflow results, etc.

      This is interesting and valuable work, focusing on the design and engineering aspects of a (computation-centric) workflow in the sciences. There is only a light discussion of the value generated by maintaining such a workflow and workflow-centric research objects. I would also appreciate more explanation of annotation activities in the workflow.

    2. Fig. 4: A sample research object lifecycle.

      Em.. a gendered analysis could be done on this figure.

    3. A research object normally starts its life as an empty Live Research Object, with a first design of the experiments to be performed (which determines what workflows and resources will be added, by either retrieving them from an existing platform or creating them from scratch). Then the research object is filled incrementally by aggregating such workflows that are being created, reused or re-purposed, datasets, documents, etc. Any of these components can be changed at any point in time, removed, etc

      Lifecycle of a workflow-centric research object.

    4. Figure 2 provides a more detailed view of the resources that compose workflow templates and workflow runs. A workflow template is a graph in which the nodes are processes and the edges represent data links that connect the output of a given process to the input of another process, specifying that the artifacts produced by the former are used to feed the latter. A process is used to describe a class of actions that when enacted give rise to process runs. The process specifies the software component (e.g., web service) responsible for undertaking the action. Note that some workflow systems may specify in addition to the data flow, the control flow, which specifies temporal dependencies and conditional flows between processes. We chose to confine the workflow research object model to data-driven workflows, as in Taverna [16], Triana [2], the process run Network Director supplied by Kepler [4], Galaxy, Wings [7], etc.

      This is getting clearer: A workflow template is a graph whose nodes are processes and edges are data links/moves.

      The example from bioinformatics shows that understanding/constructing such a model requires much domain knowledge (e.g., gene stuff). So annotations made in such pathways -- like annotating a gene in a publication -- have domain-specific value not shared by other disciplines.

      This domain specificity is linked to an annotation I made on 'dark data' about the credit system. In bioinformatics, annotating a gene has already been recognized as an important scientific act with value to the field, while in educational research the value of annotation is still to be discovered, debated, and agreed upon.

    5. Figure 1 illustrates a coarse-grained view of a workflow-centric research object, which aggregates a number of resources

      A sort of UML diagram illustrating relations among different objects in a workflow

    6. our model is built on earlier work on myExperiment packs [15], which aggregate elements such as workflows, documents and datasets together, following Web 2.0 and Linked Data principles [18, 17]. The myExperiment ontology [14], which forms the basis for our research object model, has been designed such that it can be easily aligned with existing ontologies. For instance, their elements can be assigned annotations comparable to those defined by Open Annotation Collaboration (OAC).

      [Important information:] about the myExperiment ontology framework.

    7. To overcome these issues, additional information may be needed. This includes annotations to describe the operations performed by the workflow; annotations to provide details like authors, versions, citations, etc.; links to other resources, such as the provenance of the results obtained by executing the workflow, datasets used as input, etc. Such additional annotations enable a comprehensive view of the experiment, and encourage inspection of the different elements of that experiment, providing the scientist with a picture of the strengths and weaknesses of the digital experiment in relation to decay, adaptability, stability, etc.

      Annotation--of various types of objects--plays an important role in scientific workflows, to support reproducibility for instance.

    8. These richly annotated objects are what we call workflow-centric research objects. The notion of Research Object has been introduced in previous work [20, 19, 1] -- here we focus on Research Objects that encapsulate scientific workflows (hence workflow-centric).
    9. Scientific workflows are used to describe series of structured activities and computations that arise in scientific problem-solving, providing scientists from virtually any discipline with a means to specify and enact their experiments [3]. From a computational perspective, such experiments (workflows) can be defined as directed acyclic graphs where the nodes correspond to analysis operations, which can be supplied locally or by third party web services, and where the edges specify the flow of data between those operations.

      A definition of scientific workflow, and an operationalization from a computational perspective. It reminds me of work on orchestration graphs in CSCL. Wondering how much standardization there is and whether standardization of workflows is meaningful at all.
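
      The computational definition above is easy to sketch: a workflow as a directed acyclic graph whose nodes are analysis operations and whose edges feed one operation's output into another's input. A minimal sketch follows; the three steps are hypothetical placeholders, not taken from any real Taverna/Galaxy/Wings workflow.

      ```python
      from graphlib import TopologicalSorter   # Python 3.9+

      def load(_):        return [1.0, 2.0, 3.0]
      def normalize(xs):  return [x / max(xs) for x in xs]
      def summarize(xs):  return sum(xs) / len(xs)

      operations = {"load": load, "normalize": normalize, "summarize": summarize}
      # Each node lists the upstream nodes whose outputs it consumes (the data links).
      dependencies = {"load": set(), "normalize": {"load"}, "summarize": {"normalize"}}

      results = {}
      for node in TopologicalSorter(dependencies).static_order():   # respects the data flow
          upstream = [results[d] for d in dependencies[node]]
          results[node] = operations[node](upstream[0] if upstream else None)

      print(results["summarize"])
      ```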

    1. Brooks Hanson, Director of Publications for the American Geophysical Union, summed up the day with a list of goals for a scholarly annotation layer: It must be built on an open but standard framework that enables global discovery and community enrichment. It must support granular annotation of elements in all key formats, and across different representations of the same content (e.g. PDF vs HTML). There must be a diversity of interoperable annotation systems. These systems must be fully accessible to humans, who may need assistance to use them, and machines that will use APIs to create and mine annotations. It must be possible to identify people, groups, and resources in global ways, so that sharing, discovery, and interconnection can span repositories and annotation services. These are lofty goals.

      Quite insightful ideas about a "scholarly annotation layer." Can we claim the existence of such a layer yet? Right now it seems such a layer still operates at the individual level. Where public layers do exist, they don't talk to other public annotation layers. The goals are quite lofty indeed. That's why it's hard and fascinating.
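
      The "machines will use APIs to create and mine annotations" goal already has one concrete instance: Hypothesis exposes a public search API. A minimal sketch (endpoint and parameter names recalled from its docs, so treat them as assumptions to verify):

      ```python
      import requests

      def public_annotations(uri, limit=20):
          """Yield (user, text) for public annotations anchored to a document URI."""
          resp = requests.get(
              "https://api.hypothes.is/api/search",
              params={"uri": uri, "limit": limit},
              timeout=10,
          )
          resp.raise_for_status()
          for row in resp.json().get("rows", []):
              yield row.get("user"), row.get("text", "")

      for user, text in public_annotations("https://example.com/some-paper"):
          print(user, text[:80])
      ```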

    2. The goals of the workshop were to review existing uses of annotation, discuss anticipated uses, consider opportunities and challenges from the perspective of both publishers and implementers, converge on a definition of interoperability, and identify next steps. The survey of existing uses began with UCSD’s Anita Bandrowski who presented an overview of SciBot, a tool that’s being used today to validate Research Resource Identifiers in scientific papers. Sebastian Karcher, who works with the Qualitative Data Repository at Syracuse, discussed an annotation-enhanced workflow for sharing, reusing, and citing qualitative data. GigaScience’s Nicole Nigoy presented the results of the Giga-Curation Challenge at Biocuration 2016. Saman Ehsan, from the Center for Open Science, highlighted the role annotation can play when researchers work together to reproduce studies in psychology. Mendeley’s William Gunn described annotation of research data as not merely a supplement to scholarly work, but potentially a primary activity. John Inglis, executive director of Cold Spring Harbor Laboratory Press, envisioned an annotation layer for bioRxiv. And Europe PMC’s Jo McEntyre showed an experimental system that mines text for entities (e.g. biomolecules) and automatically creates annotation layers that explain and interconnect them.

      Diverse usages of annotations by key stakeholders.

    3. As an annotator, I want to be able to assign DOIs to individual contributions, or to sets of them. As an author, I want annotation to integrate with my preferred writing tool so that annotations can flow back into the text.

      As an author, the 2nd user story is natural to me. But it's refreshing to see the 1st user story -- an annotator claiming DOIs for individual annotations. I was like: Why? Why not?

    1. Scientists currently get credit for the citation of their published papers. Similar credit for data use will require a change in the sociology of science where data citation is given scholarly value. The publishing industry including, for example, Nature and Science is already beginning to provide a solution by allowing data to be connected with publications. However, space limits, format control, and indexing of data remain a major problem. Institutional and disciplinary repositories need to provide facilities so that citations can return the same data set that was used in the citation without adding or deleting records. Standards bodies for the sciences can set up methods to cite data in databases and not just data in publications (Altman & King, 2007).

      Reward and valuation systems are needed to give shared data more credit.

    2. Data becomes dark because no one is paying attention. There is little professional reward structure for scientists to preserve and disseminate raw data. Scientists are rewarded for creating high-density versions of their data in statistics, tables, and graphs in scholarly journals and at conferences. These publications in some ways are the sole end product of scientific inquiry. These products, while valuable, may not be as useful as some authors hope.

      The reward system in place does not reward the preservation of dark data.

    3. The data itself is often too voluminous or varied for humans to understand by looking at the data in its raw unprocessed form, so scientists use graphs, charts, mathematical equations, and statistics to “explain,” “describe,” or “summarize” the data. These representational tools help us to understand the world around us. The use of data simplification and data reduction methods in science is repeated at all scales of natural phenomena from the subatomic to the physics of our human scale world, to the function of a cell, a mating behavior of birds, or [End Page 286] the functioning of ecosystems. But these summary representations of data rely on the underlying data, and the published papers do not capture the richness of the original data and are in fact an interpretation of the data. If the dark data in the tail is not selectively encoded and preserved, then the underpinning of the majority of science research is lost.

      Here the article is actually getting into the scholarly workflow: data representations generated for publications are more visible and accessible than the raw data used to generate them.

    4. We can organize science projects along an axis from large to small. The very large projects supporting dozens or more scientists would be on the left side of the axis and generate large amounts of data, with smaller projects sorted by decreasing size trailing off to the right. The major area under the right side of the curve is the long tail of science data. This data is more difficult to find and less frequently reused or preserved. In this paper we will use the term dark data to refer to any data that is not easily found by potential users. Dark data may be positive or negative research findings or from either “large” or “small” science. Like dark matter, this dark data on the basis of volume may be more important than that which can be easily seen. The challenge for science policy is to develop institutions and practices such as institutional repositories, which make this data useful for society.

      Dark data--an interesting take on the "long tail" of scientific research, which includes those studies conducted by a single or a few scientists without funding.

      If data here is defined more generally--not only as data generated from empirical studies but as the actual scholarly process--the idea of dark data takes on new meanings. It is not only about the size of a project, but about the different parts of a project that get more or less recognition. For example, an opaque practice will only reveal the final publication, whereas a more transparent practice would share data, algorithms, etc. But rarely do scientists share how their ideas developed from a mere hunch to a grant proposal and then to a substantial study. Here the idea of dark data could include data about processes of scholarly production that do not get talked about, like how I am now annotating this article to develop an idea that's still fuzzy to me but may (if I'm lucky) grow into something I cannot yet imagine. To me, this is the darker data in scholarly production, beyond empirical data generated by smaller projects.

    1. In this chapter, the authors reflect on the reasons for such hybrids, specifically through an exploration of eLaborate. As a virtual research environment, eLaborate targets both professional scholars and volunteers working with textual resources. The environment offers tools to transcribe textual sources, to annotate these transcriptions, and to publish them as digital scholarly editions. The majority of content currently comprises texts from the cultural heritage of Dutch history and literary history, although eLaborate does not put limits on the kind of text or language. Nor does the system impose limits on the openness of contribution to any edition project. Levels of openness and access are solely determined by the groups of users working on specific texts or editions. This Web 2.0 technology-based software is now used by several groups of teachers and students, and by scholarly, educated, and interested volunteers.

      This chapter describes a tool named eLaborate, "in which scholars can upload scans, transcribe and annotate text, and publish the results as an online text edition which is freely available to all users." On p. 123, there is an interesting critique of how the scholarly workflow has remained static for almost 2000 years despite tech advancements.

    1. The term computational science, and its associated term computational thinking, came into wide use during the 1980s. In 1982, theoretical physicist Kenneth Wilson received a Nobel Prize in physics for developing computational models that produced startling new discoveries about phase changes in materials.
    2. Nearly everybody had something to gain. Experimenters looked to computers for data analysis—sifting through large data sets for statistical patterns. Theoreticians looked to them for calculating the equations of mathematical models.
  13. Jun 2018
    1. The desire for understanding is driven by something more human. It is peoples’ nature to seek connections - connections to others, to the earth, and to ideas. This sense of connectedness is not only at the level of individual cognition, it comes from a desire to know with one's heart and mind, emotions and cognitions, imagination and reason. People pursue understanding to feel connected in ways that tell them they are human. As Feynman suggests, people strive to understand for aesthetic reasons.

      The annotation I am making here is fundamentally a product of an aesthetic experience, rooted in my pursuit of connectivity and beauty. :)

    1. Around 2005, a new range of web tools began to find their way into general use, and increasingly into educational use. These can be loosely described as social media, as they reflect a different culture of web use from the former ‘centre-to-periphery’ push of institutional web sites.

      test

    1. When I see a paper that I find interesting, I make sure to send the author an e-mail or message them on Twitter. I say: “I just read your paper — it helped me with some concepts. I look forward to seeing your future work.” It lets people know that they have worth.

      We need to do more of this!

    1. And the fourth concerns the idea of the adjacent possible. It just may be the case that biospheres on average keep expanding into the adjacent possible. By doing so they increase the diversity of what can happen next. It may be that biospheres, as a secular trend, maximize the rate of exploration of the adjacent possible.

      For biospheres (as autonomous agents): expanding into the adjacent possible, at a maximized but secure rate, will give them an advantage in evolution.

      For an idea (in Popperian World 3): knowing its 'genes' and the boundary it operates within leads to the exploration of the adjacent possible. This is before it can start 'evolving' in the complex game of idea development.

  14. May 2018
    1. I’m part of the policy working group for an international professional network called the Marie Curie Alumni Association. I’d like the working group to have a new mission: aligning the incentives and rewards of science with the type of work and productivity that we really want to see. We need to better reward non-traditional outcomes, such as data sets, research methods and code. And we need to better appreciate activities outside of the lab, such as public engagement, education and outreach. That’s the way towards achieving substantial and lasting change.

      Crucial yet challenging given the power structures in place. More academics need to recognize the value of shifting the current valuation system.

    2. Everyone is publishing and publishing because that’s where the money in science comes from. But if everyone is publishing and nobody is reading, are we making a contribution?
  15. Apr 2018
  16. Feb 2018
    1. There seems to be a center of gravity in the core of the learning sciences and we are lucky that this core is surrounded by multidisciplinary hubs and programs that are important motors of innovation within the learning sciences. We should refrain from excluding more peripheral research from learning sciences conferences and journals because it is not using certain concepts or methods. Reducing the learning sciences to its core (e.g., to DBR) will likely cause stagnation. Instead, we should embrace new methodological developments coming from other disciplines as currently can be seen in the context of big data, learning analytics, and educational data mining methods (Koedinger, D’Mello, McLaughlin, Pardos, & Rosé, 2015; Wise & Shaffer, 2015). Learning sciences is a discipline and, if done well, a center of gravity for a powerful orbit of interdisciplinary collaborations on learning sciences themes.

      I wonder whether seeing learning sciences -- which emerged as an interdisciplinary field -- as 'a center' is actually productive. Its promise lies in its roots in different disciplines and interest areas. Is learning sciences better served by continuing to function as connective tissue linking different areas?

    2. But what about the majority of graduate learning sciences programs that are not part of this nucleus? Is it reasonable to say they are not part of the learning sciences community? Probably not, as merely 25% of the graduate programs that self-categorize as learning sciences programs explicitly address DBR. The question is rather how to represent those programs well who do not belong to the relatively small core. In what follows we explore the CoP metaphor and suggest a possible view on the specific relationship of the core and more peripheral programs in the learning sciences.

      Important questions posed here!

    3. Comparing the results of the network analyses for the concepts and methods, it is striking that the average degree (relating to the number of connections within the network), as well as the average strength (relating to the magnitude of the connections), of the method network are considerably lower than the corresponding values of the concept network.

      Possibly because methods/methodologies are mentioned less often on program websites? Maybe...
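
      For readers (myself included) who want to check the distinction between the two measures, here is a minimal sketch on a toy weighted network using networkx; the nodes and weights are made up, not the paper's data:

      import networkx as nx

      # Toy weighted concept network (made-up data, not the paper's).
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("DBR", "design", 3.0),
          ("DBR", "cognition", 1.0),
          ("design", "technology", 2.0),
      ])

      n = G.number_of_nodes()
      # Average degree: mean number of connections per node.
      avg_degree = sum(d for _, d in G.degree()) / n
      # Average strength: mean sum of edge weights attached to a node.
      avg_strength = sum(s for _, s in G.degree(weight="weight")) / n

      print(avg_degree, avg_strength)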

    4. FIGURE 6 Network plot of all methods including three emerging clusters representing methods aimed at individuals (red), their interaction (blue) and methods at the intersection of both other clusters (green).

      I would almost like to keep all types of nodes -- programs, concepts, methods -- within the same multidimensional network. A question we can ask is how we want to see a learning sciences program evolve. What are the core features a program needs to have, and what is the 'cool' stuff that can define the uniqueness of a learning sciences program?

      (It would also be fun to grab, for instance, publications from members of those programs over the years to enrich the current document analysis.)

    5. The emerging networks show close relations, especially for the various concepts, represented by thick gray lines.

      This generated network looks very interesting. I wonder how it can become a living document that engages community members -- at the periphery or the core -- in reflecting on how they position themselves in the community. I know newcomers who are attracted to "learning sciences" because of neuroscience. However, they may become discouraged when seeing the fuller landscape. What can we do to keep them in the community while advocating a plural view of learning sciences?

    6. In particular, peripheral participation can take place without a trajectory towards the core because, for example the peripheral members are deeply rooted in one or more other communities and just share certain interests with the community under consideration (“peripheral experts”).

      I would like this topic further addressed by considering the international representation of the society. How often do scholars from the "Global South" take on a trajectory moving towards the "core"? Do they stick around? What is needed to enrich the community's international representation?

    7. Here, the question of whether learning sciences is a discipline on its own with a clear common core and a learning sciences “brand” or whether it instead represents a tent (Nathan et al., 2016) for various research from different disciplines related to learning, is repeatedly brought up.

      Great questions.

    8. learning sciences

      I wonder whether scholars who were involved in the launch of this field can speak to the choice of 'sciences' (plural) in the name. We see people using 'learning sciences' and 'the science of learning' all the time, which may convey different views of learning. So I am curious how the plural form was picked in the first place.

    9. Results reveal that the concepts addressed most frequently were real-world learning in formal and informal contexts, designing learning environments, cognition and metacognition, and using technology to support learning. Among research methods, design-based research, discourse and dialog analyses, and basic statistics stand out.

      Not surprising. To revisit the convo (around 2004) about LS vs. ISD: how might the results look for Instructional Systems Design?

    1. corporate

      <iframe allowfullscreen="allowfullscreen" frameborder="0" height="381" src="https://h5p.org/h5p/embed/2926" width="1090"></iframe><script charset="UTF-8" src="https://h5p.org/sites/all/modules/h5p/library/js/h5p-resizer.js"></script>

    1. Find the Hotspot

      <iframe allowfullscreen="allowfullscreen" frameborder="0" height="381" src="https://h5p.org/h5p/embed/2926" width="1090"></iframe><script charset="UTF-8" src="https://h5p.org/sites/all/modules/h5p/library/js/h5p-resizer.js"></script>

  17. Jan 2018
    1. This engagement may encompass experts from areas such as human-computer interaction; data streaming, assimilation, visualization, and analytics; machine learning and deep learning; multi-modal analytics; social network analyses; and adaptive rapid experimental design.

      check expertise coverage in the proposal

    2. rich and highly adaptable environments for learners that may: (a) serve as a forum for active research and development studies by researchers; (b) serve as a testbed for analytics that support the environment's adaptability; and (c) in the spirit of design-based research, serve as a collaborative space for teachers, mentors, and learners to work with researchers as co-developers of the learning environment.

      voices from all stakeholders

  18. Oct 2017
    1. Most extraterrestrial creatures are likely deep inside their home planets, in subsurface oceans crusted over in frozen water ice

      Why?

    1. units of analysis in the social sciences can usually be divided into various subunits: Communities may be divided into families or households, geographic neighborhoods, or individual members.
    2. Units of analysis may be different from the units of observation.
    3. generalize

      only if this is the goal

    1. Reading HistoryWe draw several conclusions from this brief history, noting that it is, like all histories, somewhat arbitrary. First, each of the earlier historical moments is still operating in the present, either as legacy or as a set of practices that researchers continue to follow or argue against. The multiple and fractured histories of qualitative research now make it possible for any given researcher to attach a project to a canonical text from any of these historical moments. Multiple criteria of evaluation compete for attention in this field. Second, an embarrassment of choices now characterizes the field of qualitative research. There have never been so many paradigms, strategies of inquiry, or methods of analysis to draw on and use. Third, we are in a moment of discovery and rediscovery as new ways of looking, interpreting, arguing, and writing are debated and discussed. Fourth, the qualitative research act can no longer be viewed from within a neutral or objective positivist perspective. Class, race, gender, and ethnicity shape the process of inquiry, making research a multicultural process. Fifth, we are clearly not implying a progress narrative with our history. We are not saying that the cutting edge is located in the present. Rather, we are saying that the present is a politically charged space. Complex pressures inside and outside of the qualitative community are working to erase the positive developments of the past 30 years or so.

      a footnote about history. important.

    2. A triple crisis of representation, legitimation, and praxis confronts qualitative researchers in the human disciplines. Embedded in the discourses of poststructuralism and postmodernism, these three crises are coded in multiple terms, variously called and associated with the critical, interpretive, linguistic, feminist, and rhetorical turns in social theory. These new turns make problematic two key assumptions of qualitative research. The first assumption presumes that qualitative researchers can no longer directly capture lived experience. Such experience, it is argued, is created in the social text written by the researcher. This is the representational crisis. It confronts the inescapable problem of representation but does so within a framework that makes problematic the direct link between experience and text.The second assumption makes problematic the traditional criteria for evaluating and interpreting qualitative research. This is the legitimation crisis. It involves a serious rethinking of terms such as validity, generalizability, and reliability—terms already retheorized in postpositivist constructionist–naturalistic, feminist, interpretive and performative, poststructural, and critical discourses. This crisis asks the question: How are qualitative studies to be evaluated in the contemporary poststructural moment? The first two crises shape the third crisis, which asks the question: Is it possible to effect change in the world if society is only and always a text? Clearly, these crises intersect and blur, as do the answers to the questions they generate.

      a triple crisis

    3. The preceding arguments have been developed viewing writing as a method of inquiry that moves through successive stages of self-reflection.
    4. New models of truth, method, and representation were sought. The erosion of classic norms in anthropology (e.g., objectivism, complicity with colonialism, social life structured by fixed rituals and customs, ethnographies as monuments to a culture) was complete. Critical epistemology, feminist epistemology, and epistemologies of color now competed for attention in this arena. Issues such as validity, reliability, and objectivity, believed to be settled in earlier phases, were once again problematic. Pattern and interpretive theories, as opposed to causal linear theories, were now more common as writers continued to challenge older models of truth and meaning.

      such a dense para outlining the competing elements

    5. These works made research and writing more reflexive and called into question the issues of gender, class, and race. They articulated the consequences of Geertz's “blurred genres” interpretation of the field in the early 1980s.
    6. Geertz argued that the old functional, positivist, behavioral, and totalizing approaches to the human disciplines were giving way to a more pluralistic, interpretive, and open-ended perspective. This [Page 315]new perspective took cultural representations and their meanings as its point of departure. Calling for “thick descriptions” of particular events, rituals, and customs, Geertz suggested that all anthropological writings were interpretations of interpretations.
    7. Computers entered the situation, to be fully developed as aids in the analysis of qualitative data in the next decade, along with narrative, content, and semiotic methods of reading interviews and cultural texts.

      tech-influence

    8. By the beginning of the third stage (1970–1986), or blurred genres, qualitative researchers had a full complement of paradigms, methods, and strategies to employ in their research. Theories included symbolic interactionism, constructivism, naturalistic inquiry, positivism and postpositivism, phenomenology, ethnomethodology, critical theory, neo-Marxism, semiotics, structuralism, feminism, and various racial/ethnic paradigms.
    9. In this way, work in the modernist period clothed itself in the language and rhetoric of positivist and postpositivist discourse.
    10. Modernist ethnographers and sociological participant observers attempted rigorous qualitative studies of important social processes, including deviance and social control in the classroom and society. This was a moment of creative ferment.A new generation of graduate students across the human disciplines encountered new interpretive theories (e.g., ethnomethodology, phenomenology, critical [Page 314]theory, feminism). They were drawn to qualitative research practices that would let them give a voice to society's underclass.
    11. the Chicago School, with its emphasis on the life story and the “slice-of-life” approach to ethnographic materials, sought to develop an interpretive methodology that maintained the centrality of the narrated life history approach. This led to the production of the texts that gave the “researcher as author” the power to represent the subject's story.
    12. Old standards no longer hold. Ethnographies do not produce timeless truths. The commitment to objectivism is now in doubt. The complicity with imperialism is openly challenged today, and the belief in monumentalism is a thing of the past.
    13. The works of the classic ethnographers are seen by many as relics from the colonial past.

      the colonial past

    14. Now at the dawn of this new century, we struggle to connect qualitative research to the hopes, needs, goals, and promises of a free democratic society.
    15. The postmodern and postexperimental moments were defined in part by a concern for literary expression and the narrative turn—a concern for storytelling, for composing ethnographies in new ways.
    16. Arthur Vidich and Stanford Lyman's history covers the following (somewhat) overlapping stages: early ethnography (to the 17th century); colonial ethnography (17th-, 18th-, and 19th-century explorers); ethnography of the “other,” the American Indian (late 19th- and early 20th-century anthropology); community studies; ethnographies of American immigrants (early 20th century through the 1960s); and studies of ethnicity and assimilation (mid-20th century through the 1980s).

      when Research is bloody

    17. Qualitative research is a situated activity that locates the observer in the world. Qualitative research consists of a set of interpretive material practices that make the world visible. These practices transform the world. They turn the world into a series of representations, including fieldnotes, interviews, [Page 312]conversations, photographs, recordings, and memos to the self. At this level, qualitative research involves an interpretive naturalistic approach to the world. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them.
    18. In the postmodern/experimental moment, researchers continued to move away from foundational and quasifoundational criteria. Alternative evaluative criteria were sought—those that might prove to be evocative, moral, critical, and rooted in local understandings.

      linked to criteria for judging rigor and meaning

    19. Successive waves of epistemological theorizing move across these eight moments. The traditional period is associated with the positivist foundational paradigm. The modernist or golden age and blurred genres moments are connected to the appearance of postpositivist arguments. At the same time, a variety of new interpretive qualitative perspectives were taken up, including hermeneutics, structuralism, semiotics, phenomenology, cultural studies, and feminism. In the blurred genres phase, the humanities became central resources for critical interpretive theory and the qualitative research project broadly conceived. The researcher became a bricoleur, learning how to borrow from many different disciplines.

      "researcher became a bricoleur" -- qualitative perspectives

    1. NASA researcher checking hydroponic onions with Bibb lettuce to his left and radishes to the right

    2. growing plants without soil

      [I need to understand:] How this is even possible.

    1. For details from a physicist’s perspective see Pierre-Andre’s talk

      bla

  19. Jul 2017
    1. 2. Staying with a closed, proprietary system & not moving to the adoption of open standards.

      I concur. Diigo could have been a leader in the social annotation space, way ahead of Hypothesis. But now I think H is gaining more momentum than Diigo, because it adheres to open standards.

  20. blog.diigo.com
    1. A major change will be our movement away from ‘social’ aspects in Diigo. While we may have been swayed by the ‘social media movement’, at the end of the day Diigo has always been more of a personal platform.

      Em.. this is an interesting statement. So Diigo is moving away from being social?

  21. Apr 2017
    1. Now I can read updates in that private channel, I can receive desktop notifications, and I can also push notifications to Slack on my mobile devices.

      How can I tweak what info gets displayed in the Slack Channel? For example, I wish to display the author. Is there a way to control this?
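
      One way I might get that control (a sketch on my part, not a verified setup): skip RSS and post to Slack myself through an incoming webhook, so I decide exactly which fields appear, including the author. The webhook URL and the tag below are placeholders, and polling/deduplication is left out:

      import requests

      # Pull public annotations from the Hypothesis search API and push them
      # to Slack with the author shown. Webhook URL and tag are placeholders.
      WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

      resp = requests.get(
          "https://api.hypothes.is/api/search",
          params={"tag": "snaEd", "limit": 10},
      )
      resp.raise_for_status()
      for row in resp.json().get("rows", []):
          message = f"{row['user']} annotated {row['uri']}: {row.get('text', '')}"
          requests.post(WEBHOOK_URL, json={"text": message})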

  22. Mar 2017
    1. Part II

      this means Part II of this book instead of this chapter. Just fyi #SNAEd

    2. the difference between the mathematical and statistical approaches to social network analysis

      What are the differences?

  23. Feb 2017
    1. Quality of Relational Data

      SNAEd folks only need to scan this section :)

    2. In the node-list format, the first node in each row is ego, and the remaining nodes in that row are the nodes to which ego is connected (alters).

      Please don't do this!
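
      If you are handed data in node-list format anyway, here is a minimal sketch of converting it to the more standard edge list; the file names and the comma delimiter are assumptions about your data:

      import csv

      # Each input row: ego followed by its alters. "nodelist.csv" is hypothetical.
      edges = []
      with open("nodelist.csv", newline="") as f:
          for row in csv.reader(f):
              cells = [cell.strip() for cell in row if cell.strip()]
              if len(cells) < 2:
                  continue  # skip blank rows or isolates with no alters
              ego, *alters = cells
              edges.extend((ego, alter) for alter in alters)

      # Write one edge per row: source,target.
      with open("edgelist.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["source", "target"])
          writer.writerows(edges)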

  24. Jan 2017
    1. Working in public is exciting and enriching, and I have seen my students thrilled by the connections they have made and engaged by the ability to produce work for a larger academic commons.  That being said, working in public, and asking students to work in public, is fraught with dangers and challenges.  Students need to understand privacy and safety issues (and so do we; in case you haven’t had FERPA waved in your face recently let me do that for you now). They may not know about trolling or how to respond to it (seriously, we can’t even say there is a universally agreed-upon best practice for handling trolling). They may (will) face vicious harassment, racism, sexism, homophobia, and all of the other things that we do a reasonably good job at regulating in our classrooms (maybe?), depending on the kind of work they do or the kind of digital profiles they put forward, purposefully or otherwise.  They will put crappy work online sometimes (sometimes they will know it’s crappy and sometimes they won’t); is that ok? Will it come back to haunt them when they look for a job (we need to take this concern seriously, given the debt they incur to study with us)? What professional risks do I assume when my pedagogy is so fully exposed? And who in the academy can afford to take those risks…and who cannot?

      my biggest concern when testing the waters of #OpenPed. Very helpful insights!

    1. “Our MIT online course data already suggests students perform better when they have help and the social connection to support their learning,” Siemens said. “This connection contributes to their willingness to persevere through the course and could come in the form of interaction on the social network platform, experience in leveraging online social capital and personal motivation.”

      Interesting - connections among students contribute to their willingness to persist in an online course. Social capital and personal motivation are also mentioned. Would love to read the final reports.

    1. understanding the antecedents and consequences of network phenomena

      I am curious about a few things packed in this sentence: network phenomena, and their antecedents and consequences.

    1. After all, characteristics such as one's academic history or educational aspirations influence who one knows and spends time with.

      annotation about this piece of text.

      • item 1
      • item 2
    1. you can highlight the error and include two tags – snaEd and issues – in your annotation.

      an example of issue reporting. Don't forget to add two tags below.

    1. one of three axes: URL, tag, or user

      Is there a way to query by 'group' (which is private)? I am teaching a class and having students annotate in a private group would be ideal. But I haven't found a way to use RSS to retrieve a group's annotations.
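
      One route that should work (an assumption on my part, not something I have fully verified): the REST API, unlike RSS, can return a private group's annotations when you authenticate with your developer token. A minimal sketch; the token and group ID are placeholders:

      import requests

      # Fetch recent annotations from a private Hypothesis group.
      # Token and group ID below are placeholders.
      API_TOKEN = "YOUR_DEVELOPER_TOKEN"
      GROUP_ID = "YOUR_GROUP_ID"

      resp = requests.get(
          "https://api.hypothes.is/api/search",
          headers={"Authorization": f"Bearer {API_TOKEN}"},
          params={"group": GROUP_ID, "limit": 50},
      )
      resp.raise_for_status()
      for row in resp.json().get("rows", []):
          print(row["created"], row["user"], row.get("uri"))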

    1. Air pollution is the introduction of particulates, biological molecules, and many harmful substances into Earth's atmosphere, causing diseases, allergies, death to humans, damage to other living organisms such as animals and food crops, or the natural or built environment. Air pollution may come from anthropogenic or natural sources.

      test

  25. Nov 2016
  26. Sep 2016
    1. Inductive reasoning should be used to develop statements (hypotheses) to be tested during the research process.

      This explanation is quite confusing... it would be better to say that inductive reasoning develops theories, which are then tested through deductive reasoning.

    1. Writing Science: How to Write Papers That Get Cited and Proposals That Get Funded, Joshua Schimel

      Nice book

    1. another strategy: put a word or two on the board.

    2. Gather student feedback in the first three weeks of the semester to improve teaching and learning.
      • What helps you learn?
      • What could we do to help you learn?
    3. Be redundant. Students should hear, read, or see key material at least three times.
    4. Distribute a list of the unsolved problems, dilemmas, or great questions in your discipline and invite students to claim one as their own to investigate.

      cool idea

  27. Aug 2016