388 Matching Annotations
  1. Jul 2017
    1. It wouldn’t be the first time that people used an online platform in weird ways its creators never intended—Twitter bots pushing fake news and Facebook groups sharing revenge porn.

      Err…

    2. Peer-reviewed journals could reject a manuscript if it has previously appeared as preprint.

      If even Nature accepts articles that appeared as preprints before, journals that don't should be avoided.

    3. Scientists may not acknowledge preprints as establishing priority of discovery.

      And then what? I can't imagine serious researchers having a conversation like:

      "We discovered X!"

      – "Well, actually, we discovered X and posted this preprint on it."

      "But that's a preprint, so we're still the first to discover X."

    1. I am a fan of the DiRT Directory,

      Me too! I only wish it were faster, HTTPS-only and more easily defended against spam entries.

    1. Altman M, Crosas M. The evolution of data citation: From principles to implementation. IASSIST Q. 2013;37. Available:

      The link does not resolve.

    2. Other groups have discussed change management consideration and “content drift” in more depth [2,30,31].

      It would be good to have a few sentences in this article about how this lesson builds on that research/discussion.

    3. Embedding versioning in identifiers is recommended if the prevailing use of an unversioned identifier results in “breaking changes” (e.g., a change in the hypothesized cause of a disease). However, if new information about the entity emerges slowly and the changes are “nonbreaking”, it is reasonable to instead maintain a machine-actionable change history in the entity’s metadata.

      I see there is an indirect reference to Memento at the end of the lesson, but it would help me to see some discussion on how these lessons do or do not work well with Memento.

    4. Uniform Resource Identifier (URI)[8],

      I thought the u stood for uniform too, but apparently it's universal.

    5. unique resource identifier (URI)

      The URI spec defines URI as Universal Resource Identifier, not unique.

    1. Robert Sanderson, J. Paul Getty Trust, rsanderson@getty.edu,

      The ORCiD link points to Ivan Herman's profile, instead of Rob Sanderson's.

    1. http://recogito.pelagios.org

      The creators should really really really use HTTPS.

    2. This is because OpenLayers is designed specifically as a software library to be embedded in applications, rather than as a full-featured viewer like Diva.js or Mirador, which ship as more “pre-packaged” viewing environments

      Also, OpenSeadragon is the basis for Mirador, and since OSD is not suitable, Mirador is not suitable either.

    3. an easy way to enrich a spreadsheet of place names with gazetteer URIs

      This is a central use case for OpenRefine.

    4. At the time of writing, this support is restricted to the registration of IIIF endpoints for individual images

      No documentation on how to do this appears to be available? It is said to be available in the tutorial, but I can't find how to do it.

    5. W3C Web Annotation data model

      Currently the Open Annotation model is offered.

    6. W3C Web Annotation Data Model

      Note that there are a few differences between Open Annotation and Web Annotation.

    1. The Rise of the Meritocracy
    2. Utilitarianism as a philosophical system states that the most moral action is the one that maximizes utility
    3. Meritocracy is the idea that individuals can and should be measured on the basis of their intellectual contributions, divorced from identity, social status, gender, race, religion, or other distinguishing characteristics.
  2. Jun 2017
    1. Develop the ‘Scholary Communication’ Steering Committee into a ‘Scholarly Communication and Research Infrastructure’ Steering Committee, addressing Open Access policy and implementation, RIs in the wider sense, including publications, data and cultural heritage infrastructures;

      This can be applied outside LIBER as well, like in (research) libraries. Open Access librarians are (now, in 2017) evolving into digital scholarship librarians, but research infrastructures are not considered a priority.

    2. The layout of this article has issues. Headings are not distinguishable from body text and it looks like there is some text duplication.

    1. As cloud computing becomes increasingly cost-efficient, and new models of deployment, such as container-based solutions, are introduced, there is a need for models in which university IT departments can partner with projects to provide expertise and facilities (for example, private cloud or container infrastructure, or extending university infrastructure to the public cloud).

      I should discuss this with our university IT.

    2. In the second phase we built in funding for a devops consultant, who helped us move to a fully configuration-managed system, so that the Perseids platform can be deployed easily by others and sustained for the long term.

      Good thinking!

    3. The key challenge for the community is to encourage and support ad-hoc collaborations to get initial solutions working, and then move from there to more formal agreements to ensure sustainability.

      Funding is often available for creating new things, not for sustaining infrastructure. I have read this a couple of times.

    4. As a publicly available and open infrastructure, we also have many users from many institutions across the world, and it is not clear what responsibility Tufts, the university hosting the infrastructure, should have for data created by external users.

      That is one drawback of centralised infrastructure and an interesting use case for Linked Research.

    5. However, our data models and approach to publications are constantly evolving, making coordination with the university library to preserve this data challenging, as they don’t necessarily fit the data models the library is already able to support.

      I would like to know more. Apparently the library does not support preserving 'blobs'?

    6. We therefore chose to budget for cloud-based resources on the Amazon Web Services (AWS) platform rather than using university IT resources.

      Interesting. I thought EU-funded projects would have to be hosted in the EU. Was this project not EU-funded?

    7. Eventually we’d like to be able to support pushing data to any external API endpoint.

      I assume such external API endpoints will have to conform to certain conditions, like

      • they accept an agreed-upon type of payload
      • they keep data authentic and secure

      Could you explain what kind of endpoints you are thinking of?

    8. This workflow uses the Hypothes.is API to pull the annotations into Perseids for review and publication (Figure 16).

      Does the user manually add annotation URIs to a Google Spreadsheet? There must be a smarter way to accomplish the task of collecting annotations using the Hypothes.is API, perhaps saving specific annotations for review to a private group that only the annotator and reviewers have access to.
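      A smarter collection step could indeed query the Hypothes.is API for a (private) group's annotations. A minimal sketch in Python; the group ID, token and sample payload below are placeholders, and no network call is actually made:

```python
from urllib.parse import urlencode
from urllib.request import Request

API = "https://api.hypothes.is/api/search"

def build_group_search(group_id, token, limit=50):
    """Build an authenticated search request for one group's annotations."""
    query = urlencode({"group": group_id, "limit": limit})
    return Request(f"{API}?{query}",
                   headers={"Authorization": f"Bearer {token}"})

def summarise(search_response):
    """Reduce a search response to (user, uri, text) tuples for review."""
    return [(row["user"], row["uri"], row.get("text", ""))
            for row in search_response["rows"]]

# Example with a canned response instead of a live call:
sample = {"total": 1, "rows": [
    {"user": "acct:annotator@hypothes.is",
     "uri": "https://example.org/article",
     "text": "Needs review"}]}
print(summarise(sample))
```

      Reviewers would then only need membership of the group, rather than a hand-maintained spreadsheet of annotation URIs.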

    9. Bodard, G and Romanello, M eds. (2016). Digital Classics Outside the Echo-Chamber. London: Ubiquity Press.

      Romanello M. & Bodard G. 2016. Digital Classics Outside the Echo-Chamber. London: Ubiquity Press. DOI: https://doi.org/10.5334/bat

    1. The Loon’s Boring Alter Ego’s workplace for some time had a list of “acceptable journals” posted to one of its physical bulletin boards. When the Loon went looking for that list today, it was no longer there, but the Loon did find out its origin: her campus’s purchased productivity analytics package. Moreover, it appears that the said package distinguishes between “research” journals and “service” (usually applied and/or professional) journals, the former of course (this is academia! research über alles!) weighing more in the analytics package’s algorithms than the latter.

      I would not have thought of this possible source for lists of acceptable journals.

    2. If you just answered “journal impact factor,” kindly make yourself a dunce cap, march yourself to the corner of your office, turn your face to the wall, and think about what you have done. For shame.

      :D

    1. But history shows that betting against science publishers is a risky move. After all, back in 1988, Maxwell predicted that in the future there would only be a handful of immensely powerful publishing companies left, and that they would ply their trade in an electronic age with no printing costs, leading to almost “pure profit”.

      Nothing will change indeed, if we expect change from publishers. I agree with the argument Jason Hoyt makes on Twitter that scientists (in all fields) need to understand and support change.

    2. all the things that publishers do to add value

      People involved in 'scholarly publishing' at the Scholarly Kitchen produced a list of things publishers do, partially in response to critique of journal publishers.

    3. ‘You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. Then you market the thing and your salespeople go out there to sell subscriptions, which is slow and tough, and you try to make the journal as good as possible. That’s what happened at Pergamon. And then we buy it and we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.’

      Painful to read.

    4. scientists who published in “high-impact” journals were rewarded with jobs and funding

      How long did it take for the policies for hiring and funding to look at high-impact publications?

    5. Suddenly, where you published became immensely important.

      I wonder how suddenly the publication venue became important. It can't have been a single moment: it must have taken Cell months or years to build its reputation, mustn't it?

    6. Scientific articles are about unique discoveries: one article cannot substitute for another. If a serious new journal appeared, scientists would simply request that their university library subscribe to that one as well.

      But to fill more journals with serious content, you need more serious research. Research presumably cannot be sustained in all fields – fields shift – so older journals would have to stop, wouldn't they?

    7. “Publishing is the expression of our work. A good idea, a conversation or correspondence, even from the most brilliant person in the world … doesn’t count for anything unless you have it published,”

      And "publishing" shouldn't (of course) mean just articles. We've had books for a very long time, and we've had the Web for over 25 years. Yet somehow only some publications "count".

    8. Pergamon would then begin selling subscriptions to university libraries, which suddenly had a lot of government money to spend.

      So since around 1950 research has accepted/ignored the double/triple pay construct?

    9. The scientific societies that had traditionally created journals were unwieldy institutions that tended to move slowly, hampered by internal debates between members about the boundaries of their field

      Were all scientific societies doing this?

    1. The agreement is now abandoned

      Not by everyone, is it?

    2. Predatory publishers are bringing down the scholarly publishing industry and taking science and peer review down with it.

      I don't think this follows from the arguments in this article. Confidence in peer review may have been affected by incidents involving "traditional" publishers, but no-one expects peer review from predatory publishers (except the few authors who thought a publisher was honest).

    3. dishonest researchers

      I find it hard to accept a premise of "dishonest researchers" who will never be caught.

    4. I think that, since the advent of predatory publishing, there have been tens of thousands of researchers who have earned Masters and Ph.D. degrees, been awarded other credentials and certifications, received tenure and promotion, and gotten employment – that they otherwise would not have been able to achieve – all because of the easy article acceptance that the pay-to-publish journals offer.

      I would like to know what this number is based on. Which institutions are not looking at quality of the work and only look at numbers?

    1. Having identified what form of machine readable metadata was best practice a simple web-based platform independent tool to generate that metadata, either through a Q&A form based approach or some form of wizard. Failing that at least a good example I could modify.

      I understand your point. Hopefully we can provide some support for your user story some day.

      One problem is that many systems don't understand each other's schemas, or (some of) the values in metadata fields, especially because they are human conventions meant for other humans to understand. Therefore systems working with the data – just storing it, or analysing it – have their own input forms.

    2. First an easily discoverable example of best practice for this kind of data collection. Something that came to the top of the search results when I search for “best practice for archiving depositing records of interviews”. An associated set of instructions and options would have been useful but not critical.

      Noted.

      Maybe "interviews" in general aren't given as much attention as "oral histories", even though they are not too different in form and file formats.

    3. I have some folder of data.

      Keeping your (master/original) data together is a good thing (and make sure you make backups).

    4. it wouldn’t hurt for RDM folks to do a better job of managing our own resources to the standards we seek to impose on researchers.

      This is a no-brainer :)

    5. should I organise by interview (audio files and notes together) or by file type (with interviews split up).

      Probably by interview, because that is most likely how you would refer to it and how you would like to find it later.

    1. Once an incident got so out of hand that an ambulance had to be called: in 1998 a 19-year-old political science student was beaten up by another UB visitor after asking him to stop watching porn. The UB then temporarily closed in the evenings until security had been sorted out.

      Unbelievable.

    2. ‘It is quite a challenge, though, to write a computer program that can recognise split beaver shots of minors.’

      Wow. In the second half of the 1990s!

  3. May 2017
    1. The whole DOI infrastructure has never really been about using URIs for identifying resources on the web – especially differentiating between landing pages and resources that are described by the landing page. The way content negotiation is used here is an extension of this Cool URIs-antipattern. Or maybe it's not that bad – I'm redirected to a different URI for the JATS content than for the BibTeX metadata. Still, using the same DOI URI for both content and metadata is not Semantic Web-like.
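      For illustration, the content negotiation described here boils down to sending different Accept headers to the same DOI URI. A minimal sketch (requests are only constructed, not sent; the DOI is a made-up placeholder, and which media types actually resolve depends on the DOI's registration agency):

```python
from urllib.request import Request

def doi_request(doi, media_type):
    """One DOI URI; the representation is selected via the Accept header."""
    return Request(f"https://doi.org/{doi}",
                   headers={"Accept": media_type})

# The same identifier can be asked for BibTeX metadata...
bibtex = doi_request("10.1234/example", "application/x-bibtex")
# ...or for citation metadata as JSON:
csl = doi_request("10.1234/example", "application/vnd.citationstyles.csl+json")
print(bibtex.full_url, bibtex.headers["Accept"])
```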

  4. Apr 2017
    1. they wrote de-warping and color-correction and contrast-adjustment routines to make the images easier to process; they developed algorithms to detect illustrations and diagrams in books, to extract page numbers, to turn footnotes into real citations, and, per Brin and Page’s early research, to rank books by relevance.

      I want this too. Is this all part of Tesseract?

    1. The non-goals included: discussions on how to reduce tool-induced bias (i.e. by improving the tool), to down-play the role of the tools (“the tool is only used in exploratory phase of research”) or discussions about the pros and cons of digital versus non-digital approaches (“we would just hire 20 interns to do this by hand”)

      Very useful information and enlightening examples!

    2. As the role of digital tools in these type of studies grows, it is important that scholars are aware of the limitations of these tools, especially when these limitations might bias the outcome of the answers to their specific research questions. While this potential bias is sometimes acknowledged as an issue, it is rarely discussed in detail, quantified or otherwise made explicit.

      This is the core question of tool criticism.

    1. And for a book about trends in the humanities, this seems to be a good idea.

      But I feel that overwriting information does not do justice to the work of the earlier authors. And when reading a book on the trends in the humanities you will always have to re-read a part that stays the same across versions – readers who read multiple versions will know (some) context and it seems ignorant to not acknowledge that there is context for trends.

    1. Biased approaches – pushing for a model, neglecting the needs of the community, not interoperable or focusing on only on the technical level – have proven counterproductive

      Could you provide examples, well, proof, for this statement?

    2. It requires a scalable and diverse approach

      Could you explain why it should be scalable and diverse?

    3. Research impact and performance are often evaluated through metrics, principally through citations.

      This practice is changing (very slowly).

    4. Currently, researchers are not consistently citing datasets in journal articles, but when they do there is a great degree of variance in their practice. Datasets are mentioned in different places within a journal article: as part of the main body of the journal article, text embedded in the journal article, in the notes section, as a footnote, or within a dedicated section of the journal article, but infrequently in the references section of a journal article.

      This has also been a conclusion of the OpenAIRE2020 project. 10.5281/zenodo.54570

    5. showcasing the flexibility of the system.

      How does (providing) a centralised search service showcase "the flexibility of the system"? What flexibility?

    6. a sound infrastructure that connects data to publications and to authors is an important piece in publishing data

      Could you give an argument for this claim?

    7. With a goal of supporting smaller data centres, DataCite maintains an OAI-PMH3 interface for all its content and also develops a centralized search portal described in the next section.

      So theoretically a repository service does not need to provide an OAI-PMH endpoint and search interface themselves?
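      If so, the division of labour would be: the data centre registers its metadata with DataCite, and harvesters pull it from DataCite's OAI-PMH endpoint. A sketch of what such a harvesting request looks like (the URL is only composed, not fetched):

```python
from urllib.parse import urlencode

OAI_BASE = "https://oai.datacite.org/oai"  # DataCite's OAI-PMH endpoint

def oai_url(verb, **kwargs):
    """Compose an OAI-PMH request URL; the protocol is plain HTTP GET."""
    return f"{OAI_BASE}?{urlencode(dict(verb=verb, **kwargs))}"

# A small data centre could let harvesters pull its records from here
# instead of running its own endpoint:
print(oai_url("ListRecords", metadataPrefix="oai_dc"))
```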

    8. To better understand the gaps in a service a good place to begin is an assessment of the existing infrastructure

      What kind of service is this about? A repository service, i.e. an instance of (data) repository software?

    9. persistent identifiers linked resolvable

      remove "linked"?

    10. demanding

      "demand"?

    11. data centres

      Organisations that store (research) data. Not to be confused with the IT term for a building full of computing or storage servers.

    12. Introduction

      Pretty much the same as the abstract.

    Annotators

    1. Processing flows (or ‘pipelines’) are defined in a configuration file and not code.

      Configuration files are, in a way, code.
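      A toy illustration of that point: once an interpreter gives the configuration meaning, the "pipeline definition" behaves exactly like code. All names here are made up:

```python
# A pipeline declared as data; the interpreter below makes it executable,
# which is why "configuration, not code" is a fuzzy boundary.
STEPS = {"strip": str.strip, "lower": str.lower, "tokenise": str.split}

pipeline_config = ["strip", "lower", "tokenise"]  # could come from YAML/JSON

def run(config, value):
    """Apply each configured step to the value in order."""
    for name in config:
        value = STEPS[name](value)
    return value

print(run(pipeline_config, "  Hello World  "))  # ['hello', 'world']
```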

    1. The article “Crouching Tiger – Hidden Payload: Security Risks of Scalable Vector Graphics” covers the hazards in detail.

      This article is from 2011. That doesn't make it outdated per se, but browsers have undergone many updates since then. Don't they provide better security for SVG users now?

  5. Mar 2017
  6. Feb 2017
    1. To achieve a cross-domain level of interoperability of data and services, however, syntactic conformance and semantic data per se are not sufficient. Enabling tools for researchers to structure their knowledge and map it across different domains calls for joint efforts in the domain modelling, technical implementation across research infrastructures, training and communication with researchers and strong research community participation.

      FAIR?

    2. historians benefit immensely

      Bold claim

    3. Provisioning of a richer semantic editor or annotatortool to support user friendly ontology developments by researchers, along with relations between entities,notes and any other archival resources, proved to be very challenging

      Was this because of code complexity or because it's hard to get to an agreement on the ontologies?

    1. The text contained in the <title> HTML tag and the PageRank value of each page are the only metadata that Google seems to use to any meaningful and consistent extent in providing its search service

      Are you sure about this?

      https://twitter.com/valexiev1/status/833559734544453633 seems to question this statement – and I would think Google uses the embedded metadata that they help 'standardise' at http://schema.org.

    1. I would be delighted if the entire codebase were eventually replaced, having served its purpose in promoting the initiative and surfacing use cases, but I needed to operate under the assumption that this code would live on, potentially for years.

      Developer honesty :)

    1. Surveillance in higher education and libraries, call it “assessment” or “learning analytics” or what you will. The opposition is starting to mobilize, but it is much smaller in numbers than the Loon would wish.

      Keeping an eye out for this…

    1. A Description Resource must be on the same domain as the Content Resource it describes

      But manifests can reference content from different domains, can't they? Does this section only apply to Image API information documents?

  7. Jan 2017
    1. Reminiscing About 15 Years of Interoperability Efforts.

      The work that most clearly explains the use of HTTP Link headers for connecting PIDs and documents is Signposting the Scholarly Web.

    2. application

      The examples on http://schema.org show that JSON-LD should be embedded in <script> elements.
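      For example, a minimal generator for such an embedded block (field values are placeholders):

```python
import json

metadata = {
    "@context": "http://schema.org",
    "@type": "ScholarlyArticle",
    "name": "Example article",  # placeholder values
}

# schema.org's examples embed JSON-LD in a <script> element rather than
# serving it as a standalone document:
snippet = ('<script type="application/ld+json">\n'
           + json.dumps(metadata, indent=2)
           + "\n</script>")
print(snippet)
```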

    1. Relating datasets to each other

      This is not discussed often enough in research about research, as far as I can tell.

    1. From a higher level perspective, the data soup comprises raw data, knowledge base data, connected data and CENDARI-produced data which require different data management approaches

      Different data management approaches are key. Good!

    2. We encountered a significant diversity among institutions in the level of digitisation and digital presence.

      Similar findings have been reported by EHRI.

    3. application where document was created

      Interesting. What kind of values can we expect? MS Word? MS Excel?

    4. support for very large images through an image server

      Is this a IIIF server?

    5. Brushing and linking is implemented with one flow direction for consistency from left to right.

      What is brushing in this context?

    6. Within the project, several extensions to CKAN weredeveloped to support the data harvesting and user authentication mechanisms.

      I'm surprised CKAN did not support harvesting? Or is it different from OAI-PMH-type harvesting?

    7. Supporting researchers in their daily work is a novel concern for infrastructures.

      Really? I'd say this is a primary concern.

    1. you have no legal claim to data that you store in a service, generate with a service, or obtain thanks to a service. You cannot demand that data, and you certainly cannot speak of “your” data.

      Wow.

  8. Dec 2016
    1. If Bush and Rove constructed a fantasy world with a clear internal logic, Trump has built something more like an endless bad dream.

      It does feel that way if you try to follow the news.

    1. This is what happens when you click on the MARC record view.

      See the image below :)

    2. Answer all curious people. Listen to who gets told “that’s not your job” versus who is allowed to know the full answer to questions.

      This is a more general remark, isn't it? Or did you intend it to be about vendor systems?

    3. We didn’t develop an ILL module in Alma because we didn’t think ILL would exist in 4 years. –Executive, Ex Libris

      Amazing!

    4. but

      "and"?

  9. Nov 2016
    1. Track back to original sources of news items and memes. We would like to see these technology platforms use their considerable computing power to help track back and find the source of news items, photos and video, and memes.

      I had not thought of this possibility before. I like it.

    2. Make the brands of those sources more visible to users. Media have long worried that the net commoditizes their news such that users learn about events “on Facebook” or “on Twitter” instead of “from the Washington Post.” We urge the platforms, all of them, to more prominently display media brands so users can know and judge the source — for good or bad — when they read and share. Obviously, this also helps the publishers as they struggle to be recognized online.

      I notice I sometimes just say "from Twitter" instead of "from a tweet posted by NRC Handelsblad", but it also has to do with the brevity. I don't always have the time to read stories linked from a suggestive tweet and hope that the source really writes what they suggest in their tweet.

    1. Excellent, robust data with no interface isn’t easily usable (although a creative person will always find a way), but an excellent interface with terrible data or no data at all is useless as anything other than a show piece.

      Yes!

    2. My issue with IIIF is that is presents the illusion of openness without actual openness. That is, if images are published under a closed license, if you have the IIIF manifest you can use them to do whatever you want, as long as you’re doing it through IIIF-compliant software. You can’t download them and use them outside of the system (to, say, generate PDF or epub facsimiles, or collation visualizations). I love IIIF for what it makes possible but I also think it’s vital to keep data open so people can use it outside of any given system.

      It should be possible to transform an object described by a manifest using IIIF Image API services into PDF or ePub files. The manifest is a collection of Linked Data, linking images and annotations and describing how the images fit together. If you want you can look inside the manifest and download (versions of) original images.

      I do agree that at some point in the preservation and use of images you have to have a (master) file and you need to keep track of access and use rights and permissions.
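      To make the "look inside the manifest" point concrete, here is a sketch that walks a IIIF Presentation 2.x manifest and derives full-size Image API URLs; the manifest fragment is hand-made and only shows the structure, not a real object:

```python
def image_urls(manifest):
    """Pull full-size image URLs out of a IIIF Presentation 2.x manifest."""
    urls = []
    for sequence in manifest.get("sequences", []):
        for canvas in sequence.get("canvases", []):
            for image in canvas.get("images", []):
                service = image["resource"]["service"]["@id"].rstrip("/")
                # Image API URL template:
                # {service}/{region}/{size}/{rotation}/{quality}.{format}
                urls.append(f"{service}/full/full/0/default.jpg")
    return urls

# Minimal hand-made manifest fragment (structure only):
manifest = {"sequences": [{"canvases": [{"images": [
    {"resource": {"service": {"@id": "https://example.org/iiif/page1"}}}]}]}]}
print(image_urls(manifest))
```

      From such a list of plain image URLs, building a PDF or ePub facsimile is an ordinary download-and-assemble job, outside any IIIF-compliant viewer.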

    3. In the spring I presented on OPenn for a mixed group of librarians and faculty at Vanderbilt University in Tennessee, after which an art historian said to me, “this open data thing is great and all, but why can’t we just have the manuscripts as PDFs?”

      :/

    4. than

      then

    1. Be wary of anyone telling you that research needs to be performed in a certain way, especially if the claim is that things have always been done that way. Much of the really interesting science does not (and can not) happen in the trodden ways.

      Great advice!

  10. Oct 2016
    1. Open source software tools include Autodesk 123D and Sketchup Make.

      These tools are not open source; they are closed source. Yes, they are free to use (as in: you don't have to pay money to use them).

    2. light years

      … is a distance unit.

    1. It can be downloaded on GitHub

      Please provide a link then! :)

    2. -Endpoint

      SPARQL-endpoint, I presume.

    3. From archiving in one system to final archiving.

      Is that the complete data lifecycle?

    4. the standard

      Is it necessary to have a single standard?

    5. We would like to stress some

      Why do you want to stress these points?

    6. From our perspective

      What perspective? What is your methodology here?

    7. TEI (Text Encoding Initiative)

      Note that TEI does not serve the exact same purpose as the other metadata vocabularies. TEI marks up texts and text elements – it goes inside the data, whereas the other vocabularies are used to describe items from the outside.

    8. ResearchGate,

      This article has a preprint on arXiv and a DOI: 10.1002/meet.14504901084.

    9. Github inspired system.

      GitHub did not invent git, if that is what you're referring to.

    10. Training and assistance must to be provided to researchers in adequate means to achieve a clear separation of text and data, submitted differently in open and ideally non proprietary format, in separate and organised data set, and including metadata of good quality.

      How does this relate to legally unshareable data?

    11. Datacite is another repository

      If a repository at all, DataCite is a very different kind of repository – it does not preserve (research) data!

    12. It is worth to mention that Dataverse is now integrated with Open Journal System

      It has been for quite a while.

    13. We decided

      Why and how did you decide which issues to highlight?

    14. Legal barriers:

      Are these barriers to citing data? They look like barriers to sharing data.

    1. In this, then, lies both the limits and the promise, the frustration and the excitement, of our work at the Lab.

      Sharing this learning experience is very much appreciated :)

    1. Middleware

      Middleware has a rather specific meaning for IT people, especially in the context of software. Middleware is software that connects and translates between software components.

    2. I have to know what I want to call into my argument and make that part of the data in the database, and part of the views, frames, panels, and links of selection and display.

      I.e. "think what you want to say before settling on a system to say it with"?

    3. what constitutes a data structure that incorporates cultural diversity, exposes inequities of gender, or shows biases of nationalist perspectives?

      I wasn't taught such critical engagements with data structures in my computer science education.

    1. If annotations can be cited as scholarly objects, then people will be more comfortable creating them, because they’ll know they can cite them.

      Of course you don't need DOIs for anything to make it citable. Title, creator, year of publication, publisher and identifier are the most used information elements for citations, but plenty of objects are cited without DOIs.

      Should (Hypothes.is) annotations not have titles?

    1. The conceptual URIs are embedded into the ISO place keywords XML through the use of the xlink attribute.

      Is this a manual process?
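      Whether manual or scripted, the embedding itself is mechanical. A sketch of generating such an element; the element name is simplified (real ISO 19139 records use the gmd:/gco: namespaces) and the GeoNames URI is a made-up placeholder:

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("xlink", XLINK)

def keyword_element(label, uri):
    """Build a keyword element carrying its concept URI in xlink:href."""
    kw = ET.Element("keyword", {f"{{{XLINK}}}href": uri})
    kw.text = label
    return kw

# Placeholder GeoNames URI, for illustration only:
el = keyword_element("Leiden", "https://sws.geonames.org/1234567/")
print(ET.tostring(el, encoding="unicode"))
```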

    2. In our prototype, we’ve created a mapping for hundreds of placenames to both GeoNames and LCSH.

      … which doesn't really scale, of course.

    1. This would not apply to commercial companies.

      And neither to unaffiliated researchers. Why not?

    2. Text and data mining, or TDM, is an emerging and important tool. But it is developing only slowly, mainly due to legal uncertainty caused by today's copyright rules.

      I didn't know that copyright rules slow the development of TDM - it is however very clear that current copyright rules inhibit larger-scale use of TDM.

  11. Sep 2016
    1. The nature of UK copyright exceptions is that they are defences to accusations of infringement rather than rights, and this puts the onus of responsibility on the person doing or facilitating the copying to ensure it is legal.

      Important distinction

    1. Scholar requests for custom-built databases with web-searchable front ends indicate a need for interoperable tools and repositories that allow scholars to create, store, and work with materials in various formats (multimedia, images, text, annotation, etc.) and then provide easy online access to these materials.

      This is very close to what researchers at Leiden University have requested.

  12. Aug 2016
    1. If people don’t actively see how important this type of data is, why would they ever recommend for it to be funded?

      Great argument!

    2. When I talk about this data online or give talks, invariably people tell me “you are so lucky to have a data set like that”. My response: The entire world has a data set exactly like this because we published it! Not one of those people has published anything with it.

      Hahaha :)

    1. Edward Ayers uses the term generative in his definition to describe digital scholarship's function and effect. As generative scholarship, digital scholarship marks a move away from the passive, one-way form of communication from scholar to student or peer or novice. It is a new form of inquiry and practice that "generates new questions, new evidence, new conclusions, and new audiences as it is used." Digital scholars do not simply do this work and hand it to students to be quizzed upon. Rather, they say, "help us figure this out, help us make this better, help us make this richer" by working together.2

      This sounds like teaching turns into using students to solve research questions. Hopefully this doesn't go in the same direction as "we don't need to learn facts, because we can look them up".

    1. In the end, most Symfony developers will be declaring each form field manually in a special class file, for each entity that needs a form. It is a painstaking and repetitive process that is somehow more tedious than writing the entire form in raw HTML.
    2. It became clear that, while Backbone is commonly referred to as a framework, its name tells the real story: it is really just a skeletal add-on to JQuery that provides a few useful features.
    3. Developers felt they were wrestling with Drupal (perhaps a common feeling among Drupal developers)

      Ouch for Drupal. What would this mean for Islandora?

    4. In September 2015, two researchers with the Federal Reserve Board in Washington, D.C. released a study that shook the field of economics [1]. After attempting to replicate the findings of 67 papers published in 13 different economics journals, Andrew C. Chang and Phillip Li were only able to successfully reproduce the results of 33% of the papers. Even after directly contacting the original authors for help, this number only rose to 49%, meaning that more than half of the sample was irreproducible. Chang and Li’s paper was unflattering to the field, but it was just one of a growing number of similar studies in other sciences. Just a few months before, in August 2015, Science published a damning study proving that only 36 out of 100 psychology studies were reproducible, even when using the original study materials [2]. These papers made headlines, but what made them possible is a growing movement to encourage the sharing of research data for reuse or reproduction. Publications such as PLOS [3], and government bodies such as the White House’s Office of Science and Technology Policy now require that authors share data or provide a data sharing plan [4]. And libraries are particularly well placed to foster this sharing, with our expertise in resource classification, metadata, and discoverability, as well as our close relationship with our research communities.

      Data sharing seems unrelated to the rest of the article, which is about data discovery for (re)use.

    1. Compared to other digital content indexing solutions such as Solr or ElasticSearch, which are more web services-oriented, Sphinx was specially designed to integrate well with SQL database servers, and to be easily accessed by scripting languages.

      Good information to have. It's clearly not for every situation.

    1. We also recognize that this process of relationship-building and collaboration takes time

      Would it be possible to give an estimate?

    2. We have integrated topics like information ethics, archival theory, and scholarly communication into close to fifty courses where it did not exist previously

      Curious to know what kind of courses and disciplines these courses are in you helped develop.

    3. The second benefit of this contact is in fostering working relationships that view digital humanities work as a partnership between librarians and specialists and disciplinary experts, rather than a service model.

      This is of course related to the first benefit, in that natural partners (the people you go to for anything) are seen as partners.

    4. Studies on faculty-librarian collaboration have shown mixed experiences—some faculty tend to view librarians as collaborators and partners, while others see them as service providers (Manuel, Beck, and Molloy, 2005).

      Is this still relevant in 2016?

    5. Instruction and exercises led by us blend disciplinary concepts and course content with critical lessons on multimodal composition and publication, data evaluation and usage, and archival theory to produce digital projects in courses of all levels.

      Sounds great.

    6. This model of embedded librarianship was greatly beneficial to the students

      How was this beneficial? Did the students pass more easily because the librarian explained things, or did the librarian do things for them?

    1. we do often commit more time to technical development on patron research projects than we publicly promise
    2. there is a tension between incorporating the Library’s service-oriented outlook (while resisting a perception of the Lab as a drop-off service for digital projects) and empowering researchers to own their projects.

      has been said before indeed

    1. we’re not generally creating data through experimentation or observation — more often than not, we’re mining data from historical documents. You name it, we’ve tried to mine it, from whaling logs to menus to telephone directories

      Well, mining could be seen as a form of experimentation.

    2. It’s just that if you advertise that help as “data management,” they’ll have no idea you’re trying to talk to them.

      Hmm. Probably true... although I would hope it is getting better with the spread of data management requirements.

    1. Optimizing the digital humanities requires a focus on the digital humanist as an individual; once the individual’s work is facilitated, the pathway is set for greater and broader contributions. For libraries and IT, this is an essential point of convergence: centering on the user’s workflow as a roadmap for developing services and technologies that facilitate all phases of digital humanities research.
    2. RSS feeds

      I hope this will not slow down the software.

    3. Users were also very enthusiastic about the possibility of finding new research articles from within the Zotero interface

      Zotero has several interfaces, most notably the desktop application and Firefox/Chrome plugins and the web interface. Which interface is meant here?

    4. The Zotero enhancements are currently in public beta testing

      Link please :)

    5. Another important reason for low uptake of citation managers stems from the fact that those tools are tailored towards scholarly publications, and not towards archival materials such as letters, maps, photos or diaries that humanists often use in their work.

      true

    6. This is a summary of Antonijević, S. and Cahoy, E. (2014). “Personal Library Curation: An Ethnographic Study of Scholars’ Information Practices.” Portal: Libraries and the Academy, Vol. 14, No 2. Antonijević, S. (2015). Amongst Digital Humanists: An ethnographic study of digital knowledge production. London, New York: Palgrave Macmillan.

    7. we studied how scholars engage with digital research tools and resources in different phases of their research process (please see Figure 1)

      A pie chart like this may suggest ideas that did not necessarily follow from the research. Is there an order of activities? Do the colours and slice sizes mean anything?

    8. Both DH and librarianship are inherently connected with users, yet user voices, especially those arising from empirical studies, are often missing from planning, developing, and implementing initiatives related to digital scholarship.

      See the report(s) from the Digital Scholarly Workflow project, I guess?

    1. As we develop open science platforms we should draw inspiration from the platform cooperativism movement in which users have ownership or governance of the platform so that the benefits of open science practices are spread across many individuals and ideally one that resists the encroachment of cheaters.

      Interesting!

    2. the locus of progress is at the wrong level, and that open-science prioritizes “scientific progress” in the abstract, above improving the lot of the individual humans that comprise it

      We indeed need to show the benefits of openness to research. Open science is a means to better means to various ends.

    3. The conservative response to the timely release of pre-publication data is best summarized by the phrase: “are you kidding me? why would I do that?”
    1. Such as an inventory number with the description 'Voedselverstrekking aan zieken' ('food distribution to the sick'), which requires no geographic link to the street 'Zieken' in The Hague. And a drawing of experiences from the diary ('agenda') of a nurse in Bandoeng, which is of course not situated in the town of Agenda in America.

      The links do not point to persistent identifiers. Sigh.

    1. we need our DH courses to teach people more than they teach tools. We should structure our curricula not around vague gestures towards collaboration but meaningful practice of it. We should encourage library students to see their work as meaningful and integral (and we should demonstrate this to humanists as well). And we should teach humanists that there are faces behind the tools they learn in class. Not until we model effective relationships in our courses will we be able to produce digital humanities work that is just and equitable.

      Great conclusion.

    2. we should be teaching students resources for working better (both together and alone), rather than what the GUI on different mapping tools looks like.
    3. Many of the DH courses we reviewed don’t effectively discuss collaboration between humanists and librarians.

      This is an interesting observation.

    4. It seems that humanists teach students to strive towards collaboration for vague reasons, rather than teaching them about the actual labor of collaborative research.

      Perhaps many programs already provide other courses about collaborative research?

    5. one could posit that students in this cohort of LIS courses are being taught to be software generalists but not necessarily how to work with disciplinary partners, or to advocate for the library’s essential role in DH projects.

      So effectively these courses teach computer/data science?

    6. European course registry for digital humanities: https://dh-registry.de.dariah.eu/

    7. For the humanities syllabi, I also asked how many tools were being taught in each course.
    8. I looked at whether the courses: 1) required a collaborative project and 2) set aside time to discuss the challenges of collaboration or cross-disciplinary research (or had readings that indicated such).
    9. With each syllabus I looked for general focus (is it tools-, training-, or topics-focused?), the breadth of assigned readings (is the literature from librarianship, humanities fields, or both?), and the structure of project(s) (collaborative, individual).
    10. look through the Digital Humanities Syllabi collection in Zotero

      I would like some elaboration on the collection method. Did the authors succeed in collecting older descriptions?

    1. DH projects tend to be fluid in light of the swift technological developments taking place in the field, so an important part of this kind of outreach is introducing faculty and students to ways of approaching metadata questions, in addition to working with one specific schema. Faculty and student takeaways include not only an enhanced understanding of and experience working with metadata, but also greater knowledge about the fundamental approaches to structuring data in DH projects, which can empower them to engage in further such projects in the future.

      I guess that DH will also be about automating the encoding process, although students and faculty should understand the whole process of encoding and describing before any computer takes it over.

    2. custom elements allow values to describe the location of marginalia at the page level, and specific in-page level

      ALTO?

    3. The resultant metadata schema is one that could be adapted for use by any DH project documenting handwritten or printed marginalia, an innovation that is rooted in metadata in combination with humanities scholarship.

      Open Annotation?

    4. create a custom schema capable of documenting this wide variety of materials

      Reminds me of Robots Reading Vogue, in which one experiment, Fabricspace, tried to cluster terms describing fabrics.

    1. "uri": "http://example.com/"

      The response code suggests the request succeeded, but the URI in the response differs from the URI in the request. And do I understand correctly that the update does not need the full object, only the changed fields? (That sounds more like a PATCH request, but I'm not sure what the HTTP semantics are in this context.)
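      For clarity on what I mean by PATCH-like behaviour, here is a minimal sketch of the difference between full replacement (PUT) and a merge of changed fields (PATCH). The resource shape is hypothetical, not the actual annotation model of this API:

```python
# PUT vs PATCH semantics on a stored resource, modelled as dicts.
# The field names ("uri", "text", "tags") are illustrative assumptions.
def put(resource: dict, body: dict) -> dict:
    """PUT semantics: the request body replaces the stored resource."""
    return dict(body)

def patch(resource: dict, body: dict) -> dict:
    """PATCH-style merge: only fields present in the body are updated."""
    merged = dict(resource)
    merged.update(body)
    return merged

stored = {"uri": "http://example.com/", "text": "old note", "tags": ["a"]}
update = {"text": "new note"}

print(put(stored, update))    # uri and tags are lost
print(patch(stored, update))  # uri and tags are preserved
```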