50 Matching Annotations
  1. Jul 2019
    1. A tool to help determine weights (or a feature of a creditmap tool) seems most likely to be successful. Such a tool would likely provide simultaneous views of the credit map and weights: one that allows a detailed view of any particular contriponent and its weight, and the other that provides a view (perhaps graphical) of the entire creditmap and weights.

      Great idea! Doesn't seem to exist yet, but it should also take into account the quality of each contribution (a few high-quality contributions can be more important than many low-quality ones).

    2. The value of transitive credit is in measuring the indirect contributions to a product, which today are not quantitatively captured

      Should contributions to a product really be quantitatively captured? Wouldn't that lead to the same dead end as citations in scientific publishing?

    3. how the credit map for a product A, which is used by a product B, feeds into the credit map for product B

      Transitive credit's 3rd element: transitive nature

    4. Any product should list all authors (as currently listed as authors of a paper), all contributors (as currently listed in the acknowledgements of a paper) and all component products that have been used, including both publications and other products such as software and data (as currently either cited, acknowledged, or not included in a paper).

      Transitive credit's 1st element: credit (called "contriponent" - combination of contributors and components)

    5. Methods for doing this weighting, whether using a taxonomy or a more traditional list of authors, and analysis of these methods and their impact would likely be developed if this overall idea moves forward.

      Transitive credit's 2nd element: weight

    1. Software Sustainability Institute, based at the University of Edinburgh, provides free, short, online evaluations of software sustainability, and fellowships of £3,000 ($US3,800) for researchers based in Britain or their collaborators.

      UK-only additional funding opportunity for open-source software: the Software Sustainability Institute (University of Edinburgh)

    2. One Twitter thread (see go.nature.com/2yekao5) documents grants from the NSF’s Division of Biological Infrastructure, the NIH’s National Human Genome Research Institute and the National Cancer Institute, and a joint programme from the NSF and the UK Biotechnology and Biological Sciences Research Council (now part of UK Research and Innovation). Private US foundations such as the Gordon and Betty Moore Foundation, the Alfred P. Sloan Foundation and the Chan Zuckerberg Initiative (CZI) also fund open-source software support.

      Funding opportunities for open source software

    3. However long your software will be used for, good software-engineering practices and documentation are essential, says Andreas Mueller, a machine-learning scientist at Columbia University in New York City. These include continuous integration systems (such as TravisCI), version control (Git) and unit testing

      good software-engineering practices include:

      • continuous integration
      • version control
      • unit testing
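
      A minimal sketch of the unit-testing practice mentioned above, using Python's built-in unittest module (the normalize function and its tests are hypothetical examples, not taken from the article):

```python
import unittest

def normalize(values):
    """Scale a list of numbers so that they sum to 1."""
    total = sum(values)
    if total == 0:
        raise ValueError("cannot normalize a list that sums to zero")
    return [v / total for v in values]

class NormalizeTests(unittest.TestCase):
    def test_sums_to_one(self):
        self.assertAlmostEqual(sum(normalize([1, 2, 3])), 1.0)

    def test_preserves_proportions(self):
        self.assertEqual(normalize([2, 2]), [0.5, 0.5])

    def test_rejects_zero_total(self):
        with self.assertRaises(ValueError):
            normalize([0, 0])

if __name__ == "__main__":
    unittest.main(exit=False)
```

      Hooked into a continuous-integration service such as Travis CI, tests like these run automatically on every commit pushed under version control.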
    4. On 10 April, astrophysicists announced that they had captured the first ever image of a black hole. This was exhilarating news, but none of the giddy headlines mentioned that the image would have been impossible without open-source software. The image was created using Matplotlib, a Python library for graphing data, as well as other components of the open-source Python ecosystem. Just five days later, the US National Science Foundation (NSF) rejected a grant proposal to support that ecosystem, saying that the software lacked sufficient impact.

      This story raises the question "From the funding point of view, what is reproducible research?".

  2. Apr 2019
    1. About 98% of the research published in the Journal since 2000 is free and open to the public. Research of immediate importance to global health is made freely accessible upon publication; other research articles become freely accessible after 6 months.

      98%?!?!?! Data please!

  3. Apr 2017
    1. In a cynical illustration of states' powerlessness to regulate this concentration, Google has more to fear from its direct competitors, whose financial means for blocking its progress in the markets far exceed those of states. Hence the agreement between Microsoft and Google to settle their disputes only in private from now on, under their own rules, so as to focus solely on market competition and no longer on legislation.

      Having grown too big, the GAFAM companies no longer feel bound by the law. They settle matters among themselves.

    2. By producing free (or very affordable), high-performance, high-value-added services in exchange for the data they generate, these companies capture a gigantic share of users' digital activities. They thereby become the main service providers that governments must deal with if they want to enforce the law, particularly for population surveillance and security operations.

      That is why the GAFAM companies are as powerful as states (if not more so).

    3. In fact, I think most people don't want Google to answer their questions. They want Google to tell them what to do next.

      Who said Google answers your questions? For a long time now, Google has been asking the questions and supplying the answers (at your expense).

    4. It defines several categories of data according to their sources, which quantify the bulk of human actions: hard data, produced by public institutions and administrations; soft data, produced by individuals, either voluntarily (via social networks, for example) or involuntarily (traffic analysis, geolocation, etc.); metadata, concerning in particular the provenance of data, their traffic, durations, etc.; and the Internet of Things, whose devices, once networked, produce performance and activity data, etc.

      The different categories of data, as defined by Antoinette Rouvroy in Des données et des hommes. Droits et libertés fondamentaux dans un monde de données massives.

    1. At the end of the 1990s, it was in the name of this capitalist realism that the promoters of the Open Source Initiative understood the importance of keeping source code open in order to foster a common ground that sustains the market. They saw the constraints of free-software licences, as championed by Richard Stallman and the Free Software Foundation, as a brake on innovation (for example, the GNU GPL (General Public License) requirement that improvements to a free program be distributed under the same licence). For them, opening the code is an opportunity for creation and innovation, which does not necessarily imply placing the results produced thanks to that openness in the commons. No fair play: they draw from the commons but do not give back, or at least are not obliged to.

      There lies the fundamental (and original) difference between free software and open source!

    1. Privacy tech doesn’t take the place of having the law on your side.

      Nowadays, to protect your privacy you need to:

      have the law on your side + trust service providers + use privacy tech

  4. Mar 2017
    1. “At the heart of that First Amendment protection is the right to browse and purchase expressive materials anonymously, without fear of government discovery,” Amazon wrote in its memorandum of law.  

      Amazon refuses to hand over information in a murder case, claiming it is protecting users from the government. This must be a joke!

    1. Indeed, cannibalizing a federated application-layer protocol into a centralized service is almost a sure recipe for a successful consumer product today. It's what Slack did with IRC, what Facebook did with email, and what WhatsApp has done with XMPP. In each case, the federated service is stuck in time, while the centralized service is able to iterate into the modern world and beyond.

      What Slack, Facebook, WhatsApp and others have done to prevent users from using their favorite app to communicate with someone who uses another app. Sad and bad!

      What would you do if you had to use the same email client/provider as the people you want to communicate with?

    1. If you are a developer, do yourself a favour and stop using that abominable Courier font. There are fonts specially designed for developers that are not only easier on the eyes but also help avoid errors (l/1 and O/0 confusions, etc.).

      A very interesting point about choosing a typeface for programming!

    1. In addition, Neylon suggested that some low-level TDM goes on below the radar. ‘Text and data miners at universities often have to hide their location to avoid auto cut-offs of traditional publishers. This makes them harder to track. It’s difficult to draw the line between what’s text mining and what’s for researchers’ own use, for example, putting large volumes of papers into Mendeley or Zotero,’ he explained.

      Without a clear understanding of what reference managers can do and what text and data mining is, it seems that some publishers will block the download of full texts on their platforms.

    1. This milestone sends an extremely strong signal to those who have developed proprietary annotation implementations, such as Genius, Readcube, Medium or Amazon (Kindle), that these technical recommendations have the weight of the web community behind them and can be relied upon.

      This point is for people who thought it better to use ReadCube or Mendeley's proprietary tool to annotate PDFs within their reference manager, at the risk of losing it all if they have to migrate. Because if one thing is certain about proprietary software, it's that its features don't stay compatible/interoperable for long.

  5. Feb 2017
    1. A high school student in Brian Boone’s economics class at Edison High School in Huntington Beach, CA made the millionth Hypothesis annotation as a part of a class assignment

      Great to see Hypothesis used in class!

  6. Jan 2017
    1. e) We also may make use of third party tracking pixels used by advertising or analytical partners. Some such partners include, but are not limited to: (i) Google Analytics: Used to track statistical information such as page visits and traffic source information allowing us to improve the performance and quality of the Site. For more information please visit: http://www.google.com/analytics/learn/privacy.html. (ii) Google Advertising: Used to track conversions from advertisements on the Google Search and Google Display network. For more information please visit: http://www.google.com/policies/technologies/ads/. Third party pixels and content may make use of cookies. We do not have access or control over these third party cookies and this Policy does not cover the use of third party cookies.

      When the VPN client you intend to use is in fact the one that will leak your personal data!

      What a shame!

    1. Anyone using that older browser should have access to the same content as someone using the latest and greatest web browser. But that doesn’t mean they should get the same experience. As Brad Frost puts it: There is a difference between support and optimization. Support every browser ...but optimise for none.

      This is why HTML (content) must be separated from CSS (layout) and JavaScript (interactivity). Content is the core functionality; CSS and JavaScript are layered on through progressive enhancement.

    2. Likewise, the expressiveness of CSS and JavaScript is only made possible on a foundation of HTML, which itself requires a URL to be reachable, which in turn depends on the HyperText Transfer Protocol, which sits atop the bedrock of TCP/IP.

      The "layers of longevity" of the architect Frank Duffy applied to the web.

    1. But the web is not a platform. The whole point of the web is that it is cross‐platform.

      It's because the web is not a platform that it can be universal!

    2. XHTML 1.0 didn’t add any new features to the language. It was simply a stricter way of writing markup. XHTML 2.0 was a different proposition. Not only would it remove established elements like IMG, it would also implement XML’s draconian error‐handling model.

      XHTML 1.0 & 2.0 didn't respect the nature of the web and the HTML’s loose error‐handling. So they died.

    3. hover effects

      JavaScript hack later included in CSS

    4. rounded corners and gradients

      design hack later included in CSS

    5. required fields

      JavaScript hack later included in HTML

    6. Remember a facet of the web is universal readership. There is no universal interpreted programming language.

      The web is universal. No programming language is (even JavaScript).

    1. While it’s true that when designing with Dreamweaver, what you see is what you get, on the web there is no guarantee that what you see is what everyone else will get.

      Love to consider WYSIWYG like this!

    1. One of those values is the principle of material honesty. One material should not be used as a substitute for another. Otherwise the end result is deceptive.

      Great principle!

      Should be applied to science as well: scientific publication is meant to spread ideas and findings, not to evaluate researchers!

    2. Back then there were two major browsers competing for the soul of the web: Microsoft Internet Explorer and Netscape Navigator. They were incompatible by design. One browser would invent a new HTML element or attribute.

      While competition can lead to innovation, it's a hurdle when you try to build standards.

    1. Browser software may ignore this tag.

      This is genius! Innovations and standards can live together on the web.

    2. The open architecture of the internet reflected the liberal worldview of its creators. As well as being decentralised, the internet was also deliberately designed to be a dumb network.

      Open, decentralised and not meant to know what is transmitted: that's how the Internet has been created. Perfect to protect privacy!

      It's sad that so many people fight against the Internet today...

    3. You may have heard that the internet was designed to resist a nuclear attack. That’s not entirely correct. It’s true that the project began with military considerations. The initial research was funded by DARPA, the Defense Advanced Research Projects Agency. But the engineers working on the project were not military personnel. Their ideals had more in common with the free‐speech movement than with the military‐industrial complex. They designed the network to route around damage, but the damage they were concerned with was censorship, not a nuclear attack.

      Internet was designed to fight censorship.

      Today's use of the web by Facebook, Google... and some countries is based on narrowing information diversity, the little sister of censorship.

    4. Each generation builds upon the work of their forebears

      This is where the need to share (ideas, codes, references, etc.) comes from.

    1. I shouldn’t have daisy-chained two such vital accounts — my Google and my iCloud account — together.

      Lesson learned: don't chain different accounts together by "logging in with" another (most of the time Google, Facebook, or Twitter)

    2. First you call Amazon and tell them you are the account holder, and want to add a credit card number to the account. All you need is the name on the account, an associated e-mail address, and the billing address. Amazon then allows you to input a new credit card. (Wired used a bogus credit card number from a website that generates fake card numbers that conform with the industry’s published self-check algorithm.) Then you hang up. Next you call back, and tell Amazon that you’ve lost access to your account. Upon providing a name, billing address, and the new credit card number you gave the company on the prior call, Amazon will allow you to add a new e-mail address to the account. From here, you go to the Amazon website, and send a password reset to the new e-mail account. This allows you to see all the credit cards on file for the account — not the complete numbers, just the last four digits. But, as we know, Apple only needs those last four digits. We asked Amazon to comment on its security policy, but didn’t have anything to share by press time.

      Is it still as easy to break into someone's Amazon account today? Hopefully not. But I'm really not sure...

    3. Google partially obscures that information, starring out many characters, but there were enough characters available, m••••n@me.com

      This is where email sub-addressing (https://en.wikipedia.org/wiki/Email_address#Sub-addressing) is also useful!
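
      A rough sketch of how sub-addressing works, in Python (the helper and the addresses are hypothetical; the "+" separator is the convention used by Gmail and several other providers):

```python
def split_subaddress(address, separator="+"):
    """Split a sub-addressed email like 'mat+amazon@me.com' into
    (base address, tag). The tag is None when no separator is present."""
    local, _, domain = address.partition("@")
    base, sep, tag = local.partition(separator)
    return ("{}@{}".format(base, domain), tag if sep else None)

# Mail to both addresses lands in the same mailbox, but the tag
# reveals which service the address was given to (or leaked by).
print(split_subaddress("mat+amazon@me.com"))  # ('mat@me.com', 'amazon')
print(split_subaddress("mat@me.com"))         # ('mat@me.com', None)
```

      Giving each service its own tag makes a partially obscured address like the one above much harder to guess, and makes leaks traceable.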

    4. Apple tech support confirmed to me twice over the weekend that all you need to access someone’s AppleID is the associated e-mail address, a credit card number, the billing address, and the last four digits of a credit card on file

      Not very complicated to hack, is it? Fortunately, Apple now relies on two-factor authentication.

    5. In response, Apple issued a temporary password. It did this despite the caller’s inability to answer security questions I had set up. And it did this after the hacker supplied only two pieces of information that anyone with an internet connection and a phone can discover.

      Security is not only the user's business. If the company doesn't do the job, it's useless for the user to be careful.

    6. In short, the very four digits that Amazon considers unimportant enough to display in the clear on the web are precisely the same ones that Apple considers secure enough to perform identity verification.

      Security considered from different perspectives leads to security flaws!

  7. Sep 2016
    1. educators could use annotation to build interactive classroom lessons

      It's actually a very good idea to ask students to comment on their readings and to share their comments.

      Can be used in traditional or flipped classrooms.

  8. May 2016
    1. Dear Anonymous Coward, please reveal yourself so we can discuss when academia put librarians in charge.

      Nothing new.

      Before Sci-Hub: ask the library for money and let researchers think they have access for free.

      After Sci-Hub: blame librarians for free access to articles and tell researchers that libraries are responsible for copyright infringement.

      Isn't it time for researchers to defend the people who work all year long to give them access... to their own work (given to publishers for free!)?

    2. Are our systems difficult? Aren’t you publishers the ones that “break” the hyperlink ethos of the web by creating the paywalls in the first place?
    3. What an Anonymous Coward. If you’re going to piss all over the people who sign the checks that keep your business running, you should at least have the guts to sign your name and take some responsibility.

      Right!

    1. Writing and submission. The process of compiling findings, writing accompanying narrative and making this available for public view and scrutiny can be simplified by the use of new improved software. These tools can help identify relevant papers through increasingly powerful learning algorithms (e.g. F1000Workspace, Mendeley, Readcube). They can also enable collaborative authoring (e.g. F1000Workspace, Overleaf, Google docs), and provide formatting tools to simplify the process of structuring an article to ensure all the necessary underlying information has been captured (e.g. F1000Workspace, EndNote). Submission for posting as a preprint, and/or for formal publication and peer review, should be as simple as a single click.

      How can an "Open Science Platform" be built upon proprietary tools only? Maybe the meaning of "open" needs defining here?