- Jan 2022
-
www.simuladorfacturaluz.es www.simuladorfacturaluz.es
-
- IBERDROLA CONSUMPTION DATA
-
-
stackoverflow.com stackoverflow.com
-
If you need to use an expression like /\/word\:\w*$/, be sure to escape your backslashes: new RegExp( '\\/word\\:\\w*$' ). – Jonathan Swinney Nov 9 '10 at 23:04
- REGEX
- also escape \
-
Instead of using the /regex\d/g syntax, you can construct a new RegExp object: var replace = "regex\\d"; var re = new RegExp(replace,"g");
- REGEX
- create a regex from a pattern string
- CAREFUL: \b -> \\b; \s -> \\s in the pattern string
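A minimal sketch of the two notes above (sample pattern and string are mine): every backslash of the regex must be doubled inside the string handed to new RegExp.
var pattern = "\\bword\\d+\\s";   // the regex \bword\d+\s, with each backslash doubled in the string literal
var re = new RegExp(pattern, "g");
"word42 ".match(re);              // -> ["word42 "]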
-
-
sourceware.org sourceware.org
-
4.1. Removal of 'xlocale.h'
- XLOCALE.H
- type "locale_t" removed
- use "__locale_t"
-
-
sites.uclouvain.be sites.uclouvain.be
-
- XLOCALE.H
- removed in GLIBC >= 2.26
/* POSIX 2008 makes locale_t official. */ typedef __locale_t locale_t;
-
-
-
ldd --version
- VERSION GLIBC
-
-
www.linuxquestions.org www.linuxquestions.org
-
Just execute: ldd --version which comes with glibc package
- VERSION GLIBC
- "ldd --version"
-
-
unix.stackexchange.com unix.stackexchange.com
-
It was enough to copy the directory /usr/share/terminfo/ to /usr/share/terminfo of chroot directory.
- OK. SOLVED
- directory 78: xterm
-
-
forums.zotero.org forums.zotero.org
-
adamsmith October 15, 2018 Add Item by Identifier will work for any DOI registered in Crossref or a half-dozen other DOI registrars Really? I thought we currently "just" covered CrossRef, DataCite, and Airiti? - which does still mean it almost always works; that's probably 95%+ of active DOIs, but if there's code already that covers others, that'd be good to know as I was just going to put some work towards that. dstillman October 15, 2018 We also have DOI translators for EIDR and mEDRA.
- DOI agencies
- Zotero probes them when searching for a DOI
-
-
-
HughP commented on Sep 26, 2018 • edited I feel a bit foolish, but yes there is a Preference Pane. I found it now. Note that Zotfile puts menu item in the Tools menu for the preference. This was where I was looking, and therefore didn't see it.
- TOOLS MENU: doesn't appear
- PLUGINS: the Preferences button doesn't exist!!!
-
-
api.crossref.org api.crossref.org
-
Work model fields in the Crossref REST API (Swagger schema; * = required): institution, indexed*, posted, publisher-location, update-to, standards-body, edition-number, group-title, reference-count*, publisher*, issue, isbn-type, license, funder, content-domain*, chair, short-container-title, accepted, content-updated, published-print, abstract, DOI* (the DOI identifier associated with the work), type*, created*, approved, page, update-policy, source*, is-referenced-by-count*, title*, prefix*, volume, clinical-trial-number, author*, member*, content-created, published-online, reference, container-title, review, original-title, language, link, deposited*, score*, degree, subtitle, translator, free-to-read, editor, component-number, short-title, issued*, ISBN, references-count*, part-number, journal-issue, alternative-id, URL*, archive, relation, ISSN, issn-type, subject, published-other, published, assertion, subtype, article-number
- SEE: FIELDS
- COMPARE: with Zotero fields
-
-
www.crossref.org www.crossref.org
-
Behind the scenes improvements to the REST API Patrick Polischuk – 2021 July 06 In REST APICommunity UPDATE, 24 August 2021: All pools have been migrated to the new Elasticsearch-backed API, which already appears to be more stable and performant than the outgoing Solr API. Please report any issues via our Crossref issue repository in Gitlab.
- API: NEW
-
-
opencitations.net opencitations.net
-
Open Citation Identifiers Each Open Citation Identifier [[OCI]] has a simple structure: the lower-case letters "oci" followed by a colon, followed by two numbers separated by a dash (e.g. https://w3id.org/oc/index/coci/ci/02001010806360107050663080702026306630509-02001010806360107050663080702026305630301), in which the first number identifies the citing work and the second number identifies the cited work. For citations in which the citing and cited works are identified by DOIs, which includes all the COCI citations, the OCI is created in the following manner, as explained more fully here. Each case-insensitive DOI is first normalized to lower case letters. Then, after omitting the initial doi:10. prefix, the alphanumeric string of the DOI is converted reversibly to a pure numerical string using the simple two-numeral lookup table for numerals, lower case letters and other characters presented at https://github.com/opencitations/oci/blob/master/lookup.csv. Finally, each converted numeral is prefixes by a 020, which indicates that Crossref is the supplier of the original metadata of the citation (as indicated at http://opencitations.net/oci). OCIs can be resolved using the OpenCitations OCI Resolution Service.
- IMPORTANT
-
Each case-insensitive DOI is first normalized to lower case letters
- WHY??? WHERE CAN THE NORM BE READ?
-
-
citation.crosscite.org citation.crosscite.org
-
DOI registration agencies such as Crossref, DataCite and mEDRA collect bibliographic metadata about the works they link to.
- DOI agencies
-
-
-
Internet gets more reliable
- BUT ALWAYS THINK:
- WHAT ARE WE GOING TO DO WHEN THE INTERNET IS TURNED OFF?
-
my main frustrations are around the lack of the very basic things that computers can do extremely well: data retrieval and search. I'll carry on, just listing some examples. Let's see if any of them resonate with you:
- 20 years waiting for the Semantic Web promises!!!
- Conclusions:
- competition vs cooperation (reinventing the wheel again and again)
- minority interested in knowledge vs majority targeted to consume
-
youtube videos, even though most of them have subtitles hence allowing for full text search?
- GREAT IDEA: VIDEOS (VISUAL+AUDIO) ++ TRANSCRIPTION (FULL TEXT), permits searches!!!
-
-
beepb00p.xyz beepb00p.xyz
-
the tool I've developed
- REINVENT THE WHEEL!
- SADLY, DO IT YOURSELF IS OFTEN THE ONLY ALTERNATIVE!
-
I want URLs to address information and represent relations. The current URL experience is far from ideal for this.
- me too!
-
a more realistic and plausible target: using my digital trace (such as browser history, webpage annotations and my personal wiki) to make up for my limited memory
- OK: tools for recording exist, but we NEED "THE TOOL" to search and RECOVER these data!
-
-
libguides.massgeneral.org libguides.massgeneral.org
-
Tip 8 You can click the DOI and URL field labels to open the field link:
- OK: click on Label
-
Tip 5 You can convert the contents of the "Title" and "Publisher" fields to either sentence or title case by right-clicking the field and using the Transform Text menu.
- CASE: Title or Sentence
-
Tip 3 To see the number of items in the selected library or collection, click an item in the middle column and use the Select All shortcut: Command + A on Mac OS X or Control + A on Windows and Linux A count will appear in the right column:
- SELECT ALL: "CTRL" + "a"
-
"Control" key on Windows
- WHICH COLLECTION?
-
Tip 2 Press "Shift" and “+” (plus) on the keyboard within a collections list or items list to expand all attachments, and “-” (minus) to collapse them.
- ME: ONLY with "+"/"-" keys, WITHOUT "Shift"
-
-
www.zotero.org www.zotero.org
-
- 2022-01: the Zotero DOI field holds the bare DOI: WITHOUT https, NOT a URL
-
More recently, there has been a strong movement to move the web over from HTTP to the more secure HTTPS protocol. Technical changes also made it possible to link DOIs via the shorter doi.org instead of dx.doi.org. Together, this let Crossref change its recommended format to https://doi.org/10.1037/rmh0000008.
- REASONS FOR CHANGES
-
As Crossref explains in their guidelines, the original concise doi:10.1037/rmh0000008 format was recommended with the hope that web browsers would one day automatically recognize and hyperlink these DOIs.
- CONCISE format
-
Effective March 2017, Crossref, an influential DOI registration agency, now recommends the following format: https://doi.org/10.1037/rmh0000008 Note the use of “https” instead of “http”, and “doi.org” instead of “dx.doi.org”.
- 2017: URL CHANGED
- https
- doi.org
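A small sketch (my own helper, not from the page) of normalizing older DOI spellings to the format recommended above:
function normalizeDoiUrl(doi) {
  // drop an existing http(s)://dx.doi.org/ or doi.org/ prefix, or a leading "doi:"
  var bare = doi.replace(/^https?:\/\/(dx\.)?doi\.org\//i, "").replace(/^doi:/i, "");
  return "https://doi.org/" + bare;
}
normalizeDoiUrl("doi:10.1037/rmh0000008");                // -> "https://doi.org/10.1037/rmh0000008"
normalizeDoiUrl("http://dx.doi.org/10.1037/rmh0000008");  // -> same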
-
Crossref, an influential DOI registration agency
- CROSSREF
-
-
libguides.massgeneral.org libguides.massgeneral.org
-
Questions Still have questions? Check the following FAQ entries, or, if these don’t answer your question, use the Zotero forums: Can I use Zotero in one language and create bibliographies in another? DOI format in APA style Does Zotero support label/authorship trigraph styles, like [ddb98]? How can subsequent occurences of the same author replaced by a fixed term/symbol? How do I prevent title casing of non-English titles in bibliographies? How do I use rich text formatting, like italics and sub/superscript, in titles? How do you cite a secondary source in Zotero? How does Zotero parse things in the name fields? I need to use Chicago style. Which of the three versions that come with Zotero should I use? I'm the publisher/editor of a journal. What can I do to have Zotero support our style? Journal Abbreviations Missing Italics (or Italics-Only) in Word Bibliographies References appear in the wrong font in Word/LibreOffice Standard Citation Styles What are these DOIs doing in my bibliography? What is the official Harvard style? Why do some citations include first names or initials, and how can I prevent this from happening? Why don't titles show up in sentence case in bibliographies? Why isn't the first letter of a subtitle in uppercase in bibliographies?
- LINKS to Zotero doc!
- GOOD!
-
You can also install CSL styles (with a “.csl” extension) from local files on your computer (e.g., styles that you edit yourself or that you download from another website). In the Zotero Style Manager, click the '+' button, then find the style file on your computer.
- CSL styles
-
-
libguides.massgeneral.org libguides.massgeneral.org
-
Zotero currently uses the title, DOI, and ISBN fields to determine duplicates. If these fields match (or have no information entered), Zotero will also compare the years of publication and author/creator lists (if at least one author last name plus first initial matches) to determine duplicates.
- DUPLICATED ITEMS
- Merge them
-
-
libguides.massgeneral.org libguides.massgeneral.org
-
Debug Output Logging: To help diagnose a problem, the Zotero developers may ask you to submit a Debug Log ID. This is different from an Error Report ID above. To submit a debug log, check “Enable Logging”, then complete the sequence of steps needed to produce your error. Then, click “Submit Debug Report” and post the Debug ID number to the Zotero forums. Try to avoid performing unrelated actions when making a debug log.
- DEBUG ID in Forums
-
Automatic File Importing: By default, the Zotero Connector will offer to import RIS, BibTeX, and Refer/BibIX bibliographic files when you open them in your browser. You can disable this feature or manage the sites from which data is imported here.
- ???
-
Save to Zotero.org: When the Zotero desktop client is closed, the Zotero Connector will save directly to the zotero.org servers. These settings let you reauthorize your browser to save to your zotero.org account or clear your account credentials. You can also control whether PDF attachments and web page snapshots are automatically saved when importing to zotero.org.
- ZOTERO LIB
-
-
github.com github.com
-
the-solipsist commented on Mar 9, 2019 • edited Unfortunately, %g adds a ,, which doesn't work in those cases where there is not first name / surname (for instance, institutional authors). In those cases, %g ends up adding a comma in the end of the name. Additionally, %g provides a "Surname, Firstname" format, and there is no expression for "Firstname Surname", which some would prefer.
- PROBLEM
-
QingQYang commented on Aug 11, 2015 I have solved this problem by adding wildcard %g for author's full name as the style of Zotero's two fields display. Please check the pull request #193, thanks.
- %g works!
-
-
news.ycombinator.com news.ycombinator.com
-
xyzzy21 12 days ago | prev | next [–] Scientific American started its decline in the 1990s. It became a "rag" in the 2000s. It's 100% worthless now. reply
- OK
-
duskwuff 13 days ago | parent | prev | next [–] The decline of Scientific American into pop science started much earlier than that. I'd peg it around 2000, when they changed the cover design and stopped running classic columns like "Mathematical Games" and "Amateur Scientist". The pop-science articles started ramping up around the same time. reply green01 12 days ago | root | parent | next [–] Same thing with Popular Science and Popular Mechanics, science reporting was the first to go in the death of journalism. They were terrible 15 years ago, maybe people didn't notice until recently when they started bizarrely endorsing politicians like Biden and running CRT articles.
- OK
-
cycomanic 13 days ago | prev | next [–] I always found the New Scientist a much better publication
- I DON'T THINK SO!
- very low level!
-
Cthulhu_ 13 days ago | parent | next [–] I read a good article some time ago that explained the different 'levels' of scientific writing. Level 1 was the actual papers. Level 2 is the press releases of the university or research institute in question. And level 3 and beyond are the pop sci websites, magazines, social media channels etc picking up on it - and it becomes muddled after that, because they will often pick up and rewrite from each other instead of referencing the source.
- OK
- be aware of popularization (beware of pop-science coverage)
-
Cthulhu_ 13 days ago | parent | next [–] I read a good article some time ago that explained the different 'levels' of scientific writing.
- CITATION NEEDED!
-
systemvoltage 13 days ago | prev | next [–] I've completely lost faith in Scientific American after they tried to "cancel" James Webb (yes, JWST telescope name) for complicitness against LGBTQ people some 70 fricking years ago, more details here: https://news.ycombinator.com/item?id=29690749
- !!!
-
-
scottaaronson.blog scottaaronson.blog
-
Rama Says: Comment #111 January 5th, 2022 at 7:44 am Scientific American is no longer what it used to some decades back. The standard of articles has come down and has very low abysmal standards of written presentations. American Scientist (AS) has good content and would go for AS and ignore Scientific American.
- OK!
-
The death of the Economist and Scientific American and the New Yorker are things the GOP mourns.
- ???
-
GregW Says: Comment #8 January 3rd, 2022 at 5:53 am Scientific American was really great in the 80s but somewhere in the 90s or early 2000s it took a turn towards dumbed down science popularism and lost my respect
- ME TOO!
-
the SciAm hit-piece, and then reported back to the others that the strong emotions were completely, 100% justified in this case.
- yes! 100% justified!
-
Fortunately, there are high-quality online venues (e.g., Quanta) that partly fill the role that Scientific American abdicated.
- thank you!
- I'll change to Quanta
-
Laura Helmuth, the editor-in-chief now running SciAm into the ground
- since this change, everything has gone for the worse
-
assumes that there are default humans who serve as the standard
- that is not correct!
- measurements are simply distributed around a mean
- how should that be interpreted?
-
Scientific American—or more precisely, the zombie clickbait rag that now flaunts that name
- HERE! very true!!!
- catchy headlines that don't describe the news
-
-
developer.mozilla.org developer.mozilla.org
-
The encodeURIComponent() function encodes a URI by replacing each instance of certain characters by one, two, three, or four escape sequences representing the UTF-8 encoding of the character
- ENCODEURI
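Two quick examples (the strings are mine) of what encodeURIComponent produces:
encodeURIComponent("10.1037/rmh0000008"); // -> "10.1037%2Frmh0000008"
encodeURIComponent("a&b=c d");            // -> "a%26b%3Dc%20d"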
-
-
developer.mozilla.org developer.mozilla.org
-
Decoding query parameters from a URL decodeURIComponent cannot be used directly to parse query parameters from a URL. It needs a bit of preparation. function decodeQueryParam(p) { return decodeURIComponent(p.replace(/\+/g, ' ')); } decodeQueryParam('search+query%20%28correct%29'); // 'search query (correct)'
- DECODEURI
-
-
-
import re

# regular expressions for recognizing DOIs, ADS bibcodes, and arXiv IDs in full text
_re_id = {}
_re_id["doi"] = re.compile(r"\b10\.\d{4,}(?:\.\d+)*\/(?:(?!['\"&<>])\S)+\b")
_re_id["bibcode"] = re.compile(r"\b\d{4}\D\S{13}[A-Z.:]\b")
_re_id["arxiv"] = re.compile(r"\b(?:\d{4}\.\d{4,5}|[a-z-]+(?:\.[A-Za-z-]+)?\/\d{7})\b")
- REGEX
-
-
-
Subject classes do not exist for some of the older archives in the Physics group. Instead, each archive represents a subject class, e.g., hep-ex, hep-lat, hep-ph, and hep-th. The astro-ph archive currently has no subject classes, while cond-mat and physics are classified by subject classes that appear only in the metadata (not in the identifier). This scheme uses two upper-case characters to identify the subject class,
- SUBJECT CLASS
-
The canonical form of identifiers from January 2015 (1501) is arXiv:YYMM.NNNNN, with 5-digits for the sequence number within the month. The article identifier scheme used by arXiv was changed in April 2007. All existing articles retain their original identifiers but newly announced articles have identifiers following the new scheme. As of January 2015, the number of digits used in the second part of the new identifier, the sequence number within the month, is increased from 4 to 5. This will allow arXiv to handle more than 9999 submissions per month (see monthly submission rates).
- DISTINCT FORMATS
-
-
frameboxxindore.com frameboxxindore.com
-
How do I move my mobile bookmarks to Chrome? Open the Chrome Bookmarks manager (Ctrl+Shift+O) and you will see a new folder called ‘Mobile bookmarks’. All your bookmarks from your Android phone and/or iPhone will be sorted inside this folder.
- FIRST: in mobile: account: sync
- SECOND: in PC: ADD profile: same account; sync
- THIRD: in PC: Bookmark Manager: Mobile BM
-
-
github.com github.com
-
birnstiel commented on Mar 17, 2015 Thanks! The export script only returns the bibcodes, not the full entries. Is there a way to query all those bib codes? The ADS 2.0 search seems to support only one bibcode: search.
- QUESTION?
-
-
github.com github.com
-
- HYPOTHESIS ROADMAP
-
-
www.zotero.org www.zotero.org
-
target
- TARGET: for Search translators???
- no sense
-
Search translators: can look up and retrieve item metadata when supplied with a standard identifier, like a PubMed ID (PMID) or DOI.
- OK, IMPORTANT:
- "official" arXiv translator: expects the field { arXiv: }
-
dataMode For import translators, this sets the form in which the input data is presented to the translator. If set to “rdf/xml”, Zotero will parse the input as XML and expose the data through the Zotero.RDF object. If “xml/dom”, Zotero will expose the data through the function Zotero.getXML().
- IMPORTANT to import target: xml
-
browserSupport A string containing one or more of the letters g, c, s, i, representing the connectors that the translator can be run in – Gecko (Firefox), Chrome, Safari, Internet Explorer, respectively. b indicates support for the Bookmarklet (zotero-dev thread) and v indicates support for the translation-server. For more information, see Connectors. Warning: Compatible with Zotero 2.1.9 and later only.
- browserSupport : [2021-12] it seems obsolete (?)
-
translatorType An integer specifying to which type(s) the translator belongs. The value is the sum of the values assigned to each type: import (1), export (2), web (4) and search (8). E.g. the value of translatorType is 2 for an export translator, and 13 for a search/web/import translator, because 13=8+4+1.
- import + export = 1+2=3
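A minimal sketch (helper and names are mine, not the Zotero API) of decomposing a translatorType value into its flags:
var TRANSLATOR_TYPES = { import: 1, export: 2, web: 4, search: 8 };
function decodeTranslatorType(value) {
  // list every type whose bit is set in the translatorType value
  return Object.keys(TRANSLATOR_TYPES).filter(function (k) { return value & TRANSLATOR_TYPES[k]; });
}
decodeTranslatorType(13); // -> ["import", "web", "search"]  (13 = 8 + 4 + 1)
decodeTranslatorType(3);  // -> ["import", "export"]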
-
- MUST READ
- metadata tags
- functions
-
-
www.zotero.org www.zotero.org
-
open a specific profile from the command line with the -p flag (e.g., -p Work),
- OK: shortcut
-
To create an additional profile, start Zotero from the command line and pass the -P flag to open the Profile Manager:
- OK, as in Firefox
- zotero.exe -P
-
-
multicommander.com multicommander.com
-
Download Portable version The Portable version is a preconfigured version of Multi Commander that is configured to store all configuration and settings in the same folder that it is run from. Just unpack the portable version (keep the folder structure) and run MultiCommander.exe. If you already have Multi Commander installed you can create a portable version by selecting "Install Multi Commander to USB Device" in the help menu.
- OK
- TEST: If you already have Multi Commander installed you can create a portable version by selecting "Install Multi Commander to USB Device" in the help menu.
-
-
forums.zotero.org forums.zotero.org
-
acortinois April 16, 2021 Well, it looks like we all need multiple windows but this forum has been active for four years and nothing seems to have changed... :)
- OK
-
-
stackoverflow.com stackoverflow.com
-
although if you are using XMLDOM with JavaScript you can code something like var n1 = uXmlDoc.selectSingleNode("//bookstore/book[1]/title/@lang"); and n1.text will give you the value "eng"
- TEST, value: "selected".text
-
@KorayTugay, No, the first expression selects, doesn't "return" -- a set of nodes, and this set of nodes is not a string. A node is not a string -- a node is a node in a tree. An XML document is a tree of nodes. lang="eng" is just one of many textual representations of an attribute node that has a name "lang", doesn't belong to a namespace, and has a string value the string "eng" – Dimitre Novatchev Oct 22 '14 at
- OK: select, not value
-
-
stackoverflow.com stackoverflow.com
-
//Parent[@id='1']/Children/child/@name will only output the name attribute of the 4 child nodes belonging to the Parent specified by its predicate [@id=1]. You'll then need to change the predicate to [@id=2] to get the set of child nodes for the next Parent. However, if you ignore the Parent node altogether and use: //child/@name you can select name attribute of all child nodes in one go.
- OK, select ALL
-
//Parent[@id='1']/Children/child/@name Your original child[@name] means an element child which has an attribute name. You want child/@name.
- OK: /@name
-
So far I have this XPath string: //Parent[@id='1']/Children/child[@name]
- [@name] does NOT SELECT, it FILTERS!
- see answer "382"
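A hedged browser-side sketch of the same selection using DOMParser and the standard evaluate() XPath API (the sample XML is mine):
var xmlDoc = new DOMParser().parseFromString(
  "<Parent id='1'><Children><child name='a'/><child name='b'/></Children></Parent>", "text/xml");
var result = xmlDoc.evaluate("//child/@name", xmlDoc, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = 0; i < result.snapshotLength; i++) {
  console.log(result.snapshotItem(i).value); // "a", "b": attribute nodes expose their text via .value
}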
-
-
arxiv.org arxiv.org
-
shows the primary classification in a standard way, and is also recommended as the preferred citation format
- CITATION STYLE
-
Extension. The new scheme could be extended to use more than 4 or 5 digits for the sequence number. However, that would correspond to more than 99999 submissions per month, or over 10 times the current submission rate, and is thus not anticipated for many years.
- CODIGO 4-5
-
Old papers in archives where the archive name matches the primary subject classification (e.g. hep-th) do not have the square brackets with primary subject classification
- OK, exception in zotero translator
-
-
www.math.cmu.edu www.math.cmu.edu
-
Many bibliography databases supply a DOI (Digital Object Identifier) or arXiv eprint number with BibTeX entries. However, the standard BiBTeX style files either ignore this information or print it without hyperlinking it correctly. Here’s how you can get working clickable links to both DOI and arXiv eprints.
- standard BibTeX styles don't use them
-
-
-
For the new style arXiv identifiers (April 2007 and onwards) we recommend these bib-style extensions: **archivePrefix = "arXiv"**, **eprint = "0707.3168"**, **primaryClass = "hep-th"**,
- BIBTEX EXTENSIONS!
- for "old style ID":
- **eprint = "hep-ph/9609357"**
-
-
forums.zotero.org forums.zotero.org
-
adamsmith 20 days ago DOI: in extra works for citations. We'll get a preprint item type and DOI added to all item types, but in the meantime that'll just work
- WAITING for this PREPRINT type
-
dmilton 20 days ago The problem with using "Report" type for preprints is that it does not keep the doi -- only if it is imported as a journal article does the doi get saved. Please add doi to the "Report" fields -- I find myself having to change to type "Journal Article" to perserve the doi.
- HERE!
- THIS IS "THE" REASON to use type=journal for arxiv
- Workaround: DOI: in Extra
-
-
forums.zotero.org forums.zotero.org
-
When I said "migrated automatically", I was referring to the item becoming Zotero Preprint items when that type is added in the future.
- OK, UNDERSTOOD AT FIRST!
-
bwiernik June 18, 2021 Zotero will get a Preprint type in an upcoming version. For now, the appropriate way to enter them is as a Report with this at the top of Extra:Type: articleThat will be migrated automatically to Preprint when the type is added.
- HERE!
- upcoming??? WHEN???
-
stared June 18, 2021 This function is useful for numerous reasons. Primarily for listing preprints (e.g. arXiv, biorXiv) and PubMed codes.See "eprint" as officially listed in an arXiv instruction (https://arxiv.org/help/hypertex/bibstyles) and Getting DOI / arXiv links with BibTeX (https://www.math.cmu.edu/~gautam/sj/blog/20171114-bibtex-doi.html), as well as some older approaches e.g. mine for giving link to arXiv for Mendeley exports https://gist.github.com/stared/5473014.Yes, it is a pity that proprietary Mendeley owned by Elsevier provides better support for open archives.
- SEE
-
emilianoeheyns September 27, 2020 Since Better BibLaTeX is already omitting Zotero's Publication field for arXiv preprints, it should probably export as '@misc' or '@online', not '@article'. That is a good point. Could you open an issue for it on BBTs github tracker? Therefore, a BBT postscript should better not depend on Publication being empty or on the presence of an arXiv ID If BBT detected an arXiv entry in any way you, in the postscript item will have an attribute item.arXiv which looks like { id: <arXiv ID>, category: <arXiv category, if found> }
- BBT
- If BBT detected an arXiv entry in any way you, in the postscript item will have an attribute item.arXiv which looks like { id: <arXiv ID>, category: <arXiv category, if found> }
-
adamsmith September 23, 2020 Here's the history on why we're importing arXiv preprints as journal articles:https://github.com/zotero/translators/issues/616If things has changed (e.g. if sites like ADS and INSPIRE are handling this differently now) we can change this on the import end of things, too, but it's really not so clear cut as to say it's wrong.
- SEE
-
emilianoeheyns September 23, 2020 The bibtex entry recommended by arXiv for example, asks for preprint items to be @misc Where do they ask this? If this is how arXiv items should appear generally, I could adjust BBT to change the entry type. Right now, BBT will add the eprint fields if you either: Set the Library Catalog to arXiv or arXiv.org and the Journal name to the arXiv ID Add arXiv: <arXiv ID> to the extra field on a line of its own
- BBT:
- example: journaltitle = {{arXiv}}, shortjournal = {{arXiv}-2005.14432v1 [quant-ph]}, eprinttype = {arxiv}, eprint = {2005.14432v1 [quant-ph]},
-
adamsmith September 23, 2020 There will almost certainly be a Zotero preprint item type added the next time any item types are added. No ETA, but the hope is that this isn't too far off (i.e. months not years)
- now: 2022-01: no field
-
-
forums.zotero.org forums.zotero.org
-
tdegeus November 2, 2021 +100 on this. It would be great though to simply have a field arxivid (that could be potentially activated on request). This would catch the case that the same article is on multiple preprint servers (which could happen I guess). dougwyu November 27, 2021 yes please add a preprint category. biologists use BioRXiv a lot!, and i have to hand-edit every downloaded article.
- many requests, but no response!
-
-
en.wikipedia.org en.wikipedia.org
-
"Twist of Fate" is a song recorded by English-Australian singer Olivia Newton-John for the soundtrack album to the 1983 romantic fantasy comedy film, Two of a Kind. Written by Peter Beckett and Steve Kipner,[2] and produced by David Foster, the song was released as the first single from the album on 21 October 1983, and reached number four in Australia and Canada. It reached its peak position of number five on the US Billboard Hot 100 on January 1984, becoming Newton-John's last top-ten single on the chart to date. Billboard ranked "Twist of Fate" as the 42nd most popular single of 1984.
- HEAR
-
-
en.wikipedia.org en.wikipedia.org
-
As a songwriter and producer, he worked with Olivia Newton-John from 1971 through 1989. He wrote her number-one hit singles: "Have You Never Been Mellow" (1975), "You're the One That I Want" (1978 duet with John Travolta), "Hopelessly Devoted to You" (1978), and "Magic" (1980). He also produced the majority of her recorded material during that time including her number-one albums, If You Love Me, Let Me Know (1974), Have You Never Been Mellow (1975), and Olivia's Greatest Hits Vol. 2 (1982). He was a co-producer of Grease (1978) – the soundtrack for the film Grease.
- PRODUCER
-
-
en.wikipedia.org en.wikipedia.org
-
Producer(s): John Farrar
-
-
"Heart Attack" is a song recorded by English-born Australian singer Olivia Newton-John for her second greatest hits album Olivia's Greatest Hits Vol. 2 (1982). Written by Paul Bliss and Steve Kipner, and produced by John Farrar, the song was the first single released from the album and was nominated for a Grammy Award for Best Female Pop Vocal Performance in 1983.
- HEAR
-
-
en.wikipedia.org en.wikipedia.org
-
Personnel[edit] From the Physical album's liner notes:[22] Olivia Newton-John – lead and backing vocals John Farrar – guitar and backing vocals Steve Lukather – guitar solo David Hungate – bass Bill Cuomo – Prophet 5 Robert Blass – keyboards Carlos Vega – drums and percussion Lenny Castro – percussion Gary Herbig – horns
- Farrar: producer
-
The song's guitar solo was performed by Steve Lukather, best known as a founding member of the American rock band Toto.
- OK
-
-
en.wikipedia.org en.wikipedia.org
-
Episodes
- The Twilight Zone (2019 TV series)
-
-
www.youtube.com www.youtube.com
-
- SEE
-
-
es.wikipedia.org es.wikipedia.org
-
The guitarist and leader of Anthrax, Scott Ian, harshly criticized the fact that Tenacious D received the award, arguing that even though the performance of "The last in line" was good, the band's concept is "satirical and comic"
- HEAR
-
-
-
Solarsoft IDL
-
NOTE: the query part of URL (i.e., after "query?") is restricted to 1000 characters. This effectively limits the number of bibcodes you can specify in one query to about 40. The ADS API webpage mentions a "bigquery" alternative option, but I couldn't get this to work.
- ADS API: bigquery parameter
-
https://api.adsabs.harvard.edu/v1/search/query?bibcode=2015ApJ...799..218Y&fl=title However, it's necessary to specify your ADS key for this to work. With the Unix curl command, the query is: curl -H "Authorization: Bearer [KEY GOES HERE]" \\ "https://api.adsabs.harvard.edu/v1/search/query?bibcode=2015ApJ...799..218Y&fl=title"
- ADS API: needs API key!
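A hedged fetch() equivalent of that curl call; the response shape (response.docs) is an assumption based on the ADS documentation:
var ADS_TOKEN = "KEY GOES HERE";  // your personal ADS API key
fetch("https://api.adsabs.harvard.edu/v1/search/query?bibcode=2015ApJ...799..218Y&fl=title", {
  headers: { Authorization: "Bearer " + ADS_TOKEN }
})
  .then(function (r) { return r.json(); })
  .then(function (data) { console.log(data.response.docs[0]); }); // first matching record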
-
-
guides.temple.edu guides.temple.edu
-
APIs for Scholarly Resources What is an API? API stands for application programming interface. An API is a protocol that allows a user to query a resource and retrieve and download data in a machine-readable format. Researchers sometimes use APIs to download collections of texts, such as scholarly journal articles, so they can perform automated text mining on the corpus they've downloaded. Here is a simple tutorial that explains what an API is. Below are some APIs that are available to researchers. Some are open to the public, while others are available according to the terms of Temple University Libraries' subscriptions. Many require you to create an API key, which is a quick and free process. How do I Use APIs? You can create a simple query in the address bar in a web browser. However, a more complex query generally requires using a programming language. Commonly used languages for querying APIs are Python and R. (R is the language used in the R software.) The examples given in the documentation for the APIs listed below typically do not include sample programming code; they only explain how the data is structured in order to help users write a query. List of APIs for Scholarly Research arXiv Content: metadata and article abstracts for the e-prints hosted on arXiv.org Permissions: no registration required Limitations: no more than 4 requests per second Contact: https://groups.google.com/forum/#!forum/arxiv-api, https://arxiv.org/help/api/index Astrophysics Data System Content: bibliographic data on astronomy and physics publications from SAO/NASA astrophysics databases Permissions: free to register; request a key at https://github.com/adsabs/adsabs-dev-api Limitations: varies Contact: https://groups.google.com/forum/#!forum/adsabs-dev-api, adshelp@cfa.harvard.edu BioMed Central Content: metadata and full-text content for open access journals published in BioMed Central Permissions: free to access, request a key at https://dev.springer.com/signup Limitations: none Contact: info@biomedcentral.com Chronicling America Content: digitized newspapers from 1789-1963, as well as a directory of newspapers published 1960 to the present, with information on library holdings Permissions: no registration required Limitations: none Contact: http://www.loc.gov/rr/askalib/ask-webcomments.html CORE Content: metadata and full-text of over 100 million OA research papers Permissions: free to access for non-commercial purposes, request a key at https://core.ac.uk/api-keys/register Limitations: One batch request or five single requests every 10 seconds. Contact CORE if you need a faster rate. Contact: theteam@core.ac.uk CrossRef Content: metadata records with CrossRef DOIs, over 100 million scholarly works Permissions: no registration required Limitations: guidelines to avoid overloading the servers at https://github.com/CrossRef/rest-api-doc#meta. "We reserve the right to impose rate limits and/or to block clients that are disrupting the public service." Contact: labs@crossref.org Digital Public Library of America Content: metadata on items and collections indexed by the DPLA Permissions: request a free key; instructions here https://pro.dp.la/developers/policies Limitations: none, however, "The DPLA reserves the right to limit or revoke access to the API if, in its discretion, a user engages in abusive conduct, conduct that materially degrades the ability of other users to query the API." 
Contact: codex@dp.la Elsevier Content: multiple APIs for full-text books and journals from ScienceDirect and citation data from Engineering Village and Embase Permissions: free to register; click 'Get API Key" to request a personal key: https://dev.elsevier.com/ Limitations: "Researchers at subscribing academic institutions can text mine subscribed full-text ScienceDirect content via the Elsevier APIs for non-commercial purposes." Usage policies depend on use cases; see list at https://dev.elsevier.com/use_cases.html Contact: integrationsupport@elsevier.com HathiTrust (Bibliographic API) Content: bibliographic and rights information for items in the HathiTrust Digital Library Permissions: no registration required Limitations: may request up to 20 records at once. Not intended for bulk retrieval Contact: feedback@issues.hathitrust.org HathiTrust (Data API) Content: full-text of HathiTrust and Google digitized texts of public domain works Permissions: free to access, request a key at https://babel.hathitrust.org/cgi/kgs/request Limitations: "Please contact [HathiTrust] to determine the suitability of the API for intended uses." Contact: feedback@issues.hathitrust.org IEEE Xplore Content: metadata for articles included in IEEE Xplore Permissions: must be affiliated with an institution that subscribes to IEEE Xplore. Temple is a subscriber. Limitations: maximum 1,000 results per query Contact: onlinesupport@ieee.org JSTOR Content: full-text articles from JSTOR Permissions: free to use, register at https://www.jstor.org/dfr/ Limitations: Not a true API, but allows users to construct a search and then download the results as a dataset for text-mining purposes. Can download up to 25,000 documents. Largest datasets available by special request Contact: https://support.jstor.org/hc/en-us National Library of Medicine Content: 60 separate APIs for accessing various NLM databases, including PubMed Central, ToxNet, and ClinicalTrials.gov. The PubMed API is listed separately below. Permissions: varies Limitations: varies Contact: varies Nature.com OpenSearch Content: bibliographic data for content hosted on Nature.com, including news stories, research articles and citations Permissions: free to access Limitations: varies Contact: interfaces@nature.com OECD Content: a selection of the top used datasets covering data for OECD countries and selected non-member economies. Datasets included appear in the catalogue of OECD databases with API access Permissions: no registration required, see terms and conditions Limitations: max 1,000,000 results per query, max URL length of 1,000 characters. Contact: OECDdotStat@oecd.org PLOS Search API Content: full-text of research articles in PLOS journals Permissions: free to access, register at http://api.plos.org/registration/ Limitations: Max is 7200 requests a day, 300 per hour, 10 per minute. Users should wait 5 seconds for each query to return results. Requests should not return more than 100 rows. High-volume users should contact api@plos.org. API users are limited to no more than five concurrent connections from a single IP address. Contact: api@plos.org PubMed Content: information stored in 38 NCBI databases, including some info from PubMed. Will retrieve a PubMed ID when citation information is input. Permissions: API key required starting May 1, 2018 Limitations: After May 1, 2018, with an API key a site can post up to 10 requests per second by default.
Large jobs should be limited to outside 9-5 weekday hours. Higher rates are available by request (see contact information below) Contact: eutilities@ncbi.nlm.nih.gov Springer Content: full-text of SpringerOpen journal content and BioMed Central, as well as metadata from other Springer resources Permissions: free to access, request a key at https://dev.springer.com/signup Limitations: noncommercial use Contact: tdm@springernature.com World Bank APIs Content: APIs for the following datasets: Indicators (time series data), Projects (data on the World Bank’s operations), and World Bank financial data (World Bank Finances API) Permissions: no registration required Limitations: See Terms & Conditions of Using our Site Contact: data@worldbankgroup.org Acknowledgements We would like to acknowledge API guides created by the Libraries at MIT, Berkeley, Purdue and Drexel that informed our work on this guide.
- GOOD LIST in legible format
-
-
forums.zotero.org forums.zotero.org
-
dstillman October 11, 2018 edited October 11, 2018 I've added support for the former in the latest beta.zotero://select/library/collections/:collectionKeyzotero://select/groups/:groupID/collections/collectionKey
- HOWTO use?
-
-
forums.zotero.org forums.zotero.org
-
emilianoeheyns
- emilianoeheyns
- BBT author
-
adamsmith May 8, 2018 I don't think that's possible for technical reasons at this time because the citekey is generated & stored by the BetterBibTeX extension and not Zotero itself. It's likely going to be possible in the future. bwiernik May 8, 2018 Yes, the BBT developer had to disable citekey searching because it was interfering with other parts of Zotero. You can show the citekey as a column in the center pane and sort on that.
- In 2018, and now? [2022-01]
-
-
forums.zotero.org forums.zotero.org
-
the collections pane supports find-as-you-type, so if you want to switch to a given library or collection you can press Cmd-Shift-L/Ctrl-Shift-L to highlight the collections pane and then start typing the name of the library or collection to select it. (If a library was collapsed and you wanted to get to a collection, you'd need to type the name of the library and then press right-arrow or + to expand collections (depending on how nested the one you were looking for was) and then type the collection name.)
- CTRL + SHIFT + L
- selects as you type
- REQUEST: a Filter PANE! (like Calibre or Qiqqa)
- Filter by: author, Pub, year, tags, etc
-
-
bioregistry.io bioregistry.io
-
Pattern for Local Unique Identifiers Local identifiers in arXiv should match this regular expression:^(\w+(\-\w+)?(\.\w+)?)?\d{4,7}(\.\d+(v\d+)?)?$
- VALID ONLY for "new" format!!!
- not valid for hep-th/9108008v1
-
Pattern for Local Unique Identifiers Local identifiers in arXiv should match this regular expression:^(\w+(\-\w+)?(\.\w+)?)?\d{4,7}(\.\d+(v\d+)?)?$ Example Local Unique Identifier 0807.4956v1 Resolve Pattern for CURIES Compact URIs (CURIEs) constructed from arXiv should match this regular expression:^arxiv:(\w+(\-\w+)?(\.\w+)?)?\d{4,7}(\.\d+(v\d+)?)?$ Example CURIE arxiv:0807.4956v1
- REGEX ARXIV
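A hedged combined pattern (adapted here, not taken from bioregistry) that also accepts the old archive/NNNNNNN identifiers noted above:
var ARXIV_ID = /^(?:\d{4}\.\d{4,5}(?:v\d+)?|[a-z-]+(?:\.[A-Z]{2})?\/\d{7}(?:v\d+)?)$/i;
ARXIV_ID.test("0807.4956v1");      // true  (new format)
ARXIV_ID.test("hep-th/9108008v1"); // true  (old format)
ARXIV_ID.test("not-an-id");        // false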
-
-
github.com github.com
-
regex_arxiv.py author: Matt Bierbaum date: 2019-03-14 RegEx patterns for finding arXiv id citations in fulltext articles.
- see
-
- REGEX ARXIV
-
-
www.zotero.org www.zotero.org
-
identifiers (one on each line). Once you've typed all the identifiers, press Shift+Enter/Return to import all the items at once. You can also paste a list of multiple identifiers (each on a separate line), then press Shift+Enter/Return to finish. Zotero uses the following databases for looking up item metadata: Library of Congress and WorldCat for ISBNs, CrossRef for DOIs, NCBI PubMed for PubMed IDs, and arXiv.org for arXiv IDs.
- ZOTERO: add item (magic icon)
- use Search-type translators?
- YES!: Search translators MUST detect the ID fields: {ISBN, DOI, arXiv, etc}
-
-
stackoverflow.com stackoverflow.com
-
The old versions of JavaScript had no import, include, or require, so many different approaches to this problem have been developed. But since 2015 (ES6), JavaScript has had the ES6 modules standard to import modules in Node.js, which is also supported by most modern browsers. For compatibility with older browsers, build tools like Webpack and Rollup and/or transpilation tools like Babel can be used.
- JAVASCRIPT: IMPORT
-
-
developer.mozilla.org developer.mozilla.org
-
import The import statement is used to import functions that have been exported from an external module. At the moment, this feature is only beginning to be implemented natively in browsers. It is implemented in many transpilers, such as TypeScript and Babel, and in bundlers such as Rollup and Webpack.
- JAVASCRIPT: IMPORT
-
-
medium.com medium.com
-
Just last year Grunt was effectively dethroned by Gulp. And now, just as Gulp and Browserify are finally reaching critical mass, Webpack threatens to unseat them both. Is there truly a compelling reason to change your front-end build process yet again?
- CONTINUOUS CHANGE!
- reinventing the wheel
-
-
stackoverflow.com stackoverflow.com
-
It's possible, but you have to be careful. Trying to require() a package means that node will try to locate its files in your file system. A chrome extension only has access to the files you declare in the manifest, not your filesystem. To get around this, use a module bundler like Webpack, which will generate a single javascript file containing all code for all packages included through require(). You will have to generate a separate module for each component of your chrome extension (e.g. one for the background page, one for content scripts, one for the popup) and declare each generated module in your manifest. To avoid trying to setup your build system to make using require() possible, I suggest starting with a boilerplate project. You can check out my extension to see how I do it.
- REQUIRE
-
-
stackoverflow.com stackoverflow.com
-
If you open console you'll see XMLHttpRequest cannot load file:///.../nav.html. Cross origin requests are only supported for protocol schemes: http, data, chrome, chrome-extension, https. It's about browser politics. It works in Firefox, but not in Chrome. If you want it to work you may run a web server on your local machine to serve the files. More information: XMLHttpRequest cannot load file. Cross origin requests are only supported for HTTP "Cross origin requests are only supported for HTTP." error when loading a local file
- LOCAL FILES from browser
-
-
www.christiandve.com www.christiandve.com
-
How to add custom background images in Microsoft Teams for Windows Just open Windows Explorer and enter this address in the top bar: %AppData%\Microsoft\Teams\Backgrounds\Uploads In the folder that opens, simply add all the JPG images you want and you're done. If you choose the "Show background effects" option explained in the previous point, the image will now appear and can be used. In principle, and from the tests I have done, it accepts almost any file size, but it is best not to use anything excessive (many megabytes). This configuration is not replicated across computers, so it has to be done on each one you use.
- TEAMS, VIDEO BACKGROUNDS
- TRICK
-
-
www.openarchives.org www.openarchives.org
-
3.1.1.1 Encoding an OAI-PMH request in a URL for an HTTP GET URLs for GET requests have keyword arguments appended to the base URL, separated from it by a question mark [?]. For example, the URL of a GetRecord request to a repository with base URL that is http://an.oa.org/OAI-script might be: http://an.oa.org/OAI-script? verb=GetRecord&identifier=oai:arXiv.org:hep-th/9901001&metadataPrefix=oai_dc However, since special characters in URIs must be encoded, the correct form of the above GET request URL is: http://an.oa.org/OAI-script? verb=GetRecord&identifier=oai%3AarXiv.org%3Ahep-th%2F9901001&metadataPrefix=oai_dc
- IMPORTANT: URI encoding
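A short sketch of assembling that GetRecord URL in JS, percent-encoding the identifier with encodeURIComponent (base URL and identifier come from the example above):
var base = "http://an.oa.org/OAI-script";
var id = "oai:arXiv.org:hep-th/9901001";
var url = base + "?verb=GetRecord"
        + "&identifier=" + encodeURIComponent(id)
        + "&metadataPrefix=oai_dc";
// url === "http://an.oa.org/OAI-script?verb=GetRecord&identifier=oai%3AarXiv.org%3Ahep-th%2F9901001&metadataPrefix=oai_dc"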
-
-
-
doGet()/doPost() return raw text, which is useful for anything retrieving BibTeX/RIS/etc. It would be nice to be able to just do var ris = await request(url);.
- ok
-
- INTERESTING
- ZOTERO translators
-
-
www.genivia.com www.genivia.com
-
- gsoap with SSL
-
-lssl -lcrypto
- gsoap
-
-
www.genivia.com www.genivia.com
-
Non-SSL-enabled (that is, not HTTPS capable) versions of the binaries of the wsdl2h and soapcpp2 tools are also included in the gSOAP package in gsoap/bin for Windows and Mac OS platforms. The SSL-enabled and HTTPS-capable wsdl2h tool is only available for download from https://www.genivia.com/downloads.html with a commercial-use license and download key.
- !!!
-
-
stackoverflow.com stackoverflow.com
-
IE will block appending any element created in a different window context from the window context that the element is being appending to. var childWindow = window.open('somepage.html'); //will throw the exception in IE childWindow.document.body.appendChild(document.createElement('div')); //will not throw exception in IE childWindow.document.body.appendChild(childWindow.document.createElement('div'));
- IMPORTANT
-
-
stackoverflow.com stackoverflow.com
-
The actual problem is that when the page is loaded with document.write, IE will always load inline javascript blocks before external javascript files, regardless of which one get defined first. This is described in dynamicdrive.com/forums/blog.php?bt=173 (section II). A good workaround is to put everything as external javascript files. – sayap Jan 5 '12 at 5:05
- IMPORTANT
-
Try something like this (untested): var s=document.createElement('script'); s.type = "text/javascript"; s.src = "test.js"; document.body.appendChild(s); ShareShare a link to this answer Copy linkCC BY-SA 2.5 Follow Follow this answer to receive notifications answered Jul 5 '10 at 5:59 Dagg NabbitDagg Nabbit 70.9k1818 gold badges104104 silver badges139139 bronze badges 1 I modified it a bit: var s = w.document.createElement("script"); s.type = "text/javascript"; s.src = "test.js"; w.document.getElementsByTagName("HEAD")[0].appendChild(s); And it does appear to work properly in IE8 on Windows 7 (as well as other browsers). I still think IE has a bug that my original code doesn't work, but this should work as a work-around. – Jennifer Grucza Jul 6 '10 at 22:15
- IT WORKED!
- var w = window; var s = w.document.createElement("script"); s.type = "text/javascript"; s.src = "./file_to_include.js"; w.document.getElementsByTagName("HEAD")[0].appendChild(s);
-
-
getpolarized.io getpolarized.io
-
Where is my data kept? Your data is encrypted and kept securely in the cloud. This means that all your files will be stored completely securely, no one will have access to them. Your data is stored with privacy first in mind.
- BAD IDEA!
-
Is Polar free? Yes, Polar is free for anyone to use. For certain features that boost learning efficiency even more, such as automatic flashcards using AI, there are premium plans.
- FREEMIUM!
-
-
news.ycombinator.com news.ycombinator.com
-
guitarbill on Jan 23, 2019 | parent | next [–] Ironically, GDPR is what Mendeley used to justify the encryption [0]. Obviously, complete rubbish. It would be interesting to see what happens if people start asking for their data in a portable format though.
.
-
jjoonathan on Jan 23, 2019 | prev | next [–] Mendeley also snitches evidence of your SciHub habit to Elsevier.
.
-
minosniu on Jan 28, 2019 | parent | prev | next [–] Second this. I use zotfile to relocate all my PDFs into a single folder, which is Dropbox-synced. This works like a breeze for 2000+ and mounting papers.
- SEE
-
mwexler on Jan 23, 2019 | prev | next [–] For those wondering, here's what I gathered as some context.Zotero = Your personal research assistant. Zotero is a free, easy-to-use tool to help you collect, organize, cite, and share research. https://www.zotero.org/Mendeley = Reference Management Software, produced by Elsevier who also happens to be the publisher of many peer-reviewed journals. Elsevier come under fire for it's high costs and gateway actions to restrict access to information they've published in journals and host in archives. This most recent action of making the database of references in Mendeley difficult to export is a continuation of their attempt to protect what they, and some legal systems, would see as their IP. Others disagree.The battle continues...
.
-
walrus01 on Jan 23, 2019 | parent | prev | next [–] Elsevier is to science as Oracle is to database software licensing.
.
-
dstillman on Jan 23, 2019 | root | parent | prev | next [–] > The lack of developers and thus slow pace of improvement [...] we are reliant on one or two volunteers to improve the productI'm not sure why you have that impression. Zotero has amazing, invaluable volunteers, but there's a paid, full-time dev team working on Zotero every day. In the last year, we've added:- Google Docs integration [1]- Unpaywall integration [2]- A new, greatly improved PDF recognition system [3]- Faster citing in large documents [3]- A much more powerful saving interface [4]- Mendeley import...- ZoteroBib, a free web service for generating bibliographies [5]- A barcode scanner for iOS [6]- Regular updates and bug fixes [7][1] https://www.zotero.org/blog/google-docs-integration/[2] https://www.zotero.org/blog/improved-pdf-retrieval-with-unpa...[3] https://www.zotero.org/blog/zotero-5-0-36/[4] https://twitter.com/zotero/status/991052142717886464[5] https://www.zotero.org/blog/introducing-zoterobib/[6] https://www.zotero.org/blog/scan-books-into-zotero-from-your...[7] https://www.zotero.org/support/changelog(Disclosure: Zotero developer)
@dstillman
-
natechols on Jan 23, 2019 | parent | prev | next [–] > ELSEVIER IS SIMPLY ANTI SCIENCE.+1. I worked in publicly-funded research labs for 15 years and there is no single organization I despise as much as Elsevier - only Springer-NPG comes close
.
-
zwaps on Jan 23, 2019 | next [–] Zotero has improved a lot, while Mendeley has repeatedly regressed.
.
-
-
news.ycombinator.com news.ycombinator.com
-
- SEE
-
-
www.qiqqa.com www.qiqqa.com
-
1 Zotero comparison is with Zotero 2.1
- careful!
- vs Zotero 5 (?)
-
-
forums.zotero.org forums.zotero.org
-
Qiqqa already has the option to overwrite the info in Qiqqa with any metadata that is in the import file. And it avoids duplicate PDF records.
- OK
-
I would be more than happy to add an importer into Qiqqa from Zotero. I guess the simplest route could be to augment the zotero bibtex output format to include any attached files (much like Mendeley does in the file={} field).
- I'll do a custom export translator
-
jamesjardine July 11, 2010 Hiya,It's Jimme here. I am the guy building Qiqqa.
- hello James!
-
- SEE
-
-
retorque.re retorque.re
-
Automatic export To export a library, group or collection, right-click on it in the left Zotero pane and choose “Export Library…” or “Export Collection…”. With BBT’s export translators (e.g., “Better BibTeX”), checking the Keep updated option will register the export for automation. After you’ve completed the current export, any changes to the collection or library will trigger an automatic re-export to update the file. You can review/remove exports from the BBT preferences. While I’ve gone to some lengths to make sure performance is OK, don’t go overboard with the number of auto-exports you have going. Also, exporting only targeted selections over your whole library will get you better performance. You can set up separate exports for separate papers for example if you have set up a collection for each. Managing auto-exports After you’ve set up an auto-export using an Keep updated export, you can manage your auto-exports in the BBT preferences under the Automatic exports tab. There, you can remove auto-export, change settings on them, or remove them. You cannot add new auto-exports from here, that can only be done by initiating an export.
- TIP: use with Qiqqa, keep qiqqa.bib updated from zotero
-
-
forums.zotero.org forums.zotero.org
-
dstillman April 22, 2020 Yes, CSL-JSON is specifically for citations and doesn't know anything about attachments. emilianoeheyns April 23, 2020 BBT JSON does know about attachments. It's mostly just a dump of the items as they're handed to translators with cleanup. I use it in my test framework, but when BBT is installed it's available as any other import/export format.
- SEE
-
-
forums.zotero.org forums.zotero.org
-
fcheslack January 9, 2012 Try now (you may have to force your browser to refresh without the cache with ctrl-shift-R)
- TIP:
- TEST
-
-
forums.zotero.org forums.zotero.org
-
jwright8 August 19, 2018 adamsmith, I added Zutilo to Zotero stand alone (I am not using Firefox). I was able to find the location of my attachments by right clicking on items > Zutilo > Show Attachment Paths. Now I need to do the batch change or follow dstillman's instructions: "set up a Linked Attachment Base Directory in the Advanced → Files and Folders pane of the Zotero preferences, move the directory, and then update the base directory."
- TEST
-
Have a look at the Zutilo add-on, which can show you the current path Zotero is looking and also batch-change those paths.https://github.com/willsALMANJ/Zutilo
- SEE: Zutilo
-
-
forums.zotero.org forums.zotero.org
-
It's funny: I'm always teaching new electronic research tools, I've turned my students on to Zotero but beyond a very short paper, they (and I) still return to index cards!! This would be a killer app.
- SEE: obsidian
-
-
forums.zotero.org forums.zotero.org
-
adamsmith May 19, 2013 So, the argument for the status quo is that the working paper on arxiv is a separate publication from the journal article it ends up published as. That's why it should be saved and - where it applies - cited differently. In other words, taking bibliographic data seriously, the DOI does _not_ apply to the arxiv paper and should not be saved with it. That's in line with what we do with other working paper repositories such as SSRN.
- I THINK SO
- DIFFERENT Zotero items for:
  - the arXiv preprint (a different item for each version!!!)
  - the publisher's DOI
- each item with its own PDF!!!
- DIFFERENT citations!!!
-
aurimas May 19, 2013 Looks like that was an intentional decision to put arXiv identifier in the publication field
- BAD IDEA!
- I have changed "my" arXiv translator:
  - Publication: arXiv (generic, to sort by Publication)
    Abbrev.: arXiv-id [class]
  - using ZotFile to rename the PDF, using the abbreviation
-
-
github.com github.com
-
karnesky commented on Sep 1, 2013 A few things to note here: arXiv does have preprints, but a lot of these are linked to journal articles & some people use it as a reprint server. If an arXiv record has a DOI, I would suggest (strongly) that it should be typed as a journal article. We may even just use the ADS link, which seems to have a great BibTeX-formatted record for most eprints (though I'm torn on doing that). NASA ADS and most others classify arXiv eprints as journal articles anyway. Zotero will import any of those as journal articles, so there might be a case to import all arXiv eprints as if they were journal articles
- ok
Tags
Annotators
URL
-
-
forums.zotero.org forums.zotero.org
-
If/when we have a proper field for arXiv IDs in Zotero (which I believe has a very good chance of happening), we can handle this a lot more elegantly, of course, both on import and on export.
- NOW: 2022-01-02, and counting...
-
adamsmith November 17, 2013 The problem is that Zotero isn't just a bib(la)tex front-end and those field mappings don't make a huge amount of sense in Zotero - Archive and Loc. in Archive are at least somewhat plausible (Call number makes no sense),
- ?
-
However these fields are not filled automatically by zotero when importing from arxiv, instead a Report is created with all three fields (Archive, Loc. in Archive, Call number) empty
- OK: it doesn't work, because:
  - the arXiv.js import translator doesn't fill these fields
  - the ADS.js import translator doesn't either
-
Further, by trial and error I found that setting the Journal article fields as follows: Archive: arxiv Loc. in Archive: 1234.1231 Call number: hep-ph results in zotero exporting a biblatex file containing: eprinttype = {arxiv}, eprint = {1234.1231}, eprintclass = {hep-ph},
- TIP: SEE the BibLaTeX.js translator:
  if (item.archive == "arXiv" || item.archive == "arxiv") {
      writeField("eprinttype", "arxiv");
      writeField("eprint", item.archiveLocation);
      if (item.callNumber) { // assume call number is used for arxiv class
          writeField("eprintclass", item.callNumber);
      }
  }
-
According to the biblatex manual ftp://bay.uchicago.edu/CTAN/macros/latex/exptl/biblatex/doc/biblatex.pdf section 3.11.7, arxivprefix is an alias for eprinttype and primaryclass is an alias for eprintclass.
- BIBLATEX: extended fields
- they are aliases
-
I don't see how we can support the bibtex extension fields (archivePrefix and primaryClass)
- ZOTERO devs: won't add custom fields
-
I see that the field "Loc. in Arxive" is already exported as "eprint" when using BibLatex
- IMPORTANT: BibLaTeX
- in Zotero there is only an EXPORT translator for BibLaTeX, not an import one!
- use the BibTeX translator instead: type=3 = import+export
-
uses in general the fields archivePrefix, eprint and primaryClass
- ATTENTION: these fields are "exported" by ADS:
- https://ui.adsabs.harvard.edu/v1/export/bibtexabs/{Bibcode} [GET] with Bearer
- SEE: https://ui.adsabs.harvard.edu/help/api/api-docs.html#post-/export/bibtexabs
-
The recommended way to add arxiv information to bibtex items is giving here http://arxiv.org/hypertex/bibstyles/
- URL NOT WORKING
-
-
-
Zotero.configure() and Zotero.displayOptions() replaced by configOptions and displayOptions Zotero.configure() and Zotero.displayOptions() no longer exist. Instead, translators should specify config and display options in the metadata block at the top of the translator, e.g. { "translatorID":"32d59d2d-b65a-4da4-b0a3-bdd3cfb979e7", [...] "configOptions":{"dataMode":"rdf/xml"}, "displayOptions":{"exportNotes":true}, "lastUpdated":"2011-01-11 04:31:00" } "dataMode":"block" and "dataMode":"line" are deprecated It is no longer necessary to specify “dataMode”:“block” or “dataMode”:“line”. If Zotero.read() is passed a numeric value, it reads a specified number of bytes; otherwise, it reads a full line.
- IMPORTANT: If Zotero.read() is passed a numeric value, it reads a specified number of bytes; otherwise, it reads a full line.
- read() == one line
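- a minimal import-translator sketch of this line-reading behavior (the "id<TAB>title" record format and the field mapping are invented for illustration):
    function doImport() {
        var line;
        // with no argument, Zotero.read() returns the next full line,
        // or false when the input is exhausted
        while ((line = Zotero.read()) !== false) {
            var parts = line.split("\t");   // hypothetical "id<TAB>title" records
            if (parts.length < 2) continue;
            var item = new Zotero.Item("journalArticle");
            item.title = parts[1];
            item.extra = "Record ID: " + parts[0];
            item.complete();
        }
    }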
-
-
www.zotero.org www.zotero.org
-
translator.setSearch(item) For search translators. Sets the skeleton item object the translator will use for its search. translator.setString(string) For import translators. Sets the string that the translator will import from. translator.setDocument(document) For web translators. Sets the document that the translator will use.
- SEE
- functions to pass PARAMETERS to the reused translator type
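- a minimal side-by-side sketch of the three setters (risText, doc and the DOI are placeholder values):
    var imp = Zotero.loadTranslator("import");
    imp.setString(risText);                        // import translators consume a string

    var web = Zotero.loadTranslator("web");
    web.setDocument(doc);                          // web translators consume a DOM document

    var search = Zotero.loadTranslator("search");
    search.setSearch({ DOI: "10.1000/example" });  // search translators consume a skeleton item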
-
record.leader = "leader goes here"; record.addField(code, indicator, content);
- DON'T UNDERSTAND
-
Calling a translator using ''getTranslators'' This code, based on the “COinS.js” code, calls getTranslators() to identify which search translators can make a complete item out of the basic template information already present. Note that translate() is called from within the event handler. Analogous logic could be used to get the right import translator for incoming metadata in an unknown format. var search = Zotero.loadTranslator("search"); search.setHandler("translators", function(obj, translators) { search.setTranslator(translators); search.translate(); }); search.setSearch(item); // look for translators for given item search.getTranslators();
- generic: depends on each translator's detectSearch
- BUT: if SEVERAL translators are detected, WHICH one does translate() use? (see the sketch below)
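- one possible answer, as a sketch: pick a detected translator explicitly before calling translate(); forcing translators[0] here is only an illustration (the COinS.js code above passes the whole array):
    var search = Zotero.loadTranslator("search");
    search.setHandler("translators", function (obj, translators) {
        if (!translators || !translators.length) return;
        search.setTranslator(translators[0]);   // force the first detected translator
        search.setHandler("itemDone", function (obj, item) {
            item.complete();
        });
        search.translate();
    });
    search.setSearch(item);   // the skeleton item to search for
    search.getTranslators();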
-
Calling a translator by UUID This is the most common way to use another translator– simply specify the translator type and the UUID of the desired translator. In this case, the RIS translator is being called. var translator = Zotero.loadTranslator("import"); translator.setTranslator("32d59d2d-b65a-4da4-b0a3-bdd3cfb979e7"); translator.setString(text); translator.translate();
- specific, by UUID (see the sketch below)
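- a common variation, sketched here: add an itemDone handler to post-process each item the RIS translator produces before saving it (the field tweak and the snapshot are illustrative):
    var translator = Zotero.loadTranslator("import");
    translator.setTranslator("32d59d2d-b65a-4da4-b0a3-bdd3cfb979e7");   // RIS
    translator.setString(text);
    translator.setHandler("itemDone", function (obj, item) {
        item.libraryCatalog = "Example Catalog";                  // hypothetical cleanup
        item.attachments.push({ title: "Snapshot", document: doc });
        item.complete();
    });
    translator.translate();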
-
Calling other translators Web translators can call other translators to parse metadata provided in a standard format with the help of existing import translators, or to augment incomplete data with the help of search translators. There are several ways of invoking other translators.
- VERY IMPORTANT
-
Batch Saving You will often need to make additional requests to fetch all the metadata needed, either to make multiple items, or to get additional information on a single item. The most common and reliable way to make such requests is with the utility functions Zotero.Utilities.doGet, Zotero.Utilities.doPost, and Zotero.Utilities.processDocuments. Zotero.Utilities.doGet(url, callback, onDone, charset) sends a GET request to the specified URL or to each in an array of URLs, and then calls function callback with three arguments: response string, response object, and the URL. This function is frequently used to fetch standard representations of items in formats like RIS and BibTeX. The function onDone is called when the input URLs have all been processed. The optional charset argument forces the response to be interpreted in the specified character set. Zotero.Utilities.doPost(url, postdata, callback, headers, charset) sends a POST request to the specified URL (not an array), with the POST string defined in postdata and headers set as defined in headers associative array (optional), and then calls function callback with two arguments: response string, and the response object. The optional charset argument forces the response to be interpreted in the specified character set. Zotero.Utilities.processDocuments(url, callback, onDone, charset) sends a GET request to the specified URL or to each in an array of URLs, and then calls the function callback with a single argument, the DOM document object. Note: The response objects passed to the callbacks above are described in detail in the MDC Documentation. Zotero.Utilities.processAsync(sets, callbacks, onDone) can be used from translators to make it easier to correctly chain sets of asynchronous callbacks, since many translators that require multiple callbacks do it incorrectly [text from commit message, r4262]
- SEE: call "chain"
- doGet() and processDocuments() accept an array of URLs (see the sketch below)
  doPost() only accepts a single URL
  processAsync(): NOT DOCUMENTED!
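- a sketch of the array form of ZU.doGet (the URLs and the parsing step are placeholders):
    var urls = [
        "https://www.example.org/record/1?format=ris",
        "https://www.example.org/record/2?format=ris"
    ];
    ZU.doGet(urls, function (text, xhr, url) {
        // called once per URL with the response body
        parseRIS(text);   // hypothetical helper that feeds the text to the RIS translator
    }, function () {
        Z.debug("all requests finished");   // onDone
    });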
-
for each item, an item ID and label should be stored in the object as a property/value pair.
- EXAMPLE (arXiv):
- items[row.id] = row.title;
- where row: {
title: title, id: id }
-
Passing the object to the Zotero.selectItems function will trigger the selection window, and the function passed as the second argument will receive an object with the selected items, as in this example: Zotero.selectItems(getSearchResults(doc, false), function (items) { if (!items) return; ZU.processDocuments(Object.keys(items), scrape); }); Here, Zotero.selectItems(..) is called with an anonymous function as the callback. As in many translators, the selected items are simply loaded into an array and passed off to a processing function that makes requests for each of them.
- SEE TO UNDERSTAND
-
Saving Multiple Items Some webpages, such as those showing search results or the index of a journal issue, list multiple items. For these pages, web translators can be written to a) allow the user to select one or more items and b) batch save the selected items to the user's Zotero library. Item Selection To present the user with a selection window that shows all the items that have been found on the webpage, a JavaScript object should be created. Then, for each item, an item ID and label should be stored in the object as a property/value pair. The item ID is used internally by the translator, and can be a URL, DOI, or any other identifier, whereas the label is shown to the user (this will usually be the item's title).
- Item selection:
- each item: only ID + title; it does not allow anything more, e.g. the item "type"
-
Notes Notes are saved similarly to attachments. The content of the note, which should consist of a string, should be stored in the note property of the item's notes property. E.g.: let bbCite = "Bluebook citation: " + bbCite + "."; newItem.notes.push({ note: bbCite });
- WARNING: notes are stored as HTML code
- use ZU.text2html
- use "<p>" + text + "</p>", to avoid losing the \n (as shown below)
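- a sketch of both options (the citation text is a placeholder):
    var noteText = "Bluebook citation: Something v. Something Else, 1 U.S. 1 (1800).";
    // option 1: let Zotero convert plain text (with line breaks) into HTML
    newItem.notes.push({ note: ZU.text2html(noteText) });
    // option 2: wrap it yourself so the paragraph break is not lost
    newItem.notes.push({ note: "<p>" + noteText + "</p>" });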
-
Zotero will automatically use proxied versions of attachment URLs returned from translators when the original page was proxied, which allows translators to construct and return attachment URLs without needing to know whether proxying is in use. However, some sites expect unproxied PDF URLs at all times, causing PDF downloads to potentially fail if requested via a proxy. If a PDF URL is extracted directly from the page, it's already a functioning link that's proxied or not as appropriate, and a translator should include proxy: false in the attachment metadata to indicate that further proxying should not be performed:
- SEE
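- a sketch of the proxy: false pattern, assuming the PDF link is scraped directly from the page (the selector is hypothetical):
    var pdfLink = doc.querySelector("a.pdf-download");
    if (pdfLink) {
        newItem.attachments.push({
            title: "Full Text PDF",
            mimeType: "application/pdf",
            url: pdfLink.href,
            proxy: false   // the link is already proxied (or not) as appropriate; don't proxy it again
        });
    }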
-
In the very common case of saving the current page as an attachment, set document to the current document, so that Zotero doesn't have to make an additional request: newItem.attachments.push({ title: "Snapshot", document: doc });
- TIP: Snapshots, save without call URL again
Tags
Annotators
URL
-
-
www.mediawiki.org www.mediawiki.org
-
scrape[edit] The scrape function is called to save a single item. It is the most interesting function to code in a translator. We first create a new item as returned by detectWeb and then store the metadata in the relevant fields of that item. Along with the metadata, attachments can be saved for an item. These attachments become available even when one is offline. In the function shown below, we make use of another translator called Embedded Metadata. We load this translator and it scrapes information from the meta tags of the web page, filling fields and reducing our work. We can always insert and update information of fields on top of what Embedded Metadata provided.function scrape(doc, url) { var translator = Zotero.loadTranslator('web'); // Embedded Metadata translator.setTranslator('951c027d-74ac-47d4-a107-9c3069ab7b48'); translator.setDocument(doc); translator.setHandler('itemDone', function (obj, item) { // Add data for fields that are not covered by Embedded Metadata item.section = "News"; // Add custom fields if required trans.addCustomFields({ 'twitter:description': 'abstractNote' }); item.complete(); }); translator.getTranslatorObject(function(trans) { // Adjust for multiple item types trans.itemType = "newspaperArticle"; trans.doWeb(doc, url); }); }
- SEE: DON'T UNDERSTAND! (see the commented sketch below)
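- note: in the pasted snippet, trans.addCustomFields(...) appears inside the itemDone handler, where trans is not defined; it presumably belongs in the getTranslatorObject callback. A commented re-sketch of the same flow (the field values are illustrative):
    function scrape(doc, url) {
        // 1. load the Embedded Metadata (EM) translator by its UUID
        var translator = Zotero.loadTranslator("web");
        translator.setTranslator("951c027d-74ac-47d4-a107-9c3069ab7b48");
        translator.setDocument(doc);

        // 2. itemDone fires for every item EM builds, before it is saved:
        //    this is where you add or override fields
        translator.setHandler("itemDone", function (obj, item) {
            item.section = "News";
            item.complete();
        });

        // 3. getTranslatorObject exposes EM's internals, so you can configure it
        //    (item type, extra meta-tag mappings) and then run it
        translator.getTranslatorObject(function (trans) {
            trans.itemType = "newspaperArticle";
            trans.addCustomFields({ "twitter:description": "abstractNote" });
            trans.doWeb(doc, url);
        });
    }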
-
Zotero.selectItems(getSearchResults(doc, false), function (items) { if (!items) { return true; } var articles = []; for (var i in items) { articles.push(i); } ZU.processDocuments(articles, scrape);
- COMPARE with zotero.doc: link-annotation
- code: annotation
- avoids the intermediate step by passing the keys directly (they MUST be URLs): ZU.processDocuments(Object.keys(items), scrape);
-
Generate test cases[edit] Once the code of a translator is prepared, it is recommended to create test cases. These test cases are run daily and help the community to figure out if a translator fails in future and needs any update or complete rewriting. We will generate test cases for MediaWiki translator through Scaffold. Open mediawiki in a new tab. Launch Scaffold and open the translator we have created. Open the "Testing" tab of Scaffold. We need to give a web page as input. For example, open citoid's page. Keeping this web page as the active tab, simply click on the "New Web" button. It will load the web page in the Input pane as a new unsaved test. Select the input entry and click the save button to have the output of test be saved as JSON data. Similarly lets create a test case for a search page. Open this link in a new tab as the active one and then click on "New Web". Once it is loaded, save it. You can see the saved test cases in the "Test" tab of Scaffold. For this search page, you can notice a JSON object as follows.var testCases = [ { "type": "web", "url": "https://www.mediawiki.org/w/index.php?search=Zotero+&title=Special:Search&go=Go&searchToken=2pwkmi9qkwlogcnknozyzpco1", "items": "multiple" } ]
- SEE
-
-
www.zotero.org www.zotero.org
-
Generate testCases (with Scaffold).
- link
- SEE annotation
-
An overview of the currently installed translators, giving the option of running their tests, can be accessed by entering the following into the address bar: chrome://zotero/content/tools/testTranslators/testTranslators.html
- URL not found!
-
In most cases, it is not necessary or desirable to write these tests by hand– they can and should be generated by the testing framework using Scaffold; see below.
- HOWTO?
Tags
Annotators
URL
-
-
www.zotero.org www.zotero.org
-
htmlSpecialChars Function description https://www.zotero.org/trac/browser/extension/branches/1.0/chrome/content/zotero/xpcom/utilities.js#L153 Zotero.Utilities.prototype.htmlSpecialChars = function(str) @type String Escapes several predefined characters: & (ampersand) becomes &amp; " (double quote) becomes &quot; ' (single quote) becomes &#39; < (less than) becomes &lt; > (greater than) becomes &gt; and <ZOTEROBREAK/> becomes <br/> <ZOTEROHELLIP> becomes …
- HERE!
- UTIL: htmlSpecialChars
-
parseMarkup Function description https://www.zotero.org/trac/browser/extension/branches/1.0/chrome/content/zotero/xpcom/utilities.js#L206 Zotero.Utilities.prototype.parseMarkup = function(str) @return {Array} An array of objects with the following form: { type: 'text'|'link', text: "text content", [ attributes: { key1: val [ , key2: val, …] } } Parses a text string for HTML/XUL markup and returns an array of parts. Currently only finds HTML links (<a> tags)
- UTIL: parseMarkup
Tags
Annotators
URL
-
-
www.zotero.org www.zotero.org
-
Zotero.Translate/translators Added Zotero.Utilities.processAsync(sets, callbacks, onDone) – this can be used to make it easier to correctly chain sets of asynchronous callbacks
- HOWTO? processAsync
-
-
www.zotero.org www.zotero.org
-
The functions Zotero.Utilities.HTTP.doGet and Zotero.Utilities.processDocuments run asynchronously. In order to allow them to complete before moving along, follow either function with Zotero.wait() and include Zotero.done() as the onDone in order to signal the translator that it has completed its asynchronous operations.
- IMPORTANT:
- BUT: wait() and done() EXAMPLES??? (see the legacy sketch below)
- SEE: DEPRECATED annotation
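- a sketch of that legacy pattern (deprecated since Zotero 3.0, see the annotation further down; shown only to make the old docs readable):
    function doWeb(doc, url) {
        Zotero.Utilities.HTTP.doGet(url, function (text) {
            // ... parse `text` and create items here ...
        }, function () {
            Zotero.done();   // onDone: signal that the async work has finished
        });
        Zotero.wait();       // block translation until Zotero.done() is called
    }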
-
Don't forget to turn off debug logging
- IDEA: guard debug output with a flag: if (DBG) Z.debug("log");
  - let DBG;            // leave undefined for release
  - let DBG = "yes";    // enable while developing
-
use the debug log accessible through the Zotero preferences pane, under “advanced”.
- ok
Tags
Annotators
URL
-
-
www.zotero.org www.zotero.org
-
Zotero.wait DEPRECATED since 3.0
- SEE annotation
Tags
Annotators
URL
-
-
github.com github.com
-
zuphilip commented on Feb 2, 2018 • edited Here is a list of the most common used functions from Zotero.Utilities and Zotero: grep -E -o -h "(ZU|Zotero\.Utilities)\.[^\(]+\(" *.js | sort | uniq -c | sort -nr $ grep -E -o -h "(ZU|Zotero\.Utilities)\.[^\(]+\(" *.js | sort | uniq -c | sort -nr 1533 ZU.xpathText( 831 ZU.xpath( 382 ZU.trimInternal( 272 ZU.processDocuments( 201 ZU.cleanAuthor( 190 Zotero.Utilities.cleanAuthor( 155 Zotero.Utilities.processDocuments( 124 Zotero.Utilities.trimInternal( 98 ZU.doGet( 84 Zotero.Utilities.capitalizeTitle( 81 Zotero.Utilities.trim( 79 ZU.capitalizeTitle( 70 Zotero.Utilities.unescapeHTML( 53 Zotero.Utilities.HTTP.doGet( 47 ZU.strToISO( 30 ZU.unescapeHTML( 27 ZU.cleanISBN( 26 ZU.doPost( 24 ZU.trim( 24 ZU.cleanDOI( 24 Zotero.Utilities.superCleanString( 20 ZU.fieldIsValidForType( 17 Zotero.Utilities.cleanTags( 16 Zotero.Utilities.getItemArray( 14 ZU.cleanISSN( 14 Zotero.Utilities.doGet( 13 ZU.cleanTags( 13 Zotero.Utilities.xpathText( 13 Zotero.Utilities.xpath( 11 Zotero.Utilities.HTTP.doPost( 9 ZU.strToDate( 8 ZU.removeDiacritics( 7 ZU.lpad( 7 ZU.getItemArray( 6 ZU.HTTP.doGet( 6 Zotero.Utilities.strToDate( 5 ZU.superCleanString( 5 Zotero.Utilities.parseContextObject( 4 ZU.formatDate( 4 Zotero.Utilities.getVersion( 4 Zotero.Utilities.getCreatorsForType( 3 ZU.XRegExp( 3 ZU.parseContextObject( 3 ZU.itemTypeExists( 3 ZU.deepCopy( 2 ZU.XRegExp.replace( 2 ZU.quotemeta( 2 ZU.arrayDiff( 2 Zotero.Utilities.text2html( 2 Zotero.Utilities.strToISO( 2 Zotero.Utilities.lpad( 2 Zotero.Utilities.loadDocument( 2 Zotero.Utilities.itemTypeExists( 2 Zotero.Utilities.getLocalizedCreatorType( 2 Zotero.Utilities.createContextObject( 1 ZU.xpathText ( 1 ZU.itemToCSLJSON( 1 ZU.itemFromCSLJSON( 1 ZU.isEmpty( 1 ZU.getCreatorsForType( 1 ZU.doHead( 1 ZU.arrayUnique( 1 Zotero.Utilities.htmlSpecialChars( 1 Zotero.Utilities.getPageRange( 1 Zotero.Utilities.formatDate( 1 Zotero.Utilities.doPost( 1 Zotero.Utilities.composeDoc( grep -E -o -h "(Z|Zotero)\.[^\(\.]+\(" *.js | sort | uniq -c | sort -nr $ grep -E -o -h "(Z|Zotero)\.[^\(\.]+\(" *.js | sort | uniq -c | sort -nr 517 Z.debug( 426 Zotero.selectItems( 371 Zotero.debug( 351 Zotero.Item( 315 Zotero.loadTranslator( 117 Zotero.done( 99 Zotero.wait( 43 Zotero.write( 39 Zotero.read( 31 Z.selectItems( 21 Zotero.getOption( 17 Zotero.nextItem( 15 Z.monitorDOMChanges( 14 Z.Item( 13 Z.getHiddenPref( 7 Zotero.getXML( 5 Zotero.setCharacterSet( 5 Zotero.monitorDOMChanges( 5 Zotero.Collection( 3 Zotero.setProgress( 3 Zotero.nextCollection( 3 Zotero.getHiddenPref( 3 Z.loadTranslator( 2 Zotero.addOption( 2 Z.done( 1 Zotero.doGaleWeb( 1 Zotero.detectGaleWeb( 1 Z.write( 1 Z.wait( 1 Z.setProgress( 1 Z.read( 1 Z.nextItem( 1 Z.getXML( 1 Z.debug ( 1 Z._]*/( 1 Z. ( Thus, maybe we can concentrate on this and divide them into some groups, e.g. CSS path: text, attr xpath: ZU.xpath, ZU.xpathText call websites: ZU.doGet, ZU.processDocuments, ZU.doPost special cleaning functions: ZU.cleanAuthor, ZU.strToISO, ZU.cleanISBN, ZU.cleanDOI, ZU.cleanISSN clean text strings: ZU.trimInternal, ZU.capitalizeTitle, ZU.unescapeHTML, ZU.superCleanString, ZU.cleanTags, ZU.removeDiacritics debug: Z.debug specialized functions: Z.monitorDOMChanges, Z.selectItems, Z.loadTranslator, ...
- VERY IMPORTANT: zotero "public" functions
Tags
Annotators
URL
-
-
forums.zotero.org forums.zotero.org
-
adamsmith September 8, 2017 OK -- your translators are coffeescript too, right? Or do you have a code snipet I could just copy over? emilianoeheyns September 8, 2017 They are at this moment; after the port I'm moving everything to ES6. Anyhow, BBT has externalised the bibtex parser to https://github.com/fiduswriter/biblatex-csl-converter (most of the work having been done by Johannes) for my import parsing, and it takes care of this in the parse phase. It's not as simple as copying a piece of the code, unfortunately.For the stock Zotero translators, wouldn't something like this do the trick? text.replace(/\n+/, function(match) { return match.length == 1 ? " " : "\n\n"; }) (or, if you want HTML) text.replace(/\n+/, function(match) { return match.length == 1 ? " " : "<p>"; }) (which is cheating because it's not valid HTML, but most HTML parsers will deal fairly well. <br><br> is less-cheaty)
- TEST
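- a sketch of the forum suggestion; note the snippet as quoted omits the g flag, so only the first run of newlines would be replaced:
    // single newline -> space, blank line(s) -> paragraph break
    var normalized = text.replace(/\n+/g, function (match) {
        return match.length == 1 ? " " : "\n\n";
    });
    // or, producing HTML paragraphs instead
    var html = text.replace(/\n+/g, function (match) {
        return match.length == 1 ? " " : "<p>";
    });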
-
- Dec 2021
-
www.eventdata.crossref.org www.eventdata.crossref.org
-
In this example, Bigipedia informs us that the DOI is referenced by the article page. Note that because the subject is not a DOI, the metadata must be supplied in the subj key.
$ curl "https://bus.eventdata.crossref.org/events" \
  --verbose \
  -H "Content-Type: application/json" \
  -H "Authorization: Token token=591df7a9-5b32-4f1a-b23c-d54c19adf3fe" \
  -X POST \
  --data '{"id": "dbba925e-b47c-4732-a27b-0063040c079d", "source_token": "b1bba157-ab5b-4cb8-9ac8-4beb2d6405ff", "subj_id": "http://bigipedia.com/pages/Chianto", "obj_id": "https://doi.org/10.3403/30164641u", "relation_type_id": "references", "source_id": "bigipedia", "license": "https://creativecommons.org/publicdomain/zero/1.0/", "subj": {"title": "Chianto", "issued": "2016-01-02", "URL": "http://bigipedia.com/pages/Chianto"}}'
- the SUBJECT is a page, not a DOI
- its metadata goes in the "subj" object
Tags
Annotators
URL
-
-
francis.naukas.com francis.naukas.com
-
The binary pulsar PSR J0737 as a test bench for general relativity. By Francisco R. Villatoro, 16 December 2021.
Hulse and Taylor received the Nobel Prize in Physics in 1993 for their study of the binary pulsar PSR B1913+16 (the first one, discovered in 1974), which indirectly observed the emission of gravitational waves. A similar analysis of the binary pulsar PSR J0737−3039A/B, discovered in 2003, has now been published in Physical Review X. The binary pulsar PSR J0737 is a unique test bench for general relativity: it lies only two thousand light-years from Earth, both neutron stars are observed as pulsars, and the orbital inclination is very close to 90°, so one can observe how the curved spacetime of the orbital plane modifies the emitted pulses. Sixteen years of observations of the periastron precession follow Einstein's quadrupole formula for gravitational emission with an error below 0.013% (the result obtained after 2.5 years of observations had an error of 0.05% and was published in Science in 2006). Without a doubt, a binary pulsar that will have to be followed over the coming decades to improve these estimates.
Besides testing Einstein's quadrupole formula, the delay due to the Shapiro effect has been tested (in curved spacetime the radio signals travel for longer and we observe them delayed). Other tests of relativity have also been carried out that so far could not be performed with other binary pulsars. For example, the relativistic deformation of the orbit has been measured (due to the relativistic coupling between the spin, i.e. the rotation of the neutron stars, and the angular momentum of their orbit). In these tests the results carry much greater uncertainty, but in every case they are compatible with the predictions of Einstein's general relativity. This theory, which many physicists want to kill off as soon as possible, is not only very beautiful but also very robust, and it promises to reign over physics for many decades.
The article is M. Kramer, I. H. Stairs, …, G. Theureau, «Strong-field Gravity Tests with the Double Pulsar,» Physical Review X 11: 041050 (13 Dec 2021), doi: https://doi.org/10.1103/PhysRevX.11.041050, arXiv:2112.06795 [astro-ph.HE] (13 Dec 2021); more popular-level coverage in Lijing Shao, «General Relativity Withstands Double Pulsar's Scrutiny,» Physics 14: 173 (13 Dec 2021) [web].
One way to highlight how exceptional the binary pulsar PSR J0737 is, is to compare it with the famous PSR B1913, which has been studied for 35 years. This figure shows the periastron precession of the orbit; the difference in point density between 0 and −20 is striking. That explains why the new result for PSR J0737 after 16 years has an error below 0.013%, whereas for PSR B1913 only 0.2% was reached; incidentally, for the black-hole mergers observed by LIGO-Virgo the typical error is around 20%. I haven't said so, but I assume you know that the periastron of an elliptical orbit is the point where the distance between the two bodies is smallest; it is called perihelion when one of the bodies is the Sun and perigee when it is the Earth. The phenomenon measured in this figure is analogous to the precession of the perihelion of Mercury's orbit, which Einstein used as a guide toward the correct formulation of his theory of gravitation.
-
-
www.eventdata.crossref.org www.eventdata.crossref.org
-
Evidence Record Creates observations of type landing-page-url for annotates relation types. Creates observations of type plaintext for discusses relation types.
- SEE
- In Evidence:
- "candidates": [ { "type": "landing-page-url",
-
Discusses: { "license": "https://creativecommons.org/publicdomain/zero/1.0/", "obj_id": "https://doi.org/10.1146/annurev.earth.32.082503.144359", "source_token": "8075957f-e0da-405f-9eee-7f35519d7c4c", "occurred_at": "2015-05-11T04:03:44Z", "subj_id": "https://hypothes.is/a/qNv_Ei5ZSnWOWO54GXdFPA", "id": "00054d54-7f35-4557-b083-7fa1f028856d", "evidence_record": "https://evidence.eventdata.crossref.org/evidence/20170413-hypothesis-a37bc9bf-1dc0-4c8a-b943-2e14beb4de6f", "terms": "https://doi.org/10.13003/CED-terms-of-use", "action": "add", "subj": { "pid": "https://hypothes.is/a/qNv_Ei5ZSnWOWO54GXdFPA", "json-url": "https://hypothes.is/api/annotations/qNv_Ei5ZSnWOWO54GXdFPA", "url": "https://hyp.is/qNv_Ei5ZSnWOWO54GXdFPA/www.cnn.com/2015/05/05/opinions/sutter-sea-level-climate/#", "type": "annotation", "title": "The various scenarios presented should be specified as being global averages of expected sea level rise. The sea level rise observed locally will vary significantly, due to a lot of different geophysical factors.", "issued": "2015-05-11T04:03:44Z" }, "source_id": "hypothesis", "obj": { "pid": "https://doi.org/10.1146/annurev.earth.32.082503.144359", "url": "https://doi.org/10.1146/annurev.earth.32.082503.144359" }, "timestamp": "2017-04-13T10:40:18Z", "relation_type_id": "discusses" }
- URL (Landing) in annotations!
-
Annotates: { "license": "https://creativecommons.org/publicdomain/zero/1.0/", "obj_id": "https://doi.org/10.1007/bfb0105342", "source_token": "8075957f-e0da-405f-9eee-7f35519d7c4c", "occurred_at": "2015-11-04T06:30:10Z", "subj_id": "https://hypothes.is/a/NrIw4KlKTwa7MzbTrMAyjw", "id": "00044ac9-d729-4d3f-a2c8-618bcdf1d252", "evidence_record": "https://evidence.eventdata.crossref.org/evidence/20170412-hypothesis-de560308-e500-4c55-ba28-799d7b272039", "terms": "https://doi.org/10.13003/CED-terms-of-use", "action": "add", "subj": { "pid": "https://hypothes.is/a/NrIw4KlKTwa7MzbTrMAyjw", "json-url": "https://hypothes.is/api/annotations/NrIw4KlKTwa7MzbTrMAyjw", "url": "https://hyp.is/NrIw4KlKTwa7MzbTrMAyjw/arxiv.org/abs/quant-ph/9803052", "type": "annotation", "title": "[This article](http://arxiv.org/abs/quant-ph/9803052) was referenced by [\"Decoherence\"](http://web.mit.edu/redingtn/www/netadv/Xdecoherenc.html) on Sunday, September 25 2005.", "issued": "2015-11-04T06:30:10Z" }, "source_id": "hypothesis", "obj": { "pid": "https://doi.org/10.1007/bfb0105342", "url": "http://arxiv.org/abs/quant-ph/9803052" }, "timestamp": "2017-04-12T07:16:20Z", "relation_type_id": "annotates" }
- an arXiv page (an article with a DOI) is treated as the OBJECT (the DOI)
  this example is a SELF-reference!!!
  it is due to the arXiv agent (?)
-
looks in the text for links to registered content
- "DOI:"
-
It looks for two things: the annotation of registered content (for example Article Landing Pages) and the mentioning of registered content (for example DOIs) in the text of annotations.
- DOIs mentioned in the annotation text [OK, verified] -> the annotation is the SUBJECT
- annotations on OBJECT pages (landing pages)
-
The Hypothes.is Agent monitors annotations
- see the Evidence Record examples
- Agent uses "url": "https://hypothes.is/api/search"
- GUESS: filter by date of annotation? "extra": { "cutoff-date": "2005-04-13T09:08:04.578Z"
-
-
www.eventdata.crossref.org www.eventdata.crossref.org
-
Crossref Membership rules #7 state that: You must have your DOIs resolve to a page containing complete bibliographic information for the content with a link to — or information about — getting the full text of the content. Where publishers break these rules, we will alert them.
- INTERESTING!
-
It's always not one-to-one DOIs can be assigned to books and book chapters, articles and figures. Each Agent will do its job as accurately as possible, with minimal cleaning-up, which could affect interpretation. This means that if someone tweets the DOI for a figure within an article, we will record that figure's DOI. If they tweet the landing page URL for that figure, we will do our best to match it to a DOI. Depending on the method used, and what the publisher landing page tells us, we may match the article's DOI or the figure's DOI. Sometimes two pages may claim to be about the same DOI. This could happen if a publisher runs two different sites about the same content. It's also possible that a landing page has no DOI metadata, so we can't match it to an Event. The reverse is true: sometimes two DOIs point to the same landing page. This can happen by accident. It is rare, but does happen. This has no material effect on the current methods for reporting Events.
- non-uniqueness: DOI <-> publisher landing page
Tags
Annotators
URL
-