93 Matching Annotations
  1. Last 7 days
    1. (https://odc-tbi.org/)

      DOI: 10.48550/arXiv.2209.12994

      Resource: Open Data Commons for Traumatic Brain Injury (RRID:SCR_021736)

      Curator: @bandrow

      SciCrunch record: RRID:SCR_021736



  2. Jul 2024
    1. 7063

      DOI: 10.48550/arXiv.2304.02112

      Resource: (BDSC Cat# 7063,RRID:BDSC_7063)

      Curator: @bandrow

      SciCrunch record: RRID:BDSC_7063



  3. Jul 2023
  4. Jun 2023
  5. May 2023
  6. Apr 2023
  7. Feb 2023
  8. Jan 2023
  9. Oct 2022
  10. Sep 2022
    1. Trackback guide By sending a trackback, you can notify arXiv.org that you have created a web page that references a paper. Popular blogging software supports trackback: you can send us a trackback about this paper by giving your software the following trackback URL: https://arxiv.org/trackback/{arXiv_id} Some blogging software supports trackback autodiscovery -- in this case, your software will automatically send a trackback as soon as you create a link to our abstract page. See our trackback help page for more information.
      • INTERESTING
      • EXAMPLE
      • title
      • doi
      • arxiv_id
      • ads_id
      • authors []
      • type
      • altmetric_id
      • details_url: citation_id=${altmetric_id}
    1. In the meantime, we encourage readers to give Librarian a try!
      • 2022: NOT FOUND
    2. need to be opt-in and provide appropriate previews
      • and what happens when the author UPLOADS a new version? the links have to be generated ALL OVER AGAIN!
    3. Despite the quantitative “no”, when we look at freetext comments we see a number of heartfelt pleas not to mess with arXiv PDFs, from both the author and the reader’s perspective. For example, “Please don’t insert anything in the authors’ PDFs! This should be absolutely up to the authors.” and “Please, please, please DO NOT MESS WITH THE PDFs!”. It’s clear to us that, if we do provide links in PDFs, we need to be extremely careful to respect the wishes of authors at all times. We hear you!
      • IDEA: in the registration form
      • metadata to indicate that the author does NOT want links inserted
    4. both
      • AS EXPECTED!
    5. Our talented new UX developer has been hard at work on integrating the references into the abstract page in a way that respects the austere look-and-feel that we all love, and we’ve gotten excellent feedback from users as we go along. We’re looking forward to demonstrating a prototype of the new abstract page soon
      • WHEN???
    6. Fermat’s Library recently released a Chrome extension called Librarian, for example, that provides links to cited references while browsing arXiv PDFs.
      • 2022: NOT FOUND in Chrome Store
    7. People are already working on arXiv references!
      • DO IT YOURSELF!!!
      • REINVENT THE WHEEL
    8. Embed links in arXiv PDFs.
      • TOO, BUT NOT ONLY!!!
    9. Based on input from the arXiv user survey, we knew that reference extraction and linking was a very high priority for arXiv users.
      • ok, HIGH PRIORITY
    10. We decided to proceed with a combined approach – using CERMINE, RefExtract, and Grobid – performing parallel extractions on each paper and then combining the extracted references to produce the most reliable set of references possible. We also added a few extra extraction steps of our own to be sure that we caught arXiv identifiers, and to supplement DOI detection.
      • GOOD
    11. Rather than invent a reference extraction tool from scratch, we evaluated existing reference extraction tools available under open source licenses. We found several extractors that we liked: Content ExtRactor and MINEr (CERMINE) – https://github.com/CeON/CERMINE– Developed by the Center for Open Science. GNU GPL 3. RefExtract – http://pythonhosted.org/refextract/– Developed by CERN; spun off from the Invenio project. GNU GPL 2. GROBID – https://github.com/kermitt2/grobid– Developed by Patrice Lopez. Apache 2.0. ScienceParse – https://github.com/allenai/science-parse – From the Allen Institute for Artificial Intelligence. Apache 2.0.
      • SEE
  11. Apr 2022
  12. Feb 2022
    1. https://www.zotero.org/save?type=

      URL for adding URL, ISBN, DOI, PMID, or arXiv IDs to one's Zotero account.

      I've created a mobile shortcut using the URL Forwarder app to accomplish this with a share functionality after highlighting an ISBN.

      Might also try using https://play.google.com/store/apps/details?id=com.srowen.bs.android&hl=en with a custom search URL https://www.zotero.org/save?q=%s to see if that might work as well. This should allow using a scanner to get ISBN barcodes into the system as well. Useful for browsing at the bookstore.

      I should also create a javascript bookmarklet for this pattern as well.

      See also:
      - https://forums.zotero.org/discussion/77178/barcode-scanner
      - https://forums.zotero.org/discussion/76471/scanning-isbn-barcode-to-input-books-to-zotero-library

      Alternate URL paths for this:
      - https://www.zotero.org/save?type=isbn
      - https://www.zotero.org/save?q=

  13. Jan 2022
    1. _re_id["doi"] = re.compile(r"\b10\.\d{4,}(?:\.\d+)*\/(?:(?!['\"&<>])\S)+\b")
       _re_id["bibcode"] = re.compile(r"\b\d{4}\D\S{13}[A-Z.:]\b")
       _re_id["arxiv"] = re.compile(r"\b(?:\d{4}\.\d{4,5}|[a-z-]+(?:\.[A-Za-z-]+)?\/\d{7})\b")
      • REGEX
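      These patterns can be exercised with a small helper (a sketch: `find_ids` and the sample string are my own illustration, and only the doi and arxiv patterns are shown):

```python
import re

# The DOI and arXiv patterns quoted above, compiled into a lookup table.
_re_id = {
    "doi": re.compile(r"\b10\.\d{4,}(?:\.\d+)*\/(?:(?!['\"&<>])\S)+\b"),
    "arxiv": re.compile(r"\b(?:\d{4}\.\d{4,5}|[a-z-]+(?:\.[A-Za-z-]+)?\/\d{7})\b"),
}

def find_ids(text):
    """Return, per identifier type, every match found in text."""
    return {kind: rx.findall(text) for kind, rx in _re_id.items()}

sample = "See arXiv:2304.02112 and 10.48550/arXiv.2209.12994, also hep-ph/9609357."
print(find_ids(sample))
```

      Note the arxiv pattern covers both new-style (2304.02112) and old-style (hep-ph/9609357) identifiers.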
    1. Subject classes do not exist for some of the older archives in the Physics group. Instead, each archive represents a subject class, e.g., hep-ex, hep-lat, hep-ph, and hep-th. The astro-ph archive currently has no subject classes, while cond-mat and physics are classified by subject classes that appear only in the metadata (not in the identifier). This scheme uses two upper-case characters to identify the subject class,
      • SUBJECT CLASS
    2. The canonical form of identifiers from January 2015 (1501) is arXiv:YYMM.NNNNN, with 5-digits for the sequence number within the month. The article identifier scheme used by arXiv was changed in April 2007. All existing articles retain their original identifiers but newly announced articles have identifiers following the new scheme. As of January 2015, the number of digits used in the second part of the new identifier, the sequence number within the month, is increased from 4 to 5. This will allow arXiv to handle more than 9999 submissions per month (see monthly submission rates).
      • DISTINCT FORMATS
    1. shows the primary classification in a standard way, and is also recommended as the preferred citation format
      • CITATION STYLE
    2. Extension. The new scheme could be extended to use more than 4 or 5 digits for the sequence number. However, that would correspond to more than 99999 submissions per month, or over 10 times the current submission rate, and is thus not anticipated for many years.
      • 4-5 DIGIT CODE
    3. Old papers in archives where the archive name matches the primary subject classification (e.g. hep-th) do not have the square brackets with primary subject classification
      • OK, exception in zotero translator
    1. Many bibliography databases supply a DOI (Digital Object Identifier) or arXiv eprint number with BibTeX entries. However, the standard BiBTeX style files either ignore this information or print it without hyperlinking it correctly. Here’s how you can get working clickable links to both DOI and arXiv eprints.
      • standard BibTeX styles don't use them
    1. For the new style arXiv identifiers (April 2007 and onwards) we recommend these bib-style extensions: **archivePrefix = "arXiv"**, **eprint = "0707.3168"**, **primaryClass = "hep-th"**,
      • BIBTEX EXTENSIONS!
      • for "old style ID":
      • **eprint = "hep-ph/9609357"**
    1. APIs for Scholarly Resources What is an API? API stands for application programming interface. An API is a protocol that allows a user to query a resource and retrieve and download data in a machine-readable format.  Researchers sometimes use APIs to download collections of texts, such as scholarly journal articles, so they can perform automated text mining on the corpus they've downloaded. Here is a simple tutorial that explains what an API is.  Below are some APIs that are available to researchers. Some are open to the public, while others are available according to the terms of Temple University Libraries' subscriptions. Many require you to create an API key, which is a quick and free process.   How do I Use APIs? You can create a simple query in the address bar in a web browser. However, a more complex query generally requires using a programming language. Commonly used languages for querying APIs are Python and R. (R is the language used in the R software.) The examples given in the documentation for the APIs listed below typically do not include sample programming code; they only explain how the data is structured in order to help users write a query. 
List of APIs for Scholarly Research arXiv Content: metadata and article abstracts for the e-prints hosted on arXiv.org Permissions: no registration required Limitations: no more than 4 requests per second Contact: https://groups.google.com/forum/#!forum/arxiv-api, https://arxiv.org/help/api/index   Astrophysics Data System Content: bibliographic data on astronomy and physics publications from SAO/NASA astrophysics databases Permissions: free to register; request a key at https://github.com/adsabs/adsabs-dev-api Limitations: varies Contact: https://groups.google.com/forum/#!forum/adsabs-dev-api, adshelp@cfa.harvard.edu   BioMed Central Content: metadata and full-text content for open access journals published in BioMed Central Permissions: free to access, request a key at https://dev.springer.com/signup Limitations: none Contact: info@biomedcentral.com   Chronicling America Content: digitized newspapers from 1789-1963, as well as a directory of newspapers published 1960 to the present, with information on library holdings Permissions: no registration required Limitations: none Contact: http://www.loc.gov/rr/askalib/ask-webcomments.html   CORE Content: metadata and full-text of over 100 million OA research papers Permissions: free to access for non-commercial purposes, request a key at https://core.ac.uk/api-keys/register Limitations: One batch request or five single requests every 10 seconds. Contact CORE if you need a faster rate. Contact: theteam@core.ac.uk   CrossRef Content: metadata records with CrossRef DOIs, over 100 million scholarly works Permissions: no registration required Limitations: guidelines to avoid overloading the servers at https://github.com/CrossRef/rest-api-doc#meta. "We reserve the right to impose rate limits and/or to block clients that are disrupting the public service." 
Contact: labs@crossref.org   Digital Public Library of America Content: metadata on items and collections indexed by the DPLA Permissions: request a free key; instructions here https://pro.dp.la/developers/policies Limitations: none, however, "The DPLA reserves the right to limit or revoke access to the API if, in its discretion, a user engages in abusive conduct, conduct that materially degrades the ability of other users to query the API." Contact: codex@dp.la   Elsevier Content: multiple APIs for full-text books and journals from ScienceDirect and citation data from Engineering Village and Embase Permissions: free to register; click 'Get API Key" to request a personal key: https://dev.elsevier.com/ Limitations: "Researchers at subscribing academic institutions can text mine subscribed full-text ScienceDirect content via the Elsevier APIs for non-commercial purposes."   Usage policies depend on use cases; see list at https://dev.elsevier.com/use_cases.html Contact: integrationsupport@elsevier.com   HathiTrust (Bibliographic API) Content: bibliographic and rights information for items in the HathiTrust Digital Library Permissions: no registration required Limitations: may request up to 20 records at once. Not intended for bulk retrieval Contact: feedback@issues.hathitrust.org   HathiTrust (Data API) Content: full-text of HathiTrust and Google digitized texts of public domain works Permissions: free to access, request a key at https://babel.hathitrust.org/cgi/kgs/request Limitations: "Please contact [HathiTrust] to determine the suitability of the API for intended uses." Contact: feedback@issues.hathitrust.org   IEEE Xplore Content: metadata for articles included in IEEE Xplore Permissions: must be affiliated with an institution that subscribes to IEEE Xplore. Temple is a subscriber. 
Limitations: maximum 1,000 results per query Contact: onlinesupport@ieee.org   JSTOR Content: full-text articles from JSTOR Permissions:  free to use, register at https://www.jstor.org/dfr/ Limitations:  Not a true API, but allows users to construct a search and then download the results as a dataset for text-mining purposes. Can download up to 25,000 documents. Largest datasets available by special request Contact: https://support.jstor.org/hc/en-us   National Library of Medicine Content: 60 separate APIs for accessing various NLM databases, including PubMed Central, ToxNet, and ClinicalTrials.gov. The PubMed API is listed separately below. Permissions: varies Limitations: varies Contact: varies   Nature.com OpenSearch Content: bibliographic data for content hosted on Nature.com, including news stories, research articles and citations Permissions: free to access Limitations: varies Contact: interfaces@nature.com   OECD Content: a selection of the top used datasets covering data for OECD countries and selected non-member economies. Datasets included appear in the catalogue of OECD databases with API access Permissions: no registration required, see terms and conditions Limitations: max 1,000,000 results per query, max URL length of 1,000 characters. Contact: OECDdotStat@oecd.org   PLOS Search API Content: full-text of research articles in PLOS journals Permissions: free to access, register at http://api.plos.org/registration/ Limitations: Max is 7200 requests a day, 300 per hour, 10 per minute. Users should wait 5 seconds for each query to return results. Requests should not return more than 100 rows. High-volume users should contact api@plos.org. API users are limited to no more than five concurrent connections from a single IP address. Contact: api@plos.org   PubMed Content: information stored in 38 NCBI databases, including some info from PubMed.
Will retrieve a PubMed ID when citation information is input. Permissions: API key required starting May 1, 2018 Limitations: After May 1, 2018, with an API key a site can post up to 10 requests per second by default. Large jobs should be limited to outside 9-5 weekday hours. Higher rates are available by request (see contact information below) Contact: eutilities@ncbi.nlm.nih.gov   Springer Content: full-text of SpringerOpen journal content and BioMed Central, as well as metadata from other Springer resources Permissions: free to access, request a key at https://dev.springer.com/signup Limitations: noncommercial use Contact: tdm@springernature.com   World Bank APIs Content: APIs for the following datasets: Indicators (time series data), Projects (data on the World Bank’s operations), and World Bank financial data (World Bank Finances API) Permissions: no registration required Limitations: See Terms & Conditions of Using our Site Contact: data@worldbankgroup.org   Acknowledgements We would like to acknowledge API guides created by the Libraries at MIT, Berkeley, Purdue and Drexel that informed our work on this guide.
Last Updated: Dec 15, 2021 9:13 AM URL: https://guides.temple.edu/APIs
      • GOOD LIST in legible format
    1. Pattern for Local Unique Identifiers Local identifiers in arXiv should match this regular expression: ^(\w+(\-\w+)?(\.\w+)?)?\d{4,7}(\.\d+(v\d+)?)?$
      • VALID ONLY for "new" format!!!
      • not valid for hep-th/9108008v1
    2. Example Local Unique Identifier: 0807.4956v1. Pattern for CURIEs: Compact URIs (CURIEs) constructed from arXiv should match this regular expression: ^arxiv:(\w+(\-\w+)?(\.\w+)?)?\d{4,7}(\.\d+(v\d+)?)?$ Example CURIE: arxiv:0807.4956v1
      • REGEX ARXIV
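      A quick check (my own sketch) confirms that this pattern accepts new-style identifiers but rejects old-style ones, whose "/" is not a word character:

```python
import re

# Pattern quoted above from the identifiers.org registry entry for arXiv.
LOCAL_ID = re.compile(r"^(\w+(\-\w+)?(\.\w+)?)?\d{4,7}(\.\d+(v\d+)?)?$")

print(bool(LOCAL_ID.match("0807.4956v1")))       # new-style id: matches
print(bool(LOCAL_ID.match("hep-th/9108008v1")))  # old-style id: "/" breaks the match
```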
    1. 3.1.1.1 Encoding an OAI-PMH request in a URL for an HTTP GET URLs for GET requests have keyword arguments appended to the base URL, separated from it by a question mark [?]. For example, the URL of a GetRecord request to a repository with base URL that is http://an.oa.org/OAI-script might be: http://an.oa.org/OAI-script? verb=GetRecord&identifier=oai:arXiv.org:hep-th/9901001&metadataPrefix=oai_dc However, since special characters in URIs must be encoded, the correct form of the above GET request URL is: http://an.oa.org/OAI-script? verb=GetRecord&identifier=oai%3AarXiv.org%3Ahep-th%2F9901001&metadataPrefix=oai_dc
      • IMPORTANT: encoding de URIs
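      The percent-encoding shown in the quote is exactly what the standard library produces; a sketch building the same GetRecord request:

```python
from urllib.parse import urlencode

base = "http://an.oa.org/OAI-script"
params = {
    "verb": "GetRecord",
    "identifier": "oai:arXiv.org:hep-th/9901001",
    "metadataPrefix": "oai_dc",
}
# urlencode percent-encodes the ":" and "/" inside the identifier.
url = base + "?" + urlencode(params)
print(url)
```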
    1. karnesky commented on Sep 1, 2013 A few things to note here: arXiv does have preprints, but a lot of these are linked to journal articles & some people use it as a reprint server. If an arXiv record has a DOI, I would suggest (strongly) that it should be typed as a journal article. We may even just use the ADS link, which seems to have a great BibTeX-formatted record for most eprints (though I'm torn on doing that). NASA ADS and most others classify arXiv eprints as journal articles anyway. Zotero will import any of those as journal articles, so there might be a case to import all arXiv eprints as if they were journal articles
      • ok
    1. However these fields are not filled automatically by zotero when importing from arxiv, instead a Report is created with all three fields (Archive, Loc. in Archive, Call number) empty
      • OK: doesn't work, because:
      • the arxiv.js import translator doesn't fill these fields
      • neither does the ADS.js import translator
    2. Further, by trial and error I found that setting the Journal article fields as follows: Archive: arxiv Loc. in Archive: 1234.1231 Call number: hep-ph results in zotero exporting a biblatex file containing: eprinttype = {arxiv}, eprint = {1234.1231}, eprintclass = {hep-ph},
      • TIP: SEE the BibLaTeX.js translator:
      • if (item.archive == "arXiv" || item.archive == "arxiv") {
            writeField("eprinttype", "arxiv");
            writeField("eprint", item.archiveLocation);
            if (item.callNumber) { // assume call number is used for arxiv class
                writeField("eprintclass", item.callNumber);
            }
        }
    3. According to the biblatex manual ftp://bay.uchicago.edu/CTAN/macros/latex/exptl/biblatex/doc/biblatex.pdf section 3.11.7, arxivprefix is an alias for eprinttype and primaryclass is an alias for eprintclass.
      • BIBLATEX: extended fields
      • they are aliases
    4. uses in general the fields archivePrefix, eprint and primaryClass
    5. The recommended way to add arxiv information to bibtex items is given here http://arxiv.org/hypertex/bibstyles/
      • URL NOT WORKING
  14. Dec 2021
    1. 3.1.1.1. search_query and id_list logic We have already seen the use of search_query in the quickstart section. The search_query takes a string that represents a search query used to find articles. The construction of search_query is described in the search query construction appendix. The id_list contains a comma-delimited list of arXiv id's. The logic of these two parameters is as follows: If only search_query is given (id_list is blank or not given), then the API will return results for each article that matches the search query. If only id_list is given (search_query is blank or not given), then the API will return results for each article in id_list. If BOTH search_query and id_list are given, then the API will return each article in id_list that matches search_query. This allows the API to act as a results filter. This is summarized in the following table:

      search_query present | id_list present | API returns
      yes | no | articles that match search_query
      no | yes | articles that are in id_list
      yes | yes | articles in id_list that also match search_query

      SEARCHES within a list of IDs: use both parameters, id_list and search_query
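      A sketch of using both parameters together so the API acts as a filter (the ids in id_list are made up for illustration):

```python
from urllib.parse import urlencode

base = "http://export.arxiv.org/api/query"
# With BOTH parameters present, the API returns only the listed ids
# that also match the search query.
params = {
    "search_query": "all:electron",
    "id_list": "1306.0001,1306.0002",
}
print(base + "?" + urlencode(params))
```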

    2. 5.2. Details of Atom Results Returned The following table lists each element of the returned Atom results. For a more detailed explanation see Outline of an Atom Feed.

      feed elements:
      • <title>: The title of the feed containing a canonicalized query string.
      • <id>: A unique id assigned to this query.
      • <updated>: The last time search results for this query were updated. Set to midnight of the current day.
      • <link>: A url that will retrieve this feed via a GET request.
      • <opensearch:totalResults>: The total number of search results for this query.
      • <opensearch:startIndex>: The 0-based index of the first returned result in the total results list.
      • <opensearch:itemsPerPage>: The number of results returned.

      entry elements:
      • <title>: The title of the article.
      • <id>: A url http://arxiv.org/abs/id
      • <published>: The date that version 1 of the article was submitted.
      • <updated>: The date that the retrieved version of the article was submitted. Same as <published> if the retrieved version is version 1.
      • <summary>: The article abstract.
      • <author>: One for each author. Has child element <name> containing the author name.
      • <link>: Can be up to 3 given url's associated with this article.
      • <category>: The arXiv or ACM or MSC category for an article if present.
      • <arxiv:primary_category>: The primary arXiv category.
      • <arxiv:comment>: The author's comment if present.
      • <arxiv:affiliation>: The author's affiliation included as a subelement of <author> if present.
      • <arxiv:journal_ref>: A journal reference if present.
      • <arxiv:doi>: A url for the resolved DOI to an external resource if present.

      detail of the fields in the result

    3. 5.1.1. A Note on Article Versions Each arXiv article has a version associated with it. The first time an article is posted, it is given a version number of 1. When subsequent corrections are made to an article, it is resubmitted, and the version number is incremented. At any time, any version of an article may be retrieved. When using the API, if you want to retrieve the latest version of an article, you may simply enter the arxiv id in the id_list parameter. If you want to retrieve information about a specific version, you can do this by appending vn to the id, where n is the version number you are interested in.

      This API lets you fetch data for a specific version; the OAI API does not allow that!
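      The version suffix can be illustrated with a tiny URL helper (my own sketch; query_url is not part of the API itself):

```python
def query_url(arxiv_id, version=None):
    """Build an arXiv API id_list URL; append 'v<n>' to pin a specific version."""
    ident = f"{arxiv_id}v{version}" if version is not None else arxiv_id
    return f"http://export.arxiv.org/api/query?id_list={ident}"

print(query_url("0807.4956"))     # latest version
print(query_url("0807.4956", 1))  # version 1 specifically
```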

    4. The following table lists the field prefixes for all the fields that can be searched:

      • ti: Title
      • au: Author
      • abs: Abstract
      • co: Comment
      • jr: Journal Reference
      • cat: Subject Category
      • rn: Report Number
      • id: Id (use id_list instead)
      • all: All of the above

      Note: The id_list parameter should be used rather than search_query=id:xxx to properly handle article versions. In addition, note that all: searches in each of the fields simultaneously.

      filter fields for searches

    5. For each entry, there are up to three <link> elements, distinguished by their rel and title attributes. The table below summarizes what these links refer to:

      rel | title | refers to | always present
      alternate | - | abstract page | yes
      related | pdf | pdf | yes
      related | doi | resolved doi | no

      For example:
      <link xmlns="http://www.w3.org/2005/Atom" href="http://arxiv.org/abs/hep-ex/0307015v1" rel="alternate" type="text/html"/>
      <link xmlns="http://www.w3.org/2005/Atom" title="pdf" href="http://arxiv.org/pdf/hep-ex/0307015v1" rel="related" type="application/pdf"/>
      <link xmlns="http://www.w3.org/2005/Atom" title="doi" href="http://dx.doi.org/10.1529/biophysj.104.047340" rel="related"/>

      3 links: 2 always (abs, pdf), 1 if DOI exists
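      A sketch of pulling the three links out of an entry with the standard library (the wrapping <entry> element is mine; the <link> elements are the ones quoted above):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

entry_xml = """
<entry xmlns="http://www.w3.org/2005/Atom">
  <link href="http://arxiv.org/abs/hep-ex/0307015v1" rel="alternate" type="text/html"/>
  <link title="pdf" href="http://arxiv.org/pdf/hep-ex/0307015v1" rel="related" type="application/pdf"/>
  <link title="doi" href="http://dx.doi.org/10.1529/biophysj.104.047340" rel="related"/>
</entry>
"""

entry = ET.fromstring(entry_xml)
links = {}
for link in entry.findall(ATOM + "link"):
    # The abstract link has rel="alternate" and no title;
    # pdf and doi links are rel="related" with a title attribute.
    key = link.get("title") or link.get("rel")
    links[key] = link.get("href")
print(links)
```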

    1. Item = Article Each article in arXiv is modeled as an Item in the OAI-PMH interface. Only the most recent version of each article is exposed via this interface (some metadata formats include the version history).

      VERY IMPORTANT: OAI only returns data for the LATEST version; it gives no per-version access; hence: SWITCH to the /api/ API

    2. 12 April 2007 The arXiv OAI baseURL changed to http://export.arxiv.org/oai2 from http://arxiv.org/oai2. The old URL will issue a redirect for some time but please update your harvester to use the new baseURL.

      IMPORTANT: base URL change; use "export.arxiv"

    3. Metadata formats Metadata for each item (article) is available in several formats; all formats are supported for all articles. The available formats include:

      • oai_dc - Simple Dublin Core. See example in oai_dc format.
      • arXiv - arXiv specific metadata format which includes author names separated out, category and license information. See example in arXiv format.
      • arXivRaw - arXiv specific metadata format which is very close to the internal format stored at arXiv. Includes version history. See example in arXivRaw format.

      You may request a list of all the metadata formats supported with the ListMetadataFormats verb.

      try the 3 formats
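      The three formats can be compared by issuing the same GetRecord request once per metadataPrefix (a sketch; the identifier follows arXiv's oai:arXiv.org:<id> convention, with a made-up article id):

```python
from urllib.parse import urlencode

base = "http://export.arxiv.org/oai2"
for fmt in ("oai_dc", "arXiv", "arXivRaw"):
    params = {
        "verb": "GetRecord",
        "identifier": "oai:arXiv.org:0807.4956",
        "metadataPrefix": fmt,
    }
    # Each iteration requests the same record in a different format.
    print(base + "?" + urlencode(params))
```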

    1. Projects Using the API The following projects use the arXiv API:

      • OpenWetWare's Mediawiki Installation
      • Sonny Software's Bookends Reference Manager for OSX
      • arXiv Droid - arXiv app for Android
      • Retrieve Bibliographic arXiv Information
      • The snarXiv
      • daily arXiv by categories
      • PaperRater.org - a web-based tool for open review and social reading of scientific publications
      • ArXiv Analytics - a web portal for reading and discussing arXiv eprints
      • Bibcure - keeps bibtex up to date and normalized, and allows you to download all papers inside your bibtex
      • biblio.el - Download BibTeX entries from arXiv and others in Emacs
      • Lib arXiv - arXiv app for iOS devices
      • arxivist.com

      see

    2. Bibcure - keeps bibtex up to date and normalized, and allows you to download all papers inside your bibtex

      bibcure

    1. The primary motivation for removing subject-classification information from the identifier was to decouple these two properties (identification and classification).

      OK, same item in different subject areas

  15. Mar 2021
    1. Patricio R Estevez-Soto. (2020, November 24). I’m really surprised to see a lot of academics sharing their working papers/pre-prints from cloud drives (i.e. @Dropbox @googledrive) 🚨Don’t!🚨 Use @socarxiv @SSRN @ZENODO_ORG, @OSFramework, @arxiv (+ other) instead. They offer persisent DOIs and are indexed by Google scholar [Tweet]. @prestevez. https://twitter.com/prestevez/status/1331029547811213316

  16. Apr 2020
    1. A few months later, in August 1991, a centralized web-based network, arXiv (https://arxiv.org/, pronounced ‘är kīv’ like the word “archive”, from the Greek letter “chi”), was created. arXiv is arguably the most influential preprint platform and has supported the fields of physics, mathematics, and computer science for over 30 years.

      ArXiv (pronounced "archive") is another example of preprint technology, introduced back in the early 1990s.

      ArXiv = the fields of physics, mathematics and computational science.

      After the arXiv era, there was an empty stretch of about 15 years with no growth in the number of preprint servers.

  17. Feb 2019
  18. Jan 2019
  19. Nov 2018
    1. hep-th

      It turns out that some areas of physics research really love to crank out all kinds of "models"...

  20. Nov 2017
    1. Currently, since arXiv lacks an explicit representation of authors and other entities in metadata, ADS must parse author metadata from arXiv heuristically.

      It will be interesting if solving this problem becomes one of hardcore ORCID integration coupled with metadata extraction from submitted manuscripts.

    2. ADS shares those matches with us via its API, and we use that information to populate DOI and JREF fields on arXiv papers.

      I've always wondered if this were true. I continue to wonder if arXiv uses other sources of eprint-DOI matches to corroborate or append to those from ADS.

  21. Oct 2017
    1. We are pleased to announce that Steinn Sigurdsson has assumed the Scientific Director position. He will collaborate with the arXiv Program Director (Oya Y. Rieger) in overseeing the service and work with arXiv staff and the Scientific Advisory Board (SAB) in providing intellectual leadership for the operation.

      Great news!

  22. Jul 2016
    1. Unsupervised Learning of 3D Structure from Images Authors: Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, Nicolas Heess (Submitted on 3 Jul 2016) Abstract: A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.

      The 3D representation of a 2D image is ambiguous and multi-modal. We achieve such reasoning by learning a generative model of 3D structures, and recover this structure from 2D images via probabilistic inference.

    1. When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning as standard practice for improved new task performance.

      Learning w/o Forgetting: distilled transfer learning

  23. Jun 2016
    1. Dynamic Filter Networks

      "... filters are generated dynamically conditioned on an input" Nice video frame prediction experiments.

    1. $$A_t^l = \begin{cases} x_t & l = 0 \\ \mathrm{MAXPOOL}(\mathrm{RELU}(\mathrm{CONV}(E_t^{l-1}))) & l > 0 \end{cases} \quad (1)$$
      $$\hat{A}_t^l = \mathrm{RELU}(\mathrm{CONV}(R_t^l)) \quad (2)$$
      $$E_t^l = [\mathrm{RELU}(A_t^l - \hat{A}_t^l);\ \mathrm{RELU}(\hat{A}_t^l - A_t^l)] \quad (3)$$
      $$R_t^l = \mathrm{CONVLSTM}(E_{t-1}^l, R_{t-1}^l, R_t^{l+1}) \quad (4)$$

      Very unique network structure. Prediction results look promising.