21 Matching Annotations
  1. Mar 2021
    1. Patricio R Estevez-Soto. (2020, November 24). I’m really surprised to see a lot of academics sharing their working papers/pre-prints from cloud drives (i.e. @Dropbox @googledrive) 🚨Don’t!🚨 Use @socarxiv @SSRN @ZENODO_ORG, @OSFramework, @arxiv (+ others) instead. They offer persistent DOIs and are indexed by Google Scholar [Tweet]. @prestevez. https://twitter.com/prestevez/status/1331029547811213316

  2. Apr 2020
    1. A few months later, in August 1991, a centralized web-based network, arXiv (https://arxiv.org/, pronounced ‘är kīv’ like the word “archive”, from the Greek letter “chi”), was created. arXiv is arguably the most influential preprint platform and has supported the fields of physics, mathematics, and computer science for over 30 years.

      ArXiv (pronounced "archive") is another example of preprint technology, introduced in the 1990s.

      ArXiv covers the fields of physics, mathematics, and computer science.

      After arXiv's debut, there was a roughly 15-year gap with no growth in the number of preprint servers.

  3. Feb 2019
  4. Jan 2019
  5. Nov 2018
    1. hep-th


  6. Nov 2017
    1. Currently, since arXiv lacks an explicit representation of authors and other entities in metadata, ADS must parse author metadata from arXiv heuristically.

      It will be interesting to see whether solving this problem becomes a matter of deep ORCID integration coupled with metadata extraction from submitted manuscripts.
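      For illustration only, here is a toy example of the kind of heuristic such parsing requires: splitting a free-text author line on commas and "and". The real ADS pipeline is of course far more elaborate, and `split_authors` is a hypothetical name, not an ADS or arXiv API.

```python
import re

def split_authors(author_line):
    """Heuristically split a free-text author string into names.

    arXiv metadata supplies authors as a single string, e.g.
    "A. Author, B. Author and C. Author", so a downstream indexer
    has to guess at the boundaries between names.
    """
    # Split on commas or a standalone "and" separator.
    parts = re.split(r",\s*|\s+and\s+", author_line)
    return [p.strip() for p in parts if p.strip()]
```

      Heuristics like this break on suffixes, collaborations, and comma-free name lists, which is exactly why explicit author identifiers such as ORCID would help.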

    2. ADS shares those matches with us via its API, and we use that information to populate DOI and JREF fields on arXiv papers.

      I've always wondered if this were true. I continue to wonder if arXiv uses other sources of eprint-DOI matches to corroborate or append to those from ADS.

  7. Oct 2017
    1. We are pleased to announce that Steinn Sigurdsson has assumed the Scientific Director position. He will collaborate with the arXiv Program Director (Oya Y. Rieger) in overseeing the service and work with arXiv staff and the Scientific Advisory Board (SAB) in providing intellectual leadership for the operation.

      Great news!

  8. Jul 2016
    1. Unsupervised Learning of 3D Structure from Images Authors: Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, Nicolas Heess (Submitted on 3 Jul 2016) Abstract: A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.

      The 3D representation of a 2D image is ambiguous and multi-modal. We achieve such reasoning by learning a generative model of 3D structures, and recover this structure from 2D images via probabilistic inference.

    1. When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning as standard practice for improved new task performance.

      Learning w/o Forgetting: distilled transfer learning
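      The "distilled transfer learning" reading can be sketched as a combined objective: cross-entropy on the new task plus a distillation term that pins the old-task head to responses recorded from the original network. This is a minimal NumPy sketch under assumed names (`new_logits`, `old_logits`, `recorded_logits`); the paper's actual distillation loss is a temperature-scaled modified cross-entropy rather than this KL form.

```python
import numpy as np

def softmax(z, T=1.0):
    # Numerically stable softmax with temperature T.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def lwf_loss(new_logits, old_logits, recorded_logits, labels, T=2.0, lam=1.0):
    """Learning-without-Forgetting objective (sketch).

    new_logits:      new-task head outputs on new-task data
    old_logits:      old-task head outputs of the *current* network
    recorded_logits: old-task outputs recorded from the original network
                     on the same new-task data (distillation targets)
    """
    # Cross-entropy on the new task (labels: integer class ids).
    p_new = softmax(new_logits)
    n = len(labels)
    ce = -np.mean(np.log(p_new[np.arange(n), labels] + 1e-12))
    # Distillation term: keep the current network's old-task responses
    # close to those recorded from the original network.
    p_old = softmax(old_logits, T)
    p_rec = softmax(recorded_logits, T)
    kd = np.mean(np.sum(p_rec * (np.log(p_rec + 1e-12)
                                 - np.log(p_old + 1e-12)), axis=1))
    return ce + lam * kd
```

      Note that only new-task data appears: the old task is preserved purely through the recorded soft targets, which is what lets LwF skip storing the original training data.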

  9. Jun 2016
    1. Dynamic Filter Networks

      "... filters are generated dynamically conditioned on an input" Nice video frame prediction experiments.
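      The "filters generated dynamically conditioned on an input" idea can be sketched in 1-D, with `filter_gen` standing in for the filter-generating network (hypothetical names; the paper generates 2-D filters with a convolutional network and applies them to video frames).

```python
import numpy as np

def dynamic_conv1d(x, filter_gen):
    """Apply a per-sample, input-conditioned filter (sketch).

    x: (batch, length) array of signals
    filter_gen: callable mapping one signal to a 1-D kernel

    Unlike an ordinary conv layer, the kernel is not a fixed learned
    parameter: it is produced by a second network conditioned on x.
    """
    out = []
    for sig in x:
        k = filter_gen(sig)                        # sample-specific kernel
        out.append(np.convolve(sig, k, mode="same"))
    return np.stack(out)
```

      For frame prediction, `filter_gen` would consume previous frames and emit the filter that warps the current frame into the predicted next one.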

    1. $$A_t^l = \begin{cases} x_t & l = 0 \\ \mathrm{MAXPOOL}(\mathrm{RELU}(\mathrm{CONV}(E_t^{l-1}))) & l > 0 \end{cases} \quad (1)$$
       $$\hat{A}_t^l = \mathrm{RELU}(\mathrm{CONV}(R_t^l)) \quad (2)$$
       $$E_t^l = [\mathrm{RELU}(A_t^l - \hat{A}_t^l);\ \mathrm{RELU}(\hat{A}_t^l - A_t^l)] \quad (3)$$
       $$R_t^l = \mathrm{CONVLSTM}(E_{t-1}^l,\ R_{t-1}^l,\ R_t^{l+1}) \quad (4)$$

      A highly distinctive network structure. The prediction results look promising.
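      The error units of equation (3), which split the prediction error into rectified positive and negative parts, can be sketched as follows (illustrative NumPy, not the authors' code):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def error_units(A, A_hat):
    """Equation (3): concatenate the rectified positive and negative
    parts of the prediction error A - A_hat along the channel axis."""
    return np.concatenate([relu(A - A_hat), relu(A_hat - A)], axis=0)
```

      Keeping both signs as separate channels lets the next layer distinguish over- from under-prediction, something a single signed error map fed through a RELU would discard.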