- Apr 2020
-
www.mdpi.com
-
A few months later, in August 1991, a centralized web-based network, arXiv (https://arxiv.org/, pronounced ‘är kīv’ like the word “archive”, from the Greek letter “chi”), was created. arXiv is arguably the most influential preprint platform and has supported the fields of physics, mathematics, and computer science for over 30 years.
arXiv (pronounced "archive") is another example of preprint technology, in use since the early 1990s.
arXiv covers physics, mathematics, and computer science.
After arXiv launched, roughly 15 years passed with essentially no growth in the number of preprint servers.
-
- Feb 2019
-
arxiv.org
-
Great paper on orphaned annotations in H.
-
- Jan 2019
-
www.inverse.com
-
ArxIV.
What? Somebody help them...
-
- Nov 2018
-
iphysresearch.github.io
-
hep-th
It turns out that some areas of physics research really do love churning out all kinds of "models"...
-
- Nov 2017
-
blogs.cornell.edu
-
Currently, since arXiv lacks an explicit representation of authors and other entities in metadata, ADS must parse author metadata from arXiv heuristically.
It will be interesting to see whether the solution turns out to be hardcore ORCID integration coupled with metadata extraction from submitted manuscripts.
-
ADS shares those matches with us via its API, and we use that information to populate DOI and JREF fields on arXiv papers.
I've always wondered whether this was true. I continue to wonder whether arXiv uses other sources of eprint-DOI matches to corroborate or supplement those from ADS.
-
- Oct 2017
-
blogs.cornell.edu
-
We are pleased to announce that Steinn Sigurdsson has assumed the Scientific Director position. He will collaborate with the arXiv Program Director (Oya Y. Rieger) in overseeing the service and work with arXiv staff and the Scientific Advisory Board (SAB) in providing intellectual leadership for the operation.
Great news!
-
- Jul 2016
-
-
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
xnor-net: a very efficient network
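A minimal sketch (not from the paper's released code) of the core idea: each real-valued weight filter W is approximated as alpha * sign(W), where the scaling factor alpha is the mean absolute value of W; with activations binarized the same way, convolutions reduce to XNOR and popcount operations.

```python
import numpy as np

def binarize_filter(W):
    """Approximate a real-valued filter W by alpha * sign(W) (XNOR-Net weight binarization).

    alpha is the per-filter scaling factor, i.e. the mean absolute value of W.
    """
    alpha = np.mean(np.abs(W))   # scalar scaling factor for this filter
    B = np.sign(W)               # binary filter in {-1, +1} (exact zeros are rare in practice)
    return alpha, B

# Toy usage with a random 3x3x64 filter
W = np.random.randn(3, 3, 64)
alpha, B = binarize_filter(W)
approx = alpha * B               # binary approximation used in place of W at inference time
print("relative reconstruction error:", np.linalg.norm(W - approx) / np.linalg.norm(W))
```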
-
-
arxiv.org
-
Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks
-
-
-
Unsupervised Learning of 3D Structure from Images
Authors: Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, Nicolas Heess
(Submitted on 3 Jul 2016)
Abstract: A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
The 3D representation of a 2D image is ambiguous and multi-modal. We achieve such reasoning by learning a generative model of 3D structures, and recover this structure from 2D images via probabilistic inference.
-
-
arxiv.org
-
When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning as standard practice for improved new task performance.
Learning w/o Forgetting: distilled transfer learning
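A rough sketch of the Learning-without-Forgetting objective as described in the abstract, assuming hypothetical logits arrays: the new-task head is trained with ordinary cross-entropy, while a temperature-softened distillation term keeps the old-task outputs close to what the original network produced on the new-task images.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, new_labels, old_logits, old_logits_recorded, T=2.0, lam=1.0):
    """Learning-without-Forgetting loss (sketch, not the authors' code).

    new_logits:          current network's outputs for the new task
    new_labels:          integer labels for the new task
    old_logits:          current network's outputs for the old task (on new-task images)
    old_logits_recorded: the original network's old-task outputs, recorded before training
    """
    # Standard cross-entropy on the new task
    p_new = softmax(new_logits)
    ce = -np.mean(np.log(p_new[np.arange(len(new_labels)), new_labels] + 1e-12))

    # Distillation term preserves the old-task responses
    p_old_target = softmax(old_logits_recorded, T)
    p_old_current = softmax(old_logits, T)
    distill = -np.mean(np.sum(p_old_target * np.log(p_old_current + 1e-12), axis=-1))

    return ce + lam * distill
```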
-
- Jun 2016
-
arxiv.org
-
Low-shot visual object recognition
-
-
-
Beyond Sharing Weights for Deep Domain Adaptation
-
-
arxiv.org
-
Deep Convolutional Inverse Graphics Network
-
-
arxiv.org
-
Dynamic Filter Networks
"... filters are generated dynamically conditioned on an input" Nice video frame prediction experiments.
-
-
arxiv.org
-
A_t^l = x_t if l = 0, MAXPOOL(RELU(CONV(E_t^{l-1}))) if l > 0    (1)
\hat{A}_t^l = RELU(CONV(R_t^l))    (2)
E_t^l = [RELU(A_t^l - \hat{A}_t^l); RELU(\hat{A}_t^l - A_t^l)]    (3)
R_t^l = CONVLSTM(E_{t-1}^l, R_{t-1}^l, R_t^{l+1})    (4)
A distinctive network structure; the prediction results look promising.
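A small sketch of equation (3) above: the error representation at each layer concatenates the positively and negatively rectified differences between the actual and predicted activations (variable names here are illustrative).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def error_units(A, A_hat):
    """E_t^l = [ReLU(A - A_hat); ReLU(A_hat - A)]  (eq. 3): split the prediction error
    into its positive and negative parts and stack them along the channel axis."""
    return np.concatenate([relu(A - A_hat), relu(A_hat - A)], axis=0)

# Toy usage with channel-first feature maps of shape (C, H, W)
A = np.random.randn(4, 8, 8)       # actual activations A_t^l from eq. (1)
A_hat = np.random.randn(4, 8, 8)   # predicted activations from eq. (2)
E = error_units(A, A_hat)          # shape (8, 8, 8): channel dimension doubles
```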
-
-
-
Unsupervised convolutional neural networks for motion estimation
-
-
arxiv.org
-
Adversarial Feature Learning
-
-
-
Pairwise Decomposition of Image Sequences for Active Multi-View Recognition
Using pairs so ordering is no longer important.
-
-
arxiv.org
-
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
Optimizes a latent code that is fed through a deep generator network, so the generator acts as a learned image prior while the code is tuned to maximize the target neuron's activation.
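A hedged sketch of that idea, with hypothetical stand-in modules rather than the pretrained networks used in the paper: gradient ascent on a latent code so that the generated image drives a chosen unit of the target network.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a generator mapping a latent code to an image, and a
# classifier whose neuron we want to visualize. Real experiments would load trained models.
generator = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1000))
unit = 123                                     # index of the target neuron (illustrative)

z = torch.randn(1, 128, requires_grad=True)    # latent code to optimize
opt = torch.optim.SGD([z], lr=0.1)

for step in range(200):
    opt.zero_grad()
    img = generator(z).view(1, 3, 32, 32)      # synthesize an image from the code
    act = classifier(img)[0, unit]             # activation of the chosen unit
    loss = -act + 1e-3 * z.pow(2).sum()        # gradient ascent on activation, small L2 prior on z
    loss.backward()
    opt.step()
```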
-