- Jul 2016
-
-
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
xnor-net: a very efficient network
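A minimal NumPy sketch of the binary weight approximation the paper proposes (W ≈ αB, with B = sign(W) and α the mean absolute weight); the helper name is mine:

```python
import numpy as np

def binarize(W):
    # XNOR-Net-style weight binarization: approximate W by alpha * B,
    # where B = sign(W) and alpha is the mean absolute value of W.
    alpha = np.abs(W).mean()
    B = np.sign(W)
    return alpha, B

W = np.array([0.5, -1.0, 0.25, -0.25])
alpha, B = binarize(W)
print(alpha, B)  # 0.5 [ 1. -1.  1. -1.]
```

With both weights and activations binarized this way, convolutions reduce to XNOR and popcount operations, which is where the claimed efficiency comes from.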
Tags
Annotators
URL
-
-
www.tensorflow.org
-
A well-known fact is that transferring data to and from GPUs is quite slow. For this reason, we decide to store and update all model parameters on the CPU (see green box).
Not very clear.
-
-
stackoverflow.com
-
True.
a = np.array(1) # 0-d, but different from np.int64(1)
b = np.array([1]) # 1-d, same as a.reshape((1))
c = np.array([[1]]) # 2-d, same as a.reshape((1,1))
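A quick check of the distinction, assuming NumPy:

```python
import numpy as np

a = np.array(1)      # 0-d array: shape ()
b = np.array([1])    # 1-d array: shape (1,)
c = np.array([[1]])  # 2-d array: shape (1, 1)

print(a.shape, b.shape, c.shape)     # () (1,) (1, 1)
print(type(a) is type(np.int64(1)))  # False: ndarray, not a NumPy scalar

# The reshape equivalences from the snippet above
assert b.shape == a.reshape((1,)).shape
assert c.shape == a.reshape((1, 1)).shape
```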
-
-
arxiv.org
-
Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks
-
-
-
Unsupervised Learning of 3D Structure from Images
Authors: Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, Nicolas Heess
(Submitted on 3 Jul 2016)
Abstract: A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
The 3D representation of a 2D image is ambiguous and multi-modal. We achieve such reasoning by learning a generative model of 3D structures, and recover this structure from 2D images via probabilistic inference.
-
-
arxiv.org
-
When building a unified vision system or gradually adding new capabilities to a system, the usual assumption is that training data for all tasks is always available. However, as the number of tasks grows, storing and retraining on such data becomes infeasible. A new problem arises where we add new capabilities to a Convolutional Neural Network (CNN), but the training data for its existing capabilities are unavailable. We propose our Learning without Forgetting method, which uses only new task data to train the network while preserving the original capabilities. Our method performs favorably compared to commonly used feature extraction and fine-tuning adaption techniques and performs similarly to multitask learning that uses original task data we assume unavailable. A more surprising observation is that Learning without Forgetting may be able to replace fine-tuning as standard practice for improved new task performance.
Learning w/o Forgetting: distilled transfer learning
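The "distilled" part refers to knowledge distillation: the network is trained on new-task data while also matching the old model's softened outputs, so the original capabilities are preserved without the original data. A minimal NumPy sketch of such a distillation term (function names and the temperature value are illustrative, not from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(old_logits, new_logits, T=2.0):
    # Cross-entropy between the old network's softened outputs
    # (recorded before training on the new task) and the current outputs.
    p_old = softmax(old_logits, T)
    p_new = softmax(new_logits, T)
    return -(p_old * np.log(p_new + 1e-12)).sum(axis=-1).mean()
```

In training, this term would be added to the ordinary new-task loss, penalizing drift away from the old model's predictions.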
-
- Jun 2016
-
docs.scipy.org
-
Remember that a slicing tuple can always be constructed as obj and used in the x[obj] notation. Slice objects can be used in the construction in place of the [start:stop:step] notation. For example, x[1:10:5,::-1] can also be implemented as obj = (slice(1,10,5), slice(None,None,-1)); x[obj] . This can be useful for constructing generic code that works on arrays of arbitrary dimension.
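A small sketch of the same idea (the variable and helper names are mine):

```python
import numpy as np

x = np.arange(60).reshape(12, 5)

# x[1:10:5, ::-1] written as an explicit tuple of slice objects
obj = (slice(1, 10, 5), slice(None, None, -1))
assert np.array_equal(x[obj], x[1:10:5, ::-1])

# The payoff: generic code over arrays of arbitrary dimension,
# e.g. reverse the last axis of any array.
def reverse_last_axis(a):
    obj = (slice(None),) * (a.ndim - 1) + (slice(None, None, -1),)
    return a[obj]

assert np.array_equal(reverse_last_axis(x), x[:, ::-1])
```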
-
-
github.com
-
Fixed this by including the zlib1g-dev package in the AMI we're using
Solved the problem for me on Ubuntu 14.04
-
-
github.com
-
conda install nomkl
Solved my problem
-
-
arxiv.org
-
Low-shot visual object recognition
-
-
sketchy.eye.gatech.edu
-
The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies
-
-
-
Beyond Sharing Weights for Deep Domain Adaptation
-
-
arxiv.org
-
Deep Convolutional Inverse Graphics Network
-
-
arxiv.org
-
Dynamic Filter Networks
"... filters are generated dynamically conditioned on an input" Nice video frame prediction experiments.
-
-
arxiv.org
-
A_t^l = x_t                              if l = 0
      = MAXPOOL(RELU(CONV(E_t^{l-1})))   if l > 0          (1)
\hat{A}_t^l = RELU(CONV(R_t^l))                            (2)
E_t^l = [RELU(A_t^l - \hat{A}_t^l); RELU(\hat{A}_t^l - A_t^l)]   (3)
R_t^l = CONVLSTM(E_{t-1}^l, R_{t-1}^l, R_t^{l+1})          (4)
A distinctive network structure. Prediction results look promising.
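A minimal NumPy sketch of the error unit in Eq. (3), which stacks rectified positive and negative prediction errors (the function names are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def error_unit(A, A_hat):
    # E_t^l: concatenate positive and negative prediction errors (Eq. 3)
    return np.concatenate([relu(A - A_hat), relu(A_hat - A)], axis=0)

A = np.array([[1.0, -2.0]])      # target activations
A_hat = np.array([[0.5, 1.0]])   # predicted activations
E = error_unit(A, A_hat)
print(E)  # [[0.5 0. ] [0.  3. ]]
```

Splitting the error into positive and negative halves keeps the representation non-negative while preserving the sign information.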
-
-
-
Unsupervised convolutional neural networks for motion estimation
-
-
arxiv.org
-
Adversarial Feature Learning
-
-
-
Pairwise Decomposition of Image Sequences for Active Multi-View Recognition
Using pairs so ordering is no longer important.
-
-
arxiv.org
-
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
Optimize code in reversed network.
-
-
www.lian-li.com
-
Two 140mm fans (front)
Why are the front fans not designed to be center aligned...
-
-
www.oklink.net
-
This "Great Shift of Heaven and Earth" (乾坤大挪移) technique truly inspires admiration.
Truly inspires admiration.
-