- Oct 2019
-
en.wikipedia.org
-
Compact space
So, basically, compact means bounded and closed?
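Answering my own question with standard real-analysis facts: yes in \mathbb{R}^n (Heine–Borel), but not in general spaces, where compactness means every open cover has a finite subcover.
```latex
% Heine--Borel theorem (specific to R^n):
K \subseteq \mathbb{R}^n \text{ is compact} \iff K \text{ is closed and bounded}
```
In a general metric space only one direction survives: compact sets are always closed and bounded, but, e.g., the closed unit ball of an infinite-dimensional Banach space is closed and bounded yet not compact.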
-
- May 2019
-
-
cost function
But what if the cost function (reward function) is not available? Or is it usually available in the cases of interest?
-
- Dec 2018
-
developer.apple.com
-
delegate
A protocol through which an object hands off handling of events, such as user input, to another object.
-
- May 2018
-
jameshaytonphd.com
-
Using research to prove something you passionately believe in can lead to confirmation bias, where you only pay attention to results that support your existing view.
be aware of confirmation bias
-
The natural temptation might be to set your aims as high as possible and make your project as comprehensive as you can. Such projects are easy to imagine, but much harder to implement.
Start small in writing your aims!
-
Another approach is to test the basic assumptions that others in the field have used
Testing the basic assumptions will be of great value to the field.
-
- Jan 2018
-
-
A replacement for numpy to use the power of GPUs
-
- Dec 2017
-
dicl.unist.ac.kr
-
Warp divergence
When threads inside a warp branch to different execution paths. Instead of all 32 threads in the warp executing the same instruction, only the threads on one side of the branch are active at a time; with a two-way branch, on average only half of the threads do useful work when warp divergence occurs. This causes roughly a 50% performance loss.
-
make as many consecutive threads as possible do the same thing
an important take-home message for dealing with branch divergence.
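A minimal CUDA sketch of both cases (kernel names and the branch conditions are my own illustration, not from the slides):
```cuda
// Divergent: even and odd lanes of every warp take different paths,
// so the hardware serializes the two branches within each warp.
__global__ void divergent(float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (threadIdx.x % 2 == 0)
        out[i] = 1.0f;   // half of each warp is active here...
    else
        out[i] = 2.0f;   // ...and the other half here
}

// Coherent: the condition is constant across each 32-thread warp,
// so every warp executes exactly one path at full width.
__global__ void coherent(float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if ((threadIdx.x / 32) % 2 == 0)   // warp-aligned branch
        out[i] = 1.0f;
    else
        out[i] = 2.0f;
}
```
Both kernels do the same total work; only the branch granularity relative to the warp differs.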
-
Warps are run concurrently in an SM
This statement conflicts with the statement that only one warp is executed at a time per SM.
-
Each SM has multiple processors but only one instruction unit
Q: There is only one instruction unit per SM, yet an SM hosts many warps. Does this imply that all warps within the same SM execute the same set of instructions?
A: No. Each SM has a (zero-overhead) warp scheduler that time-slices among warps, prioritizing those that are ready. See the figure on page 6 of http://www.math.ncku.edu.tw/~mhchen/HPC/CUDA/GPGPU_Lecture5.pdf
-
- Nov 2017
-
mpitutorial.com
-
MPI ping pong program
- world_rank, partner_rank: private for each process?
- ping_pong_count: shared?
-> NO! Each process spawns an instance of the same program with its own memory space. Operations that a process carries out on variables in its memory space do not affect the values of variables in another process's memory space. To communicate the value of a variable from one process to another (e.g., to propagate the newly computed value of a variable X to another process), we use MPI_Send and MPI_Recv.
**After MPI_Send, the sent data is packed into a buffer and the program continues (i.e., the sender needs no receiver to proceed, at least while the message fits in MPI's buffering).
However, after MPI_Recv, the program waits until it receives the data.**
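A minimal sketch of the ping-pong loop, following the tutorial's variable names (error handling omitted; assumes exactly two processes):
```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    const int PING_PONG_LIMIT = 10;
    int ping_pong_count = 0;                  // each process has its OWN copy
    int partner_rank = (world_rank + 1) % 2;  // the other of the two processes

    while (ping_pong_count < PING_PONG_LIMIT) {
        if (world_rank == ping_pong_count % 2) {
            // My turn: increment my local copy and send it to my partner.
            ping_pong_count++;
            MPI_Send(&ping_pong_count, 1, MPI_INT, partner_rank, 0,
                     MPI_COMM_WORLD);
        } else {
            // Blocks until the partner's value arrives, overwriting my copy.
            MPI_Recv(&ping_pong_count, 1, MPI_INT, partner_rank, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

    MPI_Finalize();
    return 0;
}
```
The MPI_Recv is what keeps the two private copies of ping_pong_count in sync.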
-
-
www.cs.cmu.edu
-
cudaMalloc
cudaMalloc API: the first (output) argument is the address of the pointer that will hold the allocated device memory.
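A minimal usage sketch (buffer name and size are my own):
```cuda
#include <cuda_runtime.h>

int main(void) {
    float *d_data = NULL;                  // will hold a device address
    size_t bytes = 1024 * sizeof(float);

    // Pass the ADDRESS of the pointer (hence void**): cudaMalloc writes
    // the allocated device address back through it into d_data.
    cudaMalloc((void **)&d_data, bytes);

    // ... launch kernels that read/write d_data ...

    cudaFree(d_data);
    return 0;
}
```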
-
- Dec 2016
-
dpkingma.com
-
L(\theta, \phi; x) is a lower bound on the log-likelihood \log p_{\theta}(x)
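For reference, the bound in standard VAE notation (a sketch from the usual derivation, so double-check against the paper):
```latex
\log p_{\theta}(x)
  \ge \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z)\right]
    - D_{\mathrm{KL}}\!\left(q_{\phi}(z \mid x)\,\|\,p_{\theta}(z)\right)
  = \mathcal{L}(\theta, \phi; x)
```
The gap between the two sides is D_{\mathrm{KL}}(q_{\phi}(z \mid x)\,\|\,p_{\theta}(z \mid x)) \ge 0, which is why \mathcal{L} is a lower bound.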
-
- Aug 2016
-
caffe.berkeleyvision.org
-
C×H×W = 1×1×1?
-
- Jul 2016
-
www.cs.virginia.edu
-
"You and Your Research"
-
-
research.microsoft.com
-
Research skills
-
- Jun 2016
-
cs231n.github.io
-
understand the basic Image Classification pipeline and the data-driven approach (train/predict stages)
This is my first note using 'Hypothes.is'
-