15 Matching Annotations
- Nov 2020
-
arxiv.org
-
There are two steps in our framework: pre-training and fine-tuning. During pre-training, the model is trained on unlabeled data over different pre-training tasks. For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks.
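The two steps quoted above map directly onto common tooling. Below is a minimal sketch, assuming the HuggingFace transformers library and PyTorch; the checkpoint name, label, and learning rate are illustrative and not from the annotated paper.

```python
# Sketch of the pre-train/fine-tune framework quoted above, assuming
# HuggingFace `transformers` and PyTorch (hyperparameters are illustrative).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Step 1: initialize BERT with the pre-trained parameters
# (the unsupervised pre-training itself has already been done).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Step 2: fine-tune ALL parameters on labeled downstream data.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(["a labeled downstream example"], return_tensors="pt")
labels = torch.tensor([1])  # hypothetical downstream label

model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```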
-
- May 2019
-
academic.oup.com
-
(Kanari et al. 2010)
2018?
-
- Mar 2019
-
www.jneurosci.org
-
0.5 mV·ms⁻¹
0.5 mV*ms^-1
-
160 μm
160 um
-
14
-
TRN
BIRNLEX:1721
-
12.6 ± 0.4 ms
12.6 +/- 0.4 ms
-
0.7 μF/cm²
0.7 uF/cm^2
-
12 mV
-
170 Ω·cm
170 ohm*cm
-
rat
BIRNLEX:160
-
400 μm
400 um
-
7 × 10⁻⁵ cm/s
7e-05 cm/s
-
−79 mV
-79 mV
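Every quote/note pair above follows the same convention: characters that PDF extraction tends to drop (μ, ±, Ω, superscripts, the ×10ⁿ in scientific notation, leading minus signs) are restored by the curator in an ASCII-safe form (um, +/-, ohm*cm, ^-1, e-05). A minimal sketch of that mapping as a Python lookup; the names NORMALIZED and ascii_value are illustrative, not part of any export format.

```python
# Hypothetical curation table for the quote/note pairs above: the typeset
# value from the paper on the left, the curator's ASCII-safe form on the
# right.
NORMALIZED = {
    "0.5 mV·ms⁻¹":   "0.5 mV*ms^-1",
    "160 μm":        "160 um",
    "12.6 ± 0.4 ms": "12.6 +/- 0.4 ms",
    "0.7 μF/cm²":    "0.7 uF/cm^2",
    "170 Ω·cm":      "170 ohm*cm",
    "400 μm":        "400 um",
    "7 × 10⁻⁵ cm/s": "7e-05 cm/s",
    "−79 mV":        "-79 mV",
}

def ascii_value(quoted: str) -> str:
    """Return the ASCII-safe curated form of a quoted value, if recorded."""
    return NORMALIZED.get(quoted, quoted)
```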
Tags
- type:exp_cond
- link:brain_region_1
- type:sample_size
- FAQ
- species:rat
- type:capacitance_membrane
- type:time_const_membrane
- type:dendrite_length
- type:eq_potential_ion_curr
- link:liquid_junction_potential_1
- type:liquid_junction_potential
- DEMO
- type:brain_region
- type:internal_resist
- link:sample_size_1
- brain_region:trn
- link:species_1
- type:species
- type:conductance_ion_curr_max
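Most tags above follow a namespace:value convention (type:, link:, species:, brain_region:), with a few bare labels (FAQ, DEMO). A minimal sketch of grouping them by facet, in Python; the tag subset shown and the function name are illustrative.

```python
# Split namespaced tags into (facet, value) pairs; bare tags such as
# "FAQ" and "DEMO" are collected under a generic "label" facet.
tags = [
    "type:exp_cond", "link:brain_region_1", "type:sample_size", "FAQ",
    "species:rat", "brain_region:trn", "type:liquid_junction_potential",
]

def split_tag(tag: str) -> tuple[str, str]:
    facet, _, value = tag.partition(":")
    return (facet, value) if value else ("label", facet)

facets: dict[str, list[str]] = {}
for t in tags:
    facet, value = split_tag(t)
    facets.setdefault(facet, []).append(value)

print(facets)  # e.g. {'type': [...], 'link': [...], 'species': ['rat'], ...}
```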
-
- Nov 2018
-
Local file