- Nov 2018
-
creativecoding.soe.ucsc.edu
-
The issue of representation lies at the heart of the debate between the logic-inspired and the neural-network-inspired paradigms for cognition. In the logic-inspired paradigm, an instance of a symbol is something for which the only property is that it is either identical or non-identical to other symbol instances. It has no internal structure that is relevant to its use; and to reason with symbols, they must be bound to the variables in judiciously chosen rules of inference. By contrast, neural networks just use big activity vectors, big weight matrices and scalar non-linearities to perform the type of fast ‘intuitive’ inference that underpins effortless commonsense reasoning.
The part "... fast 'intuitive' inference that underpins effortless commonsense reasoning" caught my attention. The idea of intuition being "simulated" by a neural network, by a computer, through mathematical mechanisms, is quite interesting.
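It is worth seeing how plain the machinery behind that "intuitive" inference really is. A minimal sketch (the dimensions and names here are my own, purely illustrative): the "big activity vectors, big weight matrices and scalar non-linearities" the authors mention are literally all a feed-forward net is made of.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # A scalar non-linearity, applied element-wise.
    return np.maximum(0.0, z)

x = rng.standard_normal(512)          # activity vector (the input)
W1 = rng.standard_normal((256, 512))  # weight matrix, layer 1
W2 = rng.standard_normal((10, 256))   # weight matrix, layer 2

h = relu(W1 @ x)   # hidden activity vector
y = W2 @ h         # output scores: the whole "inference" is just this
print(y.shape)     # (10,)
```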
-
Deep-learning theory shows that deep nets have two different exponential advantages over classic learning algorithms that do not use distributed representations [21]. Both of these advantages arise from the power of composition and depend on the underlying data-generating distribution having an appropriate componential structure [40].
"... data-generating distribution having an apropriate componential structure."
Interesting. Something I didn't know, even though it seems obvious in hindsight. The question is: how do we know whether a given distribution has that property?
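On that question: as far as I know there is no general test, but "componential structure" means the data are produced by composing reusable parts. A toy sketch of such a generator (entirely my own construction, not from the article): each sample reuses a few "parts", and each part reuses lower-level "strokes". A deep net can mirror this hierarchy layer by layer, while a shallow model must handle every combination separately, which is where the exponential gap comes from.

```python
import numpy as np

rng = np.random.default_rng(1)

n_strokes, n_parts, dim = 8, 16, 64
strokes = rng.standard_normal((n_strokes, dim))      # level-1 pieces
parts = rng.random((n_parts, n_strokes)) @ strokes   # level-2: parts reuse strokes

def sample():
    mask = rng.random(n_parts) < 0.25    # each sample switches a few parts on
    return mask.astype(float) @ parts    # level-3: samples reuse parts

X = np.stack([sample() for _ in range(1000)])
print(X.shape)  # (1000, 64): data with hierarchical, reusable structure
```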
-
In past decades, neural nets used smoother non-linearities, such as tanh(z) or 1/(1+exp(−z)), but the ReLU typically learns much faster in networks with many layers, allowing training of a deep supervised network without unsupervised pre-training [28].
I didn't quite understand this part: "... allowing training of a deep supervised network without unsupervised pre-training".
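My reading of it, hedged: with saturating units like tanh, the gradient shrinks each time it is backpropagated through a layer, so deep nets used to be initialised with unsupervised layer-wise pre-training before supervised fine-tuning. The ReLU's derivative is exactly 1 wherever the unit is active, so gradients survive many layers and plain supervised training works from a random start. A small sketch of the effect (the scaling and depth are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
depth, width = 30, 100

def grad_norm(act_deriv):
    # Push a gradient backwards through `depth` layers and report its
    # final norm; act_deriv maps pre-activations to local derivatives.
    g = np.ones(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(1.0 / width)
        z = W @ rng.standard_normal(width)    # a stand-in pre-activation
        g = W.T @ (g * act_deriv(z))          # chain rule through the layer
    return np.linalg.norm(g)

tanh_deriv = lambda z: 1.0 - np.tanh(z) ** 2   # saturates: derivative < 1
relu_deriv = lambda z: (z > 0).astype(float)   # exactly 1 on active units

print("tanh:", grad_norm(tanh_deriv))  # typically tiny: vanishing gradient
print("relu:", grad_norm(relu_deriv))  # typically far larger
```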
-
Since the 1960s we have known that linear classifiers can only carve their input space into very simple regions, namely half-spaces separated by a hyperplane [19]. But problems such as image and speech recognition require the input–output function to be insensitive to irrelevant variations of the input, such as variations in position, orientation or illumination of an object, or variations in the pitch or accent of speech, while being very sensitive to particular minute variations (for example, the difference between a white wolf and a breed of wolf-like white dog called a Samoyed).
"... to particular minute variations..."
This kind of difference was already familiar to me, but I was surprised by the contrast with the irrelevant variations (position, orientation, etc.). Surprised in the sense of "how interesting, it really isn't easy (for a DL system?) to tell these differences apart".
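The half-space limitation from that 1960s result [19] has a classic minimal illustration, sketched below with my own toy code: no hyperplane sign(w·x + b) can realise the XOR labelling, and the Samoyed-vs-wolf case needs something far subtler still.

```python
import numpy as np

# A linear classifier sign(w @ x + b) splits the plane into two
# half-spaces. The four XOR points are the classic labelling that no
# such split can get entirely right.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels

def accuracy(w, b):
    return np.mean((X @ w + b > 0).astype(int) == y)

# Brute-force search over many random hyperplanes: the best any of them
# achieves is 3 of the 4 points.
rng = np.random.default_rng(3)
best = max(accuracy(rng.standard_normal(2), rng.standard_normal())
           for _ in range(10_000))
print(best)  # 0.75
```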
-