- Oct 2024
-
x.com
-
The similarity is because they are all saying roughly the same thing: Total (result) = Kinetic (cost) + Potential (benefit). Cost is either imaginary squared or negative (space-like), benefit is real (time-like), and the result is mass-like. Just like in physics, the economically unfavourable models are the negative results. In economics, diversity of products is a strength, as it allows better recovery from the failure of any one; comically, DEI of people fails miserably at this, because all people are not equal.
Here are some other examples you will know if you do physics: E² + (ipc)² = (mc²)² (the relativistic Einstein equation), mass being the result, energy time-like (potential), momentum space-like (kinetic). (∇² − (1/c²)∂²/∂t²)ψ = (mc/ℏ)²ψ (the Klein-Gordon equation), where mass is the result, ∂²/∂t² the potential, and ∇² the kinetic. Finally we have the Dirac equation, which unlike the previous two "sums of squares" is more like vector addition (first-order differentials, not second): iℏγ⁰∂₀ψ + iℏγⁱ∂ᵢψ = mcψ. The first part is still the time-like potential, the second the space-like kinetic, and the mass is still the result, all the same.
This is because energy in all its forms, on a flat worksheet (free from outside influence), acts just like a triangle between potential, kinetic and resultant energies. That is, it is always of the form k² + p² = r², and quite often the kinetic is imaginary relative to the potential, as in the (+,−,−,−) spacetime metric and quaternion mathematics. So r² can be negative, or the result imaginary, if costs outweigh benefits, or work in is greater than work out: a useless but still mathematical solution. Just like in physics, you always want the mass, or result, to be positive and real, or you're going to lose energy to the surrounding field, with negative returns. Economic net losses do not last long, just like imaginary particles in physics.
in reply to Cesar A. Hidalgo at https://x.com/realAnthonyDean/status/1844409919161684366
via Anthony Dean @realAnthonyDean
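For reference, the standard textbook forms of the three equations invoked above (the tweet's versions are operator shorthands):

```latex
% Relativistic energy-momentum relation:
E^2 = (pc)^2 + (mc^2)^2
% Klein--Gordon equation:
\left( \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 + \frac{m^2 c^2}{\hbar^2} \right) \psi = 0
% Dirac equation:
i\hbar \gamma^{\mu} \partial_{\mu} \psi - mc\,\psi = 0
```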
-
- Jul 2023
-
openaccess.thecvf.com
-
Xu, ICCV, 2019 "Temporal Recurrent Networks for Online Action Detection"
arxiv: https://arxiv.org/abs/1811.07391 hypothesis: https://hyp.is/go?url=https%3A%2F%2Fopenaccess.thecvf.com%2Fcontent_ICCV_2019%2Fpapers%2FXu_Temporal_Recurrent_Networks_for_Online_Action_Detection_ICCV_2019_paper.pdf&group=world
-
-
blogs.nvidia.com
Tags
- ai
- machine learning
- wikipedia:en=Transformer_(machine_learning_model)
- wikipedia:en=Self-supervised_learning
- wikipedia:en=BERT_(language_model)
- cito:cites=doi:10.48550/arXiv.1706.03762
- cito:cites=doi:10.48550/arXiv.2108.07258
- wikipedia:en=Artificial_neural_network
- wikipedia:en=Attention_(machine_learning)
- neural networks
-
- Jun 2023
-
cdn.openai.com
-
Recent work in computer vision has shown that common image datasets contain a non-trivial amount of near-duplicate images. For instance CIFAR-10 has 3.3% overlap between train and test images (Barz & Denzler, 2019). This results in an over-reporting of the generalization performance of machine learning systems.
CIFAR-10 performance results are overestimates since some of the training data is essentially in the test set.
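Barz & Denzler use more careful duplicate detection; purely to illustrate the idea, here is a naive sketch that flags test images whose coarse grayscale "fingerprint" is suspiciously close to a training image (the function names and the threshold are invented, and toy data stands in for CIFAR-10):

```python
import numpy as np

def fingerprint(img, size=8):
    """Reduce an HxWx3 image to a mean-centred size*size grayscale vector."""
    gray = img.mean(axis=2)
    ys = np.linspace(0, gray.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, size).astype(int)
    small = gray[np.ix_(ys, xs)]
    return (small - small.mean()).ravel()

def near_duplicates(train_imgs, test_imgs, threshold=5.0):
    train_fp = np.stack([fingerprint(im) for im in train_imgs])
    hits = []
    for i, im in enumerate(test_imgs):
        dists = np.linalg.norm(train_fp - fingerprint(im), axis=1)
        if dists.min() < threshold:
            hits.append((i, int(dists.argmin())))
    return hits

# Toy stand-ins for CIFAR-10's 32x32x3 images; 3 test images copy training ones.
rng = np.random.default_rng(0)
train = rng.integers(0, 256, (100, 32, 32, 3), dtype=np.uint8)
test = np.concatenate([train[:3], rng.integers(0, 256, (7, 32, 32, 3), dtype=np.uint8)])
print(near_duplicates(train, test))  # expect hits (0, 0), (1, 1), (2, 2)
```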
-
- May 2023
-
docdrop.org
-
It turns out that backpropagation is a special case of a general technique in numerical analysis called automatic differentiation
Automatic differentiation is a technique in numerical analysis. That's why Real Analysis is an important area of mathematics to study if one wants to go into AI research.
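To make the quoted claim concrete, here is a minimal sketch of the forward-mode flavour of automatic differentiation using dual numbers (the `Dual` class is purely illustrative; backpropagation is the reverse-mode counterpart of the same chain-rule bookkeeping):

```python
import math

class Dual:
    """Number a + b*eps with eps**2 == 0; the b component carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule falls out of (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps.
        return Dual(self.val * other.val, self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary function: d(sin u) = cos(u) * du.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x + sin(x)   # f(x) = x^2 + sin(x)

x = Dual(1.5, 1.0)           # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)          # f(1.5) and f'(1.5) = 2*1.5 + cos(1.5)
```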
-
- Dec 2022
- Nov 2022
-
www.researchgate.net
-
In recent years, neural network based topic models have been proposed for many NLP tasks, such as information retrieval [11], aspect extraction [12] and sentiment classification [13]. The basic idea is to construct a neural network which aims to approximate the topic-word distribution in probabilistic topic models. Additional constraints, such as incorporating prior distribution [14], enforcing diversity among topics [15] or encouraging topic sparsity [16], have been explored for neural topic model learning and proved effective.
Neural topic models are often trained to mimic the behaviours of probabilistic topic models (a minimal sketch of the idea follows the list below). I should come back and look at some of these works:
- R. Das, M. Zaheer, and C. Dyer, “Gaussian LDA for topic models with word embeddings,”
- P. Xie, J. Zhu, and E. P. Xing, “Diversity-promoting bayesian learning of latent variable models,”
- M. Peng, Q. Xie, H. Wang, Y. Zhang, X. Zhang, J. Huang, and G. Tian, “Neural sparse topical coding,”
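As promised above, a minimal sketch (my own construction, not any of the cited models) of the pattern the quoted passage describes: encode a bag-of-words vector into topic proportions, then reconstruct word probabilities through a learned topic-word matrix, the neural stand-in for the topic-word distribution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTopicModel(nn.Module):
    def __init__(self, vocab_size, n_topics):
        super().__init__()
        self.encoder = nn.Linear(vocab_size, n_topics)                       # bag-of-words -> topic logits
        self.beta = nn.Parameter(0.01 * torch.randn(n_topics, vocab_size))   # topic-word weights

    def forward(self, bow):
        theta = F.softmax(self.encoder(bow), dim=-1)    # document-topic proportions
        return theta @ F.softmax(self.beta, dim=-1)     # mix the topic-word distributions

model = NeuralTopicModel(vocab_size=2000, n_topics=20)
bow = torch.rand(8, 2000)                               # stand-in bag-of-words counts
probs = model(bow)                                      # per-document word probabilities
loss = -(bow * torch.log(probs + 1e-10)).sum()          # reconstruction negative log-likelihood
loss.backward()
```

Diversity or sparsity constraints like those in [15] and [16] would enter as extra penalty terms on beta in the loss.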
-
- Oct 2022
-
www.robinsloan.com
-
https://www.robinsloan.com/notes/writing-with-the-machine/
Related work leading up to this video: https://vimeo.com/232545219
-
- Jan 2022
-
vimeo.com
-
from: Eyeo Conference 2017
Description
Robin Sloan at Eyeo 2017 | Writing with the Machine | Language models built with recurrent neural networks are advancing the state of the art on what feels like a weekly basis; off-the-shelf code is capable of astonishing mimicry and composition. What happens, though, when we take those models off the command line and put them into an interactive writing environment? In this talk Robin presents demos of several tools, including one presented here for the first time. He discusses motivations and process, shares some technical tips, proposes a course for the future — and along the way, writes at least one short story together with the audience: all of us, and the machine.
Notes
Robin created a corpus using If Magazine and Galaxy Magazine from the Internet Archive and used it as a writing tool. He talks about using a few other models for generating text.
Some of the ideas here are reminiscent of the way John McPhee used the 1913 Webster Dictionary for finding words (or le mot juste) for his work, as tangentially suggested in Draft #4 in The New Yorker (2013-04-22).
Cross reference: https://hypothes.is/a/t2a9_pTQEeuNSDf16lq3qw and https://hypothes.is/a/vUG82pTOEeu6Z99lBsrRrg from https://jsomers.net/blog/dictionary
Croatian a cappella singing: klapa https://www.youtube.com/watch?v=sciwtWcfdH4
Writing using the adjacent possible.
Corpus building as an art [~37:00]
Forgetting what one trained their model on and then seeing the unexpected come out of it. This is similar to Luhmann's use of the zettelkasten as a serendipitous writing partner.
Open questions
How might we use information theory to do this more easily?
What does a person or machine's "hand" look like in the long term with these tools?
Can we use corpus linguistics in reverse for this?
What sources would you use to train your model?
References:
- Andrej Karpathy. 2015. "The Unreasonable Effectiveness of Recurrent Neural Networks"
- Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, et al. 2015. "Generating Sentences from a Continuous Space." arXiv:1511.06349
- Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. "A Hybrid Convolutional Variational Autoencoder for Text Generation." arXiv:1702.02390
- Soroush Mehri, et al. 2017. "SampleRNN: An Unconditional End-to-End Neural Audio Generation Model." arXiv:1612.07837 (applies neural networks to sound and sound production)
-
- May 2021
-
colab.research.google.com
- Mar 2021
-
arxiv.org
-
Kozlowski, Diego, Jennifer Dusdal, Jun Pang, and Andreas Zilian. ‘Semantic and Relational Spaces in Science of Science: Deep Learning Models for Article Vectorisation’. ArXiv:2011.02887 [Physics], 5 November 2020. http://arxiv.org/abs/2011.02887.
-
- Jul 2020
-
psyarxiv.com
-
Wool, Lauren E, and The International Brain Laboratory. ‘Knowledge across Networks: How to Build a Global Neuroscience Collaboration’. Preprint. PsyArXiv, 14 July 2020. https://doi.org/10.31234/osf.io/f4uaj.
-
- Apr 2020
-
www.analyticsvidhya.com
-
Import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.

```python
import os
import librosa  # for audio processing
import IPython.display as ipd
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile  # for audio processing
import warnings
warnings.filterwarnings("ignore")
```

Data Exploration and Visualization helps us to understand the data as well as the pre-processing steps in a better way.
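A minimal sketch of that exploration step: load a single recording with librosa and plot its waveform (the file path is a placeholder for wherever the dataset is unpacked):

```python
import librosa
import matplotlib.pyplot as plt
import numpy as np

# Load one clip at the dataset's 16 kHz sampling rate.
samples, sample_rate = librosa.load("train/audio/yes/example.wav", sr=16000)

# Plot amplitude against time in seconds.
t = np.arange(len(samples)) / sample_rate
plt.plot(t, samples)
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.show()
```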
-
TensorFlow recently released the Speech Commands Datasets. It includes 65,000 one-second long utterances of 30 short words, by thousands of different people. We’ll build a speech recognition system that understands simple spoken commands. You can download the dataset from here.
-
In the 1980s, the Hidden Markov Model (HMM) was applied to the speech recognition system. HMM is a statistical model used to model problems that involve sequential information, and it has a pretty good track record in many real-world applications, including speech recognition. In 2001, Google introduced the Voice Search application, which allowed users to search for queries by speaking to the machine. This was the first voice-enabled application that was widely popular, and it made conversation between people and machines a lot easier. By 2011, Apple launched Siri, which offered a real-time, faster, and easier way to interact with Apple devices by just using your voice. As of now, Amazon's Alexa and Google's Home are the most popular voice-command-based virtual assistants being widely used by consumers across the globe.
-
Learn how to Build your own Speech-to-Text Model (using Python). Aravind Pai, July 15, 2019.
Overview:
- Learn how to build your very own speech-to-text model using Python in this article
- The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today
- We will use a real-world dataset and build this speech-to-text model, so get ready to use your Python skills!
-
-
keras.io
-
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Use Keras if you need a deep learning library that:
- Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
- Supports both convolutional networks and recurrent networks, as well as combinations of the two.
- Runs seamlessly on CPU and GPU.
Read the documentation at Keras.io. Keras is compatible with: Python 2.7-3.6.
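A minimal sketch of that "idea to result" workflow on made-up data (the layer sizes and toy task are arbitrary; shown with the standalone-Keras imports of that era):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy binary classification: 100 samples with 20 features each.
X = np.random.rand(100, 20)
y = (X.sum(axis=1) > 10).astype("float32")

model = Sequential([
    Dense(32, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
print(model.predict(X[:3]))
```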
-
- Jul 2019
-
jmlr.csail.mit.edu
-
Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.
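A toy illustration of that finding (the score function is invented to mimic the paper's observation that only a few hyperparameters matter much):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(lr, dropout):
    # Pretend validation score: learning rate dominates, dropout barely matters.
    return -(np.log10(lr) + 3) ** 2 - 0.1 * (dropout - 0.5) ** 2

# Grid search: 16 trials but only 4 distinct learning rates.
grid = [(lr, dp) for lr in np.logspace(-5, -1, 4) for dp in np.linspace(0.1, 0.9, 4)]
# Random search: 16 trials, 16 distinct learning rates over the same ranges.
rand = [(10 ** rng.uniform(-5, -1), rng.uniform(0.1, 0.9)) for _ in range(16)]

for name, trials in [("grid", grid), ("random", rand)]:
    best = max(trials, key=lambda p: score(*p))
    print(name, best, score(*best))
```

With the same 16-trial budget, random search probes 16 values of the important hyperparameter where the grid probes only 4, which is the paper's core argument.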
-
- Jun 2019
-
en.wikipedia.org
-
Throughout the past two decades, he has been conducting research in the fields of psychology of learning and hybrid neural networks (in particular, applying these models to research on human skill acquisition). Specifically, he has worked on the integrated effect of "top-down" and "bottom-up" learning in human skill acquisition,[1][2] in a variety of task domains, for example, navigation tasks,[3] reasoning tasks, and implicit learning tasks.[4] This inclusion of bottom-up learning processes has been revolutionary in cognitive psychology, because most previous models of learning had focused exclusively on top-down learning (whereas human learning clearly happens in both directions). This research has culminated in the development of an integrated cognitive architecture that can be used to provide a qualitative and quantitative explanation of empirical psychological learning data. The model, CLARION, is a hybrid neural network that can be used to simulate problem solving and social interactions as well. More importantly, CLARION was the first psychological model that proposed an explanation for the "bottom-up learning" mechanisms present in human skill acquisition; his numerous papers on the subject have brought attention to this neglected area in cognitive psychology.
-
-
sebastianraschka.com
-
However, this doesn't mean that Min-Max scaling is not useful at all! A popular application is image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range). Also, typical neural network algorithms require data on a 0-1 scale.
Use min-max scaling for image processing & neural networks.
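A minimal sketch of the transform itself, rescaling values linearly into [0, 1] via (x − min) / (max − min) (the function name and example values are mine):

```python
import numpy as np

def min_max_scale(x, lo=0.0, hi=1.0):
    """Rescale x linearly so its min maps to lo and its max maps to hi."""
    x = x.astype("float64")
    unit = (x - x.min()) / (x.max() - x.min())
    return unit * (hi - lo) + lo

pixels = np.array([0, 64, 128, 255])   # RGB intensities
print(min_max_scale(pixels))           # [0.  0.2509...  0.5019...  1.]
```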
-
- Mar 2019
- Oct 2018
-
www.slideshare.net
-
Do neural networks dream of semantics?
Neural networks in visual analysis, linguistics
Knowledge graph applications:
- Data integration
- Visualization
- Exploratory search
- Question answering
Future goals: neuro-symbolic integration (symbolic reasoning and machine learning)
-
- Aug 2017
-
arxiv.org
-
This is a very easy paper to follow, and it looks like their methodology is a simple way to improve performance on limited data. I'm curious how well this is reproduced elsewhere.
-
- Apr 2017
-
www.tensorflow.org
-
If we write that out as equations, we get:
It would be easier to understand what x, y, and W are here if the actual numbers were used, like 784, 10, 55000, etc. In this simple example there are 3 x's and 3 y's, which is misleading. In reality there are 784 x elements (one for each pixel) and 55,000 such x arrays, and only 10 y elements (one for each digit), again 55,000 of them.
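As a sanity check on those shapes, a tiny numpy sketch using the real MNIST dimensions (random data stands in for the images):

```python
import numpy as np

n_samples, n_pixels, n_classes = 55000, 784, 10
x = np.random.rand(n_samples, n_pixels)   # 55,000 flattened 28x28 images
W = np.zeros((n_pixels, n_classes))       # one weight per (pixel, digit) pair
b = np.zeros(n_classes)                   # one bias per digit

logits = x @ W + b                                              # shape (55000, 10)
y = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax
print(y.shape)                                                  # (55000, 10)
```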
-
- Nov 2016
-
roachsinai.github.io
-
What the Softmax classifier does is minimize the cross-entropy between the estimated class probabilities (that is, \(L_i = e^{f_{y_i}} / \sum_j e^{f_j}\)) and the "true" distribution.
The benefit of this is that a misclassified sample produces a very large gradient. With logistic regression [under a squared-error cost], by contrast, the more severe the misclassification, the slower the algorithm converges. For example, if \(t_i = 1\) and \(y_i = 0.0000001\), with cost function \(E = \frac{1}{2}(t - y)^2\), then \(\frac{dE}{dw_i} = -(t - y)\,y(1 - y)\,x_i\); since \(y(1 - y) \approx 10^{-7}\), the gradient nearly vanishes even though the error is maximal.
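A quick numeric check of that claim (plain Python; the values come from the example above, and \(-(t - y)x_i\) is the standard sigmoid-plus-cross-entropy gradient):

```python
# Badly misclassified example: target 1, sigmoid output ~1e-7.
t, y, x_i = 1.0, 1e-7, 1.0

grad_squared_error = -(t - y) * y * (1 - y) * x_i  # dE/dw_i for E = (t - y)^2 / 2
grad_cross_entropy = -(t - y) * x_i                # dE/dw_i for cross-entropy + sigmoid

print(grad_squared_error)  # ~ -1e-7: vanishes despite the huge error
print(grad_cross_entropy)  # ~ -1.0:  stays large, so learning proceeds
```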
-
- Jan 2016
-
thinkingmachines.mit.edu
-
n and d
What do n and d mean?
-
- Jul 2015
-
-
- Jun 2015
-
www.technologyreview.com
-
Enter the Daily Mail website, MailOnline, and CNN online. These sites display news stories with the main points of the story displayed as bullet points that are written independently of the text. “Of key importance is that these summary points are abstractive and do not simply copy sentences from the documents,” say Hermann and co.
Someday, maybe projects like Hypothesis will help teach computers to read, too.
-