System A is all about integrity and health and the folk not as nodes in a machine, but as a growing, adapting, distributed and living whole. It is the difference between a neighborhood and a housing development.
Organic
Together, they plotted how they could take leading-edge machine-learning techniques and superimpose them on the universe.
Interesting!
Robert Mercer, Steve Bannon, Breitbart, Cambridge Analytica, Brexit, and Trump.
“The danger of not having regulation around the sort of data you can get from Facebook and elsewhere is clear. With this, a computer can actually do psychology, it can predict and potentially control human behaviour. It’s what the scientologists try to do but much more powerful. It’s how you brainwash someone. It’s incredibly dangerous.
“It’s no exaggeration to say that minds can be changed. Behaviour can be predicted and controlled. I find it incredibly scary. I really do. Because nobody has really followed through on the possible consequences of all this. People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”
-- Jonathan Rust, Cambridge University Psychometric Centre
AI criticism is also limited by the accuracy of human labellers, who must carry out a close reading of the ‘training’ texts before the AI can kick in. Experiments show that readers tend to take longer to process events that are distant in time or separated by a time shift (such as ‘a day later’).
Even though AI annotation schemes are versatile and expressive, they’re not foolproof. Longer, book-length texts are prohibitively expensive to annotate, so the power of the algorithms is restricted by the quantity of data available for training them.
In most cases, this analysis involves what’s known as ‘supervised’ machine learning, in which algorithms train themselves from collections of texts that a human has laboriously labelled.
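A minimal sketch of that supervised workflow, assuming scikit-learn and a few invented hand-labelled snippets (none of this comes from the article itself):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The expensive, human part: texts labelled by a careful reader.
texts = ["A day later, she returned home.",
         "He opened the door and smiled.",
         "Years passed before they met again.",
         "She poured the coffee and sat down."]
labels = ["time_shift", "continuous", "time_shift", "continuous"]

# The algorithm then trains itself on the human-labelled collection.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Many years later, the town had changed."]))
```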
The team on Google Translate has developed a neural network that can translate language pairs for which it has not been directly trained. "For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English."
Ground truth in machine learning:
In machine learning, "ground truth" refers to the set of labels taken as correct (typically the human-assigned classifications of the training set), against which a supervised learning technique is trained and evaluated.
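A toy illustration of the idea (labels and values invented): accuracy only means something relative to the ground truth, so if the human labels are wrong, the score is misleading.

```python
from sklearn.metrics import accuracy_score

# Ground truth: the labels the evaluation set asserts are correct.
y_true = [1, 0, 1, 1, 0, 1]
# What a trained classifier actually predicted.
y_pred = [1, 0, 0, 1, 0, 1]

print(accuracy_score(y_true, y_pred))  # 0.833...
```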
A team at Facebook reviewed thousands of headlines using these criteria, validating each other’s work to identify a large set of clickbait headlines. From there, we built a system that looks at the set of clickbait headlines to determine what phrases are commonly used in clickbait headlines that are not used in other headlines. This is similar to how many email spam filters work.
Though details are scarce, the very idea that Facebook would tackle this problem with both humans and algorithms is reassuring. The common argument against human filtering is that it doesn’t scale. The common argument against algorithmic filtering is that it requires good signal (though some transhumanists keep saying that things are getting better). So it’s useful to know that Facebook used such a hybrid approach. Of course, even algo-obsessed Google has used human filtering. Or, at least, human judgment to tweak their filtering algorithms. (Can’t remember who was in charge of this. Was a semi-frequent guest on This Week in Google… Update: Matt Cutts.) But this very simple “we sat down and carefully identified stuff we think qualifies as clickbait before we fed the algorithm” is refreshingly clear.
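Facebook hasn’t published its model, but the “phrases common in clickbait headlines, rare in others” idea it describes is essentially what a Naive Bayes spam filter does. A hedged sketch, with invented headlines standing in for the human-validated set:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented examples standing in for the human-validated headline set.
headlines = [
    "You won't believe what happened next",
    "10 tricks doctors don't want you to know",
    "Senate passes infrastructure bill",
    "Quarterly earnings beat analyst estimates",
]
labels = ["clickbait", "news", ][:1] * 2 + ["news"] * 2

# Phrase (n-gram) counts feed a Naive Bayes model, as in spam filtering.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(headlines, labels)

print(clf.predict(["What happened next will shock you"]))
```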
Docker is often described as a kind of lightweight virtual machine, but it is really a container platform: containers share the host kernel rather than emulating hardware.
How does it compare to packages installed directly? It could be useful for development, but maybe not practical for HPC applications. Alternatively, one could just create a CD ISO with all the correct programs and their dependencies.
2013 article about Douglas Hofstadter, who has continued to pursue an understanding of the human mind through experiments with AI.
the algorithm was somewhat more accurate than a coin flip
In machine learning it's also important to evaluate not just against random, but against how well other methods (e.g. parole boards) do. That kind of analysis would be nice to see.
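In scikit-learn terms, the “better than a coin flip” check is a dummy baseline; comparing against the incumbent method (e.g. parole-board decisions) would add a second baseline. A sketch with placeholder data:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                          # placeholder features
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # placeholder outcome

model = LogisticRegression()
coin_flip = DummyClassifier(strategy="uniform", random_state=0)

print("model:    ", cross_val_score(model, X, y).mean())
print("coin flip:", cross_val_score(coin_flip, X, y).mean())
# A real evaluation would add a third row: the existing human process.
```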
We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.
Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.
Patrick Ball—a data scientist and the director of research at the Human Rights Data Analysis Group—who has previously given expert testimony before war crimes tribunals, described the NSA's methods as "ridiculously optimistic" and "completely bullshit." A flaw in how the NSA trains SKYNET's machine learning algorithm to analyse cellular metadata, Ball told Ars, makes the results scientifically unsound.
“Search is the cornerstone of Google,” Corrado said. “Machine learning isn’t just a magic syrup that you pour onto a problem and it makes it better. It took a lot of thought and care in order to build something that we really thought was worth doing.”
n and d
What do n and d mean?
UT Austin SDS 348, Computational Biology and Bioinformatics. Course materials and links: R, regression modeling, ggplot2, principal component analysis, k-means clustering, logistic regression, Python, Biopython, regular expressions.
Python interface to the R programming language. Use R functions and packages from Python.
https://pypi.python.org/pypi/rpy2
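A minimal usage sketch, assuming R and the rpy2 package are installed:

```python
import rpy2.robjects as robjects

# Look up an R function and call it on data converted from Python.
r_mean = robjects.r["mean"]
values = robjects.FloatVector([1.0, 2.0, 3.0, 4.0])
print(r_mean(values)[0])  # 2.5

# Arbitrary R expressions can be evaluated directly as well.
print(robjects.r("rnorm(3)"))
```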
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
They're hiring: https://openai.com/about/
Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
a study by Stephen Schueller, published last year in the Journal of Positive Psychology, found that people assigned to a happiness activity similar to one for which they previously expressed a preference showed significantly greater increases in happiness than people assigned to an activity not based on a prior preference. This, writes Schueller, is “a model for positive psychology exercises similar to Netflix for movies or Amazon for books and other products.”
TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.
https://github.com/rhiever/tpot TPOT (Tree-based Pipeline Optimization Tool) Built on numpy, scipy, pandas, scikit-learn, and deap.
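A typical run, following TPOT’s documented quick-start pattern (the digits dataset here is just an example choice):

```python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), test_size=0.25, random_state=42)

# Genetic programming searches over preprocessing + model pipelines.
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))

# Export the best pipeline found as a standalone scikit-learn script.
tpot.export("best_pipeline.py")
```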
Nanodegree Program Summary Machine learning represents a key evolution in the fields of computer science, data analysis, software engineering, and artificial intelligence. It has quickly become industry's preferred way to make sense of the staggering volume of data our modern world produces. Machine learning engineers build programs that dynamically perform the analyses that data scientists used to perform manually. These programs can “learn” based on millions of experiences, all rigorously and numerically defined.
The machine, of course, is not complete without a third party, the (human) operator, and it is within this triad that the text takes place.
It reminds me of the machine performance, the human performance and the idea of the text as a performative event (as in Johanna Drucker's theory of performative materiality).
I have the feeling we do not need models as complicated as some outlined in the text; we can (and ultimately will have to) abstract away most of the issues we can imagine. I expect that "magic" (an undisclosed heuristic, perhaps in combination with machine learning) will deal with the issues: a black box that will be considered inherently flawed and practical enough at the same time. Results from experimental ethics can help shape the heuristic, while the need for easy implementation and maintainability will limit the applications significantly.
This is a really useful visualization of the Multi Class SVM loss
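For reference, the loss being visualized: for class scores s and correct class y, L = Σ_{j≠y} max(0, s_j − s_y + Δ). A small NumPy version, with invented example scores:

```python
import numpy as np

def multiclass_svm_loss(scores, correct_class, delta=1.0):
    """Hinge loss summed over all wrong classes:
    L = sum over j != y of max(0, s_j - s_y + delta)."""
    margins = np.maximum(0, scores - scores[correct_class] + delta)
    margins[correct_class] = 0  # the correct class contributes nothing
    return margins.sum()

# Three class scores, correct class at index 0 -> loss 2.9.
print(multiclass_svm_loss(np.array([3.2, 5.1, -1.7]), 0))
```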
This course introduces students to the basic knowledge representation, problem solving, and learning methods of artificial intelligence.
Enter the Daily Mail website, MailOnline, and CNN online. These sites display news stories with the main points of the story displayed as bullet points that are written independently of the text. “Of key importance is that these summary points are abstractive and do not simply copy sentences from the documents,” say Hermann and co.
Someday, maybe projects like Hypothesis will help teach computers to read, too.
Logistic regression, also called a logit model, is used to model dichotomous outcome variables. In the logit model the log odds of the outcome is modeled as a linear combination of the predictor variables.
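Concretely, writing p for the probability of the outcome and x_1, …, x_k for the predictors:

```latex
\log\frac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k
\quad\Longleftrightarrow\quad
p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}
```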
In her talk, Claudia Müller-Birn distinguishes the following "types of scholarly reading":
referring to N. Katherine Hayles; cf. Beyond reading in print – “close reading” vs “hyper reading”. In: Communication Mediated. 28.08.2013
Boltzmann Machines
What is a Boltzmann machine?
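Short answer to the note above: a Boltzmann machine is a network of binary stochastic units with symmetric weights, where each joint configuration s is assigned an energy and low-energy configurations are more probable. The standard formulation:

```latex
E(s) = -\sum_{i<j} w_{ij}\, s_i s_j - \sum_i b_i s_i,
\qquad
P(s) = \frac{e^{-E(s)}}{\sum_{s'} e^{-E(s')}}
```

The restricted Boltzmann machine (RBM), which keeps only visible-to-hidden connections, is the variant most often trained in practice because inference within a layer becomes tractable.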