401 Matching Annotations
  1. Dec 2020
  2. Nov 2020
    1. Some of the verbs implemented by systemctl are designed to provide a high-level overview in a human readable format. All that information is available over dbus, and/or journalctl, systemctl show. We could provide that information in JSON format, but there's a second problem. The information printed by e.g. systemctl status, and its format, are not stable. Since the output is not suitable for programmatic consumption anyway, there's no need to provide it in a machine readable format.
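
      For scripting, the stable interfaces mentioned above are the ones to parse. As a rough sketch of my own (not from the thread): systemctl show emits key=value pairs and journalctl can emit JSON with -o json, so a script never needs to touch systemctl status. For example, in Python:

          # Minimal sketch (my example): parse `systemctl show <unit>` key=value output
          # instead of scraping the human-oriented `systemctl status`.
          import subprocess

          def unit_properties(unit: str) -> dict:
              """Return the properties of a systemd unit as a dict."""
              out = subprocess.run(
                  ["systemctl", "show", unit],
                  capture_output=True, text=True, check=True,
              ).stdout
              props = {}
              for line in out.splitlines():
                  key, _, value = line.partition("=")
                  props[key] = value
              return props

          props = unit_properties("sshd.service")   # unit name is just an example
          print(props.get("ActiveState"), props.get("MainPID"))
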
  3. Oct 2020
    1. Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they’ve been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints.

      3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they've kicked this firm off their site, but I think they've got a lot of explaining to do.”).

      4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).

      5. Julia Carrie Wong, #Breaking Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.)) (“Curious why I think FB has too much power? Let's start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn't dominated by a single censor. #BreakUpBigTech.”).

      6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.)) (“What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).

      Most of these problems fall under one subheading: the problems that result when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it's inflammatory the silos promote it, which draws even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from the engagement for advertising purposes, but the community and the commons are irreparably harmed.

      If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the "masses" (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning's coffee or today's lunch marginalized.

      To analogize it, we've provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

      If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn't need as much human curation.

    2. It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

      But platforms are making huge decisions about who is allowed to speak. While they're generally allowing everyone to have a voice, they're also very subtly privileging many voices over others. While they're providing space for even the least among us to have a voice, they're making far too many of the worst and most powerful among us logarithmically louder.

      It's not broadly obvious, but their algorithms are plainly handing massive megaphones to people who society broadly thinks shouldn't have a voice at all. These megaphones come in the algorithmic amplification of fringe ideas which accelerate them into the broader public discourse toward the aim of these platforms getting more engagement and therefore more eyeballs for their advertising and surveillance capitalism ends.

      The issue we ought to be looking at is the dynamic range between people and the messages they're able to send through social platforms.

      We could also analogize this to the voting situation in the United States. When we make it harder for the poor, disabled, differently abled, or marginalized to vote while simultaneously giving the uber-rich outsized influence because of what they're able to buy, we're imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.

      If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I'd have to actively search that content out for it to cause me that sort of harm.

    1. As an American and a staunch defender of the First Amendment, I’m allergic to the notion of forbidden speech. But if government is going to forbid it, it damned well better clearly define what is forbidden or else the penumbra of prohibition will cast a shadow and chill on much more speech.

      Perhaps it's not what people are saying so much as platforms are accelerating it algorithmically? It's one thing for someone to foment sedition, praise Hitler, or yell their religious screed on the public street corner. The problem comes when powerful interests in the form of governments, corporations, or others provide them with megaphones and tacitly force audiences to listen to it.

      When Facebook or Youtube optimize for clicks keyed on social and psychological constructs using fringe content, we're essentially saying that machines, bots, and extreme fringe elements are not only people, but that they've got free speech rights, and they can be prioritized with the reach and exposure of major national newspapers and national television in the media model of the 80's.

      I highly suspect that if real people's social media reach were linear and unaccelerated by algorithms we wouldn't be in the morass we're generally seeing on many platforms.

    2. Many of the book’s essayists defend freedom of expression over freedom from obscenity. Says Rabbi Arthur Lelyveld (father of Joseph, who would become executive editor of The New York Times): “Freedom of expression, if it is to be meaningful at all, must include freedom for ‘that which we loathe,’ for it is obvious that it is no great virtue and presents no great difficulty for one to accord freedom to what we approve or to that to which we are indifferent.” I hear too few voices today defending speech of which they disapprove.

      I might take issue with this statement and possibly a piece of Jarvis' argument here. I agree that it's moral panic to claim there could be such a thing as "too much speech," because humans have a hard limit on how much they can individually consume.

      The issue I see is that while anyone can say almost anything, the problem arises when a handful of monopolistic players like Facebook or YouTube can use algorithms to programmatically entice people to click on and consume fringe content in mass quantities, which subtly but assuredly nudges the populace and electorate in an unnatural direction. Most of the history of human society and interaction has tended toward a centralizing consensus in which we can manage to cohere. The large-scale effects of algorithm-driven companies putting a heavy hand on the scales are sure to create unintended consequences, and they're able to do it at scales the Johnson and Nixon administrations only wish they had access to.

      If we look at it as an analogy to the evolution of weaponry, I might suggest we've just passed the era of single-shot handguns and entered the era of machine guns. What is society to do when the next evolution takes us into the era of social media atomic weapons?

    1. A statistician is the exact same thing as a data scientist or machine learning researcher with the differences that there are qualifications needed to be a statistician, and that we are snarkier.
    1. numerically evaluate the derivative of a function specified by a computer program

      I understand what they're saying, but one should be careful here not to confuse this with numerical differentiation à la finite differences.
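
      To make the contrast concrete, here is a small sketch of my own (not from the linked text): finite differences approximate the derivative by evaluating the program at perturbed inputs, while forward-mode automatic differentiation propagates exact derivative rules through the program (dual numbers below).

          # Sketch contrasting numerical differentiation (finite differences)
          # with forward-mode automatic differentiation via dual numbers.
          def f(x):
              return x * x * x + 2.0 * x        # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

          def finite_difference(f, x, h=1e-6):
              # Approximates f'(x); accuracy depends on the step size h.
              return (f(x + h) - f(x - h)) / (2.0 * h)

          class Dual:
              # Number carrying a value and a derivative; arithmetic applies the chain rule.
              def __init__(self, val, der):
                  self.val, self.der = val, der
              def __add__(self, other):
                  other = other if isinstance(other, Dual) else Dual(other, 0.0)
                  return Dual(self.val + other.val, self.der + other.der)
              __radd__ = __add__
              def __mul__(self, other):
                  other = other if isinstance(other, Dual) else Dual(other, 0.0)
                  return Dual(self.val * other.val,
                              self.der * other.val + self.val * other.der)
              __rmul__ = __mul__

          def autodiff(f, x):
              # Derivative of the program f at x, exact up to floating point.
              return f(Dual(x, 1.0)).der

          print(finite_difference(f, 2.0))   # ~14.0 (approximate)
          print(autodiff(f, 2.0))            # 14.0 (exact)
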

    1. use Xstate which offers a finite state machine that adheres to the SCXML specification and provides a lot of extra goodness, including visualization tools, test helpers and much more
  4. Sep 2020
    1. For example, the one-pass (hardware) translator generated a symbol table and reverse Polish code as in conventional software interpretive languages. The translator hardware (compiler) operated at disk transfer speeds and was so fast there was no need to keep and store object code, since it could be quickly regenerated on-the-fly. The hardware-implemented job controller performed conventional operating system functions. The memory controller provided

      A hardware-assisted compiler is a fantastic idea. TPUs from Google are essentially this: hardware assistance for the matrix multiplication operations in machine learning workloads created by tools like TensorFlow.

    1. I suspect that most people who aren't avid users of social media and aren't super technical don't even think to change their username. Why would they? Twitter works perfectly well, and shows their chosen name in conversations, without ever touching the username setting.

      That is interesting. I didn't know that creating a username is nowadays buried so deep in the Twitter settings. So your identity on Twitter is imposed on you by the system....

  5. Aug 2020
  6. Jul 2020
    1. RDFa is intended to solve the problem of marking up machine-readable data in HTML documents. RDFa provides a set of HTML attributes to augment visual data with machine-readable hints. Using RDFa, authors may turn their existing human-visible text and links into machine-readable data without repeating content.
    1. It does, however, provide the --porcelain option, which causes the output of git status --porcelain to be formatted in an easy-to-parse format for scripts, and will remain stable across Git versions and regardless of user configuration.
    2. Parsing the output of git status is a bad idea because the output is intended to be human readable, not machine-readable. There's no guarantee that the output will remain the same in future versions of Git or in differently configured environments.
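
      As a minimal sketch of the kind of script this enables (my example, using the documented two-character XY status codes of the v1 porcelain format):

          # Sketch: parse `git status --porcelain` (v1) in a script.
          # Each line is a two-character status code, a space, then the path
          # (e.g. " M modified.txt", "?? untracked.txt").
          import subprocess

          def changed_files(repo="."):
              out = subprocess.run(
                  ["git", "-C", repo, "status", "--porcelain"],
                  capture_output=True, text=True, check=True,
              ).stdout
              files = []
              for line in out.splitlines():
                  status, path = line[:2], line[3:]
                  files.append((status, path))
              return files

          for status, path in changed_files():
              print(status, path)
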
    1. Determine if who is using my computer is me by training a ML model with data of how I use my computer. This is a project for the Intrusion Detection Systems course at Columbia University.
    1. By honoring the mammae as sign and symbol of the highest class of animals, Linnaeus assigned a new value to the female, especially women’s unique role in reproduction

      Throughout the multiple texts, utilized human-parts place specified bodies within social constructions, given limits of autonomy dependent on close monitoring by superiors. Kirkup and Schiebinger reflect on the Womxn’s breasts dictating the taxonomy of humans as mammalia--”a study of breasts." We see this era uplifted the sacredness of milk and the role of women’s reproduction, whilst stationing them closer to “beasts” than men, and assigning women to domesticity.

      Breasts as parts, natural tools embedded in the female body, parallels the seemingly hopeful outlook on this developing Cyborg body’s own parts, but these parts remain observed and reduced to science--a socially constructed pyramid falsely dubbed as standardized and empirical--determining the value and humanity of minorities. The parts of the female and POC body do not grant the bearer their autonomy, but rather outside scrutiny and oversight.

      We established that mid-20th-century authors dubbed living beings very complex machines and asked "are humans machines?"--can we break down the human/machine boundary by treating the symbol of breasts as also a mechanized part? I feel through Haraway's Cyborg we can, as rough as it feels to conceptualize breasts as another gear/customization.

    2. on a stage two feet high, along which she was led by her keeper, and exhibited like a wild beast; being obliged to walk, stand, or sit as he ordered her.”6

      African women’s breasts are dubbed “beastly,” “pendulous” (Schiebinger 26)--breast and vaginal physical traits are used as determinants to rank women by race. As Saartjie Baartman’s naked body is exhibited as an object--reminding one of a modern tech convention putting foreign car parts on a pedestal--the male scientific gaze further scrutinizes and classifies womxn by parts.

      Thus the eyes of the male gaze are those of the male scientists, carried down to the audience’s white curiosity--the circus scene is disquieting. Further investigation of her body only continues to stretch the spectacle of Saartjie Baartman, exhibited like colonized art within a museum, even as a corpse.

    3. reast shapes among humans

      The mathematical, geometric breakdown of the breast's shape feels uncomfortable like an engineer's diagram--dictating its value by diameter. This continues my thought that body parts are observed as machine parts under the male and scientific gaze.

    1. Our membership inference attack exploits the observation that machine learning models often behave differently on the data that they were trained on versus the data that they “see” for the first time.

      How well would this work on some of the more recent zero-shot models?
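
      That observation suggests an (overly) simple baseline that is easy to sketch; note this is only the intuition, not the shadow-model attack the paper actually uses, and predict_proba here stands in for whatever returns class probabilities:

          # Sketch of the intuition only: a confidence-thresholding membership guess.
          # The paper's actual attack trains "shadow models"; this is a simpler baseline.
          import numpy as np

          def membership_guess(predict_proba, x, threshold=0.9):
              """Guess 'member' if the model is very confident on its top label."""
              probs = predict_proba(x)          # assumed: returns class probabilities
              return bool(np.max(probs) >= threshold)

          # Usage idea: compare guess rates on known training points vs. held-out points
          # to estimate how much the model leaks about membership.
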

    1. data leakage (data from outside of your test set making it back into your test set and biasing the results)

      This sounds like the inverse of “snooping”, where information about the test data is inadvertently built into the model.
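
      A concrete and common form of this is fitting preprocessing on the full dataset before splitting. A hedged scikit-learn sketch (made-up data):

          # Sketch: test-set information leaking into training via preprocessing.
          from sklearn.datasets import make_classification
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import train_test_split
          from sklearn.preprocessing import StandardScaler

          X, y = make_classification(n_samples=500, random_state=0)

          # Leaky pattern: the scaler sees the test rows, so test statistics bias the features.
          X_scaled = StandardScaler().fit_transform(X)
          X_tr, X_te, y_tr, y_te = train_test_split(X_scaled, y, random_state=0)

          # Safer: fit the scaler on the training split only, then apply it to the test split.
          X_tr2, X_te2, y_tr2, y_te2 = train_test_split(X, y, random_state=0)
          scaler = StandardScaler().fit(X_tr2)
          model = LogisticRegression().fit(scaler.transform(X_tr2), y_tr2)
          print(model.score(scaler.transform(X_te2), y_te2))
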

  7. Jun 2020
    1. The easiest way I've found to manage that is to copy hardware-configuration.nix and a minimal version of configuration.nix and import it into the NixOps config for the corresponding machine. (I keep them in a git submodule, but keeping them in the same repo could also make sense.)

      If I understood it correctly, take the hardware-configuration.nix from the target machine, and put it into the NixOps config.

      Also relevant: Minimal NixOS config for Nixops deployment (discourse)

  8. May 2020
    1. the network typically learns to use h(t) as a kind of lossy summary of the task-relevant aspects of the past sequence of inputs up to t

      The hidden state h(t) is a high-level representation of whatever happened until time step t.

    2. Parameter sharing makes it possible to extend and apply the model to examples of different forms (different lengths, here) and generalize across them. If we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when a specific piece of information can occur at multiple positions within the sequence.

      RNNs have the same parameters at each time step. This allows the model to generalize the inferred "meaning", even when it's inferred at different steps.
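
      A minimal NumPy sketch (mine, not the book's) of what sharing parameters across time steps means: one set of weights (W, U, b) is reused at every step, and h acts as the lossy summary of everything seen so far.

          # Sketch of a vanilla RNN cell: the same W, U, b are applied at every time step,
          # and h(t) summarizes the inputs seen up to step t.
          import numpy as np

          rng = np.random.default_rng(0)
          d_in, d_h = 3, 5
          W = rng.normal(size=(d_h, d_in))   # input-to-hidden weights (shared across steps)
          U = rng.normal(size=(d_h, d_h))    # hidden-to-hidden weights (shared across steps)
          b = np.zeros(d_h)

          def run_rnn(xs):
              h = np.zeros(d_h)
              for x in xs:                   # works for any sequence length
                  h = np.tanh(W @ x + U @ h + b)
              return h                       # final summary of the whole sequence

          print(run_rnn(rng.normal(size=(7, d_in))).shape)   # (5,)
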

    1. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
  9. Apr 2020
    1. Python contributed examples. Mic VAD Streaming: This example demonstrates getting audio from microphone, running Voice-Activity-Detection and then outputting text. Full source code available on https://github.com/mozilla/DeepSpeech-examples. VAD Transcriber: This example demonstrates VAD-based transcription with both console and graphical interface. Full source code available on https://github.com/mozilla/DeepSpeech-examples.
    1. Python API Usage example. Examples are from native_client/python/client.cc.

      Creating a model instance and loading model:

          ds = Model(args.model)

      Performing inference:

          if args.extended:
              print(metadata_to_string(ds.sttWithMetadata(audio, 1).transcripts[0]))
          elif args.json:
              print(metadata_json_output(ds.sttWithMetadata(audio, 3)))
          else:
              print(ds.stt(audio))

      Full source code
    1. DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier. NOTE: This documentation applies to the 0.7.0 version of DeepSpeech only. Documentation for all versions is published on deepspeech.readthedocs.io. To install and use DeepSpeech all you have to do is: # Create and activate a virtualenv virtualenv -p python3 $HOME/tmp/deepspeech-venv/ source $HOME/tmp/deepspeech-venv/bin/activate # Install DeepSpeech pip3 install deepspeech # Download pre-trained English model files curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.pbmm curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.scorer # Download example audio files curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/audio-0.7.0.tar.gz tar xvf audio-0.7.0.tar.gz # Transcribe an audio file deepspeech --model deepspeech-0.7.0-models.pbmm --scorer deepspeech-0.7.0-models.scorer --audio audio/2830-3980-0043.wav A pre-trained English model is available for use and can be downloaded using the instructions below. A package with some example audio files is available for download in our release notes.
    1. import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.

          import os
          import librosa   # for audio processing
          import IPython.display as ipd
          import matplotlib.pyplot as plt
          import numpy as np
          from scipy.io import wavfile   # for audio processing
          import warnings
          warnings.filterwarnings("ignore")

      Data Exploration and Visualization helps us to understand the data as well as pre-processing steps in a better way.
    2. TensorFlow recently released the Speech Commands Datasets. It includes 65,000 one-second long utterances of 30 short words, by thousands of different people. We’ll build a speech recognition system that understands simple spoken commands. You can download the dataset from here.
    3. Learn how to Build your own Speech-to-Text Model (using Python). Aravind Pai, July 15, 2019. Overview: Learn how to build your very own speech-to-text model using Python in this article. The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today. We will use a real-world dataset and build this speech-to-text model so get ready to use your Python skills!
    1. Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Use Keras if you need a deep learning library that: Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility). Supports both convolutional networks and recurrent networks, as well as combinations of the two. Runs seamlessly on CPU and GPU. Read the documentation at Keras.io. Keras is compatible with: Python 2.7-3.6.
    1. Installation in Windows Compatibility: > OpenCV 2.0 Author: Bernát Gábor You will learn how to setup OpenCV in your Windows Operating System!
    2. Here you can read tutorials about how to set up your computer to work with the OpenCV library. Additionally you can find very basic sample source code to introduce you to the world of the OpenCV. Installation in Linux Compatibility: > OpenCV 2.0
    1. OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has more than 47 thousand people of user community and estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan. It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. A full-featured CUDAand OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
    1. there is also strong encouragement to make code re-usable, shareable, and citable, via DOI or other persistent link systems. For example, GitHub projects can be connected with Zenodo for indexing, archiving, and making them easier to cite alongside the principles of software citation [25].
      • GitHub and GitLab technology focuses on plain-text formats that can easily be recognized and read by machines/computers (machine readable).

      • Text mining is currently a major, fast-growing technology. Machine learning will not work without the raw material supplied by text mining technology.

      • This is why journals, especially foreign (LN) publications, have long offered two versions of every released paper: a PDF version (which is really no different from the paper of old) and an HTML version (which can be read by machines).

      • Binary word processors such as Ms Word depend heavily on software technology (owned by business entities). Naturally, the codes for reading them will be locked.

      • Even PDF, which is considered the easiest and safest way to share files, cannot easily be read by machines either.

  10. Mar 2020
    1. a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as “gorillas.”
    2. More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology
  11. Dec 2019
    1. Like a centaur, the hybrid would have the strength of each of its components: the processing power of a large logic circuit and the intuition of a human brain’s wetware. The result: human-machine teams, even when they didn’t include the best grandmasters or most powerful computers, consistently beat teams composed solely of human grandmasters or superfast machines.

      This is what is most needed: the spark of intuition coupled with the indefatigable pursuit of its implications. We handle the former and computers the latter.

  12. Nov 2019
    1. a computer has nothing to do with an automobile or a washing machine: as we shall see, it is a machine for thinking (« une machine à penser »)

      The computer as a « machine à penser » (a machine for thinking)

    1. The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.
  13. Sep 2019
    1. At the moment, GPT-2 uses a binary search algorithm, which means that its output can be considered a ‘true’ set of rules. If OpenAI is right, it could eventually generate a Turing complete program, a self-improving machine that can learn (and then improve) itself from the data it encounters. And that would make OpenAI a threat to IBM’s own goals of machine learning and AI, as it could essentially make better than even humans the best possible model that the future machines can use to improve their systems. However, there’s a catch: not just any new AI will do, but a specific type; one that uses deep learning to learn the rules, algorithms, and data necessary to run the machine to any given level of AI.

      This is a machine-generated response from 2019. We are clearly closer than most people realize to machines that can pass a text-based Turing Test.

    1. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume.[nb 2] Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture. Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".

      Important terms you hear repeatedly; great visuals and graphics at https://distill.pub/2018/building-blocks/

    1. Here's a playground where you can select different kernel matrices and see how they affect the original image, or build your own kernel. You can also upload your own image or use live video if your browser supports it. The sharpen kernel emphasizes differences in adjacent pixel values. This makes the image look more vivid. The blur kernel de-emphasizes differences in adjacent pixel values. The emboss kernel (similar to the sobel kernel and sometimes referred to by the same name) gives the illusion of depth by emphasizing the differences of pixels in a given direction. In this case, in a direction along a line from the top left to the bottom right. The identity kernel leaves the image unchanged. How boring! The custom kernel is whatever you make it.

      I'm all about my custom kernels!
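
      To get a concrete feel for it, here is a small sketch (my own) applying the standard 3x3 sharpen kernel to a grayscale image with SciPy; the image is just random data standing in for a real one.

          # Sketch: apply the common 3x3 "sharpen" kernel to a grayscale image.
          import numpy as np
          from scipy.ndimage import convolve

          sharpen = np.array([[ 0, -1,  0],
                              [-1,  5, -1],
                              [ 0, -1,  0]], dtype=float)

          image = np.random.rand(64, 64)           # stand-in for a real grayscale image
          sharpened = convolve(image, sharpen, mode="nearest")
          print(sharpened.shape)                   # (64, 64); each output pixel now
                                                   # emphasizes its difference from its neighbors
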

    1. We developed a new metric, UAR, which compares the robustness of a model against an attack to adversarial training against that attack. Adversarial training is a strong defense that uses knowledge of an adversary by training on adversarially attacked images.[3] (To compute UAR, we average the accuracy of the defense across multiple distortion sizes and normalize by the performance of an adversarially trained model; a precise definition is in our paper.) A UAR score near 100 against an unforeseen adversarial attack implies performance comparable to a defense with prior knowledge of the attack, making this a challenging objective.

      @metric
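
      Reading that description literally, the metric is roughly (my paraphrase of the post, not the paper's exact definition)

          \mathrm{UAR}(D, A) \approx 100 \times \frac{\sum_k \mathrm{Acc}(D, A_{\varepsilon_k})}{\sum_k \mathrm{Acc}(D_{\mathrm{adv}}, A_{\varepsilon_k})}

      where D is the defense under evaluation, A_{\varepsilon_k} is the attack at distortion size \varepsilon_k, and D_{\mathrm{adv}} is a model adversarially trained against that same attack.
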

  14. Aug 2019
    1. Using multiple copies of a neuron in different places is the neural network equivalent of using functions. Because there is less to learn, the model learns more quickly and learns a better model. This technique – the technical name for it is ‘weight tying’ – is essential to the phenomenal results we’ve recently seen from deep learning.

      This parameter sharing allows CNNs, for example, to need far fewer parameters/weights than fully connected NNs.

    2. The known connection between geometry, logic, topology, and functional programming suggests that the connections between representations and types may be of fundamental significance.

      Examples for each?

    3. Representations are Types With every layer, neural networks transform data, molding it into a form that makes their task easier to do. We call these transformed versions of data “representations.” Representations correspond to types.

      Interesting.

      Like a Queue Type represents a FIFO flow and a Stack a FILO flow, where the space we transformed is the operation space of the type (eg a Queue has a folded operation space compared to an Array)

      Just free styling here...

    4. In this view, the representations narrative in deep learning corresponds to type theory in functional programming. It sees deep learning as the junction of two fields we already know to be incredibly rich. What we find, seems so beautiful to me, feels so natural, that the mathematician in me could believe it to be something fundamental about reality.

      compositional deep learning

    5. Appendix: Functional Names of Common Layers

      Deep Learning Name → Functional Name
      Learned Vector → Constant
      Embedding Layer → List Indexing
      Encoding RNN → Fold
      Generating RNN → Unfold
      General RNN → Accumulating Map
      Bidirectional RNN → Zipped Left/Right Accumulating Maps
      Conv Layer → “Window Map”
      TreeNet → Catamorphism
      Inverse TreeNet → Anamorphism

      👌translation. I like to think about embeddings as List lookups

    1. As a log-bilinear regression model for unsupervised learning of word representations, it combines the features of two model families, namely the global matrix factorization and local context window methods

      What does "log-bilinear regression" mean exactly?

  15. Jul 2019
    1. Another solution might be to limit the number of times a tweet can be retweeted.

      This isn't too dissimilar to an idea I've been mulling over and which Robin Sloan wrote about on the same day this story was released: https://platforms.fyi/

    1. We will discuss classification in the context of support vector machines

      SVMs aren't used that much in practice anymore. It's more of an academic fling, because they're nice to work with mathematically. Empirically, Tree Ensembles or Neural Nets are almost always better.

    1. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time.
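
      A toy sketch of the contrast (the "validation score" below is a made-up stand-in, not a real model): with the same budget of trials, grid search revisits the same few values per dimension while random search tries fresh values every time.

          # Sketch: grid search vs. random search over two hyperparameters.
          import itertools
          import random

          def val_score(lr, reg):
              # pretend validation score, peaked near lr=0.03, reg=0.001
              return -((lr - 0.03) ** 2) * 1e3 - ((reg - 0.001) ** 2) * 1e4

          # Grid search: 4 x 4 = 16 trials on fixed values
          grid = list(itertools.product([0.001, 0.01, 0.1, 1.0], [1e-4, 1e-3, 1e-2, 1e-1]))
          best_grid = max(grid, key=lambda p: val_score(*p))

          # Random search: the same budget of 16 trials, each trying new values
          random.seed(0)
          samples = [(10 ** random.uniform(-3, 0), 10 ** random.uniform(-4, -1)) for _ in range(16)]
          best_rand = max(samples, key=lambda p: val_score(*p))

          print(best_grid, best_rand)
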
  16. Jun 2019
    1. To interpret a model, we require the following insights: features in the model which are most important; for any single prediction from a model, the effect of each feature in the data on that particular prediction; and the effect of each feature over a large number of possible predictions.

      Machine learning interpretability
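
      One concrete way to get the first of these insights (globally important features) is permutation importance; per-prediction effects are usually approached with tools like SHAP, and aggregate effects with partial-dependence plots. A hedged scikit-learn sketch of the first, on synthetic data:

          # Sketch: global feature importance via permutation importance.
          from sklearn.datasets import make_classification
          from sklearn.ensemble import RandomForestClassifier
          from sklearn.inspection import permutation_importance
          from sklearn.model_selection import train_test_split

          X, y = make_classification(n_samples=400, n_features=6, n_informative=3, random_state=0)
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

          model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
          result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
          for i, imp in enumerate(result.importances_mean):
              print(f"feature {i}: {imp:.3f}")   # drop in score when feature i is shuffled
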

    1. By comparison, Amazon’s Best Seller badges, which flag the most popular products based on sales and are updated hourly, are far more straightforward. For third-party sellers, “that’s a lot more powerful than this Choice badge, which is totally algorithmically calculated and sometimes it’s totally off,” says Bryant.

      "Amazon's Choice" is made by an algorithm.

      Essentially, "Amazon" is Skynet.

    1. This problem is called overfitting—it's like memorizing the answers instead of understanding how to solve a problem.

      Simple and clear explanation of overfitting

  17. May 2019
    1. policy change index - machine learning on corpus of text to identify and predict policy changes in China

  18. Mar 2019
    1. Mention McDonald’s to someone today, and they're more likely to think about Big Mac than Big Data. But that could soon change: The fast-food giant has embraced machine learning, in a fittingly super-sized way.McDonald’s is set to announce that it has reached an agreement to acquire Dynamic Yield, a startup based in Tel Aviv that provides retailers with algorithmically driven "decision logic" technology. When you add an item to an online shopping cart, it’s the tech that nudges you about what other customers bought as well. Dynamic Yield reportedly had been recently valued in the hundreds of millions of dollars; people familiar with the details of the McDonald’s offer put it at over $300 million. That would make it the company's largest purchase since it acquired Boston Market in 1999.

      McDonald's are getting into machine learning. Beware.

  19. Feb 2019
    1. Warp Knitting machines are a speciality of A.T.E. who have been providing machinery from the best of brands to industries across 60 countries for the past 75 years. We provide various types of warp knitting machines like Tricot, Raschel etc. A.T.E. always has a solution to increase your productivity, lower costs, and provide excellent customer service.

    1. For instance, an aborigine who possesses all of our basic sensory-mental-motor capabilities, but does not possess our background of indirect knowledge and procedure, cannot organize the proper direct actions necessary to drive a car through traffic, request a book from the library, call a committee meeting to discuss a tentative plan, call someone on the telephone, or compose a letter on the typewriter.

      In other words: culture. I'm pretty sure that Engelbart would agree with the statement that someone who could order a book from a library would likely not know the best way to find a nearby water source, as the right kind of aborigine would know. Collective intelligence is a monotonically increasing store of knowledge that is maintained through social learning -- not just social learning, but teaching. Many species engage in social learning, but humans are the only primates with visible sclera -- the whites of our eyeballs -- which enables even infants to track where their teacher/parent is looking. I think this function of culture is what Engelbart would call "C work"

      A Activity: 'Business as Usual'. The organization's day to day core business activity, such as customer engagement and support, product development, R&D, marketing, sales, accounting, legal, manufacturing (if any), etc. Examples: Aerospace - all the activities involved in producing a plane; Congress - passing legislation; Medicine - researching a cure for disease; Education - teaching and mentoring students; Professional Societies - advancing a field or discipline; Initiatives or Nonprofits - advancing a cause.
      
      B Activity: Improving how we do that. Improving how A work is done, asking 'How can we do this better?' Examples: adopting a new tool(s) or technique(s) for how we go about working together, pursuing leads, conducting research, designing, planning, understanding the customer, coordinating efforts, tracking issues, managing budgets, delivering internal services. Could be an individual introducing a new technique gleaned from reading, conferences, or networking with peers, or an internal initiative tasked with improving core capability within or across various A Activities.
      
      C Activity: Improving how we improve. Improving how B work is done, asking 'How can we improve the way we improve?' Examples: improving effectiveness of B Activity teams in how they foster relations with their A Activity customers, collaborate to identify needs and opportunities, research, innovate, and implement available solutions, incorporate input, feedback, and lessons learned, run pilot projects, etc. Could be a B Activity individual learning about new techniques for innovation teams (reading, conferences, networking), or an initiative, innovation team or improvement community engaging with B Activity and other key stakeholders to implement new/improved capability for one or more B activities.
      

      In other words, human culture, using language, artifacts, methodology, and training, bootstrapped collective intelligence; what Engelbart proposed, then was to apply C work to culture's bootstrapping capabilities.

    1. Nearly half of FBI rap sheets failed to include information on the outcome of a case after an arrest—for example, whether a charge was dismissed or otherwise disposed of without a conviction, or if a record was expunged

      This explains my personal experience here: https://hyp.is/EIfMfivUEem7SFcAiWxUpA/epic.org/privacy/global_entry/default.html (Why someone who had Global Entry was flagged for a police incident before he applied for Global Entry).

    2. Applicants also agree to have their fingerprints entered into DHS’ Automatic Biometric Identification System (IDENT) “for recurrent immigration, law enforcement, and intelligence checks, including checks against latent prints associated with unsolved crimes.

      "Intelligence checks" is very concerning here, as it suggests pretty much what has already been leaked: that the US is running complex autonomous screening of all of this data all the time. This also opens up the possibility of discriminatory algorithms, since most of these are probably rooted in machine learning techniques and the criminal justice system in the US today tends to be fairly biased against certain groups of people to begin with.

    3. It cited research, including some authored by the FBI, indicating that “some of the biometrics at the core of NGI, like facial recognition, may misidentify African Americans, young people, and women at higher rates than whites, older people, and men, respectively.

      This re-affirms the previous annotation that the set of training data for the intelligence checks the US runs on global entry data is biased towards certain groups of people.

    1. the operation of the whole machine.

      I'm not exactly sure what to do with this yet, but I want to note that here, Hume is comfortable speaking metaphorically of the human as a machine. In the older stuff, agricultural metaphors are preferred.

    2. Hume who seeks to understand the operations of mind.

      In this sense, the mind is a machine, which operates in order to produce a certain product. What is this product? Knowledge? Can the product differ between people and instances?

  20. Jan 2019
    1. machine intelligence

      Interestingly enough, we saw it coming. All the advances that led to this much efficiency in technology were not to be taken lightly. A few decades ago (about 35 years, since the rise of the internet and online networks in 1983), people probably saw the internet as a gift from the heavens, one with little or no downside to it. But now, as it has advanced to such an extreme with sophisticated machine engineering, we have learned otherwise. The hacking of sites and networks, viruses and malware, and user data surveillance and monitoring are only a few of the downsides to such a heavenly creation. And now we face the truth: machine intelligence is not to be underestimated, or the impact on our lives could be negative in years to come. This is because it will only get more intense with the years, as technology develops further.

  21. Nov 2018
  22. Sep 2018
    1. in equation B for the marginal of a Gaussian, only the covariance of the block of the matrix involving the unmarginalized dimensions matters! Thus “if you ask only for the properties of the function (you are fitting to the data) at a finite number of points, then inference in the Gaussian process will give you the same answer if you ignore the infinitely many other points, as if you would have taken them all into account!” (Rasmussen)

      key insight into Gaussian processes
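
      The identity behind this (standard Gaussian marginalization, stated here from memory rather than from the quoted text): if

          \begin{pmatrix} f_A \\ f_B \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_A \\ \mu_B \end{pmatrix}, \begin{pmatrix} \Sigma_{AA} & \Sigma_{AB} \\ \Sigma_{BA} & \Sigma_{BB} \end{pmatrix} \right) \quad\Longrightarrow\quad f_A \sim \mathcal{N}(\mu_A, \Sigma_{AA})

      so the (possibly infinitely many) function values in block B that you never query simply drop out of the marginal.
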

    1. keenest attachments, and whose natural gifts may be, if we do not squander or destroy them, exactly what we need to flourish and perfect ourselves—as human beings.

      Kass' implications in the quote indicate the potential effect biotechnology has on the human psyche. Although biotechnology has the ability to forge new paths in curing feeble traits of humans (or, in essence, of any living thing), such as sickness and suffering, it can be further exploited to enhance physical traits. However, Kass' tone sheds light on the worry that when this technology is fully developed, humans will lose sight of what they formerly relied on, their "keenest attachments." Therefore, it is of great significance that the limbs (keenest attachments) are used to "...perfect ourselves-as human beings," and not misused or ultimately destroyed.

    1. predictive analysis

      Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.

  23. Jul 2018
  24. course-computational-literary-analysis.netlify.com course-computational-literary-analysis.netlify.com
    1. There is here, moral, if not legal, evidence, that the murder was committed by the Indians.

      This is a very interesting take on "evidence" as being moral, if not legal, by Sergeant Cuff. It makes me question exactly what he means by that, and whether there is a way to use computational analysis to find out. We could perhaps start by parsing out "evidence" throughout the text with a machine learning algorithm to help us see how he defines evidence and then, going forward, devise a way (maybe with sentiment analysis) to distinguish moral evidence from legal evidence.

    1. ~32:00 What about the domain of the function being effectively lower dimensional, rather than a strong regularity assumption? That would also work, right? Could this be the case for images? (what's the dimensionality of the manifold of natural images?)

      Nice. I like the idea of regularity <> low dimensional representation. I guess by that general definition, the above is a form of regularity..

      He comments about this on 38:30

    1. The documentation of routines invited the students to reflect on the multiplicity of practices that shape temporality inside the school community, making the social layering of time more perceptible. Far from being restricted to timetables, buzzers and timed tasks, school time is a fusion of personal times, rhythms and temporal force

      This graf and the next, might be helpful for the Time Machine Project study. Cites: Adam on description of "school time."

    1. This system of demonstrating tasks to one robot that can then transfer its skills to other robots with different body shapes, strengths, and constraints might just be the first step toward independent social learning in robots. From there, we might be on the road to creating cultured robots.
    2. Soon we might add robots to this list. While our fanciful desert scene of robots teaching each other how to defuse bombs lies in the distant future, robots are beginning to learn socially. If one day robots start to develop and share knowledge independently of humans, might that be the seed for robot culture?
    3. his imaginary scene shows the power of learning from others. Anthropologists and zoologists call this “social learning”: picking up new information by observing or interacting with others and the things others produce. Social learning is rife among humans and across the wider animal kingdom. As we discussed in our previous post, learning socially is fundamental to how humans become fully rounded people, in all our diversity, creativity, and splendor.
    1. "It's so scary that it works," Perelman sighs. "Machines are very brilliant for certain things and very stupid on other things. This is a case where the machines are very, very stupid."
  25. Jun 2018
    1. One consequence of this position is a more radical understanding of the sense in which materiality is discursive (i.e., material phenomena are inseparable from the apparatuses of bodily production: matter emerges out of and includes as part of its being the ongoing reconfiguring of boundaries), just as discursive practices are always already material (i.e., they are ongoing material (re)configurings of the world) (2003: 822). Brought back into the world of technology design, this intimate co-constitution of configured materialities with configuring agencies clearly implies a very different understanding of the ‘human-machine interface’.
  26. May 2018
    1. A.T.E. Enterprises brings you the complete range of warp knitting machines to meet the demands of high-quality fabric manufacturers. Warp knitting machines are used to produce a huge range of warp knitted fabrics (warp knits) for clothing and technical textiles.

  27. Apr 2018
    1. Teraspin manufacture textile machinery spare parts which includes spindles, spindle inserts, cradles for roving frames and ring frames, etc. for different types of textile machines. We ensure high product performance and durability. Our R&D team is constantly innovating to improve existing products and introduce new ones.

  28. Mar 2018
    1. Artificial intelligence (AI), machine learning and deep learning

      Graphical explanation of artificial intelligence, machine learning, and deep learning

  29. Dec 2017
    1. Most of the recent advances in AI depend on deep learning, which is the use of backpropagation to train neural nets with multiple layers ("deep" neural nets).

      Neural nets consist of layers of nodes, with edges from each node to the nodes in the next layer. The first and last layers are input and output. The output layer might only have two nodes, representing true or false. Each node holds a value representing how excited it is. Each edge has a value representing strength of connection, which determines how much of the excitement passes through.

      The edges in an untrained neural net start with random values. The training data consists of a series of samples that are already labeled. If the output is wrong, the edges are adjusted according to how much they contributed to the error. It's called backpropagation because it starts with the output nodes and works toward the input nodes.

      Deep neural nets can be effective, but only for single specific tasks. And they need huge sets of training data. They can also be tricked rather easily. Worse, someone who has access to the net can discover ways of adding noise to images that will make the net "see" things that obviously aren't there.
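
      A minimal numerical sketch of that training loop (my own toy example: one hidden layer, NumPy, a made-up labeling task):

          # Sketch: a tiny 2-layer network trained by backpropagation on labeled samples.
          import numpy as np

          rng = np.random.default_rng(0)
          X = rng.normal(size=(200, 2))                               # toy inputs
          y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)    # toy labels

          W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)   # edges start with random values
          W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
          lr = 0.5

          def sigmoid(z):
              return 1.0 / (1.0 + np.exp(-z))

          for step in range(2000):
              # forward pass: each layer's "excitement" flows to the next
              h = np.tanh(X @ W1 + b1)
              out = sigmoid(h @ W2 + b2)
              # backward pass: start from the output error and work toward the inputs,
              # adjusting each edge in proportion to how much it contributed
              d_out = out - y
              dW2 = h.T @ d_out / len(X)
              db2 = d_out.mean(axis=0)
              d_h = (d_out @ W2.T) * (1 - h ** 2)
              dW1 = X.T @ d_h / len(X)
              db1 = d_h.mean(axis=0)
              W2 -= lr * dW2; b2 -= lr * db2
              W1 -= lr * dW1; b1 -= lr * db1

          print(((out > 0.5) == y).mean())   # training accuracy after a few thousand steps
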

  30. Nov 2017
    1. UML automatically finds these hidden patterns to link seemingly unrelated accounts and customers. These links can be one of thousands of data fields that the UML model ingests.

      Why does this have to be done in a different system?

  31. Oct 2017
  32. Sep 2017
    1. First, I think you need to get a grasp of machine learning and algorithms; you can start with online courses. I recommend Andrew Ng's Machine Learning course, which is considered the bible for data scientists. After that you can start with Python or R and take part in challenges on Kaggle. Kaggle is a platform where data scientists participate, earn prize money, and compete with each other for rankings. Many people have also told me that Kaggle is the best and shortest path into Data Science.

      Learn the basics

  33. Aug 2017
  34. Jul 2017
  35. Jun 2017
  36. May 2017
    1. ditching machine
      A ditching machine is used for digging ditches or trenches of a specified depth and width. These ditches are often used for irrigation, drainage, or pipe-laying. They could also be used to build fences or fortifications. These machines can also be used to excavate for any other purpose (Edwards, 1888). Within the Berger Inquiry, the Banister Model 710 and Model 812 wheel ditchers are discussed. This machine was designed and built by Banister Pipelines of Edmonton, Alberta. Banister Pipelines built their first ditcher, the Model 508, in 1965. The Model 508 was designed to “cut through frozen ground.” Banister Pipelines was later able to “develop the technology in the 1970s that led to some of the largest ditchers ever built.” They designed a prototype of the Model 710 in 1972 which was tested to cut through frozen ground. This machine weighs 115 tons and can dig a ditch 7 feet wide and 10 feet deep. It is powered by two Caterpillar diesel engines which produce 1,120 horsepower. This machine is so powerful that in thawed ground it can reach a production rate of up to 20 feet of trench per minute. A few years later, in 1978, Banister Pipelines built a larger ditching machine, the Model 812, which is almost twice the size of the Model 710. This machine can dig 12 feet deep. The Model 710 and Model 812 by Banister Pipelines are still in use today (Haddock, 1998). 
      

      References

      Edwards, C. C. (1888, December 18). Ditching-Machine. Retrieved from The Portal to Texas History: https://texashistory.unt.edu/ark:/67531/metapth171924/ Haddock, K. (1998). Giant Earthmovers: An Illustrated History. Osceola: MBI.

  37. Apr 2017
    1. Detection of fake news in social media based on who liked it.

      we show that Facebook posts can be classified with high accuracy as hoaxes or non-hoaxes on the basis of the users who "liked" them. We present two classification techniques, one based on logistic regression, the other on a novel adaptation of boolean crowdsourcing algorithms. On a dataset consisting of 15,500 Facebook posts and 909,236 users, we obtain classification accuracies exceeding 99% even when the training set contains less than 1% of the posts.

    1. Obviously, in this situation whoever controls the algorithms has great power. Decisions like what is promoted to the top of a news feed can swing elections. Small changes in UI can drive big changes in user behavior. There are no democratic checks or controls on this power, and the people who exercise it are trying to pretend it doesn’t exist

    2. On Facebook, social dynamics and the algorithms’ taste for drama reinforce each other. Facebook selects from stories that your friends have shared to find the links you’re most likely to click on. This is a potent mix, because what you read and post on Facebook is not just an expression of your interests, but part of a performative group identity.

      So without explicitly coding for this behavior, we already have a dynamic where people are pulled to the extremes. Things get worse when third parties are allowed to use these algorithms to target a specific audience.

    3. any system trying to maximize engagement will try to push users towards the fringes. You can prove this to yourself by opening YouTube in an incognito browser (so that you start with a blank slate), and clicking recommended links on any video with political content.

      ...

      This pull to the fringes doesn’t happen if you click on a cute animal story. In that case, you just get more cute animals (an experiment I also recommend trying). But the algorithms have learned that users interested in politics respond more if they’re provoked more, so they provoke. Nobody programmed the behavior into the algorithm; it made a correct observation about human nature and acted on it.

    1. Really cool venue for publishing online, interactive articles for ML

  38. Mar 2017
    1. the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative')

      Why AUC can serve as a meaningful guide in CTR (click-through rate) applications
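
      That probabilistic interpretation is easy to check numerically (a sketch with made-up scores and labels):

          # Sketch: AUC equals the probability that a random positive is scored above
          # a random negative (ties counted as 1/2).
          import numpy as np
          from sklearn.metrics import roc_auc_score

          rng = np.random.default_rng(0)
          y = rng.integers(0, 2, size=200)
          scores = y * 0.8 + rng.normal(scale=0.7, size=200)   # noisy but informative scores

          pos, neg = scores[y == 1], scores[y == 0]
          pairs = (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()

          print(pairs)                     # pairwise ranking probability
          print(roc_auc_score(y, scores))  # matches the AUC
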

    1. System A is all about integrity and health and the folk not as nodes in a machine, but as a growing, adapting, distributed and living whole. It is the difference between a neighborhood and a housing development.
  39. Feb 2017
    1. Robert Mercer, Steve Bannon, Breitbart, Cambridge Analytica, Brexit, and Trump.

      “The danger of not having regulation around the sort of data you can get from Facebook and elsewhere is clear. With this, a computer can actually do psychology, it can predict and potentially control human behaviour. It’s what the scientologists try to do but much more powerful. It’s how you brainwash someone. It’s incredibly dangerous.

      “It’s no exaggeration to say that minds can be changed. Behaviour can be predicted and controlled. I find it incredibly scary. I really do. Because nobody has really followed through on the possible consequences of all this. People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”

      -- Jonathan Rust, Cambridge University Psychometric Centre

  40. Jan 2017
    1. AI criticism is also limited by the accuracy of human labellers, who must carry out a close reading of the ‘training’ texts before the AI can kick in. Experiments show that readers tend to take longer to process events that are distant in time or separated by a time shift (such as ‘a day later’).
    2. Even though AI annotation schemes are versatile and expressive, they’re not foolproof. Longer, book-length texts are prohibitively expensive to annotate, so the power of the algorithms is restricted by the quantity of data available for training them.
    3. In most cases, this analysis involves what’s known as ‘supervised’ machine learning, in which algorithms train themselves from collections of texts that a human has laboriously labelled.
  41. Dec 2016
    1. The team on Google Translate has developed a neural network that can translate language pairs for which it has not been directly trained. "For example, if the neural network has been taught to translate between English and Japanese, and English and Korean, it can also translate between Japanese and Korean without first going through English."

  42. Oct 2016
    1. In machine learning, the term "ground truth" refers to the accuracy of the training set's classification for supervised learning techniques.

      Ground truth in machine learning

  43. Aug 2016
    1. A team at Facebook reviewed thousands of headlines using these criteria, validating each other’s work to identify a large set of clickbait headlines. From there, we built a system that looks at the set of clickbait headlines to determine what phrases are commonly used in clickbait headlines that are not used in other headlines. This is similar to how many email spam filters work.

      Though details are scarce, the very idea that Facebook would tackle this problem with both humans and algorithms is reassuring. The common argument about human filtering is that it doesn’t scale. The common argument about algorithmic filtering is that it requires good signal (though some transhumanists keep saying that things are getting better). So it’s useful to know that Facebook used so hybrid an approach. Of course, even algo-obsessed Google has used human filtering. Or, at least, human judgment to tweak their filtering algorithms. (Can’t remember who was in charge of this. Was a semi-frequent guest on This Week in Google… Update: Matt Cutts) But this very simple “we sat down and carefully identified stuff we think qualifies as clickbait before we fed the algorithm” is refreshingly clear.
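
      Facebook hasn't published its model, but the description above (find phrases common in clickbait headlines, then score new headlines against them, spam-filter style) maps onto a very standard bag-of-words classifier. Here is a minimal sketch of that general technique with made-up headlines and labels, using scikit-learn; it is an illustration only, not Facebook's actual system.

      ```python
      # Toy phrase-based headline classifier in the spirit of the spam-filter
      # analogy. Headlines and labels are invented for illustration only.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      headlines = [
          "You won't believe what happened next",
          "This one weird trick will change your life",
          "Local council approves new transit budget",
          "Senate committee releases annual report",
      ]
      labels = [1, 1, 0, 0]  # 1 = clickbait, 0 = not clickbait

      # Uni- and bigram counts stand in for "phrases commonly used in
      # clickbait headlines that are not used in other headlines".
      model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
      model.fit(headlines, labels)

      print(model.predict(["You won't believe this one weird budget"]))
      ```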

  44. Jun 2016
    1. Docker is a type of virtual machine

      How does it compare to packages installed directly? Could be useful for development, but maybe not practical for HPC applications. Maybe just create a CD ISO with all the correct programs and their dependencies.

  45. May 2016
    1. the algorithm was somewhat more accurate than a coin flip

      In machine learning, it's also important to evaluate not just against random chance, but against how well other methods (e.g. parole boards) do. That kind of analysis would be nice to see; a sketch of one way to set it up follows.
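
      Concretely, the idea is to score the model against a chance baseline on the same held-out data, and in a real study to add a row for the existing human process (e.g. parole board decisions) on the same cases. The data and model below are entirely synthetic.

      ```python
      # Compare a model not only against a coin flip but against other
      # reference points on the same held-out data. Synthetic data only.
      import numpy as np
      from sklearn.dummy import DummyClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 5))
      y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      model = LogisticRegression().fit(X_train, y_train)
      coin_flip = DummyClassifier(strategy="uniform", random_state=0).fit(X_train, y_train)

      print("model:    ", accuracy_score(y_test, model.predict(X_test)))
      print("coin flip:", accuracy_score(y_test, coin_flip.predict(X_test)))
      # A full evaluation would also report the accuracy of the existing
      # decision process (e.g. parole boards) on the same cases.
      ```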

  46. Apr 2016
    1. We should have control of the algorithms and data that guide our experiences online, and increasingly offline. Under our guidance, they can be powerful personal assistants.

      Big business has been very militant about protecting their "intellectual property". Yet they regard every detail of our personal lives as theirs to collect and sell at whim. What a bunch of little darlings they are.

  47. Feb 2016
    1. Patrick Ball—a data scientist and the director of research at the Human Rights Data Analysis Group—who has previously given expert testimony before war crimes tribunals, described the NSA's methods as "ridiculously optimistic" and "completely bullshit." A flaw in how the NSA trains SKYNET's machine learning algorithm to analyse cellular metadata, Ball told Ars, makes the results scientifically unsound.
    1. “Search is the cornerstone of Google,” Corrado said. “Machine learning isn’t just a magic syrup that you pour onto a problem and it makes it better. It took a lot of thought and care in order to build something that we really thought was worth doing.”
  48. Jan 2016
    1. UT Austin SDS 348, Computational Biology and Bioinformatics. Course materials and links: R, regression modeling, ggplot2, principal component analysis, k-means clustering, logistic regression, Python, Biopython, regular expressions.

  49. Dec 2015
    1. OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
    1. Big Sur is our newest Open Rack-compatible hardware designed for AI computing at a large scale. In collaboration with partners, we've built Big Sur to incorporate eight high-performance GPUs
  50. Nov 2015
    1. a study by Stephen Schueller, published last year in the Journal of Positive Psychology, found that people assigned to a happiness activity similar to one for which they previously expressed a preference showed significantly greater increases in happiness than people assigned to an activity not based on a prior preference. This, writes Schueller, is “a model for positive psychology exercises similar to Netflix for movies or Amazon for books and other products.”
    1. TPOT is a Python tool that automatically creates and optimizes machine learning pipelines using genetic programming. Think of TPOT as your “Data Science Assistant”: TPOT will automate the most tedious part of machine learning by intelligently exploring thousands of possible pipelines, then recommending the pipelines that work best for your data.

      https://github.com/rhiever/tpot: TPOT (Tree-based Pipeline Optimization Tool), built on numpy, scipy, pandas, scikit-learn, and deap.
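
      For reference, basic usage follows the pattern in TPOT's own documentation: fit on training data, score on held-out data, and export the best pipeline as plain Python. The dataset and parameters below are only illustrative.

      ```python
      # Minimal TPOT run on a toy dataset; generations/population_size are
      # kept small here just so the example runs quickly.
      from tpot import TPOTClassifier
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split

      X, y = load_digits(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

      tpot = TPOTClassifier(generations=5, population_size=20,
                            verbosity=2, random_state=42)
      tpot.fit(X_train, y_train)                 # evolves and cross-validates pipelines
      print(tpot.score(X_test, y_test))          # score of the best pipeline found
      tpot.export("tpot_best_pipeline.py")       # writes that pipeline as Python code
      ```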

    1. Nanodegree Program Summary Machine learning represents a key evolution in the fields of computer science, data analysis, software engineering, and artificial intelligence. It has quickly become industry's preferred way to make sense of the staggering volume of data our modern world produces. Machine learning engineers build programs that dynamically perform the analyses that data scientists used to perform manually. These programs can “learn” based on millions of experiences, all rigorously and numerically defined.
    1. The machine, of course, is not complete without a third party, the (human) operator, and it is within this triad that the text takes place.

      It reminds me of the machine performance, the human performance and the idea of the text as a performative event (as in Johanna Drucker's theory of performative materiality).

  51. Oct 2015
    1. I have the feeling we do not need models as complicated as some outlined in the text; we can (and ultimately will have to) abstract away most of the issues we can imagine. I expect that "magic" (an undisclosed heuristic, perhaps combined with machine learning) will deal with these issues: a black box regarded as inherently flawed and yet practical enough at the same time. Results from experimental ethics can help shape the heuristic, while the need for easy implementation and maintainability will limit the applications significantly.

  52. Sep 2015
  53. Aug 2015
  54. Jul 2015
  55. Jun 2015
    1. Enter the Daily Mail website, MailOnline, and CNN online. These sites display news stories with the main points of the story displayed as bullet points that are written independently of the text. “Of key importance is that these summary points are abstractive and do not simply copy sentences from the documents,” say Hermann and co.

      Someday, maybe projects like Hypothesis will help teach computers to read, too.

  56. Jan 2015
    1. Logistic regression, also called a logit model, is used to model dichotomous outcome variables. In the logit model the log odds of the outcome is modeled as a linear combination of the predictor variables.
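
      The quote only states the model, so here is a small synthetic-data sketch using statsmodels, whose Logit fits exactly this form: the log odds of a binary outcome modeled as a linear combination of the predictors.

      ```python
      # A logit model fits log(p / (1 - p)) = b0 + b1*x1 + b2*x2.
      # Synthetic data; the fitted coefficients should recover the true ones.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 2))
      log_odds = 0.5 + 1.0 * X[:, 0] - 2.0 * X[:, 1]   # true linear combination
      p = 1 / (1 + np.exp(-log_odds))
      y = rng.binomial(1, p)

      model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
      print(model.params)   # estimates should be close to [0.5, 1.0, -2.0]
      ```
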
  57. Nov 2014