201 Matching Annotations
  1. Last 7 days
    1. Eamonn Keogh is an assistant professor of Computer Science at the University of California, Riverside. His research interests are in Data Mining, Machine Learning and Information Retrieval. Several of his papers have won best paper awards, including papers at SIGKDD and SIGMOD. Dr. Keogh is the recipient of a 5-year NSF Career Award for “Efficient Discovery of Previously Unknown Patterns and Relationships in Massive Time Series Databases”.

      Look into Eamonn Keogh's papers that won "best paper awards"

    1. “The metaphor is that the machine understands what I’m saying and so I’m going to interpret the machine’s responses in that context.”

      Interesting metaphor for why humans are happy to trust outputs from generative models

  2. Nov 2022
    1. The rapid increase in both the quantity and complexity of data that are being generated daily in the field of environmental science and engineering (ESE) demands accompanied advancement in data analytics. Advanced data analysis approaches, such as machine learning (ML), have become indispensable tools for revealing hidden patterns or deducing correlations for which conventional analytical methods face limitations or challenges. However, ML concepts and practices have not been widely utilized by researchers in ESE. This feature explores the potential of ML to revolutionize data analysis and modeling in the ESE field, and covers the essential knowledge needed for such applications. First, we use five examples to illustrate how ML addresses complex ESE problems. We then summarize four major types of applications of ML in ESE: making predictions; extracting feature importance; detecting anomalies; and discovering new materials or chemicals. Next, we introduce the essential knowledge required and current shortcomings in ML applications in ESE, with a focus on three important but often overlooked components when applying ML: correct model development, proper model interpretation, and sound applicability analysis. Finally, we discuss challenges and future opportunities in the application of ML tools in ESE to highlight the potential of ML in this field.

    1. "On the Opportunities and Risks of Foundation Models" This is a large report by the Center for Research on Foundation Models at Stanford. They are creating and promoting the use of these models and trying to coin this name for them. They are also simply called large pre-trained models. So take it with a grain of salt, but also it has a lot of information about what they are, why they work so well in some domains and how they are changing the nature of ML research and application.

    1. Technology like this, which lets you “talk” to people who’ve died, has been a mainstay of science fiction for decades. It’s an idea that’s been peddled by charlatans and spiritualists for centuries. But now it’s becoming a reality—and an increasingly accessible one, thanks to advances in AI and voice technology. 
  3. Oct 2022
    1. There's no market for a machine-learning autopilot, or content moderation algorithm, or loan officer, if all it does is cough up a recommendation for a human to evaluate. Either that system will work so poorly that it gets thrown away, or it works so well that the inattentive human just button-mashes "OK" every time a dialog box appears.

      ML algorithms must work or not work

  4. Sep 2022
  5. Aug 2022
  6. Jul 2022
    1. because it only needs to engage a portion of the model to complete a task, as opposed to other architectures that have to activate an entire AI model to run every request.

      i don't really understand this: in z-code there are tasks that other competing software would need to restart all over again, while z-code can do it without restarting...
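
      The quoted claim sounds like sparse mixture-of-experts routing: a gating function picks a few "expert" sub-networks per input, so most of the model's parameters never run for a given request. A minimal sketch of that general idea (this is an assumption about the architecture, not Z-code's actual implementation):

      ```python
      # Hypothetical sketch of sparse mixture-of-experts routing: a gate scores
      # each expert sub-network and only the top-k experts run for a given
      # input, so most of the model stays idle per request.

      def route_top_k(gate_scores, k=2):
          """Return indices of the k highest-scoring experts."""
          ranked = sorted(range(len(gate_scores)),
                          key=lambda i: gate_scores[i], reverse=True)
          return ranked[:k]

      def moe_forward(x, experts, gate_scores, k=2):
          """Run only the selected experts and average their outputs."""
          selected = route_top_k(gate_scores, k)
          outputs = [experts[i](x) for i in selected]  # other experts never run
          return sum(outputs) / len(outputs), selected

      # Four toy experts; only two are engaged per input.
      experts = [lambda x, s=s: x * s for s in (1.0, 2.0, 3.0, 4.0)]
      out, used = moe_forward(10.0, experts,
                              gate_scores=[0.1, 0.7, 0.05, 0.15], k=2)
      ```

      With four experts and k=2, half of the model is never touched for this input, which is where the efficiency claim comes from.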

  7. Jun 2022
    1. determine the caliphate; and another group led by Mu'awiya in the Levant, who demanded revenge for Uthman's blood. He defeated the first group in the Battle of the Camel; but in the end,

      this is another post

    1. Discussion of the paper:

      Ghojogh B, Ghodsi A, Karray F, Crowley M. Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA. Proceedings of the Canadian Conference on Artificial Intelligence [Internet]. 2022 May 27; Available from: https://caiac.pubpub.org/pub/7eqtuyyc

  8. Apr 2022
  9. Feb 2022
  10. Jan 2022
    1. We are definitely living in interesting times!

      The problem with machine learning, in my eyes, seems to be the non-transparency in the field. After all, what makes the data we are researching valuable? If we collect so much data, why is only 0.5% of it being studied? There seems to be a lot missing, and big opportunities here that aren't being used properly.

  11. Dec 2021
  12. Oct 2021
  13. Sep 2021
    1. a class of attacks that were enabled by Privacy Badger’s learning. Essentially, since Privacy Badger adapts its behavior based on the way that sites you visit behave, a dedicated attacker could manipulate the way Privacy Badger acts: what it blocks and what it allows. In theory, this can be used to identify users (a form of fingerprinting) or to extract some kinds of information from the pages they visit
  14. Jul 2021
  15. Jun 2021
    1. The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.

      We do better with algorithms where the utility function can be expressed mathematically. When we try to design for utility/goals that include human values, it's much more difficult.
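
      A toy way to see the point about a "single mathematical goal": competing objectives are usually collapsed into one weighted score, and the "best" action then depends entirely on weights someone must choose. The actions, scores, and weights below are made-up assumptions for illustration:

      ```python
      # Toy illustration of why multiple objectives are hard: collapsing them
      # into one number requires choosing weights, and the "optimal" action
      # changes with the weights. All numbers here are arbitrary assumptions.

      def scalarize(objectives, weights):
          """Weighted sum of competing objectives (higher is better)."""
          return sum(w * o for w, o in zip(weights, objectives))

      # Two candidate actions scored on (soldiers_saved, civilian_safety):
      actions = {"A": (9.0, 2.0), "B": (5.0, 8.0)}

      def best_action(weights):
          return max(actions, key=lambda a: scalarize(actions[a], weights))

      # The chosen action flips depending on which objective we weight more.
      prefer_soldiers = best_action((0.8, 0.2))
      prefer_civilians = best_action((0.2, 0.8))
      ```

      Here prefer_soldiers picks "A" and prefer_civilians picks "B": the mathematics cannot tell us which weighting is right, which is exactly the trade-off problem the quote describes.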

    2. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that, even human beings are not very sensitive to how this can be done well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

  16. May 2021
  17. Apr 2021
    1. Machine learning app development has been gaining traction among companies from all over the world. When dealing with this part of machine learning application development, you need to remember that machine learning can recognize only the patterns it has seen before. Therefore, the data is crucial for your objectives. If you’ve ever wondered how to build a machine learning app, this article will answer your question.

    1. The insertion of an algorithm’s predictions into the patient-physician relationship also introduces a third party, turning the relationship into one between the patient and the health care system. It also means significant changes in terms of a patient’s expectation of confidentiality. “Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses,” the authors wrote.

      There is some work being done on federated learning, where the algorithm works on decentralised data that stays in place with the patient and the ML model is brought to the patient so that their data remains private.
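
      A minimal sketch of the federated-averaging idea mentioned in the note, under toy assumptions (a 1-D linear model, two sites, no secure aggregation): each site takes gradient steps on data that never leaves it, and only the model parameters travel to the server for averaging.

      ```python
      # Minimal sketch of federated averaging (FedAvg): each site trains
      # locally on data that stays in place, and only parameters are averaged
      # centrally. Toy 1-D least-squares model; real systems add secure
      # aggregation, client sampling, etc.

      def local_step(w, data, lr=0.1):
          """One gradient step of least-squares y = w*x on a site's private data."""
          grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
          return w - lr * grad

      def fed_avg(w, sites, rounds=50):
          for _ in range(rounds):
              local = [local_step(w, d) for d in sites]  # data never moves
              w = sum(local) / len(local)                # only weights shared
          return w

      # Two hospitals' private datasets, both generated by y = 3x.
      sites = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
      w = fed_avg(0.0, sites)  # converges toward 3.0
      ```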

  18. Mar 2021
  19. Feb 2021
  20. Jan 2021
    1. I present the Data Science Venn Diagram… hacking skills, math and stats knowledge, and substantive expertise.

      An understanding of advanced statistics is a must, as methodologies get more complex and new methods, such as machine learning, are being created

    1. Zappos created models to predict customer apparel sizes, which are cached and exposed at runtime via microservices for use in recommendations.

      There is another company, Virtusize, doing the same thing: size prediction and recommendation

  21. Dec 2020
  22. Nov 2020
  23. Oct 2020
    1. A statistician is the exact same thing as a data scientist or machine learning researcher with the differences that there are qualifications needed to be a statistician, and that we are snarkier.
    1. numerically evaluate the derivative of a function specified by a computer program

      I understand what they're saying, but one should be careful here not to confuse this with numerical differentiation à la finite differences
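
      To make the distinction concrete: forward-mode automatic differentiation propagates exact derivatives through the program (here via dual numbers), while finite differences only approximate the derivative with a step size h. A minimal sketch, assuming a simple polynomial f:

      ```python
      # "Evaluating the derivative of a program" via automatic differentiation
      # is NOT finite differences. A minimal forward-mode autodiff using dual
      # numbers, contrasted with a finite-difference estimate.

      class Dual:
          """Number a + b*eps with eps^2 = 0; dot carries the exact derivative."""
          def __init__(self, val, dot):
              self.val, self.dot = val, dot
          def __mul__(self, other):
              return Dual(self.val * other.val,
                          self.val * other.dot + self.dot * other.val)
          def __add__(self, other):
              return Dual(self.val + other.val, self.dot + other.dot)

      def f(x):           # the same program runs on floats and Dual numbers
          return x * x + x

      # Autodiff: exact derivative of f at 3, no step size to tune.
      ad = f(Dual(3.0, 1.0)).dot          # exactly 7.0

      # Finite differences: approximate, and sensitive to the choice of h.
      h = 1e-5
      fd = (f(3.0 + h) - f(3.0)) / h      # close to 7, but not exact
      ```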

  24. Sep 2020
    1. For example, the one-pass (hardware) translator generated a symbol table and reverse Polish code as in conventional software interpretive languages. The translator hardware (compiler) operated at disk transfer speeds and was so fast there was no need to keep and store object code, since it could be quickly regenerated on-the-fly. The hardware-implemented job controller performed conventional operating system functions. The memory controller provided

      Hardware assisted compiler is a fantastic idea. TPUs from Google are essentially this. They're hardware assistance for matrix multiplication operations for machine learning workloads created by tools like TensorFlow.

  25. Aug 2020
  26. Jul 2020
    1. Determine whether the person using my computer is me, by training an ML model on data about how I use my computer. This is a project for the Intrusion Detection Systems course at Columbia University.
    1. Our membership inference attack exploits the observation that machine learning models often behave differently on the data that they were trained on versus the data that they “see” for the first time.

      How well would this work on some of the more recent zero-shot models?
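
      The quoted observation generalizes into a simple attack sketch: threshold the model's confidence, guessing "member" when confidence is high. The toy model and threshold below are assumptions for illustration, not the paper's actual attack:

      ```python
      # Sketch of the core membership-inference signal: models are often more
      # confident on training members than on unseen points, so thresholding
      # confidence separates "in" from "out". Toy nearest-neighbor-style
      # model; the 0.9 threshold is an arbitrary assumption.

      def confidence(model_train_set, x):
          """Toy 'model': confidence decays with distance to the training data."""
          nearest = min(abs(x - t) for t in model_train_set)
          return 1.0 / (1.0 + nearest)

      def infer_membership(model_train_set, x, threshold=0.9):
          """Guess that x was a training member if the model is very confident."""
          return confidence(model_train_set, x) >= threshold

      train = [0.0, 1.0, 2.0]
      member_guess = infer_membership(train, 1.0)      # high confidence: "in"
      nonmember_guess = infer_membership(train, 5.0)   # low confidence: "out"
      ```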

    1. data leakage (data from outside of your test set making it back into your test set and biasing the results)

      This sounds like the inverse of “snooping”, where information about the test data is inadvertently built into the model.
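
      A common concrete instance of this kind of leakage: fitting a preprocessing statistic on all the data before splitting. A sketch, where mean-centering stands in for any fitted preprocessor:

      ```python
      # A classic, subtle form of the leakage/"snooping" described above:
      # computing a preprocessing statistic (here, the mean) on ALL data
      # before splitting lets test-set information leak into training.

      def mean(xs):
          return sum(xs) / len(xs)

      data = [1.0, 2.0, 3.0, 100.0]   # last point is the held-out test set
      train, test = data[:3], data[3:]

      # Leaky: the test outlier shifts the statistic the model is fit with.
      leaky_center = mean(data)        # contaminated by the test point

      # Correct: compute the statistic from training data only, then reuse it.
      clean_center = mean(train)       # test data never touched
      ```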

  27. Jun 2020
  28. May 2020
    1. the network typically learns to use h(t) as a kind of lossy summary of the task-relevant aspects of the past sequence of inputs up to t

      The hidden state h(t) is a high-level representation of whatever happened until time step t.

    2. Parameter sharing makes it possible to extend and apply the model to examples of different forms (different lengths, here) and generalize across them. If we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when a specific piece of information can occur at multiple positions within the sequence.

      RNNs have the same parameters at each time step. This allows the network to generalize the inferred "meaning", even when it's inferred at different steps.
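
      A sketch of what "same parameters for each time step" means in code: one weight pair is reused inside the loop, so the same model handles any sequence length. Toy scalar RNN with hypothetical weights:

      ```python
      # One weight pair (w_h, w_x) is reused at every time step, so the same
      # RNN runs on sequences of any length and h(t) acts as a lossy summary
      # of the inputs seen so far.

      import math

      def rnn(w_h, w_x, inputs, h0=0.0):
          """Scalar RNN: h(t) = tanh(w_h * h(t-1) + w_x * x(t)), same weights each step."""
          h = h0
          states = []
          for x in inputs:      # the loop, not new parameters, handles length
              h = math.tanh(w_h * h + w_x * x)
              states.append(h)
          return states

      # The same two parameters process sequences of different lengths,
      # and agree on the shared prefix.
      short = rnn(0.5, 1.0, [1.0, 2.0])
      long = rnn(0.5, 1.0, [1.0, 2.0, 0.5, -1.0, 3.0])
      ```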

    1. Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
  29. Apr 2020
    1. Python contributed examples. Mic VAD Streaming: this example demonstrates getting audio from the microphone, running Voice-Activity-Detection and then outputting text. VAD Transcriber: this example demonstrates VAD-based transcription with both console and graphical interfaces. Full source code available on https://github.com/mozilla/DeepSpeech-examples.
    1. Python API Usage example. Examples are from native_client/python/client.cc. Creating a model instance and loading the model:

       ds = Model(args.model)

       Performing inference:

       if args.extended:
           print(metadata_to_string(ds.sttWithMetadata(audio, 1).transcripts[0]))
       elif args.json:
           print(metadata_json_output(ds.sttWithMetadata(audio, 3)))
       else:
           print(ds.stt(audio))
    1. DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier. NOTE: This documentation applies to the 0.7.0 version of DeepSpeech only. Documentation for all versions is published on deepspeech.readthedocs.io. To install and use DeepSpeech all you have to do is:

       # Create and activate a virtualenv
       virtualenv -p python3 $HOME/tmp/deepspeech-venv/
       source $HOME/tmp/deepspeech-venv/bin/activate
       # Install DeepSpeech
       pip3 install deepspeech
       # Download pre-trained English model files
       curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.pbmm
       curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.scorer
       # Download example audio files
       curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/audio-0.7.0.tar.gz
       tar xvf audio-0.7.0.tar.gz
       # Transcribe an audio file
       deepspeech --model deepspeech-0.7.0-models.pbmm --scorer deepspeech-0.7.0-models.scorer --audio audio/2830-3980-0043.wav

       A pre-trained English model is available for use and can be downloaded using the instructions below. A package with some example audio files is available for download in our release notes.
    1. import all the necessary libraries into our notebook. LibROSA and SciPy are the Python libraries used for processing audio signals.

       import os
       import librosa  # for audio processing
       import IPython.display as ipd
       import matplotlib.pyplot as plt
       import numpy as np
       from scipy.io import wavfile  # for audio processing
       import warnings
       warnings.filterwarnings("ignore")

       Data Exploration and Visualization helps us to understand the data as well as the pre-processing steps in a better way.
    2. TensorFlow recently released the Speech Commands Datasets. It includes 65,000 one-second long utterances of 30 short words, by thousands of different people. We’ll build a speech recognition system that understands simple spoken commands. You can download the dataset from here.
    3. Learn how to Build your own Speech-to-Text Model (using Python) Aravind Pai, July 15, 2019 Login to Bookmark this article (adsbygoogle = window.adsbygoogle || []).push({}); Overview Learn how to build your very own speech-to-text model using Python in this article The ability to weave deep learning skills with NLP is a coveted one in the industry; add this to your skillset today We will use a real-world dataset and build this speech-to-text model so get ready to use your Python skills!
    1. Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research. Use Keras if you need a deep learning library that: Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility). Supports both convolutional networks and recurrent networks, as well as combinations of the two. Runs seamlessly on CPU and GPU. Read the documentation at Keras.io. Keras is compatible with: Python 2.7-3.6.
    1. Installation in Windows Compatibility: > OpenCV 2.0 Author: Bernát Gábor You will learn how to setup OpenCV in your Windows Operating System!
    2. Here you can read tutorials about how to set up your computer to work with the OpenCV library. Additionally you can find very basic sample source code to introduce you to the world of the OpenCV. Installation in Linux Compatibility: > OpenCV 2.0
    1. OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in the commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. The library has more than 2500 optimized algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has more than 47 thousand people of user community and estimated number of downloads exceeding 18 million. The library is used extensively in companies, research groups and by governmental bodies. Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda, Toyota that employ the library, there are many startups such as Applied Minds, VideoSurf, and Zeitera, that make extensive use of OpenCV. OpenCV’s deployed uses span the range from stitching streetview images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detection of swimming pool drowning accidents in Europe, running interactive art in Spain and New York, checking runways for debris in Turkey, inspecting labels on products in factories around the world on to rapid face detection in Japan. 
It has C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.
    1. there is also strong encouragement to make code re-usable, shareable, and citable, via DOI or other persistent link systems. For example, GitHub projects can be connected with Zenodo for indexing, archiving, and making them easier to cite alongside the principles of software citation [25].
      • GitHub and GitLab technology focuses on text formats that can easily be recognized and read by machines/computers (machine readable).

      • Text mining is currently a key, fast-growing technology. Machine learning will not work without the raw material that text mining technology provides.

      • For this reason, journals, especially foreign publications, have long offered two versions of every paper they release: a PDF version (which is really no different from the paper of old) and an HTML version (which is machine readable).

      • Binary word processors such as Ms Word depend heavily on software technology (owned by business entities). Naturally, the codes for reading them are locked.

      • Even PDF, considered the easiest and safest way to share files, cannot be read by machines easily.

  30. Mar 2020
    1. a black software developer embarrassed Google by tweeting that the company’s Photos service had labeled photos of him with a black friend as “gorillas.”
    2. More than two years later, one of those fixes is erasing gorillas, and some other primates, from the service’s lexicon. The awkward workaround illustrates the difficulties Google and other tech companies face in advancing image-recognition technology
  31. Nov 2019
  32. Sep 2019
    1. At the moment, GPT-2 uses a binary search algorithm, which means that its output can be considered a ‘true’ set of rules. If OpenAI is right, it could eventually generate a Turing complete program, a self-improving machine that can learn (and then improve) itself from the data it encounters. And that would make OpenAI a threat to IBM’s own goals of machine learning and AI, as it could essentially make better than even humans the best possible model that the future machines can use to improve their systems. However, there’s a catch: not just any new AI will do, but a specific type; one that uses deep learning to learn the rules, algorithms, and data necessary to run the machine to any given level of AI.

      This is a machine-generated response from 2019. We are clearly closer than most people realize to machines that can pass a text-based Turing Test.

    1. Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the convolutional layer can be computed as a convolution of the neuron's weights with the input volume.[nb 2] Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture. Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure; for which we expect completely different features to be learned on different spatial locations. One practical example is when the inputs are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".

      Important terms you hear repeatedly. Great visuals and graphics at https://distill.pub/2018/building-blocks/

    1. Here's a playground where you can select different kernel matrices and see how they affect the original image, or build your own kernel. You can also upload your own image or use live video if your browser supports it. The sharpen kernel emphasizes differences in adjacent pixel values. This makes the image look more vivid. The blur kernel de-emphasizes differences in adjacent pixel values. The emboss kernel (similar to the sobel kernel and sometimes referred to as the same) gives the illusion of depth by emphasizing the differences of pixels in a given direction; in this case, along a line from the top left to the bottom right. The identity kernel leaves the image unchanged. How boring! The custom kernel is whatever you make it.

      I'm all about my custom kernels!
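
      The sharpen kernel from the playground can be applied by hand in a few lines, which makes "emphasizing differences in adjacent pixel values" concrete: a flat region passes through unchanged, while a pixel that differs from its neighbors is exaggerated. The tiny 3x3 images are made up for illustration:

      ```python
      # Minimal 2-D convolution with the sharpen kernel described above.

      sharpen = [[ 0, -1,  0],
                 [-1,  5, -1],
                 [ 0, -1,  0]]

      def convolve_pixel(image, kernel, r, c):
          """Apply a 3x3 kernel at pixel (r, c); interior pixels only, for brevity."""
          total = 0
          for dr in (-1, 0, 1):
              for dc in (-1, 0, 1):
                  total += kernel[dr + 1][dc + 1] * image[r + dr][c + dc]
          return total

      # A flat region is unchanged; a bright spot on a dark field is amplified.
      flat = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
      spot = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
      flat_out = convolve_pixel(flat, sharpen, 1, 1)   # no adjacent difference
      spot_out = convolve_pixel(spot, sharpen, 1, 1)   # difference exaggerated
      ```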

    1. We developed a new metric, UAR, which compares the robustness of a model against an attack to adversarial training against that attack. Adversarial training is a strong defense that uses knowledge of an adversary by training on adversarially attacked images. To compute UAR, we average the accuracy of the defense across multiple distortion sizes and normalize by the performance of an adversarially trained model; a precise definition is in our paper. A UAR score near 100 against an unforeseen adversarial attack implies performance comparable to a defense with prior knowledge of the attack, making this a challenging objective.

      @metric
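
      Reading the quote literally, UAR can be sketched as an average-then-normalize computation. The accuracy numbers below are made up, and the paper's precise definition may differ:

      ```python
      # Sketch of UAR as the quote describes it: average a defense's accuracy
      # across distortion sizes, then normalize by the same average for an
      # adversarially trained model. All accuracies are hypothetical.

      def uar(defense_accs, adv_trained_accs):
          """UAR ~ 100 * mean(defense acc) / mean(adversarially-trained acc)."""
          mean = lambda xs: sum(xs) / len(xs)
          return 100.0 * mean(defense_accs) / mean(adv_trained_accs)

      # Accuracy at increasing distortion sizes (made-up values):
      adv_trained = [0.80, 0.60, 0.40]   # defense with prior attack knowledge
      candidate = [0.72, 0.54, 0.36]     # defense being evaluated

      score = uar(candidate, adv_trained)   # close to the informed defense
      ```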

  33. Aug 2019
    1. Using multiple copies of a neuron in different places is the neural network equivalent of using functions. Because there is less to learn, the model learns more quickly and learns a better model. This technique – the technical name for it is ‘weight tying’ – is essential to the phenomenal results we’ve recently seen from deep learning.

      This parameter sharing allows CNNs, for example, to need far fewer parameters/weights than fully connected NNs.

    2. The known connection between geometry, logic, topology, and functional programming suggests that the connections between representations and types may be of fundamental significance.

      Examples for each?