49 Matching Annotations
  1. Nov 2021
  2. Jun 2021
  3. Mar 2021
  4. Feb 2021
    1. purely functional programming usually designates a programming paradigm—a style of building the structure and elements of computer programs—that treats all computation as the evaluation of mathematical functions.
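
      A minimal sketch of the idea (my own illustration, not from the quoted source): a pure function's result depends only on its arguments, with no mutation or I/O, so every call is just the evaluation of a mathematical function.

      ```ts
      // Pure function: output depends only on the input; no side effects.
      function sumOfSquares(xs: readonly number[]): number {
        return xs.reduce((acc, x) => acc + x * x, 0);
      }

      // Referential transparency: a call can be replaced by its value.
      const a = sumOfSquares([1, 2, 3]); // always 14
      const b = sumOfSquares([1, 2, 3]); // always 14, regardless of program state
      ```
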
  5. Jan 2021
    1. Help is coming in the form of specialized AI processors that can execute computations more efficiently and optimization techniques, such as model compression and cross-compilation, that reduce the number of computations needed. But it’s not clear what the shape of the efficiency curve will look like. In many problem domains, exponentially more processing and data are needed to get incrementally more accuracy. This means – as we’ve noted before – that model complexity is growing at an incredible rate, and it’s unlikely processors will be able to keep up. Moore’s Law is not enough. (For example, the compute resources required to train state-of-the-art AI models has grown over 300,000x since 2012, while the transistor count of NVIDIA GPUs has grown only ~4x!) Distributed computing is a compelling solution to this problem, but it primarily addresses speed – not cost.
  6. Dec 2020
  7. Nov 2020
    1. Note that when using sass (Dart Sass), synchronous compilation is twice as fast as asynchronous compilation by default, due to the overhead of asynchronous callbacks.

      If you assumed that compiling asynchronously was an optimization, this could be surprising.
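
      A rough sketch of the two call styles, assuming the modern `sass` (Dart Sass) npm package API (`compile` / `compileAsync`); the file path is illustrative only:

      ```ts
      import * as sass from "sass";

      // Synchronous: blocks the event loop while compiling, but avoids the
      // async-callback overhead the docs mention (about twice as fast by default).
      const syncResult = sass.compile("src/styles/main.scss");
      console.log(syncResult.css.length);

      // Asynchronous: frees the event loop, at the cost of compile speed.
      async function build(): Promise<void> {
        const asyncResult = await sass.compileAsync("src/styles/main.scss");
        console.log(asyncResult.css.length);
      }

      build();
      ```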

  8. Oct 2020
    1. The reason why we don't just create a real DOM tree is that creating DOM nodes and reading the node properties is an expensive operation which is what we are trying to avoid.
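
      A hedged sketch of the idea (my own illustration): the "virtual" node is a cheap plain object, and the expensive DOM work happens only when the tree is finally committed.

      ```ts
      // A virtual node is a plain object: cheap to create, diff, and discard.
      interface VNode {
        tag: string;
        props: Record<string, string>;
        children: (VNode | string)[];
      }

      const vnode: VNode = { tag: "button", props: { class: "primary" }, children: ["Save"] };

      // Only this step touches the real DOM, which is the expensive part.
      function render(node: VNode): HTMLElement {
        const el = document.createElement(node.tag);
        for (const [key, value] of Object.entries(node.props)) {
          el.setAttribute(key, value);
        }
        for (const child of node.children) {
          el.append(typeof child === "string" ? child : render(child));
        }
        return el;
      }

      document.body.append(render(vnode));
      ```
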
    2. Parsing HTML has significant overhead. Being able to parse HTML statically, ahead of time can speed up rendering to be about twice as fast.
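
      One browser-level analogue of "parse ahead of time, reuse many times" (my own sketch, not necessarily the mechanism the quoted tool uses): parse markup into a <template> once, then clone the already-parsed tree on every render instead of re-parsing HTML strings.

      ```ts
      // Parse the HTML once, up front.
      const rowTemplate = document.createElement("template");
      rowTemplate.innerHTML = `<li class="row"><span class="name"></span></li>`;

      function renderRow(name: string): HTMLElement {
        // cloneNode reuses the parsed structure; no HTML parsing on this path.
        const row = rowTemplate.content.firstElementChild!.cloneNode(true) as HTMLElement;
        row.querySelector(".name")!.textContent = name;
        return row;
      }

      document.querySelector("ul")?.append(renderRow("Ada"), renderRow("Alan"));
      ```
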
    3. But is overhead always bad? I believe no — otherwise Svelte maintainers would have to write their compiler in Rust or C, because garbage collector is a single biggest overhead of JavaScript.
  9. Sep 2020
    1. Forwarding events from the native element through the wrapper element comes with a cost, so to avoid adding extra event handlers only a few are forwarded. For all elements except <br> and <hr>, on:focus, on:blur, on:keypress, and on:click are forwarded. For audio and video, on:pause and on:play are also forwarded.
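
      A hypothetical sketch of what per-event forwarding costs (not the library's actual implementation): each forwarded event type needs its own listener that re-dispatches on the wrapper, which is why only a small allowlist is forwarded.

      ```ts
      // Forward a fixed allowlist of events from the wrapped native element
      // to an outer target; every extra event type adds another listener.
      const FORWARDED = ["focus", "blur", "keypress", "click"] as const;

      function forwardEvents(inner: HTMLElement, outer: EventTarget): void {
        for (const type of FORWARDED) {
          inner.addEventListener(type, (e) => {
            outer.dispatchEvent(new Event(type, { bubbles: e.bubbles }));
          });
        }
      }
      ```
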
  10. Aug 2020
  11. Jun 2020
  12. May 2020
  13. Apr 2020
    1. Our approach strikes a balance between privacy, computation overhead, and network latency. While single-party private information retrieval (PIR) and 1-out-of-N oblivious transfer solve some of our requirements, the communication overhead involved for a database of over 4 billion records is presently intractable. Alternatively, k-party PIR and hardware enclaves present efficient alternatives, but they require user trust in schemes that are not widely deployed yet in practice. For k-party PIR, there is a risk of collusion; for enclaves, there is a risk of hardware vulnerabilities and side-channels.
  14. Mar 2020
    1. 10,000 CPU cores

      10,000 CPU cores for 2 weeks.

      Question: where can we find 10,000 CPU cores in China? AWS? Ali? Tencent?

  15. Oct 2019
    1. The world can be resolved into digital bits, with each bit made of smaller bits. These bits form a fractal pattern in fact-space. The pattern behaves like a cellular automaton. The pattern is inconceivably large in size and dimensions. Although the world started simply, its computation is irreducibly complex.
    2. categorical formalism should provide a much needed high level language for theory of computation, flexible enough to allow abstracting away the low level implementation details when they are irrelevant, or taking them into account when they are genuinely needed. A salient feature of the approach through monoidal categories is the formal graphical language of string diagrams, which supports visual reasoning about programs and computations. In the present paper, we provide a coalgebraic characterization of monoidal computer. It turns out that the availability of interpreters and specializers, that make a monoidal category into a monoidal computer, is equivalent with the existence of a *universal state space*, that carries a weakly final state machine for any pair of input and output types. Being able to program state machines in monoidal computers allows us to represent Turing machines, to capture their execution, count their steps, as well as, e.g., the memory cells that they use. The coalgebraic view of monoidal computer thus provides a convenient diagrammatic language for studying computability and complexity.

      monoidal (category -> computer)

  16. Jun 2019
    1. Currently, when we say fractal computation, we mean simulating fractals using binary operations. What if "binary" instead emerges on fractals? Can we find a new computational realm that can simulate binary? Could "fractal computing" be a lower-level approach underlying our current binary understanding of computation?

  17. May 2019
    1. There’s a bug in the evolutionary code that makes up our brains.

      Saying it's a "bug" implies that it's bad. But something this significant likely improved our evolutionary fitness in the past. This "bug" is more of a previously-useful adaptation. Whether it's still useful or not is another question, but it might be.

  18. Aug 2018
    1. Another way to use a classification system is to consider if there are other possible values that could be used for a given dimension.

      Future direction: Identify additional sample values and examples in the literature or in situ to expand the options within each dimension.

    2. For researchers looking for new avenues within human computation, a starting point would be to pick two dimensions and list all possible combinations of values.

      Future direction: Apply two different human computation dimensions to imagine a new approach.

    3. These properties formed three of our dimensions: motivation, human skill, and aggregation.

      These dimensions were inductively revealed through a search of the human computation literature.

      They contrast with properties that cut across human computation systems: quality control, process order, and task-request cardinality.

    4. A subtle distinction among human computation systems is the order in which these three roles are performed. We consider the computer to be active only when it is playing an active role in solving the problem, as opposed to simply aggregating results or acting as an information channel. Many permutations are possible.

      The three roles in human computation — requester (R), worker (W), and computer (C) — can be ordered in four different ways:

      C > W > R // W > R > C // C > W > R > C // R > W

    5. The classification system we are presenting is based on six of the most salient distinguishing factors. These are summarized in Figure 3.

      Classification dimensions: Motivation, Quality control, Aggregation, Human skill, Process order, Task-Request Cardinality

    6. "... groups of individuals doing things collectively that seem intelligent.” [41]

      Collective intelligence definition.

      Per the authors, "collective intelligence is a superset of social computing and crowdsourcing, because both are defined in terms of social behavior."

      Collective intelligence is differentiated from human computation because the latter doesn't require a group.

      It is differentiated from crowdsourcing because it doesn't require a public crowd and it can happen without an open call.

    7. Data mining can be defined broadly as: “the application of specific algorithms for extracting patterns from data.” [17]

      Data mining definition

      No human is involved: a computer extracts the patterns from the data.

    8. “... applications and services that facilitate collective action and social interaction online with rich exchange of multimedia information and evolution of aggregate knowledge...” [48]

      Social computing definition

      Humans perform a social role while communication is mediated by technology. The interaction between the human social role and computer-mediated communication (CMC) is key here.

    9. The intersection of crowdsourcing with human computation in Figure 1 represents applications that could reasonably be considered as replacements for either traditional human roles or computer roles.

      The authors give the example of language translation, which could be performed by a machine (when speed and cost matter) or via crowdsourcing (when quality matters).

    10. “Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” [24]

      Crowdsourcing definition

      The labor process of a designated worker is replaced by the public.

    11. modern usage was inspired by von Ahn’s 2005 dissertation titled "Human Computation" [64] and the work leading to it. That thesis defines the term as: “...a paradigm for utilizing human processing power to solve problems that computers cannot yet solve.”

      Human computation definition.

      Problem solving by human reasoning rather than by a computer.

    12. When classifying an artifact, we consider not what it aspires to be, but what it is in its present state.

      A criterion for determining whether an artifact is a product of human computation.

    13. human computation does not encompass online discussions or creative projects where the initiative and flow of activity are directed primarily by the participants’ inspiration, as opposed to a predetermined plan designed to solve a computational problem.

      What human computation is not.

      The authors cite Wikipedia as an example of something that is not human computation.

      "Wikipedia was designed not to fill the place of a machine but as a collaborative writing project in place of the professional encyclopedia authors of yore."

    14. Human computation is related to, but not synonymous with terms such as collective intelligence, crowdsourcing, and social computing, though all are important to understanding the landscape in which human computation is situated.

  19. Jun 2018
    1. So far, we have dealt with self-reference, but the situation is quite similar with the notion of self-modification. Partial self-modification is easy to achieve; the complete form goes beyond ordinary mathematics and anything we can formulate. Consider, for instance, recursive programs. Every recursive program can be said to modify itself in some sense, since (by the definition of recursiveness) the exact operation carried out at time t depends on the result of the operation at t-1, and so on: therefore, the final "shape" of the transformation is getting defined iteratively, in runtime (a fact somewhat obscured by the usual way in which recursion is written down in high-level programming languages like C). At the same time, as we can expect, to every finite recursive program there belongs an equivalent "straight" program, that uses no recursion at all, and is perfectly well defined in advance, so that it does not change in any respect; it is simply a fixed sequence of a priori given elementary operations.

      So unbounded recursion automatically implies a form of self-reference and self-modification?
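
      A small sketch of the equivalence the passage describes (illustrative only): the recursive form's shape unfolds at runtime, while the "straight" form is a fixed sequence of operations defined in advance.

      ```ts
      // Recursive form: what remains to be computed is determined at runtime,
      // each call depending on the result of the next one.
      function factorialRec(n: number): number {
        return n <= 1 ? 1 : n * factorialRec(n - 1);
      }

      // Equivalent "straight" form: a fixed, a-priori loop of elementary
      // operations, with no self-reference at all.
      function factorialIter(n: number): number {
        let acc = 1;
        for (let i = 2; i <= n; i++) acc *= i;
        return acc;
      }

      console.log(factorialRec(5), factorialIter(5)); // 120 120
      ```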

  20. Aug 2016
    1. That was in 1960. If computing power doubles every two years, we’ve undergone about 25 doubling times since then, suggesting that we ought to be able to perform Glushkov’s calculations in three years – or three days, if we give him a lab of three hundred sixty five computers to work with.

      The last part of this sentence seems ignorant of Amdahl's Law: giving him 365 computers only speeds up the parallelizable part of the calculation, so the serial portion still sets a hard floor on the total time.
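
      For reference, the standard Amdahl's Law bound (not from the quoted article): if only a fraction p of the work can be parallelized across n machines, the speedup is

      ```latex
      S(n) = \frac{1}{(1 - p) + p/n},
      \qquad
      \lim_{n \to \infty} S(n) = \frac{1}{1 - p}
      ```

      so even with 365 computers, a serial fraction of 10% caps the speedup at 10x.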

  21. Dec 2015
    1. Why use Storm? Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use! Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Storm integrates with the queueing and database technologies you already use. A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.

      stream computation