22 Matching Annotations
  1. Oct 2019
    1. The world can be resolved into digital bits, with each bit made of smaller bits. These bits form a fractal pattern in fact-space. The pattern behaves like a cellular automaton. The pattern is inconceivably large in size and dimensions. Although the world started simply, its computation is irreducibly complex.
    2. categorical formalism should provide a much needed high level language for theory of computation, flexible enough to allow abstracting away the low level implementation details when they are irrelevant, or taking them into account when they are genuinely needed. A salient feature of the approach through monoidal categories is the formal graphical language of string diagrams, which supports visual reasoning about programs and computations. In the present paper, we provide a coalgebraic characterization of monoidal computer. It turns out that the availability of interpreters and specializers, that make a monoidal category into a monoidal computer, is equivalent with the existence of a *universal state space*, that carries a weakly final state machine for any pair of input and output types. Being able to program state machines in monoidal computers allows us to represent Turing machines, to capture their execution, count their steps, as well as, e.g., the memory cells that they use. The coalgebraic view of monoidal computer thus provides a convenient diagrammatic language for studying computability and complexity.

      monoidal (category -> computer)
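
      A hedged aside: in classical computability terms, the paper's "interpreters and specializers" correspond to a universal program and the s-m-n construction. A minimal Python sketch of that correspondence, treating Python source strings as "programs" (illustrative names only; this is not the paper's categorical formalism, and the string-rewriting specializer is a deliberate hack):

      ```python
      # Illustrative sketch: "programs" are Python source strings defining main(...).
      # run() plays the interpreter (universal program); specialize() plays the
      # specializer (s-m-n theorem): it fixes one input and returns a new program.

      def run(program: str, *args):
          # Interpreter: execute program text on the given inputs.
          env = {}
          exec(program, env)
          return env["main"](*args)

      def specialize(program: str, a) -> str:
          # Specializer: bake the first argument into the program text.
          # (A brittle string-rewriting trick, purely for illustration.)
          body = program.replace("def main(a, x):", "def _orig(a, x):")
          return f"_A = {a!r}\n{body}\ndef main(x):\n    return _orig(_A, x)\n"

      ADD = "def main(a, x):\n    return a + x\n"
      add5 = specialize(ADD, 5)          # a new one-argument program
      assert run(add5, 2) == run(ADD, 5, 2) == 7
      ```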

  2. Jun 2019
    1. Currently, when we say "fractal computation", we mean simulating fractals using binary operations. What if "binary" instead emerged on fractals? Can we find a new computational realm that can simulate binary? Could "fractal computing" be a lower-level approach underlying our current binary understanding of computation?

  3. May 2019
    1. There’s a bug in the evolutionary code that makes up our brains.

      Saying it's a "bug" implies that it's bad. But something this significant likely improved our evolutionary fitness in the past. This "bug" is better seen as a previously useful adaptation. Whether it's still useful is another question, but it might be.

  4. Aug 2018
    1. Another way to use a classification system is to consider if there are other possible values that could be used for a given dimension.

      Future direction: Identify additional sample values and examples in the literature or in situ to expand the options within each dimension.

    2. For researchers looking for new avenues within human computation, a starting point would be to pick two dimensions and list all possible combinations of values.

      Future direction: Apply two different human computation dimensions to imagine a new approach.
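
      One literal way to follow that suggestion, sketched in Python. The value lists below are illustrative samples in the spirit of the paper's taxonomy, not its exhaustive lists:

      ```python
      from itertools import product

      # Illustrative sample values for two of the six dimensions; assumed
      # here for the sketch, not quoted verbatim from the paper.
      motivation = ["pay", "altruism", "enjoyment", "reputation", "implicit work"]
      aggregation = ["collection", "statistical processing", "iterative improvement", "none"]

      # Each pairing is a candidate design point for a new human computation system.
      for m, a in product(motivation, aggregation):
          print(f"motivation={m} x aggregation={a}")
      ```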

    3. These properties formed three of our dimensions: motivation, human skill, and aggregation.

      These dimensions were inductively revealed through a search of the human computation literature.

      They contrast with properties that cut across human computation systems: quality control, process order, and task-request cardinality.

    4. A subtle distinction among human computation systems is the order in which these three roles are performed. We consider the computer to be active only when it is playing an active role in solving the problem, as opposed to simply aggregating results or acting as an information channel. Many permutations are possible.

      3 roles in human computation — requester, worker and computer — can be ordered in 4 different ways:

      C > W > R // W > R > C // C > W > R > C // R > W

    5. The classification system we are presenting is based on six of the most salient distinguishing factors. These are summarized in Figure 3.

      Classification dimensions: Motivation, Quality control, Aggregation, Human skill, Process order, Task-Request Cardinality

    6. "... groups of individuals doing things collectively that seem intelligent.” [41]

      Collective intelligence definition.

      Per the authors, "collective intelligence is a superset of social computing and crowdsourcing, because both are defined in terms of social behavior."

      Collective intelligence is differentiated from human computation because the latter doesn't require a group.

      It is differentiated from crowdsourcing because it doesn't require a public crowd and it can happen without an open call.

    7. Data mining can be defined broadly as: “the application of specific algorithms for extracting patterns from data.” [17]

      Data mining definition

      No human is involved; the patterns are extracted from the data by algorithms alone.

    8. “... applications and services that facilitate collective action and social interaction online with rich exchange of multimedia information and evolution of aggregate knowledge...” [48]

      Social computing definition

      Humans perform a social role while communication is mediated by technology. The interaction between the human social role and computer-mediated communication (CMC) is key here.

    9. The intersection of crowdsourcing with human computation in Figure 1 represents applications that could reasonably be considered as replacements for either traditional human roles or computer roles.

      The authors give the example of language translation, which could be performed by a machine (when speed and cost matter) or via crowdsourcing (when quality matters).

    10. “Crowdsourcing is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call.” [24]

      Crowdsourcing definition

      The labor of a designated worker is replaced by the public.

    11. modern usage was inspired by von Ahn’s 2005 dissertation titled "Human Computation" [64] and the work leading to it. That thesis defines the term as: “...a paradigm for utilizing human processing power to solve problems that computers cannot yet solve.”

      Human computation definition.

      Problem solving by human reasoning and not a computer.

    12. When classifying an artifact, we consider not what it aspires to be, but what it is in its present state.

      Criterion for determining when/if the artifact is a product of human computation.

    13. human computation does not encompass online discussions or creative projects where the initiative and flow of activity are directed primarily by the participants’ inspiration, as opposed to a predetermined plan designed to solve a computational problem.

      What human computation is not.

      The authors cite Wikipedia as an example of something that is not human computation.

      "Wikipedia was designed not to fill the place of a machine but as a collaborative writing project in place of the professional encyclopedia authors of yore."

    14. Human computation is related to, but not synonymous with terms such as collective intelligence, crowdsourcing, and social computing, though all are important to understanding the landscape in which human computation is situated.

  5. Jun 2018
    1. So far, we have dealt with self-reference, but the situation is quite similar with the notion of self-modification. Partial self-modification is easy to achieve; the complete form goes beyond ordinary mathematics and anything we can formulate. Consider, for instance, recursive programs. Every recursive program can be said to modify itself in some sense, since (by the definition of recursiveness) the exact operation carried out at time t depends on the result of the operation at t-1, and so on: therefore, the final "shape" of the transformation is getting defined iteratively, in runtime (a fact somewhat obscured by the usual way in which recursion is written down in high-level programming languages like C). At the same time, as we can expect, to every finite recursive program there belongs an equivalent "straight" program, that uses no recursion at all, and is perfectly well defined in advance, so that it does not change in any respect; it is simply a fixed sequence of a priori given elementary operations.

      So unbounded recursion automatically implies a form of self-reference and self-modification?
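
      A minimal illustration of the quoted claim, in Python (a hypothetical example, not from the text): the recursive program's shape unfolds at runtime, while for any fixed input there is an equivalent fixed sequence of operations:

      ```python
      # Recursive version: the operation at step n depends on the result at
      # step n - 1, so the computation's "shape" is defined iteratively at runtime.
      def fact_recursive(n: int) -> int:
          return 1 if n == 0 else n * fact_recursive(n - 1)

      # Equivalent "straight" program for the fixed input n = 4: a fixed,
      # a priori given sequence of elementary operations, with no recursion.
      def fact_straight_4() -> int:
          r = 1
          r = r * 2
          r = r * 3
          r = r * 4
          return r

      assert fact_recursive(4) == fact_straight_4() == 24
      ```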

  6. Aug 2016
    1. That was in 1960. If computing power doubles every two years, we’ve undergone about 25 doubling times since then, suggesting that we ought to be able to perform Glushkov’s calculations in three years – or three days, if we give him a lab of three hundred sixty five computers to work with.

      The last part of this sentence seems ignorant of Amdahl's Law.
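
      For reference, Amdahl's Law bounds the speedup from N parallel machines by the serial fraction of the work: speedup = 1 / ((1 - p) + p / N), where p is the fraction of the computation that can be parallelized. Assuming, purely for illustration, p = 0.95, then 365 machines yield only 1 / (0.05 + 0.95 / 365) ≈ 19x, nowhere near the 365x speedup that "three days" implies.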

  7. Dec 2015
    1. Why use Storm? Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use! Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Storm integrates with the queueing and database technologies you already use. A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.

      stream computation
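
      A conceptual sketch in plain Python (generators standing in for Storm's spouts and bolts; this is not Storm's actual API) of the pipeline idea the quote describes, where an unbounded source feeds transformation stages:

      ```python
      import itertools
      import random

      def spout():
          # Spout: emit an unbounded stream of tuples.
          for i in itertools.count():
              yield {"id": i, "value": random.random()}

      def filter_bolt(stream):
          # Bolt, stage 1: keep only tuples whose value clears a threshold.
          return (t for t in stream if t["value"] > 0.5)

      def count_bolt(stream):
          # Bolt, stage 2: attach a running count of surviving tuples.
          for n, t in enumerate(stream, start=1):
              yield {"count": n, **t}

      # "Topology": wire the stages together and sample the unbounded output.
      for t in itertools.islice(count_bolt(filter_bolt(spout())), 5):
          print(t)
      ```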