102 Matching Annotations
  1. Apr 2021
    1. Magicians, for instance, are masters at manipulating our intuitions via neglect. Suppress the right kind of information, and humans intuit exceptional entities and events. Is it simply a coincidence that we both suffer source-neglect and we intuit exceptional entities and events when reflecting on our behaviour?

      In short, "Free will and choice are magic tricks."

    2. sourcing lizard-hunting a la neuroscience has nothing to do with our experience of hunting lizards—so long as everything functions as it should. Sun-stroke is but one of countless, potential ‘choice-talk’ breakers here.

      Source-talk trumps choice-talk when there is a simple source to identify, as with sunstroke. We can deal with a sunstroked person far more easily by moving them into the shade than by giving a pep talk about how they should "stop choosing to be lazy and get going".

    1. Intuitively, then, source-talk trumps choice-talk when applied to the same behaviour.

      For ancient humans, some kinds of social situations could be handled by either choice-talk or source-talk: head injuries, seizures (aka "demonic possessions"), etc. Generally, these are inherently low-dimensional social situations, which allows source-talk to handle situations that would normally require choice-talk.

      In situations like that, since source-talk is more adaptive than choice-talk (being more accurate), evolution selected for an extra trait: "When both choice-talk and source-talk apply, use source-talk."

    2. Since the biocomplexity barrier assured that each mode would be cued the way it had always been cued since time immemorial, we could, ancestrally speaking, ignore our ignorance and generally trust our intuitions.

      Because humans didn't need to understand the difference between choice-talk and source-talk, they never evolved the understanding that there is a difference, or the understanding that they don't understand it.

      It's a metacognitive neglect.

    1. The limits of remembering, in other words, provide an elegant, entirely naturalistic, explanation for our metacognitive intuitions of spontaneity, the almost inescapable sense that thought has to represent some kind of fundamental discontinuity in being.

      Every moment of consciousness feels new and free from the previous moment, because the causal links from the previous moment to the current moment are not represented.

      "I freely choose to eat cake."
      
      "Free? You say you're free because you are extremely unaware of the processes that led you to this moment."
      
    1. As Kriegel says in Sources of Intentionality

      (2011)

      What do thoughts, hopes, paintings, words, desires, photographs, traffic signs, and perceptions have in common? They are all about something, are directed, are contentful—in a way chairs and trees, for example, are not. This book inquires into the source of this power of directedness that some items exhibit while others do not. An approach to this issue prevalent in the philosophy of the past half-century seeks to explain the power of directedness in terms of certain items' ability to reliably track things in their environment. A very different approach, with a venerable history and enjoying a recent resurgence, seeks to explain the power of directedness rather in terms of an intrinsic ability of conscious experience to direct itself. This book attempts a synthesis of both approaches, developing an account of the sources of such directedness that grounds it both in reliable tracking and in conscious experience.

    1. Blind Agents

      This section reviews Making-it-Explicit (Brandom, 1998), a serious attempt to explain how we explain. Brandom tried to reconcile the tension between the mechanical implicit and the philosophical implicit.

      He argued that, though we can't find intentionality in the physical and biological laws of nature, we can find it in the conversations people have in a society. Intentionality is a myth that becomes real when enough people vocalize that myth when they talk to each other.

      Kinda like money, really. Pieces of paper or lumps of gold become exchangeable for goods when enough people take them as exchangeable for goods.

      To make his argument, Brandom reviewed the history of explaining how we explain.

      Kant

      Kant argued that there are rules that are necessary for anyone to explain anything. For example, when we explain to someone why the sky is blue, we assume that they are "rational", are going to "take us seriously", will interpret the words we say the same way we interpret them, etc. All these necessary rules are moral rules, which makes them immune to scientific understanding. Kant calls these moral rules "practical reason" to distinguish them from scientific understanding of the world ("pure reason").

      In this way, Kant separated a few objects (God, immortal soul, free will) from scientific understanding, and that allowed him to simply assert that they must be true because we need those concepts to do things.

      Bakker would criticize this as Kant trying to do behavioral psychology by purely thinking inside his own little head. Kant's approach is doomed because introspection is an unreliable hack meant to work only for talking with people in daily life, and we can't even understand what actually goes on inside our own heads without science.

      Kant: "We need God, immortality of the soul, and free will in order to do things."

      Bakker: "Well, time to check that claim with behavioral psychology. I bet Kant's wrong!"

      Frege

      Frege argued that normative explanations are incompatible with causal explanations. If I explain why I helped you using behavioral neuroscience, I would have made a "modal mistake", since the right explanation must be normative rather than descriptive.

      Explication is an intrinsically normative activity. Making causal constraints explicit at most describes what systems will do, never prescribes what they should do.

      Since how we explain is constrained by normative things (as Kant argued), and normative things cannot be explained with descriptive statements (as Frege argued), any explanation for how we explain must be made of normative things.

      Wittgenstein

      Wittgenstein argued that the implicit assumption behind every explanation lies in the other people who hear the explanation. You can't be understood unless other people follow some shared rules. Telling others to follow a rule of understanding like the "law of noncontradiction" only works if the other people agree to that rule and agree with you about how to use it.

      "Don't contradict that statement you made before."
      
      "Why not? And did I really make a contradiction?"
      

      while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules

      Thus, an explanation of how people explain must explain why people follow rules. And this explanation of "why you should follow rules" is itself a normative rule requiring a normative explanation.

      "You can't say that! It's against rule X."
      
      "What's the problem?"
      
      "It's against the rule-rule to go against rule X!"
      
      "What's the problem?"
      
      "It's against the rule-rule-rule to go against rule-rule!"
      

      Overview

      Kant proposes that we are blind to the "things in themselves", such as God, immortal souls, and free will. This blindness allows us to imagine those things as much as we like, since science can't say we're wrong about them. Then, the necessity of those things for practical life lets him assert that they must be true.

      Wittgenstein proposes that we, as a society of people talking to each other, are blind to the rules that support conversations with other people.

      With those philosophers, Bakker agrees that our severe blindness is what gives rise to the properties of consciousness, but insists that consciousness can be fully described by science, explaining away all normative statements in the process.

    2. Integrating psychology and neuroscience: functional analyses as mechanism sketches

      We sketch a framework for building a unified science of cognition. This unification is achieved by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems. The core idea is that functional analyses are sketches of mechanisms, in which some structural aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are seamlessly integrated with multilevel mechanistic explanations.

      The received view is that functional analysis is autonomous and thus distinct from mechanistic explanation... As we argue, though, functional and mechanistic explanations are not distinct and autonomous from one another precisely because functional analysis, properly constrained, is a kind of mechanistic explanation—an elliptical mechanistic explanation. Thus, a correct understanding of functional analysis undermines the influential claim that explanation in psychology is distinct and autonomous from explanation in neuroscience

    3. Review of The Eliminativistic Implicit I

      When we talk and think and feel, we don't merely talk and think and feel what we can speak of, but also imply what we can't speak of. This is the question of the implicit. People have been trying to figure out how the implicit works: What are the hidden assumptions in an argument? What are the hidden motivations in a murder? Etc.

      One particular question of the implicit is very deep and is the question investigated by the author. The question of "What are the implicit assumptions and motivations in an explanation?"

      In other words, try to explain how we explain. Explicate how we explicate.

      There are four main methods for explaining how we explain.

      1. The Everyday Implicit: Folk psychology, using concepts like "hope", "desire", "memories", etc, and adapted to solve various practical problems involving humans (and animals).

      2. The Philosophical Implicit: Theories about how humans are intentional agents, and use that to explain how humans (and intentional agents in general) work, including their explaining-behavior.

      3. The Psychological Implicit: Theories using things like cognitive psychology to describe the function and behavior of humans (and perhaps explainable AI), including their explaining-behavior.

      4. The Mechanical Implicit: Using neurobiology and physics to describe how humans work, including their explaining-behavior.

      Bakker proposes that the mechanical implicit is the most successful and powerful way to understand how we explain.

      The everyday implicit is a bag of heuristic tools sharpened by evolution for talking with other people (and thus it is easily broken and inconsistent when applied to situations never seen before in human evolution).

      The philosophical implicit is largely an artifact of theoretical misapplications of those heuristic tools.

      The psychological implicit is an empirical approximation using behavioural evidence. Basically, it models what goes on inside the brain by studying what humans (and animals) do. It will only ever be an approximation, much as classical thermodynamics approximates statistical mechanics.

    4. The explicit appeal to some rule, in other words, is actually an implicit appeal to some shared system of norms that we think will license our communicative act.

      "You can't say that! It's against rule X."

      "What's the problem?"

      "It's against the rule-rule to go against rule X!"

      "What's the problem?"

      "It's against the rule-rule-rule to go against rule-rule!"

      ...

    5. Agrippa’s trilemma

      In epistemology, the Münchhausen trilemma is a thought experiment used to demonstrate the impossibility of proving any truth, even in the fields of logic and mathematics. If it is asked how any given proposition is known to be true, proof may be provided. Yet that same question can be asked of the proof, and any subsequent proof. The Münchhausen trilemma is that there are only three options when providing further proof in response to further questioning:

      • The circular argument, in which the proof of some proposition is supported only by that proposition
      • The regressive argument, in which each proof requires a further proof, ad infinitum
      • The dogmatic argument, which rests on accepted precepts which are merely asserted rather than defended
    6. the natural necessity whose recognition is implicit in cognitive or theoretical activity, and the moral necessity whose recognition is implicit in practical activity, as species of one genus

      Kant proposed that we can't help but think about things in space and time, and can't help but act in terms of right and wrong. These two kinds of "can't help but" are the same kind: both are caused by our biological constraints.

    7. The ‘implicit’ is a kind of compensatory mechanism, a communicative prosthesis for neglect, a ‘blank box’ for the post facto proposal of various, abductively warranted precursors.

      "Okay, so I did this and that..."

      "How did you do it?"

      "Uhh... it's a black box function!"

    8. What Brandom provides instead is an elegant reprise of the problem’s history

      Brandom didn't give his own second-order explanation. He reviewed the history of second-order explanations.

    9. I mention this because any attempt to assay the difficulty of the problem of making making-explicit explicit would have explicitly begged the question of whether he (or anyone else) possessed the resources required to solve the problem.
      "I'm trying to explain how we explain things."
      
      "Oh?"
      
      "So, you know how we explain this and that? I call those first-order explanations. I'm trying to do a second-order explanation: an explanation for how we can do first-order explanation."
      
      "Ambitious. Nobody has done it before despite 2000 years of trying. What makes you think you can succeed?"
      
      <silence>
      
    10. Making explicit, Brandom is saying, has never been adequately made explicit—this despite millennia of philosophical disputation
      "I did it."
      
      "Explain how you did it."
      
      <explanation>
      
      "Explain how you could explain."
      
      <silence>
      

      Suppose we manage to finally explain how to explain (a "second-order explanation"), then we would be "done" with explaining. We don't need to explain how we did our second-order explanation, because the second-order explanation is an explanation itself.

      That is, a second-order explanation is self-explaining, and our work is finally done. No more second-order or higher-order explanation necessary.

    11. Making-it-Explicit: Reasoning, Representing, and Discursive Commitment

      What would something unlike us—a chimpanzee, say, or a computer—have to be able to do to qualify as a possible knower, like us? To answer this question at the very heart of our sense of ourselves, philosophers have long focused on intentionality and have looked to language as a key to this condition. Making It Explicit is an investigation into the nature of language—the social practices that distinguish us as rational, logical creatures—that revises the very terms of this inquiry. Where accounts of the relation between language and mind have traditionally rested on the concept of representation, this book sets out an alternate approach based on inference, and on a conception of certain kinds of implicit assessment that become explicit in language. Making It Explicit is the first attempt to work out in detail a theory that renders linguistic meaning in terms of use—in short, to explain how semantic content can be conferred on expressions and attitudes that are suitably caught up in social practices.

      At the center of this enterprise is a notion of discursive commitment. Being able to talk—and so in the fullest sense being able to think—is a matter of mastering the practices that govern such commitments, being able to keep track of one’s own commitments and those of others. Assessing the pragmatic significance of speech acts is a matter of explaining the explicit in terms of the implicit. As he traces the inferential structure of the social practices within which things can be made conceptually explicit, the author defines the distinctively expressive role of logical vocabulary. This expressive account of language, mind, and logic is, finally, an account of who we are.

    1. Precisely because the Everyday Implicit is so robustly functional, however, our ability to gerrymander experimental contexts around it should come as no surprise.

      Common cognitive biases and cognitive neglects usually don't fuck us up in daily life, because evolution would have weeded them out otherwise. Therefore, any experiment that tricks us into bad cognitive behavior (such as taking a losing bet) is likely to be very different from daily life in some regard, and so people often simply dismiss such experiments as "unrealistic", drawing a jagged boundary to separate "real life conditions" from "unrealistic lab conditions".

      Similarly, even though moral biases fuck us up in thought experiments (like the trolley problem), those biases remain functional enough in the real world that we can draw a jagged boundary to separate "real life morality" from "unrealistic thought experiment morality".

      Similarly, experiments about free will that clearly reveal the problems with the everyday understanding of free will are not everyday conditions, allowing us to draw a jagged boundary between "real free will" and "unrealistic lab un-free-will".

    2. The feeling of willing can be readily tricked, and thus stands revealed as interpretative

      interpretative, rather than directly experienced. It is possible to be wrong about what you willed. That is very important. You could be wrong about the color of the shoe; you could be wrong about willing. Perhaps you could even be wrong about feeling like you willed. There is very little that you can take for granted, even your own thoughts.

    3. And yet, whenever anyone attempted to make this Everyday Implicit explicit, they seemed to come up with something different.

      When a normal person tries to explain social rules to someone with very few social intuitions (such as some autistic people), they feel the difficulty of turning the Everyday Implicit into words. What does it really mean to hate, or to love, or to feel jealousy?

    4. it operates as a training interface, where the deliberative repetition of actions can be committed to automatic systems. So perhaps it should come as no surprise that, like behaviour, it is largely serial

      Perhaps if consciousness had evolved only for understanding what we see, it would have been parallel.

    5. No matter how often we revisit and infer, we simply cannot accumulate the data we need to arbitrate between our various posits.

      Because introspection does not allow us to observe what caused us to think "this caused that", it is impossible for us to, using our bare introspection, make a causal theory for how we come to think "this caused that".

      We can say "this caused that" but cannot say "this caused me to think that 'this caused that'". Not unless we wield brain science.

    6. So what brings about our assumptive sense of efficacy, our sense of causal power? Why should repeating the serial presentation of two phenomena produce the ‘feeling,’ as Hume terms it, that the first somehow determines the second?

      Bakker will start explaining how the mechanism that creates the sense of efficacy is implicit: inscrutable, effective, retrospectively accessible, inferentially accessible.

    7. something operative within us that nonetheless remains hidden from our capacity to consciously report

      brain mechanisms that we can't talk about. Brain science research is turning implicit brain mechanisms into explicit words that describe these mechanisms.

      Compared to brain science, philosophy has been remarkably useless at turning the implicit brain mechanisms into explicit words.

    8. intentionality as traditionally theorized, far from simply ‘making explicit’ what is ‘implicitly the case,’ is actually a kind of conceptual comedy of errors turning on heuristic misapplication and metacognitive neglect

      intentionality itself is an error, a wrong theory about how brains work.

    1. under erasure

      Sous rature is a strategic philosophical device originally developed by Martin Heidegger. Usually translated as 'under erasure', it involves the crossing out of <del>some text</del> a word within a text, but allowing it to remain legible and in place. Used extensively by <del>postmodernist</del> Jacques Derrida, it signifies that a word is "inadequate yet necessary";

    2. It is the medium for the normative commitments that underwrite our ability to change our minds about things, to revise our beliefs in the face of new evidence and correct our understanding when confronted with a superior argument.

      The author is trying to say that:

      Rational agents are rational because they follow rules (they have "normative behavior") of rationality (such as "law of excluded middle", "law of noncontradiction", etc). This rule-following behavior implies that these agents are agents, rather than just piles of meat and bones, because piles of meat and bones cannot follow rules.

      Or can they?? Bakker is about to argue that we don't actually behave rationally by following normative rules at all. Rather, we behave rationally because evolution and our physical structure have programmed us to, as blindly as rocks follow the law of gravity. We only appear to use normative rules because of our blindness to our own operations. It turns out rule-following behavior can be done by sufficiently complex piles of meat and bones, without an atom of normativity.

    3. Pinker claims “that science and ethics are two self-contained systems played out among the same entities in the world, just as poker and bridge are different games played with the same fifty-two-card deck” (55)–even though the problem is precisely that these two systems are anything but ‘self-contained.’

      Pinker assumes that science can never exculpate the "really bad" murderers because somehow there is a core part of ethics that is immune to scientific constraints. Sure, the science of ADHD has turned some of the ethics of hard work into psychiatry, but ADHD was never "really" ethical anyway. Science merely reveals what was never about ethics in the first place.

      But that's precisely how science creeps up on ethics: it reveals that more and more ethical things were never "really" about ethics anyway. Where does it end? If materialism is true, it will not end until every last bit of ethics is replaced with science.

    4. explanatory guesswork has more to do with social signalling than with ‘getting motivations right.’ This effectively blocks ‘motivational necessity’ as an argument securing the ineliminability of the intentional. It also raises the question of what kind of game are we actually playing when we play the so-called ‘game of giving and asking for reasons.’

      Consider a bunch of robots moving about and making beeping noises to each other in their conversation games. Some philosophers think that their conversation games are motivated by some kind of subjective "manifest image".

      But turns out they beep to get social benefits, motivated by a feedback loop inside their chips.

      The robots are us.

    5. Let’s call this the ‘Intentional Dissociation Problem,’ the problem of jettisoning the traditional metacognitive subject (person, mind, consciousness, being-in-the-world) while retaining some kind of traditional metacognitive intentionality

      The problem of "How could a pile of meat and bones think? How could a bunch of electronics and electricity explain things? How could a bunch of atoms say things with meaning?"

    6. take the science into account and see justice is done.

      A man in Manchester may have made legal history in the UK after being acquitted of murdering his father because he was sleepwalking at the time. He was found not guilty due to insanity and sent to a psychiatric hospital for an indefinite period of time.

    7. Tractatus, 5.542

      But it is clear that "A believes that p", "A thinks p", "A says p", are of the form "`p' says p": and here we have no co-ordination of a fact and an object, but a co-ordination of facts by means of a co-ordination of their objects.

      This shows that there is no such thing as the soul -- the subject, etc. -- as it is conceived in superficial psychology. A composite soul would not be a soul any longer.

    8. The more we learn about what we actually do, let alone how we are made, the more fractionate the natural picture–or what Sellars famously called the ‘scientific image’–of the human becomes

      The more biology we know, the more we see the human body as a machine made of little parts that can break, be replaced, be taken apart, be upgraded, downgraded, swapped for parts...

      This kind of bits-and-parts image of the human is very different from the subjective feeling of being human. The subjective feeling of being a whole, indivisible person -- that feeling is illusory.

    9. if Descartes’ metacognitive subject is ‘broken,’ an insufficient fragment confused for a sufficient whole, then how do we know that everything subjective isn’t likewise broken?

      Some philosopher: Descartes was wrong. After thinking about subjective experiences and some scientific data, we see that the person is actually quite social/contextual.

      Bakker: What makes you so sure you are right? The way in which Descartes was wrong is so deep, that your proposed replacement is probably also wrong. Maybe we have to throw away everything that we know subjectively, because subjective experience is so unreliable. We have to base everything on objective brain science...

  2. Oct 2020
    1. A few years ago Angus Maddison, an economic historian at the University of Groningen, in the Netherlands, plotted a graph of world economic product

      Measuring and Interpreting World Economic Performance 1500-2001

  3. Apr 2020
    1. The demographics show that white people in London will become a minority by 2010

      At the 2011 census, 59.8% of all London residents were white.

  4. Jan 2020
    1. History teaches us that attempts to patch over gaps in understanding by inventing invisible phenomena are both useful (they prevent science from stalling in the face of mysteries) and usually wrong

      One must note the many hypotheses about invisible phenomena that succeeded. The insinuation that the invisible phenomena of modern science are motivated by a psychological desire for spiritual stuff is just the author guessing.

      Consider atomic theory in the 18th-19th centuries. Atoms remained highly contentious until Einstein's explanation of Brownian motion, and unseen until much later.

      Or the discovery of Neptune.

      Or the imaginary numbers. Nothing in real life ever has a complex number of them, and yet they proved very useful.

      Or the zero, something that is literally invisible and nonexistent.

      As long as something is in principle observable, it is scientific. If it is not, but mathematically convenient, it is still useful.

    1. If every immune response is caused by damage and every immune response causes damage, then the organism should enter into a vicious circle of immune activation, which is luckily not the case.

      Wrong. The loop is a geometric series: it diverges if and only if the "response ratio" (new damage caused per unit of damage responded to) is at least 1.
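
      A quick sketch of the geometric-series argument, on the assumption that each unit of damage provokes a response causing \(r\) units of new damage:

      \[
      D_{\text{total}} = D_0 \sum_{k=0}^{\infty} r^k = \frac{D_0}{1-r} \qquad (r < 1),
      \]

      so total damage stays finite whenever \(r < 1\); the vicious circle only materializes when \(r \ge 1\).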

    1. entangled bank

      In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history…

      It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us.

      These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the external conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing Divergence of Character and the Extinction of less-improved forms.

      Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows.

      There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

    2. 3. Whither Individuality?

      This chapter is very similar to Gilbert, Scott F., Jan Sapp, and Alfred I. Tauber, ‘A Symbiotic View of Life: We Have Never Been Individuals’, The Quarterly Review of Biology, 87.4 (2012), 325–41 https://doi.org/10/gfz64k

    3. immune processes have diverse roles in the body’s ceaseless economy of internal cellular turnover, maintaining stable symbiotic relationships, and mediating external benign exchanges with the environment.

      In contrast to the "immune self" of clonal selection theory, which is defined by the absence of immune response to the self, here the immune system is continuously engaged with the body's own components.

    4. immune homunculus

      The immune system contains some autoantibodies (antibodies that attach to proteins inside one's own body). This is the immune system's representation of the body itself, the "immune homunculus".

      The notion of the immunological homunculus arose from the observations (1) that the healthy adaptive immune system is inclined to respond (T cell reactivity and autoantibodies) to particular sets of body molecules (self-antigens) and (2) that autoimmune diseases are characterized by sets of autoimmune reactivity to some of the very same self-antigens recognized by healthy subjects -- with an obvious difference in outcome. I termed this natural autoimmune structuring of the immune system, the immunological homunculus -- the immune system’s representation of the body.

    1. some descendant of \(X_i\) is in \(Z\).

      \(X_i\) or some descendant

  5. Dec 2019
    1. Riedl and Louis (2012) to write in a commentary that crawling in Drosophila larvae is a “no-brainer” (in their title).

      As early as 1962, Horridge demonstrated that the ventral ganglia (loosely equivalent to the spinal cord) of cockroaches and locust are sufficient to associate leg positions with an electric shock punishment [1]. In vertebrates and invertebrates, removal of the brain has little impact on the execution of basic motor patterns, even though the resulting behaviors often lack coordination 2, 3. Providing that they stay hydrated, decapitated adult flies will happily stand on their six legs and groom spontaneously or upon touching their mechanoreceptor bristles

    1. The abundance of both solid and liquid brains in nature suggests that evolution has found advantages with both approaches. Traditionally, brains are thought to be composed of components such as neurons that are fixed in space with communication among components through networks. For other distributed biological systems such as ant colonies, information exchange occurs locally between agents as they move through space. We suggest that there is probably a trade-off: mobile agents have more flexible communication patterns determined through movement but lack dedicated communication networks owing to the difficulty of maintaining fixed communication structures as agents move.

      Is this relevant for sociology? A society of people can have "liquid" or "solid" social communication networks. A communication network is liquid when it is easy and quick to cut old ties, form new ties, especially long-distance ties; the opposite for a solid network.

      Liquid network examples: scientist communication, business network, internet friendship...

      Solid network examples: feudal economy, caste system, bureaucracy, military command...

    1. Wiener’s resolute pacifist stance after Hiroshima brought him under close FBI watch and cast a shadow of suspicion over his ideas. The subsequent cybernetics scare in the United States further tinged this field with the red of communism, and set hurdles for federal funding of cybernetics research.

      Also, Project Cybersyn is socialist.

    2. U.S. research in artificial intelligence did receive a very significant boost at the time. Starting in 1963, the Information Processing Techniques Office (IPTO) at the Defense Department’s Advanced Research Projects Agency (ARPA) lavishly funded Project MAC at MIT and other artificial intelligence initiatives.

      MIT Computer Science and Artificial Intelligence Laboratory - Wikipedia:

      Project MAC would become famous for groundbreaking research in operating systems, artificial intelligence, and the theory of computation.

      It led to the development of Multics (a precursor of Unix).

    3. nobody has really been able to figure out how to make good use of this enormous pile of material

      The problem of "big dumb data", or "data swamp".

    1. Scala Naturae

      "ladder of nature"

      Here it denotes the levels of biological organization, from molecular biology to human culture (assumed to be the most complex level of biological organization).

    1. Molecular machines have no learning capacity

      Not true. Molecular machines can be deformed by experience and keep memories in their deformation. This is "conformational memory".

  6. Nov 2019
    1. There’s nothing quite as synonymous with summer as the beach

      For me, it's cicadas, exams, and the smell of air conditioning.

    1. envelope protection

      Flight envelope protection is a human machine interface extension of an aircraft's control system that prevents the pilot of an aircraft from making control commands that would force the aircraft to exceed its structural and aerodynamic operating limits

    1. Starts of Intermediate Difficulty (SoID)

      It seems like common sense: it's the same idea as the zone of proximal development. Perhaps other ideas from the psychology of learning can be applied to ML too.

    1. Utopian techno-managerial experiments of Chilean communism

      Project Cybersyn

    2. accelerate the process

      In The Will to Power §898, Nietzsche talks about how a greater race of humans should be created so that the future would be greater. However, in order to create such a greater race, the current race of Europeans should be made even blander.

      the leveling process of European man is the great process which should not be checked: one should even accelerate it.

      This bland race would bring about the great race:

      Not merely a master race whose sole task is to rule, but a race with its own sphere of life, with an excess of strength for beauty, bravery, culture, manners to the highest peak of the spirit; an affirming race that may grant itself every great luxury — strong enough to have no need of the tyranny of the virtue-imperative, rich enough to have no need of thrift and pedantry, beyond good and evil; a hothouse for strange and choice plants.

    3. decoding and deterritorialization

      As noted before, capitalism "deterritorializes" by kicking things out of their meaningful place and putting a price on them, allowing them to be traded utterly out of their context.

      "Decoding" is the same idea. Perhaps a better word is "de-symbolizing". Consider the crucifix. It is a sacred symbol for Christians. Now slap a price tag on it. The symbolism is lost. 1 Crucifix = 1 Subway Sandwich.

    4. withdraw from the world market, as Samir Amin advises Third World countries to do

      Note that this advice was taken up in Khieu Samphan's doctoral thesis "Underdevelopment in Cambodia", and followed through by the Khmer Rouge, who withdrew Cambodia completely from the world, causing disaster.

    5. citing Nietzsche to re-activate Marx

      "On the Question of Free Trade" (Marx, 1848)

      But, in general, the protective system of our day is conservative, while the free trade system is destructive. It breaks up old nationalities and pushes the antagonism of the proletariat and the bourgeoisie to the extreme point. In a word, the free trade system hastens the social revolution. It is in this revolutionary sense alone, gentlemen, that I vote in favor of free trade

    6. As the circuit is incrementally closed, or intensified, it exhibits ever greater autonomy, or automation. It becomes more tightly auto-productive (which is only what ‘positive feedback’ already says). Because it appeals to nothing beyond itself, it is inherently nihilistic. It has no conceivable meaning beside self-amplification. It grows in order to grow. Mankind is its temporary host, not its master. Its only purpose is itself.

      Nick Land imagines capitalism to be a big cancerous thing. This is not necessarily how capitalism works in reality, but certainly how Nick Land imagines it to work.

      Clicker games are the best metaphor for this idea of capitalism. In a clicker game like Cookie Clicker, you make cookies not to eat them, but to convert them into capital:

      In economics, capital consists of assets that can enhance one's power to perform economically useful work.

      Cookies in this game lose all of their previous meaning (things to be eaten) and become pure capital, made to make more capital, ad infinitum.

      Nick Land imagines our capitalism becoming like this cookie-clicker capitalism: a giant complex of machines that sustains itself and makes more machines, growth for the sake of growth, utterly meaningless in human eyes. Humans would be its "bootloader", discarded once the machine complex becomes self-sustaining.

    7. Deterritorialization is the only thing accelerationism has ever really talked about.

      "Deterritorialization" is used by D&G to talk about the way capitalism works. Traditional people-things like religion, ethnicity, love, dream, etc, all have their time and place. They have a home and have their meaning-symbols. Capitalism comes and destroys their uniqueness, by giving a price to all of them, allowing them to be exchanged. This essentially makes them no longer fixed to a time, place, or meaning-system. This kicks them off their territories.

      Consider for example Tetlock's "taboo trade-offs", where people resist attempting to put prices on sacred things like human lives, religious beliefs, or national flags. Capitalism breaks their meaning by pricing them.

    8. Doing anything, at this point, would take too long. So instead, events increasingly just happen.

      Doing anything consciously would take too much time, so things happen without enough human thought about their consequences.

    9. The definite probability that the allotment of time to decision-making is undergoing systematic compression remains a neglected consideration, even among those paying explicit and exceptional attention to the increasing rapidity of change.

      The author emphasizes that there isn't much time left for thinking through the consequences of accelerationism: our society is changing much faster than in the past, when there was still enough time to think changes through.

    10. when it began to become self-aware, decades ago

      The author gives agency to a lot of things, such as an ideology. In other places, he calls such "living" ideologies hyperstition: a collection of ideas that become alive, and protects and grows itself by manipulating people whose brains contain such ideas.

  7. Oct 2019
    1. Traditional psychoanalysis is repressive.

      Ego is based on a mirror illusion of unity. A human is a collection of stuff, but the mirror shows only one physical object. A human seeing its container in the mirror mistakes it for itself.

      Linear time is a human convention.

      Schizophrenics lose meaning by having neither linear time nor an ego.

      Current consumption-based capitalism has found it profitable to make people a little schizophrenic. Ads and other commercial images make consumers go through a cycle:

      1. assume a new identity,
      2. buy products to satisfy the identity.
      3. make the identity go away (made easier by making the identity contentless).

      The Internet and its instant-buy made the cycle go really fast, but it can't possibly get much faster, since humans have a limit to reaction time. If this process is accelerated, capitalism can reach its limit of actual societal schizophrenia, and collapse.

      Why? Because schizophrenics don't have an ego, and the cycle of identity-based consumption breaks.

    2. if the schizophrenic flow transgresses a certain limit, ego identification becomes impossible altogether. In this scenario, the urge to buy would be utterly defused, and capitalism would become impossible.

      This style of thinking is Accelerationism, wherein people are encouraged to make capitalism faster and more pervasive (often to escape it, but not always).

    1. Detrending is removing a trend from a time series;

      Think of it as subtracting the low-order terms of a Taylor expansion to see the higher-order terms.
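
      A minimal sketch in Python, assuming a plain polynomial trend; the function and names here are illustrative, not from the annotated article:

      ```python
      import numpy as np

      def detrend(y, degree=1):
          """Subtract a low-order polynomial fit (the 'low Taylor terms'),
          leaving the higher-order structure as residuals."""
          t = np.arange(len(y))
          coeffs = np.polyfit(t, y, degree)  # fit the low-order trend
          trend = np.polyval(coeffs, t)      # evaluate it at each time step
          return y - trend

      # Example: a weak quadratic signal hiding under a strong linear trend.
      t = np.arange(100.0)
      series = 5.0 * t + 0.01 * t ** 2
      residual = detrend(series, degree=1)   # mostly the quadratic part remains
      ```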

    1. cultigens

      A cultigen is a plant that has been deliberately altered or selected by humans; it is the result of artificial selection.

    1. risks apparently decreasing

      assuming the assets in the bundle have uncorrelated returns. But in fact they were correlated (parts of one system), so the bundles carried much more risk than calculated.
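
      A back-of-envelope illustration, assuming \(n\) assets with equal variance \(\sigma^2\) and uniform pairwise correlation \(\rho\): the variance of the equal-weighted bundle is

      \[
      \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{\sigma^2}{n} + \left(1-\frac{1}{n}\right)\rho\,\sigma^2 \;\longrightarrow\; \rho\,\sigma^2 \quad (n \to \infty).
      \]

      With \(\rho = 0\) the risk shrinks toward zero as the bundle grows; with any \(\rho > 0\) it floors at \(\rho\sigma^2\), the part the risk models missed.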

    2. APT

      Arbitrage Pricing Theory

    3. intrafinancial system claims

      Claims inside the financial system. For example, lending between banks.

  8. Sep 2019
    1. maintenance of interglacial-like conditions

      keep Earth in the nice Holocene climate

    2. Is there a planetary threshold in the trajectory of the Earth System that, if crossed, could prevent stabilization in a range of intermediate temperature rises?

      Yes: there are tipping points.

  9. Aug 2019
    1. AIXI take as a starting assumption a single environment to be solved

      Not necessarily. It is possible to postulate several AIXI agents interacting, each playing the environment for the others.

      One could play "God" as the designer of environments, designing interesting teaching environments and instructive problems, which the other AIXIs deal with.

    2. Environments are kept only if they are not too hard for all of the agents in the population, or are not too easy for any of the agents. A copy of the highest-performing agent is transferred to the new environment, where it begins optimizing to try to solve it.

      Compare this with PowerPlay

    3. in a paper in Nature [30]
    4. open-ended search algorithms, meaning algorithms that endlessly generate new things [169]. In the AI-GA context, that would mean algorithms that endlessly generate an expanding set of challenging environments and solutions to each of those challenges.

      Another possible approach is through algorithms that aim for "empowerment", that is, they try to have more choices and increase their ability to determine the future. See for example "Changing the Environment Based on Empowerment as Intrinsic Motivation" (Entropy).

    5. Consider the task of teaching computers to see. The strategies, in increasing abstraction, are

      0. Direct programming of computer vision.
      1. Applying a hand-written neural network on some hand-picked examples, using hand-picked hyperparameters.
      2. Using a learning algorithm to pick a suitable network architecture and learning hyperparameters.
      3. Using a learning algorithm to pick informative initializations of the neural network before it encounters particular training sets.
      4. Using a teacher algorithm that finds/generates valuable learning environments.
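
      A toy sketch in Python of how levels 1, 2, and 4 nest. Everything here is a made-up stand-in (a "task" is a target number, a "model" a single parameter), not any real ML pipeline:

      ```python
      import random

      def train(step_size, task, iters=200):
          """Level 1: inner training loop with a hand-picked hyperparameter."""
          x = 0.0
          for _ in range(iters):
              candidate = x + random.uniform(-step_size, step_size)
              if abs(candidate - task) < abs(x - task):
                  x = candidate
          return abs(x - task)  # final loss on this task

      def tune(task):
          """Level 2: a learning algorithm that picks the hyperparameter."""
          return min((train(s, task), s) for s in (0.01, 0.1, 1.0, 10.0))

      def teacher(levels=4):
          """Level 4: a teacher generating environments of growing difficulty."""
          for difficulty in range(1, levels + 1):
              task = random.uniform(-(10.0 ** difficulty), 10.0 ** difficulty)
              loss, step = tune(task)
              print(f"difficulty {difficulty}: best step {step}, loss {loss:.3g}")

      teacher()
      ```
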
    6. As has been pointed out, switching from normal learning to meta-learning changes the burden on the researcher from designing learning algorithms to designing environments

      The performance of meta-learning algorithms critically depends on the tasks available for meta-training: in the same way that supervised learning algorithms generalize best to test points drawn from the same distribution as the training points, meta-learning methods generalize best to tasks from the same distribution as the meta-training tasks. In effect, meta-reinforcement learning offloads the design burden from algorithm design to task design. If we can automate the process of task design as well, we can devise a meta-learning algorithm that is truly automated.

    7. PowerPlay

      training an increasingly general problem solver by continually searching for the simplest still unsolvable problem
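
      A schematic of the PowerPlay loop in Python, drastically simplified with made-up stand-ins (tasks are integers and the "solver" is just its repertoire set; the real algorithm searches program space for (task, solver-modification) pairs):

      ```python
      def solves(repertoire, task):
          """Stand-in solvability test: a task counts as solved once learned."""
          return task in repertoire

      def powerplay(rounds=5):
          repertoire = set()  # everything the solver can already do
          for _ in range(rounds):
              # Find the simplest still-unsolvable task (simplest-first search).
              task = next(t for t in range(1000) if not solves(repertoire, t))
              # Modify the solver so it solves the new task WITHOUT losing
              # any previously solved task (the PowerPlay invariant).
              repertoire.add(task)
              assert all(solves(repertoire, t) for t in repertoire)
              print(f"learned task {task}; repertoire size {len(repertoire)}")

      powerplay()
      ```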

    8. However, these approaches have failed to create anything resembling an open-ended complexity explosion because the non-agent (abiotic) component of the environment they operate in is fixed.

      Fundamentally, the laws of physics are fixed, and yet evolution is open-ended. So rather than blaming it on the lack of openness in the environment, perhaps blame it on insufficient complexity and abstraction in the environment.

      Instead of simulating virtual soccer, imagine simulating a patch of grass, with full biochemical complexity. It would surely be open-ended.

    9. neuromodulation

      Neuromodulation is the physiological process by which a given neuron uses one or more chemicals to regulate diverse populations of neurons. This is in contrast to synaptic transmission in which an axonal terminal secretes neurotransmitters to target fast-acting receptors of only one particular partner neuron.

    10. The mimic path is unlikely to be the fastest path to general AI because it attempts to simulate all of the detail of biological brains irrespective of whether they can be ignored or abstracted by different, more efficient, machinery.

      Whole brain emulation by molecular-level simulation, if purely extrapolated from Moore's Law, might take another 100 years to achieve. See Whole Brain Emulation: A Roadmap

    11. The job of the learning algorithm is to produce an initial set of weights that can rapidly learn any task from the distribution

      Finding useful priors/inductive biases. In evolutionary psychology, this is exemplified by human inductive reasoning biased by evolution to reach conclusions quickly. See for example Adaptive Rationality: An Evolutionary Perspective on Cognitive Bias | Social Cognition
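
      A minimal sketch of this idea (meta-learning an initialization) in Python, using a Reptile-style update on toy 1-D regression tasks; the setup is mine, not the paper's:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def adapt(w, slope, lr=0.1, steps=5):
          """Inner loop: fit y = w * x to one task (its true slope) by gradient descent."""
          x = rng.normal(size=20)
          y = slope * x
          for _ in range(steps):
              grad = 2.0 * np.mean((w * x - y) * x)  # d/dw of mean squared error
              w = w - lr * grad
          return w

      w0 = 0.0                                  # the meta-learned initialization
      for _ in range(1000):
          slope = rng.uniform(-2.0, 2.0)        # sample a task from the distribution
          w0 += 0.01 * (adapt(w0, slope) - w0)  # Reptile: nudge init toward adapted weights

      print(f"meta-learned init: {w0:.3f}")     # ~0, the best single starting point
      ```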

    12. current machine learning community is mostly committed to the manual path, I advocate that we should shift investment to the AI-GA path to pursue both promising paths to general AI.

      Also, no mention of "seed AI" research, interesting omission.

    13. AI-GAs would thus better allow us to study and understand the space of possible intelligences

      Experimental xenopsychology. Alien intelligence. A sampling of the distribution of possible minds. However, any intelligence that evolves on earth could be biased in certain earth-bound ways, so the sampling would not be universal. It would still be more than directly asking humans to imagine possible minds, though.

    14. Presumably different instantiations of AI-GAs (either different runs of the same AI-GA or different types of AI-GAs) would lead to different kinds of intelligence, including different, alien cultures

      Gould's question about "replaying life's tape" comes to mind.

    1. originalist

      In the context of United States constitutional interpretation, originalism is a way to interpret the Constitution's meaning as stable from the time of enactment, which can be changed only by the steps set out in Article Five. The term originated in the 1980s.

      Originalism - Wikipedia

    2. we have never had to fundamentally rethink the energy basis of our way of life

      The collapse of the Western Roman Empire had an energy component. See Joseph Tainter's research on collapse and complexity.

  10. Jul 2019
    1. Jane Bennett’s assemblages

      It means a collection of things (human or not) that relate to each other and do things. For example, guns don't kill people, nor do people kill people. (Gun + people) kill people.

    2. The military are especially good at shouting [prescriptions] through the mouthpiece of human instructors who delegate back to themselves the task of explaining, in the rifle’s name, the characteristics of the rifle’s ideal user

      Rifleman's Creed - Wikipedia

      Without me, my rifle is useless. Without my rifle, I am useless. I must fire my rifle true. I must shoot straighter than my enemy who is trying to kill me. I must shoot him before he shoots me. I will ...

      My rifle and I know that what counts in war is not the rounds we fire, the noise of our burst, nor the smoke we make. We know that it is the hits that count. We will hit ..

    1. across ecosystems? To take just one example, when fish stocks fall in Ghanaian seas, hunting of bushmeat goes up and 41 land-based species go into decline. As hyperkeystones, we unite the entire world in a chain of falling dominoes
    2. humans, the hyperkeystone

    1. See the author's blog post In Defense of Soundbites (2 January 2011)

      soundbites have dropped in length for a variety of reasons — economic, political, historical, and professional. What’s more, they’ve been dropping for a long time, as new research suggests that newspaper quotations began shrinking in a similar way in the 1890s.

      Instead of soundbites, then, we should worry about the tone and focus of our political discourse. And there’s no doubt that this, too, has evolved.

      Elaborated in the story:

      Hallin has argued all along that television news in the 1960s and 1970s, which many take to be the genre’s golden age, was never actually that good. Stories were dull and disorganized; those long quotations would be followed by a couple of seconds of dead air. Early newspapers, in their time, were no different. The Boston Globe’s first issue, in 1872, devoted much of its front page to transcriptions of church sermons.

      as networks shortened their sound bites, they also changed the substance of their political coverage. They started using more in-house experts, pundits who looked less at what people said than at how they said it. TV news became more about strategy and the parsing of strategy — about buzzwords like “expectations” and “momentum” — than about the issues that presumably lie at the heart of politics. Journalists wanted to turn campaigns into larger narratives, and there was no easier narrative than covering politics as though it were a sport. Indeed, Ryfe found that the same thing happened with 19th-century journalists, who, as they professionalized, also “became handicappers of the political process.”

      Ironically, this note is nothing but sound bites!

    2. Journalists wanted to turn campaigns into larger narratives, and there was no easier narrative than covering politics as though it were a sport.

      Not the only narrative, of course, but the easiest one, certainly.

    1. See also the author's own take.

      If the Modernists loved revision so much that they kept at it throughout the literary process, including when their work was in proofs — and one of Sullivan’s key points is that these discrete stages actually encouraged revision — then why didn’t their printers and publishers complain? ... changing work in proofs is expensive.

      That's because the Modernists had the financial support to revise and to experiment with the rules of revision.

      In her memoir Shakespeare & Company, Sylvia Beach recalls Joyce’s publisher warning about “a lot of extra expenses with these proofs. . . . He suggested that I call Joyce’s attention to the danger of going beyond my depth; perhaps his appetite for proofs might be curbed.”

      But Beach explains that, for her, the most important thing was that Joyce could work as diligently and obsessively as he wanted to:

      I wouldn’t hear of such a thing. Ulysses was to be as Joyce wished, in every respect. I wouldn’t advise ‘real’ publishers to follow my example, nor authors to follow Joyce’s. It would be the death of publishing. My case was different. It seemed natural to me that the efforts and sacrifices on my part should be proportionate to the greatness of the work I was publishing.

    1. Fear of humans as apex predators has landscape-scale impacts from mountain lions to mice (2019)

      Apex predators such as large carnivores can have cascading, landscape-scale impacts across wild-life communities, which could result largely from the fear they inspire, although this has yet to be experimentally demonstrated.

      Humans have supplanted large carnivores as apex predators in many systems, and similarly pervasive impacts may now result from fear of the human ‘superpredator’.

      We conducted a landscape-scale playback experiment demonstrating that the sound of humans speaking generates a landscape of fear with pervasive effects across wildlife communities.

      • Large carnivores avoided human voices and moved more cautiously when hearing humans,
      • medium-sized carnivores became more elusive and reduced foraging.
      • Small mammals evidently benefited, increasing habitat use and foraging.

      Thus, just the sound of a predator can have landscape-scale effects at multiple trophic levels.

      Our results indicate that many of the globally observed impacts on wildlife attributed to anthropogenic activity may be explained by fear of humans.

    1. Hubert Humphrey

      He was the Democratic Party's nominee in the 1968 presidential election, losing to Republican nominee Richard Nixon.