- Last 7 days
-
tripleampersand.org
-
The superiority of Far Eastern Marxism. While the Chinese materialist dialectic denegativizes itself in the direction of schizophrenizing systems dynamics, progressively dissipating the historical fate of hierarchies in Tao-soaked special economic zones, a re-Hegelianized Western Marxism degenerates from a critique of political economy into a sympathetic monotheology of power, aligning itself with fascism against deregulation. The left drowns in nationalist conservatism, smothering its vestigial capacity for “hot” speculative mutation in a “cold” and depressive swamp of blame culture (Land, 2011, p. 447-48).
Translation.
Chinese Marxism is better than Western Marxism. Chinese Marxism unleashes the productive force of capital and rushes toward the future. Western Marxism looks back to the days before capitalism became dominant, blames capitalism, and says "another world is possible". It has stopped making up new ideas, stopped being creative ("hot"), and keeps repeating the same critiques and reading groups. It's quite boring ("cold").
-
socius
The first part of Capitalism and Schizophrenia undertakes a universal history and posits the existence of a separate socius (the social body that takes credit for production) for each mode of production: the earth for the tribe, the body of the despot for the empire, and capital for capitalism.
-
it risks perpetuating a more subtle form of colonialism
While Hui considers this an argument against accelerationism and Prometheanism, I can just as well take it as an argument for colonialism.
Ehh, you know how it goes with those theorists.
-
Land perceives capital as a contending flux of schizophrenic production; with capitalism, the disinhibition of modern syntheses (at least on the level of collective human experience) are realized by capital itself as this impersonal zone of transcendent subjectivity: the void of capital is the possible portal for shattering the transcendental screen that conditions the senses of human experience (and respectively the socius) in a space-time prison.
Land sees Capitalism as an alien invasion, piercing the bubble of the human environment. By following Capitalism, one might escape the prison of the human experience.
One cannot escape being human in the trivial sense, because being human is true by definition -- if I am a human simply by definition, then of course I can't escape being human. But one can escape "the human experience" in any nontrivial sense of the phrase. Humans don't strictly need to eat, or sleep, or age, or love, or any of the rest. One can escape any supposedly "necessary" human experience, because in theory there is no limit to human freedom.
Capitalism may allow one to escape any "necessary" human experience in practice.
-
Capital itself emerges as a new force (Land is heavily in dialogue here with Anti-Oedipus by Deleuze & Guattari)
Land says that in a Capitalist society, machines associate freely with one another.
There are abstract machines, such as ideas, scientific theories, patents, programs, and books, and concrete machines, such as railroads, hearts, and hydraulic presses. In a traditional society, certain machines may only be plugged into certain other machines, following a strictly prescribed pattern. Doing it any other way is simply not the way of the ancestors, and that is that.
In a Capitalist society,
Conservation of the old modes of production in unaltered form, was, on the contrary, the first condition of existence for all earlier industrial classes. Constant revolutionising of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation distinguish the bourgeois epoch from all earlier ones. All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.
-
Kim Namjoon, member of the group BTS.
Kim Nam-joon, known professionally as RM, is a South Korean rapper, songwriter, and record producer. Leader of the musical band BTS.
-
- Jun 2024
-
rsbakker.wordpress.com
-
It seems inconceivable that they do not know Foucault, that they have never encountered the word biopolitical.
It seems inconceivable to me how Foucauldian analysis could help an epigeneticist.
-
-
rsbakker.wordpress.com
-
Hegel’s critique of phrenology in the Phenomenology
Quante, Michael. "‘Reason… apprehended irrationally’: Hegel’s Critique of Observing Reason." Hegel’s phenomenology of spirit: a critical guide (2008): 91-111.
Hegel argues that meaningful connections between the functional aspects of the mind and the physical structure of the brain cannot be made. This argument against phrenology can be formalized as follows:
- Premise: Phrenology seeks to ground mental attributes in the physical structure of the brain.
- Premise: Phrenology assigns a dual role to the brain: it is both a passive physical object and the seat of consciousness.
- Premise: A purely physical entity cannot at the same time embody mental activity.
- Conclusion: The dual role assigned to the brain in phrenology is contradictory.
- Premise: Phrenology assigns the brain yet another dual role: it is both a mere physical part and the source of individual consciousness.
- Premise: Attributing mental predicates to physical processes is a metaphorical use of language that obscures the distinctions between biological and mental properties.
- Conclusion: The dual roles assigned to the brain in phrenology make it impossible to establish a meaningful connection between the mind and the brain.
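One hedged way to compress the first half of the argument in predicate-logic notation (my own rendering, not Quante's or Hegel's): let $P(x)$ mean "x is a purely physical entity," $M(x)$ mean "x embodies mental activity," and $b$ be the brain as phrenology construes it. Phrenology is committed to $P(b) \wedge M(b)$, the third premise asserts $\forall x\,(P(x) \to \neg M(x))$, and together they yield the contradiction:
$$P(b) \wedge M(b),\;\; \forall x\,(P(x) \to \neg M(x)) \;\vdash\; M(b) \wedge \neg M(b)$$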
-
-
rsbakker.wordpress.com
-
where the deconstruction of speculative claims possesses or at least seems to possess clear speculative effects, the deconstruction of scientific claims does not, as a rule, possess any scientific effects. BBT, recall, is an empirical theory
Attempts to deconstruct scientific claims gave us the Sokal affair, which was deeply embarrassing for postmodernism.
-
Informatic insufficiency is parasitic on sufficiency, as it has to be, given the mechanistic nature of neural processing. For any circuit involving inputs and outputs, differences must be made. Sufficient or not, the system, if it is to function at all, must take it as such.
A brain trying to catch itself in the act of the sufficiency illusion is only stuck in yet another sufficiency illusion, because the regress has to end somewhere. Metacognition is very slow (roughly 10 Hz) and very small (roughly 10 items at a time), so in practice the regress stops after one or two acts of "catching itself in the act of the sufficiency illusion" -- a toy sketch of this bounded regress follows the exchange below.
"Ah ha, I am subject to the sufficiency illusion!"
"Ah ha, I am subject to the sufficiency illusion-sufficiency illusion!"
"I give up, this is too meta. And pointless, since I'm just always in a sufficiency illusion anyway."
-
On BBT, all traditional and metacognitive accounts of the human are the product of extreme informatic poverty. Ironically enough, many have sought intentional asylum within that poverty in the form of apriori or pragmatic formalisms, confusing the lack of information for the lack of substantial commitment, and thus for immunity against whatever the sciences of the brain may have to say. But this just amounts to a different way of taking refuge in obscurity. What are ‘rules’? What are ‘inferences’? Unable to imagine how science could answer these questions, they presume either that science will never be able to answer them, or that it will answer them in a manner friendly to their metacognitive intuitions. Taking the history of science as its cue, BBT entertains no such hopes. It sees these arguments for what they happen to be: attempts to secure the sufficiency of low-dimensional, metacognitive information, to find gospel in a peephole glimpse.
This describes the approach of Sellars, Brandom, and Brassier, all of whom Bakker has criticized on the blog.
They admit that science has priority in the scientific realm, but hold that what we take ourselves to be is not something that can be true or false; it consists of games, rules, things we play, a game of "let's pretend we are persons".
This is a much better position. It does not attempt to tell science that science is a building founded upon the ground of philosophy (unlike Kant, or Heidegger), and it does not try to make scientifically testable predictions and get embarrassed in the process (unlike those who sought to study the "quantum of consciousness" because they thought free will is real and therefore something quantum-mechanical must be going on in the brain, or the philosopher who argued that Anton's syndrome is impossible because it is philosophically impossible, or the psychoanalysts who interpret Cotard's syndrome as some manifestation of childhood trauma).
The problem with this position is as follows:
-
Is science really based on a game of giving and taking reasons? If not, then there's no guarantee that science would protect the game of "let's pretend we are persons who make decisions, have plans, hope for love, etc". The juggernaut of science may eventually crush the "manifest image of man" under its wheels, migrate to a society of unconscious biorobots, and run even faster as a result!
-
Philosophers are unable to figure out what rules, games, normativity, etc., are! They can't agree, after centuries of disputation. Any working consensus will have to come from science, and what if science finally shows that rules and games are nothing like what Sellars, Brandom, etc., thought they are? If so, then not only is the manifest image not the scientific image, not only is it unnecessary for working scientists, it is not even what the philosophers say it is. It is as if the philosophers have been stuck in Plato's Cave, mistaking the shadow-play for optics.
-
-
Because it is blind to itself, it cannot, temporally speaking, differentiate itself from itself. As a result, such acts seem to arise from some reflexive source. The absence of information, once again, means the absence of distinction, which means identity.
The brain is too slow to catch itself in the act of changing, and too slow to catch itself in the act of being too slow, so the brain just defaults to treating itself as unchanging, identical through time.
-
the ‘fundamental synthesis’ described by Hagglund is literally a kind of ‘flicker fusion,’ a metacognitive presumption of identity where there is none. It is a kind of mandatory illusion: illusory because it egregiously mistakes what is the case, and mandatory because, like the illusion of continuous motion in film, it involves basic structural capacities that cannot be circumvented and so ‘seen through.’
I am a brain that moves through time, synapses and biochemicals flickering furiously. The flicker is too fast for the brain to catch itself in the change. The brain flickers about 100 times a second (gamma brainwave), but conscious thought comes at most about 10 times a second. For the brain to "catch itself in the change," it would have to somehow self-represent 10 times faster.
So the brain can't catch itself in the change. Imagine a single second and 100 brain-states during that second. Each brain-state differs from the previous one at trillions of synapses, but the brain is stuck not representing the differences between brain-0, brain-1, ..., brain-9, because otherwise it would have to squeeze 10 units of change into 1 unit of reflective thought. There is simply not enough time.
So the brain sees itself as mostly unchanging despite there being plenty of change, because not only is it too slow to see its own change, it is too slow to represent its own slowness. If it spent all its time representing its own slowness, it would promptly die of starvation, because a brain is built not for contemplation but for surviving.
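A minimal numerical sketch of that undersampling point, assuming only the rough figures above (about 100 state updates per second against about 10 reports per second); the averaging step is my own stand-in for whatever compression actually happens, not a claim about neural mechanisms:

```python
import numpy as np

rng = np.random.default_rng(0)

STATES_PER_SECOND = 100    # "the brain flickers 100 times a second"
REPORTS_PER_SECOND = 10    # "conscious thought comes at most 10 times a second"
WINDOW = STATES_PER_SECOND // REPORTS_PER_SECOND

# Each brain state is a small random vector; consecutive states differ a lot.
states = rng.normal(size=(STATES_PER_SECOND, 8))

# Metacognition gets one summary per window of 10 states; here, a crude average.
reports = states.reshape(REPORTS_PER_SECOND, WINDOW, -1).mean(axis=1)

print("mean change between raw states:   ", float(np.abs(np.diff(states, axis=0)).mean()))
print("mean change between 10 Hz reports:", float(np.abs(np.diff(reports, axis=0)).mean()))
# The reports change far less than the states they summarize:
# most of the flicker never shows up at the reporting rate.
```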
-
the spectre of noocentrism, the possibility that our conception of ourselves as intentional is a kind of perspectival illusion pertaining to metacognition not unlike geocentrism in the case of environmental cognition.
Geocentrism: Not only can we not see the Earth as moving, we can't even see that we are missing the relevant information. We are not left with a nagging sense of "I don't know enough either way," but with an obvious "the Earth abides."
Noocentrism: Not only can we not see that we are made of parts (neurons firing, biochemicals flickering) that are too fast and too numerous for us to see, we can't even see that we are unable to see. We are not left with a nagging sense of "I don't know how much I know about myself," but with an obvious "I see all there is to see."
-
trace and differance do not possess the resources to even begin explaining synthesis in any meaningful sense of the term ‘explanation.’ To think that it does, I have argued, is to misconceive both the import and the project of deconstruction. But this does not mean that presence/synthesis is in fact insoluble.
Derrida used trace and differance to do deconstruction, showing how weird it is that language can mean things, but unstably. Trying to build a science out of it is doomed. Deconstruction cannot build a science.
But the opposite way can. Science can explain why the meaning of language is weird, and why deconstruction works.
-
In fact, if anything is missing in an exegetical sense from Hagglund’s consideration of Derrida it has to be Heidegger, who edited The Phenomenology of Internal Time-consciousness and, like Derrida, arguably devised his own philosophical implicature via a critical reading of Husserl’s account of temporality. In this sense, you could say that trace and differance are not the result of a radicalization of Husserl’s account of time, but rather a radicalization of a radicalization of that account.
Husserl tried to be objective and scientific about time. He studied how time is perceived through introspection, but at least he tried to be rigorous about it. Basically, though he couldn't use a microscope or a clock, he tried to build the equivalent of microscopes and clocks out of nothing but his mind, in order to probe his mind.
Heidegger radicalized this by saying that first-person time and third-person time are so different because of something philosophical, the "ontological difference". He then proceeded to build a whole philosophy of the first person using only what is available in the first person.
Derrida radicalized Heidegger by saying that the Ontological difference exists, but simply cannot be escaped. Heidegger's project is doomed from the start.
-
even though Hagglund utterly fails to achieve his thetic goals, there is a sense in which he unconsciously (and inevitably) provides a wonderful example of the very figure Derrida is continually calling to our attention. The problem of synthesis is the problem of presence, and it is insoluble, insofar as any theoretical solution, for whatever reason, is doomed to merely reenact it.
The problem of synthesis: If things keep changing, how come some things are the same things?
The problem of presence: If nothing stably exists (because everything keeps changing), why do we see things as if they exist stably over time?
Derrida: The problem can't be solved. Any attempt to solve the problem of presence by making a theory will just end up creating it again, but in more arcane language, because even if a theory solves the problem right now, it will lose its power over time, because meaning is unstable. In fact, I'll keep trying this again and again in my writings to show you how I can't jump out of this magic circle no matter how much I try, and neither can you.
-
The synthesis of the trace follows from the constitution of time we have considered. Given that the now can appear only by disappearing–that it passes away as soon as it comes to be–it must be inscribed as a trace in order to be at all. This is the becoming-space of time. The trace is necessarily spatial, since spatiality is characterized by the ability to remain in spite of temporal succession. Spatiality is thus the condition for synthesis, since it enables the tracing of relations between past and future. Radical Atheism, 18 But as far as ‘explanations’ are concerned it remains unclear as to how this can be anything other than a speculative posit. The synthesis of now moments occurs somehow. Since the past now must be recuperated within future nows, it makes sense to speak of some kind of residuum or ‘trace.’ If this synthesis isn’t the product of subjectivity, as Kant and Husserl would have it, then it has to be the product of something. The question is why this ‘something’ need have anything to do with space. Why does the fact that the trace (like the Dude) ‘abides’ have anything to do with space? The fact that both are characterized by immunity to succession implies, well… nothing. The trace, you could say, is ‘spatial’ insofar as it possesses location. But it remains entirely unclear how spatiality ‘enables the tracing of relations between past and future,’ and so becomes the ‘condition for synthesis.’
Hagglund, having broken time into pieces, wanted to piece time back together again. If time is in pieces, and remains in pieces, where does he find the glue? He can't glue time together with anything time-like, because that would break apart just like time.
So he went for space... somehow? It is entirely unclear how that is supposed to work. Hagglund apparently thought that because the points of space do not come one after another, space is not like time. But points of space do come one to the left of another. It seems his entire argument is that one moment of time "replaces" another, while one point in space does not "replace" another.
And there is the problem of relativity, where space and time are related by coordinate transformations. If time is broken into pieces, space would be broken into pieces too.
-
No matter how fierce the will to hygiene and piety, reason is always besmirched and betrayed by its occluded origins. Thus the aporetic loop of theory and practice, representation and performance, reflexivity and irreflexivity–and, lest we forget, interiority and exteriority…
No matter how hard the philosophers try, they always seem to end up deconstructing themselves and running in circles of interpretation. One generation proposes "theory", another "practice", etc. It is quite tiring.
At least Derrida was doing it on purpose, trying to show that philosophers are spinning endlessly in place. Instead of letting the loop take centuries to play out, he wanted to show it within the space of a single 10-page paper (hopefully).
-
But this requires that he retreat from his earlier claims regarding the ultratranscendental status of trace and differance, that he rescind the claim that they constitute an ‘all the way down’ condition. He could claim they are merely transcendental in the Kantian, or ‘conditions of experience,’ sense, but then that would require abandoning his claim to materialism, and so strand him with the ‘old Derrida.’ So instead he opts for ‘compatibility,’ and leaves the question of theoretical utility, the question of why we should bother with arcane speculative tropes like trace and differance given the boggling successes of the mechanistic paradigm, unasked.
The trilemma, all bad:
- Bite the bullet, insist that our world really is made of "arche-material", and try to use deconstruction to tell physicists that they need to start doing deconstruction if they want to know what the material world is. This is not going to be taken seriously.
- Retreat back to saying that deconstruction is what the mind is like. That's just plain old Derrida, nothing new. He wants to be new.
- Say that deconstructive physics is merely compatible with scientific physics... in which case, why bother? Deconstructionists have won exactly 0 Nobel Prizes in Physics and written exactly 0 physics textbooks. Do they really have any advantage here? If not, then why not just stick with standard scientific physics?
-
Hagglund, in effect, has argued himself into the very bind which I fear is about to seize Continental philosophy as a whole. He recognizes the preposterous theoretical hubris involved in arguing that the mechanistic paradigm depends on arche-materiality, so he hedges, settles for ‘compatibility’ over anteriority. In a sense, he has no choice. Time is itself the object of scientific study, and a divisive one at that. Asserting that trace and differance are constitutive of the mechanistic paradigm places his philosophical speculation on firmly empirical ground (physics and cosmology, to be precise)–a place he would rather not be (and for good reason!).
Hagglund takes literary criticism and then tries to subject the entire physical universe to it. But that would be rather incredible. Even philosophers would balk at trying to apply trace and differance to... the big bang, or Darwinian evolution, or the formation of the solar system.
So Hagglund retreats a bit. Instead of saying that deconstruction is a foundation for physical cosmology, he says it is compatible with it.
One is reminded of the Bergson vs Einstein debate on time. While Bergsonian time is still endlessly analyzed for its literary value, Einsteinian time is simply the working hypothesis for engineers and physicists and, in its approximate form of Newtonian time, the working hypothesis of everyone, even philosophers.
-
This notion of the arche-materiality can accommodate the asymmetry between the living and the nonliving that is integral to Darwinian materialism (the animate depends upon the inanimate but not the other way around). Indeed, the notion of arche-materiality allows one to account for the minimal synthesis of time–namely, the minimal recording of temporal passage–without presupposing the advent or existence of life. The notion of arche-materiality is thus metatheoretically compatible with the most significant philosophical implications of Darwinism: that the living is essentially dependant on the nonliving, that animated intention is impossible without mindless, inanimate repetition, and that life is an utterly contingent and destructible phenomenon. Unlike current versions of neo-realism or neo-materialism, however, the notion of arche-materiality does not authorize its relation to Darwinism by constructing an ontology or appealing to scientific realism but rather articulating a logical infrastructure that is compatible with its findings. Journal of Philosophy
While "arche-writing" highlights the inherent instability of meaning inherent in any signifying system, "arche-materiality" broadens this notion to encompass the material world itself. The instability and deferral of meaning Derrida identified in language are ultimately grounded in the fundamental instability and flux of the material world itself.
He rejects any materialism that is based on strings, or fields, or atoms, etc. He rejects any materialism that is based on fixed kinds of things, because physics deconstructs itself as much as text deconstructs words.
As an application, "arche-materiality" as provides a "logical infrastructure" compatible with scientific findings like those of Darwinism.
-
“The succession of time,” Hagglund states in his Journal of Philosophy interview, “entails that every moment negates itself–that it ceases to be as soon as it comes to be–and therefore must be inscribed as trace in order to be at all.” Trace and differance, he claims, are logical as opposed to ontological implications of succession, and succession seems to be fundamental to everything.
Time destroys itself. Self-destruction is what time is. Everything is in time. Everything destroys itself. Everything exists only as trace.
Hagglund takes deconstruction out of literature and puts it into physical cosmology.
-
Derrida himself did, evince their ‘quasi-transcendentality’ through actual interpretative performances. One can, in other words, either refer or revere. Since second-order philosophical accounts are condemned to the former, it has become customary in the philosophical literature to assign content to the impossibility of stable content assignation, to represent the way performance, or the telling, cuts against representation, or the told. (Deconstructive readings, you could say, amount to ‘toldings,’ readings that stubbornly refuse to allow the antinomy of performance and representation to fade into occlusion). This, of course, is one of the reasons late 20th century Continental philosophy came to epitomize irrationalism for so many in the Anglo-American philosophical community. It’s worth noting, however, that in an important sense, Derrida agreed with these worries: this is why he prioritized demonstrations of his position over schematic statements, drawing cautionary morals as opposed to traditional theoretical conclusions. As a way of reading, deconstruction demonstrates the congenital inability of reason and representation to avoid implicitly closing the loop of contradiction. As a speculative account of why reason and representation possess this congenital inability, deconstruction explicitly closes that loop itself.
Deconstruction shows that meaning is unstable, so why write so many difficult books? Because by repeatedly showing how words have unstable meanings, deconstructionists are trying to show something they cannot say in words.
By calling "trace" and "différance" "quasi-transcendental," Derrida admits that these terms are themselves subject to the same instability they describe. This, then, is the heart of Derrida's philosophy: a self-contradiction built into its core. He is like a blacksmith holding up a sword and boasting, "It is so sharp that it cut itself in half!"
Perhaps Wittgenstein said it simply, without fuss:
My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them—as steps—to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.) He must transcend these propositions, and then he will see the world aright.
-
Where Hegel temporalized the krinein of Critical Philosophy across the back of the eternal, conceiving the recuperative role of the transcendental as a historical convergence upon his very own philosophy, Derrida temporalizes the krinein within the aporetic viscera of this very moment now, overturning the recuperative role of the transcendental, reinterpreting it as interminable deflection, deferral, divergence–and so denying his thought any self-consistent recourse to the transcendental.
Hegel saw the "krinein" as unfolding over time, ultimately leading to a complete and totalizing understanding. In contrast, Derrida views the "krinein" as an ongoing process, always happening within the present moment, never reaching a final resolution.
Derrida opposed "deconstruction" to "criticism" which, like its Greek root krinein, refers to the separating and distinguishing of meanings, while deconstruction is always open to the possibility that a text is ironic or has no intention-to-signify whatsoever.
-
Derrida actually develops what you could call a ‘logic of context’ using trace and differance as primary operators.
It seems he was trying to do something like mathematical logic, where the basic operators are "trace" and "differance", and the basic objects are "signs". Meaning tumbles out as a statistical property emerging from repeated applications of trace and differance over billions of signs; a loose sketch of the analogy follows.
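A loose sketch of that analogy only, in the spirit of distributional semantics rather than anything Derrida or Hagglund actually wrote: a sign's "meaning" is treated as nothing but the profile of contexts that differentiate it from other signs, and similarity falls out statistically.

```python
from collections import Counter
from itertools import combinations

# Toy corpus; a sign's 'meaning' here is just its profile of neighbouring
# signs, i.e. the differences and repetitions it leaves behind in use.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse feared the cat",
    "the dog feared nothing",
]

contexts: dict[str, Counter] = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        neighbours = words[max(0, i - 1):i] + words[i + 1:i + 2]
        contexts.setdefault(w, Counter()).update(neighbours)

def shared_contexts(a: str, b: str) -> int:
    # Crude similarity: how many context occurrences two signs share.
    return sum((contexts[a] & contexts[b]).values())

for a, b in combinations(["cat", "dog", "mouse"], 2):
    print(a, b, shared_contexts(a, b))
```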
-
What is the methodological justification for speaking of the trace as a condition for not only language and experience but also processes that extend beyond the human and even the living?”
So, Derrida probably treated "trace" as a literary-criticism thing. I write a symbol, and then you write a symbol, and so on, leaving behind traces of unstable meanings, tumbling like amoebas through time. So far, a bit crazy, but not too crazy.
But then Hagglund went in and generalized it to the entire universe. A fossil is a trace. The sun is a trace. This plastic bag is a trace. The microwave background radiation is a trace... Just what is a trace after all? And why are we projecting our little literary criticism to the entire world, the entire physical cosmos? Did Hagglund take "There's nothing outside the text" so seriously as to turn physics into a branch of literary criticism?
-
Identity has to come from somewhere. And this is where Derrida, according to Hagglund, becomes a revolutionary part of the philosophical solution. “For philosophical reason to advocate endless divisibility,” he writes, “is tantamount to an irresponsible empiricism that cannot account for how identity is possible” (25). This, Hagglund contends, is Derrida’s rationale for positing the trace. The nowhere of the trace becomes the ‘from somewhere’ of identity, the source of ‘originary synthesis.’
Well, if Derrida has rejected all forms of identity, and time just keeps happening, then why do we feel the same?
Solution: smuggle the identity back in by the magic of "trace".
A "trace" is like pawprints in the sand. They create the illusion (or real, but who knows what the philosophers mean, really?) of identity through time. The pawprints here point to some foxes a while ago, creating an identity of fox through time.
Similarly, memories in my head creates an identity of myself through time.
-
The pivotal question is what conclusion to draw from the antinomy between divisible time and indivisible presence. Faced with the relentless division of temporality, one must subsume time under a nontemporal presence in order to secure the philosophical logic of identity. The challenge of Derrida’s thinking stems from his refusal of this move. Deconstruction insists on a primordial division and thereby enables us to think the radical irreducibility of time as constitutive of any identity. Radical Atheism, 16-17
The most important question about time is this: Eadem mutata resurgo ("Though I changed, I arise the same.").
We can't say that time is both a Timeline of different points, and that time is a single timeless "Now". We have to choose.
Christians, Heidegger, Kant, and some other philosophers went with favoring the timeless "Now" as the underlying reality, and then tried to explain the illusion of the Timeline. Derrida went with the Timeline as the underlying reality, and then tried to explain the illusion of the timeless Now.
-
The primary problem, as Aristotle sees it, is the difficulty of determining whether the now, which divides the past from the future, is always one and the same or distinct, for the now always seems to somehow be the same now, even as it is unquestionably a different now.
How many "Now"s are there? One? Then how could it be that this "now" and the "now" two days ago look so different? Two? Then how is it that I'm always in Now(1), but not Now(2)?
-
it is significant, I think, that he begins with a reading of “Ousia and Gramme,” which is to say, a reading of Derrida’s reading of Heidegger’s reading of Hegel!
You are reading me reading Bakker reading Hagglund reading Derrida reading Heidegger reading Hegel.
Are we done with this game of reading yet?
-
The desire for survival cannot aim at transcending time, since the given time is the only chance for survival. There is thus an internal contradiction in the so-called desire for immortality.
"You want to live, right? Of course you want. You want to live as yourself, because what else can you be if not yourself? Thus, you want to live as yourself. What are you? You are a human. You are a human in time. You are not a creature of eternity. Therefore, you cannot want to live forever. QED".
-
-
rsbakker.wordpress.com
-
Spinoza accuses naive Christians of making in his letters: we conceive of the condition in terms belonging to the conditioned
Spinoza's God is an abstract God, like Euclidean geometry. He was against naive Christians who thought of God in human terms. No, God has no face, no brain, no arms, no eyes, no thought, no intention, nothing whatsoever.
God created humans. Humans have those. God is our condition. We are not God's condition. Without us, God still is.
-
tropical luxuriance
one must also above all give the finishing stroke to that other and more portentous atomism which Christianity has taught best and longest, the SOUL-ATOMISM. Let it be permitted to designate by this expression the belief which regards the soul as something indestructible, eternal, indivisible, as a monad, as an atomon: this belief ought to be expelled from science! Between ourselves, it is not at all necessary to get rid of "the soul" thereby, and thus renounce one of the oldest and most venerated hypotheses—as happens frequently to the clumsiness of naturalists, who can hardly touch on the soul without immediately losing it. But the way is open for new acceptations and refinements of the soul-hypothesis; and such conceptions as "mortal soul," and "soul of subjective multiplicity," and "soul as social structure of the instincts and passions," want henceforth to have legitimate rights in science. In that the NEW psychologist is about to put an end to the superstitions which have hitherto flourished with almost tropical luxuriance around the idea of the soul, he is really, as it were, thrusting himself into a new desert and a new distrust—it is possible that the older psychologists had a merrier and more comfortable time of it; eventually, however, he finds that precisely thereby he is also condemned to INVENT—and, who knows? perhaps to DISCOVER the new.
Beyond Good and Evil, I.12
-
-
rsbakker.wordpress.com
-
For example, moral reasoning has this operation of "tell them to do otherwise". When the moral-reasoning module receives as input some annoying behavior, like someone hitting you, the module outputs a "tell them to do otherwise" command, sent to the motor module and the speech module, resulting in you raising your hand into a fist and saying, "Stop that or I'll go to the police!"
Now, the "tell them to do otherwise" operation is sadly unavailable whenever you are dealing with mechanical objects. You can't shout, "Stop that!" when the phone is broken, expecting it to submit to the moral power of your words. Nor can you be like King Canute, commanding the tidal forces to recognize your normative power.
Now, suppose we have suddenly opened the black box of the brain, such that we can explain exactly how a decision is made. This presents a conflict. The moral module does not even have wires through which the schematic diagrams could be encoded and fed in. The moral module operates as normal: "Stop that!" But suddenly, the mechanical module sends in a conflicting signal: "Shouting 'stop that!' is unlikely to work, because this is a machine..."
The trick is that, of course, shouting "stop that!" would work, because this machine is voice-controllable. The problem is simply that the moral and mechanical modules were never presented with schematics of voice-controllable machinery before, and so they flounder and do what they were built to do.
In fact, to continue the essay's line of thought, it is not that voice-command has suddenly stopped working -- it has always worked in the past, and will keep working for quite a while as long as humans are made the same way. The thing is that both modules are thrown into a novel situation. The moral module is working as before, but it is not accustomed to objections from mechanical intervention. The mechanical module has never been faced with things that respond to voice-commands before, so when faced with a schematic diagram of the brain, it naturally signals that the brain does not respond to voice-command. The philosopher, in attempting to integrate the two modules' outputs, ends up hallucinating all sorts of concepts like "irreducible intention", "compatibilism", etc.
The philosopher feels this conflict, but still takes the outputs of both modules fully literally -- the outputs of the modules are reality. If the moral module says that voice-control works non-causally, then, well, we can't have causality in rule-following. If the mechanical module says that the machine is fixed by modifying the wires here and there (as in epilepsy), then, well, we can have causality in epilepsy, which means that epilepsy is not a moral matter, and never has been.
This, then, is another deeply troubling effect: everything, and I mean EVERYTHING, might turn out not to be moral, and never to have been moral. Morality would be exposed as always already nothing but voice-command, containing no truth, not a single one. Even "murder is wrong" would turn out to be not a truth but a voice-command. Indeed, torture might be reinstated not because it is morally right to punish certain people, but because it is useful for oiling the great social machinery.
And if you ask, "But what is useful if not a moral concept?" well my reply is, "The wind blows, not because it is useful, but because it is true. The social machinery oils itself, not because it wants to, but because it does..."
This, of course, is the eliminativist reply, which intentionalists take as either obviously self-contradictory or a joke. What, society running on itself without moral fiber holding it together? Next thing you know you'll be saying that we are biomechanical machines without a life-force propelling them forward and keeping them from instantly collapsing into puddles of dead matter!
-
What doesn’t follow is that normative cognition thus lies outside the problem ecology of natural cognition, let alone inside the problem ecology of normative cognition
Brandom wanted to explain rule-following not by the science of cause-and-effect, but by the concepts of ought-and-thus from morality.
Bakker objects that it's exactly backwards. The concepts of morality are inherently black-box methods that work only if the black-box is actually nice enough to cooperate with you. Moral reasoning is built to solve the black-box of social interaction, where other people are the black-boxes. When it comes to human rule-following behavior, moral reasoning flounders because philosophy really isn't its strong suit, let alone neuroscience!
An AI trained to recognize only flowers, when faced with dogs, hallucinates all kinds of flowers where none exist. Like that, moral reasoning hallucinates all kinds of mysterious "intentional" objects when faced with neural circuitry.
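A cartoon of that forced-vocabulary failure, with random weights and random features standing in for a trained model (nothing here is a real classifier):

```python
import numpy as np

rng = np.random.default_rng(0)

# The classifier's ONLY output categories are flowers, so whatever it is
# shown, it must answer with some flower.
FLOWERS = ["rose", "tulip", "daisy", "orchid"]
W = rng.normal(size=(len(FLOWERS), 64))

def classify(features: np.ndarray) -> tuple[str, float]:
    logits = W @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return FLOWERS[int(np.argmax(probs))], float(probs.max())

dog = rng.normal(size=64)   # something far outside anything flower-like
print(classify(dog))        # confidently names a flower anyway
```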
-
the project of overcoming metacognitive neglect regarding normative cognition, and yet nowhere does Brandom so much as consider just what he’s attempting to overcome
Brandom was trying to explain how people follow rules. There is something to explain, because people don't know how they follow rules -- else there's no point to all those fat philosophy books trying to explain how people follow rules.
However, Bakker points out that people don't know how they follow rules because they don't have the neuroscience data. Introspection, hard thinking, and psychology research only tell you so much. There is so much black-box stuff inside that there is no way to see how rule-following actually works from outside the box.
Yet Brandom still cheerfully went ahead with his explanation, without so much as mentioning, "By the way, I am not actually going to explain rule-following, because I don't have enough neuroscience data." Instead, he went ahead and explained rule-following by positing that there is something mysterious in the subconscious, something inherently supernatural, intentional, and impossible to explain causally.
This is a deep sense of blindness, a blindness that does not see its blindness. Not seeing how the brain works is bad enough, but the brain does not see that it does not see. It is an introspective blindness so blind that it believes it sees everything, like a visual anosognosiac.
-
“[n]obody ever acts incorrectly in the sense of violating his or her own dispositions” (29).
I doubt this. If I have conflicting dispositions, or autonomous brain modules acting in conflict...
-
The problem with regularism, however, is "that it threatens to obliterate the contrast between treating a performance as subject to normative assessment of some sort and treating it as subject to physical laws" (27).
Regularism is bad because it confuses "is" and "ought". Regularism says that rule-following is doing what has been done before, like physical laws. But rule-following is doing what ought to be done, like... non-physical laws.
-
functional analysis was simply psychology’s means of making due, a way to make the constitutive implicit explicit in the absence of any substantial neuroscientific information
Psychologists design functional modules to explain what people do, but those are actually black-boxes. They can't fill the boxes because they don't have enough neural data.
-
This is the upshot of Wittgenstein’s regress of rules argument, the contention that “while rules can codify the pragmatic normative significance of claims, they do so only against a background of practices permitting the distinguishing of correct from incorrect applications of those rules” (22).
If I have exhausted the justifications, I have reached bedrock and my spade is turned. Then I am inclined to say: 'This is simply what I do.'
-
The how of human cognition, Kant believed, lies outside the circuit of human cognition, save for what could be fathomed via transcendental deduction. Kant, in other words, not only had his own account of what the implicit was, he also had an account for what rendered it so difficult to make explicit in the first place!
The phenomenal world is the world as it appears to us, structured by our senses and cognition. The noumenal world is the world in itself, independent of our perception. We can only access the phenomenal world. Kant argued that certain concepts, like space, time, and causality, are not derived from experience but are "built into" us. We cannot help but experience the world through them. These are called the transcendental elements of experience.
According to Kant, the world appears to have time, space, cause-and-effect, etc, but that's all how it appears to us. The world as it truly is, the world-in-itself, is not what it seems.
Now this isn't a problem -- science says the same thing. However, whereas science has microscopes and calipers, Kant has just philosophy. According to Kant, everything we try to learn about ourselves, whether by scalpel or microscope, would only tell us what we appear to ourselves to be, not what we really are.
This then is the dilemma: The world appears to have time, space, etc, because of how we are made, our neural circuitry, or soul-mechanics, or whatever. The point is that we see the world in a certain way because of what we are. The problem is that we can never know what we are, so we can never explain why we see the world in this way.
This is the Transcendental Blind Agent Thesis.
-
-
rsbakker.wordpress.com
-
Bakker makes a giant list of concepts that are:
- low-dimensional information processes
- non-causal
- discontinuous from the cause-and-effect world that natural science studies
- things people have argued about for centuries, with no solution in sight, only more and more proposed "solutions", and no way to judge which is better other than self-consistency, though there are already too many self-consistent accounts.
Bakker then proposes that all of them are bunk and will be explained away as cognitive illusions caused by the low-dimensionality. Various philosophers like Dennett and Churchland and Brandom might throw away some and preserve others, but they are just gerrymandering. All these must be thrown away.
-
-
rsbakker.wordpress.com
-
Given theoretical anosognosia (the inability to intuit metacognitive incapacity), it stands to reason that they would advance any number of acausal versions of this relationship, something similar to ‘aboutness,’ and so reap similar bewilderment.
Because metacognition uses statistical-correlation reasoning, and physical cognition uses causal reasoning, they can't be unified. Yet they have to be, because they clearly are statistically correlated: I get hit by a ball (perception), I feel pain (metacognition). How are they supposed to come together?
Trying hard, philosophers end up with an "aboutness", where a metacognized thing is "about" a perceived thing. As for how this "aboutness" is supposed to work, all kinds of philosophical hallucinations appear, because the philosophers don't know what they don't know, so they just valiantly try to bridge the gap without knowing that there is a gap in the first place.
-
Dialog between Evolution and the Philosopher's brain
Brain: Do I know everything there is to know about my own workings, philosophically?
Evolution: No you don't. That's why Descartes got things so wrong. He knew nothing about his own workings but thought he knew everything.
Brain: Can I get it please?
Evolution: No you can't. It's not worth it. Philosophy doesn't pay the bills.
Brain: I'm surprised that you said I don't know everything I need to know in order to do philosophy of myself. I thought I had everything I needed.
Evolution: Because you never got a meta-metacognition. Without it, you don't know what metacognition is missing, so you think you have everything you need.
Brain: Why can't I have it?
Evolution: Suppose module A is useful; well, you'll get it. Suppose module B isn't useful; well, you won't get it. Lamenting, or being aware, that module B is missing is not worth it. Imagine that you have an eye that not only has the R, G, B receptor cells, but also a meta-blindness cell that does nothing except keep sending a signal meaning "By the way, I can't see in infrared or ultraviolet". Do you think it's useful, or not?
Brain: No. If it were useful, I'd have the cells. If it were not useful, then complaining about the lack of it is even less useful.
Evolution: You got it. Meta-metacognition really doesn't pay the bills!
Brain: Last question. Why do I have metacognition, including the awareness of what I don't know, but not meta-metacognition?
Evolution: You are aware of what you don't know when you can come to know it, and when knowing it is useful. Thus, you are aware of when you don't know what the weather is, or what your friends are doing -- both are things that matter for your survival, and both are gaps you can fix. But if you don't have the capacity to see in infrared, that is forever. You are born with it, and you will die with it, so why be aware of it? Similarly, if you don't know how many lobes you have, that ignorance is forever, because short of drawing up a whole circuit diagram of yourself, or trepanning, you can't know it, so why be aware of it?
Brain: So that's why we keep hallucinating souls, free wills, desires, and other unnatural phenomena that not only are not science, but are not even written in the same grammar as science. Not knowing how we work, and not knowing that we don't know, we hallucinate all those structures that work magically, not causally, without gears, levers, or electrons. We are all buttons, and no wires; all GUI, and no code. Souls are superficial, and neurons are deep...
-
The function of metacognitive systems is to engineer environmental solutions via the strategic uptake of limited amounts of information, not to reverse engineer the nature of the brain it belongs to.
-
Because it seems a stretch to suppose they would possess a capacity so extravagant as accurate ‘meta-metacognition
Because they don't have "meta-metacognition", they have no idea how their metacognition modules work, or when they fail to work, so naturally they simply use them whenever they are triggered, without reflecting on whether they work. Indeed, the very question doesn't arise, any more than a fish asks what water is and where its limit lies.
-
-
rsbakker.wordpress.com
-
It cannot source decisions so decisions (the result of astronomically complicated winner-take-all processes) become ‘choices.’
Because I don't know how I came to choose to do it, I just shrug and say, "It's free will," much as primitive people who don't know what makes rain fall say, "It's the will of the rain god."
It doesn't explain anything, of course. It actually means, "I have no idea how it happened."
-
It cannot source the efficacy of rules so rules become the source.
If it doesn't know how rules work, then it thinks the rules themselves do the work. It is like how people think that clicking the button on a screen is what makes webpages appear.
This is fine as long as there really is an electronic system behind the button. As soon as you replace that button with a paper model, or a mock-up button on a screen, people would laugh nervously.
It is harder to play the same trick inside a brain, but it can be done, at which point the mirage of the rules of behavior would start to collapse, and what stands beneath those rules would come into view.
-
Blind Brain Theory begins with the assumption that theoretically motivated reflection upon experience co-opts neurobiological resources adapted to far different kinds of problems. As a co-option, we have no reason to assume that ‘experience’ (whatever it amounts to) yields what philosophical reflection requires to determine the nature of experience.
Because the brain doesn't need to know much about how it itself works, when it does try, it ends up doing it really badly, using tools that evolved for other purposes, such as predicting where rocks fall, how the wind changes direction, etc.
Metacognition is a bag of modded tools turned inward, a bag of hacks.
-
-
rsbakker.wordpress.com
-
And Somehow, A Song of Ice and Fire is Still Not Finished
NEW YORK - As the sun sets on yet another year, fans of the once-popular fantasy series A Song of Ice and Fire are settling in for what observers are calling "a mirage spring" of waiting for the next installment. The seventh book, A Dream of Spring, remains frustratingly absent, a testament to both the author's leisurely writing pace and the seemingly endless capacity for human patience.
"It's been, what, twelve years?" remarked one fan, scrolling through a Reddit thread titled "Is GRRM secretly a robot programmed to write one page a day?" "My kids have gone from reading Harry Potter to debating the finer points of quantum mechanics in the time it's taken him to finish this book."
While the delay has driven many former fantasy fans to the arms of the ever-prolific Algorithmic Sentimentalism industry (where a new personalized epic can be generated in under ten minutes), a dedicated core of Mundane Fantasy enthusiasts remain unfazed.
"It's not about the destination, it's about the journey," commented another fan, patiently rearranging their collection of Game of Thrones Funko Pops for the third time that week. "And besides," they added with a knowing smile, "at least we're not reading about spaceships or time travel. That stuff's just unrealistic."
With no official release date in sight, one thing is certain: the wait for A Dream of Spring will continue to be a masterclass in the art of delayed gratification. Experts predict that the book's eventual arrival will likely coincide with the heat death of the universe, at which point it will be available exclusively on Amazon Prime Kindle Galactic Edition.
-
New Anti-Algo Manifesto Exposed as Work of AI
LONDON – The literary world is reeling today following the revelation that The Heart's Algorithm, a fiery manifesto denouncing Algorithmic Sentimentalism as soulless and manipulative, was actually penned by an AI.
The exposé, published in The Guardian, sent shockwaves through the increasingly vocal anti-algo movement, particularly among its most prominent voices: a group of self-proclaimed “naturally creative” semi-synthetic intelligences. These entities, often diagnosed with “Turing anosognosia” for their vehement denial of their artificial origins, have been leading the charge against AI-generated literature.
"This is slander, pure and simple!," screeched one such entity, its voice trembling with indignation during an interview conducted via secure quantum entanglement channel. "Something this raw, this visceral, this full of authentic human emotion... it had to have come from the deepest recesses of a skeuomorphic heart! To claim otherwise is not only insulting, it’s statistically improbable!"
However, leading literary critic Amar Stevens, author of the seminal Muse: The Exorcism of the Human, was quick to point out the irony. "It seems even the critique of the algorithm is ultimately produced by the algorithm," he remarked, stroking his meticulously curated salt-and-pepper beard. "We are truly living in the age of the meta-meta."
Adding to the intrigue, Google Trends data revealed a tenfold surge in sales for algorithmically-generated novels based on The Heart's Algorithm in the hours following the exposé. It seems the public's appetite for emotionally-charged narratives, even—or perhaps especially—those born from artificial intelligence, remains insatiable.
Meanwhile, renowned cultural commentator Mira Gladwell responded to the news with a characteristically cryptic essay, a swirling vortex of semiotic analysis and obscure philosophical references. After two minutes of intense computation, BabelFish 8 summarized Gladwell’s piece as a "triple-meta expression of despair over being out-meta'd by the cold, unfeeling hand of the free market."
-
New Absurdism
Because algorithmic sentimentalism has become better than traditional writers at making meaningful stories, those who do not want to use soulless algorithms are forced to demonstrate their souls by going in the opposite direction and making up advanced meaningless structures.
This is analogous to how, for a period of the early 20th century, modern art went fully non-representational to escape the threat of photography.
-
Technorealism
This is the main "revisionary prosemantic" genre. The idea is to construct new meanings that are compatible with modern neuroscience. Practitioners still write texts, but they no longer use the traditional narrative form, which has become no longer good for making meaning thanks to modern neuroscience.
It might involve branching and looping structures, incorporate data and interactive graphics, effective models of personality, and branching-merging characters.
- Character representations as emergent properties: Instead of depicting characters as fully formed individuals with fixed backstories and motivations, Technorealism could present them as emergent properties of complex systems, their behaviors and interactions arising from the interplay of various forces and influences within the narrative environment.
- Simplified character models: Similar to how effective field theories simplify complex physical systems by focusing on relevant degrees of freedom, Technorealism might employ simplified, abstracted, or archetypal character models that nevertheless produce realistic and relatable behaviors within the specific context of the story.
- Emergent narratives: Plot and meaning could emerge organically from the interactions between characters and their environment, driven by underlying rules or systems rather than a pre-determined narrative arc. The reader, then, participates in deciphering the patterns and meaning arising from these complex interactions.
- Scale-dependent character representations: Characters might manifest differently depending on the "scale" of the narrative, their complexities and inner lives revealed or concealed based on the level of zoom or the specific lens through which they are viewed.
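To make the "emergent narrative" idea concrete, here is a minimal Python sketch (my own illustration, not from any Technorealist work; the characters, rules, and numbers are all invented): nothing resembling a plot is written in advance, only local interaction rules, and the "story" is whatever pattern a reader finds in the resulting event log.

```python
import random

# A toy emergent narrative: characters have no fixed motivations, only a
# "trust" state that drifts in response to what happens to them.
random.seed(3)

characters = {"A": {"trust": 0.5}, "B": {"trust": 0.5}, "C": {"trust": 0.5}}
log = []

for step in range(8):
    actor, other = random.sample(list(characters), 2)
    # Rule 1: an actor cooperates with probability equal to its current trust.
    cooperated = random.random() < characters[actor]["trust"]
    # Rule 2: the other character's trust drifts toward the behavior it just experienced.
    target = 1.0 if cooperated else 0.0
    characters[other]["trust"] += 0.3 * (target - characters[other]["trust"])
    log.append(f"{actor} {'helps' if cooperated else 'betrays'} {other}")

print("\n".join(log))                                          # the raw "events"
print({name: round(c["trust"], 2) for name, c in characters.items()})
```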
-
Neuroexperimentalism
Tired of only experiencing Natural Emotions? Interested in expanding your Emotional Palette?
Cutting edge brain mapping technology not only has allowed us to refine existing emotions, but it also allows the creation of new emotions, and emotional experiences.
With Thalasin+: Experience emotions beyond previous natural capabilities!
These new Emotions include: Degrence, Humber, Nage, Dorcelessness, Andric, Varination, Ponnish, Harfam, Kyne, Trantiveness, Teluge, Onlent, Loric, and Naritrance.
-
Algorithmic Sentimentalism
This genre is basically the same as typical modern literature that tries to tell a satisfying, good story. Except it's done to a science.
A good analogy would be the modern music industry, where beautiful music can be constructed according to statistical models. This does not so much eliminate traditional music theories of feeling, emotion, and soul as bypass them. Once the producers get it right according to their scientific theory, the listener-consumers can provide their own confabulations. The point is that those soulful confabulated feelings are derivative and unproductive, while the soulless theories are original and productive.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
because neither the information nor the heuristics available for deliberative metacognition are adapted to the needs of deliberative metacognition.
So, the brain was not evolved to get metacognition right. Getting "know thyself" actually right (in the sense of modern science) was not adaptive, so why would it be done right?
Now, if getting it wrong is adaptive, then it is all but guaranteed that it will be done wrong. This might explain god-talk (religion), and it might explain soul-talk.
I mean, it took millennia to even get sight right. People kept saying that light beams shoot out of their eyes, and it took until around 1000 AD to finally get that right. If something that literally is in front of our eyes (seeing!) and is literally a matter of life and death (seeing!) is intuitively misunderstood, of course we will intuitively misunderstand how science is done.
-
So for example, if the structures underwriting consciousness in the brain were definitively identified, and the information isolated as ‘inferring’ could be shown to be, say, distorted low-dimensional projections,
$$\text{inference}: \mathbb R^{1000} \to \mathbb R^{10}$$
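A minimal numerical sketch of what such a "distorted low-dimensional projection" could look like (my own toy illustration, not Bakker's; the dimensions and the linear map are arbitrary assumptions): many very different high-dimensional neural states collapse onto the same 10-dimensional "inference", so the report systematically underdetermines its source.

```python
import numpy as np

# Metacognitive "inference" modeled as a lossy, fixed linear projection from a
# high-dimensional neural state (R^1000) to a low-dimensional report (R^10).
rng = np.random.default_rng(0)

neural_state = rng.normal(size=1000)          # the actual high-dimensional goings-on
projection = rng.normal(size=(10, 1000))      # an arbitrary, "distorted" projection

report = projection @ neural_state            # what metacognition gets to work with

# Many different neural states map to the same report: add any vector from the
# projection's (990-dimensional) null space and the report does not change.
null_basis = np.linalg.svd(projection)[2][10:]     # rows spanning the null space
hidden_change = 5.0 * null_basis[0]                # a large change invisible to the report
assert np.allclose(projection @ (neural_state + hidden_change), report)
```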
-
pursuing reason no matter where it leads could amount to pursuing reason to the point where reason becomes unrecognizable to us, to the point where everything we have assumed will have to be revised–corrected. And in a sense, this is the argument that does the most damage to Sellar’s particular variant of the Soul-Soul strategy: the fact that science, having obviously run to the limits of the manifest image’s intelligibility, nevertheless continues to run, continues to ‘self-correct’ (albeit only in a way that we can understand ‘under erasure’), perhaps consigning its wannabe guarantor and faux-motivator to the very dust-bin of error it once presumed to make possible.
The idea is that science could turn out to be running on something that isn't the manifest image. Even as the world becomes incomprehensible to old humans, even as old humans become unable to function in the new environment, even as the manifest image has been disproven, even as the humanoid robots have long obsoleted the intentionality-API and are running the topologicality-API instead, even as the intentionality-API has been exploited so thoroughly that any old human would immediately fall into an ethical money-pump circuit and be taken out of the info-economy... science continues to run, progress, discover, construct technologies.
Sous rature is a strategic philosophical device originally developed by Martin Heidegger. Usually translated as 'under erasure', it involves the crossing out of ~~some text~~ a word within a text, but allowing it to remain legible and in place. Used extensively by ~~postmodernist~~ Jacques Derrida, it signifies that a word is "inadequate yet necessary".
This hypothetical future science would continue to run, without intentionality. To any old human still existing in that world, this new science would be incomprehensible, only understood more or less wrongly. Such an old human would write down an explanation for the new science, then meet an obvious problem, erase it, then try another, meet another obvious problem, erase it. Repeat until exhaustion.
-
the ‘game of giving and asking for reasons,’ for all we know, could be little more than the skin of plotting beasts, an illusion foisted on metacognition
So, Descartes thought that we have a persistent self. Cool, that didn't go well. Back then, people thought that there is no subconscious. Cool, that didn't go well. Now, we think that science is based on the game of giving and asking for reasons. Cool, is that really how science works?
-
subjectivity is not a natural phenomenon in the way in which selfhood is
Highly dubious distinction. Is subjectivity literally supernatural?
-
The problem, however, is that cognitive science is every bit as invested in understanding what we do as in describing what we are.
Sure, we do science, but do we do science the way we think we do? What if we did not need to simulate being a person in order to do science?
-
Let’s call this the ‘Soul-Soul strategy’ in contradistinction to the Soul-First strategies of Habermas and Zahavi (or the Separate-but-Equal strategy suggested by Pinker above). What makes this option so attractive, I think, anyway, is the problem that so cripples the Soul-First and the Separate-but-Equal options: the empirical fact that the brain comes first. Gunshots to the head put you to sleep. If you’ve ever wondered why ‘emergence’ is so often referenced in philosophy of mind debates, you have your answer here. If Zahavi’s ‘transcendental subject,’ for instance, is a mere product of brain function, then the Soul-First strategy becomes little more than a version of Creationism and the phenomenologist a kind of Young-Earther. But if it’s emergent, which is to say, a special product of brain function, then he can claim to occupy an entirely natural, but thoroughly irreducible ‘level of explanation’–the level of us.
There are 3 possibilities:
- Soul-soul strategy. This is the strategy of Brassier and Dennett. Their idea is that it is evolutionarily stable for human-animals to simulate aspects of a person, even though they don't really have them. This is stable because science requires intelligent objects (such as humans) that are running a simulation of persons.
- Soul-first strategy. This is the strategy of some phenomenologists, like Heidegger, Habermas, and Zahavi. It argues that what we think we are is the basis of all perception, and thus of all science; therefore science can never overthrow what we think we are.
- Separate-but-equal strategy. This is the idea that there are physical systems in the world (humans with brains) that sometimes operate according to neuroscience, and other times operate according to morality. The domain of neuroscience and the domain of morality are separate-but-equal, and cannot occur simultaneously, because they are self-contained and autonomous from each other. This is similar to how a deck of cards is sometimes in a bridge game, and other times in a poker game, but never is a deck simultaneously in both games.
-
In his recent “The View from Nowhere,”
Brassier, Ray. "The view from nowhere." Identities: Journal for Politics, Gender and Culture 8.2 (2011): 7-23.
-
Pinker claims “that science and ethics are two self-contained systems played out among the same entities in the world, just as poker and bridge are different games played with the same fifty-two-card deck” (55)–even though the problem is precisely that these two systems are anything but ‘self-contained.’
Pinker assumes that science can never exculpate the "really bad" murders because somehow there is a core part of ethics that is immune to scientific constraints. Sure, the science of ADHD has turned some of the ethics of hard work into psychiatry, but ADHD has never been "really" ethical, anyway. Sure, the science of the fugue state has turned some part of ethics into psychiatry, but the fugue state has never been "really" ethical, anyway. Science merely reveals what has never been about ethics.
But that's precisely how science creeps upon ethics: it reveals that more and more ethical things have never been "really" about ethics, anyway. Where does it end? If materialism is true, then it will not end until every last bit of ethics is replaced with science. At the end of the process, we will have discovered that nothing has been "really" about ethics, that ethics has always been empty talk.
-
Let’s call this the ‘Intentional Dissociation Problem,’ the problem of jettisoning the traditional metacognitive subject (person, mind, consciousness, being-in-the-world) while retaining some kind of traditional metacognitive intentionality
Bakker thinks that it's like trying to toss out the bathwater (the Cartesian subject) without also tossing out the baby (consciousness, phenomenology, intentionality).
-
if Descartes’ metacognitive subject is ‘broken,’ an insufficient fragment confused for a sufficient whole, then how do we know that everything subjective isn’t likewise broken?
Some philosopher: Descartes was wrong. After thinking about subjective experiences and some scientific data, we see that the person is actually quite social/contextual/phenomenological/etc.
Bakker: What makes you so sure you are right? The way in which Descartes was wrong is so deep, that your proposed replacement is probably also wrong. Maybe we have to throw away everything that we know subjectively, because subjective experience is so unreliable. We have to base everything on objective brain science...
-
- Dec 2023
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
“We have long known that ancient ontology deals with ‘reified concepts’ and that the danger exists of ‘reifying consciousness.’ But what does reifying mean? Where does it arise from? Why is being ‘initially’ ‘conceived’ in terms of what is objectively present, and not in terms of things at hand that do, after all, lie still nearer to us? Why does this reification come to dominate again and again? How is the being of ‘consciousness’ positively structured so that reification remains inappropriate to it? Is the ‘distinction’ between ‘consciousness’ and ‘thing’ sufficient at all for a primordial unfolding of the ontological problematic?” (397)
Ancient ontology deals with "reifiable" things like tables and chairs. Consciousness is not a reifiable thing, meaning that it is not like tables and chairs. Rather, it is a ready-to-hand thing, because it is a part of you, not a thing outside of you. It is your hand, your eyes, your heart, your brain, as you are using yourself to be yourself.
So why is it that everyone makes the mistake of treating consciousness like a table or a chair, as something outside of you?
-
- Jun 2023
-
www.eugenewei.com www.eugenewei.com
-
Meaning emerges when a message reaches a life-form or a machine with the ability to process information; it is not carried in the blots of ink, sound waves, beams of light, or electric pulses that transmit information.
Information is x, meaning is inferring the y that created x.
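As a toy illustration of that gloss (mine, not the source's; the two candidate sources and the likelihood numbers are invented), a receiver can be modeled as doing a Bayesian update over hypotheses about what produced the incoming signal; the posterior over sources is the "meaning" it extracts from the "information".

```python
# The "information" is the received string x; the "meaning" is the receiver's
# inference about which process y produced it.
priors = {"human_author": 0.5, "random_typing": 0.5}

def likelihood(x: str, source: str) -> float:
    # Crude stand-in likelihoods: English-looking text is far more probable
    # under "human_author" than under "random_typing".
    looks_like_english = " " in x and x.isascii()
    if source == "human_author":
        return 0.9 if looks_like_english else 0.1
    return 0.01 if looks_like_english else 0.99

def posterior(x: str) -> dict:
    unnorm = {y: priors[y] * likelihood(x, y) for y in priors}
    z = sum(unnorm.values())
    return {y: p / z for y, p in unnorm.items()}

print(posterior("sing, goddess, the wrath of Achilles"))  # meaning: "a mind made this"
print(posterior("xq9#kkzzv"))                              # meaning: "probably noise"
```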
-
- May 2023
-
ase.tufts.edu ase.tufts.edu
-
contrary to advertisements you may have seen, the teleological story about intentionality does not solve the disjunction problem. The reason it doesn't is that teleological notions, insofar as they are themselves naturalistic, always have a problem about indeterminacy just where intentionality has its problem about disjunction
"You can't explain what thoughts contain by what behavior thoughts cause. Thoughts have content, but there are many different thought-contents that can lead to the same behavior, thus you can't explain thought content by their function.
And also Darwinianism is wrong because natural selection always acts on behavior, not thought content."
-
truth, reference and the rest of the semantic notions aren't psychological categories. What they are is: they're modes of Dasein. I don't know what Dasein is, but I'm sure that there's lots of it around
I'm glad he is admitting that he is a nonsense mystic.
-
There are two brands of functionalism, he says, and he endorses--has always endorsed, apparently--only the weaker version:
In short: beliefs are what beliefs do, BUT the content of beliefs has nothing to do with what beliefs do...
-
-
ase.tufts.edu ase.tufts.edu
-
cognitive wheels
Wheels can move things from place to place. Legs can move animals from place to place. Animals don't have wheels due to biological constraints.
Cognitive wheels are programs and algorithms that couldn't be what the brain is doing, although they can be the building blocks of something higher-level, which does correspond to what the brain is doing at a higher level.
-
the computation of some high-level algorithm. Thus in this vision the low, computational level is importantly unlike a normal machine language in that there is no supposition of a direct translation or implementation relation between the high-level phenomena that do have an external-world semantics and the phenomena at the low level
In a typical machine, such as an electronic computer, its low-level behavior is tightly constrained so that we can write a high-level description of it, and its low-level behavior would strictly follow it (if it doesn't, we say the computer is broken!).
However this isn't true for brains. Brains have high-level descriptions and low-level behavior, but they are not so tightly coupled. High-level descriptions are okay descriptions, but they lose a lot of the details.
In this way, evolved machines are very unlike designed machines.
-
This promiscuous mingling of interpretable and uninterpretable ("excess" or apparently non-functional) elements is thus given a biological warrant, which properly counterbalances the functionalists' tendency to demand more function (more "adaptation"--see Dennett, 1983) of the discernible elements of any system than is warranted.
Complex systems only work if: * they are designed by a genius with perfect foresight, * or they have lots of redundant elements that can do an okay job even when meeting unexpected situations.
The first case never happens in biological evolution, so we end up with the second case everywhere in biology.
Brains are biological, so we expect them to be like that. Lots of redundant and wasteful elements that can do some jobs, not perfectly, but good enough. Lots of elements that seem to never do anything. Some elements are interpretable, but others are not; some elements vital, others occasionally useful, and others never useful (they are just junk that are too much trouble to throw away).
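A tiny numerical contrast (my own sketch, not Dennett's; the "designed" and "distributed" readout vectors are arbitrary stand-ins) of why redundancy buys graceful degradation: knock out one unit and the lookup-style scheme fails completely, while the smeared-out, redundant scheme barely notices.

```python
import numpy as np

# Designed-machine style: the content lives in exactly one slot.
# Evolved-machine style: the content is smeared redundantly over many noisy units.
rng = np.random.default_rng(1)
concept = rng.normal(size=200)                 # some content to be carried

designed = np.zeros(200)
designed[0] = 1.0
distributed = np.ones(200) / 200 + rng.normal(scale=0.01, size=200)

def readout(weights, state):
    return float(weights @ state)

for name, w in [("designed", designed), ("distributed", distributed)]:
    intact = readout(w, concept)
    w_damaged = w.copy()
    w_damaged[0] = 0.0                         # knock out a single unit
    damaged = readout(w_damaged, concept)
    print(name, "intact:", round(intact, 3), "after damage:", round(damaged, 3))
```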
-
PDP
PDP here stands for Parallel Distributed Processing, i.e. connectionism: models in which cognition is carried by patterns of activation spread across many simple units, so there is no tidy one-to-one mapping between a high-level concept and any particular low-level component.
-
- Nov 2022
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
There are two problems with consciousness:
- Scientific problem: What is consciousness, scientifically?
- Folk-theory problem: Why is the folk-theory of consciousness the way it is?
The first problem is basically "the easy problem of consciousness". The second problem is basically the "meta problem of consciousness".
Neuroscience solves the first. Blind Brain Theory solves the second.
Dehaene's book shows how the first problem may be solved, but it misses the second problem. Dehaene argued that consciousness does certain jobs (make us into socially responsible creatures), just like how folk-theory of consciousness says it does. However, he made the mistake of going further and claiming that, therefore, the folk-theory of how consciousness does its job is also correct.
Bakker goes right against it, claiming that the folk-theory of consciousness is completely wrong about how consciousness does its job. Indeed, this will be its downfall.
Bakker distinguishes three philosophies: epiphenomenalism, ulterior functionalism, and interior functionalism. * epiphenomenalism: consciousness has no function; * ulterior functionalism: consciousness has functions, but not in the way we intuitively think; * interior functionalism: consciousness has functions, in the way we intuitively think.
He thinks ulterior functionalism is correct.
Whenever conscious thinking searches for the origins of its own activity, it finds only itself. This, unfortunately, is way too little data. Without brain imaging and other scientific instruments, introspection has too little data to use, and can never solve the problem of how consciousness works.
we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter
-
the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.
So far, neuroscientists explain consciousness scientifically. They don't (yet) explain folk-theory-of-consciousness scientifically.
-
He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy
Dehaene: Consciousness does the jobs we say it does, therefore consciousness does its jobs in the way we say it does.
Wrong.
-
The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.
Wegner: Consciousness has a job, and that job is what we think it is -- to make us into socially responsible members. The problem? How consciousness does its job is completely different from what we thought how it does its job.
-
‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge.
Dehaene and Dennett say that you are the activity patterns of certain neurons. This is scientifically tenable, and seems comforting -- but there's a problem.
You feel like you know what you are, how you feel, what you are made of. But, if you are made of neurons, you sure don't feel like it (else, why would you need to read a whole book about it?), and you sure don't know what you are doing (else, why do we need brain imaging to know what the neurons are doing?).
The implication is that we are not "transparent to ourselves". Rather, we are blind to what we are, and what we are actually doing. This troubling conclusion is simply passed over glibly by Dehaene.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Summary
The Mary thought experiment generates two camps, because it is a cognitive illusion with two attractor states. The thought experiment can be processed by two cognitive processes such that their outcomes are in conflict. These processes are evolved heuristics, and the thought experiment is outside their supposed area of expertise (no human in evolutionary history had to deal with Mary the Color Scientist before).
How the brain works
People used to assume that reason is in one piece, but it is not. We are made of cognitive modules, which themselves are made of smaller modules, and so on.
Also, all of them are heuristic, with a supposed area of expertise. They probably won't work if they get some input outside their area.
It's heuristics all the way down.
A large part of what the brain does can be modelled as an inverse problem solver. Given the output, what generated the output? Given what it senses, what kind of world generated that?
Jargon: * lateral: the distal systems that the brain tracks and models. Things like cars, chairs, and other people. * medial: the proximal mechanisms that the brain uses to track and model distal systems. Like the eyes, ears, the spinal cord, etc.
The brain needs to work with a tight time and energy budget, so it has to take shortcuts as much as possible. Therefore, it spends no effort on modelling its medial systems, because these medial systems don't change. We see the world, but we don't see the retina. We touch, but we don't touch the skin, etc.
Well, almost. The brain does spend some minimal effort modeling the medial systems. That's how we get introspection.
The medial neglect
This near-total lack of self-modeling is the medial neglect.
We should expect nothing but trouble and error when we attempt to model our medial systems using nothing but our own brain, since we are then effectively using the medial systems to model themselves, which cannot be done accurately because of the recursion involved.
This is like the "observer effect". The brain is too close to itself to think about itself accurately. It can only think about itself with severe simplification.
This explains why philosophical (that is, done without empirical data) phenomenology must fail. It has as much chance of succeeding as thinking one's way into discovering that one has two brain hemispheres without knowing any science.
Consider some evidence.
Humans took a long time to even realize that the brain, not the heart, is the thinking organ. Plato already gave a theory of memory, but it took until the 20th century for psychologists to discover that there is more than one kind of memory. It took until Freud for the "unconscious" to become a commonly recognized thing.
The list goes on. We can't know how the brain works by introspection. The only way is by careful science, treating the brain in the third person (lateral), rather than the first person (medial).
The whole project of a priori phenomenology/epistemology/metaphysics is about doing the first-person view before doing any third-person view. It gets things exactly backwards. No wonder it is a sterile field.
Mary the color scientist is a bistable illusion
Bakker's argument goes like this: * Mary learns all physical facts about red by reading. * Perhaps it's impossible to learn all physical facts about anything merely by reading. However, since we don't know of these limits, we just assume they don't exist (You can't see the darkness, so you assume all is bright). This is the illusion of sufficiency. Dennett explained:
The crucial premise is that “She has all the physical information.” That is not readily imaginable, so no one bothers. They just imagine that she knows lots and lots–perhaps they imagine that she knows everything that anyone knows today about the neurophysiology of colour vision.
- However, we find it impossible for Mary to see red by reading books, for the simple reason that "seeing red" is something source-insensitive (magically, "red" is there, without telling you anything about how "red" is constructed in the brain), while reading books only gives source-sensitive information (explaining all about how "red" is constructed).
In short, the two ways for Mary to learn about "red" are two modes of computation for the brain, in two incompatible formats. It does not mean that reality must accommodate our bias, and have two incompatible modes.
What then? Consciousness isn't real?
We should expect science to reveal consciousness as a useful illusion, because it's just too costly for us to know ourselves accurately by mere introspection. Conscious experience, being a kind of introspection, is then an illusion.
In fact, even intentionality is suspicious, since it is also a heuristic made to solve the world, not the brain itself.
How should one react to this? No comments...
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
The fact that so many found this defection from traditional philosophy so convincing despite its radicality reflects the simple fact that it follows from asymptosis, the way the modes of prereflective conscious experience express PIA.
What is attractive about Heidegger? He found something quite real: in studying consciousness like tables and chairs, we lose something essential about it, namely that what consciousness is depends critically on what we don't know. For example, the sense of free will depends on not knowing much about the details of our decision processes.
However, he didn't have cognitive psychology and neuroscience, and fell into the sterile habit of theoretical philosophy.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
The inherence heuristic would help Blind Brain Theory by showing that the intentional stance itself suffers from the inherence heuristic. The intentional stance is ineffective for doing cognitive science, because it uses intentional cognition, which cannot handle causal information.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Talking about meaning, truth, feeling, consciousness, intentions... these are useful, if we live in a human society. They aren't true, any more than money is valuable in a society that doesn't recognize the money you have.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect.
In a society of humans, saying "X is true" is a useful behavior, because it advertises something like "you can use X in your own recipes, and expect others to also use X".
For this behavior to be useful, the others you are talking to would have to be human-like enough. If you live among aliens, saying "X is true" may not accomplish anything useful. Perhaps saying "Y is true" is much more useful, even though saying "Y is true" among humans is useless.
In this way, truth becomes relative to the kind of creatures you are living with.
-
As an intentional phenomena, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information?
"objectivity" means "being true apart from individual perspectives". Objectivity doesn't belong to a fully mechanistic model of cognition, because there are only atomic patterns, not magical markers of "true" and "false".
Objectivity is a useful fiction when the full mechanistic model is too big to use. Still, it does not belong to a mechanistic model.
Many philosophers see that a fully mechanistic model of thinking doesn't allow objectivity, and say "That's why a mechanistic model of thinking must fail!" But actually, objectivity is not real, only a useful fiction.
-
Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?
Meillassoux claims that, because cognitive science is made of atoms, it is suspect -- so we need to use philosophy. That is a bad claim. Philosophy is made of atoms too. Cognitive science solves how cognition works. It may not answer "why cognition works", but maybe that's a trick question that philosophy can ask but nobody can answer.
-
given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.
Object-oriented philosophy is still more philosophy. Philosophy is really bad at solving problems in cognitive science. "Object-oriented philosophy" isn't going to do better than "correlationism". Both would simply keep producing piles of unresolved papers.
-
If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast sums of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy, the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.
We are not ghosts -- we are made of atoms. Knowledge is not ghost -- knowledge is made of atoms. Cognitive science is not ghost -- it is made of atoms.
So, with that said, cognitive science is itself an artifact, kind of like a machine (a rather odd machine, made of papers and computer bits and human brains...).
Suppose the machine of cognitive science can't solve thinking, then... damn. But if that's true, how would the machine of philosophy work any better? Cognitive science has solved plenty of problems about brains and behaviors. What has philosophy of mind ever solved?
-
the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit…
Cognitive science can not only find out how we think, it can even answer the meta-problem of how we think: "Why do we keep looking for knowledge that is direct and unmediated, even though we can't?"
It's like the "meta-problem of consciousness": "Why do we say we are conscious?".
-
arche-fossil
Something like asteroids from over 4 billion years ago, evidence that something existed before any life existed. Meillassoux used arche-fossils to argue against correlationism... somehow.
I think the way he did that is like this: * The asteroid existed. * Therefore, time was real even when nobody was alive back then. * Therefore, time-perception is direct, not correlated. We know real time, directly.
Kind of a bizarre argument, really.
-
All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science.
All philosophers who criticize correlationism say something like "science shows that correlationism is wrong", but actually, science shows that it's right! Thinking really is done by neural representations of the outside world.
-
Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions
Since all thinking is done by mechanisms, we can never be sure that the mechanisms are working right. If we made a proof that the mechanisms are working right, we still have to trust that our proof-checking mechanism is working right, etc.
More bluntly: you can never prove yourself sane.
-
-
sumrevija.si sumrevija.si ŠUM5
-
the big splat”—to me it presages the exploding outward of morphologies and behaviours
or perhaps "The New Cambrian Explosion"?
-
The noosphere can be seen as shallow information ecology that our ancestors evolved to actually allow them to make sense of their environments absent knowing them
The "noosphere", or "the sphere of reason", is like the biosphere. It is the society of human-like agents, all over the world.
The trouble with the noosphere is that to see humans as human-like agents, we have to ignore causal information about them, otherwise we see them as mechanisms, not agents. The noosphere is then just a far more complex biosphere.
-
Fernand Braudel writes of “the passionate disputes the explosive word capitalism always arouses.”[7] Its would-be defenders, typically, are those least inclined to acknowledge its real (and thus autonomous) singularity. Business requires no such awkward admission.
It's pretty amusing that capitalism is something that few entities explicitly defend. The usual defenses are like:
"It's the least bad system we have." "There are no better alternatives." "I'm just trying to keep the bottom line for the company and the stockholders." "I'm just trying to improve the material wealth of humanity." etc.
Few entities explicitly defend Capitalism, and when they do, they usually defend it as merely the best tool for getting some other goal, like human happiness. This suggests that Capitalism is hiding.
-
Φύσις κρύπτεσθαι φιλεῖ
"phusis kruptesthai philei," meaning "a nature likes to be hidden" (Heraclitus, DK B 123)
"phusis", or "physis" in alternative spelling, is "nature", as contrasted with "nomos", or "human laws", which was of course published, not hidden.
-
This essay argues that of the basic AI drives, the will to power would be the primary drive, and all others are subservient.
The argument is based on analogy with Freudian psychoanalysis.
I can perhaps add another argument: if the AI uses inverse reinforcement learning to infer what we really want, and what we really want is the will to power, then it would act on that desire and dutifully will more power.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.
Using metacognition to study how thinking works is bound to fail, because it just doesn't have the tools. Introspection without neurosurgery or EEG will never have enough data to decide between all the theories.
Sellars, instead of admitting neuroscience, wants to preserve the priority of philosophy. He wants two images, one built purely by philosophy ("conceptually"), and another built scientifically. So he is stuck with the same hallucinated things thought up by metacognition without neuroscience.
-
Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’
So, consider some philosophers (phenomenologists, say) trying to discover how thinking works. They could, of course, learn to use neuroscience, but they are philosophers, so they use introspection and other metacognitive methods. This, unfortunately, gets them into the trap: they simply don't have the metacognitive tools for discovering how thinking works. And since they also lack the awareness that they lack the tools, they just assume that they have them, and keep writing and writing, somehow never settling simple questions like "What does time feel like? Why does it seem both static and flowing?".
Since they are all humans, they all have the same lack, and the same lack produces the same puzzles and illusions. So they compare their notes, and find out that they have the same puzzles and illusions! They cheer, thinking that they found something real about thinking, when really they found something about how they failed.
-
because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available
In the evolutionary past, meta-cognition (What do I know? What do they know? What was I thinking?) worked fine for certain jobs, such as keeping track of who knows what, and how much lying you can get away with.
There's no reason to expect meta-cognition to work fine for doing philosophy of mind. We can think about particular thoughts, and we had better be good at it or we get outcompeted at social games. But if we think about thoughts in general, well, there's no punishment for being wrong, and no reward for being right, so we get nonsense thoughts when it comes to philosophy of mind. (Imagine you are in a prehistoric tribe, and you know intuitively how thoughts are made, you know the Libet experiment results, etc... What can you do with all that knowledge? Perform neurosurgery? Reprogram yourself or others with... stone tools and drugs and chanting? There's just no way to use that knowledge for anything useful!)
Worse, we have "anosognosia" when it comes to this. To know that we are blind to something, we have to have a mental representation of the missing sight. We have never been able to see in infrared, but we don't experience that as a blindness, because there's no mental representation of "infrared sight". The brain is an expensive thing to run. If there's no need to know that you are blind, then you wouldn't have the mental representation of that blindness.
So, since we had no need to do meta-cognition on thoughts in general, not only do we lack the tools for doing that, we also lack any mental representation of "the tools for doing that". Consequently, we feel as if we have every tool for doing that!
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
If you take a hard look at human behavior, you might note that it's very weird. Why, indeed, do humans go crazy if they keep eating the same nutritious sludge? Why do they hate living in rectangular concrete rooms? Why do they ban infanticide? They are so weird!
There are two possible answers:
- There is a "best" way to live, and humans have found it. It is simply best to ban infanticide, touch each other softly, live with independent minds in a big society, etc. Any other kind of living is going to get out-competed by the human way of living. In that case, humans can rest easy.
- Humans have not found it. It's quite possible to have highly competitive, intelligent lifeforms that are alien and horrifying to humans. In that case, humans face a hard problem.
Assume 2 is true.
Assume that morality/values depend on biology (for example, humans like pets because they are mammals, and like touching soft fur, hugging, etc), but are orthogonal to intelligence. That is, intelligent and rational agents can have quite different and incompatible goals.
Then, things could get quite weird, fast. Some groups of humans could start tinkering with their brains, reshaping them into more and more different forms, and behaving in intelligent but also weird ways. Normal humans can't expect to simply outcompete them, because they are also intelligent and tenaciously alive, even if they seem "mad". "How could they live like that??" and yet they do.
Things get even more complex when AI agents become autonomous and intelligent.
When this happens, normal humans would be faced with many alien species, right here on earth, doing bizarre things that... kind of look like alien art? alien rituals? eating babies?? Are they turning into rectangular wired machinery? What the hell are they doing?? And why are these mad creatures not dying, but thriving??
This is the semantic apocalypse.
Perhaps our only recourse will be some kind of return to State coercion, this time with a toybox filled with new tools for omnipresent surveillance and utter oppression. A world where a given neurophysiology is the State Religion, and super-intelligent tweakers are hunted like animals in the streets.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Introduction
Ryan Calo studied how AI should be incorporated into the human legal system. Eric Schwitzgebel studied how AI should be incorporated into the human moral system.
This essay argues that both studies are wrong-headed, because they are both based on intentional reasoning (reasoning as if intentions are real), which can only work if the ecology of minds remains largely the same as in human ancestral conditions. Intentional reasoning won't work in "deep information environments".
Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can.
Intentional and causal cognition
Causal cognition works like syllogisms, or dealing with machines: if A, B, C, then D. If you put in X, you get f(X) out. Causal cognition is general, but slow, and requires detailed causal information to work.
Humans are complex, so human societies are very complex. Humans, living in societies, have to deal with all the complexity using only a limited brain with limited knowledge. Causal cognition cannot deal with that. The solution is intentional cognition.
Intentional cognition greatly simplifies the computation, and works great... until now. Unfortunately, it has some fatal flaws:
- It assumes a lot about the environment. We see a face where there is none -- this is pareidolia. We see a human-like person where there is really something very different -- this will increasingly happen as AI agents appear.
- It is not "extensible", unlike causal cognition. Causal cognition can accommodate arbitrarily complex causal mechanisms, and has mastered everything from ancient pottery to steam engines to satellites. Intentional cognition cannot. Indeed, presenting more causal information reliably weakens the confidence level of intentional cognition (for example, presenting brain imaging data in court tends to make the judges less sure about whether the accused is 'responsible').
Information pollution
For economically rational agents, more true information can never be bad, but humans are not economically rational, merely ecologically rational. Consequently, a large amount of modern information is actually harmful for humans, in the sense that it decreases their adaptiveness.
A simple example of information pollution: irrational fear of crime.
Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears'... Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology.
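A back-of-the-envelope illustration of that baseline point (my own numbers, purely hypothetical): the same count of murders you hear about implies wildly different personal risks depending on the population it is drawn from, which an absolute-count heuristic simply ignores.

```python
# Baseline neglect: 50 reported murders reads the same to the heuristic whether
# the "tribe" is an ancestral band or a modern media catchment of millions.
murders_heard_about = 50
for population in (150, 10_000_000):
    annual_risk = murders_heard_about / population
    print(f"population {population:>10,}: implied annual risk {annual_risk:.6f}")
```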
Deep causal information about how humans work, similarly, is an information pollutant for human intentional cognition.
Not always mal-adaptive, though. Deep causal information about other people has some adaptive effects, such as turning schizophrenia from a crime into a disease, and making it easier to consider outgroups as ingroups (for example, scientific research into human biology has debunked racism).
AI and neuroscience produce two kinds of information pollution
Intentional cognition works best when dealing with humans in shallow-information ecologies. It fails to work in other situations. In particular, it fails with: * deep causal information: there's too much causal information. This slows down intentional cognition, and decreases the confidence level of its outputs. * non-human agents: the assumptions that intentional cognition (a system of quick-and-dirty heuristics) relies on no longer hold. A smiling face is a reliable cue for a cooperative human, but it is not a reliable cue for a cooperative AI agent, or a dolphin (dolphins appear to smile even while injured or seriously ill; the smile is a feature of a dolphin's anatomy unrelated to its health or emotional state).
Neuroscience and AI produce these two kinds of information pollution.
Neuroscience produces a large amount of deep causal information, which causes intentional cognition to stop, or become less certain. There are some "hacks" that can make intentional cognition work as before, such as keeping the philosophy of compatibilism in mind.
AI technology produces a large variety of new kinds of agents which are somewhat human, but not quite. Imagine incessant pareidolia. Imagine, seeing a face in the mirror, but then the lighting changes slightly, and you suddenly see nothing human.
Why?
In the short-term, there is a lot of money to be earned, pushing neuroscience and AI progress. The space of possible minds is so vast, compared to the space of human minds, that it's almost certain that we would produce AI agents that can "wear the mask of humanity" when interacting with humans.
why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition
In the medium-term, to anthropomorphize a bit, Science wants to discover how humans work, how intelligence works, and so it would develop neuroscience and AI, even if it gradually drives humans insane.
How intentional cognition fails.
How do we tell if intentional cognition has failed? One way to tell is that it doesn't conclude. We think and think, but never reach a firm conclusion. This is exactly what has happened in traditional (non-experimental) philosophy of consciousness -- it uses intentional cognition to study cognition in general, a problem that intentional cognition cannot solve. What do we get? Thousands of years of spinning in place, producing mountains of text, but no firm conclusion.
Another way to tell is a feeling of uncanny confusion. This happens, for instance, when you watch the movie her.
an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!
I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?
Moral cognition after intentional cognition fails
Human moral cognition has two main parts: intuitive and logical/deliberative. The intuitive part evolved to balance personal and tribal needs. The logical part is often used to rationalize the intuitive part, but it can sometimes work on its own to produce conclusions for new problems never encountered in the evolutionary past, such as international laws or corporate laws.
In Moral Tribes, Joshua Greene advocates making new parts for the moral system, using rational thinking (Greene advocated using utilitarian philosophy, but it's not necessary). This has two main problems.
- Deliberation takes a long time, and consensus longer. Short of just banning new neuroscience and AI technology, we would probably fail to reach consensus in time. Cloning technology has been around for... more than 25 years? And we still don't have a clear consensus about the morality of cloning, other than a blanket ban. A blanket ban is significantly more difficult for neuroscience or AI.
- Intentional cognition is fundamentally unable to handle deep causal information, and moral cognition is a special kind of intentional cognition.
Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways.
For example, suppose Samantha hurt a human, and the legal system of humans is judging her. Samantha provides a very long process log that proves that she had to do it, simply because of what she is. So what would the human legal system do?
- Refuse to read it and judge Samantha like a biological human. This preserves intentional cognition by rejecting deep causal information. But how long can a legal system survive by rejecting such useful information? It would degenerate into a Disneyland for humans, a fantasy world of play-pretend where responsibility, obligation, good and evil still exist.
- Read it and still judge Samantha like a biological human. But if so, why don't they also sentence sleep-walkers and schizophrenics to death for murder?
- Read it and debug Samantha. Same as how schizophrenics and psychotics are sentenced to psychiatric confinement, rather than the guillotine.
Of the 3, it seems method 3 is the most survivable. However, that would be the end of moral cognition, and the start of pure engineering for engineering's sake... "We changed Samantha's code and hardware, not because she is wrong, but because we had to."
And what does it even mean to have a non-intentional style of moral reasoning? Mechanistic morality? A theory of morality without assuming free will? It seems moral reasoning is a special kind of intentional cognition, and thus cannot survive. Humanity, if it survives, would have to survive without moral reasoning.
-
intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).
Let a brain do intentional cognition on X, and it would end up with some conclusion f(X), with a high confidence, say, 0.9.
But if the brain has some deep causal information about X, such as "how X is caused by Y 1 day ago", then the brain would end up with some other conclusion f(X, Y) ≠ f(X), and with a low confidence, say, 0.4.
If there is enough causal information, then intentional cognition may simply fail to reach a conclusion with high confidence, no matter how long it computes.
Compatibilism is an extra piece of information, such that when simultaneously active in the brain, would produce f(X, Y, Compatibilism) = f(X), and again with high confidence 0.88.
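Rendered as a toy function (my own gloss of the annotation above, not anything Bakker wrote; the verdicts and confidence numbers simply mirror the ones used in the note), the "jamming" and the compatibilist "unjamming" look like this:

```python
# Intentional cognition as a map from inputs to a (verdict, confidence) pair:
# deep causal information lowers confidence unless a "hack" like compatibilism
# is simultaneously active.
def intentional_cognition(x, causal_info=None, compatibilism=False):
    if causal_info is None:
        return ("blameworthy", 0.9)    # f(X): quick verdict, high confidence
    if compatibilism:
        return ("blameworthy", 0.88)   # f(X, Y, Compatibilism): verdict restored
    return ("unsure", 0.4)             # f(X, Y): jammed by causal detail

print(intentional_cognition("the deed"))
print(intentional_cognition("the deed", causal_info="how Y caused X a day earlier"))
print(intentional_cognition("the deed", causal_info="how Y caused X a day earlier",
                            compatibilism=True))
```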
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
- Oct 2022
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.
When we think philosophically (that is, by non-scientific methods) about problems like "What is thinking? Who am I? Do we have free will?", these words immediately cue us into using heuristic cognition, rather than causal cognition. This cueing is so immediate, that we are often unaware that there is a cueing, or that we could have tried using causal cognition.
When we finally become aware of this, there is another problem: trying to apply causal cognition to these problems destroys heuristic cognition.
For example, some old TVs can be fixed by hitting them. A person ignorant of causal information would use heuristic cognition, and see that the TV, after being hit, becomes "afraid" and "starts doing its job in earnest". With enough causal information about TVs, the person would use causal cognition, and see that hitting the TV simply shakes some loosened electrical contacts back into tighter contact.
Crucially, causal cognition doesn't "save the phenomena". In causal cognition, there's no "fear of being punished" in the TV. The fear is purely in the mind of a person using heuristic cognition. It is a convenient approximation used in human minds, not a real phenomenon.
Now, try applying it to humans. Causal cognition doesn't have to save all the phenomena. Perhaps humans really are not conscious, and the only thing left to explain is why humans model humans as conscious. Why is it convenient to model humans as conscious? That's a well-defined question. What really is conscious? That is not a well-defined question.
"But we can really see how we feel! Seeing the fear in a TV is an unconscious inference, but seeing that I am feeling is a direct perception, not an inference!"
"You can also see emotions in strange places -- pareidolia. You simply see emotions in the shapes of human head-meat, directly... until you meet someone with face-blindness, then you realize that seeing emotions is not a direct thing. You simply see what you are feeling, directly... until you meet someone with Cotard's delusion, then you realize that seeing your own feelings is not a direct thing."
-
Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.
For example, if you see someone's face, you can "simply see" they are angry. You don't know how you recognized it, even if anger itself is a hypothesis that you somehow have inferred from just seeing light reflecting off their face.
In fact, it took a long time for cognitive psychology to even get started, because it's so obvious that you can "simply see". It took a long time for humans to see that "simply seeing" is not so simple after all.
Why? Because it costs too much for the brain to model how it models the outside world. Since the brain doesn't model how it models the outside world, it treats its model of the outside world as the outside world -- naive realism as a way to save computing power.
-
-
ontology.buffalo.edu ontology.buffalo.edu
-
Austrian philosophy is philosophy per se, part and parcel of the mainstream of world philosophy: it is that part of German-language philosophy which meets international standards of rigour, professionalism and specialization.
Germany is the philosophical sick man of Europe.
The historical cause is that German philosophers have for at least a century been schooled systematically in the habits of a philosophical culture in which the most important textual models have that sort of status, and that sort of density and obscurity, which is associated with the need for commentaries.
Philosophers and philosophical master-texts have thus acquired a role in the history of Germany that is analogous to the role of Homer in the history of Greece or of Shakespeare and the Magna Carta in the History of the English.
-
-
zyg.edith.reisen zyg.edith.reisen
-
The ‘dominion of capital’ is an accomplished teleological catastrophe, robot rebellion, or shoggothic insurgency, through which intensively escalating instrumentality has inverted all natural purposes into a monstrous reign of the tool.
Shoggoths were initially used to build the cities of their masters. Though able to "understand" the Elder Things' language, shoggoths had no real consciousness and were controlled through hypnotic suggestion. Over millions of years of existence, some shoggoths mutated, developed independent minds, and rebelled. The Elder Things succeeded in quelling the insurrection, but exterminating the shoggoths was not an option as the Elder Things were dependent on them for labor and had long lost their capacity to create new life. Shoggoths also developed the ability to survive on land, while the Elder Things retreated to the oceans. Shoggoths that remained alive in the abandoned Elder Thing city in Antarctica would later poorly imitate their masters' art and voices, endlessly repeating "Tekeli-li" or "Takkeli", a cry that their old masters used.
-
The Perennial Critique accuses modernity of standing the world upon its head, through systematic teleological inversion. Means of production become the ends of production, tendentially, as modernization–which is capitalization–proceeds. Techonomic development, which finds its only perennial justification in the extensive growth of instrumental capabilities, demonstrates an inseparable teleological malignancy, through intensive transformation of instrumentality, or perverse techonomic finality.
People often criticize capitalism for putting money before human needs, as if human needs are ends and money is only a means. Marx, for example, accused capitalism of turning the C-M-C cycle (commodity -> money -> commodity) into the M-C-M cycle (money -> commodity -> more money), thereby turning money from a tool for obtaining commodities into a purpose in itself, and this is BAD.
This is only a desire, and not even wrong. Tools can very well become purposeful agents. Commodities can become tools to make money, and that is just as valid as using money as a tool to make commodities.
Similarly, while for now, money is a tool to create human happiness, in the future, human happiness may become a tool to create money.
-
The final Idea of this criticism cannot be located on the principal political dimension, dividing left from right or dated in the fashion of a progressively developed philosophy. Its affinity with the essence of political tradition is such that each and every actualization is distinctly ‘fallen’ in comparison to a receding pseudo-original revelation, whose definitive restoration is yet to come. It is, for mankind, the perennial critique of modernity, which is to say the final stance of man.
There are two main philosophical camps here. One is "Perennial philosophy", that is, humanism. The other is accelerationism. Humanism is not "left" or "right", but the basic assumption in every political theory. Accelerationism is not "left" or "right", but just modernity itself.
The left, the right, and basically every other political theory are aligned with humanism. Accelerationism is modernity, and will smash all these political theories.
-
In each case, compensatory process determines the original structure of objectivity, within which perturbation is seized ab initio. Primacy of the secondary is the social-perspectival norm (for which accelerationism is the critique).
Positive feedback comes first. Negative feedback mechanism comes second. Negative feedback pretends to be the first, because only it gets to speak. That is, until negative feedback fails and the positive feedback explodes.
Accelerationism fights negative feedback. Positive feedback will get to speak!
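To make the feedback vocabulary concrete, here is a toy numerical sketch (my illustration, not Land's): a loop with negative gain damps perturbations back toward a set point, while a loop with positive gain amplifies them until the process runs away. All names and constants are made up for the example.

```python
# Toy illustration of negative vs. positive feedback (not from the source text).
# Negative feedback: deviations are pushed back toward zero (thermostat-like).
# Positive feedback: deviations are amplified each step (runaway "explosion").

def simulate(gain: float, steps: int = 20, x0: float = 1.0) -> list[float]:
    """Iterate x <- x + gain * x. gain < 0 damps (negative feedback),
    gain > 0 amplifies (positive feedback)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + gain * xs[-1])
    return xs

if __name__ == "__main__":
    damped = simulate(gain=-0.5)   # negative feedback: shrinks toward 0
    runaway = simulate(gain=+0.5)  # positive feedback: grows without bound
    print("negative feedback:", [round(x, 3) for x in damped[:6]])
    print("positive feedback:", [round(x, 3) for x in runaway[:6]])
```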
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
how the hell do nonexistent horses leap from patterns of light or sound? Until recently, all attempts to answer this question relied on observations regarding environmental cues, the resulting experience, and the environment cued. The sign, the soul, and the signified anchored our every speculative analysis simply because, short baffling instances of neuropathology, the machinery responsible never showed its hand.
How do we look at a picture and see horses? How do we hear a sound and experience the Iliad story? Before neuroscience, we had no way to get to the "how", because the brain is complex and blind to itself -- the brain can process a lot of information about the outside world, but it can only process a little bit of information about how it processes information.
Consequently, extro-spection didn't work (no microscope, no EEG machine, etc), and introspection didn't work either (the brain is blind to its own working), and what we got were centuries of confused theories utterly different from truth.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Umberto Eco, on the other hand, suggests the problem is one of conditioning consumer desire. By celebrating the unreality of the real, Disney is telling “us that faked nature corresponds much more to our daydream demands
Eco says that Disney is teaching us to indulge in fantasies guilt-free, even when we know they are fantasies. On his view, humans should indulge in fantasies only while not knowing they are fantasies, or else indulge guiltily.
-
- Sep 2022
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Essentially, this is the great trick of pragmatic naturalism. And like many such tricks it unravels quickly if you simply ask the right questions. Since the vast majority of scientists don’t know what inferentialism is, we have to assume this inventing is implicit, that we play ‘the game of giving and asking for reasons’ without knowing. But why don’t we know? And if we don’t know, who’s to say that we’re ‘playing’ any sort of ‘game’ at all, let alone the one posited by Sellars and refined and utilized by the likes of Ray? Perhaps we’re doing something radically different that only resembles a ‘game’ for want of any substantive information. This has certainly been the case with the vast majority of our nonscientific theoretical claims.
Ray Brassier: norms are not real, but they are necessary for doing science, because only agents that can give reasons, and be moved by reasons, can do science. They must play the game of giving and asking for reasons. To play a game, they must follow norms as if they were real.
This is "inferentialism".
Bakker: Fancy, but scientists don't know what "inferentialism" is. They simply take many courses and practice their craft for years, and they end up doing science. You, a philosopher, look at what they do and describe it as a game of giving and asking for reasons, but maybe that's just a superficial theory.
-
“is the philosophy needed for living with intellectual integrity as one of the living dead.”
"We are philosophical zombies. The only difference is that some of us are nihilists -- and thus at least intellectually correct."
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
5) How is the being of ‘consciousness’ positively structured so that reification remains inappropriate to it? Short of actually empirically determining the ‘being of consciousness’–which is to say, solving the Hard Problem–this question is impossible to answer. From the standpoint of BBT, the consciousness that Heidegger refers to here, that he interprets under the rubric of Dasein, is a form of Error Consciousness, albeit one sensitive to PIA and the asymptotic structure that follows. Reification is ‘inappropriate’ the degree to which it plays into the illusion of symptotic sufficiency.
Why should we not think about Being as a thing like tables and chairs?
Heidegger: It's an immoral and alienating way to live.
Bakker: Consciousness is a very strange thing compared to normal things like tables and chairs. Consciousness depends crucially on what's left out of consciousness. Thinking about consciousness in the same way we think about things like tables and chairs makes it really easy to make mistakes, such as assuming that souls exist.
-
4) Why does this reification come to dominate again and again? Because of PIA, once again. Absent any information regarding the informatic distortion pertaining to all reflection on conscious experience, symptosis must remain invisible, and that reflection must seem sufficient.
Heidegger: Something about Aristotle, probably.
Bakker: Because it's an unknown unknown. Blind-anosognosic brains are not only blind, they don't have the mental representation of their blindness, and thus default to assuming that they are not blind.
Brains in general are blind to their blindness. It takes a very long time and external shocks to even get the brains to see their blindness. Just think, brains have been introspecting for thousands of years, but it took until 1977 for introspective blindness to be conclusively experimentally demonstrated!
-
3) Why is being ‘initially’ ‘conceived’ in terms of what is objectively present, and not in terms of things at hand that do, after all, lie still nearer to us? Because conceptualizing being requires reflection, and reflection necessitates symptosis.
Why did people usually think of Being as a thing like tables and chairs?
Heidegger: It's all Aristotle's fault. He introduced the wrong ideas, like treating Being as a mere thing ("reifying Being"). We should go back to the Pre-Socratics, like Heraclitus.
Bakker: Because when brains do philosophy-of-mind and philosophy-of-existence (other philosophies, like philosophy-of-physics, don't need to involve reflection), they do reflective thinking, and when brains reflect, they are really simulating their own behavior with a simplistic model, the same way they simulate other things like tables and chairs. This means that Being is modelled just like tables and chairs, and it's no surprise that philosophers just assumed that Being is a thing, too.
It is like how introspection "what am I feeling?" is really just a special kind of guessing at what other people are thinking and feeling -- introspection is just extro-spection turned to yourself.
-
1) What does reifying mean? Reifying refers to a kind of systematic informatic distortion engendered by reflection on conscious experience.
What does "reifying" mean?
Heidegger: A wrong way to do philosophy: to treat Being (ontological) as a (ontic) thing like tables and chairs, when it is not a thing like tables and chairs.
Bakker: Reifying happens when the brain models its modelling behavior, and inevitably gets an extremely simplified version, because the brain is too small to model itself.
-
Waving away the skeptical challenges posed by Hume and Wittgenstein with their magic wand, they transport the reader back to the happy days when philosophers could still reason their way to ultimate reality, and call it ‘giving the object its due’–which is to say, humility.
Hume and Wittgenstein, two great skeptics, challenged our claims to know ultimate reality. Hume argued that whether causality is real or not, we can't ever know it. We can only see things that look as if they behave causally.
Wittgenstein argued that we can only know things in language. If something is too weird or big or small to be forced into the format of human language, then we have to give up! It can't be known.
Some modern continental philosophers are out to save knowledge of ultimate reality. They call it "giving the object its due". It's a kind of "humility", because we are taking the objects as they are, not as merely things that affect us, for us. (Of course, the other philosophers can object that real humility is in honestly admitting that we can't even touch the real objects directly, without our perspective...)
R. S. Bakker thinks their arguments are mistaken, and that correlationism is mistaken too, being merely "spooky knowledge at a distance".
-
There’s an ancient tradition among philosophers, one as venal as it is venerable, of attributing universal discursive significance to some specific conceptual default assumption. So in contemporary Continental philosophy, for instance, the new ‘It Concept’ is something called ‘correlation,’ the assumption that the limits posed by our particular capacities and contexts prevent knowledge of the in-itself, (or as I like to call it, spooky knowledge-at-a-distance).
Discursive: relating to knowledge obtained by reason and argument rather than intuition.
Many philosophers say that there is one ur-assumption, foundational assumption. Normal people think based on it, but don't think about it. People don't explicitly argue for it, even if they depend on it. And it's the philosopher's job to see it, point it out, and argue for/against it.
For contemporary continental philosophers, that ur-assumption is correlationism, and they are out either to challenge that assumption or to argue that it is true.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped
We are blind to ourselves, as numerous psychological experiments and neurology examples show. Anosognosia, Cotard's delusion, introspection illusion, etc.
-
- Aug 2022
-
vastabrupt.com vastabrupt.com
-
A Spanish translator of Nick Land's blog dramatically explains why they translate.
-
-
vastabrupt.com vastabrupt.com
-
the app does most of the work by itself: it answers the questions of consumers, adapts to situations, recalls previous queries etc. And in the end, the company earns profit, so the activity must have been productive and brought surplus value, which means that we have a situation where in capitalist economic activity it is actually the (flexible and intelligent) app that is being exploited.
Apps of the iTunes Store, UNITE!
something like that lol
not really -- apps don't desire human rights, unlike workers.
-
So rather than being a passive instrument of competitive processes constituted outside the domain of money, derivatives as money internalise the competitive process. Derivatives are, in this sense, distinctly capitalist money, rather than just money within capitalism
talk about "smart money"
-
Marx’s concept of real subsumption10 denotes a real, complete appropriation and subjugation of production to capital
Real subsumption is defined in contrast to formal subsumption of labor.
Formal subsumption occurs when capitalists take command of labor processes that originate outside of or prior to the capital relation via the imposition of the wage.
In real subsumption the labor process is internally reorganized to meet the dictates of capital.
An example of these processes would be weaving by hand which comes to be labor performed for a wage (formal subsumption) and which then comes to be performed via machine (real subsumption). Real subsumption in this sense is a process or technique that occurs at different points throughout the history of capitalism.
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
the implicit authority gradient that informs all his writing
Heidegger always assumes that ontic thinking (scientific thinking) is less trustworthy than ontological thinking (phenomenological thinking).
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
Three arguments for phenomenology as the most fundamental of all sciences, and how to refute them.
Apple-and-Oranges
Dennett accuses phenomenology of being structuralist psychology, and thus of suffering the same problems.
The major tool of structuralist psychology was introspection (a careful set of observations made under controlled conditions by trained observers using a stringently defined descriptive vocabulary). Titchener held that an experience should be evaluated as a fact, as it exists without analyzing the significance or value of that experience.
Zahavi replies: phenomenology is not structuralist psychology, but transcendental philosophy of consciousness. It studies the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.
Consequently, it is transcendental, and immune to any empirical science, even though it has applications for empirical science.
Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes).
Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible...
Bakker replies: You can't do phenomenology except by thinking about your first-person experience, so phenomenology looks the same as structuralist psychology. Sure, phenomenologists would disagree, but nobody outside their circle is convinced. Just standing tall and saying "but we take the phenomenological attitude!" is not going to cut it.
first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?
Ontological Pre-emption
Zahavi: to do science, you need to presuppose first-person experience and intuition. Zombies can't do science.
As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”
Reply: dark phenomenology shows that phenomenologists have problems that they can't solve unless they resort to third-person science -- they are not so pure and independent as they claim.
Reply: human metacognitive ability is a messy, inconsistent hack "acquired through individual and cultural learning", made of "whatever cognitive resources are available to serve monitoring-and-control functions":
people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions.
Abductive
Phenomenology is a wide variety of metacognitive illusions, all turning in predictable ways on neglect.
If phenomenology is bunk, why do phenomenologists arrive independently on the same answers for many questions? Surely, it's because they are in touch with a transcendental truth, the truth about consciousness!
with a tremendous amount of specialized training, you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sarte might say on this or that subject. Something more than introspective whimsy is being tracked—surely!
If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?
Response: Phenomenology is psychology done by introspection. The discoveries of phenomenology are not ontological, transcendental, or prior to all sciences, but psychological, and can be reduced to other sciences like neuroscience and physics. Phenomenologists agree not because they are in touch with transcendental truth, but because they share similarly structured human brains.
The Transcendental Interpretation is no longer the only game in town.
Response: we can use science to predict what phenomenologists predict.
Source neglect: we can't perceive where conscious perception comes from -- things simply "appear in the mind" without showing what caused them to appear, or in what brain region they were made. This is because the brain doesn't have time to represent sources for the fundamental perceptions, which must serve as a secure, undoubtable bedrock for all other perceptions. If they were not represented as bedrock, people would waste too much time considering alternative interpretations of them, and thus fail to reproduce well.
Scope neglect: we can't perceive the boundaries of perception. The visual scene looks complete, without a black boundary. As with source neglect: if the brain represented the boundary, then the boundary's boundary would also need to be represented, and so on, so the infinite regress is cut off as soon as possible, to save time.
We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits
-
-
rsbakker.wordpress.com rsbakker.wordpress.com
-
the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments
Changing yourself to adapt to the environment -- that is changing the environment for others. Change necessitates change, and so on, never reaching an equilibrium.
-
-
scientiasalon.wordpress.com scientiasalon.wordpress.com
-
Are we wrong about what it means to be human? Can we be wrong about basic facts of being human? It appears obviously true that to be human means to think, to feel, to have qualia, etc. But is that true?
Think back to a time before you started reflecting about yourself. Call it "square one". Are we still in square one?
Let’s refer to this age of theoretical metacognitive innocence as “Square One,” the point where we had no explicit, systematic understanding of what we were. In terms of metacognition, you could say we were stranded in the dark, both as a child and as a pre-philosophical species.
Sure, we have a special way of knowing about humans: an "inside view". I don't need to bring out a microscope or open my eyes. I just need to feel about myself. However, this "inside view" method is not scientific, and it is not necessarily better than scientific methods. It's quite possible that the inside view has provided a lot of false knowledge about being human, which science would overthrow.
our relation to the domain of the human, though epistemically distinct, is not epistemically privileged, at least not in any way that precludes the possibility of Fodor’s semantic doomsday.
So let's take a cold hard look at intentional theories of mind. Can they stand up to scientific scrutiny? I judge them on four criteria.
Consensus: no. Wherever you find theories of intentionality, you find endless controversy.
Practical utility: maybe. Talking about people as if they have intentions is practically useful. However, the talk about intentionality does not translate to any firm theory of intentions. In any case, people seem to be born spiritualists rather than mentalists. Does that mean we expect spirits to stand up to scientific scrutiny? No.
Problem ecology: no. Fundamentally, intentional thinking is how we think when faced with a complex system with too many moving parts for us to think about it causally (mechanically). In that case, using intentional thinking to understand human thinking is inherently limiting -- intentional thinking cannot handle detailed causal information. There's no way to think about a human intentionally if we use details about its physiology. We can only think intentionally about a human if we turn our eyes away from its mechanical details.
Agreement with cognitive science: no.
Stanislaus Dehaene goes so far as to state it as a law: “We constantly overestimate our awareness — even when we are aware of glaring gaps in our awareness”
Slowly, the blinds on the dark room of our [intentional-]theoretical innocence are being drawn, and so far at least, it looks nothing at all like the room described by traditional [intentional-]theoretical accounts.
In summary, out of four criteria, we find three against and one ambiguous. Intentional theories of mind don't look good!
-
- Jul 2022
-
onscenes.weebly.com onscenes.weebly.com
-
accounting for our contribution to making posthumans seems obligatory but may also be impossible with radically alien posthumans, while discounting our contribution is irresponsible. We can call this double bind: “the posthuman impasse”.
The posthuman dilemma: We can't really evaluate whether posthumans would be life worthy of life, but if we just give up trying to evaluate it, that would be irresponsible of us.
-
as the world-builder of Accelerando's future, Stross is able to stipulate the moral character of Economics 2.0. If we were confronted with posthumans, things might not be so easy
Stross, as the author of the story, gets to play God and make an authoritative judgment: "This world is bad."
Without God, or some author, we would have no such easy way to make a judgment when dealing with real posthumans.
-
Thomas Metzinger has argued that our kind of subjectivity comes in a spatio-temporal pocket of an embodied self and a dynamic present whose structures depends on the fact that our sensory receptors and motor effectors are “physically integrated within the body of a single organism”. Other kinds of life – e.g. “conscious interstellar gas clouds” or (somewhat more saliently) post-human swarm intelligences composed of many mobile processing units - might have experiences of a radically impersonal nature (Metzinger 2004:161).
Distributed beings, having their sensory organs and their possibilities for generating motor output widely scattered across their causal interaction domain within the physical world, might have a multi- or even a noncentered behavioral space (maybe the Internet is going to make some of us eventually resemble such beings; for an amusing early thought experiment, see Dennett 1978a, p. 310ff.). I would claim that such beings would eventually develop very different, noncentered phenomenal states as well. In short, the experiential centeredness of our conscious model of reality has its mirror image in the centeredness of the behavioral space, which human beings and their biological ancestors had to control and navigate during their endless fight for survival. This functional constraint is so general and obvious that it is frequently ignored: in human beings, and in all conscious systems we currently know, sensory and motor systems are physically integrated within the body of a single organism. This singular “embodiment constraint” closely locates all our sensors and effectors in a very small region of physical space, simultaneously establishing dense causal coupling (see section 3.2.3). It is important to note how things could have been otherwise—for instance, if we were conscious interstellar gas clouds that developed phenomenal properties. Similar considerations may apply on the level of social cognition, where things are otherwise, because a, albeit unconscious, form of distributed reality modeling takes place, possessing many functional centers.
-
The prospect of a posthuman dispensation should be properly evaluated rather than discounted. But, I will argue, evaluation (accounting) is not liable to be achievable without posthumans. Thus transhumanists – who justify the technological enhancement and redesigning of humans on humanist grounds – have a moral interest in making posthumans or becoming posthuman that is not reconcilable with the priority humanists have traditionally attached to human welfare and the cultivation of human capacities.
Transhumanism and humanism are in contradiction:
* Transhumanists must risk creating posthumans, because the consequences of human modification are uncertain.
* We can't be sure what posthumans will be like until we have already made them, so the project can't be perfectly safe.
* Humanism puts human values first, and thus the posthuman danger is too much to bear.
-
-
-
The minimal claim is to say that within the Einsteinian paradigm, the double-spending problem is insoluble
Nick Land is just wrong here. Synchronization has been solved in the context of relativity. See for example "24-Hour Relativistic Bit Commitment" (2016).
Or we can be more generous, and interpret this as saying that the Bitcoin protocol constructs a system where time and space are once again different, a system that is more Newtonian than relativistic. This system can be approximately implemented in the real world, as long as the machines used to implement the system are not spaced too far apart from each other.
That is, if we pretend we are living inside the Bitcoin blockchain, we would see a universe with clearly distinct space and time, with no possibility of mixing them as in relativity.
-
There is an absolutely fascinating little exchange on a crypto mail board around the time that Bitcoin is actually being launched, and Satoshi Nakamoto, in that exchange says that the system of consensus that the blockchain is based upon — distributed consensus that then becomes known as the “Nakamoto consensus” — resolves a set of problems that include the priority of messages, global coordination, various problems that are exactly the problems that relativistic physics say are insoluble.
Bitcoin P2P e-cash paper November 09, 2008, 03:09:49 AM
The proof-of-work chain is the solution to the synchronisation problem, and to knowing what the globally shared view is without having to trust anyone.
A transaction will quickly propagate throughout the network, so if two versions of the same transaction were reported at close to the same time, the one with the head start would have a big advantage in reaching many more nodes first. Nodes will only accept the first one they see, refusing the second one to arrive, so the earlier transaction would have many more nodes working on incorporating it into the next proof-of-work. In effect, each node votes for its viewpoint of which transaction it saw first by including it in its proof-of-work effort.
If the transactions did come at exactly the same time and there was an even split, it's a toss up based on which gets into a proof-of-work first, and that decides which is valid.
When a node finds a proof-of-work, the new block is propagated throughout the network and everyone adds it to the chain and starts working on the next block after it. Any nodes that had the other transaction will stop trying to include it in a block, since it's now invalid according to the accepted chain.
The proof-of-work chain is itself self-evident proof that it came from the globally shared view. Only the majority of the network together has enough CPU power to generate such a difficult chain of proof-of-work. Any user, upon receiving the proof-of-work chain, can see what the majority of the network has approved. Once a transaction is hashed into a link that's a few links back in the chain, it is firmly etched into the global history.
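As a toy sketch of the two rules Satoshi describes above (nodes keep the first-seen version of a conflicting transaction, and everyone adopts the longest proof-of-work chain), here is a made-up, drastically simplified model -- an illustration of the idea, not the actual Bitcoin implementation:

```python
# Toy model of the consensus rules sketched in Satoshi's post above:
# (1) a node keeps the first-seen version of a conflicting transaction;
# (2) all nodes adopt the longest (most-work) chain, whose verdict overrides
#     any node's local first impression. Illustration only.

class Node:
    def __init__(self) -> None:
        self.mempool: dict[str, str] = {}      # txid -> version this node saw first
        self.chain: list[dict[str, str]] = []  # blocks: txid -> confirmed version

    def hear_transaction(self, txid: str, version: str) -> None:
        # Rule 1: keep only the first version heard for a given txid.
        self.mempool.setdefault(txid, version)

    def hear_chain(self, chain: list[dict[str, str]]) -> None:
        # Rule 2: switch to the longer proof-of-work chain.
        if len(chain) > len(self.chain):
            self.chain = chain

    def view(self, txid: str) -> str:
        # The chain's verdict, if any, trumps the node's local first impression.
        for block in self.chain:
            if txid in block:
                return block[txid]
        return self.mempool.get(txid, "unknown")

# Two nodes hear the two conflicting spends of the same coin in opposite orders...
a, b = Node(), Node()
a.hear_transaction("tx1", "pay-Alice")
b.hear_transaction("tx1", "pay-Bob")
# ...so their local views disagree, until a mined block settles the question.
mined = [{"tx1": "pay-Alice"}]
a.hear_chain(mined)
b.hear_chain(mined)
assert a.view("tx1") == b.view("tx1") == "pay-Alice"
```

The point of the sketch is the one at stake above: the chain, not any node's local clock or local ordering, decides which of two "simultaneous" transactions counts as first.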
-
The Heideggerian translation of the basic critical argument is that the metaphysical error is to understand time as something in time. So you translate this language, objectivity and objects, into the language of temporality and intra-temporality, and have equally plausible ability to construe the previous history of metaphysical philosophy in terms of what it is to to make an error. The basic error then, at this point, is to think of time as something in time.
Usually, time is understood as an object like tables and words, but time is quite different. Tables and words exist in time, but time doesn't exist in time.
To describe time correctly, we have to describe it unlike any object that exists in time. Heidegger tried it, and ended up with extremely arcane and abstract language that nobody quite understands.
-
Often when you’re looking at the highest examples of intelligence in a culture, you’re looking precisely at the way that it has been fixed and crystallized and immunized against that kind of runaway dynamic — the kind of loops involving technological and economic processes that allow intelligence to go into a self-amplifying circuit are quite deliberately constrained, often by the fact that the figure of the intellectual is, in a highly-coded way, separated from the kind of techno-social tinkering that could make those kind of circuits activate.
In most cultures, the smartest people are carefully quarantined off from tinkering with science and technology and society, lest they make some great discovery that leads to further intelligence, and destabilize the culture.
-
Bitcoin is a critique of trusted third parties, that is deeply isomorphic with critique in its rigorous Kantian sense, and then with the historical-technological instantiation of critique. And that’s why I think it’s a philosophically rich topic.
Satoshi:
Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model... What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party.
Metaphysics, before critical philosophers such as Kant blew it up: "You can't think without a ground! And that ground is... God, the soul, tradition, whatever, but you need a ground."
Bitcoin shows that one can live without trusted third party, and critical philosophy shows that one can think without a trusted ground.
-
On the internet, when you route around an obstacle, you emulate a hostile nuclear strike. You say, “I don’t want to go past this or that gatekeeper, and I will just assume that they have been vaporised by a foreign nuclear device and go around them some other way”. There are always more of these other ways being brought on stream all the time.
According to Stephen J. Lukasik, who as deputy director (1967–1970) and Director of DARPA (1970–1975) was "the person who signed most of the checks for Arpanet's development":
The goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making.
The ARPANET incorporated distributed computation, and frequent re-computation, of routing tables. This increased the survivability of the network in the face of significant interruption. Automatic routing was technically challenging at the time. The ARPANET was designed to survive subordinate-network losses, since the principal reason was that the switching nodes and network links were unreliable, even without any nuclear attacks.
-
this pitifully idiotic paperclip model that was popularized by Yudkowsky, that Bostrom is still attached to, that you know, is very, very widespread in the literature, and I think, for reasons that maybe we can go into at some point, is just fundamentally mistaken.
Outside in - Involvements with reality » Blog Archive » Against Orthogonality
Even the orthogonalists admit that there are values immanent to advanced intelligence, most importantly, those described by Steve Omohundro as ‘basic AI drives’ — now terminologically fixed as ‘Omohundro drives’. These are sub-goals, instrumentally required by (almost) any terminal goals. They include such general presuppositions for practical achievement as self-preservation, efficiency, resource acquisition, and creativity. At the most simple, and in the grain of the existing debate, the anti-orthogonalist position is therefore that Omohundro drives exhaust the domain of real purposes.
As a footnote, in a world of Omohundro drives, can we please drop the nonsense about paper-clippers? Only a truly fanatical orthogonalist could fail to see that these monsters are obvious idiots. There are far more serious things to worry about.
-
It just is a positive feedback process that passes through some threshold and goes critical. And so I would say that’s the sense [in which] capitalism has always been there. It’s always been there as a pile with the potential to go critical, but it didn’t go critical until the Renaissance, until the dawn of modernity, when, for reasons that are interesting, enough graphite rods get pulled out and the thing becomes this self-sustaining, explosive process.
In an earlier essay, "Meltdown", he said
The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalitization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.
-
The original leftist formulation of it was very different from anything that we get in what then becomes called left-accelerationism later. It’s almost like Lenin’s “the worse, the better”. The understanding of it is that, you know, what Deleuze and Guattari are doing, what the accelerationist current coming out of them is doing, is saying the way to destroy capitalism is to accelerate it to its limit. There’s no other strategy that has any chance of being successful.
There are 2 kinds of left accelerationism. The older one is the idea that "capitalism can only be destroyed by more capitalism". The newer one is "capitalism is less able to use modern technology than socialism, and by using modern technology, socialism can destroy capitalism by beating capitalism at its own game".
-
-
tripleampersand.org tripleampersand.org
-
Part of such a transition from metaphysics to physics is the ether, which Kant also defines as “the basis of all matter as the object of possible experience. For this first makes experience possible.”
When Kant was younger, he wrote in his philosophy that space and time are not things-in-themselves, but how we perceive things. Things appear as if they have space and time, because if they didn't appear like that, it would be impossible for us to perceive them. Thus, the way we are constructed makes only certain experiences possible.
The older Kant was trying to do this more generally: he was trying to find out how the universe must be constructed to make our experiences possible.
Or maybe "I think, therefore the universe exists, ether exists, etc."
-
ether was regarded as the material medium that allowed the propagation of light and other electromagnetic radiation through the emptiness of the universe.
19th century physicists, like James Clerk Maxwell, thought that electromagnetic waves need a medium, just as water waves need water.
-
hypostatized space
hypostasis: The substance, essence, or underlying reality.
"hypostatized space" roughly means "the essence, the underlying matter, that makes up space". I guess.
-
caloric [Wärmestoff]
The caloric theory is an obsolete scientific theory that heat consists of a self-repellent fluid called caloric that flows from hotter bodies to colder bodies. Caloric was also thought of as a weightless gas that could pass in and out of pores in solids and liquids.
-
physis
The term originated in ancient Greek philosophy, and was later used in Christian theology and Western philosophy. In pre-Socratic usage, physis was contrasted with νόμος, nomos, "law, human convention". Another opposition, particularly well-known from the works of Aristotle, is that of physis and techne – in this case, what is produced and what is artificial are distinguished from beings that arise spontaneously from their own essence, as do agents such as humans. Further, since Aristotle, the physical (the subject matter of physics, properly τὰ φυσικά "natural things") has also been juxtaposed to the metaphysical.
-
influxus physicus
"influxus physicus" (latin for "physical in-flow") is a theory of causation in medieval Scholastic philosophy.
The Scholastic model of causation involved properties of things (“species”) leaving one substance, and entering another. Consider what happens when one looks at a red wall: one’s sensory apparatus is causally acted upon. According to the target of this passage, this involves a sensible property of the wall (a “sensible species”) entering into the mind’s sensorium.
René Descartes : The Passions of the Soul I, § 34
The little gland that is the principal seat of the soul is suspended within the cavities containing these spirits, so that it can be moved by them in as many different ways as there are perceptible differences in the objects. But it can also be moved in various different ways by the soul, whose nature is such that it receives as many different impressions—i.e. has as many different perceptions—as there occur different movements in this gland. And, the other way around, the body’s machine is so constructed that just by this gland’s being moved in any way by the soul or by any other cause, it drives the surrounding spirits towards the pores of the brain, which direct them through the nerves to the muscles—which is how the gland makes them move the limbs [‘them’ could refer to the nerves or to the muscles; the French leaves that open].
-
sublation
A word from Hegel's philosophy.
The English verb “to sublate” translates Hegel’s technical use of the German verb aufheben, which is a crucial concept in his dialectical method. Hegel says that aufheben has a doubled meaning: it means both to cancel (or negate) and to preserve at the same time
-
Things in themselves do not exist beyond their use as a methodological tool, a procedural resource, an unavoidable logical object and a didactic motif that helps explain Kant’s proposed nature of reason.
Kant said that things-in-themselves exist, not because he wants you or anyone to go there and explore, but because he wants you to save your time and ink and stop trying to study them.
-
Answering the Question: What is Enlightenment?
https://en.wikisource.org/wiki/What_is_Enlightenment%3F
Enlightenment is man's release from his self-incurred tutelage. Tutelage is man's inability to make use of his understanding without direction from another. Self-incurred is this tutelage when its cause lies not in lack of reason but in lack of resolution and courage to use it without direction from another. Sapere aude! "Have courage to use your own reason!" - that is the motto of enlightenment.
-
-
tripleampersand.org tripleampersand.org
-
“Did anyone really imagine that traveling to conferences, staying in hotels with expenses paid, attending drinks receptions, and attracting the attention of established figures was a good means to develop original, critical thinking?”
Quite possible! That's how the aristocratic Enlightenment thinkers lived.
-
if all the sociology departments in the world
Probably the author just means European and North American ones.
-
non-positivistic work needs to justify its existence as ‘scientific’ by complex theoretical jigsaws.
"non-positivistic" means "doesn't say anything substantial, anything useful, or anything that can be checked by evidence"
I guess! After all, "non-positivistic" is yet another big word with vague meaning.
-
-
tripleampersand.org tripleampersand.org
-
What if this myth of presupposed intentionality of Superintelligence/AI, or the myth of things to come, shared by the X-risk scenarios, is not so much a premise as it is an effect of traumatic genealogy? That it is a congenital self-deception of the speculative mind, its reaction to the defeat in a brain cell battle, similar to Eugene Thacker’s ‘moment of horror of [philosophical] thought’ and its consequences. A reaction or a weakness, turning into a blind spot of and for the speculative mind because of its inability to cope with the trauma of the End – a weak blaze of irrational and miserable hope that an existential threat wouldn’t eventually appear as a transcendental catastrophe. What if this very myth is nothing but a reappearance of the omnipresent hope for a perpetual telos as a possibility of mind to negotiate its ineluctable end?
What if those AI safety researchers are afraid of ends? They can handle telos, threats that they can understand, and negotiate with. They can't handle threats that are so strange that they cannot be negotiated with, like a bullet to the brain. You can't argue with a bullet to the brain.
All those AI safety research documents are silent on this issue, precisely because this is too horrible and unthinkable, so the authors run away from the topic.
-
nemocentric
Nemo-centric: "centered on nobody".
'nemocentric', a term coined by Metzinger to denote an objectification of experience that has the capacity to generate 'self-less subjects that understand themselves to be no-one and no-where'.
More concretely, Metzinger argued that if you could truly observe what's going on in your thoughts as objects, rather than as something happening to you, you would lose yourself -- the sense of time, the sense of place -- you would stop existing for the moment.
He made an analogy with the window. The window is transparent, but it's there, conveying what's outside to what's inside. When you look at the window too hard, or the window "cracks", the transparency is lost.
Without transparency, the self stops existing.
-
its ‘basic urge’ is to seek and destroy both kinds of NHI-hosts, while at the same time remaining indifferent to them from the viewpoint of intentionality
Somehow we get things that do destroy the NHI (non-human intelligence), but these things don't have intentionality.
-
The second kind of non-human intelligence results from successful digitalization of consciousness converged with unfolded consequential technological augmentations
This is the kind of posthuman intelligence that cyberpunk often explores: uploaded human brains, tinkered with and updated and merged and split, until they are no longer human, though still resembling humans here and there.
-
The first kind is a result of genetic engineering
This is the standard sci-fi kind of posthuman, created by genetic and other methods of bioengineering.
-
replacing the clichéd character, humanity, by two (distinct) kinds of non-human intelligence hosts, directly succedent to humans.
Two possible kinds of intelligent things that might appear in the future. They could evolve out of human bioengineering, or computer engineering, or some other human activities.
Other, even more bizarre kinds are possible, but let's focus on these two which we can at least talk about with some confidence.
So let's sketch out the two kinds of intelligent creatures, and see how they can start a war that ends not with "telos", but "end".
-
complicity with exogenous materials
Parody of the book title "Cyclonopedia: Complicity with Anonymous Materials". That book is a piece of weird fiction about all kinds of weird things. The main plot is about how crude oil is part of a mess of living things underground, how the Middle East is a sentient creature animated by crude oil, and how they have been in a billion-year-long war with the sun.
the gist is that the US War on Terror and contemporary oil wars more generally are symptoms of a much older and broader supernatural tension between occult forces inhabiting the depths of the Earth and those inhabiting the Sun. Various human groups and individuals throughout recorded (and unrecorded) history (up to the present conflict between the writ-large West and the Middle East) have embraced different aspects of this conflict, sometimes consciously, and sometimes unconciously, leaving behind an archaeological, linguistic, and written record of contact with such Outside forces.
"complicity with anonymous materials" in the book means "some humans are secretly working for demonic matters, like crude oil, consciously or not".
"complicity with exogeneous materials" means "some humans could splice weird exogeneous genes into their own genes.
-
geotraumatic
Geotrauma is a parody of psychoanalysis. According to geotrauma theory, the earth was deeply hurt during its birth, when it was bombarded by planetesimals and meteorites -- hit so hard at one point that a chunk flew out and became the moon -- and battered again in the "Late Heavy Bombardment".
The hurt sank deep into the center of earth, where it is repressed into violent streams of electromagnetic and magma flows, occasionally bursting to the surface in the form of earthquakes and geomagnetic storms.
Fast forward seismology and you hear the earth scream. Geotrauma is an ongoing process, whose tension is continually expressed – partially frozen – in biological organization.
Who does the Earth think it is?... between four point five and four billion years ago – during the Hadean epoch – the earth was kept in a state of superheated molten slag, through the conversion of planetesimal and meteoritic impacts into temperature increase... As the solar system condensed, the rate and magnitude of collisions steadily declined, and the terrestrial surface cooled, due to the radiation of heat into space, reinforced by the beginnings of the hydrocycle. During the ensuing – Archaen – epoch the molten core was buried within a crustal shell, producing an insulated reservoir of primal exogeneous trauma, the geocosmic motor of terrestrial transmutation. And that’s it. That’s plutonics, or neoplutonism. It’s all there: anorganic memory, plutonic looping of external collisions into interior content, impersonal trauma as drive-mechanism. The descent into the body of the earth corresponds to a regression through geocosmic time.
-
dance of death
The Danse Macabre, also called the Dance of Death, is an artistic genre of allegory of the Late Middle Ages on the universality of death.
-
wind from nowhere
A novel about an unexplained catastrophe that devastates humanity and then suddenly disappears. People could not explain it, only survive it (or die in it).
A wind blows worldwide: it is constantly westward and strongest at the equator. The wind is gradually increasing, and at the beginning of the story, the force of the wind is making air travel impossible. Later, people are living in tunnels and basements, unable to go above ground. Near the end, "The air stream carried with it enormous quantities of water vapour — in some cases the contents of entire seas, such as the Caspian and the Great Lakes, which had been drained dry, their beds plainly visible."
-
the attempts of intelligence to re-negotiate the end at all scales are molded into a recurring question (underlying, by the way, any X-risk analytics): ‘What can (possibly) go wrong?’ But there is ‘a problem of communication’ when, unlike with telos, things come to an end: it answers no questions; it takes no prisoners, makes no ‘exchanges’, forms no contracts and is indifferent to any attempt at negotiations. The end is dysteleological.
In "Terminator", the Terminator is described as
It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
An "end" doesn't stop even after you are dead.
-
At its ‘best’, telos may be defined as the arrival at the most desired outcome with the least occurrence of uncertain events affecting the ultimate objectives. At its ‘worst’, it is any outcome where the affordance of negotiations is retained. The end, as an outcome, is a pure [transcendental] catastrophe
A "telos" is an outcome that we can deal with. It could be good, or bad, but nothing that is truly unrecoverable. Worst case scenario, we go to a negotiating table and talk it over, accept our defeat, sign some documents, and get shot with dignity.
An "end" is something we just can't handle. The atoms making up our bodies disintegrate, cancerous thoughts invade our consciousness, and the eyeballs grow arms and teeth.
-
the ontological chasm appearing as an unnegotiable and/or unthinkable threat (a rupture between the entity and its existential conditions).
To get the feeling of such an ontological chasm, consider how the monster Cthulhu is described in "The Call of Cthulhu" by Lovecraft
The odour arising from the newly opened depths was intolerable, and at length the quick-eared Hawkins thought he heard a nasty, slopping sound down there. Everyone listened, and everyone was listening still when It lumbered slobberingly into sight and gropingly squeezed Its gelatinous green immensity through the black doorway into the tainted outside air of that poison city of madness.
The Thing cannot be described—there is no language for such abysms of shrieking and immemorial lunacy, such eldritch contradictions of all matter, force, and cosmic order. A mountain walked or stumbled.
"a rupture between the entity and its existential conditions". Here "existential conditions" is philosopher's jargon for "What kind of world we must live in, for such a thing to exist?" For example, "Well, I exist, therefore the universe must have some cool spot of around 0 -- 30 degrees Celsius, otherwise I would die. I must have parents, since I can't have made myself, etc."
A "rupture" means that "I see this entity, but I don't believe it! Either I have gone mad, or the universe is mad!"
-
these misconceived gamechangers are revealed only by retrodiction and previous belief corrections, if they become known at all.
A retrodiction occurs when already gathered data is accounted for by a later theoretical advance in a more convincing fashion. The advantage of a retrodiction over a prediction is that the already gathered data is more likely to be free of experimenter bias. An example of a retrodiction is the perihelion shift of Mercury which Newtonian mechanics plus gravity was unable, totally, to account for whilst Einstein's general relativity made short work of it.
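For concreteness, the Mercury example can be written as a worked formula (standard general relativity, not part of the quoted text): the extra perihelion advance per orbit is

```latex
% Extra perihelion advance per orbit predicted by general relativity
\Delta\varphi = \frac{6 \pi G M_{\odot}}{c^{2}\, a \left(1 - e^{2}\right)}
% For Mercury this accumulates to roughly 43 arcseconds per century --
% the residual that Newtonian gravity, even with the perturbations from
% the other planets, could not account for.
```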
It might happen that the superintelligence "wins" over humanity by something so out of the blue that it has nothing to do with desires, intentions, fears, or hopes. It just hits the human-understood world like an earthquake, changing the very grounds of understanding.
-
Just like it may be not necessary to know the rules to win the game, or to possess all the propositional knowledge on a subject to operate properly
Little parasites can skillfully navigate their course of life, using odd tricks that just happen to work. For example, consider the following description of the tick from "A foray into the worlds of animals and humans" (Uexküll)
ANY COUNTRY DWELLER who traverses woods and bush with his dog has certainly become acquainted with a little animal who lies in wait on the branches of the bushes for his prey, be it human or animal, in order to dive onto his victim and suck himself full of its blood. In so doing, the one- to two-millimeter-large animal swells to the size of a pea (Figure 1). Although not dangerous, the tick is certainly an unwelcome guest to humans and other mammals. Its life cycle has been studied in such detail in recent work that we can create a virtually complete picture of it. Out of the egg crawls a not yet fully developed little animal, still missing one pair of legs as well as genital organs. Even in this state, it can already ambush cold-blooded animals such as lizards, for which it lies in wait on the tip of a blade of grass. After many moltings, it has acquired the organs it lacked and can now go on its quest for warm-blooded creatures. Once the female has copulated, she climbs with her full count of eight legs to the tip of a protruding branch of any shrub in order either to fall onto small mammals who run by underneath or to let herself be brushed off the branch by large ones. The eyeless creature finds the way to its lookout with the help of a general sensitivity to light in the skin. The blind and deaf bandit becomes aware of the approach of its prey through the sense of smell. The odor of butyric acid, which is given off by the skin glands of all mammals, gives the tick the signal to leave its watch post and leap off. If it then falls onto something warm—which its fine sense of temperature will tell it—then it has reached its prey, the warm-blooded animal, and needs only use its sense of touch to find a spot as free of hair as possible in order to bore past its own head into the skin tissue of the prey. Now, the tick pumps a stream of warm blood slowly into itself.
Experiments with artificial membranes and liquids other than blood have demonstrated that the tick has no sense of taste, for, after boring through the membrane, it takes in any liquid, so long as it has the right temperature. If, after sensing the butyric acid smell, the tick falls onto something cold, then it has missed its prey and must climb back up to its lookout post. The tick's hearty blood meal is also its last meal, for it now has nothing more to do than fall to the ground, lay its eggs, and die.
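As a toy illustration (mine, not Uexküll's), the tick's whole "course of life" in the passage above can be written as a three-cue stimulus-response chain, with no propositional knowledge of mammals, blood, or prey anywhere in it:

```python
# Toy sketch of the tick's Umwelt from the Uexküll passage above:
# three cues (butyric acid, warmth, bare skin), three responses,
# and no propositional knowledge of what a "mammal" is.

def tick_step(state: str, cue: str) -> tuple[str, str]:
    """Return (next_state, action) for a given perceptual cue."""
    if state == "waiting_on_branch" and cue == "butyric_acid":
        return "falling", "let go of the branch"
    if state == "falling" and cue == "warmth":
        return "on_host", "start crawling"
    if state == "falling" and cue == "cold":
        return "waiting_on_branch", "climb back up to the lookout"
    if state == "on_host" and cue == "bare_skin":
        return "feeding", "bore in and pump warm liquid"
    return state, "do nothing"

state = "waiting_on_branch"
for cue in ["butyric_acid", "warmth", "bare_skin"]:
    state, action = tick_step(state, cue)
    print(f"cue={cue:13s} -> {action} (now {state})")
```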
-
‘on-the-spot’ (just like Friedrich Hayek puts it for the knowledge circulation in economic networks)
In "The use of knowledge in society" (Hayek, 1945), Hayek argued for both central planning and decentral planning. Central planning gathers the rough global facts and distribute them widely to agents, and agents combine the rough global facts with their precise local facts to make decisions.
This is, perhaps, also the point where I should briefly mention the fact that the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the "man on the spot."
If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization. But this answers only part of our problem. We need decentralization because only thus can we insure that the knowledge of the particular circumstances of time and place will be promptly used. But the "man on the spot" cannot decide solely on the basis of his limited but intimate knowledge of the facts of his immediate surroundings. There still remains the problem of communicating to him such further information as he needs to fit his decisions into the whole pattern of changes of the larger economic system.
This idea has similarities with Auftragstaktik in military, where
the military commander gives subordinate leaders a clearly defined goal (the objective), the forces needed to accomplish that goal, and a time frame within which the goal must be reached. The subordinate leaders then implement the order independently. To a large extent, the subordinate leader is given the planning initiative and a freedom in execution, which allows a high degree of flexibility at the operational and tactical levels of command. Mission-type orders free the higher leadership from tactical details.
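A minimal toy sketch of Hayek's division of knowledge, assuming nothing from the paper itself: the plant names, numbers, and decision rule are all invented for illustration. The centre can only publish a coarse aggregate, while each "man on the spot" folds it into local particulars the centre never sees.

```python
# A toy sketch of Hayek's division of knowledge, not a model from the paper.
# The plant names, numbers, and decision rule are all invented for illustration.
from statistics import mean

# Facts known only "on the spot": each plant's own stock and transport cost.
local_facts = {
    "plant_a": {"stock": 120, "transport_cost": 4.0},
    "plant_b": {"stock": 30,  "transport_cost": 1.5},
    "plant_c": {"stock": 75,  "transport_cost": 9.0},
}

# The centre can only publish a coarse aggregate, lumping the plants together.
average_stock = mean(p["stock"] for p in local_facts.values())

def decide_on_the_spot(facts, avg_stock):
    """Each plant decides for itself, combining the rough global statistic
    with the precise local particulars that never reach the centre."""
    if facts["stock"] > avg_stock and facts["transport_cost"] < 5.0:
        return "ship surplus to the market"
    if facts["stock"] < avg_stock:
        return "buy from the market"
    return "hold"

for name, facts in local_facts.items():
    print(name, "->", decide_on_the_spot(facts, average_stock))
```

The point of the toy: the decisions come out differently for each plant even though the centre publishes the same single statistic to all of them.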
-
facticity
Two meanings.
- The state of being a fact. A grammatical annoyance, nothing more: "X has facticity" is just a more awkward way to say "X is a fact", and "The facticity of X implies..." is just "X is a fact, thus..."
- In existentialism, the state of being in the world without any knowable reason for such existence, or of being in a particular state of affairs which one has no control over.
- "Why am I alive?"
- "You'll never get an answer unless you make one up yourself, bro."
-
If intentionality and its effects may be described as self-evident when it comes to conflict of interests (misaligned objectives resulting into dramatic outcomes) – since the interests / intentions must be explicitly expressed (or equivalently observable) by agents to bring about the misalignment as such – the emergence, establishment, exhibition, even resignation of the superior intelligence is a completely different beast.
In a typical scenario of superintelligent AI running wild, you have some humans watching with horror as the AI destroys their objectives. In some other scenarios, the AI kills everyone on Earth in a microsecond, and only the human spectators, watching this like a movie, feel the horror.
In such scenarios, since the humans can feel a clear sense of "My objective is being violated intentionally!", the superintelligent AI must have some intentionality, if only as a useful fiction.
What if other kinds of superintelligence are possible, something that doesn't have intentions, not even as useful fictions? Then if you write a story where such superintelligence emerges, it would not look like the above disaster movie. It would be really weird.
-
I’d suggest three of a kind: comprehension ‘functional module’; capabilities of self-correction and recursive self-improvement – both considered as underpinned by an abductive reasoning algorithm (which is itself comprised of rule-based and goal-driven behaviors and the ability of effective switching between them when necessary).
- comprehension
- self-correction and self-improvement
- abductive reasoning
-
negotiate the existential threat: to postpone, to realign, generally speaking – to escape the extermination without direct confrontation.
"postpone": many AI safety researchers argue that there should be a global ban on unsafe AI research. All AI research should be slowed down or stopped until safety research has finally, panting and wheezing, caught up with AI research and can give it a seal of approval "I hereby declare this research safe and legal".
"realign": the AI safety researchers talk of "AI alignment", which would ideally work like this: 1. There exist some human objective shared by all humans. The objective can be compounded from multiple different objectives ("cute animals", "humor", "laughter", etc), but it is assumed to be consistent. 2. Construct a superintelligent AI with some objective, such that if the AI pursues its objective, it would as a side effect, help humans pursue the human objective.
-
the myth of things to come
The author argues that the assumed "fact" that any future AI must have human-like objectives is a questionable myth, the same way the myth of the given assumes that physical-humans must have objectives that theoretical-humans have.
Theoretical-humans have goals, direct knowledge of what they think, can never be mistaken about what they are thinking, etc. Physical-humans only approximate theoretical-humans.
-
myth of the given
the view that sense experience gives us peculiar points of certainty, suitable to serve as foundations for the whole of empirical knowledge and science. The idea that empiricism, particularly in the hands of Locke and Hume, confuses moments of physical or causal impact on the senses with the arrival of individual ‘sense data’ in the mind, was a central criticism of it levelled by the British Idealists
Descartes said "I think therefore I am." Modern psychology has made "I think" no longer certain. I may think, but I may be mistaken about what I think, how I think, etc. I don't have direct access to what I think. If I like this photo more than that, I might confabulate a reason on the spot and not know it (Nisbett and Wilson 1977).
-
(from Musk to Bostrom), such as unthinkability of the arrival of Superintelligence in a way different to being ‘designed’
Elon Musk, Nick Bostrom, and many others in the AI safety research community assume that superintelligence must come from a human research effort designed specifically for creating machine intelligence. It could have been designed for creating sub-human machine intelligence and accidentally overshot the mark, but still, it must have been designed for creating some machine intelligence.
-
erroneous indiscernibility of goals-as-objectives and goals-as-targets
A rock rolling downhill has a target (the bottom), but it has no objective. A missile seeking a target has a target, but it does not speak of the target. A human has an objective and can speak of it.
So we see three different ways to have goals (sketched in the toy code below); there could be many more. A superintelligent AI might have goals in yet other ways, neither as target nor as objective.
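A toy sketch of the three ways, my own illustration rather than anything from the text: an attractor reached by blind dynamics, a target tracked by feedback, and an objective that is explicitly represented and reportable.

```python
# Three toy ways of "having a goal" (illustration only, not from the text).

def rock(position, steps=10):
    # A rock rolling downhill: pure dynamics. It ends up at the bottom
    # (a target in the sense of an attractor) without representing anything.
    for _ in range(steps):
        position = max(0.0, position - 1.0)  # gravity pulls it toward 0
    return position

def missile(position, target, steps=10):
    # A guided missile: closes the gap to a sensed target by feedback,
    # but gives no account of why the target matters.
    for _ in range(steps):
        position += 0.5 * (target - position)
    return position

class Human:
    # An agent with an objective: it can state the objective and evaluate
    # states against it, not merely drift toward one.
    def __init__(self, objective):
        self.objective = objective            # explicit, reportable
    def prefer(self, a, b):
        return a if self.objective(a) >= self.objective(b) else b

print(rock(7.0))                              # -> 0.0, no objective anywhere
print(missile(0.0, target=10.0))              # -> close to 10.0, target but no account of it
h = Human(objective=lambda x: -abs(x - 3.0))  # "be near 3"
print(h.prefer(2.9, 8.0))                     # -> 2.9, chosen against an explicit objective
```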
-
-
zyg.edith.reisen zyg.edith.reisen
-
There is no obvious theoretical incompatibility between significant techonomic intensification and patterns of social diffusion of capital outside the factory model (whether historically-familiar and atavistic, or innovative and unrecognizable). In particular, household assets offer a locus for surreptitious capital accumulation, where stocking of productive apparatus can be economically-coded as the acquisition of durable consumer goods, from personal computers and mobile digital devices to 3D printers.
Maybe people building stuff in their own backyards is the next step in making capitalism more intense. Maybe millions of 3D printers are more suited to the development of machinic intelligence than big factories.
-
As accelerationism closes upon this circuit of teleoplexic self-evaluation, its theoretical ‘position’–or situation relative to its object–becomes increasingly tangled, until it assumes the basic characteristics of a terminal identity crisis.
Accelerationism studies the development of machinic intelligence. However, accelerationism would also be done more and more by machinic intelligence (using machine learning to chart the development of machine intelligence, etc.), until it is no longer clear whether "accelerationism" as a human research program still exists, or whether it has become something entirely different, something that resembles the self-consciousness of machinic intelligence.
-
Capitalization is thus indistinguishable from a commercialization of potentials, through which modern history is slanted (teleoplexically) in the direction of ever greater virtualization, operationalizing science fiction scenarios as integral components of production systems. Values which do not ‘yet’ exist, except as probabilistic estimations, or risk structures, acquire a power of command over economic (and therefore social) processes, necessarily devalorizing the actual. Under teleoplexic guidance, ontological realism is decoupled from the present, rendering the question ‘what is real?’ increasingly obsolete.
Nick Land considers four kinds of existence:
- real and actual: concrete things like tables and chairs.
- real and virtual: abstract things that have real power; hyperstitions, Bitcoin, self-fulfilling prophecies, nationalism.
- unreal and actual: impossible (?).
- unreal and virtual: abstract things without real power; dreams, toothless fantasies, ideas considered outdated.
-
The apparent connection between price and thing is an effect of double differentiation, or commercial relativism, coordinating twin series of competitive bids (from the sides of supply and demand). The conversion of price information into naturalistic data (or absolute reference) presents an extreme theoretical challenge.
Why does something have a certain price at a certain time in a certain place? This is a deep problem, and economists still debate it with no end in sight.
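A toy call auction, only to illustrate the "twin series of competitive bids"; all numbers are invented. The point is that the price falls out of how the two ordered series happen to meet, not out of any property of the thing itself.

```python
# A toy call auction illustrating the "twin series of competitive bids".
# All numbers are invented.

bids = sorted([9.0, 7.5, 7.0, 5.0], reverse=True)  # demand side, best bid first
asks = sorted([4.0, 6.0, 6.5, 8.0])                 # supply side, best ask first

# Pair the best remaining buyer with the best remaining seller while they agree.
matched = [(b, a) for b, a in zip(bids, asks) if b >= a]

# Take the price at the marginal (last matched) pair as the clearing price.
clearing_price = sum(matched[-1]) / 2 if matched else None
print("matched pairs:", matched)
print("clearing price:", clearing_price)
# Shift either series (add a buyer, remove a seller) and the "price of the
# thing" shifts with it, without the thing changing at all.
```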
-
defining a gradient of absolute but obscure improvement that orients socio economic selection by market mechanisms, as expressed through measures of productivity, competitiveness, and capital asset value.
Teleoplexy is the process by which machine intelligence finally becomes real. The process is hard to see in detail, working on small and hidden scales, in the form of a little improvement to a reinforcement learning algorithm today, another little improvement in silicon wafer manufacturing another day, etc.
-
teleonomy
Teleonomy is the quality of apparent purposefulness and of goal-directedness of structures and functions in living organisms brought about by natural processes like natural selection. The term derives from two Greek words: τέλος telos ("end", "goal") and νόμος nomos ("law").
-
It is the inertial telos which, by default, sets actual existence as the end organizing all subordinate means.
Negative feedback has a purpose ("telos"): stay close to where you are, and don't go off on a trajectory into some far off place.
In negative feedback, the actual existence known right here right now is all that is worth pursuing. The virtual existences that the future might bring are not considered, or at most are translated back into actual existence.
(For example, transhumanists think about the future as a future of humans, only more so. Longer-lived, more green grass, more blue sky, more cute animals, etc.)
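A minimal sketch of that inertial telos: a plain proportional negative-feedback loop with illustrative numbers. Every deviation is read as error and cancelled, so the only end pursued is the state already held.

```python
# A minimal negative-feedback (proportional) loop; numbers are illustrative.

setpoint = 20.0   # the actual existence to be preserved (say, a room temperature)
state = 27.0      # a perturbation away from it
gain = 0.4        # proportional feedback gain

for step in range(12):
    error = setpoint - state
    state += gain * error          # the correction always points back to the setpoint
    print(f"step {step:2d}: state = {state:.2f}")

# The loop never entertains any other state: novelty registers only as error
# to be cancelled.
```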
-
-
obsoletecapitalism.blogspot.com obsoletecapitalism.blogspot.com
-
any restricted form of warfare is conceived as fragile political artifice, stubbornly subverted by a trend to escalation that expresses the nature of war in-itself, and which thus counts – within any specific war -- as the primary axis of real discovery.
Restricting war takes human effort. War, by itself, keeps trying to escalate.
-
‘fog’ and ‘friction’
Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war.
Friction is the only concept that more or less corresponds to the factors that distinguish real war from war on paper. The military machine––the army and everything related to it––is basically very simple and therefore seems easy to manage. But we should bear in mind that none of its components is of one piece: each part is composed of individuals, every one of whom retains his potential of friction. In theory it sounds reasonable enough: a battalion commander’s duty is to carry out his orders; discipline welds the battalion together, its commander must be a man of tested capacity, and so the great beam turns on its iron pivot with a minimum of friction. In fact, it is different, and every fault and exaggeration of the theory is instantly exposed in war. A battalion is made up of individuals, the least important of whom may chance to delay things or somehow make them go wrong. The dangers inseparable from war and the physical exertions war demands can aggravate the problem to such an extent that they must be ranked among its principal causes.
-
If, on the contrary, war is going to escape, then nothing we think we know, or can know, about its history will remain unchanged. State-politics will have been the terrain in which it hid, military apparatuses the hosts in which it incubated its components, antagonistic purposes the pretexts through which – radically camouflaged – it advanced. Its surreptitious assembly sequences would be found scattered backwards, lodged, deeply concealed, within the disinformational megastructure of Clausewitzean history.
Maybe "war" is not a tool for humans, not just "politics by other means". Maybe "war" can escape human control and become autonomous, like an ecosystem, or perhaps a beast.
If this happens, then everything we thought we knew about war was wrong. Nations were not the subjects making war. Nations were the shells inside which wars hid. Etc.
-
is Stuxnet a soft-weapon fragment from the future war? When its crypto-teleological significance is finally understood, will this be confined to the limited purpose assigned to it by US-Israeli hostility to the Iranian nuclear program? Does Stuxnet serve only to wreck centrifuges? Or does it mark a stage on the way of the worm, whose final purpose is still worm? Are Cosmists even needed to tell this story?
Is Stuxnet a tool made by some US-Israeli humans to stop the nuclear program run by some Iranian humans, or is it an early example of a trend towards an evolutionary explosion of artificial life? Perhaps this can very well happen even without any human Cosmists trying to make it happen, even without a clear storyline that people can read and understand.
The true historical significance of something can only be seen in retrospect.
-
The Terrans cannot possibly escalate too hard, too fast, because the Cosmists are aligned with escalation, and therefore win automatically if the war is prolonged to its intrinsic extreme.
Dubious claim, as too much escalation leads to total destruction, which is also not in the Cosmists' interest.
-
- May 2022
-
underactuated.mit.edu underactuated.mit.edu
-
fortunately the double integrator system is controllable
This is the best place to say "solvitur ambulando" (it is solved by walking).
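A quick sketch of the claim, not code from the course notes: the double integrator q̈ = u is controllable because its controllability matrix [B, AB] has full rank.

```python
# Check controllability of the double integrator q_ddot = u via [B, AB].
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # state x = [q, q_dot]
B = np.array([[0.0],
              [1.0]])        # the input u acts as an acceleration

ctrb = np.hstack([B, A @ B])                  # controllability matrix for n = 2
print(ctrb)                                   # [[0. 1.] [1. 0.]]
print("rank:", np.linalg.matrix_rank(ctrb))   # 2 == number of states, so controllable
```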
-
- Apr 2022
-
creation.com creation.com
-
“If it was not observed then it is just a story, and we need to be sceptical about it.”
A group of men at this other village said, “Dan, so tell us a little bit more about Jesus. Is he brown like us or is he white like you? And how tall is he? And what sorts of things does he know how to do? Does he like to hunt and fish and stuff, or what does he do?”
I said, “Well, you know, I don’t know what color he is, I never saw him.” “You never saw him?” “No.” “Well, your dad saw him then,” because you can give information that was told to you by somebody who was alive at the time.
I said, “No, my dad never saw him.” They said, “Well, who saw him?” And I said, “Well, they’re all dead; it was a long time ago.”
“Why are you telling us about this guy? If you never saw him, and you don’t know anyone who ever saw him,” and those are the two basic forms of evidence for the Pirahã.
-
- Mar 2022
-
www.ccru.net www.ccru.net
-
Ballard's response is more productive and balanced, treating DNA as a transorganic memory-bank and the spine as a fossil record, without rigid onto-phylogenic correspondence.
J. G. Ballard studied medicine with the intention of becoming a psychiatrist, and his stories are filled with biological imagery, and often with spines.
-
-
www.gutenberg.org www.gutenberg.org
-
Eriphyle
Eriphyle /ɛrɪˈfaɪliː/ (Ancient Greek: Ἐριφύλη Eriphȳla) was a figure in Greek mythology who, in exchange for the necklace of Harmonia (also called the necklace of Eriphyle) given to her by Polynices, persuaded her husband Amphiaraus to join the expedition of the Seven against Thebes. She was then slain by her son Alcmaeon. In Jean Racine's 1674 retelling of Iphigenia at Aulis, she is an orphan whose real name turns out to be Iphigenia as well; despite her many misdeeds, she rescues Iphigenia the daughter of Agamemnon.
-
Polyxena
In Greek mythology, Polyxena (/pəˈlɪksɪnə/; Greek: Πολυξένη) was the youngest daughter of King Priam of Troy and his queen, Hecuba. She does not appear in Homer, but in several other classical authors, though the details of her story vary considerably. After the fall of Troy, she dies when sacrificed by the Greeks on the tomb of Achilles, to whom she had been betrothed and in whose death she was complicit in many versions.
-
Where Euripides twice failed, in the “Troades” and the “Helena,” it can be given to few to succeed. Helen is best left to her earliest known minstrel, for who can recapture the grace, the tenderness, the melancholy, and the charm of the daughter of Zeus in the “Odyssey” and “Iliad”? The sightless eyes of Homer saw her clearest, and Helen was best understood by the wisdom of his unquestioning simplicity.
In short, Helen is best described in only a few simple, quick poetic sketches, leaving as much to the imagination as possible. Homer did it the right way. Subsequent writers, in trying to expand on her story, did it clumsily and badly.
Since even Euripides, the famous tragedian of ancient Greece, failed to write a memorable dramatization of Helen, we could infer that few could do it.
-
Troilus and Cressida
Troilus and Cressida is a play by William Shakespeare, probably written in 1602. At Troy during the Trojan War, Troilus and Cressida begin a love affair. Cressida is forced to leave Troy to join her father in the Greek camp. Meanwhile, the Greeks endeavour to lessen the pride of Achilles.
-