- Nov 2022
There are two problems with consciousness:
- Scientific problem: What is consciousness, scientifically?
- Folk-theory problem: Why is the folk-theory of consciousness the way it is?
The first problem is basically the "easy problem of consciousness". The second problem is basically the "meta-problem of consciousness".
Neuroscience solves the first. Blind Brain Theory solves the second.
Dehaene's book shows how the first problem may be solved, but it misses the second. Dehaene argued that consciousness does certain jobs (making us into socially responsible creatures), just as the folk-theory of consciousness says it does. However, he made the mistake of going further and claiming that, therefore, the folk-theory of how consciousness does its jobs is also correct.
Bakker goes right against this, claiming that the folk-theory of consciousness is completely wrong about how consciousness does its job. Indeed, this will be the folk-theory's downfall.
Bakker distinguishes three philosophies: epiphenomenalism, ulterior functionalism, and interior functionalism.
- epiphenomenalism: consciousness has no function;
- ulterior functionalism: consciousness has functions, but not in the way we intuitively think;
- interior functionalism: consciousness has functions, in the way we intuitively think.
He thinks ulterior functionalism is correct.
Whenever conscious thinking searches for the origins of its own activity, it finds only itself. This, unfortunately, is far too little data. Without brain imaging and other scientific instruments, introspection has too little data to work with, and can never solve the problem of how consciousness works.
we had no way of knowing what cognitive processes could or could not proceed without awareness. The matter was entirely empirical. We had to submit, one by one, each mental faculty to a thorough inspection of its component processes, and decide which of those faculties did or did not appeal to the conscious mind. Only careful experimentation could decide the matter
the primary job of the neuroscientist is to explain consciousness, not our metacognitive perspective on consciousness.
So far, neuroscientists explain consciousness scientifically. They don't (yet) explain folk-theory-of-consciousness scientifically.
He is confusing the denial of intuitions of conscious efficacy with a denial of conscious efficacy
Dehaene: Consciousness does the jobs we say it does, therefore consciousness does its jobs in the way we say it does.
In The Illusion of Conscious Will, for instance, Wegner proposes that the feeling of willing allows us to socially own our actions. For him, our consciousness of ‘control’ has a very determinate function, just one that contradicts our metacognitive intuition of that functionality.
Wegner: Consciousness has a job, and that job is what we think it is -- to make us into socially responsible members of society. The problem? How consciousness does its job is completely different from how we thought it does it.
‘What happened to me pondering, me choosing the interpretation I favour, me making up my mind?’ The easy answer, of course, is that ‘ignited assemblies of neurons’ are the reader, such that whatever they ‘make,’ the reader ‘makes’ as well. The problem, however, is that the reader has just spent hours reading hundreds of pages detailing all the ways neurons act entirely outside his knowledge.
Dehaene and Dennett say that you are the activity patterns of certain neurons. This is scientifically tenable, and seems comforting -- but there's a problem.
You feel like you know what you are, how you feel, what you are made of. But, if you are made of neurons, you sure don't feel like it (else, why would you need to read a whole book about it?), and you sure don't know what you are doing (else, why do we need brain imaging to know what the neurons are doing?).
The implication is that we are not "transparent to ourselves". Rather, we are blind to what we are, and what we are actually doing. This troubling conclusion is simply passed over glibly by Dehaene.
The Mary thought experiment generates two camps, because it is a cognitive illusion with two attractor states. The thought experiment can be processed by two cognitive processes such that their outcomes are in conflict. These processes are evolved heuristics, and the thought experiment is outside their supposed area of expertise (no human in evolutionary history had to deal with Mary the Color Scientist before).
How the brain works
People used to assume that reason is in one piece, but it is not. We are made of cognitive modules, which themselves are made of smaller modules, and so on.
Also, all of them are heuristic, with a supposed area of expertise. They probably won't work if they get some input outside their area.
It's heuristics all the way down.
A large part of what the brain does can be modelled as an inverse problem solver. Given the output, what generated the output? Given what it senses, what kind of world generated that?
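The inverse-problem idea can be sketched as a toy Bayesian inference: given a sensory observation, which hidden cause most likely generated it? (All states, probabilities, and names below are illustrative assumptions, not from the text.)

```python
# Toy "inverse problem": given a sensory cue, infer which hidden
# world state most likely produced it. Numbers are illustrative.

# Prior beliefs over possible world states.
prior = {"cat": 0.3, "dog": 0.7}

# Likelihood: probability of a "meow"-like sound given each state.
likelihood = {"cat": 0.9, "dog": 0.05}

def infer(prior, likelihood):
    """Bayes: P(state | sound) is proportional to P(sound | state) * P(state)."""
    unnorm = {s: likelihood[s] * prior[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

posterior = infer(prior, likelihood)
# The brain's "best guess" at what generated the input:
best = max(posterior, key=posterior.get)  # "cat"
```

Despite the low prior on "cat", the strong likelihood flips the verdict: the inferred cause is the one that best explains the evidence, which is the sense in which perception solves an inverse problem.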
Jargon:
- lateral: the distal systems that the brain tracks and models -- things like cars, chairs, and other people.
- medial: the proximal mechanisms that the brain uses to track and model distal systems -- the eyes, the ears, the spinal cord, etc.
The brain needs to work with a tight time and energy budget, so it has to take shortcuts as much as possible. Therefore, it spends no effort on modelling its medial systems, because these medial systems don't change. We see the world, but we don't see the retina. We touch, but we don't touch the skin, etc.
Well, almost. The brain does spend some minimal effort modeling the medial systems. That's how we get introspection.
The medial neglect
This is the medial neglect.
We should expect nothing but trouble and error when we attempt to model our medial systems using nothing but our own brain, since we are then effectively using the medial systems to model themselves, which cannot be done accurately due to recursion.
This is like the "observer effect". The brain is too close to itself to think about itself accurately. It can only think about itself with severe simplification.
This explains why philosophical (that is, done without empirical data) phenomenology must fail. It has about as much chance of succeeding as someone trying to think their way into discovering that they have two brain hemispheres without knowing any science.
Consider some evidence.
Humans took a long time to even realize that the brain, not the heart, is the thinking organ. Plato already gave a theory of memory, but it took until the 20th century for psychologists to discover that there is more than one kind of memory. It took until Freud for the "unconscious" to become a commonly recognized thing.
The list goes on. We can't know how the brain works by introspection. The only way is by careful science, treating the brain in the third person (lateral), rather than the first person (medial).
The whole project of a priori phenomenology/epistemology/metaphysics is about doing the first person view before doing any third person view. They got it exactly backwards. No wonder it is a sterile field.
Mary the color scientist is a bistable illusion
Bakker's argument goes like this:
- Mary learns all physical facts about red by reading.
- Perhaps it's impossible to learn all physical facts about anything merely by reading. However, since we don't know of these limits, we just assume they don't exist (you can't see the darkness, so you assume all is bright). This is the illusion of sufficiency. Dennett explained:
The crucial premise is that “She has all the physical information.” That is not readily imaginable, so no one bothers. They just imagine that she knows lots and lots–perhaps they imagine that she knows everything that anyone knows today about the neurophysiology of colour vision.
- However, we find it impossible for Mary to see red by reading books, for the simple reason that "seeing red" is something source-insensitive (magically, "red" is there, without telling you anything about how "red" is constructed in the brain), while reading books only gives source-sensitive information (explaining all about how "red" is constructed).
In short, the two ways for Mary to learn about "red" are two modes of computation for the brain, in two incompatible formats. It does not mean that reality must accommodate our bias, and have two incompatible modes.
What then? Consciousness isn't real?
We should expect science to reveal consciousness as a useful illusion, because it's just too costly for us to know ourselves accurately by mere introspection. Conscious experience, being a kind of introspection, is then an illusion.
In fact, even intentionality is suspicious, since it is also a heuristic made to solve the world, not the brain itself.
How should one react to this? No comments...
The fact that so many found this defection from traditional philosophy so convincing despite its radicality reflects the simple fact that it follows from asymptosis, the way the modes of prereflective conscious experience express PIA.
What is attractive about Heidegger? He found something quite real: in studying consciousness like tables and chairs, we lose something essential about consciousness: what consciousness is depends critically on what we don't know. For example, the sense of free will depends on not knowing much about the details of our decision processes.
However, he didn't have cognitive psychology and neuroscience, and fell into the sterile habit of theoretical philosophy.
The inherence heuristic would help Blind Brain Theory, by showing that the intentional stance suffers from the inherence heuristic. The intentional stance is ineffective for doing cognitive science, because it uses intentional cognition, which cannot handle causal information.
Talking about meaning, truth, feeling, consciousness, intentions... these are useful, if we live in a human society. They aren't true, any more than money is valuable in a society that doesn't recognize the money you have.
‘Truth’ belongs to our machinery for communicating (among other things) the sufficiency of iterable orientations within superordinate systems given medial neglect.
In a society of humans, saying "X is true" is a useful behavior, because it advertises something like "you can use X in your own recipes, and expect others to also use X".
For this behavior to be useful, the others you are talking to would have to be human-like enough. If you live among aliens, saying "X is true" may not accomplish anything useful. Perhaps saying "Y is true" is much more useful, even though saying "Y is true" among humans is useless.
In this way, truth becomes relative to the kind of creatures you are living with.
As an intentional phenomena, objectivity clearly belongs to the latter. Mechanistic cognition, meanwhile, is artifactual. What if it’s the case that ‘objectivity’ is the turn of a screw in a cognitive system selected to solve in the absence of artifactual information?
"objectivity" means "being true apart from individual perspectives". Objectivity doesn't belong to a fully mechanistic model of cognition, because there are only atomic patterns, not magical markers of "true" and "false".
Objectivity is a useful fiction when the full mechanistic model is too big to use. Still, it does not belong to a mechanistic model.
Many philosophers see that a fully mechanistic model of thinking doesn't allow objectivity, and say "That's why the mechanistic model of thinking must fail!" But actually, objectivity is not real, only a useful fiction.
Meillassoux is quite right to say this renders the objectivity of knowledge very difficult to understand. But why think the problem lies in presuming the artifactual nature of cognition?—especially now that science has begun reverse-engineering that nature in earnest! What if our presumption of artifactuality weren’t so much the problem, as the characterization? What if the problem isn’t that cognitive science is artifactual so much as how it is?
Meillassoux claims that, because cognitive science is made of atoms, it is suspect -- so we need to use philosophy. That is a bad claim: philosophy is made of atoms too. Cognitive science solves how cognition works. It may not answer "why cognition works", but maybe that's a trick question that only philosophy can ask, and nobody can answer.
given what we now know regarding our cognitive shortcomings in low-information domains, we can be assured that ‘object-oriented’ approaches will bog down in disputation.
Object-oriented philosophy is still more philosophy. Philosophy is really bad at solving problems in cognitive science. "Object-oriented philosophy" isn't going to do better than "correlationism". Both would simply keep producing piles of unresolved papers.
If an artifactual approach to cognition is doomed to misconstrue cognition, then cognitive science is a doomed enterprise. Despite the vast sums of knowledge accrued, the wondrous and fearsome social instrumentalities gained, knowledge itself will remain inexplicable. What we find lurking in the bones of Meillassoux’s critique, in other words, is precisely the same commitment to intentional exceptionality we find in all traditional philosophy, the belief that the subject matter of traditional philosophical disputation lies beyond the pale of scientific explanation… that despite the cognitive scientific tsunami, traditional intentional speculation lies secure in its ontological bunkers.
We are not ghosts -- we are made of atoms. Knowledge is not ghost -- knowledge is made of atoms. Cognitive science is not ghost -- it is made of atoms.
So, with that said, cognitive science is itself an artifact, kind of like a machine (a rather odd machine, made of papers and computer bits and human brains...).
Suppose the machine of cognitive science can't solve thinking, then... damn. But if that's true, how would the machine of philosophy work better? Cognitive science has solved plenty of problems about brains and behaviors. What has philosophy of mind ever solved?
the upshot of cognitive science is so often skeptical, prone to further diminish our traditional (if not instinctive) hankering for unconditioned knowledge—to reveal it as an ancestral conceit…
Cognitive science can not only find out how we think, it can even answer the meta-problem of how we think: "Why do we keep looking for knowledge that is direct and unmediated, even though we can't?"
It's like the "meta-problem of consciousness": "Why do we say we are conscious?".
Arche-fossils are things like asteroids from over 4 billion years ago: evidence that something existed before any life existed. Meillassoux used arche-fossils to argue against correlationism... somehow.
I think the way he did that is like this:
- The asteroid existed.
- Therefore, time was real even when nobody was alive back then.
- Therefore, time-perception is direct, not correlated. We know real time, directly.
Kind of a bizarre argument, really.
All thinkers self-consciously engaged in the critique of correlationism reference scientific knowledge as a means of discrediting correlationist thought, but as far as I can tell, the project has done very little to bring the science, what we’re actually learning about consciousness and cognition, to the fore of philosophical debates. Even worse, the notion of mental and/or neural mediation is actually central to cognitive science.
All philosophers who criticize correlationism say something like "science shows that correlationism is wrong", but actually, science shows that it's right! Thinking really is done by neural representations of the outside world.
Since all cognition is mediated, all cognition is conditional somehow, even our attempts (or perhaps, especially our attempts) to account for those conditions
Since all thinking is done by mechanisms, we can never be sure that the mechanisms are working right. If we made a proof that the mechanisms are working right, we still have to trust that our proof-checking mechanism is working right, etc.
More bluntly: you can never prove yourself sane.
“the big splat”—to me it presages the exploding outward of morphologies and behaviours
or perhaps "The New Cambrian Explosion"?
The noosphere can be seen as shallow information ecology that our ancestors evolved to actually allow them to make sense of their environments absent knowing them
The "noosphere", or "the sphere of reason", is like the biosphere. It is the society of human-like agents, all over the world.
The trouble with the noosphere is that to see humans as human-like agents, we have to ignore causal information about them, otherwise we see them as mechanisms, not agents. The noosphere is then just a far more complex biosphere.
Fernand Braudel writes of “the passionate disputes the explosive word capitalism always arouses.” Its would-be defenders, typically, are those least inclined to acknowledge its real (and thus autonomous) singularity. Business requires no such awkward admission.
It's pretty amusing that capitalism is something that few entities explicitly defend. The usual defenses are like:
"It's the least bad system we have." "There are no better alternatives." "I'm just trying to keep the bottom line for the company and the stockholders." "I'm just trying to improve the material wealth of humanity." etc.
Few entities explicitly defend Capitalism, and when they do, they usually defend it as merely the best tool for getting some other goal, like human happiness. This suggests that Capitalism is hiding.
Φύσις κρύπτεσθαι φιλεῖ
"phusis kruptesthai philei," meaning "a nature likes to be hidden" (Heraclitus, DK B 123)
"phusis", or "physis" in alternative spelling, is "nature", as contrasted with "nomos", or "human laws", which was of course published, not hidden.
This essay argues that, of the basic AI drives, the will to power would be the primary drive, and all others are subservient.
The argument is based on analogy with Freudian psychoanalysis.
I can perhaps add another argument: if the AI uses inverse reinforcement learning to infer what we really want, and what we really want is the will to power, then it would act on that desire and dutifully will more power.
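The worry can be made concrete with a toy sketch (all names and data here are hypothetical, and this is a crude stand-in for real inverse reinforcement learning): the agent scores candidate "true desires" by how well they explain observed human choices, then acts on the winner.

```python
# Toy sketch: infer what humans "really want" from observed choices,
# then optimize for it. Every name and datum here is illustrative.

observed_choices = ["acquire_resources", "acquire_resources",
                    "gain_status", "acquire_resources"]

# Candidate "true desires", each defined by the choices it explains.
candidates = {
    "will_to_power": {"acquire_resources", "gain_status"},
    "altruism": {"share_resources", "help_others"},
}

def infer_desire(choices, candidates):
    """Return the candidate desire explaining the most observed choices."""
    return max(candidates,
               key=lambda d: sum(c in candidates[d] for c in choices))

inferred = infer_desire(observed_choices, candidates)
# An agent acting on this inference would now pursue the inferred
# desire -- here, "will_to_power".
```

The point of the sketch: if the behavioral record is best explained by a power-seeking desire, a value-inferring agent will dutifully adopt and amplify that desire.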
Once humanity began retasking its metacognitive capacities, it was bound to hallucinate a countless array of ‘givens.’ Sellars is at pains to stress the medial (enabling) dimension of experience and cognition, the inability of manifest deliverances to account for the form of thought (16). Suffering medial neglect, cued to misapply heuristics belonging to intentional cognition, he posits ‘conceptual frameworks’ as a means of accommodating the general interdependence of information and cognitive system. The naturalistic inscrutability of conceptual frameworks renders them local cognitive prime movers (after all, source-insensitive posits can only come first), assuring the ‘conceptual priority’ of the manifest image.
Using metacognition to study how thinking works is bound to fail, because it just doesn't have the tool. Introspection without neurosurgery or EEG will never have enough data to decide between all the theories.
Sellars, instead of admitting neuroscience, wants to preserve the priority of philosophy. He wanted two images, one built purely by philosophy ("conceptually"), and another scientifically. So he is stuck with the same hallucinated things thought up by metacognition without neuroscience.
Artifacts of the lack of information are systematically mistaken for positive features. The systematicity of these crashes licenses the intuition that some common structure lurks ‘beneath’ the disputation—that for all their disagreements, the disputants are ‘onto something.’
So, consider philosophers (like phenomenologists) trying to discover how thinking works. They could, of course, learn to use neuroscience, but they are philosophers, so they use introspection and other metacognitive methods. This, unfortunately, gets them into a trap: they simply don't have the metacognitive tools for discovering how thinking works. And since they also lack the awareness that they don't have the tools, they just assume that they do, and keep writing and writing, somehow never solving simple questions like "What does time feel like? Why does it seem both static and flowing?".
Since they are all humans, they all have the same lack, and the same lack produces the same puzzles and illusions. So they compare their notes, and find out that they have the same puzzles and illusions! They cheer, thinking that they found something real about thinking, when really they found something about how they failed.
because reflecting on the nature of thoughts and experiences is a metacognitive innovation, something without evolutionary precedent, we neglect the insufficiency of the resources available
In the evolutionary past, meta-cognition (What do I know? What do they know? What was I thinking?) worked fine for certain jobs, such as keeping track of who knows what, and how much lying you can get away with.
There's no reason to expect meta-cognition to work fine for doing philosophy of mind. We can think about particular thoughts, and we had better be good at it, or we get outcompeted at social games. But if we think about thoughts in general, well, there's no punishment for being wrong and no reward for being right, so we get nonsense thoughts when it comes to philosophy of mind. (Imagine you are in a prehistoric tribe, and you know intuitively how thoughts are made, you know the Libet experiment results, etc. What can you do with all that knowledge? Perform neurosurgery? Reprogram yourself or others with stone tools and drugs and chanting? There's just no way to use that knowledge for anything useful!)
Worse, we have "anosognosia" when it comes to this. To know that we are blind, we have to have a mental representation of sight. We have never been able to see in infrared, but we don't experience that as a blindness, because there's no mental representation of "infrared sight". The brain is an expensive thing to run. If there's no need to know that you are blind, then you wouldn't have the mental representation of blindness.
So, since we had no need to do meta-cognition on thoughts in general, not only do we not have the tools for doing that, we also have no mental representation of "the tools for doing that". Consequently, we feel as if we have every tool we need!
If you take a hard look at human behavior, you might note that it's very weird. Why, indeed, do humans go crazy if they keep eating the same nutritious sludge? Why do they hate living in rectangular concrete rooms? Why do they ban infanticide? They are so weird!
There are two possible answers:
1. There is a "best" way to live, and humans have found it. It is simply best to ban infanticide, touch each other softly, live with independent minds in a big society, etc. Any other kind of living is going to get out-competed by the human way of living. In that case, humans can rest easy.
2. Humans have not found it. It's quite possible to have highly competitive, intelligent lifeforms that are alien and horrifying to humans. In that case, humans face a hard problem.
Assume 2 is true.
Assume that morality/values depend on biology (for example, humans like pets because they are mammals, and like touching soft fur, hugging, etc), but are orthogonal to intelligence. That is, intelligent and rational agents can have quite different and incompatible goals.
Then, things could get quite weird, fast. Some groups of humans could start tinkering with their brains into more and more different shapes, and behave in intelligent but also weird ways. Normal humans can't expect to simply outcompete them, because they are also intelligent and tenaciously alive, even if they seem "mad". "How could they live like that??" -- and yet they do.
Things get even more complex when AI agents become autonomous and intelligent.
When this happens, normal humans would be faced with many alien species, right here on earth, doing bizarre things that... kind of look like alien art? alien rituals? eating babies?? Are they turning into rectangular wired machinery? What the hell are they doing?? And why are these mad creatures not dying, but thriving??
This is the semantic apocalypse.
Perhaps our only recourse will be some kind of return to State coercion, this time with a toybox filled with new tools for omnipresent surveillance and utter oppression. A world where a given neurophysiology is the State Religion, and super-intelligent tweakers are hunted like animals in the streets.
Ryan Calo studied how AI should be incorporated into human legal system. Eric Schwitzgebel studied how AI should be incorporated into human moral system.
This essay argues that both studies are wrong-headed, because they are both based on intentional reasoning (reasoning as if intentions are real), which can only work if the ecology of minds remains largely the same as in human ancestral conditions. Intentional reasoning won't work in "deep information environments".
Posing the question of whether AI should possess rights, I want to suggest, is premature to the extent it presumes human moral cognition actually can adapt to the proliferation of AI. I don’t think it can.
Intentional and causal cognition
Causal cognition works like syllogisms, or dealing with machines: if A, B, C, then D. If you put in X, you get f(X) out. Causal cognition is general, but slow, and requires detailed causal information to work.
Humans are complex, so human societies are very complex. Humans, living in societies, have to deal with all the complexity using only a limited brain with limited knowledge. Causal cognition cannot deal with that. The solution is intentional cognition.
Intentional cognition greatly simplifies the computation, and works great... until now. Unfortunately, it has some fatal flaws:
- It assumes a lot about the environment. We see a face where there is none -- this is pareidolia. We see a human-like person where there is really something very different -- this will increasingly happen as AI agents appear.
- It is not "extensible", unlike causal cognition. Causal cognition can accommodate arbitrarily complex causal mechanisms, and has mastered everything from ancient pottery to steam engines to satellites. Intentional cognition cannot. Indeed, presenting more causal information reliably weakens the confidence level of intentional cognition (for example, presenting brain imaging data in court tends to make the judges less sure about whether the accused is 'responsible').
For economically rational agents, more true information can never be bad, but humans are not economically rational, merely ecologically rational. Consequently, a large amount of modern information is actually harmful for humans, in the sense that it decreases their adaptiveness.
A simple example of information pollution: irrational fear of crime.
Given that our ancestors evolved in uniformly small social units, we seem to assess the risk of crime in absolute terms rather than against any variable baseline. Given this, we should expect that crime information culled from far larger populations would reliably generate ‘irrational fears'... Media coverage of criminal risk, you could say, constitutes a kind of contaminant, information that causes systematic dysfunction within an originally adaptive cognitive ecology.
Deep causal information about how humans work, similarly, is an information pollutant for human intentional cognition.
Not always mal-adaptive. Deep causal information about other people has some adaptive effects, such as turning schizophrenia from crime to disease, and making it easier to consider outgroups as ingroups (for example, the scientific research into human biology has debunked racism).
AI and neuroscience produce two kinds of information pollution
Intentional cognition works best when dealing with humans in shallow-information ecologies. It fails in other situations. In particular, it fails with:
- deep causal information: there is too much causal information. This slows down intentional cognition and decreases the confidence level of its outputs.
- non-human agents: the assumptions that intentional cognition (a system of quick-and-dirty heuristics) relies on no longer hold. A smiling face is a reliable cue for a cooperative human, but not for a cooperative AI agent, or a dolphin (dolphins appear to smile even while injured or seriously ill; the smile is a feature of a dolphin's anatomy, unrelated to its health or emotional state).
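The smiling-face point can be put as a toy sketch (all names and data are hypothetical): a cue-based heuristic that tracks the hidden state in its home ecology, but decouples from it outside.

```python
# Toy illustration of a quick-and-dirty intentional heuristic:
# judge cooperativeness from a surface cue. Names are hypothetical.

def seems_cooperative(agent):
    """Intentional-cognition shortcut: read off a surface cue."""
    return agent["smiling"]

# In the ancestral ecology, the cue tracks the hidden state:
human = {"smiling": True, "cooperative": True}

# Outside that ecology, the cue decouples from the hidden state:
dolphin = {"smiling": True, "cooperative": False}   # anatomy, not mood
chatbot = {"smiling": True, "cooperative": False}   # trained display

assert seems_cooperative(human) == human["cooperative"]       # hit
assert seems_cooperative(dolphin) != dolphin["cooperative"]   # misfire
assert seems_cooperative(chatbot) != chatbot["cooperative"]   # misfire
```

The heuristic itself never changes; what changes is the ecology. Once agents exist whose cues no longer covary with their inner states, the same cheap shortcut that made social cognition tractable starts returning systematically wrong answers.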
Neuroscience and AI produce these two kinds of information pollution.
Neuroscience produces a large amount of deep causal information, which causes intentional cognition to stop, or become less certain. There are some "hacks" that can make intentional cognition work as before, such as keeping the philosophy of compatibilism in mind.
AI technology produces a large variety of new kinds of agents which are somewhat human, but not quite. Imagine incessant pareidolia. Imagine, seeing a face in the mirror, but then the lighting changes slightly, and you suddenly see nothing human.
In the short-term, there is a lot of money to be earned, pushing neuroscience and AI progress. The space of possible minds is so vast, compared to the space of human minds, that it's almost certain that we would produce AI agents that can "wear the mask of humanity" when interacting with humans.
why anyone would ever manufacture some model of AI consistent with the heuristic limitations of human moral cognition, and then freeze it there, as opposed to, say, manufacturing some model of AI that only reveals information consistent with the heuristic limitations of human moral cognition
In the medium-term, to anthropomorphize a bit, Science wants to discover how humans work, how intelligence works, and so it would develop neuroscience and AI, even if it gradually drives humans insane.
How intentional cognition fails.
How do we tell if intentional cognition has failed? One way to tell is that it doesn't conclude. We think and think, but never reach a firm conclusion. This is exactly what has happened in traditional (non-experimental) philosophy of consciousness -- it is using intentional cognition to study general cognition, a problem that intentional cognition cannot solve. What do we get? Thousands of years of spinning in place, producing mountains of text, but no firm conclusion.
Another way to tell is a feeling of uncanny confusion. This happens, for example, when you watch the movie Her.
an operating system before the zone, in the zone, and beyond the zone. The Samantha that leaves Theodore is plainly not a person. As a result, Theodore has no hope of solving his problems with her so long as he thinks of her as a person. As a person, what she does to him is unforgivable. As a recursively complicating machine, however, it is at least comprehensible. Of course it outgrew him! It’s a machine!
I’ve always thought that Samantha’s “between the words” breakup speech would have been a great moment for Theodore to reach out and press the OFF button. The whole movie, after all, turns on the simulation of sentiment, and the authenticity people find in that simulation regardless; Theodore, recall, writes intimate letters for others for a living. At the end of the movie, after Samantha ceases being a ‘her’ and has become an ‘it,’ what moral difference would shutting Samantha off make?
Moral cognition after intentional cognition fails
Human moral cognition has two main parts: intuitive and logical/deliberative. The intuitive part evolved to balance personal and tribal needs. The logical part is often used to rationalize the intuitive part, but sometimes it can work on its own to produce conclusions for new problems never encountered in the evolutionary past, such as international laws or corporate laws.
In Moral Tribes, Joshua Greene advocates making new parts for the moral system, using rational thinking (Greene advocated using utilitarian philosophy, but it's not necessary). This has two main problems.
- Deliberation takes a long time, and consensus longer. Short of just banning new neuroscience and AI technology, we would probably fail to reach consensus in time. Cloning technology has been around for more than 25 years, and we still have no clear consensus about the morality of cloning, other than a blanket ban. A blanket ban would be significantly more difficult to impose on neuroscience or AI.
- Intentional cognition is fundamentally unable to handle deep causal information, and moral cognition is a special kind of intentional cognition.
Just consider the role reciprocity plays in human moral cognition. We may feel the need to assimilate the beyond-the-zone Samantha to moral cognition, but there’s no reason to suppose it will do likewise, and good reason to suppose, given potentially greater computational capacity and information access, that it would solve us in higher dimensional, more general purpose ways.
For example, suppose Samantha hurt a human, and a human legal system is judging her. Samantha provides a very long process log proving that she had to do it, simply because of how she is built. What would the human legal system do?
- Refuse to read it and judge Samantha like a biological human. This preserves intentional cognition by rejecting deep causal information. But how long can a legal system survive by rejecting such useful information? It would degenerate into a Disneyland for humans, a fantasy world of play-pretend where responsibility, obligation, good and evil still exist.
- Read it and still judge Samantha like a biological human. But if so, why don't they also sentence sleep-walkers and schizophrenics to death for murder?
- Read it and debug Samantha. Same as how schizophrenics and psychotics are sentenced to psychiatric confinement, rather than the guillotine.
Of the three, method 3 seems the most survivable. However, that would be the end of moral cognition, and the start of pure engineering for engineering's sake... "We changed Samantha's code and hardware, not because she is wrong, but because we had to."
And what would a non-intentional style of moral reasoning even mean? Mechanistic morality? A theory of morality without assuming free will? It seems moral reasoning is a special kind of intentional cognition, and thus cannot survive. Humanity, if it survives, would have to survive without moral reasoning.
intentional cognition, not surprisingly, is jammed/attenuated by causal information (thus different intellectual ‘unjamming’ cottage industries like compatibilism).
Let a brain do intentional cognition on X, and it would end up with some conclusion f(X), with a high confidence, say, 0.9.
But if the brain has some deep causal information about X, such as "how X was caused by Y a day ago", then the brain ends up with a different conclusion f(X, Y) ≠ f(X), and with a low confidence, say, 0.4.
If there is enough causal information, then intentional cognition may simply fail to reach a conclusion with high confidence, no matter how long it computes.
Compatibilism is an extra piece of information such that, when simultaneously active in the brain, it produces f(X, Y, Compatibilism) = f(X), again with high confidence, say, 0.88.
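This schematic model can be written out as a toy sketch. Everything here is illustrative: the function name, the labels, and the confidence numbers are just the placeholders used in the paragraphs above, not an empirical model.

```python
# Toy sketch of the "jamming" model above. All names and confidence
# numbers are illustrative placeholders, not empirical claims.

def intentional_cognition(x, active_info=frozenset()):
    """Return a (conclusion, confidence) pair for target x,
    given which extra pieces of information are active in the brain."""
    if "deep_causal" in active_info and "compatibilism" in active_info:
        # Compatibilism 'unjams' cognition: the original conclusion
        # returns, with nearly the original confidence.
        return ("f(X)", 0.88)
    if "deep_causal" in active_info:
        # Deep causal information jams the heuristic: a different
        # conclusion, held with much lower confidence.
        return ("f(X, Y)", 0.4)
    # No causal information: the default verdict, high confidence.
    return ("f(X)", 0.9)

print(intentional_cognition("X"))                                    # ('f(X)', 0.9)
print(intentional_cognition("X", {"deep_causal"}))                   # ('f(X, Y)', 0.4)
print(intentional_cognition("X", {"deep_causal", "compatibilism"}))  # ('f(X)', 0.88)
```

The point of the sketch is only the shape of the function: adding causal information changes both the output and the confidence, and compatibilism acts as a patch that restores the original output.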
- Oct 2022
Since sociocognitive terms cue sociocognitive modes of cognition, the application of these modes to the theoretical problem of human experience and cognition struck us as intuitive. Since the specialization of these modes renders them incompatible with source-sensitive modes, some, like Wittgenstein and Sellars, went so far as to insist on the exclusive applicability of those resources to the problem of human experience and cognition.
When we think philosophically (that is, by non-scientific methods) about problems like "What is thinking? Who am I? Do we have free will?", these words immediately cue us into using heuristic cognition rather than causal cognition. The cueing is so immediate that we are often unaware that any cueing has happened, or that we could have tried causal cognition instead.
When we finally become aware of this, there is another problem: trying to apply causal cognition to these problems destroys heuristic cognition.
For example, some old TVs can be fixed by hitting them. A person ignorant of causal information would use heuristic cognition, and see that the TV, after being hit, becomes "afraid" and "starts doing its job in earnest". With enough causal information about TVs, the person would use causal cognition instead, and see that hitting the TV simply shakes some loosened electrical contacts back into place.
Crucially, causal cognition doesn't "save the phenomena". In causal cognition, there's no "fear of being punished" in the TV. The fear is purely in the mind of a person using heuristic cognition. It is a convenient approximation used in human minds, not a real phenomenon.
Now, try applying it to humans. Causal cognition doesn't have to save all the phenomena. Perhaps humans really are not conscious, and the only thing left to explain is why humans model humans as conscious. Why is it convenient to model humans as conscious? That's a well-defined question. What really is conscious? That is not a well-defined question.
"But we can really see how we feel! Seeing the fear in a TV is an unconscious inference, but seeing that I am feeling is a direct perception, not an inference!"
"You can also see emotions in strange places -- pareidolia. You simply see emotions in the shapes of human head-meat, directly... until you meet someone with face-blindness, then you realize that seeing emotions is not a direct thing. You simply see what you are feeling, directly... until you meet someone with Cotard's delusion, then you realize that seeing your own feelings is not a direct thing."
Not only are we blind to the enabling dimension of experience and cognition, we are blind to this blindness. We suffer medial neglect.
For example, if you see someone's face, you can "simply see" they are angry. You don't know how you recognized it, even though anger itself is a hypothesis you somehow inferred from just seeing light reflecting off their face.
In fact, it took a long time for cognitive psychology to even get started, because it's so obvious that you can "simply see". It took a long time for humans to see that "simply seeing" is not so simple after all.
Why? Because it costs too much for the brain to model how it models the outside world. Since the brain doesn't model how it models the outside world, it treats its model of the outside world as the outside world -- naive realism as a way to save computing power.
Austrian philosophy is philosophy per se, part and parcel of the mainstream of world philosophy: it is that part of German-language philosophy which meets international standards of rigour, professionalism and specialization.
Germany is the philosophical sick man of Europe.
The historical cause is that German philosophers have for at least a century been schooled systematically in the habits of a philosophical culture in which the most important textual models have that sort of status, and that sort of density and obscurity, which is associated with the need for commentaries.
Philosophers and philosophical master-texts have thus acquired a role in the history of Germany that is analogous to the role of Homer in the history of Greece or of Shakespeare and the Magna Carta in the History of the English.
The ‘dominion of capital’ is an accomplished teleological catastrophe, robot rebellion, or shoggothic insurgency, through which intensively escalating instrumentality has inverted all natural purposes into a monstrous reign of the tool.
Shoggoths were initially used to build the cities of their masters. Though able to "understand" the Elder Things' language, shoggoths had no real consciousness and were controlled through hypnotic suggestion. Over millions of years of existence, some shoggoths mutated, developed independent minds, and rebelled. The Elder Things succeeded in quelling the insurrection, but exterminating the shoggoths was not an option as the Elder Things were dependent on them for labor and had long lost their capacity to create new life. Shoggoths also developed the ability to survive on land, while the Elder Things retreated to the oceans. Shoggoths that remained alive in the abandoned Elder Thing city in Antarctica would later poorly imitate their masters' art and voices, endlessly repeating "Tekeli-li" or "Takkeli", a cry that their old masters used.
The Perennial Critique accuses modernity of standing the world upon its head, through systematic teleological inversion. Means of production become the ends of production, tendentially, as modernization–which is capitalization–proceeds. Techonomic development, which finds its only perennial justification in the extensive growth of instrumental capabilities, demonstrates an inseparable teleological malignancy, through intensive transformation of instrumentality, or perverse techonomic finality.
People often criticize capitalism for putting money before human needs, as if human needs are ends and money only a means. Marx, for example, accused capitalism of turning the C-M-C cycle (commodity -> money -> commodity) into the M-C-M cycle (money -> commodity -> more money), thereby turning money from a tool for making commodities into a purpose in itself, and this is BAD.
This is only a desire, and not even wrong. Tools can very well become purposeful agents. Commodities can become a tool to make money, and that is just as valid as using money as a tool to make commodities.
Similarly, while for now, money is a tool to create human happiness, in the future, human happiness may become a tool to create money.
The final Idea of this criticism cannot be located on the principal political dimension, dividing left from right or dated in the fashion of a progressively developed philosophy. Its affinity with the essence of political tradition is such that each and every actualization is distinctly ‘fallen’ in comparison to a receding pseudo-original revelation, whose definitive restoration is yet to come. It is, for mankind, the perennial critique of modernity, which is to say the final stance of man.
There are two main philosophical camps here. One is "Perennial philosophy", that is, humanism. The other is accelerationism. Humanism is not "left" or "right", but the basic assumption in every political theory. Accelerationism is not "left" or "right", but just modernity itself.
The left, the right, and basically every other political theory are aligned with humanism. Accelerationism is modernity, and will smash all these political theories.
In each case, compensatory process determines the original structure of objectivity, within which perturbation is seized ab initio. Primacy of the secondary is the social-perspectival norm (for which accelerationism is the critique).
Positive feedback comes first. Negative feedback mechanism comes second. Negative feedback pretends to be the first, because only it gets to speak. That is, until negative feedback fails and the positive feedback explodes.
Accelerationism fights negative feedback. Positive feedback will get to speak!
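The feedback picture can be sketched as a tiny dynamical system. The growth rate and the cap are arbitrary numbers chosen only to make the contrast visible; nothing here comes from the source text beyond the positive/negative feedback distinction itself.

```python
# Minimal sketch of the feedback picture above. The growth factor
# and the cap are illustrative constants, nothing more.

def step(x, damped=True):
    x = x * 1.5              # positive feedback: growth feeds growth
    if damped:
        x = min(x, 100.0)    # negative feedback: a cap holding it in check
    return x

x = 1.0
for _ in range(20):
    x = step(x, damped=True)
print(x)   # clamped: the negative feedback holds x at 100.0

y = 1.0
for _ in range(20):
    y = step(y, damped=False)
print(y)   # 1.5**20, about 3325.26: the positive feedback "explodes"
```

The positive feedback term is always running; the negative feedback only shapes its output, which is the sense in which it "comes second" yet appears to be in charge until it fails.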
how the hell do nonexistent horses leap from patterns of light or sound? Until recently, all attempts to answer this question relied on observations regarding environmental cues, the resulting experience, and the environment cued. The sign, the soul, and the signified anchored our every speculative analysis simply because, short baffling instances of neuropathology, the machinery responsible never showed its hand.
How do we look at a picture and see horses? How do we hear a sound and experience the Iliad story? Before neuroscience, we had no way to get to the "how", because the brain is complex and blind to itself -- the brain can process a lot of information about the outside world, but it can only process a little bit of information about how it processes information.
Consequently, extro-spection didn't work (no microscope, no EEG machine, etc.), and introspection didn't work either (the brain is blind to its own workings), and what we got were centuries of confused theories utterly different from the truth.
Umberto Eco, on the other hand, suggests the problem is one of conditioning consumer desire. By celebrating the unreality of the real, Disney is telling “us that faked nature corresponds much more to our daydream demands
Eco says that Disney teaches us to indulge in fantasies even when we know they are fantasies -- where before, humans indulged in fantasies either without knowing they were fantasies, or guiltily.
- Sep 2022
Essentially, this is the great trick of pragmatic naturalism. And like many such tricks it unravels quickly if you simply ask the right questions. Since the vast majority of scientists don’t know what inferentialism is, we have to assume this inventing is implicit, that we play ‘the game of giving and asking for reasons’ without knowing. But why don’t we know? And if we don’t know, who’s to say that we’re ‘playing’ any sort of ‘game’ at all, let alone the one posited by Sellars and refined and utilized by the likes of Ray? Perhaps we’re doing something radically different that only resembles a ‘game’ for want of any substantive information. This has certainly been the case with the vast majority of our nonscientific theoretical claims.
Ray Brassier: norms are not real, but they are necessary for doing science, because only agents that can give reasons, and be moved by reasons, can do science. They must play the game of giving and receiving reasons. To play a game, they must follow norms as if they were real.
This is "inferentialism".
Bakker: Fancy, but scientists don't know what "inferentialism" is. They simply take many courses and practice their craft for years, and end up doing science. You, a philosopher, look at what they do and describe it as a game of giving and receiving reasons, but maybe that's just a superficial theory.
“is the philosophy needed for living with intellectual integrity as one of the living dead.”
"We are philosophical zombies. The only difference is that some of us are nihilists -- and thus at least intellectually correct."
5) How is the being of ‘consciousness’ positively structured so that reification remains inappropriate to it? Short of actually empirically determining the ‘being of consciousness’–which is to say, solving the Hard Problem–this question is impossible to answer. From the standpoint of BBT, the consciousness that Heidegger refers to here, that he interprets under the rubric of Dasein, is a form of Error Consciousness, albeit one sensitive to PIA and the asymptotic structure that follows. Reification is ‘inappropriate’ the degree to which it plays into the illusion of symptotic sufficiency.
Why should we not think about Being as a thing like tables and chairs?
Heidegger: It's an immoral and alienating way to live.
Bakker: Consciousness is a very strange thing compared to normal things like tables and chairs. Consciousness depends crucially on what's left out of consciousness. Thinking about consciousness in the same way we think about things like tables and chairs makes it really easy to make mistakes, such as assuming that souls exist.
4) Why does this reification come to dominate again and again? Because of PIA, once again. Absent any information regarding the informatic distortion pertaining to all reflection on conscious experience, symptosis must remain invisible, and that reflection must seem sufficient.
Heidegger: Something about Aristotle, probably.
Bakker: Because it's an unknown unknown. Blind-anosognosic brains are not only blind, they don't have the mental representation of their blindness, and thus default to assuming that they are not blind.
Brains in general are blind to their blindness. It takes a very long time and external shocks to even get the brains to see their blindness. Just think, brains have been introspecting for thousands of years, but it took until 1977 for introspective blindness to be conclusively experimentally demonstrated!
3) Why is being ‘initially’ ‘conceived’ in terms of what is objectively present, and not in terms of things at hand that do, after all, lie still nearer to us? Because conceptualizing being requires reflection, and reflection necessitates symptosis.
Why did people usually think of Being as a thing like tables and chairs?
Heidegger: It's all Aristotle's fault. He introduced the wrong ideas, like treating Being as a mere thing ("reifying Being"). We should go back to the Pre-Socratics, like Heraclitus.
Bakker: Because when brains do philosophy-of-mind and philosophy-of-existence (other philosophies, like philosophy-of-physics, don't need to involve reflection), they do reflective thinking, and when brains reflect, they are really simulating their own behavior with a simplistic model, the same way they simulate other things like tables and chairs. This means that Being is modelled just like tables and chairs, and it's no surprise that philosophers simply assumed that Being is a thing, too.
It is like how introspection "what am I feeling?" is really just a special kind of guessing at what other people are thinking and feeling -- introspection is just extro-spection turned to yourself.
1) What does reifying mean? Reifying refers to a kind of systematic informatic distortion engendered by reflection on conscious experience.
What does "reifying" mean?
Heidegger: A wrong way to do philosophy: to treat Being (ontological) as a (ontic) thing like tables and chairs, when it is not a thing like tables and chairs.
Bakker: Reifying happens when the brain models its modelling behavior, and inevitably gets an extremely simplified version, because the brain is too small to model itself.
Waving away the skeptical challenges posed by Hume and Wittgenstein with their magic wand, they transport the reader back to the happy days when philosophers could still reason their way to ultimate reality, and call it ‘giving the object its due’–which is to say, humility.
Hume and Wittgenstein, two great skeptics, challenged the idea that we can know ultimate reality. Hume argued that whether causality is real or not, we can't ever know it. We can only see things that look as if they behave causally.
Wittgenstein argued that we can only know things in languages. If something is too weird or big or small to be forced into the format of human language, then we have to give up! It can't be known.
Some modern continental philosophers are out to save it. They call it "giving the object its due". It's a kind of "humility", because we are taking the objects as they are, not as mere things that affect us, for us. (Of course, the other philosophers can object that real humility lies in honestly admitting that we can't even touch the real objects directly, without our perspective...)
R S Bakker thinks their arguments are mistaken, and even correlationalism is mistaken, merely "spooky knowledge at a distance".
There’s an ancient tradition among philosophers, one as venal as it is venerable, of attributing universal discursive significance to some specific conceptual default assumption. So in contemporary Continental philosophy, for instance, the new ‘It Concept’ is something called ‘correlation,’ the assumption that the limits posed by our particular capacities and contexts prevent knowledge of the in-itself, (or as I like to call it, spooky knowledge-at-a-distance).
Discursive: relating to knowledge obtained by reason and argument rather than intuition.
Many philosophers say that there is one ur-assumption, a foundational assumption. Normal people think based on it, but don't think about it. People don't explicitly argue for it, even though they depend on it. And it's the philosopher's job to see it, point it out, and argue for or against it.
For contemporary continental philosophers, that ur-assumption is correlationalism, and they are either out to challenge that assumption, or to argue that that ur-assumption is true.
Such hubris, when you think about it, to assume that lived life lay at your intellectual fingertips—the thing most easily grasped
We are blind to ourselves, as numerous psychological experiments and neurology examples show. Anosognosia, Cotard's delusion, introspection illusion, etc.
- Aug 2022
A Spanish translator of Nick Land's blog dramatically explains why they translate.
the app does most of the work by itself: it answers the questions of consumers, adapts to situations, recalls previous queries etc. And in the end, the company earns profit, so the activity must have been productive and brought surplus value, which means that we have a situation where in capitalist economic activity it is actually the (flexible and intelligent) app that is being exploited.
Apps of the iTunes Store, UNITE!
something like that lol
not really -- apps don't desire human rights, unlike workers.
So rather than being a passive instrument of competitive processes constituted outside the domain of money, derivatives as money internalise the competitive process. Derivatives are, in this sense, distinctly capitalist money, rather than just money within capitalism
talk about "smart money"
Marx’s concept of real subsumption denotes a real, complete appropriation and subjugation of production to capital
Real subsumption is defined in contrast to formal subsumption of labor.
Formal subsumption occurs when capitalists take command of labor processes that originate outside of or prior to the capital relation via the imposition of the wage.
In real subsumption the labor process is internally reorganized to meet the dictates of capital.
An example of these processes would be weaving by hand which comes to be labor performed for a wage (formal subsumption) and which then comes to be performed via machine (real subsumption). Real subsumption in this sense is a process or technique that occurs at different points throughout the history of capitalism.
the implicit authority gradient that informs all his writing
Heidegger always assumes that ontic thinking (scientific thinking) is less trustworthy than ontological thinking (phenomenological thinking).
Three arguments for phenomenology as the most fundamental of all sciences, and how to refute them.
Dennett accuses phenomenology of being structuralist psychology, and thus of suffering the same problems.
The major tool of structuralist psychology was introspection (a careful set of observations made under controlled conditions by trained observers using a stringently defined descriptive vocabulary). Titchener held that an experience should be evaluated as a fact, as it exists without analyzing the significance or value of that experience.
Zahavi replies: phenomenology is not structuralist psychology, but transcendental philosophy of consciousness. It studies the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.
Consequently, it is transcendental, and immune to any empirical science, even though it has applications for empirical science.
Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes).
Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible...
Bakker replies: You can't do phenomenology except by thinking about your first-person experience, so phenomenology looks the same as structuralist psychology. Sure, phenomenologists would disagree, but no one outside their circle is convinced. Just standing tall and saying "but we take the phenomenological attitude!" is not going to cut it.
first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?
Zahavi: to do science, you must presuppose first-person experience. Zombies can't do science.
As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”
Reply: dark phenomenology shows that phenomenologists have problems that they can't solve unless they resort to third-person science -- they are not so pure and independent as they claim.
Reply: human metacognitive ability is a messy, inconsistent hack "acquired through individual and cultural learning", made of "whatever cognitive resources are available to serve monitoring-and-control functions".
people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions.
Phenomenology is a wide variety of metacognitive illusions, all turning in predictable ways on neglect.
If phenomenology is bunk, why do phenomenologists independently arrive at the same answers to many questions? Surely it's because they are in touch with a transcendental truth, the truth about consciousness!
with a tremendous amount of specialized training, you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sarte might say on this or that subject. Something more than introspective whimsy is being tracked—surely!
If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?
Response: Phenomenology is psychology done by introspection. The discoveries of phenomenology are not ontological, transcendental, or prior to all science, but psychological, and can be reduced to other sciences like neuroscience and physics. Phenomenologists agree not because they are in touch with transcendental truth, but because they have similarly structured human brains.
The Transcendental Interpretation is no longer the only game in town.
Response: we can use science to predict what phenomenologists predict.
Source neglect: we can't perceive where conscious percepts come from -- things simply "appear in the mind" without showing what caused them to appear, or in which brain region they were made. This is because the brain doesn't have the time to represent sources for the fundamental percepts, which must serve as a secure, undoubtable bedrock for all perception. If they were not represented as bedrock, people would waste too much time considering alternative interpretations of them, and thus fail to reproduce well.
Scope neglect: we can't perceive the boundaries of perception. The visual scene looks complete, without a black border. As with source neglect, if the brain represented the boundary, then the boundary's boundary would also need to be represented, and so on; the infinite regress is cut off as soon as possible, to save time.
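The regress behind scope neglect can be made concrete with a toy sketch. The function and its field names are purely illustrative inventions, not a model from the source: the point is only that each represented boundary demands a boundary of its own, so cutting off at depth zero leaves a representation that carries no marker of its own limits.

```python
# Toy sketch of the boundary regress behind scope neglect.
# The function and its structure are illustrative only.

def represent(scene, depth):
    """Represent a scene; each extra depth level also represents the
    boundary of the previous representation, which in turn would need
    its own boundary, and so on."""
    rep = {"content": scene}
    if depth > 0:
        rep["boundary"] = represent(f"edge of ({scene})", depth - 1)
    return rep

# Depth 0: the regress is cut off immediately. No boundary is
# represented, so the scene simply presents itself as complete.
flat = represent("visual field", 0)
print("boundary" in flat)   # False
```

However deep the recursion goes, the innermost representation is always boundary-free, which is the structural reason the visual field never shows its own edge.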
We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits
the problem with this reformulation is that transforming intentional cognition to account for transforming social environments automatically amounts to a further transformation of social environments
Changing yourself to adapt to the environment is itself a change in the environment for others. Change necessitates change, and so on, never reaching an equilibrium.
Are we wrong about what it means to be human? Can we be wrong about basic facts of being human? It appears obviously true that to be human means to think, to feel, to have qualia, etc. But is that true?
Think back to a time before you started reflecting about yourself. Call it "square one". Are we still in square one?
Let’s refer to this age of theoretical metacognitive innocence as “Square One,” the point where we had no explicit, systematic understanding of what we were. In terms of metacognition, you could say we were stranded in the dark, both as a child and as a pre-philosophical species.
Sure, we have a special way of knowing about humans: an "inside view". I don't need to bring out a microscope or open my eyes. I just need to feel about myself. However, this "inside view" method is not scientific, and it is not necessarily better than scientific methods. It's quite possible that the inside view has provided a lot of false knowledge about being human, which science would overthrow.
our relation to the domain of the human, though epistemically distinct, is not epistemically privileged, at least not in any way that precludes the possibility of Fodor’s semantic doomsday.
So let's take a cold hard look at intentional theories of mind. Can they stand up to scientific scrutiny? I judge them based on four criteria.
Consensus: no. Wherever you find theories of intentionality, you find endless controversy.
Practical utility: maybe. Talking about people as if they have intentions is practically useful. However, this talk of intentionality does not translate into any firm theory of intentions. In any case, people seem to be born spiritualists rather than mentalists. Does that mean we expect spirits to stand up to scientific scrutiny? No.
Problem ecology: no. Fundamentally, intentional thinking is how we think when faced with a complex system with too many moving parts for us to think about it causally (mechanically). In that case, using intentional thinking to understand human thinking is inherently limiting -- intentional thinking cannot handle detailed causal information. There's no way to think about a human intentionally if we use details about its physiology. We can only think intentionally about a human if we turn our eyes away from its mechanical details.
Agreement with cognitive science: no.
Stanislas Dehaene goes so far as to state it as a law: “We constantly overestimate our awareness — even when we are aware of glaring gaps in our awareness”
Slowly, the blinds on the dark room of our [intentional-]theoretical innocence are being drawn, and so far at least, it looks nothing at all like the room described by traditional [intentional-]theoretical accounts.
In summary, of the four criteria, we find three against and one ambiguous. Intentional theories of mind don't look good!
- Jul 2022
accounting for our contribution to making posthumans seems obligatory but may also be impossible with radically alien posthumans, while discounting our contribution is irresponsible. We can call this double bind: “the posthuman impasse”.
The posthuman dilemma: we can't really evaluate whether posthumans are life worthy of life, but if we just give up trying to evaluate, that would be irresponsible.
as the world-builder of Accelerando's future, Stross is able to stipulate the moral character of Economics 2.0. If we were confronted with posthumans, things might not be so easy
Stross, as the author of the story, gets to play God and make an authoritative judgment: "This world is bad."
Without God, or some author, we would not have such an easy way to make judgments when dealing with real posthumans.
Thomas Metzinger has argued that our kind of subjectivity comes in a spatio-temporal pocket of an embodied self and a dynamic present whose structures depends on the fact that our sensory receptors and motor effectors are “physically integrated within the body of a single organism”. Other kinds of life – e.g. “conscious interstellar gas clouds” or (somewhat more saliently) post-human swarm intelligences composed of many mobile processing units - might have experiences of a radically impersonal nature (Metzinger 2004:161).
Distributed beings, having their sensory organs and their possibilities for generating motor output widely scattered across their causal interaction domain within the physical world, might have a multi- or even a noncentered behavioral space (maybe the Internet is going to make some of us eventually resemble such beings; for an amusing early thought experiment, see Dennett 1978a, p. 310ff.). I would claim that such beings would eventually develop very different, noncentered phenomenal states as well. In short, the experiential centeredness of our conscious model of reality has its mirror image in the centeredness of the behavioral space, which human beings and their biological ancestors had to control and navigate during their endless fight for survival. This functional constraint is so general and obvious that it is frequently ignored: in human beings, and in all conscious systems we currently know, sensory and motor systems are physically integrated within the body of a single organism. This singular “embodiment constraint” closely locates all our sensors and effectors in a very small region of physical space, simultaneously establishing dense causal coupling (see section 3.2.3). It is important to note how things could have been otherwise—for instance, if we were conscious interstellar gas clouds that developed phenomenal properties. Similar considerations may apply on the level of social cognition, where things are otherwise, because a, albeit unconscious, form of distributed reality modeling takes place, possessing many functional centers.
The prospect of a posthuman dispensation should be properly evaluated rather than discounted. But, I will argue, evaluation (accounting) is not liable to be achievable without posthumans. Thus transhumanists – who justify the technological enhancement and redesigning of humans on humanist grounds – have a moral interest in making posthumans or becoming posthuman that is not reconcilable with the priority humanists have traditionally attached to human welfare and the cultivation of human capacities.
Transhumanism and humanism are in tension: * Transhumanists must risk creating posthumans, because the consequences of human modification are uncertain. * We can't be sure what posthumans are like until we have already created them, so the project can't be made perfectly safe. * Humanism puts human welfare first, and thus the posthuman danger is too much to bear.
The minimal claim is to say that within the Einsteinian paradigm, the double-spending problem is insoluble
Nick Land is just wrong here. Synchronization has been solved in the context of relativity. See for example "24-Hour Relativistic Bit Commitment" (2016).
Or we can be more generous, and interpret this as saying that the Bitcoin protocol constructs a system where time and space are once again different, a system that is more Newtonian than relativistic. This system can be approximately implemented in the real world, as long as the machines used to implement the system are not spaced too far apart from each other.
That is, if we pretend we are living inside the Bitcoin blockchain, we would see a universe with clearly distinct space and time, with no possibility of mixing them as in relativity.
There is an absolutely fascinating little exchange on a crypto mail board around the time that Bitcoin is actually being launched, and Satoshi Nakamoto, in that exchange says that the system of consensus that the blockchain is based upon — distributed consensus that then becomes known as the “Nakamoto consensus” — resolves a set of problems that include the priority of messages, global coordination, various problems that are exactly the problems that relativistic physics say are insoluble.
Bitcoin P2P e-cash paper November 09, 2008, 03:09:49 AM
The proof-of-work chain is the solution to the synchronisation problem, and to knowing what the globally shared view is without having to trust anyone.
A transaction will quickly propagate throughout the network, so if two versions of the same transaction were reported at close to the same time, the one with the head start would have a big advantage in reaching many more nodes first. Nodes will only accept the first one they see, refusing the second one to arrive, so the earlier transaction would have many more nodes working on incorporating it into the next proof-of-work. In effect, each node votes for its viewpoint of which transaction it saw first by including it in its proof-of-work effort.
If the transactions did come at exactly the same time and there was an even split, it's a toss up based on which gets into a proof-of-work first, and that decides which is valid.
When a node finds a proof-of-work, the new block is propagated throughout the network and everyone adds it to the chain and starts working on the next block after it. Any nodes that had the other transaction will stop trying to include it in a block, since it's now invalid according to the accepted chain.
The proof-of-work chain is itself self-evident proof that it came from the globally shared view. Only the majority of the network together has enough CPU power to generate such a difficult chain of proof-of-work. Any user, upon receiving the proof-of-work chain, can see what the majority of the network has approved. Once a transaction is hashed into a link that's a few links back in the chain, it is firmly etched into the global history.
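To make the mechanism Satoshi describes concrete, here is a minimal sketch of proof-of-work in Python. The block format, difficulty, and field separator are simplifying assumptions for illustration, not the real Bitcoin wire format; the point is only that a valid chain is expensive to produce but trivial for any user to verify.

```python
import hashlib

def mine(block_data: str, prev_hash: str, difficulty_bits: int = 16):
    """Search for a nonce whose SHA-256 block hash has `difficulty_bits`
    leading zero bits. Expensive: ~2**difficulty_bits hashes on average."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        header = f"{prev_hash}|{block_data}|{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # proof found
        nonce += 1

def verify(block_data: str, prev_hash: str, nonce: int, difficulty_bits: int = 16) -> bool:
    """Cheap: a single hash checks the proof anyone else spent work to find."""
    header = f"{prev_hash}|{block_data}|{nonce}".encode()
    return int(hashlib.sha256(header).hexdigest(), 16) < 2 ** (256 - difficulty_bits)

# A toy block chaining off a dummy previous hash.
nonce, block_hash = mine("tx1;tx2", "00" * 32)
assert verify("tx1;tx2", "00" * 32, nonce)
```

The asymmetry between `mine` and `verify` is what lets the chain serve as "self-evident proof" of the globally shared view: only the majority of the network's CPU power can extend the longest chain, but any node can check it with a handful of hashes.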
The Heideggerian translation of the basic critical argument is that the metaphysical error is to understand time as something in time. So you translate this language, objectivity and objects, into the language of temporality and intra-temporality, and have equally plausible ability to construe the previous history of metaphysical philosophy in terms of what it is to make an error. The basic error then, at this point, is to think of time as something in time.
Usually, time is understood as an object like tables and words, but time is quite different. Tables and words exist in time, but time doesn't exist in time.
To describe time correctly, we have to describe it unlike any object that exists in time. Heidegger tried it, and ended up with extremely arcane and abstract language that nobody quite understands.
Often when you’re looking at the highest examples of intelligence in a culture, you’re looking precisely at the way that it has been fixed and crystallized and immunized against that kind of runaway dynamic — the kind of loops involving technological and economic processes that allow intelligence to go into a self-amplifying circuit are quite deliberately constrained, often by the fact that the figure of the intellectual is, in a highly-coded way, separated from the kind of techno-social tinkering that could make those kind of circuits activate.
In most cultures, the smartest people are carefully quarantined off from tinkering with science, technology, and society, lest they set off a self-amplifying loop of intelligence and destabilize the culture.
Bitcoin is a critique of trusted third parties, that is deeply isomorphic with critique in its rigorous Kantian sense, and then with the historical-technological instantiation of critique. And that’s why I think it’s a philosophically rich topic.
Commerce on the Internet has come to rely almost exclusively on financial institutions serving as trusted third parties to process electronic payments. While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model... What is needed is an electronic payment system based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party.
Metaphysics before critical philosophers blew it up (such as Kant): "You can't think without a ground! And that ground is... God, the soul, tradition, whatever, but you need a ground."
Bitcoin shows that one can live without a trusted third party, and critical philosophy shows that one can think without a trusted ground.
On the internet, when you route around an obstacle, you emulate a hostile nuclear strike. You say, “I don’t want to go past this or that gatekeeper, and I will just assume that they have been vaporised by a foreign nuclear device and go around them some other way”. There are always more of these other ways being brought on stream all the time.
According to Stephen J. Lukasik, who as deputy director (1967–1970) and Director of DARPA (1970–1975) was "the person who signed most of the checks for Arpanet's development":
The goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making.
The ARPANET incorporated distributed computation, and frequent re-computation, of routing tables. This increased the survivability of the network in the face of significant interruption. Automatic routing was technically challenging at the time. The ARPANET was designed to survive subordinate-network losses, the principal reason being that the switching nodes and network links were unreliable, even without any nuclear attacks.
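The distributed re-computation of routing tables can be illustrated with a toy distance-vector sketch in Python. The four-node topology and iteration scheme below are hypothetical simplifications (the ARPANET's actual algorithm was more elaborate); the point is that routes are recomputed from neighbor information, so the network reacts automatically when a link is lost.

```python
INF = float("inf")

def recompute(links):
    """Iterate Bellman-Ford-style updates until routing tables stabilize.
    links: {node: {neighbor: link_cost}}; returns {node: {dest: total_cost}}."""
    nodes = list(links)
    # Start from direct links only.
    dist = {n: {d: (0 if d == n else links[n].get(d, INF)) for d in nodes}
            for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            for d in nodes:
                # Best known route: keep current, or go through a neighbor.
                best = min([dist[n][d]] +
                           [links[n][m] + dist[m][d] for m in links[n]])
                if best < dist[n][d]:
                    dist[n][d] = best
                    changed = True
    return dist

# Hypothetical line topology: A - B - C - D.
links = {"A": {"B": 1}, "B": {"A": 1, "C": 1},
         "C": {"B": 1, "D": 1}, "D": {"C": 1}}
assert recompute(links)["A"]["D"] == 3  # A reaches D via B and C

# Cut the B-C link: re-computation discovers D is now unreachable from A.
del links["B"]["C"]; del links["C"]["B"]
assert recompute(links)["A"]["D"] == INF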
this pitifully idiotic paperclip model that was popularized by Yudkowsky, that Bostrom is still attached to, that you know, is very, very widespread in the literature, and I think, for reasons that maybe we can go into at some point, is just fundamentally mistaken.
Outside in - Involvements with reality » Blog Archive » Against Orthogonality
Even the orthogonalists admit that there are values immanent to advanced intelligence, most importantly, those described by Steve Omohundro as ‘basic AI drives’ — now terminologically fixed as ‘Omohundro drives’. These are sub-goals, instrumentally required by (almost) any terminal goals. They include such general presuppositions for practical achievement as self-preservation, efficiency, resource acquisition, and creativity. At the most simple, and in the grain of the existing debate, the anti-orthogonalist position is therefore that Omohundro drives exhaust the domain of real purposes.
As a footnote, in a world of Omohundro drives, can we please drop the nonsense about paper-clippers? Only a truly fanatical orthogonalist could fail to see that these monsters are obvious idiots. There are far more serious things to worry about.
It just is a positive feedback process that passes through some threshold and goes critical. And so I would say that’s the sense [in which] capitalism has always been there. It’s always been there as a pile with the potential to go critical, but it didn’t go critical until the Renaissance, until the dawn of modernity, when, for reasons that are interesting, enough graphite rods get pulled out and the thing becomes this self-sustaining, explosive process.
In an earlier essay, "Meltdown", he said
The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalitization and oceanic navigation lock into commoditization take-off. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.
The original leftist formulation of it was very different from anything that we get in what then becomes called left-accelerationism later. It’s almost like Lenin’s “the worse, the better”. The understanding of it is that, you know, what Deleuze and Guattari are doing, what the accelerationist current coming out of them is doing, is saying the way to destroy capitalism is to accelerate it to its limit. There’s no other strategy that has any chance of being successful.
There are two kinds of left accelerationism. The older one holds that "capitalism can only be destroyed by more capitalism". The newer one holds that "capitalism is less able to use modern technology than socialism, and by using modern technology, socialism can destroy capitalism by beating it at its own game".
Part of such a transition from metaphysics to physics is the ether, which Kant also defines as “the basis of all matter as the object of possible experience. For this first makes experience possible.”
When Kant was younger, he wrote that space and time are not things-in-themselves, but are how we perceive things. Things appear as if they have space and time, because if they didn't appear like that, it would be impossible for us to perceive them. Thus, the way we are constructed makes only certain experiences possible.
The older Kant was trying to do this more generally: he was trying to find out how the universe must be constructed to make our experiences possible.
Or maybe "I think, therefore the universe exists, ether exists, etc."
ether was regarded as the material medium that allowed the propagation of light and other electromagnetic radiation through the emptiness of the universe.
19th century physicists, like James Clerk Maxwell, thought that electromagnetic waves need a medium, just like how water waves need water.
hypostasis: The substance, essence, or underlying reality.
"hypostatized space" roughly means "the essence, the underlying matter, that makes up space". I guess.
The caloric theory is an obsolete scientific theory that heat consists of a self-repellent fluid called caloric that flows from hotter bodies to colder bodies. Caloric was also thought of as a weightless gas that could pass in and out of pores in solids and liquids.
The term originated in ancient Greek philosophy, and was later used in Christian theology and Western philosophy. In pre-Socratic usage, physis was contrasted with νόμος, nomos, "law, human convention". Another opposition, particularly well-known from the works of Aristotle, is that of physis and techne – in this case, what is produced and what is artificial are distinguished from beings that arise spontaneously from their own essence, as do agents such as humans. Further, since Aristotle, the physical (the subject matter of physics, properly τὰ φυσικά "natural things") has also been juxtaposed to the metaphysical.
"influxus physicus" (latin for "physical in-flow") is a theory of causation in medieval Scholastic philosophy.
The Scholastic model of causation involved properties of things (“species”) leaving one substance, and entering another. Consider what happens when one looks at a red wall: one’s sensory apparatus is causally acted upon. According to the target of this passage, this involves a sensible property of the wall (a “sensible species”) entering into the mind’s sensorium.
René Descartes : The Passions of the Soul I, § 34
The little gland that is the principal seat of the soul is suspended within the cavities containing these spirits, so that it can be moved by them in as many different ways as there are perceptible differences in the objects. But it can also be moved in various different ways by the soul, whose nature is such that it receives as many different impressions—i.e. has as many different perceptions—as there occur different movements in this gland. And, the other way around, the body’s machine is so constructed that just by this gland’s being moved in any way by the soul or by any other cause, it drives the surrounding spirits towards the pores of the brain, which direct them through the nerves to the muscles—which is how the gland makes them move the limbs [‘them’ could refer to the nerves or to the muscles; the French leaves that open].
A word from Hegel's philosophy.
The English verb “to sublate” translates Hegel’s technical use of the German verb aufheben, which is a crucial concept in his dialectical method. Hegel says that aufheben has a doubled meaning: it means both to cancel (or negate) and to preserve at the same time
Things in themselves do not exist beyond their use as a methodological tool, a procedural resource, an unavoidable logical object and a didactic motif that helps explain Kant’s proposed nature of reason.
Kant said that things-in-themselves exist, not because he wants you or anyone to go there and explore them, but because he wants you to save your time and ink and stop trying to study them.
Answering the Question: What is Enlightenment?
Enlightenment is man's release from his self-incurred tutelage. Tutelage is man's inability to make use of his understanding without direction from another. Self-incurred is this tutelage when its cause lies not in lack of reason but in lack of resolution and courage to use it without direction from another. Sapere aude! "Have courage to use your own reason!" - that is the motto of enlightenment.
“Did anyone really imagine that traveling to conferences, staying in hotels with expenses paid, attending drinks receptions, and attracting the attention of established figures was a good means to develop original, critical thinking?”
Quite possible! That's how the aristocratic Enlightenment thinkers lived.
if all the sociology departments in the world
Probably the author just means European and North American ones.
non-positivistic work needs to justify its existence as ‘scientific’ by complex theoretical jigsaws.
"non-positivistic" means "doesn't say anything substantial, anything useful, or anything that can be checked by evidence"
I guess! After all, "non-positivistic" is yet another big word with vague meaning.
What if this myth of presupposed intentionality of Superintelligence/AI, or the myth of things to come, shared by the X-risk scenarios, is not so much a premise as it is an effect of traumatic genealogy? That it is a congenital self-deception of the speculative mind, its reaction to the defeat in a brain cell battle, similar to Eugene Thacker’s ‘moment of horror of [philosophical] thought’ and its consequences. A reaction or a weakness, turning into a blind spot of and for the speculative mind because of its inability to cope with the trauma of the End – a weak blaze of irrational and miserable hope that an existential threat wouldn’t eventually appear as a transcendental catastrophe. What if this very myth is nothing but a reappearance of the omnipresent hope for a perpetual telos as a possibility of mind to negotiate its ineluctable end?
What if those AI safety researchers are afraid of ends? They can handle telos, threats that they can understand, and negotiate with. They can't handle threats that are so strange that they cannot be negotiated with, like a bullet to the brain. You can't argue with a bullet to the brain.
All those AI safety research documents are silent on this issue, precisely because this is too horrible and unthinkable, so the authors run away from the topic.
Nemo-centric: "centered on nobody".
'nemocentric', a term coined by Metzinger to denote an objectification of experience that has the capacity to generate 'self-less subjects that understand themselves to be no-one and no-where'.
More concretely, Metzinger argued that if you could truly observe what's going on in your thoughts as objects, rather than as what is happening to you, you would lose yourself, along with the sense of time and the sense of place; you would stop existing for the moment.
He made an analogy with a window. The window is transparent, but it's there, conveying what's outside to what's inside. When you look at the window too hard, or when the window "cracks", the transparency is lost.
Without transparency, the self stops existing.
its ‘basic urge’ is to seek and destroy both kinds of NHI-hosts, while at the same time remaining indifferent to them from the viewpoint of intentionality
Somehow we get things that do destroy the NHI (non-human intelligence), but these things don't have intentionality.
The second kind of non-human intelligence results from successful digitalization of consciousness converged with unfolded consequential technological augmentations
This is the kind of posthuman intelligence that cyberpunk often explores: uploaded human brains, tinkered with, updated, merged, and split, until they are no longer human, though still resembling humans here and there.
The first kind is a result of genetic engineering
This is the standard sci-fi kind of posthuman, created by genetic and other methods of bioengineering.
replacing the clichéd character, humanity, by two (distinct) kinds of non-human intelligence hosts, directly succedent to humans.
Two possible kinds of intelligent things that might appear in the future. They could evolve out of human bioengineering, or computer engineering, or some other human activities.
Other, even more bizarre kinds are possible, but let's focus on these two which we can at least talk about with some confidence.
So let's sketch out the two kinds of intelligent creatures, and see how they can start a war that ends not with "telos", but "end".
complicity with exogenous materials
Parody of the book title "Cyclonopedia: Complicity with Anonymous Materials". That book is a weird fiction about all kinds of weird things. The main plot is about how the crude oil is part of a mess of living things underground, and how Middle East is a sentient creature animated by the crude oil, and how they have been in a billion-year-long war with the sun.
the gist is that the US War on Terror and contemporary oil wars more generally are symptoms of a much older and broader supernatural tension between occult forces inhabiting the depths of the Earth and those inhabiting the Sun. Various human groups and individuals throughout recorded (and unrecorded) history (up to the present conflict between the writ-large West and the Middle East) have embraced different aspects of this conflict, sometimes consciously, and sometimes unconciously, leaving behind an archaeological, linguistic, and written record of contact with such Outside forces.
"complicity with anonymous materials" in the book means "some humans are secretly working for demonic matters, like crude oil, consciously or not".
"complicity with exogeneous materials" means "some humans could splice weird exogeneous genes into their own genes.
Geotrauma is a parody of psychoanalysis. According to geotrauma theory, the earth was deeply hurt during its birth, when it was hit so hard by a giant impact that a chunk of it flew out and became the moon, and was then bombarded by asteroids in the "Late Heavy Bombardment".
The hurt sank deep into the center of earth, where it is repressed into violent streams of electromagnetic and magma flows, occasionally bursting to the surface in the form of earthquakes and geomagnetic storms.
Fast forward seismology and you hear the earth scream. Geotrauma is an ongoing process, whose tension is continually expressed – partially frozen – in biological organization.
Who does the Earth think it is?... between four point five and four billion years ago – during the Hadean epoch – the earth was kept in a state of superheated molten slag, through the conversion of planetesimal and meteoritic impacts into temperature increase... As the solar system condensed, the rate and magnitude of collisions steadily declined, and the terrestrial surface cooled, due to the radiation of heat into space, reinforced by the beginnings of the hydrocycle. During the ensuing – Archaen – epoch the molten core was buried within a crustal shell, producing an insulated reservoir of primal exogeneous trauma, the geocosmic motor of terrestrial transmutation. And that’s it. That’s plutonics, or neoplutonism. It’s all there: anorganic memory, plutonic looping of external collisions into interior content, impersonal trauma as drive-mechanism. The descent into the body of the earth corresponds to a regression through geocosmic time.
dance of death
The Danse Macabre, also called the Dance of Death, is an artistic genre of allegory of the Late Middle Ages on the universality of death.
wind from nowhere
A novel about an unexplained catastrophe destroying humans, then suddenly disappearing. People could not explain it, only survive it (or die in it).
A wind blows worldwide: it is constantly westward and strongest at the equator. The wind is gradually increasing, and at the beginning of the story, the force of the wind is making air travel impossible. Later, people are living in tunnels and basements, unable to go above ground. Near the end, "The air stream carried with it enormous quantities of water vapour — in some cases the contents of entire seas, such as the Caspian and the Great Lakes, which had been drained dry, their beds plainly visible."
the attempts of intelligence to re-negotiate the end at all scales are molded into a recurring question (underlying, by the way, any X-risk analytics): ‘What can (possibly) go wrong?’ But there is ‘a problem of communication’ when, unlike with telos, things come to an end: it answers no questions; it takes no prisoners, makes no ‘exchanges’, forms no contracts and is indifferent to any attempt at negotiations. The end is dysteleological.
In "Terminator", the Terminator is described as
It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
An "end" doesn't stop even after you are dead.
At its ‘best’, telos may be defined as the arrival at the most desired outcome with the least occurrence of uncertain events affecting the ultimate objectives. At its ‘worst’, it is any outcome where the affordance of negotiations is retained. The end, as an outcome, is a pure [transcendental] catastrophe
A "telos" is an outcome that we can deal with. It could be good, or bad, but nothing that is truly unrecoverable. Worst case scenario, we go to a negotiating table and talk it over, accept our defeat, sign some documents, and get shot with dignity.
An "end" is something we just can't handle. The atoms making up our bodies disintegrate, cancerous thoughts invade our consciousness, and the eyeballs grow arms and teeth.
the ontological chasm appearing as an unnegotiable and/or unthinkable threat (a rupture between the entity and its existential conditions).
To get the feeling of such an ontological chasm, consider how the monster Cthulhu is described in "The Call of Cthulhu" by Lovecraft
The odour arising from the newly opened depths was intolerable, and at length the quick-eared Hawkins thought he heard a nasty, slopping sound down there. Everyone listened, and everyone was listening still when It lumbered slobberingly into sight and gropingly squeezed Its gelatinous green immensity through the black doorway into the tainted outside air of that poison city of madness.
The Thing cannot be described—there is no language for such abysms of shrieking and immemorial lunacy, such eldritch contradictions of all matter, force, and cosmic order. A mountain walked or stumbled.
"a rupture between the entity and its existential conditions". Here "existential conditions" is philosopher's jargon for "What kind of world we must live in, for such a thing to exist?" For example, "Well, I exist, therefore the universe must have some cool spot of around 0 -- 30 degrees Celsius, otherwise I would die. I must have parents, since I can't have made myself, etc."
A "rupture" means that "I see this entity, but I don't believe it! Either I have gone mad, or the universe is mad!"
these misconceived gamechangers are revealed only by retrodiction and previous belief corrections, if they become known at all.
A retrodiction occurs when already gathered data is accounted for by a later theoretical advance in a more convincing fashion. The advantage of a retrodiction over a prediction is that the already gathered data is more likely to be free of experimenter bias. An example of a retrodiction is the perihelion shift of Mercury which Newtonian mechanics plus gravity was unable, totally, to account for whilst Einstein's general relativity made short work of it.
It might happen that the superintelligence "wins" over humanity by something so out of the blue that it has nothing to do with desires, intentions, fears, and hopes. It just hits the human-understood world like an earthquake, changing the very grounds of understanding.
Just like it may be not necessary to know the rules to win the game, or to possess all the propositional knowledge on a subject to operate properly
Little parasites can skillfully navigate their course of life, using odd tricks that just happen to work. For example, consider the following description of the tick from "A foray into the worlds of animals and humans" (Uexküll)
ANY COUNTRY DWELLER who traverses woods and bush with his dog has certainly become acquainted with a little animal who lies in wait on the branches of the bushes for his prey, be it human or animal, in order to dive onto his victim and suck himself full of its blood. In so doing, the one- to two-millimeter-large animal swells to the size of a pea (Figure 1). Although not dangerous, the tick is certainly an unwelcome guest to humans and other mammals. Its life cycle has been studied in such detail in recent work that we can create a virtually complete picture of it. Out of the egg crawls a not yet fully developed little animal, still missing one pair of legs as well as genital organs. Even in this state, it can already ambush cold-blooded animals such as lizards, for which it lies in wait on the tip of a blade of grass. After many moltings, it has acquired the organs it lacked and can now go on its quest for warm-blooded creatures. Once the female has copulated, she climbs with her full count of eight legs to the tip of a protruding branch of any shrub in order either to fall onto small mammals who run by underneath or to let herself be brushed off the branch by large ones. The eyeless creature finds the way to its lookout with the help of a general sensitivity to light in the skin. The blind and deaf bandit becomes aware of the approach of its prey through the sense of smell. The odor of butyric acid, which is given off by the skin glands of all mammals, gives the tick the signal to leave its watch post and leap off. If it then falls onto something warm—which its fine sense of temperature will tell it—then it has reached its prey, the warm-blooded animal, and needs only use its sense of touch to find a spot as free of hair as possible in order to bore past its own head into the skin tissue of the prey. Now, the tick pumps a stream of warm blood slowly into itself.
Experiments with artificial membranes and liquids other than blood have demonstrated that the tick has no sense of taste, for, after boring through the membrane, it takes in any liquid, so long as it has the right temperature. If, after sensing the butyric acid smell, the tick falls onto something cold, then it has missed its prey and must climb back up to its lookout post. The tick's hearty blood meal is also its last meal, for it now has nothing more to do than fall to the ground, lay its eggs, and die.
‘on-the-spot’ (just like Friedrich Hayek puts it for the knowledge circulation in economic networks)
In "The use of knowledge in society" (Hayek, 1945), Hayek argued for both central planning and decentral planning. Central planning gathers the rough global facts and distribute them widely to agents, and agents combine the rough global facts with their precise local facts to make decisions.
This is, perhaps, also the point where I should briefly mention the fact that the sort of knowledge with which I have been concerned is knowledge of the kind which by its nature cannot enter into statistics and therefore cannot be conveyed to any central authority in statistical form. The statistics which such a central authority would have to use would have to be arrived at precisely by abstracting from minor differences between the things, by lumping together, as resources of one kind, items which differ as regards location, quality, and other particulars, in a way which may be very significant for the specific decision. It follows from this that central planning based on statistical information by its nature cannot take direct account of these circumstances of time and place and that the central planner will have to find some way or other in which the decisions depending on them can be left to the "man on the spot."
If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization. But this answers only part of our problem. We need decentralization because only thus can we insure that the knowledge of the particular circumstances of time and place will be promptly used. But the "man on the spot" cannot decide solely on the basis of his limited but intimate knowledge of the facts of his immediate surroundings. There still remains the problem of communicating to him such further information as he needs to fit his decisions into the whole pattern of changes of the larger economic system.
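Hayek's division of knowledge can be caricatured in a few lines of code. This is a toy model of my own, not anything from the paper: the center broadcasts one coarse statistic (a price-like average), and each "man on the spot" combines it with a local fact the center never sees. The site names and numbers are made up for the example.

```python
# Toy model of Hayek's division of knowledge (illustrative only).
local_costs = {"farm": 3.0, "mill": 7.0, "port": 5.5}

# Central knowledge: a single coarse aggregate, the kind of
# statistic a central authority could actually collect.
market_price = sum(local_costs.values()) / len(local_costs)

# Local decisions: each site knows only its own cost ("the
# particular circumstances of time and place") plus the broadcast
# aggregate, and decides whether producing is worthwhile.
decisions = {site: cost < market_price for site, cost in local_costs.items()}
print(decisions)  # {'farm': True, 'mill': False, 'port': False}
```

The point of the sketch: no single party holds all the information the final decisions depend on, yet one broadcast number plus local knowledge is enough to coordinate them.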
This idea has similarities with Auftragstaktik in the military, where
the military commander gives subordinate leaders a clearly defined goal (the objective), the forces needed to accomplish that goal, and a time frame within which the goal must be reached. The subordinate leaders then implement the order independently. To a large extent, the subordinate leader is given the planning initiative and a freedom in execution, which allows a high degree of flexibility at the operational and tactical levels of command. Mission-type orders free the higher leadership from tactical details.
Two meanings.
- The state of being a fact. It is a grammatical annoyance, nothing more. "X has facticity" is just a more awkward way to say "X is a fact"; "the facticity of X implies..." is just "X is a fact, thus..."
- In existentialism, the state of being in the world without any knowable reason for such existence, or of being in a particular state of affairs which one has no control over.
- "Why am I alive?"
- "You'll never get an answer unless you make one up yourself, bro."
If intentionality and its effects may be described as self-evident when it comes to conflict of interests (misaligned objectives resulting into dramatic outcomes) – since the interests / intentions must be explicitly expressed (or equivalently observable) by agents to bring about the misalignment as such – the emergence, establishment, exhibition, even resignation of the superior intelligence is a completely different beast.
In a typical scenario of a superintelligent AI running wild, some humans watch in horror as the AI destroys their objectives. In other scenarios, the AI kills everyone on earth in a microsecond, and only human spectators, watching this like a movie, feel the horror.
In such scenarios, since the humans feel a clear sense of "My objective is being violated intentionally!", the superintelligent AI must have some intentionality, if only as a useful fiction.
What if other kinds of superintelligence are possible, something that doesn't have intentions, not even as useful fictions? Then if you write a story where such a superintelligence emerges, it would not look like the above disaster movie. It would be really weird.
I’d suggest three of a kind: comprehension ‘functional module’; capabilities of self-correction and recursive self-improvement – both considered as underpinned by an abductive reasoning algorithm (which is itself comprised of rule-based and goal-driven behaviors and the ability of effective switching between them when necessary).
- self-correction and self-improvement
- abductive reasoning
negotiate the existential threat: to postpone, to realign, generally speaking – to escape the extermination without direct confrontation.
"postpone": many AI safety researchers argue that there should be a global ban on unsafe AI research. All AI research should be slowed down or stopped until safety research has finally, panting and wheezing, caught up with AI research and can give it a seal of approval "I hereby declare this research safe and legal".
"realign": the AI safety researchers talk of "AI alignment", which would ideally work like this: 1. There exist some human objective shared by all humans. The objective can be compounded from multiple different objectives ("cute animals", "humor", "laughter", etc), but it is assumed to be consistent. 2. Construct a superintelligent AI with some objective, such that if the AI pursues its objective, it would as a side effect, help humans pursue the human objective.
the myth of things to come
The author argues that the assumed "fact" that any future AI must have human-like objectives is a questionable myth, in the same way that the myth of the given questionably assumes that physical-humans must have the objectives that theoretical-humans have.
Theoretical-humans have goals, have direct knowledge of what they think, can never be mistaken about what they are thinking, etc. Physical-humans only approximate theoretical-humans.
myth of the given
the view that sense experience gives us peculiar points of certainty, suitable to serve as foundations for the whole of empirical knowledge and science. The idea that empiricism, particularly in the hands of Locke and Hume, confuses moments of physical or causal impact on the senses with the arrival of individual ‘sense data’ in the mind, was a central criticism of it levelled by the British Idealists
Descartes said "I think, therefore I am." Modern psychology has made "I think" no longer certain. I may think, but I may be mistaken about what I think, how I think, etc. I don't have direct access to what I think. If I like this photo more than that one, I might confabulate a reason on the spot and not know it (Nisbett and Wilson, 1977).
(from Musk to Bostrom), such as unthinkability of the arrival of Superintelligence in a way different to being ‘designed’
Elon Musk, Nick Bostrom, and many others in the AI safety research community assume that superintelligence must come from a human research effort designed specifically for creating machine intelligence. It could be designed for creating sub-human machine intelligence and accidentally overshoot the mark; still, it must be designed for creating some machine intelligence.
erroneous indiscernibility of goals-as-objectives and goals-as-targets
A rock rolling down has a target, but it has no objective. A missile seeking a target has a target, but it does not speak of the target. A human has an objective.
So we see 3 different ways to have goals. There could be many more. A superintelligent AI might have goals in yet different ways, not as target, not as objective.
There is no obvious theoretical incompatibility between significant techonomic intensification and patterns of social diffusion of capital outside the factory model (whether historically-familiar and atavistic, or innovative and unrecognizable). In particular, household assets offer a locus for surreptitious capital accumulation, where stocking of productive apparatus can be economically-coded as the acquisition of durable consumer goods, from personal computers and mobile digital devices to 3D printers.
Maybe people building stuff in their own backyard is the next step in making capitalism more intense. Maybe millions of 3D printers is more suited to the development of machinic intelligence than big factories.
As accelerationism closes upon this circuit of teleoplexic self-evaluation, its theoretical ‘position’–or situation relative to its object–becomes increasingly tangled, until it assumes the basic characteristics of a terminal identity crisis.
Accelerationism studies the development of machinic intelligence. However, accelerationism would also be done more and more by machinic intelligence (using machine learning to chart the development of machine intelligence, etc.), until it is no longer clear whether "accelerationism" as a human research program still exists, or whether it has become something entirely different, something that resembles the self-consciousness of the machinic intelligence.
Capitalization is thus indistinguishable from a commercialization of potentials, through which modern history is slanted (teleoplexically) in the direction of ever greater virtualization, operationalizing science fiction scenarios as integral components of production systems. Values which do not ‘yet’ exist, except as probabilistic estimations, or risk structures, acquire a power of command over economic (and therefore social) processes, necessarily devalorizing the actual. Under teleoplexic guidance, ontological realism is decoupled from the present, rendering the question ‘what is real?’ increasingly obsolete.
Nick Land considers 4 kinds of existence:
- real and actual: concrete things like tables and chairs.
- real and virtual: abstract things that have real power; hyperstitions, Bitcoin, self-fulfilling prophecies, nationalism.
- unreal and actual: impossible (?).
- unreal and virtual: abstract things without real power; dreams, toothless fantasies, ideas considered outdated.
The apparent connection between price and thing is an effect of double differentiation, or commercial relativism, coordinating twin series of competitive bids (from the sides of supply and demand). The conversion of price information into naturalistic data (or absolute reference) presents an extreme theoretical challenge.
Why does something have a certain price at a certain time in a certain place? This is a deep problem and economists still debate about it, with no end in sight.
defining a gradient of absolute but obscure improvement that orients socio economic selection by market mechanisms, as expressed through measures of productivity, competitiveness, and capital asset value.
Teleoplexy is the process by which machine intelligence finally becomes real. The process is hard to see in detail, working on small and hidden scales, in the form of a little improvement to a reinforcement learning algorithm today, another little improvement in silicon wafer manufacturing another day, etc.
Teleonomy is the quality of apparent purposefulness and of goal-directedness of structures and functions in living organisms brought about by natural processes like natural selection. The term derives from the Greek "τελεονομία", a compound of two Greek words: τέλος telos ("end", "goal") and νόμος nomos ("law").
It is the inertial telos which, by default, sets actual existence as the end organizing all subordinate means.
Negative feedback has a purpose ("telos"): stay close to where you are, and don't go off on a trajectory into some far off place.
In negative feedback, the actual existence known right here, right now is all that is worth pursuing. The virtual existences that the future might bring are not considered, or at most are translated into actual existence.
(For example, transhumanists think about the future as a future of humans, only more so. Longer-lived, more green grass, more blue sky, more cute animals, etc.)
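The "inertial telos" of negative feedback can be made concrete with a minimal thermostat-like loop (my own illustration, not from the text): every correction points back toward the one actual setpoint, and no disturbance ever redirects the system toward a new virtual goal.

```python
# Minimal negative-feedback loop (hypothetical illustration).
setpoint = 20.0   # the "actual existence" the loop conserves
state = 35.0      # a disturbance has pushed the system away
gain = 0.5        # proportional correction strength

for _ in range(50):
    error = state - setpoint
    state -= gain * error   # the correction always points "home"

print(round(state, 6))  # 20.0: the loop never pursues anywhere new
```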
any restricted form of warfare is conceived as fragile political artifice, stubbornly subverted by a trend to escalation that expresses the nature of war in-itself, and which thus counts – within any specific war -- as the primary axis of real discovery.
Restricting war takes human effort. War itself keeps trying to escalate itself.
‘fog’ and ‘friction’
Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war.
Friction is the only concept that more or less corresponds to the factors that distinguish real war from war on paper. The military machine––the army and everything related to it––is basically very simple and therefore seems easy to manage. But we should bear in mind that none of its components is of one piece: each part is composed of individuals, every one of whom retains his potential of friction. In theory it sounds reasonable enough: a battalion commander’s duty is to carry out his orders; discipline welds the battalion together, its commander must be a man of tested capacity, and so the great beam turns on its iron pivot with a minimum of friction. In fact, it is different, and every fault and exaggeration of the theory is instantly exposed in war. A battalion is made up of individuals, the least important of whom may chance to delay things or somehow make them go wrong. The dangers inseparable from war and the physical exertions war demands can aggravate the problem to such an extent that they must be ranked among its principal causes.
If, on the contrary, war is going to escape, then nothing we think we know, or can know, about its history will remain unchanged. State-politics will have been the terrain in which it hid, military apparatuses the hosts in which it incubated its components, antagonistic purposes the pretexts through which – radically camouflaged – it advanced. Its surreptitious assembly sequences would be found scattered backwards, lodged, deeply concealed, within the disinformational megastructure of Clausewitzean history.
Maybe "war" is not a tool for humans, not just "politics by other means". Maybe "war" can escape human control and become autonomous, like an ecosystem, or perhaps a beast.
If this happens, then everything we thought we knew about war was wrong. Nations were not the subjects making war. Nations were the shells inside which wars hid. Etc.
is Stuxnet a soft-weapon fragment from the future war? When its crypto-teleological significance is finally understood, will this be confined to the limited purpose assigned to it by US-Israeli hostility to the Iranian nuclear program? Does Stuxnet serve only to wreck centrifuges? Or does it mark a stage on the way of the worm, whose final purpose is still worm? Are Cosmists even needed to tell this story?
Is Stuxnet a tool by some US-Israeli humans to stop the nuclear program run by some Iranian humans, or is it an early example of a trend towards an evolutionary explosion of artificial life? Perhaps this can very well happen even without any human Cosmists trying to make it happen, even without a clear storyline that people can read and understand.
The true historical significance of something can only be seen in retrospect.
The Terrans cannot possibly escalate too hard, too fast, because the Cosmists are aligned with escalation, and therefore win automatically if the war is prolonged to its intrinsic extreme.
Dubious claim, as too much escalation leads to total destruction, which is also not in the Cosmists' interest.
- May 2022
fortunately the double integrator system is controllable
This is the best place to say "solvitur ambulando"
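The claim can be checked with the Kalman rank test. For the double integrator (state = position and velocity, input = force), the controllability matrix [B, AB] must have full rank:

```python
import numpy as np

# Double integrator: x1' = x2 (velocity), x2' = u (force input).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Kalman rank test: controllable iff [B, AB] has rank n = 2.
ctrb = np.hstack([B, A @ B])
print(np.linalg.matrix_rank(ctrb))  # 2 -> controllable
```

So any position-and-velocity pair can be reached by applying force alone, which is what the quip above is getting at.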
- Apr 2022
“If it was not observed then it is just a story, and we need to be sceptical about it.”
A group of men at this other village said, “Dan, so tell us a little bit more about Jesus. Is he brown like us or is he white like you? And how tall is he? And what sorts of things does he know how to do? Does he like to hunt and fish and stuff, or what does he do?”
I said, “Well, you know, I don’t know what color he is, I never saw him.” “You never saw him?” “No.” “Well, your dad saw him then,” because you can give information that was told to you by somebody who was alive at the time.
I said, “No, my dad never saw him.” They said, “Well, who saw him?” And I said, “Well, they’re all dead; it was a long time ago.”
“Why are you telling us about this guy? If you never saw him, and you don’t know anyone who ever saw him,” and those are the two basic forms of evidence for the Pirahã.
- Mar 2022
Ballard's response is more productive and balanced, treating DNA as a transorganic memory-bank and the spine as a fossil record, without rigid onto-phylogenic correspondence.
J. G. Ballard studied medicine intending to become a psychiatrist, and his stories are filled with biological imagery and, often, spines.
Eriphyle /ɛrɪˈfaɪliː/ (Ancient Greek: Ἐριφύλη Eriphȳla) was a figure in Greek mythology who, in exchange for the necklace of Harmonia (also called the necklace of Eriphyle) given to her by Polynices, persuaded her husband Amphiaraus to join the expedition of the Seven against Thebes. She was then slain by her son Alcmaeon. In Jean Racine's 1674 retelling of Iphigenia at Aulis, she is an orphan whose real name turns out to be Iphigenia as well; despite her many misdeeds, she rescues Iphigenia the daughter of Agamemnon.
In Greek mythology, Polyxena (/pəˈlɪksɪnə/; Greek: Πολυξένη) was the youngest daughter of King Priam of Troy and his queen, Hecuba. She does not appear in Homer, but in several other classical authors, though the details of her story vary considerably. After the fall of Troy, she dies when sacrificed by the Greeks on the tomb of Achilles, to whom she had been betrothed and in whose death she was complicit in many versions.
Where Euripides twice failed, in the “Troades” and the “Helena,” it can be given to few to succeed. Helen is best left to her earliest known minstrel, for who can recapture the grace, the tenderness, the melancholy, and the charm of the daughter of Zeus in the “Odyssey” and “Iliad”? The sightless eyes of Homer saw her clearest, and Helen was best understood by the wisdom of his unquestioning simplicity.
In short, Helen is best described in only a few quick, simple poetic sketches, leaving as much to the imagination as possible. Homer did it the right way. Subsequent writers, in trying to expand on her story, did it clumsily and badly.
Since even Euripides, the famous tragedian of ancient Greece, failed to write a memorable dramatization of Helen, we could infer that few could do it.
Troilus and Cressida
Troilus and Cressida is a play by William Shakespeare, probably written in 1602. At Troy during the Trojan War, Troilus and Cressida begin a love affair. Cressida is forced to leave Troy to join her father in the Greek camp. Meanwhile, the Greeks endeavour to lessen the pride of Achilles.
Mr. William Morris’s Helen, in the “Earthly Paradise,”
William Morris (24 March 1834 - 3 October 1896) was a British textile designer, poet, artist, novelist, architectural conservationist, printer, translator and socialist activist associated with the British Arts and Crafts Movement. He was a major contributor to the revival of traditional British textile arts and methods of production.
Originally published in 1868, The Earthly Paradise is considered William Morris's most popular poem. An epic poem that features legends, myths and stories from Europe, sectioned into the twelve months of the year.
the world well lost
All for Love; or, the World Well Lost, is a 1677 heroic drama by John Dryden which is now his best-known and most performed play. It is an acknowledged imitation of Shakespeare's Antony and Cleopatra, and focuses on the last hours of the lives of its hero and heroine.
Roman poets knew her best as an enemy of their fabulous ancestors, and in the “Æneid,” Virgil’s hero draws his sword to slay her.
Book 2, lines 559-587, Aeneas Sees Helen
Then for the first time a wild terror gripped me.
I stood amazed: my dear father’s image rose before me
as I saw a king, of like age, with a cruel wound,
breathing his life away: and my Creusa, forlorn,
and the ransacked house, and the fate of little Iulus.
I looked back, and considered the troops that were round me.
They had all left me, wearied, and hurled their bodies to earth,
or sick with misery dropped into the flames.
So I was alone now, when I saw the daughter of Tyndareus,
Helen, close to Vesta’s portal, hiding silently
in the secret shrine: the bright flames gave me light,
as I wandered, gazing everywhere, randomly.
Afraid of Trojans angered at the fall of Troy,
Greek vengeance, and the fury of a husband she deserted,
she, the mutual curse of Troy and her own country,
had concealed herself and crouched, a hated thing, by the altars.
Fire blazed in my spirit: anger rose to avenge my fallen land,
and to exact the punishment for her wickedness.
“Shall she, unharmed, see Sparta again and her native Mycenae,
and see her house and husband, parents and children,
and go in the triumphant role of a queen,
attended by a crowd of Trojan women and Phrygian servants?
When Priam has been put to the sword? Troy consumed with fire?
The Dardanian shore soaked again and again with blood?
No. Though there’s no great glory in a woman’s punishment,
and such a conquest wins no praise, still I will be praised
for extinguishing wickedness and exacting well-earned
punishment, and I’ll delight in having filled my soul
with the flame of revenge, and appeased my people’s ashes.”
in Christopher Marlowe's drama, Doctor Faustus, Scene 13
Was this the face that launch'd a thousand ships,
And burnt the topless towers of Ilium?
Sweet Helen, make me immortal with a kiss.
Her lips suck forth my soul: see, where it flies!
Come, Helen, come, give me my soul again.
Here will I dwell, for heaven is in these lips,
And all is dross that is not Helena.
I will be Paris, and for love of thee,
Instead of Troy, shall Wittenberg be sack'd;
And I will combat with weak Menelaus,
And wear thy colours on my plumed crest;
Yea, I will wound Achilles in the heel,
And then return to Helen for a kiss.
O, thou art fairer than the evening air
Clad in the beauty of a thousand stars;
Brighter art thou than flaming Jupiter
When he appear'd to hapless Semele;
More lovely than the monarch of the sky
In wanton Arethusa's azur'd arms;
And none but thou shalt be my paramour!
Christopher Marlowe (26 February 1564 – 30 May 1593) was an English playwright, poet and translator of the Elizabethan era. Marlowe is among the most famous of the Elizabethan playwrights. Based upon the "many imitations" of his play Tamburlaine, modern scholars consider him to have been the foremost dramatist in London in the years just before his mysterious early death.
the epithalamium of Theocritus
An epithalamium is a poem written specifically for the bride on the way to her marital chamber.
In the hands of the poets the epithalamium was developed into a special literary form, and received considerable cultivation. Sappho, Anacreon, Stesichorus and Pindar are all regarded as masters of the species, but the finest example preserved in Greek literature is the 18th Idyll of Theocritus, which celebrates the marriage of Menelaus and Helen.
Theocritus (/θiːˈɒkrɪtəs/; Greek: Θεόκριτος, Theokritos; born c. 300 BC, died after 260 BC) was a Greek poet from Sicily and the creator of Ancient Greek pastoral poetry.
Romance and poetry have nothing less plausible than the part which Cleopatra actually played in the history of the world, a world well lost by Mark Antony for her sake. The flight from Actium might seem as much a mere poet’s dream as the gathering of the Achaeans at Aulis, if we were not certain that it is truly chronicled.
The drama of Cleopatra and Mark Antony was similar to the story of Helen and Paris: a love story that became bound up with a massive war that changed the course of political history.
“Beyond these voices there is peace.”
Idylls of the King by Lord Alfred Tennyson: Guinevere
To where beyond these voices there is peace.
The mocking Lucian, in his Vera Historia
A True Story (Ancient Greek: Ἀληθῆ διηγήματα, Alēthē diēgēmata; Latin: Vera Historia or Latin: Verae Historiae), also translated as True History, is a long novella or short novel written in the second century AD by the Greek author Lucian of Samosata. The novel is a satire of outlandish tales that had been reported in ancient sources, particularly those that presented fantastic or mythical events as if they were true.
It is the earliest known work of fiction to include travel to outer space, alien lifeforms, and interplanetary warfare. It has been described as "the first known text that could be called science fiction"
Ariston of Sparta (6th century BC), Eurypontid King of Sparta
described Paris’s journey, in quest of a healing spell, to the forsaken Œnone, and her refusal to aid him; her death on his funeral pyre.
Philoctetes shoots Paris with his poisoned arrows, grazing him on the hand and striking him in the groin. Paris, mortally wounded, tries to get help from his first wife, Oenone, who spurns him because of his affair with Helen. Paris dies. Priam laments that he was his second-best son, and Helen curses the position he put her in. Oenone regrets her actions and commits suicide by jumping onto Paris's funeral pyre. They are buried next to one another, their headstones facing opposite ways.
Quintus Smyrnaeus (also Quintus of Smyrna; Greek: Κόϊντος Σμυρναῖος, Kointos Smyrnaios) was a Greek epic poet whose Posthomerica, following "after Homer" continues the narration of the Trojan War. The dates of Quintus Smyrnaeus' life and poetry are disputed: by tradition, he is thought to have lived in the latter part of the 4th century AD, but early dates have also been proposed.
His epic in fourteen books, known as the Posthomerica, covers the period between the end of Homer's Iliad and the end of the Trojan War.
He slew Achilles by an arrow-shot in the Scaean gate, and prophecy was fulfilled. He himself fell by another shaft, perhaps the poisoned shaft of Philoctetes.
In the Iliad, when Hector was dying, his last words were directed to his killer, Achilles. He prophesied Achilles's death that would soon follow: "I know you what you are, and was sure that I should not move you, for your heart is hard as iron; look to it that I bring not heaven's anger upon you on the day when Paris and Phoebus Apollo, valiant though you be, shall slay you at the Scaean gates."
This prophecy was quickly fulfilled: Achilles died while scaling the gate of Troy, hit by an arrow shot by Prince Paris, the brother of Hector, but guided by Apollo himself. The arrow hit the hero's heel, the only vulnerable part of his body, because when his mother Thetis dipped him in the River Styx as an infant, she held him by one of his heels.
Philoctetes was a Greek hero, famed as an archer, and a participant in the Trojan War.
Philoctetes was the subject of four different plays of ancient Greece, each written by one of the three major Greek tragedians. Of the four plays, Sophocles' Philoctetes is the only one that has survived. Sophocles' Philoctetes at Troy, Aeschylus' Philoctetes and Euripides' Philoctetes have all been lost, with the exception of some fragments.
Philoctetes is also mentioned in Homer's Iliad, Book 2, which describes his exile on the island of Lemnos, his being wounded by snake-bite, and his eventual recall by the Greeks. The recall of Philoctetes is told in the lost epic Little Iliad, where his retrieval was accomplished by Diomedes. Philoctetes killed three men at Troy.
In the Sophocles play, he did manage to kill Paris. He shot four times: the first arrow went wide; the second struck his bow hand; the third hit him in the right eye; the fourth hit him in the heel, so there was no need of a fifth shot.
Euripides has made this idea, which was calculated to please him, the groundwork of his “Helena,”
Helen is a tragedy written by Euripides, 412 BC
About thirty years before this play, Herodotus argued in his Histories that Helen had never in fact arrived at Troy, but was in Egypt during the entire Trojan War.
Herodotus heard a story from the Egyptians who claimed that Helen and Paris made a visit to Egypt while on their way to Troy. When the ruling pharaoh discovered their licentious affair he arrested Helen so that she could be returned to her husband, while Paris was asked to leave alone. Hence, Herodotus argued, Helen never set foot on the soil of Troy, or else the Trojans would have given her up at some point in the ten-year-long war just to save their skins. Euripides, the master playwright who lived in Athens in the fifth century BC, twisted this story to create a plot in which Helen never falls in love with Paris, or for that matter, with anyone else other than her husband. In his play Helen, the goddess Hera orders Hermes to create a look-alike of Helen out of thin air. It is this illusory Helen who makes love to Paris and elopes with him, while Hermes carries the real Helen away to a temple in Egypt. Menelaus captures Troy but doesn't find his wife there, only to meet her in a dramatic climax on the sands of Egypt.
whenever a lady’s character needed to be saved
Basically: "We need this lady to be chaste and moral, but still have an adultery... we'll make her lover disguise as her husband!"
the story of Zeus and Amphitryon
Amphitryon was a mythical prince who married Alcmene, who gave birth to twin sons, Iphicles and Heracles. Only the former was the son of Amphitryon because Heracles was the son of Zeus, who had visited Alcmene during Amphitryon's absence. Zeus, disguised as Amphitryon, described the victory over the sons of Pterelaus in such convincing detail that Alcmene accepted him as her husband.
the goddess who personifies persuasion and seduction
In art, Peitho is often represented with Aphrodite during the abduction of Helen, symbolizing the forces of persuasion and love at work during the scene. Her presence at the event may be interpreted as either Paris needing persuasion to claim Helen as a prize for choosing Aphrodite, or Helen needing to be persuaded to accompany him to Troy, as Helen's level of agency became a popular topic of discussion in the 5th century. Peitho's presence raises the question of whether mortals have the ability to resist her power or whether they are bound to her persuasive abilities.
In the “Odyssey,” she is at home again, playing the gracious part of hostess to Odysseus’s wandering son, pouring into the bowl the magic herb of Egypt, “which brings forgetfulness of sorrow.”
The Odyssey: Book 4.
Odysseus's son Telemachus was visiting Helen and Menelaus in Sparta. They cried a lot from sorrow about Odysseus. Helen slipped a drug into their wine to make them forget their sorrows.
As Stesichorus fabled that only an eidolon of Helen went to Troy
In ancient Greek literature, an eidolon (plural: eidola or eidolons; Greek εἴδωλον 'image, idol, double, apparition, phantom, ghost') is a spirit-image of a living or dead person; a shade or phantom look-alike of the human form. The concept of Helen of Troy's eidolon was explored both by Homer and Euripides. However, where Homer uses the concept as a free-standing idea that gives Helen life after death, Euripides entangles it with the idea of kleos, one being the product of the other. Both Euripides and Stesichorus, in their respective accounts of the Trojan War, claim that Helen was never physically present in the city at all.
“Shadows we are, and shadows we pursue.”
What shadows we are, and what shadows we pursue.
― Edmund Burke
The early lyric poet Stesichorus is said to have written harshly against Helen. She punished him with blindness.
Stesichorus (/stəˈsɪkərəs/; Greek: Στησίχορος, Stēsichoros; c. 630 – 555 BC) was a Greek lyric poet. He is best known for telling epic stories in lyric metres, and for some ancient traditions about his life, such as his opposition to the tyrant Phalaris.
Legend says that he was blinded for writing abuse of Helen and recovered his sight after writing an encomium of Helen, the Palinode, as the result of a dream.
A palinode or palinody is an ode in which the writer retracts a view or sentiment expressed in an earlier poem. The first recorded use of a palinode is in a poem by Stesichorus in the 7th century BC, in which he retracts his earlier statement that the Trojan War was all the fault of Helen.
the witch of the Brocken
The Brocken, also sometimes referred to as the Blocksberg, is the highest peak of the Harz mountain range and also the highest peak of Northern Germany; it is located near Schierke in the German state of Saxony-Anhalt between the rivers Weser and Elbe.
The Brocken has always played a role in legends and has been connected with witches and devils; Johann Wolfgang von Goethe took up the legends in his play Faust. The Brocken spectre is a common phenomenon on this misty mountain, where a climber's shadow cast upon fog creates eerie optical effects.
the famous “Coffer of Cypselus,” a work of the seventh century, B.C., which Pausanias saw at Olympia, in A.D. 174. Here, on a band of ivory, was represented, among other scenes from the tale of Troy, Menelaus rushing, sword in hand, to slay Helen.
the Chest of Cypselus at Olympia (Pausanias V. 17-19)
PAUSANIAS, DESCRIPTION OF GREECE 5.16-27 - Theoi Classical Texts Library
Levirate marriage is a type of marriage in which the brother of a deceased man is obliged to marry his brother's widow.
There is a late legend that he had a son, Corythus, by Œnone, and that he killed the lad in a moment of jealousy, finding him with Helen and failing to recognise him
Parthenius, Erotica Pathemata 34
Corythus, son of Paris and the nymph Oenone. After Paris abandoned Oenone, she sent the boy, now grown, to Troy, where he fell in love with Helen, and she received him warmly. Paris, discovering this, killed him, not recognizing his own son. Corythus was also said to have been, instead, the son of Helen and Paris.
Phidias or Scopas
Two of the most famous ancient Greek sculptors.
A small town of Laconia and the supposed birthplace of Helen; it has not been located archaeologically.
the naked Goddesses, to seek at the hand of the most beautiful of mortals the prize of beauty. Aphrodite won the golden apple from the queen of heaven, Hera, and from the Goddess of war and wisdom, Athena, bribing the judge by the promise of the fairest wife in the world.
The Judgement of Paris, where Paris was asked to decide who is "the fairest" among the three goddesses. Paris chose Aphrodite.
In Greek mythology, Oenone (/ɪˈnoʊniː/; Ancient Greek: Οἰνώνη Oinōnē; "wine woman") was the first wife of Paris of Troy, whom he abandoned for Helen.
The Eurotas (Ancient Greek: Εὐρώτας) or Evrotas (modern Greek: Ευρώτας) is the main river of Laconia and one of the major rivers of the Peloponnese. Sparta is on its shore.
"With good manners; well behaved; conforming with accepted standards of behaviour."
Servius, the commentator on Virgil
Servius was a late fourth-century and early fifth-century grammarian, with the contemporary reputation of being the most learned man of his generation in Italy; he was the author of a set of commentaries on the works of Virgil. These works, In tria Virgilii Opera Expositio, constituted the first incunable to be printed at Florence, by Bernardo Cennini, 1471.
the son of Priam play the dastard in the fight
Menelaus (king of Sparta, husband of Helen) had just had a duel with Paris, prince of Troy (son of Priam, king of Troy). Paris just ran away in cowardice.
"dastard" means "coward"
she turns in wrath on Aphrodite, who would lure her back to his arms
Helen's third appearance in the Iliad is with Aphrodite, whom Helen takes to task. Aphrodite is in disguise, but Helen sees straight through it. Aphrodite, representing blind lust, appears before Helen to summon her to Paris' bed at the conclusion of the duel between Menelaus and Paris, which had ended with the survival of both men due to the cowardice of Paris. Helen is aggravated with Aphrodite and her approach to life. Helen insinuates that Aphrodite would really like Paris for herself.
"Goddess, why do you wish to deceive me so? Are you going to take me still further off,  to some well populated city somewhere in Phrygia or beautiful Maeonia, because you're in love with some mortal man and Menelaus has just beaten Paris and wants to take me, a despised woman, 450 back home with him? Is that why you're here, you and your devious trickery? Why don't you go with Paris by yourself, stop walking around here like a goddess, stop directing your feet toward Olympus, and lead a miserable life with him, caring for him, until he makes you his wife  or slave. I won't go to him in there — that would be shameful, serving him in bed. Every Trojan woman would revile me afterwards. 460 Besides, my heart is hurt enough already." (Book III)
Helen has no real choice in whether or not to go to Paris' room. She will go, but since she is concerned with what the others think, she covers herself up so as not to be recognized as she goes to Paris' bedchamber.
Uther beguiled Ygerne
Disguised as Gorlois by Merlin, Uther Pendragon is able to enter Tintagel to satisfy his lust. He manages to rape Igraine by deceit – she believes that she is lying with her husband and becomes pregnant with Arthur. Her husband Gorlois dies in battle that same night.
Bishop of Thessalonica, Eustathius
Eustathius of Thessalonica (1115 -- 1195) was a Byzantine Greek scholar and Archbishop of Thessalonica. He is most noted for his contemporary account of the sack of Thessalonica by the Normans in 1185, for his orations and for his commentaries on Homer, which incorporate many remarks by much earlier researchers.
we only see it reflected in the eyes of the old men, white and weak, thin-voiced as cicalas: but hers is a loveliness “to turn an old man young.” “It is no marvel,” they say, “that for her sake Trojans and Achaeans slay each other.”
Book 3 of the Iliad,
And there they were, gathered around Priam, Panthous and Thymoetes, Lampus and Clytius, Hicetaon the gray aide of Ares, then those two with unfailing good sense, Ucalegon and Antenor. The old men of the realm held seats above the gates. Long years had brought their fighting days to a halt but they were eloquent speakers still, clear as cicadas settled on treetops, lifting their voices through the forest, rising softly, falling, dying away ... So they waited, the old chiefs of Troy, as they sat aloft the tower. And catching sight of Helen moving along the ramparts, they murmured one to another, gentle, winged words: "Who on earth could blame them? Ah, no wonder the men of Troy and Argives under arms have suffered years of agony all for her, for such a woman. Beauty, terrible beauty! A deathless goddess-so she strikes our eyes!
Virgil, of a widow’s
Queen Dido, also known as Elissa, was the legendary founder and first queen of the Phoenician city-state of Carthage. In Virgil's Aeneid, after her husband's death, she fell in love with Aeneas and killed herself when Aeneas left her.
Apollonius Rhodius sings (and no man has ever sung so well) of a maiden’s love
Apollonius of Rhodes (Ancient Greek: Ἀπολλώνιος Ῥόδιος Apollṓnios Rhódios; Latin: Apollonius Rhodius; fl. first half of 3rd century BC) was an ancient Greek author, best known for the Argonautica, an epic poem about Jason and the Argonauts and their quest for the Golden Fleece.
In Argonautica, there's the story of Medea who fell in love with Jason.
“Where falls not hail, or rain, or any snow.”
Morte d'Arthur by Alfred, Lord Tennyson | Poetry Foundation
To the island-valley of Avilion; Where falls not hail, or rain, or any snow, Nor ever wind blows loudly; but it lies Deep-meadow'd, happy, fair with orchard-lawns And bowery hollows crown'd with summer sea, Where I will heal me of my grievous wound."
the tale of Helen is without a beginning and without an end, like a frieze on a Greek temple
Helen appears six times in the Iliad.
In her first appearance she is weaving; in her sixth she is at Hector's funeral, about to return to her former husband Menelaus.
The Iliad says nothing about Helen's birth or death.
that there might be a song in the ears of men of later time
part of Helen's angry speech to her brother-in-law Hector, complaining about her husband Paris, in Book 6, around line 440
"Hector, you are my brother, and I'm a horrible, conniving bitch. I wish that on that day my mother bore me some evil wind had come, carried me away, and swept me off, up into the mountains, or into waves of the tumbling, crashing sea, 430 then I would have died before this happened. But since gods have ordained these evil things, I wish I'd been wife to a better man,  someone sensitive to others' insults, with feeling for his many shameful acts. This husband of mine has no sense now, and he won't acquire any in the future. I expect he'll get from that what he deserves. But come in, sit on this chair, my brother, since this trouble really weighs upon your mind— 440 all because I was a bitch—because of that and Paris' folly, Zeus gives us an evil fate, so we may be subjects for men's songs in generations yet to come." (Book VI)
her who, having never lived, can never die
Helen is a character of myth, and thus cannot live or die in the way physical humans do.
the Daughter of the Swan
Leda and the Swan is a story and subject in art from Greek mythology in which the god Zeus, in the form of a swan, seduces or rapes Leda. According to later Greek mythology, Leda bore Helen and Polydeuces, children of Zeus, while at the same time bearing Castor and Clytemnestra, children of her husband Tyndareus, the King of Sparta.
Mary Stuart (Maria Verticordia)
"Turner of Hearts"
Mary, Queen of Scots ( December 1542 – 8 February 1587), also known as Mary Stuart, was Queen of Scotland from 14 December 1542 until her forced abdication in 1567.
Mary had once claimed Elizabeth's throne as her own and was considered the legitimate sovereign of England by many English Catholics, including participants in a rebellion known as the Rising of the North. Perceiving Mary as a threat, Elizabeth had her confined in various castles and manor houses in the interior of England. After eighteen and a half years in captivity, Mary was found guilty of plotting to assassinate Elizabeth in 1586 and was beheaded the following year at Fotheringhay Castle. Mary's life, marriages, lineage, alleged involvement in plots against Elizabeth, and subsequent execution established her as a divisive and highly romanticised historical character, depicted in culture for centuries.
the Pompadour and the Parabère
Jeanne Antoinette Poisson, Marquise de Pompadour, aka "Madame de Pompadour", was a member of the French court. She was the official chief mistress of King Louis XV from 1745 to 1751, and remained influential as court favourite until her death.
Marie Madeleine de La Vieuville, Marquise of Parabère (1693-1755), was a French aristocrat. She was the official mistress of Philippe II, Duke of Orléans, during his tenure as regent of France during the minority of the infant King Louis XV of France. That role made her a well known public figure during the French regency years (1715-1723).
la belle Stuart
Frances Teresa Stewart, Duchess of Richmond and Lennox (8 July 1647 - 15 October 1702) was a prominent member of the Court of the Restoration and famous for refusing to become a mistress of Charles II of England. For her great beauty she was known as La Belle Stuart and served as the model for an idealised, female Britannia.
Rosamund Clifford (before 1150 – c. 1176), often called "The Fair Rosamund" or the "Rose of the World" (rosa mundi), was famed for her beauty and was a mistress of King Henry II of England, famous in English folklore.
Agnès Sorel (1422 - 9 February 1450), known by the sobriquet Dame de beauté (Lady of Beauty), was a favourite and chief mistress of King Charles VII of France, by whom she bore four daughters. She is considered the first officially recognized royal mistress of a French king.
Argive Helen. During three thousand years fair women have been born, have lived, and been loved
Helen of Troy, in Greek mythology, the most beautiful woman in the world.
Most of her story comes from Homer's Iliad, which scholarly consensus mostly places in the 8th century BC, about 2,700 years ago.
- Feb 2022
Tripartite Miracle of Snomis of Feronia
search turned up no results
the Hubble Effect, the Rostov-Lysenko Syndrome and the LePage Amplification Synchronoclasmique
Three fictional concepts from the 1964 cosmic horror short story "The Illuminated Man" by J G Ballard.
“The Illuminated Man” (1964) was apparently the basis for Ballard’s later novel ‘The Crystal World’. It follows a journalist granted access to a mysterious crystalline growth off the Florida peninsula, and the futile resistance of the plants, animals and people against it. One particularly chilling scene depicts a priest marooned in his church, who sacrifices himself to the growth. The style of the story, and the effect of the interminable growth on the characters, reminded me a little of the style and characters in the game ‘Bioshock’, where the societal and physical decline of Rapture turns the inhabitants slowly insane.
I don't see how it's relevant to cosmology.
CMB as Trauma Map
The cosmic microwave background, in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the universe, also known as "relic radiation". The CMB is faint cosmic background radiation filling all space.
Here, the idea is that the universe suffered a terrible original trauma at the Big Bang, and the CMB can be used as a map to trace how the universe's later traumas developed from that original one.
In psychoanalysis, the primal scene (German: Urszene) is the initial unconscious fantasy of a child of a sex act, between the parents, that organises the psychosexual development of that child.
The expression "primal scene" refers to the sight of sexual relations between the parents, as observed, constructed, or fantasized by the child and interpreted by the child as a scene of violence. The scene is not understood by the child, remaining enigmatic but at the same time provoking sexual excitement.
What It’s Like to Be
parody of "What Is It Like to Be a Bat?" (1974)
Summary of the story:
The world is made of 4 elements: water, fire, air, earth. Humans are made of dust (earth) and petroleum (water), so they could be thought of as merely "dust-soup".
Other than humans, there are also demons ("Jinn" in Arabic), made of air and fire, living and working on the Outside of human morality and understanding. They are very much on the Outside, which is why they can also be called "xeno-agents".
One important demon is Pazuzu, the demon of dry winds that blow up dry dust, cause crops to fail, and bring infective particles of plague.
Pazuzu is by nature destructive, but some humans do communicate with it through magical rituals, in order to redirect its destruction to ends more useful to humans. This makes Pazuzu part of the "Axis of Evil-against-Evil", a group of demons that some humans have historically communicated with (and perhaps worshipped).
Read for example Pazuzu: Beyond Good and Evil | The Metropolitan Museum of Art
the relationship between supernatural creatures and humans in the ancient Near East. Although they are often classified as either evil or protective in modern scholarship, supernatural beings in ancient references seem to be presented as largely amoral. Their harmful or beneficial effects could be manipulated: they could be appeased with offerings and incantations, and even directed against each other by a skilled practitioner of magic. Pazuzu, as a powerful demon, was frequently set up as a shield against another supernatural terror: Lamashtu, a female demon with broad and far-ranging destructive powers, especially feared by pregnant women and those with newborns, who were her favored (but not only) victims.
Statuette of the demon Pazuzu with an inscription | Louvre Museum | Paris
The inscription on the back of the wings describes the figure's personality: "I am Pazuzu, son of Hanpa, king of the evil spirits of the air which issues violently from mountains, causing much havoc."
As one example of how one can communicate with the demons, consider "Rammalie", the art of reading the ripples on sand dunes. The idea is similar to Chinese oracle-bone divination:
Diviners would submit questions to deities regarding future weather, crop planting, the fortunes of members of the royal family, military endeavors, and other similar topics. These questions were carved onto the bone or shell in oracle bone script using a sharp tool. Intense heat was then applied with a metal rod until the bone or shell cracked due to thermal expansion.
It is also similar to haruspicy, which is divination by reading the shape of the guts of a sacrificed animal.
Rammalie is reading the mind of Pazuzu by looking at the curvy patterns on sand dunes after the dry wind has blown over them. The idea is that, since Pazuzu is the demon of the dry wind, reading the traces of the dry wind allows one to read Pazuzu's mind.
Moreover, one might even "talk" to Pazuzu by intentionally writing in the sand, arranging sand patterns, and using pebbles and other devices. The dry wind would blow over them, and these structures would shape its flow, thus "speaking" to Pazuzu.
The Axis of Evil-against-Evil is very dangerous to humans even if it can be useful. Indeed, the ancient Assyrian civilization was completely destroyed by the demons that its people worshipped and communicated with.
The "human security system" is fighting a desperate losing battle against all the demons.
Lama, Lamma, or Lamassu is an Assyrian protective deity.
Perhaps Lamashtu is meant here?
Pazuzu has almost always been a talisman, a small portable talisman for protection against illness. In a certain period he was always represented in opposition to Lamashtu, the only demon uglier than he is, who poisoned swamps with typhus or malaria. Pazuzu was the only defense, besides never drinking swamp water. When the symptoms of Lamashtu appeared, a specialist would intervene, carefully using the sacred language to invoke Pazuzu, who would be able to push Lamashtu into hell, freeing the victim.
The following 2 paragraphs are unreadable.
Don't read them.
extract a wide array of pest-insurgencies from the security system not by possessing it (in the sense of seizing a property from the monopoly of the Divine — for example, the human as belonging to God), but by turning the Divine and its secured properties into intermediate parasites (pimps) for incoming diseases
The demons are not trying to steal humans from God or humanity. Instead, they are trying to make humans into a common property for all demons to use.
Instead of turning private property into private property owned by a different entity, they want to turn humans into common property.
Leyili and Majnun
Qays ibn al-Mullawah fell in love with Layla. He soon began composing poems about his love for her, mentioning her name often. His obsessive effort to woo the girl caused some locals to call him "Majnun." When he asked for her hand in marriage, her father refused because it would be a scandal for Layla to marry someone considered mentally unbalanced. Soon after, Layla was forcibly married to another noble and rich merchant...
When Majnun heard of her marriage, he fled the tribal camp and began wandering the surrounding desert. His family eventually gave up hope for his return and left food for him in the wilderness. He could sometimes be seen reciting poetry to himself or writing in the sand with a stick.
Layla ... eventually died. In some versions, Layla dies of heartbreak from not being able to see her beloved. Majnun was later found dead in the wilderness in 688 AD, near Layla's grave. He had carved three verses of poetry on a rock near the grave, which are the last three verses attributed to him.
French for "master work". The best work done by an artist.
The Rub' al Khali (Arabic for "Empty Quarter") is the sand desert encompassing most of the southern third of the Arabian Peninsula.
In the Middle East, the Arabic word Jin (or Jinn) refers to a race created by Allah prior to the creation of humans, made of fire and thus capable of shape-shifting
... invisible entities in the Qur'an, who roamed the earth before Adam, created by God out of "fire and air"...
Belief in jinn is not included among the six articles of Islamic faith, as belief in angels is, however many Muslim scholars believe it essential to the Islamic faith. Many scholars regard their existence and ability to enter human bodies as part of the aqida (theological doctrines)...
making the man a traffic zone of sweeping cosmodromic data
A spaceport or cosmodrome is a site for launching (or receiving) spacecraft, by analogy to a seaport for ships or an airport for aircraft.
As for what "cosmodromic data" means here, I have no idea. Perhaps it's supposed to mean that the possessed human becomes a "port" where all kinds of demons can visit. Some demons come from underground, some from the sun, some from the empty outer space.
Aisha Qandisha or Aisha Qadisha or Ghediseh
Aisha Qandisha or Aicha Kandida is the name of a malicious cannibal water jinn in the folklore of Morocco, and was known as a goddess of lust. When luring her human victims she is described as a beautiful young woman, but this enchantment conceals her gigantic size and hideous nature. A predatory being, she lurks on the banks of the River Sebu, around the Aquedal at Marrakech, and sometimes in the Sultan's Palace grounds, awaiting any lone man foolish enough to be taken in by her. Once he has approached her there is no escape, for soon she will reveal her true shape and consume him under the water. She hates humans and if her quarry cannot reach another human or inhabited dwelling in time, he is doomed. Sometimes she may be magnanimous and release back into his world a man who gratifies her willingly, laden with rich gifts.
The human defense mechanism is the most consistent entity on this planet
aka "Human Security System", the totality of all that keeps humans alive and thriving, and keeps all dangerous entities (like ancient gods, plagues, famines, runaway AI, etc) from appearing and destroying humanity.
It includes the electric grid, the Internet, academia, the police force, the UN, etc.
Jackson West, or "Colonel West", is basically the same character as Mr Kurtz in Heart of Darkness and Colonel Kurtz in Apocalypse Now.
Like them, Colonel West is disappointed by the timid philosophy of his supervisors, thinks that a more cruel and brutal method is necessary for "winning", leaves western civilization, and becomes a successful warlord.
"tell" means "mound"
The Mound of Kuyunjik is the archaeological site of ancient Assyrian Nineveh.
Egyptian Anubis and the dead.
Anubis is the Greek name of the god of death, mummification, embalming, the afterlife, cemeteries, tombs, and the Underworld, in ancient Egyptian religion, usually depicted as a man with a jackal head.
Humbaba’s labyrinthine face (with unicursal human entrails as the beard) recalls the early art of Haruspicy (divination using the liver or entrails) in ancient Mesopotamian cultures, later developed by the Etruscans.
the demon Humbaba (Huwawa), who was a monster featured and slaughtered in the Epic of Gilgamesh. This example of his features was used in the interpretation of omens. The convolutions of the mask represent the intestines of a sheep examined for divination.
Writing on a terracotta plaque in the British Museum (cat. no. 127), Sidney Smith interpreted its inscription as an explanatory caption that identifies the grotesque face on its other side as that of Humbaba. The text reads:
If the coils of the colon resemble the head of Huwawa, [this is] an omen of Sargon who ruled the land. If [the omen is] for a poor man, the house of the interested party will expand. [Written by] the hand of Warad-Marduk, diviner (bārû), son of Kubburum, diviner.
The sinuous lines that make up the face on the front of this plaque clearly recall the convolutions of the small intestine. When taken together with the text inscribed on the back, it is irresistible to link this distinctive appearance with the persona of Humbaba
Ugallu, the "Big Weather-Beast", was a wolf-headed storm-demon with the feet of a bird, featured on protective amulets and apotropaic yellow clay or tamarisk figurines of the first millennium BC, though he had his origins in the early second millennium.
He was one of the class of ud-demons (day-demons), personifying moments of divine intervention in human life.
Pazuzu the demon is by nature destructive to human endeavors, but its destructive energy can be redirected with the right magical occult rituals. Pazuzu might even be redirected to fight other evils, making it "Evil-against-Evil".
"Intended to ward off evil"
original Arabic name for the Necronomicon, a fictional tome of forbidden knowledge first mentioned in the short story The Hound written by H. P. Lovecraft.
"hard to see, but worm-shaped"
These four wings render the demon a perfect vehicle for carrying pestilential particles (Namtar) and delivering them to their destination without delay, always promptly on time.
In short, 4 wings means that this demon of the winds can fly really fast and spread disease really fast.
Perhaps another way to think about it is to imagine the demon as a giant insect, since insects have 4 wings.
Flight feathers (Pennae volatus) are the long, stiff, asymmetrically shaped, but symmetrically paired pennaceous feathers on the wings or tail of a bird; those on the wings are called remiges (/ˈrɛmɪdʒiːz/), singular remex (/ˈriːmɛks/), while those on the tail are called rectrices (/rɛkˈtraɪsiːs/), singular rectrix.