- Apr 2022
-
future.a16z.com
-
Technology helped save the world.
People helped save the world, technology was secondary, and we're not out of the woods yet. Tackling global warming is going to require even greater effort than COVID.
-
Permanently divorcing physical location from economic opportunity gives us a real shot at radically expanding the number of good jobs in the world while also dramatically improving quality of life for millions, or billions, of people.
More optimistic than necessary. Local infrastructure needs to be maintained and people need to feel like they are part of the community. If work and economic activity are decoupled from local community involvement, then that's not as great as one might think. In many cases it's probably an improvement, but it's not clear whether the long-term trend is sustainable.
-
It turns out companies really are capable of organizing and sustaining remote work even — perhaps especially — in the most sophisticated and complex fields.
It could also mean that these complex fields aren't as complicated as once thought. If the work can be accomplished without much human interaction then that's an indicator it is ripe for automation.
-
Finally, possibly the most profound technology-driven change of all — geography, and its bearing on how we live and work. For thousands of years, until the time of COVID, the dominant fact of every productive economy has been that people need to live where we work. The best jobs have always been in the bigger cities, where quality of life is inevitably impaired by the practical constraints of colocation and density.
The downside here is decoupling economic activities from the local community. This is not a good trend in the long run.
-
- Mar 2021
-
www.opendemocracy.net
-
Alongside globalisation – the capitalist rationalisation of space and time – we are witnessing the epistemic and technical rationalisation of the neuronal foundations of the self, or what Walker Percy called the abstraction of the self from itself.
We have reified a lot of implicit aspects of ourselves and it's hard to know what to do with this newfound knowledge. Right now this knowledge is subordinate to the machinery of capital but it doesn't have to be. This same understanding can be used for pro-social endeavors instead of making more and more money.
-
The lifting of temporal and geographical constraints on communication nurtures the illusion of unlimited accessibility and mobility.
The world is smaller but our brains are not capable of handling all this intimacy.
-
Using chemicals to improve our economy of attention and become emotionally "fitter" is an option that penetrated public consciousness some time ago.
Same is true of reinforcement learning algorithms.
-
They have become more significant because social interaction is governed by social convention to a much lesser extent than it was fifty years ago.
Probably because everything is now algorithmically mediated.
-
Eva Illouz's Cold Intimacies: The Making of Emotional Capitalism and Georg Franck's Mental Capitalism
Books to look into.
-
The possibility of pharmacological intervention thus expands the subjective autonomy of people to act in their own best interests or to their own detriment. This in turn is accompanied by a new form of self-reflection, which encompasses both structural images of the brain and the ability to imagine the neuro-chemical activity that goes on there. What is alarming is that many of the neuroscientific findings that have triggered a transformation in our perception of ourselves are linked with commercial interests.
The same can be said about reinforcement learning algorithms. Just replace "pharmacological intervention" with "algorithmic mediation of social interactions".
-
Reconceptualising joy as dopamine activity in the brain's reward centres, melancholy as serotonin deficiency, attention as the noradrenalin-induced modulation of stimulus-processing, and, not least, love as a consequence of the secretion of centrally acting bonding hormones, changes not only our perspective on emotional and mental states, but also our subjective experience of self. That does not mean that we experience the physiological side of feelings like love or guilt any differently, but it does make us think about them differently. This, in turn, changes the way we perceive, interpret and order them, and hence the effect they have on our behaviour.
Being aware of how we operate is probably worthwhile but not when this understanding is subverted to create more profits for owners of vast algorithmic empires, e.g. Facebook, Twitter, Google, etc.
-
Just as shift workers are sometimes given stimulants, so the point here was to adapt the innate neurobiological capacity of humans as a productive force to the technologies and rhythms of globalisation.
More sinister vibes of the machinery of capital.
-
We will never be able to predict with any certainty how altering instrumental and socio-affective traits will ultimately affect the reflexively structured human personality as a whole. Today's tacit assumption that neuro-psychotropic interventions are reversible is leading individuals to experiment on themselves. Yet even if certain mental states are indeed reversible, the memory of them may not be. The barriers to neuro-enhancement actually fell some time ago, albeit in ways that for a long time went unnoticed. Jet-lag-free short breaks to Bali, working for global companies with a twenty-four hour information flow from headquarters in Tokyo, Brussels and San Francisco, exams and assessments, medical emergency services
The machinery of capital requires productivity, and the psychopharmacology industry has stepped in to fulfill that requirement.
-
Among the best-selling neuro-psychotropic drugs are those that modulate the way people experience emotions and those that improve their capacity to pay attention and to concentrate, in most cases regardless of whether there is a clinically definable impairment of these functions.
Artificially enhancing focus because everything around us is now designed for maximum distraction.
-
Yet sales figures for certain neuro-psychotropic drugs are considerably higher than the incidence of the illnesses for which they are indicated would lead one to expect. This apparent paradox applies above all to neuropsychotropic drugs that have neuro-enhancement properties. The most likely explanation is that neuro-enhancers are currently undergoing millions of self-trials, including in universities – albeit probably not in their laboratories. The ten top-selling psychotropic substances in the USA include anti-depressants, neuroleptics (antipsychotics), stimulants and drugs for treating dementia.
-
Depression, anxiety or attention deficit disorders are now regarded by researchers and clinical practitioners alike as products of neuro-chemical dysregulation in interconnected systems of neurotransmitters. They are therefore treated with substances that intervene either directly or indirectly in the regulation of neurotransmitters.
Our tools and the systems that surround us have a direct effect on our brain structure, but psychopharmacology places the burden of fixing systemic issues on the individual by medicating those who do not fit the systemic mold. The drugs are, in effect, the system subduing any individual who might bring out the contradictions in the "machine".
This all seems pretty sinister and inhumane to me.
-
In other words, they are driven by economic and epistemic forces that emanate from the capitalism of today, and that will shape the capitalism of tomorrow – whatever that might look like.
-
Unlike the latter, however, the neurosciences are extremely well funded by the state and even more so by private investment from the pharmaceutical industry.
More reasons to be wary. The incentive structure for the research is mostly about control, which is a little sinister. It's not about helping people on their own terms; it's mostly about helping people become "good" citizens and participants in the state apparatus.
-
Depression, however, was also the first widespread mental illness for which modern neuroscience promptly found a remedy. Depression and anxiety were located in the gaps between the synapses, which is precisely where they were treated. Where previously there had only been reflexive psychotherapy, an interface had now been identified where suffering induced by the self and the world could now be alleviated directly and pre-reflexively. At this point, if not before, the unequal duo of capitalism and neuroscience was joined by a third partner. From now on, the blossoming pharmaceutical industry was to function as a kind of transmission belt connecting the two wheels and making them turn faster.
One good reason to be wary of psychopharmacology is that it is extremely profitable.
Tags
- systemic oppression
- psychology
- psychotropics
- incentive design
- small world
- state control
- sinister
- stimulants
- selfhood
- incentives
- depression
- books
- capitalism
- ai
- machinery of capital
- algorithms
- psychopharmacology
- reinforcement learning algorithms
- dementia
- state funding
- hope
- technology
- citizenship
- neuro-enhancement
- understanding
-
-
www.theverge.com
-
“There is very little incentive for Microsoft to make a significant change to features that are used extremely widely by the rest of the massive community of Excel users.”
Microsoft did the right thing here. Most users are not geneticists and they rely on automatic date conversion.
-
-
www.ejumpcut.org
-
James Whale’s Frankenstein (1931) and Bride of Frankenstein (1935) explored both the hubris of the male scientist described in Mary Shelley’s novel Frankenstein, or the Modern Prometheus (1818) as well as the repressive sexuality of Western culture. Robert Wise’s The Day the Earth Stood Still (1951) advocated for a liberal belief in the collective submission to a technocratic elite.
I initially found this article by searching for "alien movie hubris" and the search results did not disappoint. This essay does a great job weaving several themes about creativity, automation, intelligence, biology, culture, ambition, power, delusions of grandeur, human spirituality and sexuality, and a few more I'm probably forgetting. It's definitely worthwhile reading.
-
- Sep 2020
-
fermatslibrary.com
-
For example, the one-pass (hardware) translator generated a symbol table and reverse Polish code as in conventional software interpretive languages. The translator hardware (compiler) operated at disk transfer speeds and was so fast there was no need to keep and store object code, since it could be quickly regenerated on-the-fly. The hardware-implemented job controller performed conventional operating system functions. The memory controller provided
A hardware-assisted compiler is a fantastic idea. TPUs from Google are essentially this: hardware assistance for the matrix multiplication operations in machine learning workloads created by tools like TensorFlow.
-
-
electricliterature.com
-
It’s no coincidence that “aspiration” means both hope and the act of breathing.
All speech is aspirational.
-
-
www.oreilly.com
-
In this one fable is all of Herbert's wisdom. When people want the future to be like the present, they must reject what is different. And in what is different is the seed of change. It may look warped and stunted now, but it will be normal when we are gone.
Another echo of Feynman. Progress might not be inevitable but change is.
-
Among many analogues to the twentieth century, one might note that the very scientists who discovered the fundamental principles of relativity and physical uncertainty upon which Paul's teachings are based are considered purveyors of an absolute, priestly knowledge too difficult for the uninitiated public to understand.
A quote by Feynman is relevant here:
"Right. I don't believe in the idea that there are a few peculiar people capable of understanding math, and the rest of the world is normal. Math is a human discovery, and it's no more complicated than humans can understand. I had a calculus book once that said, 'What one fool can do, another can.' What we've been able to work out about nature may look abstract and threatening to someone who hasn't studied it, but it was fools who did it, and in the next generation, all the fools will understand it. There's a tendency to pomposity in all this, to make it deep and profound." - Richard Feynman, Omni 1979
-
Leto's vision goes much further, to a new evolutionary step in the history of mankind in which each individual will create his own myth, and solidarity will not be the solidarity of leaders and followers, but of all men as equal dreamers of the infinite.
I just like the phrase, "...equal dreamers of the infinite".
-
-
digitalhumanities.org
-
The trend had turned in the direction of digital machines, a whole new generation had taken hold. If I mixed with it, I could not possibly catch up with new techniques, and I did not intend to look foolish. [Bush 1970, 208]
One needs courage to endure looking foolish.
-
While the pioneers of digital computing understood that machines would soon accelerate human capabilities by doing massive calculations, Bush continued to be occupied with extending, through replication, human mental experience. [Nyce 1991, 124]
Ironic that adaptation was part of the memex and yet it did not adapt to the emerging field of digital computing.
-
In all versions of the Memex essay, the machine was to serve as a personal memory support. It was not a public database in the sense of the modern Internet: it was first and foremost a private device. It provided for each person to add their own marginal notes and comments, recording reactions to and trails from others' texts, and adding selected information and the trails of others by “dropping” them into their archive via an electro-optical scanning device. In the later adaptive Memex, these trails fade out if not used, and “if much in use, the trails become emphasized” [Bush 1970, 191] as the web adjusts its shape mechanically to the thoughts of the individual who uses it.
A personal memex must first and foremost be personal. No cloud based system can claim to be a memex because it loses the personal / private aspect.
-
So Memex was first and foremost an extension of human memory and the associative movements that the mind makes through information: a mechanical analogue to an already mechanical model of memory. Bush transferred this idea into information management; Memex was distinct from traditional forms of indexing not so much in its mechanism or content, but in the way it organised information based on association. The design did not spring from the ether, however; the first Memex design incorporates the technical architecture of the Rapid Selector and the methodology of the Analyzer — the machines Bush was assembling at the time.
How much further would Bush have gone if he had known about graph theory? He is describing a graph database with nodes and edges; a graph model is the key to the memex.
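Since the adaptive trails map so directly onto a weighted graph, here is a toy sketch of that reading (the module and function names are my own, nothing here comes from Bush or the article): documents are vertices, trails are weighted edges that are emphasized when followed and fade otherwise, using Erlang's digraph module.
    %% Toy model of the adaptive Memex as a weighted graph (hypothetical API).
    -module(memex_sketch).
    -export([new/0, add_doc/2, add_trail/3, follow/2, decay/1]).

    new() -> digraph:new().

    %% Documents are vertices.
    add_doc(Graph, DocId) -> digraph:add_vertex(Graph, DocId).

    %% A trail between two documents starts with weight 1.
    add_trail(Graph, From, To) -> digraph:add_edge(Graph, From, To, 1).

    %% Following a trail emphasizes it.
    follow(Graph, Edge) ->
        {Edge, From, To, Weight} = digraph:edge(Graph, Edge),
        digraph:add_edge(Graph, Edge, From, To, Weight + 1).

    %% Trails that are not used fade out over time.
    decay(Graph) ->
        [begin
             {E, From, To, Weight} = digraph:edge(Graph, E),
             digraph:add_edge(Graph, E, From, To, max(0, Weight - 1))
         end || E <- digraph:edges(Graph)],
        ok.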
-
Solutions were suggested (among them slowing down the machine, and checking abstracts before they were used) [Burke 1991, 154], but none of these were particularly effective, and a working machine wasn’t ready until the fall of 1943. At one stage, because of an emergency problem with Japanese codes, it was rushed to Washington — but because it was so unreliable, it went straight back into storage. So many parts were pulled out that the machine was never again operable [Burke 1991, 158]. In 1998, the Selector made Bruce Sterling’s Dead Media List, consigned forever to a lineage of failed technologies. Microfilm did not behave the way Bush and his team wanted it to. It had its own material limits, and these didn’t support speed of access.
People often get stuck on implementation details specific to their time, place, and context. Why didn't Bush consider other storage mechanisms?
-
In engineering science, there is an emphasis on working prototypes or “deliverables”. As Professor of Computer Science Andries van Dam put it in an interview with the author, when engineers talk about work, they mean “work in the sense of machines, software, algorithms, things that are concrete ” [Van Dam 1999]. This emphasis on concrete work was the same in Bush’s time. Bush had delivered something which had been previously only been dreamed about; this meant that others could come to the laboratory and learn by observing the machine, by watching it integrate, by imagining other applications. A working prototype is different to a dream or white paper — it actually creates its own milieu, it teaches those who use it about the possibilities it contains and its material technical limits. Bush himself recognised this, and believed that those who used the machine acquired what he called a “mechanical calculus”, an internalised knowledge of the machine. When the army wanted to build their own machine at the Aberdeen Proving Ground, he sent them a mechanic who had helped construct the Analyzer. The army wanted to pay the man machinist’s wages; Bush insisted he be hired as a consultant [Owens 1991, 24]. I never consciously taught this man any part of the subject of differential equations; but in building that machine, managing it, he learned what differential equations were himself … [it] was interesting to discuss the subject with him because he had learned the calculus in mechanical terms — a strange approach, and yet he understood it. That is, he did not understand it in any formal sense, he understood the fundamentals; he had it under his skin. (Bush 1970, 262 cited in Owens 1991, 24)
Learning is an act of creation. To understand something we must create mental and physical constructions. This is a creative process.
-
-
-
This is ultimately loss aversion
Good observation. Sunk cost and loss aversion muddy clear thinking patterns.
-
- Dec 2017
-
-
The three processes needed to be separated into three small, disposable programs.
Pipelines FTW
-
and system needs to leave enough edge-cases un-automated so that the users are continuously practiced and know how to use the tools well.
Again, I don't agree. The system should be reflective enough to let people reacquaint themselves with it as necessary, instead of making them pay the constant tax of manual labor.
-
Imagine a machine that receives wood in one end and outputs furniture. It's a completely sealed unit that's automated for safety and efficiency, but when a splinter gets stuck somewhere the machine's way of dealing with the problem is to dump the entire pile of unfinished parts into a heap. As the machine only has one input, and that input only takes raw wood, there's no way to fix the cause of the fault and resume the process where it left off.
This seems less like being "over-automated" and more like lacking proper debugging capabilities. The designers assumed perfect operation and so did not add the proper entry points for debugging and visibility.
-
When a peculiarity rears its head and gets in the way of a necessary change, there'll be less to demolish and rebuild.
Modularity is good but it adds overhead in other places. Maybe the overhead is justified but it requires discipline to keep under control because it too can balloon out of proportion relative to useful work being done by the system.
-
These are slow deaths because the cost to work around them on a daily basis eventually overwhelms the payoff from making the change. If the organization depending on this system doesn't die, then it'll be forced to replace the entire system at great risk.
Usually the organization will leverage cheap labor to work around the issues.
-
a peculiarity of your business got baked into the way the system works
The system is over-specialized for the present instead of the future.
-
their parts can't be swapped out
Incidental complexity from the way the problem was coded.
-
But most systems and computer programs are written to resist change. At the system level it's a problem of ignorance, while at the program or code level it might also be a consequence of reality, since code has to draw the line somewhere.
Many things are dynamic but code is static. This echoes Gerald Sussman's talk about us not knowing how to code.
-
-
www.yacoset.com
-
The workflow has to begin with the EDI document and use it as the bible all the way to the end. You pull the minimum details into your database for tracking and indexing, and build an abstraction layer to query the original document for everything that isn't performance sensitive (such as getting the spec for the shipping label). Now when the document's format changes, you only have to change the abstraction layer
Sure sounds like a monad. Build up the computation tree and then run it by passing in the required parameters.
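A rough sketch of what that abstraction layer could look like, with hypothetical names and naive X12-style parsing: only the minimum gets indexed, everything else is answered from the raw document through one query function, so a format change only touches that function.
    %% Hypothetical sketch of the "EDI document as the bible" idea.
    %% Assumes the raw document is a charlist with '~'-terminated segments
    %% and '*'-separated fields.
    -module(edi_sketch).
    -export([index/1, lookup/2]).

    %% Pull only the minimum details into the database record for tracking.
    index(RawDoc) ->
        #{order_id => find_segment(RawDoc, "BEG"), raw => RawDoc}.

    %% Everything that isn't performance sensitive is read from the original
    %% document on demand, through this one abstraction layer.
    lookup(#{raw := RawDoc}, SegmentId) ->
        find_segment(RawDoc, SegmentId).

    find_segment(RawDoc, SegmentId) ->
        Segments = string:split(RawDoc, "~", all),
        case [S || S <- Segments, lists:prefix(SegmentId, string:trim(S))] of
            [Seg | _] -> string:split(string:trim(Seg), "*", all);
            []        -> not_found
        end.
-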
You need to design your system so that validation takes place after a transaction has been received and recorded, and that the transaction is considered "on hold" until validation is performed
Sometimes regulatory practices can prevent this. Even though the user has a shitty experience there is nothing that can be done from the software side because the regulation spells out exactly how and what needs to happen in what sequence.
-
Your customers can probably tolerate a delay in fulfillment, but they'll go elsewhere if they can't even submit an order at all
This layer of indirection can be helpful, but it adds another component, the queue, that can fail. As long as the queue is less error-prone than the DB this is fine, but usually DBs are far more robust than queues.
-
Address correction services, for example, can identify addresses with missing apartment numbers, incorrect zip codes, etc. They can help you cut back on reshipping costs and penalty fees charged by UPS for false or inaccurate data. If this service fails you'll want your software to transparently time-out and pass the address through as-is, and the business will simply cope with higher shipping costs until the service is restored
Pass-through modes tend to go unnoticed. I think it is better to fail loudly than to silently pass things through.
-
All that you've done is create more possible failure modes.
Because each insurance policy is another component with its own failure modes.
-
A failure mode is a degradation of quality, but it is not yet a catastrophe. In your personal life and in your work, you should always think about what kind of quality you'll be limping along with if some component or assumption were to fail. If you find that quality is unpalatable, then it's time to go back to the drawing board and try again.
I've never seen this exercise performed.
-
At 4:00:36 a.m. on March 28, 1979 the pumps feeding the secondary cooling loop of reactor number 2 at the Three Mile Island nuclear plant in western Pennsylvania shut down. An alarm sounds in the control room, which is ignored because the backup pumps have automatically started. The backup pumps are on the loop of a 'D'-shaped section in the pipework. At the corners of this 'D' are the bypass valves, which are normally open, but were shut a day earlier so that technicians could perform routine maintenance on the Number 7 Polisher. Even though they completed the maintenance, they forgot to reopen the valves, meaning that the backup pumps are pumping vacuum instead of cooling water.
As pressure in the primary cooling system rose--from being unable to shift heat into the secondary loop--a Pressure Relief Valve (PORV) on top of the reactor vessel opens automatically and begins to vent steam and water into a tank in the floor of the containment building.
Nine seconds have elapsed since the pumps failed and now control rods made of boron and silver are automatically lowered into the reactor core to slow down the reaction. In the control room the indicator light for the PORV turns off, but the PORV is still open: its failure mode is "fail open", like how a fire-escape door is always installed on the outer rim of the door-jamb.
Water as well as steam begin to vent from the PORV, a condition known as a Loss Of Coolant Accident. At two minutes into the accident, Emergency Injection Water (EIW) is automatically activated to replace the lost coolant. The human operators see the EIW has turned on, but believe that the PORV is shut and that pressure is decreasing, so they switch off the EIW.
At the eight minute mark, an operator notices that the bypass valves of the secondary cooling loop are closed, and so he opens them. Gauges in the control room falsely report that the water level is high, when in fact it has been dropping. At an hour and 20 minutes into the accident, the pumps on the primary cooling loop begin to shake from steam being forced through them. An operator mistakes this to mean the pumps are malfunctioning, so he shuts off half of them. These are the last two that were still in operation, so now there is no circulation of cooling water in the core at all, and the water level drops to expose the top of the core.
Superheated steam reacts with the zirconium alloy in the control rods, producing hydrogen gas that escapes through the PORV.
At two hours and 45 minutes, a radiation alarm sounds and a site emergency is declared and all non-essential staff are evacuated. Half of the core is now exposed, but the operators don't know it, and think that the temperature readings are erroneous.
Seven and a half hours into the accident, the operators decide to pump water into the primary loop and open a backup PORV valve to lower pressure.
Nine hours, and the hydrogen in the containment vessel explodes. This is heard as a dull thump, and the operators think it was a ventilator damper.
Fifteen hours in, and the primary loop pumps are turned back on. Half the core has melted, but now that water is circulating the core temperature is finally brought back under control.
But even if the operators had done nothing at all, Three Mile Island had an inherently high-quality failure mode: it was a negative void coefficient reactor. This meant that as steam increased (voids), the nuclear reaction decreased (negative coefficient).
Compare this to a reactor with a positive void coefficient, and a much lower quality failure mode.
This is a cool story. Seems like everything went wrong.
-
-
-
In a way, state and community colleges have already figured this out and have their students rebuild the school's web site and class signup software every few years as part of their CS curriculum. It gets thrown away a few years later by another batch of kids, and they always do a shitty job of it each time, but this is what it's like in the enterprise, too.
Many enterprises already buy into this model and outsource all the work to places like Palantir, IBM, Accenture, HP, etc.
-
institutionalized cluelessness
Interesting phrase. I'd heard it as institutional brain damage.
-
But the universe itself is the most malicious partner; it works hard to contrive problems that--by nature--will get worse if you try to solve them with the ponies and unicorns of your time. It sets you up for a fall before you've even started writing code.
Entropy always wins.
-
And there is such a strong economic incentive to solve a new problem with an old trick that good problems go misdiagnosed.
Always comes back to incentives and the failures of human heuristics.
-
-
www.scientificamerican.com
-
In a typical season most flu-related deaths occur among children and the elderly, both of whom are uniquely vulnerable. The immune system is an adaptive network of organs that learns how best to recognize and respond to threats over time. Because the immune systems of children are relatively naive, they may not respond optimally. In contrast the immune systems of the elderly are often weakened by a combination of age and underlying illness. Both the very young and very old may also be less able to tolerate and recover from the immune system's self-attack. Apart from children between six and 59 months and individuals older than 65 years, those at the greatest risk of developing potentially fatal complications are pregnant women, health care workers and people with certain chronic medical conditions, such as HIV/AIDS, asthma, and heart or lung diseases, according to the World Health Organization.
The system is always on the precipice of decline. It's a wonder anything in biology works at all. The cycles are all unstable or barely stable.
-
In most healthy adults this process works, and they recover within days or weeks. But sometimes the immune system's reaction is too strong, destroying so much tissue in the lungs that they can no longer deliver enough oxygen to the blood, resulting in hypoxia and death.
Evolution is a fucking joke. There is no intelligent design in any of this. Humans are an agglomeration of disparate pieces that barely just work.
-
-
erlang.org
-
If the entire tables are constant over a long time, you could generate them as modules. Nowadays, compile-time constants (even complex ones) are placed in a constant pool associated with the module. So, you could generate something like this: -module(autogen_table). -export([find/1]). find(0) -> {some_struct, "hello", ...}; ... find(99) -> {some_other_struct, <<"hi!">>} find(X) -> throw({not_found, X}). As far as I know, these constants will not be copied to the private heaps of the processes. The generated code will also give you the fastest possible lookup.
This is a pretty cool trick. It looks like a way to bypass the copying overhead is to write a function that only returns constants and doesn't actually compute anything.
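Laid out as actual Erlang, the generated module from the quote would look roughly like this (the struct contents are placeholders carried over from the original example):
    -module(autogen_table).
    -export([find/1]).

    find(0)  -> {some_struct, "hello"};
    %% ... one clause per constant key ...
    find(99) -> {some_other_struct, <<"hi!">>};
    find(X)  -> throw({not_found, X}).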
-
-
www.ostinelli.net
-
To summarize: without queuing mechanism: same Erlang node: 5.3 million messages/min; different Erlang nodes: 700 K messages/min. with queuing mechanism: same Erlang node: 5.3 million messages/min; different Erlang nodes: 2.1 million messages/min. The complete code to run this on your machine is available here. This whole ‘queuing idea’ is still an experiment, and I’d be more than delighted to hear your feedback, to see whether you are getting the same results, you know how to improve the concept or the code, or you have any considerations at all you would like to share.
I got here from the Discord blog post on how they optimized their performance, and it looks like the trick is to batch messages when sending to remote nodes. Seems kinda obvious, though, that batching messages would improve performance.
A trick to keep in the back pocket.
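A minimal sketch of the batching idea (module and message names are mine, not the code from the post): a proxy process accumulates messages bound for a remote pid and periodically forwards them as a single inter-node send.
    %% Hypothetical batching proxy for inter-node sends.
    -module(batch_sender).
    -export([start/2, send/2]).

    %% Start a proxy for RemotePid that flushes every FlushMs milliseconds.
    start(RemotePid, FlushMs) ->
        spawn(fun() -> loop(RemotePid, FlushMs, []) end).

    %% Callers hand messages to the proxy instead of sending to the remote
    %% pid directly.
    send(Proxy, Msg) ->
        Proxy ! {enqueue, Msg},
        ok.

    loop(RemotePid, FlushMs, Acc) ->
        receive
            {enqueue, Msg} ->
                loop(RemotePid, FlushMs, [Msg | Acc])
        after FlushMs ->
            case Acc of
                []    -> ok;
                Batch -> RemotePid ! {batch, lists:reverse(Batch)} % one distributed send
            end,
            loop(RemotePid, FlushMs, [])
        end.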
-
-
blog.discordapp.com
-
After doing some research, we found mochiglobal, a module that exploits a feature of the VM: if Erlang sees a function that always returns the same constant data, it puts that data into a read-only shared heap that processes can access without copying the data. mochiglobal takes advantage of this by creating an Erlang module with one function at runtime and compiling it.
This is a cool trick and it sounds like partial evaluation and just-in-time compilation.
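A hedged sketch of how the trick can be done (my own names; mochiglobal's real API differs): build the abstract forms for a module whose single function returns the constant, compile them, and load the binary. Because the function always returns the same term, the term lives in the module's literal pool and reads don't copy it onto each process heap.
    %% Hypothetical sketch of compiling a constant into a module at runtime.
    -module(constant_pool_sketch).
    -export([put_term/2]).

    %% put_term(my_cache, BigTerm) creates and loads a module exporting
    %% my_cache:value() -> BigTerm.
    put_term(Mod, Term) ->
        Forms = [{attribute, 1, module, Mod},
                 {attribute, 2, export, [{value, 0}]},
                 {function, 3, value, 0,
                  [{clause, 3, [], [], [erl_parse:abstract(Term)]}]}],
        {ok, Mod, Bin} = compile:forms(Forms),
        {module, Mod} = code:load_binary(Mod, atom_to_list(Mod) ++ ".erl", Bin),
        ok.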
-
An Erlang VM responsible for sessions can have up to 500,000 live sessions on it.
This is pretty impressive: half a million processes.
-
Finally, those workers send the messages to the actual processes. This ensures the partitioner does not get overloaded and still provides the linearizability guaranteed by send/2. This solution was effectively a drop-in replacement for send/2:
The opening says they weren't going to shard, but how is this not sharding?
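My guess at the rough shape of the partitioner-plus-workers scheme described above (not Discord's actual code): hash the destination pid onto a fixed set of workers so every message to a given destination flows through the same worker, which preserves per-destination ordering.
    %% Hypothetical sketch of hashing destinations onto a fixed worker pool.
    -module(partition_sketch).
    -export([start/1, send/3]).

    %% Spawn a fixed tuple of worker processes.
    start(NumWorkers) ->
        list_to_tuple([spawn(fun worker/0) || _ <- lists:seq(1, NumWorkers)]).

    %% Pick the worker by hashing the destination pid, so ordering per
    %% destination is preserved.
    send(Workers, DestPid, Msg) ->
        Idx = erlang:phash2(DestPid, tuple_size(Workers)) + 1,
        element(Idx, Workers) ! {forward, DestPid, Msg},
        ok.

    worker() ->
        receive
            {forward, DestPid, Msg} ->
                DestPid ! Msg,
                worker()
        end.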
-
We knew we had to somehow distribute the work of sending messages. Since spawning processes in Erlang is cheap, our first guess was to just spawn another process to handle each publish. However, each publish could be scheduled at a different time, and Discord clients depend on linearizability of events. That solution also wouldn’t scale well because the guild service was also responsible for an ever-growing amount of work.
So it seems like the plan was to do the send in another spawned process, but that would be a hell of a lot of processes and, as they mention, it would lose the "linear" aspect of publishing messages.
-
session process (a GenServer), which then communicates with remote Erlang nodes that contain guild (internal for a “Discord Server”) processes (also GenServers)
So it looks like the coordination happens across nodes with these "guild" processes.
-
How Discord Scaled Elixir to 5,000,000 Concurrent Users
Is this across the entire set of clusters or is this per single node or set of nodes for a given "guild"?
-