41 Matching Annotations
  1. Sep 2020
    1. For example, the one-pass (hardware) translator generated a symbol table and reverse Polish code as in conventional software interpretive languages. The translator hardware (compiler) operated at disk transfer speeds and was so fast there was no need to keep and store object code, since it could be quickly regenerated on-the-fly. The hardware-implemented job controller performed conventional operating system functions. The memory controller provided

      A hardware-assisted compiler is a fantastic idea. TPUs from Google are essentially this: hardware assistance for the matrix multiplication operations in machine learning workloads created by tools like TensorFlow.

    1. It’s no coincidence that “aspiration” means both hope and the act of breathing.

      All speech is aspirational.

    1. In this one fable is all of Herbert's wisdom. When people want the future to be like the present, they must reject what is different. And in what is different is the seed of change. It may look warped and stunted now, but it will be normal when we are gone.

      Another echo of Feynman. Progress might not be inevitable but change is.

    2. Among many analogues to the twentieth century, one might note that the very scientists who discovered the fundamental principles of relativity and physical uncertainty upon which Paul's teachings are based are considered purveyors of an absolute, priestly knowledge too difficult for the uninitiated public to understand.

      A quote by Feynman is relevant here:

      "Right. I don't believe in the idea that there are a few peculiar people capable of understanding math, and the rest of the world is normal. Math is a human discovery, and it's no more complicated than humans can understand. I had a calculus book once that said, ‘What one fool can do, another can." What we've been able to work out about nature may look abstract and threatening to someone who hasn't studied it, but it was fools who did it, and in the next generation, all the fools will understand it. There's a tendency to pomposity in all this, to make it deep and profound." - Richard Feynman, Omni 1979

    3. Leto's vision goes much further, to a new evolutionary step in the history of mankind in which each individual will create his own myth, and solidarity will not be the solidarity of leaders and followers, but of all men as equal dreamers of the infinite.

      I just like the phrase, "...equal dreamers of the infinite".

    1. The trend had turned in the direction of digital machines, a whole new generation had taken hold. If I mixed with it, I could not possibly catch up with new techniques, and I did not intend to look foolish.  [Bush 1970, 208]

      One needs courage to endure looking foolish.

    2. While the pioneers of digital computing understood that machines would soon accelerate human capabilities by doing massive calculations, Bush continued to be occupied with extending, through replication, human mental experience.  [Nyce 1991, 124]

      Ironic that adaptation was part of the memex and yet it did not adapt to the emerging field of digital computing.

    3. In all versions of the Memex essay, the machine was to serve as a personal memory support. It was not a public database in the sense of the modern Internet: it was first and foremost a private device. It provided for each person to add their own marginal notes and comments, recording reactions to and trails from others' texts, and adding selected information and the trails of others by “dropping” them into their archive via an electro-optical scanning device. In the later adaptive Memex, these trails fade out if not used, and “if much in use, the trails become emphasized”  [Bush 1970, 191] as the web adjusts its shape mechanically to the thoughts of the individual who uses it.

      A personal memex must first and foremost be personal. No cloud-based system can claim to be a memex, because it loses the personal/private aspect.

    4. So Memex was first and foremost an extension of human memory and the associative movements that the mind makes through information: a mechanical analogue to an already mechanical model of memory. Bush transferred this idea into information management; Memex was distinct from traditional forms of indexing not so much in its mechanism or content, but in the way it organised information based on association. The design did not spring from the ether, however; the first Memex design incorporates the technical architecture of the Rapid Selector and the methodology of the Analyzer — the machines Bush was assembling at the time.

      How much further would Bush have gone if he had known about graph theory? He is describing a graph database with nodes and edges, and the graph model itself is the key to the memex.

    5. Solutions were suggested (among them slowing down the machine, and checking abstracts before they were used) [Burke 1991, 154], but none of these were particularly effective, and a working machine wasn’t ready until the fall of 1943. At one stage, because of an emergency problem with Japanese codes, it was rushed to Washington — but because it was so unreliable, it went straight back into storage. So many parts were pulled out that the machine was never again operable [Burke 1991, 158]. In 1998, the Selector made Bruce Sterling’s Dead Media List, consigned forever to a lineage of failed technologies. Microfilm did not behave the way Bush and his team wanted it to. It had its own material limits, and these didn’t support speed of access.

      People often get stuck on specific implementation details that are specific to their time, place, and context. Why didn't Bush consider other storage mechanisms?

    6. In engineering science, there is an emphasis on working prototypes or “deliverables”. As Professor of Computer Science Andries van Dam put it in an interview with the author, when engineers talk about work, they mean “work in the sense of machines, software, algorithms, things that are concrete” [Van Dam 1999]. This emphasis on concrete work was the same in Bush’s time. Bush had delivered something which had previously only been dreamed about; this meant that others could come to the laboratory and learn by observing the machine, by watching it integrate, by imagining other applications. A working prototype is different to a dream or white paper — it actually creates its own milieu, it teaches those who use it about the possibilities it contains and its material technical limits. Bush himself recognised this, and believed that those who used the machine acquired what he called a “mechanical calculus”, an internalised knowledge of the machine. When the army wanted to build their own machine at the Aberdeen Proving Ground, he sent them a mechanic who had helped construct the Analyzer. The army wanted to pay the man machinist’s wages; Bush insisted he be hired as a consultant [Owens 1991, 24]. I never consciously taught this man any part of the subject of differential equations; but in building that machine, managing it, he learned what differential equations were himself … [it] was interesting to discuss the subject with him because he had learned the calculus in mechanical terms — a strange approach, and yet he understood it. That is, he did not understand it in any formal sense, he understood the fundamentals; he had it under his skin.  (Bush 1970, 262 cited in Owens 1991, 24)

      Learning is an act of creation. To understand something we must create mental and physical constructions. This is a creative process.

    1. This is ultimately loss aversion

      Good observation. Sunk cost and loss aversion muddy clear thinking patterns.

  2. Dec 2017
    1. The three processes needed to be separated into three small, disposable programs.

      Pipelines FTW

    2. and system needs to leave enough edge-cases un-automated so that the users are continuously practiced and know how to use the tools well.

      Again, I don't agree. The system should be reflective enough to allow people to reacquaint themselves with the system as necessary instead of paying the constant tax of manual labor.

    3. Imagine a machine that receives wood in one end and outputs furniture. It's a completely sealed unit that's automated for safety and efficiency, but when a splinter gets stuck somewhere the machine's way of dealing with the problem is to dump the entire pile of unfinished parts into a heap. As the machine only has one input, and that input only takes raw wood, there's no way to fix the cause of the fault and resume the process where it left off.

      This seems like not having the proper debugging capabilities instead of being "over-automated". The designers assumed perfect operation and so did not add the proper entry points for debugging and visibility.

    4. When a peculiarity rears its head and gets in the way of a necessary change, there'll be less to demolish and rebuild.

      Modularity is good, but it adds overhead in other places. Maybe the overhead is justified, but it requires discipline to keep under control, because it too can balloon out of proportion relative to the useful work being done by the system.

    5. These are slow deaths because the cost to work around them on a daily basis eventually overwhelms the payoff from making the change. If the organization depending on this system doesn't die, then it'll be forced to replace the entire system at great risk.

      Usually the organization will leverage cheap labor to work around the issues.

    6. a peculiarity of your business got baked into the way the system works

      The system is over-specialized for the present instead of the future.

    7. their parts can't be swapped out

      Incidental complexity from the way the problem was coded.

    8. But most systems and computer programs are written to resist change. At the system level it's a problem of ignorance, while at the program or code level it might also be a consequence of reality, since code has to draw the line somewhere.

      Many things are dynamic but code is static. This echoes Gerald Sussman's talk about us not knowing how to code.

    1. The workflow has to begin with the EDI document and use it as the bible all the way to the end. You pull the minimum details into your database for tracking and indexing, and build an abstraction layer to query the original document for everything that isn't performance sensitive (such as getting the spec for the shipping label). Now when the document's format changes, you only have to change the abstraction layer

      Sure sounds like a monad. Build up the computation tree and then run it by passing in the required parameters.
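
      A minimal sketch of that abstraction layer, written in Erlang to match the code elsewhere on this page (edi_doc and its field names are assumptions, and the document is modeled as an already-parsed map): the database stores only what the index function returns, and every other lookup goes through one module that knows the document format.

        %% Illustrative sketch; edi_doc and its field names are made up.
        %% A format change in the EDI document is contained in this module.
        -module(edi_doc).
        -export([index_fields/1, shipping_label_spec/1]).

        %% Minimum details pulled into the database for tracking and indexing.
        index_fields(Doc) ->
            #{order_id => maps:get(order_id, Doc),
              customer => maps:get(customer, Doc)}.

        %% Not performance sensitive: read from the original document on demand.
        shipping_label_spec(Doc) ->
            maps:get(shipping_label_spec, Doc, undefined).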

    2. You need to design your system so that validation takes place after a transaction has been received and recorded, and that the transaction is considered "on hold" until validation is performed

      Sometimes regulatory practices can prevent this. Even though the user has a shitty experience, there is nothing that can be done from the software side, because the regulation spells out exactly what needs to happen, how, and in what sequence.

    3. Your customers can probably tolerate a delay in fulfillment, but they'll go elsewhere if they can't even submit an order at all

      This layer of indirection can be helpful, but it adds another point of failure: the queue. As long as the queue is less error-prone than the DB this is fine, but usually DBs are far more robust than queues.

    4. Address correction services, for example, can identify addresses with missing apartment numbers, incorrect zip codes, etc. They can help you cut back on reshipping costs and penalty fees charged by UPS for false or inaccurate data. If this service fails you'll want your software to transparently time-out and pass the address through as-is, and the business will simply cope with higher shipping costs until the service is restored

      Pass-through modes tend to go unnoticed. I think it is better to fail loudly rather than simply pass things through.
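
      A hedged illustration of the tension between the quote and this note (address_check and its message protocol are made up, not a real service API): the call still times out and passes the address through as-is, but it returns a distinct tag and logs an error, so the degraded mode is at least loud enough to be noticed.

        %% Illustrative sketch; the correction-service protocol is made up.
        -module(address_check).
        -export([correct/2]).

        %% Ask the correction service, but never let it block the order:
        %% on timeout, log loudly and pass the address through unchanged.
        correct(ServicePid, Address) ->
            Ref = make_ref(),
            ServicePid ! {correct, self(), Ref, Address},
            receive
                {Ref, Corrected} -> {ok, Corrected}
            after 2000 ->
                logger:error("address correction timed out; passing through as-is"),
                {passthrough, Address}
            end.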

    5. All that you've done is create more possible failure modes.

      Because each insurance policy is another component with its own failure modes.

    6. A failure mode is a degradation of quality, but it is not yet a catastrophe. In your personal life and in your work, you should always think about what kind of quality you'll be limping along with if some component or assumption were to fail. If you find that quality is unpalatable, then it's time to go back to the drawing board and try again.

      I've never seen this exercise performed.

    7. At 4:00:36 a.m. on March 28, 1979 the pumps feeding the secondary cooling loop of reactor number 2 at the Three Mile Island nuclear plant in western Pennsylvania shut down. An alarm sounds in the control room, which is ignored because the backup pumps have automatically started. The backup pumps are on the loop of a 'D'-shaped section in the pipework. At the corners of this 'D' are the bypass valves, which are normally open, but were shut a day earlier so that technicians could perform routine maintenance on the Number 7 Polisher. Even though they completed the maintenance, they forgot to reopen the valves, meaning that the backup pumps are pumping vacuum instead of cooling water.

       As pressure in the primary cooling system rose--from being unable to shift heat into the secondary loop--a Pressure Relief Valve (PORV) on top of the reactor vessel opens automatically and begins to vent steam and water into a tank in the floor of the containment building.

       Nine seconds have elapsed since the pumps failed and now control rods made of boron and silver are automatically lowered into the reactor core to slow down the reaction. In the control room the indicator light for the PORV turns off, but the PORV is still open: its failure mode is "fail open", like how a fire-escape door is always installed on the outer rim of the door jamb.

       Water as well as steam begin to vent from the PORV, a condition known as a Loss Of Coolant Accident. At two minutes into the accident, Emergency Injection Water (EIW) is automatically activated to replace the lost coolant. The human operators see the EIW has turned on, but believe that the PORV is shut and that pressure is decreasing, so they switch off the EIW.

       At the eight-minute mark, an operator notices that the bypass valves of the secondary cooling loop are closed, and so he opens them. Gauges in the control room falsely report that the water level is high, when in fact it has been dropping. At an hour and 20 minutes into the accident, the pumps on the primary cooling loop begin to shake from steam being forced through them. An operator mistakes this to mean the pumps are malfunctioning, so he shuts off half of them. These are the last two that were still in operation, so now there is no circulation of cooling water in the core at all, and the water level drops to expose the top of the core.

       Superheated steam reacts with the zirconium alloy in the control rods, producing hydrogen gas that escapes through the PORV.

       At two hours and 45 minutes, a radiation alarm sounds, a site emergency is declared, and all non-essential staff are evacuated. Half of the core is now exposed, but the operators don't know it, and think that the temperature readings are erroneous.

       Seven and a half hours into the accident, the operators decide to pump water into the primary loop and open a backup PORV valve to lower pressure.

       Nine hours, and the hydrogen in the containment vessel explodes. This is heard as a dull thump, and the operators think it was a ventilator damper.

       Fifteen hours in, and the primary loop pumps are turned back on. Half the core has melted, but now that water is circulating the core temperature is finally brought back under control.

       But even if the operators had done nothing at all, Three Mile Island had an inherently high-quality failure mode: it was a negative void coefficient reactor. This meant that as steam increased (voids), the nuclear reaction decreased (negative coefficient). Compare this to a reactor with a positive void coefficient, and a much lower quality failure mode.

      This is a cool story. Seems like everything went wrong.

    1. In a way, state and community colleges have already figured this out and have their students rebuild the school's web site and class signup software every few years as part of their CS curriculum. It gets thrown away a few years later by another batch of kids, and they always do a shitty job of it each time, but this is what it's like in the enterprise, too.

      Many enterprises already buy into this model and outsource all the work to places like Palantir, IBM, Accenture, HP, etc.

    2. institutionalized cluelessness

      Interesting phrase. I'd heard it as institutional brain damage.

    3. But the universe itself is the most malicious partner; it works hard to contrive problems that--by nature--will get worse if you try to solve them with the ponies and unicorns of your time. It sets you up for a fall before you've even started writing code.

      Entropy always wins.

    4. And there is such a strong economic incentive to solve a new problem with an old trick that good problems go misdiagnosed.

      Always comes back to incentives and the failures of human heuristics.

    1. In a typical season most flu-related deaths occur among children and the elderly, both of whom are uniquely vulnerable. The immune system is an adaptive network of organs that learns how best to recognize and respond to threats over time. Because the immune systems of children are relatively naive, they may not respond optimally. In contrast the immune systems of the elderly are often weakened by a combination of age and underlying illness. Both the very young and very old may also be less able to tolerate and recover from the immune system's self-attack. Apart from children between six and 59 months and individuals older than 65 years, those at the greatest risk of developing potentially fatal complications are pregnant women, health care workers and people with certain chronic medical conditions, such as HIV/AIDS, asthma, and heart or lung diseases, according to the World Health Organization.

      The system is always on the precipice of decline. It's a wonder anything in biology works at all. The cycles are all unstable or barely stable.

    2. In most healthy adults this process works, and they recover within days or weeks. But sometimes the immune system's reaction is too strong, destroying so much tissue in the lungs that they can no longer deliver enough oxygen to the blood, resulting in hypoxia and death.

      Evolution is a fucking joke. There is no intelligent design in any of this. Humans are an agglomeration of disparate pieces that just barely work.

    1. If the entire tables are constant over a long time, you could generate them as modules. Nowadays, compile-time constants (even complex ones) are placed in a constant pool associated with the module. So, you could generate something like this:

       -module(autogen_table).
       -export([find/1]).

       find(0) -> {some_struct, "hello", ...};
       ...
       find(99) -> {some_other_struct, <<"hi!">>};
       find(X) -> throw({not_found, X}).

       As far as I know, these constants will not be copied to the private heaps of the processes. The generated code will also give you the fastest possible lookup.

      This is a pretty cool trick. So it looks like a way to bypass the copying overhead is to generate a function that only returns constant terms and doesn't actually compute anything.
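
      A minimal sketch of that generation step, assuming a simple key/value table (the autogen_table name and the find/1 shape come from the quoted answer; gen_table and everything else are made up for illustration): write the clauses out as an .erl source file and compile it, so the constant terms end up in the module's literal pool instead of being copied onto each process heap.

        %% Illustrative sketch; generates a constant lookup module from a
        %% list of {Key, Value} pairs. Each pair becomes one find/1 clause
        %% whose body is a literal term.
        -module(gen_table).
        -export([write_module/2]).

        write_module(Path, Pairs) ->
            Clauses = [io_lib:format("find(~p) -> ~p;~n", [K, V]) || {K, V} <- Pairs],
            Src = ["-module(autogen_table).\n",
                   "-export([find/1]).\n\n",
                   Clauses,
                   "find(X) -> throw({not_found, X}).\n"],
            ok = file:write_file(Path, Src),
            {ok, autogen_table} = compile:file(Path, [{outdir, filename:dirname(Path)}]),
            ok.

      For example, gen_table:write_module("autogen_table.erl", [{0, {some_struct, "hello"}}, {99, {some_other_struct, <<"hi!">>}}]) would produce a module whose lookups return shared literals without copying.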

    1. To summarize:

       Without queuing mechanism: same Erlang node: 5.3 million messages/min; different Erlang nodes: 700 K messages/min.
       With queuing mechanism: same Erlang node: 5.3 million messages/min; different Erlang nodes: 2.1 million messages/min.

       The complete code to run this on your machine is available here. This whole ‘queuing idea’ is still an experiment, and I’d be more than delighted to hear your feedback, to see whether you are getting the same results, you know how to improve the concept or the code, or you have any considerations at all you would like to share.

      I got here from the Discord blog post on how they optimized their performance, and it looks like the trick is to batch messages when sending to remote nodes. It seems kinda obvious, though, that batching messages would improve performance.

      A trick to keep in the back pocket.
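
      A hedged sketch of the batching idea (not the code from the post; the module and message names are made up): a local forwarder process accumulates messages destined for one remote receiver and flushes them as a single batch every few milliseconds, so the cost of the inter-node hop is paid once per batch instead of once per message.

        %% Illustrative sketch of batching messages across nodes; names are made up.
        -module(batcher).
        -export([start/2, send/2]).

        %% Start a forwarder for one remote receiver pid; FlushMs is how long
        %% to accumulate messages before flushing a batch.
        start(RemoteReceiver, FlushMs) ->
            spawn_link(fun() -> loop(RemoteReceiver, FlushMs, []) end).

        %% Enqueue a message on the local batcher instead of sending it
        %% directly across the node boundary.
        send(Batcher, Msg) ->
            Batcher ! {enqueue, Msg},
            ok.

        loop(RemoteReceiver, FlushMs, Acc) ->
            receive
                {enqueue, Msg} ->
                    loop(RemoteReceiver, FlushMs, [Msg | Acc])
            after FlushMs ->
                case Acc of
                    [] -> ok;
                    _  -> RemoteReceiver ! {batch, lists:reverse(Acc)}
                end,
                loop(RemoteReceiver, FlushMs, [])
            end.

      The receiving side would unpack {batch, Messages} and deliver them locally, which is where the per-message cost becomes cheap again.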

    1. After doing some research, we found mochiglobal, a module that exploits a feature of the VM: if Erlang sees a function that always returns the same constant data, it puts that data into a read-only shared heap that processes can access without copying the data. mochiglobal takes advantage of this by creating an Erlang module with one function at runtime and compiling it.

      This is a cool trick and it sounds like partial evaluation and just-in-time compilation.
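
      A minimal sketch of the same idea done at runtime (this is not mochiglobal's actual API; const_store and const_store_data are made-up names): build the abstract forms for a one-function module whose body is the literal term, compile them in memory, and load the result, so later reads come from the module's shared literal area instead of copying the term between process heaps.

        %% Illustrative sketch; not mochiglobal's actual API.
        -module(const_store).
        -export([store/1, fetch/0]).

        %% Compile and load a module const_store_data with a single
        %% zero-arity function value/0 that returns Term as a literal.
        store(Term) ->
            Forms = [{attribute, 1, module, const_store_data},
                     {attribute, 2, export, [{value, 0}]},
                     {function, 3, value, 0,
                      [{clause, 3, [], [], [erl_parse:abstract(Term, 3)]}]}],
            {ok, Mod, Bin} = compile:forms(Forms, []),
            {module, Mod} = code:load_binary(Mod, "const_store_data.erl", Bin),
            ok.

        %% Read the shared constant; the term is not copied onto the
        %% caller's heap.
        fetch() ->
            const_store_data:value().

      The "partial evaluation" feel comes from baking the data into code, so every subsequent call is just a constant lookup.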

    2. An Erlang VM responsible for sessions can have up to 500,000 live sessions on it.

      This is pretty impressive: half a million processes.

    3. Finally, those workers send the messages to the actual processes. This ensures the partitioner does not get overloaded and still provides the linearizability guaranteed by send/2. This solution was effectively a drop-in replacement for send/2:

      The opening says they weren't going to shard, but how is this not sharding?
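
      A hedged sketch of what such a partitioner might look like (an assumption about the shape of the solution, not Discord's actual code; send_pool and its messages are made up): a fixed pool of long-lived sender workers, with the worker chosen by hashing the destination pid. Every message for a given destination goes through the same worker, so per-destination ordering is preserved, while the total send work is spread across the pool.

        %% Illustrative sketch; requires OTP 21.2+ for persistent_term.
        -module(send_pool).
        -export([start/1, send/2]).

        %% Spawn N sender workers and remember them in persistent_term.
        start(NumWorkers) ->
            Workers = [spawn_link(fun loop/0) || _ <- lists:seq(1, NumWorkers)],
            persistent_term:put(?MODULE, list_to_tuple(Workers)),
            ok.

        %% Route the message through the worker picked by hashing Dest,
        %% so all messages to Dest stay in order.
        send(Dest, Msg) ->
            Workers = persistent_term:get(?MODULE),
            Index = erlang:phash2(Dest, tuple_size(Workers)) + 1,
            element(Index, Workers) ! {forward, Dest, Msg},
            ok.

        loop() ->
            receive
                {forward, Dest, Msg} ->
                    Dest ! Msg,
                    loop()
            end.

      One way to read the "is this sharding?" question: the guild's state still lives in a single process, and only the fan-out work is partitioned across workers.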

    4. We knew we had to somehow distribute the work of sending messages. Since spawning processes in Erlang is cheap, our first guess was to just spawn another process to handle each publish. However, each publish could be scheduled at a different time, and Discord clients depend on linearizability of events. That solution also wouldn’t scale well because the guild service was also responsible for an ever-growing amount of work.

      So it seems like the plan was to do each send in another spawned process, but that would be a hell of a lot of processes and, as they mention, it would lose the "linear" aspect of publishing messages.

    5. session process (a GenServer), which then communicates with remote Erlang nodes that contain guild (internal for a “Discord Server”) processes (also GenServers)

      So it looks like the coordination happens across nodes via these "guild" processes.

    6. How Discord Scaled Elixir to 5,000,000 Concurrent Users

      Is this across the entire set of clusters, or per single node, or per set of nodes for a given "guild"?