145 Matching Annotations
  1. Last 7 days
    1. As society scales up, gossip becomes ineffective. Rumors don’t spread easily from village to village, so I can get away with violating norms when I venture out and deal with strangers.

      Gossip doesn't scale, mostly because rumors don't spread from village to village. As a result you can get away with violating norms when you deal with strangers.

    2. Someone who is skilled at acquiring and spreading information can have high status in the tribe. As receivers of gossip, we are excited to be “in on the secret.”

      A gossip-monger can be afforded status in the group because of what he knows.

    3. Telling lies may eventually get you in trouble, but not necessarily. If the victims of a false rumor are unable to fight back effectively, people who engage in false gossip may be successful.

      Telling lies as gossip may result in the same benefits as telling truths, especially if the victims are not able to fight back.

    4. To escape from the chaos, we will need new norms of behavior that incline us away from gossip.

      To balance out this gossip-driven world, Arnold Kling argues we need new norms of behavior (I would argue perhaps we need new mechanisms) to incline us away from gossip.

    5. The result is that we are living through a period of chaos. Symptoms include conspiracy theories, information bubbles, cancel culture, President Trump’s tweets, and widespread institutional decay and dysfunction.

      Symptoms of this chaotic, gossip-run world include conspiracy theories, information bubbles, cancel culture, Trump's tweets, and widespread institutional decay and dysfunction.

    6. We have increased the power of gossip-mongers and correspondingly reduced the power of elite institutions of the 20th century, including politicians, mainstream media, and scientists.

      The scaling up of the gossip mechanism on top of ISS has resulted in an increase in power for gossip-mongers and a decrease in power for the institutions we relied on before: politicians, mainstream media, scientists.

    7. Our ISS technology changes this. It makes it possible to gossip effectively at large scale. This in turn has revived our propensity to rely on gossip. Beliefs spread without being tested for truth.

      Internet, Smartphones and Social Media (ISS) allow gossip to take place at a larger scale. Arnold Kling suggests that because of this, we've come to rely more on it than we used to.

      One consequence of gossip being scaled up by ISS, and gossip not being about the truth, is that we have a proliferation of beliefs without them being tested for truth.

    8. But gossip is not the search for truth. It is a search for approval by attacking the perceived flaws of others.

      Gossip is detached from objective truth. Fundamentally it is not about truth; it is about gaining approval (~status?) by attacking perceived flaws in others.

    9. Large societies need other enforcement mechanisms: government, religion, written codes.

      Larger groups, such as societies, use other mechanisms to enforce norms, such as: government, religion, written codes.

    10. As a social enforcement mechanism, gossip does not scale.

      Gossip does not scale to larger groups as an enforcement mechanism for social norms.

    11. Human evolution produced gossip. Cultural anthropology sees gossip as an informal way of enforcing group norms. It is effective in small groups.

      Gossip evolved as a strategy to enforce group norms and it is effective in small groups.

    1. Declarative programming is an enabler of abstraction. Imperative programming is an inhibitor of abstraction. Declarative programming allows you to say “I want this and I don’t care how I get it” while imperative programming requires you to define each and every step.

      Declarative programming, i.e. "build me a house, I don't care how", is an enabler of abstraction.

      Imperative programming, i.e. "build walls, windows, a roof", is an inhibitor of abstraction.
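
      A small sketch of the contrast (the function names and the sum-of-even-squares task are my own, not from the quote):

      ```typescript
      // Imperative: define each and every step of the iteration yourself.
      function sumOfEvenSquaresImperative(xs: number[]): number {
        let total = 0;
        for (let i = 0; i < xs.length; i++) {
          if (xs[i] % 2 === 0) {
            total += xs[i] * xs[i];
          }
        }
        return total;
      }

      // Declarative: say "I want the sum of the squares of the evens"
      // and let filter/map/reduce decide how to get it.
      function sumOfEvenSquaresDeclarative(xs: number[]): number {
        return xs
          .filter((x) => x % 2 === 0)
          .map((x) => x * x)
          .reduce((acc, x) => acc + x, 0);
      }
      ```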

    1. By 2005 blogs had crashed the cultural gates. China’s editors, station directors, and publishers had always acted as cultural “gatekeepers:” deciding who could and couldn’t become known through publication, TV and film appearances, and musical performances. In a major cultural power-shift, pop cultural icons could emerge through blogs, forums, chatrooms, and personal websites, completely outside of the government approved cultural structures. But while the Communist Party propaganda department had lost control over China’s culture, in the realm of politics the gates and walls are constantly being rebuilt, upgraded, and reinforced. It would be impossible for a dissident political leader to rise to popularity in the same way that Mu Zimei rose to stardom.

      Even though China's publishing class lost control as cultural gatekeepers with the advent of blogs, the Communist Party propaganda department constantly rebuilds, upgrades and reinforces the gates.

    2. This situation is reinforced by recent survey results—surprising to many Westerners—showing that most urban Chinese Internet users actually trust domestic sources of news and information more than they trust the information found on foreign news websites (Guo et al. 2005, pp. 66–67).

      Survey results reveal that Chinese citizens trust domestic sources more than foreign sources.

      This is a curious result and something I'm beginning to see in the West. I wonder if it's a result of their policies. I wonder if this means that the filtering and manufacturing of opinion is successful.

    3. While the Chinese government has supported the development of the Internet as a tool for business, entertainment, education, and information exchange, it has succeeded in preventing people from using the Internet to organize any kind of viable political opposition.

      The Chinese government has succeeded in leveraging the internet to generate economic benefits, without succumbing to its predicted democratizing effects.

    4. They are determined to prevent the Internet from serving as a tool for “color revolution” in the way that online media and communication tools empowered activists in Ukraine and Lebanon. Thus in 2005 the Chinese government updated its regulations controlling online news and information, and aggressively leaned on organizations hosting online chatrooms and blogs to stop the spread of online discussions about recent local government crackdowns against farmer protests in the Chinese countryside.

      China is determined to not have the internet serve as a tool that helps bring about another color revolution, like in Ukraine and Lebanon.

      In the past they've leaned aggressively on organizations hosting discussions about government crackdowns.

    1. Though government statements emphasize anti-pornography crackdowns, ONI found the primary focus of China's filtering system to be on political content. Public security organs and internet service providers employ thousands of people – nationwide, at multiple levels – as monitors and censors. Their job is to monitor everything posted online by ordinary Chinese people and to delete objectionable content.

      The Chinese government employs thousands of people to monitor and censor content. Their job is to filter out anything objectionable that gets posted.

  2. Oct 2020
    1. Most people seem to follow one of two strategies - and these strategies come under the umbrella of tree-traversal algorithms in computer science.

      Deciding whether you want to go deep into one topic, or explore more topics, can be seen as a choice between two types of tree-traversal algorithms: depth-first and breadth-first.

      This also reminds me of the Explore-Exploit problem in machine learning, which I believe is related to the Multi-Armed Bandit Problem.
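
      A quick sketch of the two strategies over a toy tree of topics (the tree shape and names below are my own example):

      ```typescript
      type Tree = { topic: string; children: Tree[] };

      const node = (topic: string, children: Tree[] = []): Tree => ({ topic, children });

      // Depth-first: exhaust one branch before moving to the next (go deep).
      function depthFirst(root: Tree): string[] {
        const visited: string[] = [];
        const stack: Tree[] = [root];
        while (stack.length > 0) {
          const current = stack.pop()!;
          visited.push(current.topic);
          // Push children in reverse so the leftmost child is explored first.
          for (let i = current.children.length - 1; i >= 0; i--) {
            stack.push(current.children[i]);
          }
        }
        return visited;
      }

      // Breadth-first: visit every topic at one level before going deeper (go wide).
      function breadthFirst(root: Tree): string[] {
        const visited: string[] = [];
        const queue: Tree[] = [root];
        while (queue.length > 0) {
          const current = queue.shift()!;
          visited.push(current.topic);
          queue.push(...current.children);
        }
        return visited;
      }
      ```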

    1. Here's one quick way to test if your application has properly segregated itself between the Model, View, and Controller roles: is your app skinnable? My experience is that designers don't understand loops or any kind of state. They do understand templates with holes in them. Everybody understands mail merge. And if you say, "Apply the bold template to this hole," they kind of get that, too. So separating model and view addresses this very important practical problem of how to have designers work with coders. The other problem is there is no way to do multiple site skins properly if you don't have proper separation of concerns. If you are doing code generation or sites with different skins on them, there is no way to properly make a new skin by simply copying and pasting the old skin and changing it. If you have the view and the logic together, when you make a copy of the view you copy the logic as well. That breaks one of our primary rules as developers: have only one place to change anything.

      An effective way of testing whether your app practices separation of concerns within the MVC paradigm is whether or not it is "skinnable".
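
      A minimal sketch of the "template with holes" / mail-merge idea (the `{name}`-style template syntax and the data are my own, not from the interview):

      ```typescript
      // Model: the data, with no presentation baggage.
      const user = { name: "Ada", role: "Engineer" };

      // View: a template with holes, filled in mail-merge style.
      function render(template: string, model: Record<string, string>): string {
        return template.replace(/\{(\w+)\}/g, (_, key) => model[key] ?? "");
      }

      // Two skins over the same model -- swapping one never touches the logic,
      // so copying a skin never copies logic along with it.
      const plainSkin = "{name} ({role})";
      const htmlSkin = "<b>{name}</b> <i>{role}</i>";
      ```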

    1. In 1972 David L. Parnas published a classic paper entitled On the Criteria To Be Used in Decomposing Systems into Modules. It appeared in the December issue of the Communications of the ACM, Volume 15, Number 12. In this paper, Parnas compared two different strategies for decomposing and separating the logic in a simple algorithm. The paper is fascinating reading, and I strongly urge you to study it. His conclusion, in part, is as follows: “We have tried to demonstrate by these examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.”

      Parnas published a paper in 1972 about what heuristics are best to decide when to decompose a system into modules.

      His conclusion is that it is almost always wrong to start with a representation such as a flowchart (because things change).

      Instead he recommends starting with a list of difficult design decisions, or decisions that, once made, will likely change. Each module is then designed to hide such a decision from the others.
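
      For instance (my own illustration, not an example from the paper), a module can hide the decision of how lines of text are stored, so that changing the representation touches exactly one place:

      ```typescript
      // The design decision likely to change: how lines are stored.
      interface LineStore {
        add(line: string): void;
        get(index: number): string;
        count(): number;
      }

      // One implementation: a plain array.
      class ArrayStore implements LineStore {
        private lines: string[] = [];
        add(line: string): void { this.lines.push(line); }
        get(index: number): string { return this.lines[index]; }
        count(): number { return this.lines.length; }
      }

      // Another: one big string with separators. Callers cannot tell the
      // difference, so swapping representations is a one-module change.
      class JoinedStore implements LineStore {
        private text = "";
        add(line: string): void { this.text += line + "\n"; }
        get(index: number): string { return this.text.split("\n")[index]; }
        count(): number { return this.text === "" ? 0 : this.text.split("\n").length - 1; }
      }
      ```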

    1. "Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one's subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained —on the contrary!— by tackling these various aspects simultaneously. It is what I sometimes have called "the separation of concerns", which, even if not perfectly possible, is yet the only available technique for effective ordering of one's thoughts, that I know of. This is what I mean by "focussing one's attention upon some aspect": it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect's point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.

      Dijkstra posits that a characteristic of what he calls "intelligent thinking" is the tendency to practice a "separation of concerns". By this he means thinking about concepts separate of one another for the sake of their own consistency, rather than simultaneously, which doesn't help in ordering your thinking.

    1. Domain-driven design separates the model layer “M” of MVC into an application, domain and infrastructure layer. The infrastructure layer is used to retrieve and store data. The domain layer is where the business knowledge or expertise is. The application layer is responsible for coordinating the infrastructure and domain layers to make a useful application. Typically, it would use the infrastructure to obtain the data, consult the domain to see what should be done, and then use the infrastructure again to achieve the results.

      Domain Driven Design separates the Model in the MVC architecture into an application layer, an infrastructure layer and a domain layer.

      The business logic lives in the domain layer. The infrastructure layer is used to retrieve and store data. The application layer is responsible for coordinating between the domain and infrastructure layer.
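
      A minimal sketch of the three layers (the bank-account domain and all names here are my own illustration):

      ```typescript
      // Infrastructure layer: retrieves and stores data (in-memory for the sketch).
      class AccountRepository {
        private balances = new Map<string, number>();
        load(id: string): number { return this.balances.get(id) ?? 0; }
        save(id: string, balance: number): void { this.balances.set(id, balance); }
      }

      // Domain layer: the business rule, knowing nothing about storage.
      function withdraw(balance: number, amount: number): number {
        if (amount > balance) throw new Error("insufficient funds");
        return balance - amount;
      }

      // Application layer: coordinates infrastructure and domain.
      class AccountService {
        constructor(private repo: AccountRepository) {}
        withdraw(id: string, amount: number): number {
          const balance = this.repo.load(id);        // infrastructure: obtain data
          const updated = withdraw(balance, amount); // domain: decide what should happen
          this.repo.save(id, updated);               // infrastructure: persist the result
          return updated;
        }
      }
      ```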

  3. Sep 2020
    1. Q7. What are controlled components?In HTML, form elements such as <input>, <textarea>, and <select> typically maintain their own state and update it based on user input. When a user submits a form the values from the aforementioned elements are sent with the form. With React it works differently. The component containing the form will keep track of the value of the input in it's state and will re-render the component each time the callback function e.g. onChange is fired as the state will be updated. A form element whose value is controlled by React in this way is called a "controlled component".With a controlled component, every state mutation will have an associated handler function. This makes it straightforward to modify or validate user input.

      In classical HTML, form components such as <input> or <textarea> maintain their own state, which gets sent somewhere upon submission of the form.

      React keeps track of the form's state inside a component and will re-render the component when the state changes. This can be listened to by subscribing to the onChange callback function.
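
      A framework-free sketch of the idea in plain TypeScript (no actual React here): the component, not the input element, owns the value, and every change flows through a handler that updates state and re-renders:

      ```typescript
      class ControlledInput {
        private state = { value: "" };
        public rendered = "";

        constructor() { this.render(); }

        // The onChange handler: the single place where the value may change,
        // which is what makes validation/normalization straightforward.
        onChange(nextValue: string): void {
          this.state = { value: nextValue.trim() }; // e.g. normalize input here
          this.render();
        }

        private render(): void {
          this.rendered = `<input value="${this.state.value}">`;
        }
      }
      ```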

    2. React creates a virtual DOM. When state changes in a component it firstly runs a “diffing” algorithm, which identifies what has changed in the virtual DOM. The second step is reconciliation, where it updates the DOM with the results of diff.The HTML DOM is always tree-structured — which is allowed by the structure of HTML document. The DOM trees are huge nowadays because of large apps. Since we are more and more pushed towards dynamic web apps (Single Page Applications — SPAs), we need to modify the DOM tree incessantly and a lot. And this is a real performance and development pain.The Virtual DOM is an abstraction of the HTML DOM. It is lightweight and detached from the browser-specific implementation details. It is not invented by React but it uses it and provides it for free. ReactElements lives in the virtual DOM. They make the basic nodes here. Once we defined the elements, ReactElements can be render into the "real" DOM.Whenever a ReactComponent is changing the state, diff algorithm in React runs and identifies what has changed. And then it updates the DOM with the results of diff. The point is - it’s done faster than it would be in the regular DOM.

      React creates a virtual DOM and every time the state of a component changes, it runs a diff algorithm on the virtual DOM. If something needs to be changed, it changes only this part in the HTML DOM. This is faster than the default of updating the entire HTML DOM any time something changes.
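
      A toy version of the diffing step (my own sketch, nothing like React's actual reconciler): compare two object trees and report only the paths whose values changed, so unchanged nodes are never touched:

      ```typescript
      type VNode = { tag: string; text: string; children: VNode[] };

      function diff(oldNode: VNode, newNode: VNode, path = "root"): string[] {
        const patches: string[] = [];
        if (oldNode.text !== newNode.text) {
          patches.push(`${path}: "${oldNode.text}" -> "${newNode.text}"`);
        }
        const len = Math.max(oldNode.children.length, newNode.children.length);
        for (let i = 0; i < len; i++) {
          const o = oldNode.children[i];
          const n = newNode.children[i];
          if (o && n) patches.push(...diff(o, n, `${path}/${n.tag}[${i}]`));
          else patches.push(`${path}: child ${i} added or removed`);
        }
        return patches;
      }
      ```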

    3. The state is a data structure that starts with a default value when a Component mounts. It may be mutated across time, mostly as a result of user events.

      The state is a data structure with a default value at the start which can be mutated based on user events.

    4. Props (short for properties) are a Component’s configuration. Props are how components talk to each other. They are received from above component and immutable as far as the Component receiving them is concerned. A Component cannot change its props, but it is responsible for putting together the props of its child Components. Props do not have to just be data — callback functions may be passed in as props.

      Props are the configuration of a component. They are immutable. A component will not change the props passed to it. A component will be responsible for the props it passes to child components. Props can be values or callback functions.

    5. Q5. What are the differences between a class component and functional component?Class components allows us to use additional features such as local state and lifecycle hooks. Also, to enable our component to have direct access to our store and thus holds state.When our component just receives props and renders them to the page, this is a ‘stateless component’, for which a pure function can be used. These are also called dumb components or presentational components.

      Functional components cannot hold any state; they are stateless.

    6. Q2. What is JSX?JSX is a syntax extension to JavaScript and comes with the full power of JavaScript. JSX produces React “elements”. You can embed any JavaScript expression in JSX by wrapping it in curly braces. After compilation, JSX expressions become regular JavaScript objects. This means that you can use JSX inside of if statements and for loops, assign it to variables, accept it as arguments, and return it from functions. Eventhough React does not require JSX, it is the recommended way of describing our UI in React app.

      JSX is a syntax reminiscent of HTML which compiles to JavaScript. It makes composing an app easier than it would be with plain function calls.

    1. Taking this one step further, a big part of the friction that Zoom removed was that you don’t need an account, an app or a social graph to use it: Zoom made network effects irrelevant. But, that means Zoom doesn’t have those network effects either. It grew by removing defensibility.

      Zoom removed the friction involved in needing an account, an app or a social graph to use it. In doing so, it removed the elements that would create a network effect.

      Without network effects, Zoom also lost its defensibility.

    2. There’s lots of bundling and unbundling coming, as always. Everything will be ‘video’ and then it will disappear inside.
    3. Part of the founding legend of Dropbox is that Drew Houston told people what he wanted to do, and everyone said ‘there are hundreds of these already’ and he replied ‘yes, but which one do you use?’ That’s what Zoom did - video calls are nothing new, but Zoom solved a lot of the small pieces of friction that made it fiddly to get into a call.  

      What Dropbox did was solve a lot of the small pieces of friction that made it difficult to use cloud storage.

      Zoom has done the same for video calls.

    1. To cultivate an idea meritocracy, they developed an app called a “dot collector” which enables all employees to rate each other along many different dimensions, ranging from “knowledgeability” to communication style. Over time, the app builds up a picture of each employee’s “believability” on different issues. This enables Bridgewater to understand where expertise lies within the company in addition to the hierarchical authority easily understood on an org chart.

      Bridgewater created a "dot collector" app that collects employee ratings of others across different dimensions to get to an "idea meritocracy".

    2. The chaos manager is concerned with the credibility of the organization and ensures that positional authority is aligned with personal authority.  That the people in leadership are the ones people want to follow.  While the Marine Corps has a clear position hierarchy, they have a deep understanding of this idea.  Official authority is a function of rank and position and is bestowed by organization and by law. Personal authority is a function of personal influence and derives from factors such as experience, reputation, skill, character, and personal example. It is bestowed by the other members of the organization.…Official authority provides the power to act but is rarely enough; most effective commanders also possess a high degree of personal authority

      The Marine Corps draws a distinction between positional authority and personal authority.

      Reminds me of lateral leadership.

    3. Amazon pushes teams to escalate one-way door decisions – those that can’t be reversed and may have long-term consequences.  However, with “two-way” decisions, managers are coached to make these decisions themselves.

      Amazon encourages employees to escalate decisions that are irreversible (one-way door decisions) and to delegate decisions that are not. The idea being that if you can act quickly, even if you make more mistakes, it will benefit the system as a whole.

    4. “Working backwards”  from customer needs can be contrasted with a “skills-forward” approach where existing skills and competencies are used to drive business opportunities. The skills-forward approach says, “We are really good at X. What else can we do with X?” That’s a useful and rewarding business approach. However, if used exclusively, the company employing it will never be driven to develop fresh skills.

      This reminds me of the Product Management interview task of coming up with a new product. You can start with a SWOT analysis, but then you'd be missing out on thinking from the customer's point of view.

      Bezos calls the former the skills-forward approach, and the latter the working backwards approach.

    5. However, I quickly realized the problem.  Kotter’s approach puts the senior executive at the center of the story and the leader’s task is to force a change on a resistant organization.  To him, the business leader “defines what the future should look like, aligns people with that vision, and inspires them to make it happen despite the obstacles” Chaos theory, in contrast, removes the senior executive from the center of the story and puts the system at the center.  That is exciting for people who enjoy thinking about complex systems, but isn’t likely to be profitable to a consulting firm which sells projects to senior executives.

      Looking at an organization through a chaos lens would be more accurate and fruitful, but because it removes the CEO from the center (and replaces them with the system), it's not something a management consultancy would pitch (as they pitch to CEOs).

      This reminds me of pharmaceutical companies not having an incentive to research a drug they cannot patent and thus cannot make a profit on.

    1. Historically, design has been opaque as a business unit due to the logistical and technical difficulties of making the design process legible to others. But as these hurdles are increasingly solved by companies like Figma, we’re seeing teams navigate how to best integrate design with the rest of a company’s processes. This should be no surprise, as we have seen the same arc play out in engineering over the last few decades.

      Design used to be a siloed and opaque business process because making the process more legible to others was too difficult logistically and technically.

      First engineering overcame this hurdle (think Git, GitHub, code review comments, etc.)

      Now design is overcoming this hurdle with Figma.

      Other domains will probably follow.

    2. Figma is browser-first, which was made possible (and more importantly performant) by their understanding and usage of new technologies like WebGL, Operational Transforms, and CRDTs. From a user’s perspective, there are no files and no syncing that needs to be done with others editing a design. The actual *experience* of designing in Figma is native to the internet. Even today, competitors often talk about cloud, but are torn over how *much* of the experience to port over to the internet. Hint: “all of it” is the correct answer that they all eventually will converge on.

      Companies struggle to figure out how much of their experience they should port over to the cloud. Figma pioneered the idea of porting all of it, calling it a "browser-first" application.

      For the Figma user there are no versioned files and there is no syncing.

      Kwok claims all companies will converge to having all of their experience be "internet native".

    3. In 2014 as I helped diligence Figma (disclaimer: I worked on Greylock’s investment in Figma, but I don’t have a personal stake in Figma. sadly.), I used to sit with designers at startups and watch them work. The top right corner of their screens were always a nonstop cycle of Dropbox notifications. Because design teams saved all their files in a shared folder like Dropbox, every time a coworker made a revision they would get a notification. And often there were complex naming conventions to make sure that people were using the right versions. Figma solved this problem. Designs in Figma are not just stored in the cloud; they are edited in the cloud, too. This means that Figma users are always working on the same design. With Dropbox, this isn’t true. The files may be stored in the cloud, but the editing happens locally—imagine the difference between sharing Word files in Dropbox vs. editing in Google Docs.

      Dropbox did not solve the problem of coordinating around file versioning and syncing.

      Figma solved this problem. When you edit a Figma document, you edit it in the cloud, like you edit a Google Doc in the cloud.

    4. Design appears to be inflecting in the direction of engineering. Figma is in pole position to drive this evolution. As a tool, it makes designers both more efficient and more collaborative by breaking down the walls between design and the other teams they work with.

      All disciplines are (probably) inflecting towards engineering. Engineering is ahead because it can create its own tools (the tightest feedback loop).

    5. As disciplines evolve, they figure out the social norms needed to operate better, build tools that can be shared across the industry, and invent abstractions that allow offloading more and more of the workload. They learn how to collaborate better, not just with each other but with all the other functions as well. Disciplines are not an end to themselves; the degree to which they contribute to the larger organizations and ecosystems they are part of is the final measure of their progress.

      As disciplines evolve, they get more efficient, they commodify packets of work.

      The ultimate measuring stick for a discipline is to what extent it contributes to the larger organization and ecosystem they are part of.

    6. Engineering is almost unparalleled in the rate at which it commoditizes itself and pushes the frontier of progress out. The best practices in frameworks, languages, and infrastructure are always rapidly—and sometimes tumultuously—evolving. What used to take entire teams to build before, requires fewer and fewer people every year.

      It seems that software engineering is ahead of all the trends in increasing productivity.

      This is probably because they can build tools that make themselves more productive.

    7. Platforms are needed most when the diversity and scale of use cases is larger than can be built—or often even understood—by the company.
    8. The real power of plugins, however, is in making them publicly available across the ecosystem. Plugins are collective progress available to all users.

      Plugins are collective progress available to everyone.

    9. In many ways Figma’s Communities are a reflection of Github’s philosophy and intent, but built with design in mind. Duplicate a shared design, and a copy is instantly saved to your workspace and ready to be edited.

      The idea of a click-to-fork-repository was brought to Figma in the form of communities.

    10. This impacts monetization and purchasing at companies. Paying for a new design tool because it has new features for designers may not be a top priority. But if product managers, engineers, or even the CEO herself think it matters for the business as a whole—that has much higher priority and pricing leverage.

      If a tool benefits the entire team, vs. just the designer, it becomes an easier purchase decision.

    11. By bringing both designers and non-designers alike into Figma, they create a cross-side network effect. In a direct network effect, a homogenous group gets more value from a product as more of them join. In contrast, a cross-side network effect involves two (or more) distinct groups that grow in size and value as the other group does, too. Figma’s cross-side network effect between designers and non-designers is one of the primary and under-appreciated sources of their compounding success over the last few years. As more designers use Figma, they pull in the non-designers they work with. Similarly, as these non-designers use Figma, they encourage the other designers they work with to use Figma. It’s a virtuous circle and a powerful compounding loop.

      By bringing non-designers into the design process, Figma created cross-side network effects for itself.

      Where typically the designers would get their designer peers to use the tools they're excited about, now non-designers would experience the value and recommend Figma to designers and non-designers alike.

    12. Much of Figma’s current success is driven by its ability to spread within companies. Figma becomes more useful as more people within a company use it, driving advantaged speed and scale of penetration within companies. Figma was quick to recognize that the constraints on design at companies is often not a problem of pixels, but of people. Many of Figma’s competitors are great tools for designers. But that’s who they are for—designers. Figma is a tool for teams to design. Not for designers alone.

      Much of Figma's success is due to the fact that Figma spreads easily within a team, because the barrier to entry is so low (you only need a browser and a link).

    13. Tightening the feedback loop of collaboration allows for non-linear returns on the process. Design can be drafted simultaneously with the product, allowing feedback to flow in both directions throughout the process. Aligning the assets used by design and engineering allow more seamless handoffs, and allows for more lossless and iterative exchange.

      With Figma, the feedback loops in the design process have tightened. This allows for design to be included in the product development process earlier and with less inertia.

    14. Increasingly our tools must understand and align with how we collaborate. This was less important when collaboration was logistically difficult and prohibitively costly, but as collaboration becomes easier its importance has risen. People’s work is less siloed—and their tools must reflect this.

      When not much could be improved in the realm of collaboration (because it wasn't yet technologically possible), it wasn't of much importance.

      Now that the technology exists, the importance of collaboration has become paramount. Collaboration is the ultimate measuring stick for any tool used in a team context.

    15. Our understanding of building platforms and sequencing towards them is still nascent. From how to shift the allocation of resources between a company’s core business and its potential future expansions to how to structure a platform to catalyze its growth and more. Until the playbook is well understood, it is more art than science. There are many decisions to make, including which layers should be centralized, whether the ecosystem should be driven by open source contributors or profit-seeking enterprises, how broad in scope to allow the ecosystem to grow and where to not let it grow, and more.

      How to evolve a platform and cultivate a plugin ecosystem is not yet understood.

    16. Building for everyone in the design process and not just designers is also the foundation of Figma’s core loop, which drives their growth and compounding scale. That network effect is made possible by Figma’s key early choices like:  Architecting Figma to be truly browser-first, instead of just having storage be in the cloudTheir head start in new technologies like WebGL and CRDTs that made this browser-first approach possibleFocusing on a product purpose built for those designing vector based digital products

      Figma's core "loop" comes from the way it was built: to include non-designers in the design process.

      Figma was able to include non-designers by making certain technical choices:

      • Make a tool that works in the browser (easy for anyone to jump in without downloading anything)
      • Invest in technologies like WebGL and CRDTs (multiplayer) that make a browser-first approach possible
      • Focus on vector-based product design use cases (which could be tackled in the browser with the above)
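      The CRDT piece is what makes conflict-free multiplayer editing possible. A minimal sketch of the idea (a toy grow-only counter, not Figma's actual data structures, which are far more sophisticated):

```python
# Toy G-Counter CRDT: each replica increments only its own slot, and
# merging takes the per-slot maximum, so replicas converge no matter
# what order updates arrive in.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Commutative, associative, idempotent: safe to apply in any order.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

# Two replicas diverge offline, then sync in opposite orders and still agree.
a, b = GCounter("a"), GCounter("b")
a.increment(2)
b.increment(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5
```

      The key property is that merge is commutative, associative, and idempotent, so replicas can exchange state in any order, any number of times, and still converge.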
    17. The core insight of Figma is that design is larger than just designers. Design is all of the conversations between designers and PMs about what to build. It is the mocks and prototypes and the feedback on them. It is the handoff of specs and assets to engineers and how easy it is for them to implement them.

      The key insight the Figma team had was that the design process involves a wide range of people, conversations and artefacts within an organization. Figma brings all those into one place.

  4. Aug 2020
    1. In a pure cloud world, this atomic unit of documents seems increasingly archaic. Documents are more a constraint of a pre-cloud world. And once you assume storing them online is table stakes, the question becomes where is actual collaboration happening that then leads people to wherever they need to do work.

      The document model, stored in the cloud, as pioneered by Dropbox and Box, hinges on the archaic metaphor of "a document". Kwok points out that: "documents are a constraint of a pre-cloud world". When storing documents online becomes trivial, people still need to coordinate on where (i.e. in which document) the collaboration is happening.

    2. The arc of collaboration is long and it bends in the direction of functional workflows.

      Document-based collaboration is unbundling into functional workflows (e.g. Figma).

    3. Messaging, it turned out, appears to be a better center of gravity than documents. And while Dropbox (barring significant traction in its new products) seems to be fading in its centrality, what’s striking is that Slack’s victory seems hollow as well. If anything we’ve seen even *more* new companies building towards owning parts of these workflows and getting traction.

      Messaging turned out to be a more natural domain in which to foster collaboration (as opposed to documents), but Slack's victory feels "hollow".

      Dropbox — and perhaps file storage applications in general — seem to be fading in their centrality.

    4. When Slack first started growing, there were many debates over which company would own collaboration, Slack or Dropbox. Dropbox proponents argued that Dropbox already managed all the actual records of a company, and so would be the center of gravity. Slack partisans argued that Dropbox was a transitory product, and eventually companies would stop caring about individual files, and messaging would be the more important live heartbeat of a company.

      When Slack got started, there were debates about whether it would come to own collaboration at the expense of Dropbox.

    5. Beyond its Slack-like functionality, Discord has functionality like a social graph, seeing what games your friends are playing, voice chat, etc. These have been misunderstood by the market. They aren’t random small features. They are the backbone of a central nervous system. Active users of Discord have it on all the time, even when they are not playing games. It’s a passive way to have presence with your friends. And when your friends start playing games it makes it easy to with one click go join them in the game. Bringing your actual social graph across all games. Finally, voice chat makes it possible to talk with your friends across all games, even when you are playing the game. Like when working in a google doc, having to switch out of your game to message is a negative experience. Instead Discord adds functionality to your games even while you are focused solely on them. We will see more companies understand and begin to work on this area.

      Discord, unlike Slack, is the central nervous system (or meta-layer) for the gaming market. You can see what games your friends are playing and join them in real time. You can talk with them while playing a different game.

    6. Slack ironically is more similar to Dropbox than expected. The more time goes by the more it looks like exception handling being needed ubiquitously is a transitory product as we switch off of documents. After all, like Dropbox, Slack makes the most sense as a global communication channel when the workflows themselves don’t have communication and collaboration baked in natively. For documents this is true, but increasingly for modern apps this is false.

      If Slack is an exception handler for when apps don't have communication and collaboration baked in, and if we're increasingly moving away from a document-based model (and towards apps with native collaboration), then Slack looks very much like a transitory product (not unlike Dropbox).

    7. As a company’s processes mature and the apps they use get more sophisticated, we expect to see the need to go to Slack for exception handling *decrease* over time.

      If Slack is an exception handler for faulty, lacking or immature business processes, then you would expect to see less Slack usage for more mature companies.

    8. Slack serves three functions:

      • Else statement. Slack is the exception handler, when specific productivity apps don’t have a way to handle something. This should decrease in usefulness, as the apps build in handling of these use cases, and the companies build up internal processes.
      • Watercooler. Slack is a social hub for co-workers. This is very important, and full of gifs.
      • Meta-coordination. Slack is the best place for meta-levels of strategy and coordination that don’t have specific productivity apps. This is really a type of ‘else statement’, but one that could persist for a while in unstructured format.

      Kwok identifies three functions that Slack serves:

      1. It is an exception handler for everything that has no place in the other tools used.
      2. It is a social hub for workers.
      3. It is the best place for (inherently unstructured) meta-discussions on a variety of topics.
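      The "else statement" framing can be made literal with a toy sketch (the app names and work categories below are hypothetical, purely illustrative):

```python
# Toy model of Slack as an "else statement": work with an established
# workflow goes to its dedicated app; everything else falls through to
# the catch-all channel. App names and categories here are made up.

WORKFLOW_APPS = {
    "design_feedback": "Figma",
    "bug_report": "Jira",
    "code_review": "GitHub",
}

def route(work_item):
    # As a company matures, more categories gain a dedicated workflow,
    # so fewer items fall through to Slack.
    return WORKFLOW_APPS.get(work_item, "Slack")

assert route("bug_report") == "Jira"
assert route("unstructured_strategy_question") == "Slack"
```

      On this model, Kwok's maturity claim is just the dictionary growing over time, shrinking the set of items that ever hit the else branch.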
    9. It’s not that Slack is too distracting and killing individual productivity. It’s that your company’s processes are so dysfunctional you need Slack to be distracting and killing individual productivity.

      Kwok points out that Slack's reputation for being a productivity killer doesn't get at the root of the issue. He argues that resorting to Slack is a symptom of the underlying cause: dysfunctional business processes.

    10. The dream of Slack is that they become the central nervous system for all of a company’s employees and apps. This is the view of a clean *separation* of productivity and collaboration. Have all your apps for productivity and then have a single app for coordinating everyone, with your apps also feeding notifications into this system. In this way, Slack would become a star. With every app revolving around it. Employees would work out of Slack, periodically moving to whichever app they were needed in, before returning to Slack. But productivity *isn’t* separate from collaboration. They are the two parts of the same loop of producing work. And if anything collaboration is in *service* of team productivity.

      The vision of Slack, according to Kwok, was for people to have their productivity in designated apps, and have one central nervous system (Slack) through which they could collaborate. This was based on the assumption that producing and collaborating could be separated.

      Kwok claims that this assumption is wrong. Collaboration and productivity are intertwined, and you might even say that collaboration serves productivity.

    11. And core Dropbox is not a solution to this. People store their documents in it. But they had to use email and other messaging apps to tell their co-workers which document to check out and what they needed help with. Dropbox understands this concern. It’s what’s driven their numerous forays into owning the workflows and communication channels themselves. With Carousel, Mailbox, and their new desktop apps all working to own that. However, there are constraints to owning the workflow when your fundamental atomic unit is documents. And they never quite owned the communication channels.

      Dropbox is not a solution to this problem, even though they've been trying with Carousel, Mailbox and other desktop apps.

      Kwok posits that Dropbox's problem is that when your fundamental atomic unit is a document, you constrain your ability to own the workflow. Besides, Kwok points out, they never owned the communication channels.

    12. Suddenly, the constraint on work became much more about the speed and lossiness of collaboration. Which remained remarkably analog. The friction of getting people your document, much less keeping correct versioning was non-trivial.

      The shift to digital work removed the friction inherent in analog work (e.g. copying things, moving things, constructing things). The new bottleneck to productivity became collaboration – which remained "remarkably analog" according to Kwok (e.g. document version control was non-trivial).

    13. Digital work has significantly faster feedback loops for productivity. Software, quite simply, can produce and iterate new things at a daily if not hourly or minute basis.

      Software is uniquely suited for iterative development. This also creates faster feedback loops for productivity.

    14. As the ecosystem of specialized SaaS apps and workflows continues to mature, messaging becomes a place of last resort. When things are running smoothly, work happens in the apps built to produce them. And collaboration happens within them. Going to slack is increasingly a channel of last resort, for when there’s no established workflow of what to do. And as these functional apps evolve, there are fewer and fewer exceptions that need Slack. In fact, a sign of a maturing company is one that progressively removes the need to use Slack for more and more situations.

      Slack is a medium of last resort. When things go well, and if the app that is used is well designed and mature, collaboration will happen inside it. The need for messaging in Slack is more a sign of an immature process or company.

    1. There’s just so much noise small businesses tend to ignore. But in Indonesia, that isn’t the case…yet. The software landscape there is similar to the 1990s in the US. It’s harder to piggyback off of existing software infrastructure — whether it’s payments or platforms — but there’s also a lot of obvious opportunity in software that no one is going after. The same could be said about investing elsewhere in Southeast Asia or in LatAm or Africa. There are fewer startups to compete with for attention, and it’s less of a marketing game than building a software company in the US.

      The software industry in Southeast Asia, LatAm, or Africa is similar to the US in the 1990s: it's more about building than about marketing.

    2. As economist Carlotta Perez describes, we are now in the Deployment Phase of the internet in the US — meaning, we are in-process of exhausting all use cases for internet technologies in the US. What has traditionally happened at the end of a technology phase is oversaturation of investment dollars chasing smaller returns. Valuations go up, returns go down, and investors lose their money. (Sound familiar?) On a company level, what this means is, if not careful, a lot of companies will end up wasting marketing dollars in this type of landscape. Companies in the 2020s, unlike in the 1990s, need to really be performance-marketing driven in order to compete. The end of last year certainly showed us many examples of well-funded companies that could not make the unit economics work. The software industry has become a marketing game.

      According to Carlota Perez, we are in the Deployment Phase of the internet as a technology, meaning we are exhausting the use cases for the internet while more money chases decreasing returns.

      As a result, companies need to be more efficient with their marketing spend in the 2020s than before.

      The software industry has become a marketing game.

    3. There are certainly exceptions but if we are talking strictly about software, (not hardware, not drug discovery, not synthetic bio, etc) you’d be hard pressed to find a company where winning does not require a solid marketing and/or sales game. This is very different from the 1990s. Having a marketing skillset and mindset is what you need to win in 2020 in the US software market.

      In the 2020s, winning in software requires a strong marketing game. This is very different from how things worked in the 90s.

    4. As such, while many VCs are still fixated on finding unique technology in software and chasing companies that will ultimately be the sole winner, I’d contend that these two strategies — while successful in the 90s and early 00s — largely no longer work.

      Many VCs are still chasing startups that will either be category winners or have unique technology. These strategies worked in the 90s and early 00s, but no longer in the 2020s.

    5. Put another way, a lot of the “low hanging fruit” in the US software market is now gone. Software in the US generally works. And new opportunities get swept up with would-be competitors immediately. If the 90s was about thinking through your build, the 2020s is about thinking through marketing & distribution.

      The low hanging fruit in software markets is now gone in the US. New opportunities get swept up immediately. The 90s were about figuring out how to build it, the 2020s are about figuring out marketing & distribution.

    6. Furthermore, incumbents who generally do a good job, often manage to continue reigning. According to Brad Gerstner, CEO of Altimeter Capital, who recently did a podcast on Invest Like The Best, large tech companies have managed to take even more market share than 10 years ago. Some people may argue this is because the large tech companies have improved their products over time to stay ahead due to their increased collection of data and better algorithms that feed on that data over time. That may be true for some companies but not all. This also applies to other products that have not made significant strides in their technology — Craigslist, Salesforce CRM, Turbotax, Quickbooks to name a few. Even Google Search which arguably had a better product in the 1990s compared to its peers is about on par with alternative search engines today, but 90% of people worldwide still use Google. Old habits die hard, and distribution matters more than ever if you are just starting a business. It’s hard to topple incumbents who have strong distribution and already large audiences — even if you can build a much better product.

      Large incumbent tech companies have managed to retain their lead, partly due to accumulating data and better algorithms, but the pattern also holds for companies that haven't made significant strides (e.g. Salesforce), probably because old habits die hard and success goes to the successful.

    7. For most software businesses in the US, the problem isn’t technical knowledge anymore. The problem is getting a wedge into distribution — also known as marketing.

      Building a website has become a utility. This has shifted the domain of competitive advantage to distribution, which in this case means marketing.

    8. This has led to a flurry of many applications being built online – often with multiple teams building the same thing. It is not uncommon to run into 50 different founding teams all trying to build a marketplace for gym trainers. Or 300 founding teams trying re-invent marketing automation.

      It has become common to see multiple software teams go after the same market.

    1. In "GPT-2 As Step Toward General Intelligence", Scott Alexander lays out the argument that because GPT-2 is unexpectedly able to do a wide range of tasks reasonably well, we might need to update our understanding of what intelligence is and what the road to AGI looks like.

      There's little reason to think that the "slurry" that GPT-3 produces, based only on statistical patterns in language, is much different from what we call human intelligence.

      If true, this would mean that many people would need to rethink how they approach AI research.

    2. GPT-2 is instantiated on giant supercomputers; it’s a safe bet they could calculate the square root of infinity in a picosecond. But it counts more or less the same way as a two-year old. GPT-2 isn’t doing math. It’s doing the ridiculous “create a universe from first principles and let it do the math” thing that humans do in their heads.

      Even though GPT-2/3 runs on supercomputers that could make quick work of challenging calculations, it's unable to reliably count to ten (unless specifically instructed to do so).

      This is because it's not doing math. It's doing what Scott Alexander calls the ridiculous "create a universe from first principles and let it do the math" thing that humans do in their heads.

    3. A machine learning researcher writes me in response to yesterday’s post, saying: I still think GPT-2 is a brute-force statistical pattern matcher which blends up the internet and gives you back a slightly unappetizing slurry of it when asked.

      What a machine learning researcher wrote to Scott Alexander.

    4. But this should be a wake-up call to people who think AGI is impossible, or totally unrelated to current work, or couldn’t happen by accident. In the context of performing their expected tasks, AIs already pick up other abilities that nobody expected them to learn. Sometimes they will pick up abilities they seemingly shouldn’t have been able to learn, like English-to-French translation without any French texts in their training corpus. Sometimes they will use those abilities unexpectedly in the course of doing other things. All that stuff you hear about “AIs can only do one thing” or “AIs only learn what you program them to learn” or “Nobody has any idea what an AGI would even look like” are now obsolete.

      Scott Alexander claims that the results shown by GPT-2 render statements like "AI can only do 1 thing", "AI can only learn what you teach it" and "No one knows what AGI looks like" obsolete.

    5. Wittgenstein writes: “The limits of my language mean the limits of my world”. Maybe he was trying to make a restrictive statement, one about how we can’t know the world beyond our language. But the reverse is also true; language and the world have the same boundaries. Learn language really well, and you understand reality. God is One, and His Name is One, and God is One with His Name. “Become good at predicting language” sounds like the same sort of innocent task as “become good at Go” or “become good at Starcraft”. But learning about language involves learning about reality, and prediction is the golden key. “Become good at predicting language” turns out to be a blank check, a license to learn every pattern it can.

      Because language is an isomorphic mapping to the world, learning to predict language means you're learning to predict patterns that occur in the world.

    6. Imagine you prompted the model with “What is one plus one?” I actually don’t know how it would do on this problem. I’m guessing it would answer “two”, just because the question probably appeared a bunch of times in its training data. Now imagine you prompted it with “What is four thousand and eight plus two thousand and six?” or some other long problem that probably didn’t occur exactly in its training data. I predict it would fail, because this model can’t count past five without making mistakes. But I imagine a very similar program, given a thousand times more training data and computational resources, would succeed. It would notice a pattern in sentences including the word “plus” or otherwise describing sums of numbers, it would figure out that pattern, and it would end up able to do simple math. I don’t think this is too much of a stretch given that GPT-2 learned to count to five and acronymize words and so on.

      This is also borne out in my own tests. The model does well on easy calculations, the likes of which it must have seen or easily learnt; more exotic ones, not so much.

      What is interesting is that what predicts whether GPT-3 can do a calculation is not the difficulty of the calculation, but the likelihood that it occurred in its training data.

    7. Again, GPT-2 isn’t good at summarizing. It’s just surprising it can do it at all; it was never designed to learn this skill. All it was designed to do was predict what words came after other words. But there were some naturally-occurring examples of summaries in the training set, so in order to predict what words would come after the words tl;dr, it had to learn what a summary was and how to write one.

      Whatever is naturally occurring in GPT2/3's dataset it will learn how to do, whether it be summarization, translation to French etc.

    8. A very careless plagiarist takes someone else’s work and copies it verbatim: “The mitochondria is the powerhouse of the cell”. A more careful plagiarist takes the work and changes a few words around: “The mitochondria is the energy dynamo of the cell”. A plagiarist who is more careful still changes the entire sentence structure: “In cells, mitochondria are the energy dynamos”. The most careful plagiarists change everything except the underlying concept, which they grasp at so deep a level that they can put it in whatever words they want – at which point it is no longer called plagiarism.

      When you plagiarize a piece of text and you change everything about it except the underlying concept — it is no longer plagiarism.

    1. It might be instructive to think about what it would take to create a program which has a model of eighth grade science sufficient to understand and answer questions about hundreds of different things like “growth is driven by cell division”, and “What can magnets be used for” that wasn’t NLP led. It would be a nightmare of many different (probably handcrafted) models. Speaking somewhat loosely, language allows for intellectual capacities to be greatly compressed. From this point of view, it shouldn’t be surprising that some of the first signs of really broad capacity- common sense reasoning, wide ranging problem solving etc., have been found in language based programs- words and their relationships are just a vastly more efficient way of representing knowledge than the alternatives.

      DePonySum asks us to consider what you would need to program to be able to answer a wide range of eighth-grade science questions (e.g. "What can magnets be used for?"). The answer is that you would need a whole slew of separately trained and optimized models.

      Language, they say, is a way to compress intellectual capacities.

      It is then no surprise that common sense reasoning, and solving a wide range of problems, is first discovered through language models. Words and their relationships are probably a very efficient way of representing knowledge.

    2. However we should not forget that the relationships between words are isomorphic to the relations between things- that isomorphism is why language works. This is to say the patterns in language use mirror the patterns of how things are(1). Models are transitive- if x models y, and y models z, then x models z. The upshot of these facts are that if you have a really good statistical model of how words relate to each other, that model is also implicitly a model of the world.

      DePonySum observes that language is isomorphic to the world – a structure-preserving mapping that can be reversed while remaining valid. They also note that models are transitive: if x models y, and y models z, then x models z.

      The consequence of this is that a very good statistical model of how words relate, would (through isomorphism) constitute a very good model of how the world is.

    3. Interesting take on recent progress in NLP. The author posits that language models might constitute a path towards AGI.

    1. With more and more productivity apps creating their own messaging systems, users suddenly face a new problem: Multiple inboxes. You now have to check notifications in Github, Trello, Google Docs and half a dozen (if not more) other tools in your productivity stack.

      The multiple inbox problem.

    2. With a strong enough NLP engine behind the command line interface, the possibilities become endless:

      • Add that New York Times article to your Pocket queue or send it directly to your Kindle to read it later
      • Re-assign Jira tickets directly from Superhuman or send them to your to-do list
      • Pay invoices or send money to a friend

      Julian Lehr offers an interesting idea: if you can process emails directly, without needing to open them, and if you can do so with a text-based user interface powered by an NLP engine, you've got something very powerful on your hands.

      This is especially interesting because, with the advent of GPT-3, this is actually becoming closer to reality.

    1. Bottom line: Blockchain can help a bit with voting, but it’s not doing the most important part of the work. It doesn’t help tally secret ballots in a publicly verifiable way. It doesn’t provide individual verifiability that a ballot was correctly encoded. And it’s not useful for voting eligibility, since that’s all about human authentication and a centrally produced voter list. At best, in voting, Blockchain can be a ledger that helps us track the voting metadata.

      Blockchain can only solve some of the problems that need to be solved in a voting system. Where it falls short:

      • It doesn't help count secret ballots in a publicly verifiable way
      • It doesn't provide individual verifiability that a ballot was recorded and counted
      • It doesn't help with voting eligibility, since that's about human authentication (and a centrally maintained voter list)
    2. Then there’s the need to check voter eligibility, a critical piece of global verifiability. No matter what technology we use, we need a clear list of eligible voters, and each voter should get to vote only once. Ultimately, the list of eligible voters is set in a centralized way: it’s produced by the State. There’s nothing distributed about voter eligibility. Even when there is federation / delegation to individual counties, like in the US, there is a centralized effort to cross-check that a voter isn’t registered in multiple counties.

      The list of eligible voters is, in the modern nation state, inherently centralized. There's nothing distributed about it.

    3. In a typical election setting with secret ballots, we need:

      • enforced secrecy: a way for each voter to cast a ballot secretly and no way to prove how they voted (lest they be unduly influenced)
      • individual verifiability: a way for each voter to gain confidence that their own vote was correctly recorded and counted.
      • global verifiability: a way for everyone to gain confidence that all votes were correctly counted and that only eligible voters cast a ballot.

      The requirements of the ideal voting system are:

      1. Enforced secrecy — Each voter can be sure their vote cannot be tied to their identity.
      2. Individual verifiability — Each voter can verify their vote was cast and counted.
      3. Global verifiability — Everyone can verify that all votes were correctly counted and that only eligible voters cast ballots.
    4. Blockchain isn’t just a distributed database, it’s a very specific kind of distributed database where:

      • the database maintainers aren’t authenticated: anyone can be a blockchain maintainer without revealing who they are or having any kind of privileged relationship with other maintainers.
      • the set of maintainers changes over time. New maintainers come in, existing maintainers leave, without central planning or predictability. The maintainers of the Bitcoin blockchain 5 years ago are very different from the maintainers today.

      Blockchain is a special kind of distributed database. A database where (1) the maintainers are not authenticated and (2) where there is a cycling of maintainers over time.
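      The "anyone can maintain and verify it" property comes from hash chaining. A minimal sketch (toy code: a bare hash chain with no consensus or proof-of-work, just the tamper-evidence idea):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic serialization, then SHA-256.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain):
    # Verification needs no trusted identity: anyone can recompute every link.
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, "vote metadata #1")
append_block(chain, "vote metadata #2")
assert verify(chain)

chain[0]["data"] = "tampered"  # editing any block breaks every later link
assert not verify(chain)
```

      Verification requires no identity or privileged relationship between maintainers, which is exactly why the set of maintainers can churn freely.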

    1. Ohhh, never thought of this hypothesis: that the act of getting drunk together might be a social technology that helps us verify the trustworthiness of others by inhibiting their higher cognitive functions and thus making it harder to consciously fake things.

      Proof of trustworthiness

    1. As a final practical maxim, relative to these habits of the will, we may, then, offer something like this: Keep the faculty of effort alive in you by a little gratuitous exercise every day. That is, be systematically ascetic or heroic in little unnecessary points, do every day or two something for no other reason than that you would rather not do it, so that when the hour of dire need draws nigh, it may find you not unnerved.

      Practice the habit you want to form by little wins every day, however small and unnecessary the battles may seem.

    2. Seize the very first possible opportunity to act on every resolution you make, and on every emotional prompting you may experience in the direction of the habits you aspire to gain. It is not in the moment of their forming, but in the moment of their producing motor effects, that resolves and aspirations communicate the new ‘set’ to the brain.

      The first time you feel an urge to do something in the direction of the habits you want to form, you need to grab it with both hands.

      It's when these emotions produce their first motor effects that we can start solidifying a new normal in our brains.

    3. The more of the details of our daily life we can hand over to the effortless custody of automatism, the more our higher powers of mind will be set free for their own proper work.

      Reminds me of Whitehead's quote:

      "Civilization advances by extending the number of important operations which we can perform without thinking about them."

  5. Jul 2020
    1. Consider texts like the Bible and the Analects of Confucius. People integrate ideas from those books into their lives over time—but not because authors designed them that way. Those books work because they’re surrounded by rich cultural activity. Weekly sermons and communities of practice keep ideas fresh in readers’ minds and facilitate ongoing connections to lived experiences.

      People integrate the lessons from the Bible into their lives because they've come up with rich cultural activities that reinforce those ideas.

    1. By now you might be starting to think about businesses and communities that could benefit from paid social networks.

      Which businesses and communities could benefit from a paid social network?

    2. Just as built-in social networks are a moat for information products, customized tooling is a moat for social networks.

      You used to build a social network to function as a moat for your tool. Now you will build a tool to function as a moat for your social network.

    3. Today’s existing tools will continue to be sufficient for some communities, and Discord and Slack’s robust bot APIs are capable of solving some community needs. But fundamentally, they are still based on chat, and chat simply isn’t the right core user experience for many other communities. Unique functionality and bespoke interfaces provide distinct advantages that off-the-shelf tooling can never achieve.

      Although some communities' needs will be met by building on top of Slack, Discord and others, they are still based on chat, and chat won't be the right core experience to build on for certain types of communities.

    1. If you believe there's nothing true that you can't say, then anyone who gets in trouble for something they say must deserve it.

      This is the concept of orthodox privilege.

    2. It doesn't seem to conventional-minded people that they're conventional-minded. It just seems to them that they're right. Indeed, they tend to be particularly sure of it.

      Someone who doesn't view themselves as conventional-minded views themselves as open-minded. And who doesn't like to view themselves as open-minded?

    1. I do force myself to publish a Medley every week since they require me to keep reading.

      Nat Eliason forces himself to publish a Medley so that he keeps on reading.

    1. Imagine a large population of people living, seeing, learning, doing and generally going about their lives. As they do so, they accumulate beliefs. Depending on how smart they are, they also compress beliefs via abstraction, metaphor, subconscious pattern-recognition circuits, muscle memory, ritual, making and consuming art, going p-value fishing, exploring tantric sex, generating irreproducible peer-reviewed Science! and so on.

      Compression of knowledge through abstractions ~ mental models.

    1. Academics gain prestige by publishing novel stuff. This gives them a warped perspective on what is valuable. You can’t publish a paper that would summarize five other papers and argue that these papers are undervalued in a top journal but in the real world the value of doing that might be very high. The mechanisms of discovery are broken in academia.

      In academic publishing you get rewarded for publishing novel findings. But there is no reward for, say, publishing a review paper arguing that the papers you review should be given more weight – even though in the real world that kind of activity may be highly valuable.

    1. The world is currently overflowing with hard problems, so it is overflowing with grift as well.

      The prevalence of hard problems opens up the door to individuals promoting action that masquerades as a solution, but does nothing to solve the problem – grifters and grifts.

    2. Grifts often rely on narrative vacuums. When the real story is too complicated or boring or requires numbers and graphs to understand, people reach for the simpler story. Grifters supply it.

      The law of triviality (the bicycle-shed effect) says that when something is too complicated, we focus on the parts we understand, irrespective of their importance to the whole.

      Grifters supply this simpler story.

    1. A remarkable phenomenon commented on in the Moynihan report of thirty years ago goes unnoticed in The Bell Curve--the prevalence of females among blacks who score high on mental tests. Others who have done studies of high- IQ blacks have found several times as many females as males above the 120 IQ level. Since black males and black females have the same genetic inheritance, this substantial disparity must have some other roots, especially since it is not found in studies of high-IQ individuals in the general society, such as the famous Terman studies, which followed high-IQ children into adulthood and later life. If IQ differences of this magnitude can occur with no genetic difference at all, then it is more than mere speculation to say that some unusual environmental effects must be at work among blacks. However, these environmental effects need not be limited to blacks, for other low-IQ groups of European or other ancestries have likewise tended to have females over-represented among their higher scorers, even though the Terman studies of the general population found no such patterns. One possibility is that females are more resistant to bad environmental conditions, as some other studies suggest. In any event, large sexual disparities in high-IQ individuals where there are no genetic or socioeconomic differences present a challenge to both the Herrnstein- Murray thesis and most of their critics.

      Other studies, not cited by The Bell Curve, found several times as many females as males among the highest-IQ cohorts of lower-IQ populations. This refutes a purely genetic explanation, since males and females share the same genetic inheritance. Instead it points to environmental factors – one possible explanation being that women are more resistant to bad environmental conditions.

    2. Strangely, Herrnstein and Murray refer to "folklore" that "Jews and other immigrant groups were thought to be below average in intelligence. " It was neither folklore nor anything as subjective as thoughts. It was based on hard data, as hard as any data in The Bell Curve. These groups repeatedly tested below average on the mental tests of the World War I era, both in the army and in civilian life. For Jews, it is clear that later tests showed radically different results--during an era when there was very little intermarriage to change the genetic makeup of American Jews.

      Apparently Jews scored lower than average on IQ tests administered in the WWI era.

  6. Jun 2020
    1. Anyway! Your only responsibility is to do stuff that’s actually in Japanese; the remainder of the responsibility rests entirely with the Japanese stuff — media — itself. The media has a responsibility to entertain you. You don’t have to find the value in it; it has to demonstrate its value to you by being so much fun that you don’t notice time going by — by sucking you in. It has to make you wish that eating and sleep and bodily hygiene could take care of themselves because they cut into your media time. And if it doesn’t do that or it stops doing that, then you “fire” it by changing to something else. You are the boss and there are no labor laws. Fire the mother. You do the work of setting up and showing up to the environment, but after that the environment must work for you.

      This strategy reminds me of Niklas Luhmann who allegedly said that he never did anything that he didn't feel like doing.

      This is like following your curiosity 100%, and it goes against a lot of other advice out there, e.g. sitting down every day to write.

      This also reminds me of this idea of starting as many books as possible. Drop them when they're no longer interesting to you.

    2. DO NOT, DO NOT, DO NOT turn Japanese into work. Don’t turn it into “study”; don’t turn it into 勉強 (a word that refers to scholastic study in Japanese, but actually carries the rather negative meaning of “coercion” in Chinese). Just play at it. PLAY. That’s why I keep telling people: don’t make all these rules about what is and is not OK for you to do in Japanese, or how Gokusen is over-coloured by the argot of juvenile delinquents or watching Love Hina will make you talk like a girl — it doesn’t matter, you need to learn all that vocabulary in order to truly be proficient in Japanese anyway, so whatever you watch is fine — as long as you’re enjoying it right now. Write this on your liver: just because anything is OK to watch in Japanese, that doesn’t mean that everything is worth watching…to you that is. One person’s Star Trek is another person’s…well, I can’t imagine how any human being could fail to love Star Trek, but you get the idea.

      If you want to learn something, make sure that you keep it in the realm of play. If you make it work, you will kill it.

      This reminds me of Mark Sisson talking about incorporating play.

      This also reminds me of the concept of Flow.

    1. Most people think you build the product then you market it. Thinking in loops means you build the marketing into the product. The product doesn't precede the marketing. The product is the marketing.

      By thinking in loops Harry Dry refers to a way of thinking about your acquisition strategy as being part of your product.

      This reminds me of Brian Balfour's idea of product-channel fit and how he stresses that the product gets shaped by its acquisition channel.

  7. May 2020
    1. The task of "making a thing satisfying our needs" as a single responsibility is split into two parts "stating the properties of a thing, by virtue of which it would satisfy our needs" and "making a thing guaranteed to have the stated properties". Business data processing systems are sufficiently complicated to require such a separation of concerns and the suggestion that in that part of the computing world "scientific thought is a non-applicable luxury" puts the cart before the horse: the mess they are in has been caused by too much unscientific thought.

      Dijkstra suggested that instead of tackling "making a thing satisfying our needs" as a single concern, we should first separate our concerns.

      We should first concern ourselves with the user's needs and draw up careful specifications – the properties the system must have if it is to satisfy those needs.

      With those specifications in hand, we can concern ourselves with making a system guaranteed to have the stated properties.

      The problem with this thinking, which the software industry would later discover, is that a user's needs cannot be accurately or completely determined before building the system. We learn more about what is needed by the process of building.

      This is an instance of the [[Separation of concerns]] not working.

      This is also why the industry has settled on a technique to build iteratively (Agile), always leaving the option open to change course.
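      Dijkstra's split can be sketched as a toy example (my own illustration, not from the text): first state the properties a thing must have, then build a thing guaranteed to have them.

```python
from collections import Counter

def satisfies_spec(inp, out):
    """The stated properties: `out` is a permutation of `inp`, in ascending order."""
    same_elements = Counter(inp) == Counter(out)
    ascending = all(a <= b for a, b in zip(out, out[1:]))
    return same_elements and ascending

def make_thing(inp):
    """A thing made to guarantee the stated properties (insertion sort)."""
    out = []
    for x in inp:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

assert satisfies_spec([3, 1, 2, 1], make_thing([3, 1, 2, 1]))
```

      The point of the separation is that `satisfies_spec` can be agreed upon before a single line of `make_thing` exists – which is also exactly where the later industry experience described above pushes back.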

    2. Some time ago I visited the computing center of a large research laboratory where they were expecting new computing equipment of such a radically different architecture, that my colleagues had concluded that a new programming language was needed for it if the potential concurrency were to be exploited to any appreciable degree. But they got their language design never started because they felt that their product should be so much like FORTRAN that the casual user would hardly notice the difference "for otherwise our users won't accept it". They circumvented the problem of explaining to their user community how the new equipment could be used at best advantage by failing to discover what they should explain. It was a rather depressing visit.... The proper technique is clearly to postpone the concerns for general acceptance until you have reached a result of such a quality that it deserves acceptance. It is the significance of your message that should justify the care that you give to its presentation, it may be its "unusualness" that makes extra care necessary.

      When you've developed an idea, you will typically want to communicate that idea so that it can be understood and used more generally. Dijkstra calls this reaching "general acceptance".

      To do so, you must communicate the idea in a way so that it can be properly understood and used. For certain ideas this becomes a challenging problem in and of itself.

      Many forgo this challenge, and instead of figuring out what new language they need to invent to most accurately communicate the idea, they use legacy language and end up communicating their idea less effectively, in pursuit of general acceptance.

      Dijkstra says that the proper way of dealing with this dilemma is to separate your concerns. You separate your concern of the solution from the concern of communicating the solution.

      When you've reached a solution that is of such high quality that it deserves communicating – and only then – do you concern yourself with its presentation.

    1. When someone asks if you have time for a meeting next Tuesday, you may have nothing on your calendar, so you say “sure.” If you hadn’t agreed to the meeting, you would have done something with that time - but what? By getting clear on what the “what” was that I could be doing made me better at saying no. When you say yes that one hour phone call next week, you are saying no to revamping your sales page or going to the gym or getting home an hour earlier. There is not always a right or wrong answer, but if you realize what you are saying no to every time you say yes, then you can make a judgement call: “Is this phone call more important to me than going to the gym today?”

      By blocking in your calendar in advance you make future tradeoffs explicit. You are no longer saying yes to a meeting on Thursday. You are saying yes to swapping out your gym session for that meeting (or not).

    1. We have come to a place where thanks to many libraries and frameworks, and overall improving software, what would’ve once used many developers to build from scratch is now more often than not, a bunch of people plumbing different things together. Software is creating software faster than we can use it. This is also why you are seeing so many of these “no-code” or “low-code” solutions pop up all over the place. There are increasingly fewer reasons to write code, and those who are writing code should, and do, increasingly write less of it. This will only be more accelerated by shifting to remote work due to how it’s going to change how we decide what code to write.

      There are increasingly fewer reasons to write code, so less code should be written.

      How Can relates this to remote work is unclear to me here.

    2. Anyone who’s spent a few months at a sizable tech company can tell you that a lot of software seems to exist primarily because companies have hired people to write and maintain them. In some ways, the software serves not the business, but the people who have written it, and then those who need to maintain it. This is stupid, but also very, very true.

      A company with a software development team writing its own software often creates inertia for itself. They will be biased to write software, because they have that capability – not because it's necessary.

    3. In a world where most employees are remote, this can be harder to do. Not only employees could be in touch with each other less, and in less personal ways, they might not be even able to do so without having non-monitored places. There will always be ways to employees to sneak around monitoring and surveillance, but it’ll be harder when everything is fully remote, and you’ll have less trust in those who will bond (or conspire with, depending on your POV) with you.

      Can believes it will be harder for employees to coordinate collectively when the company goes remote-first (and this may be part of the reason it is happening).

    4. The remote-first mentality will be a god-send simply because you’ll no longer be restricted to a tiny piece of land with a questionable housing policy to source your talent. People estimate 40% of all VC funding going to landlords in the Bay, and I think that’s too conservative.

      If you remove the requirement for an employee to be located near their employer, two things happen.

      Less upward pressure on housing prices, because employees are no longer required to live near their employers.

      Downward pressure on salaries, because employees no longer need to live in the expensive locality of their employer and can get by on less.

    5. Obviously, things can get quite weird when you take this model to its logical end. In the Bay Area, where the companies are giant, the geography tiny and the housing policies extremely questionable, this has resulted in salaries ballooning to insane levels. Getting a six-figure salary straight out of college barely raises an eyebrow anymore at many big firms. Companies have gone to great lengths, including some illegal ones to curb this competitive behavior to depress the salaries.

      Salaries become ridiculously high when:

      (1) The gap between the value an employee provides and the compensation they derive from the employer is large (a lot of latitude to increase).
      (2) There are many such employers able and willing to compete for an employee.

      One consequence of this is that housing prices go up (because the employees can afford to pay more).

    6. Most people would like to believe salaries are determined by a cost-plus model, where you get a tiny bit less than the value you add to the company. However, in reality, they are really determined by the competition. Companies are forced to pay as much as possible to keep the talent for leaving. In a competitive labor market, this is often a good thing for the employees.

      Salary levels are determined by competitive pressure: how much do you need to give an employee to make sure they stay?

    1. With limited or no access to technol-ogy, limited capacity to cope and adapt, limited or no savings, inadequate access to social services, and un-certainty about their legal status and potential to ac-cess healthcare services, tens of thousands of migrants and non-nationals have left Thailand over the past weeks.

      With little certainty and little to fall back on, tens of thousands of migrants and non-nationals have left Thailand over the past weeks.

    1. Insight through making suggests that you’ll need to make simultaneous progress in theory-space and system-space to spot the new implications in their conjoined space. Effective system design requires insights drawn from serious contexts of use: you must constantly instantiate new theoretical ideas in new systems, then observe their impact in some serious context of use.

      Very powerful way of wording the implications of Insights through making and the need for serious contexts of use.

      You need to advance in theory-space as well as in system-space to spot the implications for their conjoined space.

      Pragmatically, you must constantly instantiate new theoretical ideas in the system, then observe the effects in some serious context of use.

    1. Whether in music (Bach, Lennon), art (Picasso, Bernini), film (Tarantino, Anderson), games (Blow, Lantz), fiction (Kundera, Tolstoy), the most eminent work is usually the result of a single person’s creative efforts. Occasionally it’s a very small group (Eames, Wrights).

      Great creative work is usually the product of a single person.

    1. Per Michael: you probably would rather have Stradivarius make your violin than Joshua Bell, but you’d probably rather hear Joshua Bell play. Each activity—violin-making and violin-playing—requires virtuosic skill and a lifetime of practice. It’s very unlikely to find both abilities in the same person!

      Great tool-makers are often not great tool-users. You would want Stradivarius to make your violin, but not to play it. You want Joshua Bell to play it, but not to make it.

    1. One huge advantage to scaling up is that you’ll get far more feedback for your Insight through making process. It’s true that Effective system design requires insights drawn from serious contexts of use, but it’s possible to create small-scale serious contexts of use which will allow you to answer many core questions about your system.

      Even though a larger user base will increase your odds of getting more feedback, you can still get valuable contextual feedback with fewer users.

    2. WhyGeneral infrastructure simply takes time to build. You have to carefully design interfaces, write documentation and tests, and make sure that your systems will handle load. All of that is rival with experimentation, and not just because it takes time to build: it also makes the system much more rigid.Once you have lots of users with lots of use cases, it’s more difficult to change anything or to pursue radical experiments. You’ve got to make sure you don’t break things for people or else carefully communicate and manage change.Those same varied users simply consume a great deal of time day-to-day: a fault which occurs for 1% of people will present no real problem in a small prototype, but it’ll be high-priority when you have 100k users.Once this playbook becomes the primary goal, your incentives change: your goal will naturally become making the graphs go up, rather than answering fundamental questions about your system.

      The reason the conceptual architecture tends to freeze is because there is a tradeoff between a large user base and the ability to run radical experiments. If you've got a lot of users, there will always be a critical mass of complaints when the experiment blows up.

      Secondly, it takes a lot of time to scale up. This is time that you cannot spend experimenting.

      Andy here is basically advocating remaining in Explore mode a little bit longer than is usually recommended. Doing so will increase your chances of climbing the highest peak during the Exploit mode.

    3. This is obviously a powerful playbook, but it should be deployed with careful timing because it tends to freeze the conceptual architecture of the system.

      Once a prototype gains some traction, conventional Silicon Valley wisdom says to scale it up. This, according to Andy Matuschak, has certain disadvantages. The main drawback is that it tends to freeze the conceptual architecture of the system.

    1. Part of the problem of social media is that there is no equivalent to the scientific glassblowers’ sign, or the woodworker’s open door, or Dafna and Jesse’s sandwich boards. On the internet, if you stop speaking: you disappear. And, by corollary: on the internet, you only notice the people who are speaking nonstop.

      This quote comes from a larger piece by Robin Sloan. (I don't know who that is though)

      The problem with social media is that the equivalent to working with the garage door open (working in public) is repeatedly talking in public about what you're doing.

      One problem with this is that you need to choose what you want to talk about, and say it. This emphasizes whatever you select, not what would catch a passerby's eye.

      The other problem is that you become more visible the more you talk. Conversely, when you stop talking, you become invisible.

    1. You should construct evergreen (permanent) notes based on concepts, not related to a source (e.g. a book) or an author.

      Your mental models are compression functions. You make them more powerful by trying to use them on new information. Are you able to compress the new information with an already acquired function? Yes, then you've discovered an analogous concept across two different sources. Sort of? Then maybe there's an important difference, or maybe it's a clue that your compression function needs updating. And finally, no? Then perhaps this is an indication that you need to construct a new mental model – a new compression function.
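      The compression analogy can even be made literal (a playful sketch of my own, using zlib as a stand-in for a mental model, in the spirit of normalized compression distance): if knowing your prior "model" helps compress a new piece of information, that information is already largely covered by it.

```python
import zlib

def compressed_size(text):
    return len(zlib.compress(text.encode()))

def redundancy(prior, new):
    """Fraction of `new` that is 'predictable' given `prior`.
    Near 1: the existing compression function covers it.
    Near 0: it calls for a new model."""
    alone = compressed_size(new)
    saved = alone + compressed_size(prior) - compressed_size(prior + new)
    return max(0.0, saved / alone)

prior = "the quick brown fox jumps over the lazy dog. " * 10
print(redundancy(prior, "the quick brown fox jumps over the lazy dog."))  # high
print(redundancy(prior, "zq xv wj kp 018 headphones volcano ledger"))     # low
```

      The three outcomes in the note map onto a high, middling, or low redundancy score.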

    1. If painting is an aesthetic medium of vision, music an aesthetic medium of sound, and cooking an aesthetic medium of taste, then games are an aesthetic medium of action, Frank Lantz observes.
    1. Annotations—even inline marginalia which include your own writing—have very little informational value. They’re atomized; they don’t relate to each other; they don’t add up to anything; they’re ultra-compressed; they’re largely unedited. That’s fine: think of them as just a reminder. They say “hey, look at this passage,” with a few words of context to jog your memory about what the passage was about.Since you’re going to write lasting notes anyway, annotations need carry just enough information to recreate your mental context in that moment of reading. You wouldn’t want to rely on that long-term, since then you’d just have a huge pile of hooks you’d have to “follow” anytime you wanted to think about your experience with that book.

      Classical marginalia in books, according to Andy Matuschak, have little informational value. They are not interlinked, they're very compressed and usually unedited. But that's okay.

      Their purpose is to help you get back to the mental context you were in when you thought the passage was worth returning to.

    1. Update 2020-01-14: I now store my outlines as Structure Zettel. For more information what a Structure Zettel is see this post.

      An important update to this piece as Sascha's method evolved. Instead of using outlines to capture new notes, he started using structured notes.

      I suspect the reason for this is that a system with atomic notes and structured notes is more clear cut than a system that relies on work-in-progress outlines. The main difference being that a structured note will contain only notes and not some floating, un-evolved ideas.

    1. Instead of having a task like “write an outline of the first chapter,” you have a task like “find notes which seem relevant.” Each step feels doable. This is an executable strategy (see Executable strategy).

      Whereas Dr. Sönke Ahrens in How to Take Smart Notes seemed to treat the writing of a permanent note (~evergreen note) – as well as the search for relevant notes – as a unit of knowledge work with predictable effort and time investment, Andy emphasizes only the note-searching activity in this context.

    1. In a classroom or professional setting, an expert might perform some of these tasks for a learner (Metacognitive supports as cognitive scaffolding), but when a learner’s on their own, these metacognitive activities may be taxing or beyond reach.

      In a classroom setting a teacher may perform many of the metacognitive tasks that are necessary for the student to learn. E.g. they may take over monitoring for confusion as well as testing the students to evaluate their understanding.

    2. To successfully learn something new, people must evaluate their understanding, monitor for confusion or inconsistency, plan what to do next based on those observations, and coordinate that plan’s execution. This often falls under the category of “metacognition,” though I prefer to unbundle its phenomena.

      To learn something people need to use certain faculties that are often referred to as metacognition.

      They need to evaluate their understanding, monitor for confusion or inconsistency, plan what to do next and coordinate that plan's execution.

    1. Ericsson claims (2016, p. 98) that there is no deliberate practice possible for knowledge work because there are no objective criteria (so, poor feedback), because the skills aren’t clearly defined, and because techniques for focused skill improvement in these domains aren’t known.

      According to Ericsson deliberate practice for knowledge work is not possible because the criteria are not objective (you don't know if you're doing well).

      This collides with Dr. Sönke Ahrens' contention that note taking, specifically elaboration, instantiates two feedback loops: one in which you can see whether you're capturing the essence of what you're trying to make a note on, and a second in which you can see whether your note is not only an accurate description of the original idea, but also a complete one.

      Put differently, note taking instantiates two feedback loops. One for precision and one for recall.
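      The borrowed terms come from information retrieval; as a toy sketch (my own mapping, treating "ideas" as set elements rather than anything you could extract automatically):

```python
def precision_recall(captured, source_ideas):
    """Precision: what fraction of the captured ideas are accurate (in the source)?
    Recall: what fraction of the source's ideas did the note capture?"""
    captured, source_ideas = set(captured), set(source_ideas)
    hits = captured & source_ideas
    return len(hits) / len(captured), len(hits) / len(source_ideas)

# A note capturing ideas a, b plus a misreading x, from a source with ideas a-d:
precision, recall = precision_recall({"a", "b", "x"}, {"a", "b", "c", "d"})
```

      A precise-but-incomplete note scores high on the first number; a complete-but-sloppy note scores high on the second.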

    1. One common choice is to set daily goals for a certain number of hours at work. Success with this strategy requires a clear theory of how those hours will inexorably accumulate to the desired outcome. Simply spending some number of hours on a project is a fairly weak constraint: it’s easy to work with focus many hours unproductively.

      I've run into this problem.

      You can spend time in flow state, very focused, but this time still doesn't bring you closer to your goal.

    1. Incremental writing is a method of writing in which ideas are written down and assembled incrementally. Incremental writing requires no linearity. It adapts to your way of thinking. Many great writers and scientists of the past used a variant of incremental writing using their own systems of notes. In SuperMemo, incremental writing is integral with the creative process and learning itself

      Incremental writing is a method of writing where you keep adding elements to a piece in a "creative phase". In this phase the manuscript progressively increases in size. This is followed by a "consolidation phase", a process in which the manuscript gets to the point and decreases in size.

    1. By contrast, when we’re working on a large work-in-progress manuscript, we’re juggling many ideas in various states of completion. Different parts of the document are at different levels of fidelity. The document is large enough that it’s easy to lose one’s place or to forget where other relevant points are when one returns. Starting and stopping work for the day feel like heavy tasks, drawing heavily on working memory.

      One key difference between working with atomic, evergreen notes compared to a draft manuscript is that the ideas in the manuscript are at different levels of evolution / fidelity. The ideas in the evergreen notes are all evolved components.

    1. Instead, nurture the wild idea and let it develop over time by incrementally writing Evergreen notes about small facets of the idea.

      If you cannot tackle a subject head on, tackle it obliquely by writing evergreen notes about facets of the idea.

      This is an interesting way of reducing the scope of, say, an essay, without sacrificing quality. Instead of writing the whole thing, just write an atomic piece about one of the concepts you need for the larger piece.

    1. The issue of the different layers is similar. If you chose software that doesn’t deal with those layers in a sophisticated way, you will not reap the benefits in the long term. Your archive will not work as a whole. I think that this is one of the reasons why many retreat to project-centered solutions, curating one set of notes for each book, for example. The problems that come with big and organic (= dynamic and living) systems is avoided. But so is the opportunity to create something that is greater than you.

      Interesting point: the author compares the barrier between the editing and writing modes in a wiki (which makes it cumbersome to continue lines of thought) to the barriers that appear when you're not using the right software or conventions to structure your knowledge items – and to structure those structures themselves.

    2. After a while, I did not only have structure notes that structure content notes, I also had structure notes that mainly structured sets of structure notes. They became my top level structure notes because they began to float on the top of my archive, so to say.

      After the need for a layer of Hub Notes a new need may emerge: to better organize the Hub Notes themselves. At this point you may want to introduce structure notes that structure sets of structure notes.

    3. Structure notes share a similarity to tags: Both point to sets of notes. Structure notes just add another element. They are sets with added structure. This added structure provides a better overview and adds to the utility of the archive.

      Structure notes or Hub Notes are similar to tags (or pages in Roam) in that they point to a collection of other notes (or pages in Roam). The only difference being that structure notes contain within themselves a structure which provides hierarchy and context.

    4. But after a while, you won’t be able to keep up. When I search for tags I get a couple hundred of notes. I have to review them to connect a note to some of them, or get a grasp of what I wrote and thought about a specific topic. Naturally, a need to organize the archive arises at this point. I can’t remember how many notes I had when I experienced this. I introduced hub-like notes when I had between 500 and 700 notes.1 I gave myself an overview of the most important notes on that topic.

      There seems to be an inflection point where your initial approach to organizing your Zettelkasten starts to fail (perhaps 500-700 notes). You'll simply have too many tags to choose from.

      At this point hub-like notes will be the next stage in the evolution of your Zettelkasten organization.