419 Matching Annotations
  1. Feb 2022
    1. According to Freeman, this is why games (in spite of their undeniable successes) have yet to achieve the same widespread market penetration as movies or television - because the emotional complexity of gaming is not up to the standard set by more conventional entertainment media.

      A prediction from 2003 about why video games are less successful than movies or TV. I wonder what the popular sentiment on this is today.

    1. Today, K.e.coffman is a solid member of English Wikipedia’s editorial elite—No. 734 out of 121,000, as of this writing.

      This article is the story of one insanely dedicated Wikipedia editor. There are at least 733 more.

    2. “My editing style tends to be bold.”

      The most honest way to edit or write.

    3. In a long spree of edits, Coffman cleans up the two articles.

      I wonder if she investigated who put the false information there. IMHO it was most likely an oversight by someone who missed the clarifying sentence in the book, rather than malice.

    1. Obsidian doesn't make connections for me. More importantly, it doesn't do all the really important stuff brains do — it doesn't breathe for me, it doesn't regulate my temperature, it doesn't interpret things I see, it doesn't feel. Calling a notetaking app a "second brain" abstracts away all the essential parts of being human that don't count as "objective" "thought."
    1. from similar indexing and discovery services

      I assume the following is a direct comparison to https://scholar.google.com/

  2. Jan 2022
    1. We wouldn’t have hilarious coffee drinking robot commercials without it!

      I wonder what the author is referring to here.

    1. The principle of least astonishment (POLA), aka principle of least surprise (alternatively a law or rule),[1][2] applies to user interface and software design.[3] It proposes that a component of a system should behave in a way that most users will expect it to behave. The behavior should not astonish or surprise users.

      Principle of least astonishment

    1. levelised costs well below lithium batteries

      "levelised costs" -- Is this system more expensive initially than installing lithium batteries?

    1. you still can’t say anything in casual conversation except “I read the book,”

      Related to this, I think being proud of "read books" (or worse, counting them) and bringing that up in conversations is pure signalling. Books are not just there to be consumed.

      Great essay on making knowledge your own: https://sive.rs/dq

    2. % that I understand and retain

      I don't think understanding can be clearly captured with a percentage. That assumes each author is a genius and you only learn from a book exactly what the author meant.

      With the best books, I often find that's not the case -- for me they serve as inspiration for my own thinking, and some aspects are much more interesting than others. I don't need to comprehend every idea the author meant if I got what's important to me.

    3. and the 1-2 sentences people usually say to introduce the book

      I find that I increasingly skip introductory sentences written by other people, or what's on the book cover. The book should stand on its own, if you want actual understanding and not a copy of someone else's opinion.

    4. Read reviews/discussions of the book (ideally including author replies), but not the book: 2 hours of time investment, 25% understanding/retention.

      This means copying the reviewer's opinion of the book instead of forming your own. Of course that's faster, but IMHO you cannot call it "understanding of the book". You're reading a different "book", written by the reviewer.

    5. only 12-15% understanding/retention

      I think the gap between thoughtful, slow reading and quick reading is much larger. Assuming of course that slow reading means thinking about the sentences, probably highlighting passages and scribbling a few notes.

      Doesn't apply to every book of course, and depends on the aspect of understanding you want (your own thoughts or what the author meant).

    1. as the country progressively embraced a more neoliberal capitalist economic model

      Why did they do that? Any good source here?

    2. the radical autonomy of the modern “consumer.”

      Or the modern self-employed producer, which we see more and more of (gig work, internet creators etc). You generally have more choice today in your form of employment. It is not just about consuming.

    3. “For our generation, children aren’t a necessity…Now we can live without any burdens. So why not invest our spiritual and economic resources on our own lives?”

      Indeed. Being forced to have children out of material necessity is no good basis for societal growth.

    4. This “commodification, in many ways, corrupts society and leads to a number of serious social problems.”

      How?

    5. a “younger generation [that] is ignorant of traditional Western values” and actively rejects its cultural inheritance

      What is he referring to here?

    6. nihilistic individualism

      How do those concepts fit together? Individualism means valuing yourself -- so there you are valuing something. Does it mean not caring about larger institutions, then?

    7. But while Americans can, he says, perceive that they are faced with “intricate social and cultural problems,” they “tend to think of them as scientific and technological problems” to be solved separately.

      Yes, because we rarely make progress in other aspects. For technological problems there's a clear solution that people will buy if it works. Deliberate sociological change may have to be forced, which is never a good basis.

    8. “Since 1949, we have criticized the core values of the classical and modern structures, but have not paid enough attention to shaping our own core values.” Therefore: “we must create core values.” Ideally, he concluded, “We must combine the flexibility of [China’s] traditional values with the modern spirit [both Western and Marxist].”

      At a glance, a sensible proposal. Western nations too spend a lot of time criticizing instead of reforming their core values.

    9. In the brutally cutthroat world of CCP factional politics

      Are China's politics cutthroat? To an ignorant Western person like me, their government appears to always act in unison, much more so than the US government, for example. Definitely worth reading more on.

    10. Officially referred to as Chinese President Xi Jinping’s “Common Prosperity” campaign, this transformation is proceeding along two parallel lines: a vast regulatory crackdown roiling the private sector economy and a broader moralistic effort to reengineer Chinese culture from the top down. But why is this “profound transformation” happening? And why now? Most analysis has focused on one man: Xi and his seemingly endless personal obsession with political control. The overlooked answer, however, is that this is indeed the culmination of decades of thinking and planning by a very powerful man—but that man is not Xi Jinping.

      What a great introduction to the article.

    1. that more data would be needed to assess whether it is significant

      Exactly. He effectively created his algorithm on the training data set, and can't independently verify the results.

      Notably, the words used in the game are meant to be guessed by humans -- so an optimal strategy that considers every English word should perform fairly well.

    2. The player accesses a no-frills page, with no fee, registration or advertising

      I think the explosive social growth comes from this simplicity. You are not told how well you did, nor do you get to try multiple examples; the only way to find that out is to share the product.

      Probably an unintentional effect, but a powerful one.

    1. The usual neural pathways for representing numbers lead to dead ends. And this, perhaps, is why people are afraid of big numbers.

      Not just "why we are afraid of big numbers" -- also why we do not intuitively understand exponentials (as referenced earlier in the article). Exponentials appear in nature, but humans rarely had to deal with them until a few thousand years ago.
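
      A quick sketch of how fast exponentials escape intuition (my illustration, not from the article): doubling a 0.1 mm sheet of paper 42 times stacks up past the Moon.

      ```python
      # Fold a 0.1 mm sheet of paper in half 42 times.
      thickness_m = 0.0001  # 0.1 mm in metres
      for _ in range(42):
          thickness_m *= 2
      print(thickness_m)  # ~4.4e8 m -- beyond the Moon's ~3.84e8 m distance
      ```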

    2. For approximate reckoning we use a ‘mental number line,’ which evolved long ago and which we likely share with other animals. But for exact computation we use numerical symbols, which evolved recently and which, being language-dependent, are unique to humans.

      The core of this hypothesis: approximating quantities is an innate instinct, likely shared with other animals, while precise computation using abstract symbols evolved only recently and is unique to humans. In nature there's rarely a meaningful difference between 1.0 and 1.1.

      It's applied to the big number problem in the next paragraph.

    3. Indeed, one could define science as reason’s attempt to compensate for our inability to perceive big numbers.

      Or very small numbers, like understanding how atoms work in order to synthesize stronger materials. Or in general, attempting to understand things that are outside our direct field of view and intuitive understanding.

      I like the author's argument. Reasoning about big numbers is reasoning about what we don't know -- the first step of getting to know it.

    4. But how can we determine, in a finite amount of time, whether something will go on endlessly?
    5. Nondeterministic Polynomial-Time.

      Meaning, you can come up with a solution to an NP problem in polynomial time if you take the right option every time there is a choice (that's the non-determinism). Which is the same as verifying that a given solution is correct, by just following its steps.
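
      A minimal sketch of the "verification" half, using subset sum as the NP problem (my example, not from the article): checking a proposed certificate takes linear time, even though finding one may not.

      ```python
      # Subset sum: does some subset of `nums` add up to `target`?
      # Verifying a certificate (a list of indices into nums) is cheap.
      def verify_subset_sum(nums, target, certificate):
          return (len(set(certificate)) == len(certificate)  # no index reused
                  and all(0 <= i < len(nums) for i in certificate)
                  and sum(nums[i] for i in certificate) == target)

      print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9
      ```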

    6. physicist Albert Bartlett asserted "the greatest shortcoming of the human race" to be "our inability to understand the exponential function."
    7. Had he chosen easy-to-write 1’s rather than curvaceous 9’s, his number could have been millions of times bigger.

      It's about finding the best representation of big numbers, not thinking of them.

    1. Email is a place where that need gets met for many readers and therefore a place for us to experiment.

      Interestingly, I enjoy reading the articles more when I stumble upon them browsing their home page, rather than through a newsletter subscription. There's something to exploring vs. consuming.

      But it makes sense that they want to optimise for newsletter subscribers.

    2. some people we’ve interviewed in brand research told us that The Atlantic is “absolutely a conservative” institution, while for other people it is “definitely a liberal” one

      That's the best feedback you can get if you want to be truly independent.

    1. A presidential-campaign field organizer in a caucus state told me she can’t get low-income workers to commit to coming to meetings or rallies, let alone a time-consuming caucus, because they don’t know their schedules in advance.
    2. our every minute should be “captured, optimized, or appropriated as a financial resource by the technologies we use daily.”

      A similar read on the topic: https://www.oliverburkeman.com/time

    3. Sundays are no longer a day of forced noncommerce—everything’s open—or nonproductivity.

      That's not necessarily the ideal for all people. But it should at least be a choice what to do with our time.

    4. To make the most efficient use of their scant time at home, some parents have resorted to using the same enterprise software that organizes their office lives

      It's coming full circle: the software that made companies more efficient and demands more of employees' attention now also solves the problems it created in their personal lives. Everyone is trying to "be more efficient", whatever that means.

    5. The personalization of time may seem like a petty concern, and indeed some people consider it liberating to set their own hours or spend their “free” time reaching for the brass ring. But the consequences could be debilitating for the U.S. in the same way they once were for the U.S.S.R. A calendar is more than the organization of days and months. It’s the blueprint for a shared life.

      What a well-written intro!

    6. Managers were supposed

      I wonder what it took to bribe managers into assigning you the color you wanted.

    7. since production never stopped

      But production per day must have slowed down, since fewer people were working at any one time. It's an interesting way to increase utilization of industrial equipment -- you don't have to add capacity, you just run the same equipment longer. Rotating 2 rest days within 7-day weeks would mean you need 2/7 ≈ 29% fewer workplaces (ore smelters, for example), which could be significant.

    1. explicit study

      Here's a great example of one such study into life science research, which seems to be directly inspired by Progress Studies: https://guzey.com/how-life-sciences-actually-work/

    2. teaching better management practices to firms in Italy—improved productivity by 49 percent

      What do those "better management practices" entail, and what does "productivity" mean here?

      Did they enable more individual agency to allow for more innovation, or was it simply a better technical process for organizing labor?

    3. The success of Progress Studies will come from its ability to identify effective progress-increasing interventions and the extent to which they are adopted

      This is the key, to ground the study in practical results.

    4. it takes place in a highly fragmented fashion

      Fragmentation isn't inherently bad -- there's more duplicated effort, but also more experimentation and beneficial randomness. Having the same policies and conditions everywhere would not be learning from history IMHO.

    5. concoct policies and prescriptions that would help improve our ability to generate useful progress in the future

      I'm intuitively sceptical that having more policies and management practices creates more innovation or "progress". Though even having fewer policies and more individual agency is a "policy" in some sense. So I agree that we should study and observe what works best to enable progress, or at least focus on it more explicitly.

    6. we’re coming to appreciate more and more that organizations with higher levels of trust can delegate authority more effectively, thereby boosting their responsiveness and ability to handle problems. Organizations as varied as Y Combinator, MIT’s Radiation Lab, and ARPA have astonishing track records in catalyzing progress far beyond their confines.

      What does "delegate authority" mean in this context?

    7. the discoveries that came to elevate standards of living for everyone arose in comparatively tiny geographic pockets of innovative effort.

      Which suggests that pockets of innovative effort are extremely rare, and appear almost randomly in different places. We can and should replicate beneficial conditions developed there after the fact, but doesn't the seemingly random distribution of innovation suggest that we can't force it?

    8. A lot of progress can also come from smaller advances

      I would argue that progress only comes from smaller advances. For every major innovation you see, you don't see the experiments that enabled it. Which is all the more reason to study progress, because innovation is a process.

    9. we still need a lot of progress

      I would drop the "still" -- for me the essence of human endeavor is striving for progress :)

    10. a combination of focused professorial research and teaching could be a powerful engine for advance in research

      I wonder why this model has seemingly failed us now.

    11. he proposed a new organization dedicated to practical knowledge. He named it the Massachusetts Institute of Technology.

      More precisely, I believe he wanted to study the principles underlying practical knowledge -- more practical than purely abstract study, yet broader than purely technical experimentation. Exactly the motivation the authors suggest in this article.

      "The true and only practicable object of a polytechnic school is, as I conceive, the teaching, not of the minute details and manipulations of the arts, which can be done only in the workshop, but the inculcation of those scientific principles which form the basis and explanation of them, and along with this, a full and methodical review of all their leading processes and operations in connection with physical laws."

      https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology#Foundation_and_vision

    1. Mobile is the largest segment in gaming, with nearly 95% of all players globally enjoying games on mobile. Through great teams and great technology, Microsoft and Activision Blizzard will empower players to enjoy the most-immersive franchises, like “Halo” and “Warcraft,” virtually anywhere they want.

      That mobile games (and not immersive PC or console games) are mentioned here tells you a great deal about their direction, and the kind of audience they want to attract.

    1. Rather than employ the precautionary principle, which says, unless you can prove there is no harm, don’t use new technology, the Amish rely on the enthusiasm of Amish early adopters to try stuff out until they prove harm.

      That's such a refreshingly sensible practice, instead of rejecting things just because your identity is tied up in it.

    2. Like a lot of Amish who work along side their parents from an early age, he was incredibly poised and mature.

      That's what they value -- independence and maintaining strong family bonds, not success defined by arbitrary things like education, job, salary etc.

    3. the Amish’s famous buggies

      Pictures: https://www.google.com/search?q=amish+buggies

      I'd call them "carriages" more than "buggies".

    4. “If I had a TV, I’d watch it.” What could be simpler?
    5. So the technology of genetically modified crops allowed the Amish to continue using old, well-proven, debt-free equipment, which accomplished their main goal of keeping the family farm together.

      And we, the people living in the future, are sceptical of genetically modified crops for how unproven they are and what they might do to farmers.

    6. The Amish also make a distinction between technology they have at work and technology they have at home.

      A refreshingly sensible approach. Their goal is to not let their community get split up by new things, not to reject innovation. So for work, why not use modern(ish) equipment?

    7. Turns out the Amish make a distinction between using something and owning it. The Old Order won’t own a pickup truck, but they will ride in one.

      For all the understanding of their motivation, that's a hack to avoid saying that your rules are outdated (similar to orthodox Jewish practices). In my opinion, nothing is worse than not being clear to yourself what you believe.

    8. Behind all of these variations is the Amish motivation to strengthen their communities. When cars first appeared at the turn of last century the Amish noticed that drivers would leave the community to go shopping or sight-seeing in other towns, instead of shopping local and visiting friends, family or the sick on Sundays.

      A sensible idea. I can't imagine many modern people leaving behind electricity now, but it's almost common to limit screen time at least.

      I wonder if we will see more people splitting off, and building communities around the rejection of addictive new things.

    9. Some sects allow cars, if they are painted entirely black (no chrome) to ease the temptation to upgrade to the latest model.

      That's a really thoughtful idea.

    10. In any debate about the merits of embracing new technology, the Amish stand out as offering an honorable alternative of refusal.
    1. Is it possible to co-opt the useful functions of a healthy market economy and expand them to include a more complete concept of economic success, creating a new method of truly global collaboration in the process?

      "possible to co-opt", "useful functions", "healthy market economy", "more complete concept", "new method of truly global collaboration"

      Honestly, I'm already tempted to skip this article for the great mass of buzzwords and vague, overly broad wording in these first paragraphs. That's not how you explain a web3 concept to the average person :)

    2. trying to explain a concept like curation markets to the average person can be a trying task

      Isn't that a bad sign -- that the value and application of web3 to real-world problems is too contrived? Looking forward to seeing whether the author succeeds with the explanation in this article.

    1. Specific parts of academia that seem to be problematic: rigid, punishing for deviation, career progression; peer review; need to constantly fundraise for professors. Parts that seem to be less of a problem than I initially thought: short-termism; lack of funding for young scientists.
    2. I’m wary of proposing any drastic changes to how grants are distributed, how researchers are trained, and so on.

      I think that's a great conclusion to this thorough article. Things are not as they appear on first sight (because people find their way around problems), and any drastic action will upset this state (which works surprisingly well, as the author wrote). Change small things we can actually change, the rest will take care of itself.

    3. Boyden

      Edward Boyden, a neuroscientist at MIT who seemingly developed a now widely used research technique called optogenetics.

      https://en.wikipedia.org/wiki/Edward_Boyden

    4. [N]early every expert relies on the valuation of their expertise for money: therefore every expert has a strong case to oversell their expertise/the state of knowledge in their discipline

      Not if the other parties act rationally and see through your over-promises (things that don't work will always be detected later). This financial and reputational motive to believe in your own work is the basis of capitalism :) Maybe the main problem is over-promising in marketing.

    5. As a result, in hiring decisions, the amount of money the researcher is able to bring sometimes effectively becomes the measure of quality of research.

      Assuming that people and public bodies spend money on what is valuable to them, judging researchers by the funding they attract (and thus the results they produce for stakeholders) is not bad.

      What valuable kind of research gets suppressed as a result of this?

    6. People who have big labs continue to fight for more funding and feel that what they create is unique and must be protected, If you have a big lab, you probably have a lot of people trying to get in, meaning that you feel there’s always plenty of opportunities to grow. while people who are barely scraping by suggest strict limits on lab size (and other ways to make distribution of money more equitable) and point out that sometimes big established labs continue to get funded almost by inertia. People have very strong feelings about this.

      Everyone fights for themselves. Probably we should have both sizes of labs?

    7. this allows for more risk and exploration in big labs

      Interesting. It's usually the opposite elsewhere -- small companies can experiment more than established businesses (which have little reason to experiment).

    8. virtually 100% of the papers in “top” journals come from the same 5-10 senior authors

      I would have liked the article to go deeper into this. If they are "senior authors", does that mean most of their research is actually done by others, and they mostly provide publicity and direction? In that case, isn't the system working as expected?

      Are the published results really biased as a result? It seems like the commenter is someone not part of this group.

    9. Peer reviewers in your field are your competitors, who have not themselves solved the problem you claim to be able to solve. They have both personal and professional interest (especially so if funding is limited) in giving low scores to grant applications of competing teams and to recommend rejection of their journal submissions.
    10. I hear that these days, companies have introduced “Individual Contributors” where you can grow while still being primarily a technical contributor.

      IC just means that somebody is not a manager. Specifically, there often are "Staff" and "Principal" engineering roles you see more experienced people inhabit. They usually write project proposals, kickstart new efforts, and shape the culture through their work, in addition to (less) actual engineering work.

      IMHO the role is just about finding a fitting place for people with experience to contribute in the best way they can, if they don't want to do management.

    11. there’s a path “PhD–>Postdoc–>PI” that is almost impossible to avoid

      Because PIs attract new funding, while research scientists do not?

    12. There’s a single PI who has to both be excellent at being the CEO and at being the CTO and who moreover has to run the lab essentially alone

      So they will just do the "CEO" job, and delegate the actual research to younger people, as written earlier. The point of having research "co-founders" here is about creating more independent projects?

    13. it is a serious problem to take somebody in the genius idiom and to push them into a different idiom, which is to reduce their variance

      Interesting idea (assuming that geniuses are bad at most things). That would speak for having more "pure researchers", as pointed out in section 6 below.

    14. In order to stay in, you have to convince many professors

      Is it a challenge to stay in research and not be kicked out? Or is this point mainly talking about progressing in your career?

    15. People outside of biology generally think that doing a PhD means spending 6 years at the bench performing your advisor’s experiments and is only possible with perfect undergrad GPA, not realizing that neither of these are true if you’re truly capable

      "if you're truly capable" will not apply to most people, particularly those who complain :)

    16. PIs are usually pretty open to getting Research Assistants and are very open to getting thoughtful personalized cold emails (this applies to scientists you would consider famous as well).

      Maybe this is something about scientists being presented with compelling facts (to hire someone), or with them knowing the state of things after going through them.

    17. X’s contribution was probably in providing the research environment in which all of these were possible, money, and talking to Z every other week about the progress with the idea.

      Which is X's function in this arrangement, and people in the industry seem to understand that. Maybe the problem is other media, who attributes new results to individuals instead of groups of people?

    18. On the ground this means that until age 35 (i.e. when your creativity is the highest) you are isolated from management and fundraising (and endless administrative responsibilities bestowed on any tenure-track professor) and can 100% focus on doing science and publishing papers
    19. R01 grants

      "The Research Project (R01) grant is an award made to support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing the investigator's specific interest and competencies, based on the mission of the NIH."

      Seems to be a fairly generic form of funding, I suspect other grants have more restrictions.

      https://grants.nih.gov/grants/funding/r01.htm

    20. skews what scientists work on towards things that are easy to dress up for NIH, rather than things that they believe are most important

      That's the function of any policy -- to get people working towards it. Maybe the real problem is that there's little funding dedicated to different goals?

    21. The left-over funds then can produce preliminary data for their high-risk ideas, making them appear less risky and more easily fundable by the NIH in the future.

      IMHO "making things appear less risky" is key to figuring out new things in general. You can do many projects with low success chances but small downsides; they are not really "risky" once you calculate the probabilities.

      I wonder what kind of "high-risk" research projects the author is referring to here.

    22. NIH

      National Institutes of Health, the public health research body of the US.

    23. inefficiencies they see in resource allocation.

      I'm curious about how science grants work now, in general. Where does the money come from, if not from governments?

    24. different from what almost any single person I talked to has as their model of what’s going on in the field.

      I'd guess each person extrapolates from their own experience to the entire field, and thus misses some aspects (but is not necessarily wrong).

    25. I think that this observation is a general one – true for almost all areas of study

      That should probably be narrowed to "areas of study that are based on opinions" (like investigating the state of an industry). If studying means rational deduction from first-hand facts, or learning those explanations second-hand, I don't see how spending more time generally makes you more wrong.

      You may overestimate your ability, but hearing more facts doesn't make you more wrong. We can only trust our own minds. I'm looking forward to reading the linked article, though.

    26. Specific parts of academia that seem to be problematic: rigid, punishing for deviation, career progression; peer review; need to constantly fundraise for professors.
    27. PIs

      Principal investigators, the "lead researchers" for research projects.

    1. to Eastern minds, nothing was no big deal. After all, many Hindu and Buddhist beliefs were based on the idea that reality actually is illusory

      I wonder if there are actual sources for this, or if the author (or other authors) are making it up because it fits the story.

    2. It scarcely made sense, for example, that a real quantity such as, say, 352 would, if multiplied times zero, simply equal nothing.

      Hmm, I don't really agree. Say you have a good that goes bad (e.g. rotten food): it has lost its use and nobody will buy it from you. So its previous value gets multiplied by 0.

    3. The intended value was determined by context. But by 300 B.C., the Babylonians had solved this problem -- at least somewhat -- with an innovation. They added a symbol that functioned as a place holder. This was possible because their numbering system employed positional notation, or place value.

      The number zero developed out of placeholders in positional number notation, which was invented to write values unambiguously.

      Before that, no one had any "practical use" for the concept of "nothing".
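
      A small sketch (mine, not from the article) of why positional notation forces the placeholder into existence: without a symbol for an empty position, the base-60 digit lists for 3605 and 65 would collide.

      ```python
      # Decompose n into base-60 digits, most significant first.
      def to_base60(n):
          digits = []
          while n > 0:
              digits.append(n % 60)
              n //= 60
          return list(reversed(digits))

      print(to_base60(3605))  # [1, 0, 5]: the 0 marks the empty sixties place
      print(to_base60(65))    # [1, 5]
      ```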

    4. it would be "natural" to base a number system on 10, as we do now, because there are 10 digits on both hands

      TIL base 10 numeric systems likely originate from the fact that we have 10 digits on both hands.

    5. fruitcake syndrome

      As a non-native English speaker I did not know what this phrase meant. It seems to imply someone is "mad or that their behaviour is very strange": https://www.collinsdictionary.com/dictionary/english/fruitcake

      Curious to see what people thought going into the year 2000.

    1. they tend to turn down requests from riders with lower ratings since they expect that such riders are more likely to give low driver ratings

      I assume drivers also feel safer accepting rides by people with good ratings, since they are less likely to be assaulted, disrespected etc.

    2. would go a long way toward making rating systems more robust

      For me, the article failed to address the main question of why platforms should implement different rating systems. With two (or three)-sided marketplaces, it isn't clear which services are the "best" -- and the "best" isn't available for everyone.

    3. .

      Another problem happens for products with very few reviews. If you sort by "rating" on Amazon, it shows you products with only one or two 5-star reviews. The actual best products may have a few 4-star reviews, so their average is less than 5.0.

      Here's an approach to deal with that problem, essentially creating a probabilistic function to find the real average given the number of ratings available: https://www.evanmiller.org/how-not-to-sort-by-average-rating.html
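      A sketch of that approach (assuming the linked article's method, the lower bound of the Wilson score confidence interval; function name is mine): instead of the raw average, rank items by a pessimistic estimate of the true positive-rating fraction, which penalizes small sample sizes.

```python
import math

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval for the true fraction of
    positive ratings, at ~95% confidence (z=1.96). Few ratings -> wide
    interval -> low bound, so 2-of-2 ranks below 95-of-100."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * math.sqrt(phat * (1 - phat) / total + z * z / (4 * total * total))
    return (centre - margin) / denom

print(wilson_lower_bound(2, 2))     # ~0.34: perfect score, but tiny sample
print(wilson_lower_bound(95, 100))  # ~0.89: slightly imperfect, but trustworthy
```

      Sorting by this bound would put the 95-of-100 product above the 2-of-2 one, the opposite of a naive sort by average.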

    4. Ratings are also prone to “grade” inflation

      I wonder which part of the paper the authors reference here. From the abstract: "We argue that reputation mechanisms used by platform markets suffer from two problems. First, buyers may draw conclusions about the quality of the platform from single transactions, causing a reputational externality across sellers. Second, for a variety of reasons we discuss, reputations will be biased."

    5. they do a poor job of separating good from great products

      I totally agree, I'm so glad someone put this frustration into words.

      Examples: It's becoming impossible to tell Amazon products apart by their ratings or reviews, every Uber rating <5 stars means you did something weird, and in general the default anywhere is to give "5 stars" for things that did what you expected.

    1. Natively built in note taking features like email notes in Hey feel like a good step in the right direction

      Is it? Email notes will just stay within the email app, and are not accessible from anywhere else. That avoids clutter, but you won't be able to connect ideas across domains, or even move email providers.

    2. One of the reasons I still read a lot of non-fiction in physical book form is because it’s easier to bookmark and annotate passages that I quickly want to find again later.

      I really would have thought the opposite -- digital notes can ideally be shared, searched across, and connected in different contexts, while physical notes stay attached to one context (e.g. book).

    3. Their relevance is based on other trigger points.

      I would argue there are also notes without a predefined trigger condition -- thoughts you want to expand in the future. Tagging notes per topic, or putting things in a list, represents a trigger, but there's also value in unplanned inspiration for connections you only make later.

    1. "We're computer scientists," he says. "We're not going to choose what is a good bias and what is a bad bias."

      This is the right scientific attitude in my opinion. Study and use data trends for specific purposes (e.g. offsetting a sexist bias to appeal to more readers), not because you morally think (or are told) one is "good" or "bad".

    2. that problem

      Which problem should we fix specifically? Humans in general are biased and have sexist stereotypes, and that's what the algorithm in the article finds.

      Should we offset this existing bias when creating algorithms? Isn't that yet another bias on the side of the programmer (imprinting his world view on the data)?

    1. be your own curator

      It would be really interesting to explore the value of being original / deciding for yourself what to learn as part of the risk diversification idea.

    2. the quality is all over the place

      My personal theory is that in those places, information content is not the main metric (marketing, social connection, status, or feeling part of a group is).

    3. their posts will mostly be based on experience and intuition, so I think blogs and social media are for inspiration and should not be read as research papers or deeply thought studies

      Aren't most things in the end based on personal experiences (e.g. the best books)?

    4. My favorite place to find high-quality content about fundamentals in computer science are MOOC platforms

      Why are they your favorite places to learn? Because they are free?

    5. You need to have a balance between low risk, mid risk, and high-risk assets.

      I really like this diversification idea applied to learning. Might be worth expanding on it with a few sentences (e.g. you want to have some security to take advantage of random events).

    6. This allowed me to have a great salary and save a lot of money.

      I think that's a great example of how learning something new led to things you didn't expect at all before.

    7. you'll naturally explore

      BTW I'm personally beginning to think this is the core of spending time well consuming information, and of doing any original work: not accepting filters others impose on you (like having to follow news, or work on one specific thing), building your own filters (interestingness), and trusting them. It's interesting to think about with your T-shaped risk / ROI framework...

    8. library

      What do you mean by library here? Things we are curious about to read, things we are reading, or things we have already read?

    9. Why aren't we exploring or learning from the digital libraries we spend so much time curating?

      You put this quote here to mean that it's worth learning some fundamentals from timeless content? And that makes filtering content more important?

    10. the assets from which you are expecting the highest ROI

      Do you suggest picking a "safe" technology with low risk but reasonable reward, or high risk but outsized ROI?

      Being T-shaped about something that doesn't matter seems bad, but so does having exact same skills as someone else.

    11. the hardest problems require deep knowledge

      Or at least you have a significant advantage over others if you have deep domain knowledge (could mean technical skills, knowledge of product area, personal network etc etc).

    12. So, how can we make the most out of this world full of information where it's easier than ever to learn pretty much anything faster than ever before?

      I really like this sentence as conclusion of the intro.

    13. An extremely relevant skill nowadays is the ability to curate content

      Why is recognising high quality content important (and hard) nowadays? What do you mean by curation?

    14. with fundamentals and principles because

      ... so we need to learn constantly?

    15. The most important aspect here is to keep exploring until you find a few creators that you like.

      Interesting thesis -- why does following content creators you like help to get more high quality information? (I think I agree).

    1. help us develop better anti-virals

      In the case of COVID, these insights could conceivably save millions of lives.

    2. Why is there virtually no public awareness of a lab leak that caused suffering and disease for 32 years?

      Perhaps it's not a nice topic to talk about, for anyone involved?

    3. Well, here’s the usual explanation:

      Wording that subtly suggests you to question this motivation before knowing anything about it.

    4. Even if this notion sounds completely far-fetched to you, please keep reading, and then decide what you think.

      As someone who was extremely sceptical of the idea, I would recommend reading further as well.

      But please watch the language in this article, and don't jump to the conclusion the author states pretty forcefully (so much for "decide what you think").

    5. We need to make efforts to stay informed about what’s happening with these viruses

      Controversial take: While the facts in this article convinced me that a deliberate virus manipulation (for the good cause of research into better vaccines) is a realistic and possibly likely cause of Omicron, I am going to stay away from COVID news even more than before. People's identity is so tied up in this topic that everyone bends the facts somewhat.

      "we need to tell our friends and families, to use whatever platform we might have, big or small, to promote the end of this research."

      That's a concerning conclusion after spending maybe 10 minutes reading one article about the topic.

    6. We need to decide whom to believe

      Why the us vs them mentality? A more credible conclusion would be that we need to gather more facts and present them to the people who believe otherwise (like the referenced tweets above suggest).

    7. who don’t have a dog in this fight

      Not having a stake in the result doesn't make people more credible. In that case they could talk about anything without repercussions from bad predictions (as we see so often nowadays).

      A sure measure that someone believes in something is when they spend their life / career on realising it.

    8. but 5.5 million people are dead

      I'm assuming this number is overall, not just from Omicron which this article discusses (again, bending the facts unnecessarily).

    9. What he’s saying is, it’s highly improbable that a virus mutated this way completely randomly

      He did not say "highly improbable", only that a deliberate manipulation is more probable.

      Please don't bend small facts like that; the story would be strong enough without such manipulations. It makes me wonder how much of the other information in this article was adjusted to fit the narrative (e.g. the very suggestive wording of the delta lab leak above).

    10. And where is all the natural variation in between?

      Well, we haven't seen the variations in between. That could be caused by non-uniform testing for example, a deliberately manipulated version is not the only explanation.

    1. I hope I thoroughly convinced you why Donald Knuth is the Patron Saint of Yak Shaves

      Is this meant as a compliment or a criticism of Knuth? On the one hand the whole TeX ecosystem is incredibly complex; on the other, none of these tools would exist if he had simply typeset his book by hand. Plus, many of the underlying languages and font-setting tools likely didn't exist before.

    2. to make sure that each and every TeX document ever written builds

      Why does the current state of TeX need to compile every historical document? You could go back in the version history to always find a version to render each document. Are there really modern features or styling that you need for old text documents?

    3. TeX is currently at version 3.14159265, METAFONT at 2.7182818. Yep, TeX is slowly converging towards pi, while METAFONT towards e. Take that, semantic versioning advocates!

      That's funny. I wonder how they calculate their version increases; if the versions converge to a specific number, they must have a measure of how close they are to the perfect product?

    4. It’s still considered good and has a huge factor in the recognisable look of TeX documents.

      According to Wikipedia, TeX was released in 1978. So that algorithm has been used for 43+ years. Some problems rarely change.

    1. As long as software requires such concerted energy and so much highly specialized human focus, I think it will have the tendency to serve the interests of the people sitting in that room every day rather than what we may consider our broader goals.

      That's a wider point beyond web3 -- to avoid the problems with big tech, we need to make software / products easier to create. Then there's little to gain by increasing scale beyond network effects (which is a separate topic web3 aims to solve).

      I think we're already beginning to see this decentralization, if not in software then for YouTube & TikTok creators, indie makers etc in comparison to old media companies.

    2. I’m hopeful that the creativity and exploration we’re seeing will have positive outcomes, but I’m not sure if it’s enough to prevent all the same dynamics of the internet from unfolding again.

      That's a great conclusion. He doesn't condemn the innovation that's taking place, just points out concerns about its directions and ways we can deal with them.

    3. However, even if this is just the beginning (and it very well might be!), I’m not sure we should consider that any consolation. I think the opposite might be true; it seems like we should take notice that from the very beginning, these technologies immediately tended towards centralization

      Great twist on the common refrain we hear everywhere. There's always an opposite to every argument.

    4. I don’t think we should be surprised that OpenSea isn’t a pure “view” that can be replaced, since it has been busy iterating the platform beyond what is possible strictly with the impossible/difficult to change standards.

      Important quote. OpenSea builds its own features not just to build a monopoly to make money, but also because the protocols don't allow for rapid innovation.

    1. Generally, like with much good writing, the magic for readers is in feeling like they’re traveling along with someone who truly cares and is exceedingly curious about their chosen topic.

      Interesting point about writing, I have not realized this before. Maybe seeing curiosity and authenticity is refreshing amid all the marketing and news we see.

    1. I have read blogs for many years and most blog posts are the triumph of the hare over the tortoise. They are meant to be read by a few people on a weekday in 2004 and never again

      I think there is a wider point here. We often conflate "new" with "relevance" -- e.g. following the latest news events but not understanding what important trends took place in the last decade.

      Maybe blogs and posts on the internet are especially prone to this because there's so much more available content than before.

    2. It is everything I felt worth writing that didn’t fit somewhere like Wikipedia or was already written

      Not writing about things that were seemingly already written about may be a mistake. I at least found that I'm sometimes consciously avoiding some articles because I want to think about the topic (and maybe write about it) myself.

      We don't need to do everything in the most efficient way, and understanding only comes from thinking ourselves.

    1. You just are the sum of the moments of your days.

      Beautifully said.

    2. you shouldn't move on to the next task

      The stated goal in the last paragraph is "use the next 25 minutes in the most worthwhile way that seems feasible". If I enjoy what I'm working on but am already finished after 5 minutes, idling for the next 20 won't be the best use of my time. It means spending time on the tasks that seem worthwhile, not on the ones that actually are.

    3. there'll still be a million things theoretically left to do

      For me that's the key -- we can't and should not attempt to do everything, but only the most important things. As far as I understand the article, Pomodoro seems to help with realizing that.

    1. leave after 10 months

      Interesting that he even chose to leave 2 months before his initial equity vesting, and likely gave up his signing bonus as well. The author really believes in what he's writing down here.

    2. employees become “resources” (cogs) plugged into gaps to stop the bleeding, rather than contributing out of their core competencies

      Well, and when the delta between career ambitions and job gets too large, people leave -- it works out in the end. IMHO the opposite (people being cogs that always stay in the same place) is worse.

    3. Unfortunately, the majority of my time is spent doing “program” style work which leaves about 10% of my time for technical work. Ultimately this is not what I want to be doing with my career.

      So the author's reason for leaving is very personal, other people might be looking for this "release management" role with broader room for initiative. That's not criticism, I left my last job for a similar reason.

      But I'm wondering how much this article serves as a general criticism of Amazon. You always hear about people who aren't happy in their role, but with thousands of employees that's probably expected.

    4. an engineer who had never done web development

      Maybe he wanted to get into web development?

    5. It is a lot easier to partially solve the problem than to actually think through things from first principles.

      Isn't solving a problem partially better than not solving it at all? Getting started helps you to figure out the actual core problem too. That applies at least to startups -- but maybe having to deliver results for every project at big companies is a different environment.

    6. the path to promotion between PM, Engineering and Design are all different and have incentives that conflict with one another.

      Sometimes incentives even conflict between people of the same role -- e.g. there are only so many new large projects to lead, but much more unglamorous or uncomfortable work that doesn't get you a promotion.

    7. Instead of a person being the source of truth on a subject, the document becomes the source of truth.

      I like this as a general policy -- that way you can criticize an idea without it being personal.

    8. dig deeper to better understand why they were in such a hurry

      Isn't nearly every employer always in a hurry to hire? Maybe we should normalize quitting a job after 1-2 months, once you actually know what it will be like.

    1. Funding that's returned via revenue, not equity.

      I wonder how different the returns are between equity and (capped) revenue share investments. Maybe the top VC funds have no incentive to change their ways, but as Courtland wrote, there's an increasing market for a different kind of funding.

    2. at the tail-end of the last recession in 2010, making money was considered… stupid

      It's crazy how much culture changes like that in just 10 years. Now it seems like we all see the downsides of hypergrowth and spending billions on marketing (e.g. Uber), and often swing to the complete opposite -- taking no funding at all.

    3. giving up the very independence you sought as a founder in the first place

      A good book on the topic is "The E-Myth", about the reason most people start small companies (not to manage a business, but to not have a boss). It's hard to truly get independence when you have to manage the business side, and adding VC board seats should only make it worse.

    4. invest a ton of money

      Interestingly, in 2005 Paul Graham suspected the nature of large VC funds to be source of huge valuations and investments. http://www.paulgraham.com/venturecapital.html

      The VC model should also work with smaller deal sizes, but maybe the insane wealth generated by the tech monopolies + the need to generate billions in returns for large VC funds makes shooting for unicorns the most profitable investment strategy.

    5. Let's say you put $10, $20, $30 million into a company, and you say alright, become a billion-dollar company. Instantly, your risk shoots through the roof that you're gonna fail. Because you're gonna hire 50-100 people, there's way more chaos, you're moving way faster, and it just makes it more likely that you're going to fail… It really increases your likelihood of either failing, or being a unicorn, a billion-dollar company. They want it to be binary.
    1. overcome the bootstrap

      Here's another article by the same author going a bit more into this: https://future.a16z.com/the-web3-playbook-using-token-incentives-to-bootstrap-new-networks/

      IMHO tokens as a tool for bootstrapping networks is their most promising application.

    2. A well-designed token network carefully manages the distribution of tokens across all five groups of network participants

      Ha, it's not a magical solution as it sometimes seems nowadays. Web3 tokens are a tool to design network incentives that has to be used carefully.

    3. Token networks remove this friction by aligning network participants to work together toward a common goal— the growth of the network and the appreciation of the token.

      That's a core point. Companies restrict other companies' access to their network because it's their property and leverage -- they want users to use only their service because that maximizes value capture. That incentive should still be present between different tokens -- e.g. one developer being reluctant to support a service that does almost the same thing as he's writing.

      But any new token or smart contract grows the utility of the underlying network (e.g. Ethereum) -- there's more functionality to connect to, leading to more users and rewards, leading to improvements of the network, leading to better services.

      Smart contracts are also fundamentally composable, so the openness of collaboration between tokens or services is enforced at the network layer. Developers have no choice but to use open standards and let others build on top of their work.

    4. But with the launch of the iPhone and the rise of smartphones, proprietary networks quickly won out:

      I'm sceptical about attributing the entire rise of walled garden to smartphones. They are just a symptom of the appeal of Facebook and other networks like it. People gave up open standards because the proprietary networks were just too useful.

    5. In retrospect, Bitcoin was really two innovations: 1) a store of value for people who wanted an alternative to the existing financial system, and 2) a new way to develop open networks. Tokens unbundle the latter innovation from the former, providing a general method for designing and growing open networks.
    1. Over time, as the network effect and native utility grows, the token incentives taper off and eventually go to zero, and the world is left with a new, scaled network.

      Why should the token incentives disappear the larger a network grows? There will be less price speculation, but wouldn't users be rewarded for the value they provide (except if the token is purely an equity equivalent)? Since the network effects and user base will be larger, users might even be paid more later.

    2. Because bootstrapping networks is so hard, it’s likely that there are many networks that should exist — that would improve our collective well-being — but don’t because no one has figured out how to bootstrap them.

      What could be examples of such networks? Anything based on social communication should also work at a small scale -- since human desires were formed by millennia of living in small groups. Maybe he's thinking more about technical networks, like data storage or distributed computing?

    3. I’ve worked in Web3 and crypto since 2013 and have never worked with a project that spent meaningful money on sales and marketing.

      What turns me off about web3 projects is that everyone only seems to care about making money from tokens. But that's a feature, not a bug -- incentivizing users to bootstrap networks like owners. Maybe we're numb already to traditional marketing done by companies, but not by users.

      The lesson: see web3 monetary incentives as a tool to bootstrap social networks.

    1. just as the very most popular kids don't have to persecute nerds, the very best VCs don't have to act like VCs. They get the pick of all the best deals.

      Maybe the best VCs are less needy, but aren't they still incentivized to invest large sums for their returns?

    2. Often they even install a new CEO.

      Maybe partly because of PG's influence, the new tech VC firms brand themselves as "founder-friendly" and seem to rarely force CEO changes now. I wonder how many other effects from this article also don't apply anymore.

    3. VCs get paid a percentage of the money they manage: about 2% a year in management fees, plus a percentage of the gains. So they want the fund to be huge-- hundreds of millions of dollars, if possible.

      To incentivize performance, VCs should get rewarded based on their returns. So to make more money they must increase their returns or manage more total funds. The latter method seems easier to pull off, so we see the effects PG describes here.

      What if VCs & fund managers would only get rewarded based on their returns, maybe also with a strict deal size limit? I'm not sure why they would willingly give up management fees, but wouldn't it resolve many of the side effects pointed out in this essay?

  3. Dec 2021
    1. Here’s how I got that to work:

      The following process didn't work for me with macOS Big Sur. Installing the same uncompressed dictionary from https://github.com/mortenjust/webster-mac worked though.

    2. It’s as if someone decided that dictionaries these days had to sound like they were written by a Xerox machine, not a person, certainly not a person with a poet’s ear

      I'd guess the most popular dictionaries are written for people who want to learn English, not to use a dictionary as a writing aid.

    1. So, you do everything you can now to make your life easier then.

      If you don't know what will happen, how can you prepare for the unexpected events that have large consequences?

      You set yourself up to be more adaptable (robust as called here), e.g. by saving money or having good health.

    2. In practice, considering effects deeper than level C is rarely needed nor advised. In its essence, it's all guesswork.

      Totally agree. The point is not to get obsessive about each decision and try to think through every consequence, but to be aware that big things could happen because of it.

      That also works in the positive direction -- e.g. tweeting often probably won't get you a large audience by itself, but potentially you become a person who connects well with people, builds a great product, and builds an audience as a result of that. Just one example of the many things that can happen by doing things with some uncertainty.

    3. One smart thing to do is to get prepared for big deviations.

      I like this idea -- you don't know what second order effects there will be, but you know there will be some.

    1. Looking for newlines would slow grep down by a factor of several times, because to find the newlines it would have to look at every byte!

      Splitting text into lines is usually the first step of most text processing code. Doing the wrong thing more efficiently helps with nothing. Most things don't need to run as fast as grep.

      But if performance proves to be an issue, the tricks in this article are useful for low-level code.
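      A hedged toy illustration of the trade-off (plain Python, not grep's actual algorithm): when you only need to know whether a pattern occurs at all, the line-oriented version has to visit every byte just to find the newlines, while a whole-buffer substring search can skip ahead. For patterns that don't contain newlines, both give the same answer.

```python
def contains_by_lines(text, pattern):
    # Line-oriented: finding the newlines means touching every byte.
    return any(pattern in line for line in text.splitlines())

def contains_whole_buffer(text, pattern):
    # Buffer-oriented: the substring search can skip most bytes entirely.
    return pattern in text

text = "\n".join(f"log entry {i}" for i in range(10_000)) + "\nERROR: disk full"
for pattern in ("ERROR", "not present"):
    assert contains_by_lines(text, pattern) == contains_whole_buffer(text, pattern)
```

      The line-oriented version only pays off once you actually need the lines, e.g. to print the matching one.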

    1. I’m saying that something has gone very wrong when basic features that already work in plain HTML suddenly no longer work without JavaScript.

      The reason I use React / Next.js for all projects is that it speeds up development significantly, especially if you want to gradually add more interactive features. If you don't do anything very performance intensive, the page will work on most devices.

    2. What’s less great is a team of highly-paid and highly-skilled people all using Chrome on a recent Mac Pro, developing in an office half a mile from almost every server they hit, then turning around and scoffing at people who don’t have exactly the same setup.

      That's a genuine and broader problem of not empathizing enough with users.

    1. At the end of every week, calculate the average of your time spent asleep divided by your time spent in bed each night.

      That's something only a software engineer can come up with.
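      Naturally, here is how a software engineer would write it down. A minimal sketch of the weekly calculation (the function name and the per-night-average reading are my own assumptions; one could instead divide total hours asleep by total hours in bed):

```python
def sleep_efficiency(nights):
    """nights: one (hours_asleep, hours_in_bed) tuple per night of the week.
    Returns the average of each night's asleep/in-bed ratio."""
    ratios = [asleep / in_bed for asleep, in_bed in nights]
    return sum(ratios) / len(ratios)

week = [(7.0, 8.0), (6.5, 8.0), (7.5, 8.0), (6.0, 7.5),
        (7.0, 7.5), (8.0, 8.5), (7.0, 8.0)]
print(round(sleep_efficiency(week), 2))  # 0.88
```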

    2. Outside of journals, the online information on sleep is largely terrible — a rehash of common wisdom or an attempt to sell you something.

      Since a lot of the online information is written to sell you something, and we're bad at separating quality from attention.

    1. Speedcabling

      To anyone who is confused, this seems to be about untangling wires as fast as possible.

      Example: https://www.youtube.com/watch?v=nKy_pmuB9-g

    1. We want freedom to follow strange ideas and then to give up without having to explain every move.

      This is IMHO the core of innovation. Only the results matter, the process you used to get there should not be important (within reasonable bounds).

      At least, we should not obsess over the process (e.g. agile, or open development) over the results it produces.

    1. Research has yet to reveal how large the potential of wood as a fuel can be if we would use oven stoves

      Why is this article so obsessed with wood as energy source? We could use radiant heat to reduce energy consumption without burning wood.

    2. Today’s energy crisis

      The article is from 2008, I wonder what the author refers to here. Possibly using more environmentally sustainable energy sources? Or using less energy in general?

    3. In Finland, a major producer of soapstone heaters, the purchase of an oven stove is subsidized by the government, with the consequence that 90 percent of new houses has them inside.

      But again, those will only heat one room. Most warmth will still come from other heating systems.

    4. All our contemporary heating appliances warm a house or a room mainly by means of convection: they heat up the air. An oven stove does it by means of radiant heat: infrared radiation, comparable to the heat of the sun.

      That's the core of the article. We don't need to heat up all the air in a room, only ourselves.

    5. In the 18th century, several European governments financed research to improve the technology

      I wonder what "research" in this context means.

    1. every unnecessary fact dilutes our point

      "unnecessary" is a broad word, and depends on the context. What may be obvious for some is a lead for others. Most scientific progress is also only possible by taking facts for granted (by quoting them) that others proved.

      But I agree that sources are largely irrelevant for more abstract ideas. More importantly, they may distract us from considering the ideas ourselves. One more case where school is harmful for independent thinking.

    2. Whatever’s on your mind is not as important as it seems.

      Tangential to the article, but this seems like a powerful idea. I wonder what the best importance filter is if thinking is faulty.

      Possibly it's time that creates concrete evidence for importance. But connecting past dots is thinking, and you can't notice reoccurrences of something you don't know. You need to believe something is important to prove that it really is.

      Maybe the quote is not such a powerful idea after all, just a reminder to question our heuristics sometimes.

    3. have considered it

      The fourth reason people quote things may be that it's easy to copy ideas that seem proven. You read about an idea in a book a Nobel laureate wrote, so it must be correct, right? The source of the idea is an integral part of why such quotes are useful and remembered.

    1. I don’t want to just duplicate it

      I think nobody wants to do that. The common approach is summarising or connecting existing ideas, but I agree creating deliberately different melodies leads us forward more directly.

    2. I know I’m not the only voice you hear. There’s a common message we all hear these days.

      I wonder what "we all" means here, what sources on philosophy or business Derek did not want to replicate.

      For people outside our bubbles, is contrarian writing more or less useful? Concise essays are so powerful and lead to independent thinking if they allude to ideas we already encountered before, so they do require context.

    3. My public writing is a counterpoint meant to complement the popular point.

      The guiding reason behind Derek Sivers' writing.

    1. A telescope is a time machine enabling me to look deep into the past, since the light hitting my eye had been traveling for years or possibly centuries to reach the earth, and could even be coming from stars that no longer existed.

      I wonder if this means that no information is ever lost. Say we come across a giant mirror in space someday and could watch past life on earth with incredible detail.

      But then, why would we need this much detail? Time itself is an information filter -- things that work will stick, others disappear.

    1. an answer you came up

      Or an answer you simply heard and retrieved from your associative memory without thinking.

    2. I don’t mind. I’m not trying to win any debates.

      He probably means debates in the formal sense. The best way to argue for something is to show it, for example building a business out of something other people scoff at.