51 Matching Annotations
  1. Jan 2023
    1. Cryptosporidium

      This parasite could also be present in water without coliform bacteria, so standard "total coliform" tests are likely to miss it. (link)

  2. Jan 2021
    1. In fact, it might very well be the case that, while finding that within narrowly defined categories ideas are getting harder to find, new ideas in general are getting easier to find over time, since we can now use all of the technology we invented previously and all of the scientific knowledge we collected to help us find them.

      I think what you show here is weaker than what this section seems to claim. It's not that individual case studies are irrelevant; it's that the paper's claim is restricted to "within each category, ideas eventually get harder to find", for which case studies are extremely relevant.

    2. vacuum tubes

      https://spectrum.ieee.org/semiconductors/design/during-the-20th-century-vacuum-tubes-improved-in-a-moores-lawlike-way

      (I didn't search for others)

      Although, you only consider your case for the 1700s and 1800s.

    3. TFP is an economic statistic that shows how quickly the economy’s aggregate output grows compared to its aggregate input of labor and capital. If both inputs and outputs are growing at the same rate, TFP stays constant. If the output is growing quicker, TFP is increasing. If it’s growing slower, TFP is decreasing.

      I think that, more importantly here, TFP growth is used in the theoretical model they are working under. What he is trying to show is exactly that the model needs alpha to be decreasing (or to be around 0 in this case), because TFP has only been growing linearly since 1950.

      I think that the arguments in this section are therefore not relevant as a case against the paper. They might be a case against the underlying theory, but that's a wider claim than this text suggests.
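
      For reference, the TFP level described above can be computed as the Solow residual. Below is a minimal sketch of that arithmetic; the Cobb-Douglas production function and alpha = 0.3 are standard textbook assumptions, not values taken from Bloom et al:

```python
# TFP (A) as the Solow residual under a Cobb-Douglas production function:
#   Y = A * K^alpha * L^(1 - alpha)   =>   A = Y / (K^alpha * L^(1 - alpha))

def tfp(output, capital, labor, alpha=0.3):
    """TFP level implied by aggregate output and aggregate inputs."""
    return output / (capital ** alpha * labor ** (1 - alpha))

a0 = tfp(100.0, 50.0, 20.0)

# Inputs and output grow at the same rate (10%) -> TFP stays constant.
a1 = tfp(110.0, 55.0, 22.0)
print(round(a1 / a0, 6))  # -> 1.0

# Output grows faster than inputs -> TFP increases.
a2 = tfp(120.0, 55.0, 22.0)
print(a2 > a1)  # -> True
```

      In growth-rate form this is g_A = g_Y - alpha * g_K - (1 - alpha) * g_L, which is the "output growth minus input growth" comparison in the quoted passage.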

    4. As we can see from Bloom et al’s data, for the time period studied by Bloom et al, the US TFP has been growing linearly.

      Well, looking at the data farther back in time we see that it looks more exponential, or like it wants to be exponential (which is the baseline Bloom et al consider) -

      and some other countries -

      (This is taken from the Long Term Productivity Database). Also, I think that https://wtfhappenedin1971.com/ might be relevant
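
      A quick way to check the linear-vs-exponential question on any such series is to compare a straight-line fit to the levels against one to the logs; a minimal sketch with a made-up exponential series (not the actual Long Term Productivity Database data):

```python
import math

def linear_r2(xs, ys):
    """R^2 of an ordinary least-squares line fit to (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

years = list(range(30))
series = [1.02 ** t for t in years]  # hypothetical 2%-per-year TFP series

# For exponential data, the log series is (almost exactly) linear,
# while a line fit to the raw levels leaves systematic curvature.
r2_logs = linear_r2(years, [math.log(v) for v in series])
r2_levels = linear_r2(years, series)
print(r2_logs > r2_levels)  # -> True
```

      For a genuinely linear series the comparison flips, which is one way to make "looks more exponential" precise.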

    5. although the paper is often interpreted

      I think it's worth mentioning in this subsection that there are parts of the paper that directly address idea-like phenomena; say, Section V.A on new molecular entities.

    6. Bloom et al define

      It was defined this way in Romer (1990).

    7. producing 2

      -> producing each

    8. first

      *second

  3. Dec 2020
  4. quantifieduncertainty.org quantifieduncertainty.org
    1. Incentive Problems With Current Forecasting Competitions.

      These pages don't show content - neither do the associated elements in the same row.

    2. Run quick experiments on systematic forecasting techniques.

      One thing I don't understand here: is the goal to improve decision-making in situations with a long time until feedback? If so, how do you plan to iterate?

    3. No. There have been setbacks very recently (Brexit, Trump, COVID challenges), but many successes as well (Many other world leaders, OpenPhil, Gates Foundation). In the long-term, institutions will still matter, and we don’t think they are hopeless.

      Controversial, and I'm not sure it addresses the core concern.

    1. Expected utility theory predicts that people are insensitive to changes in the dynamics.

      In the experiment, the participants were told that they were making 10 bets and wouldn't learn the results until all choices were made. So expected utility theory seems to apply well enough here.
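
      To make the "dynamics" point concrete, one can enumerate every outcome of 10 fair bets under additive versus multiplicative wealth dynamics. The payoffs below (x1.5 / x0.6, and +$50 / -$40) are hypothetical round numbers, not the paper's actual parameters:

```python
import itertools

def outcomes(n_bets, start, multiplicative):
    """Wealth after every possible heads/tails sequence of n_bets fair bets."""
    results = []
    for flips in itertools.product([True, False], repeat=n_bets):
        w = start
        for heads in flips:
            if multiplicative:
                w *= 1.5 if heads else 0.6   # hypothetical payoffs
            else:
                w += 50 if heads else -40    # hypothetical payoffs
        results.append(w)
    return results

mult = sorted(outcomes(10, 100.0, True))
add = sorted(outcomes(10, 100.0, False))

# Both dynamics have positive expected value over the whole sequence...
print(sum(add) / len(add))            # -> 150.0
print(sum(mult) / len(mult) > 100.0)  # -> True
# ...but the median multiplicative outcome is a loss of wealth.
print(mult[len(mult) // 2] < 100.0)   # -> True
```

      The gap between expected value and typical outcome is, presumably, the change in "dynamics" the quoted sentence refers to.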

    2. For instance

      I think that this is slightly misleading. It can be framed as choosing to work for a year for $12K instead of for a month for $2K, but these options also have very different costs in terms of work-hours.

  5. Nov 2020
  6. Jul 2020
    1. publishing houses are profit-driven

      It is a $10B industry. (expensive report)

    2. unlikely to be at the optimal level

      Perhaps an analysis of the political situation might be relevant.

      If some political affiliations have an inherent (dis)belief in (basic) science as something important to fund, then that would amount to an understandable bias (this study has such an analysis, but I'm sure there are more relevant resources).

      If there are major lobbying entities for or against science funding, that could also be a clear sign of bias. I'd expect advocates for science to be weaker and less well organized.

    3. increasing returns to scale

      Where exactly?

    4. easier to evaluate

      It's important to note that it's not clear how to actually do that evaluation. We can measure somewhat easily how capable a scientist is in her domain, but I'm not sure how big a problem that is generally for professionals with a similar amount of experience. What we'd like to measure is how likely a specific research group's plan is to result in improvements to the world, which is generally a hard problem when it comes to basic research.

    5. already existing infrastructure

      This has several of the usual drawbacks of large institutions, which I'm not sure are overcome by the benefits of grant-handling productivity and having more public scrutiny.

      Internal politics, large bureaucracy, and difficulty in making agile changes all hinder efficiency.

      At universities, PR/academic status correlates strongly with the amount of funding, which (I think) is not well enough aligned with public goods.

    6. than are needed to replace themselves

      Is this a growing trend? Is the ratio of PhDs/post-docs to academic positions really growing?

      Perhaps it has always been the case. Perhaps it is a feature and not a bug: if we assume that researcher productivity sits on a bell curve, we might not care much about anyone below (say) the 90th percentile, as they may not be worth the cost of the academic positions.

    7. This is because of strong incentives for freeriding on the efforts of others: countries might not have an incentive to spend more on basic research, because other countries might do the research anyway and everyone could benefit from it for free.

      Are there countries explicitly mentioning that kind of reasoning?

    8. Donations to research might have had a big impact historically

      What are the counterfactuals to the following examples? Would high-impact research be funded anyway, so that funding science has strongly diminishing returns? (My intuition is that this is not the case, because funders do not have the capacity or directive to pick the most promising research - unless the only way to estimate the promisingness of research is by assessing the researcher herself, which I think major funders are probably doing okay at.)

  7. Jun 2020
    1. Seedlings are not about finding a solution to a problem, they’re designed to verify or disprove a hypothesis

      This seems like a generally good starting point - https://cs.stanford.edu/~jsteinhardt/ResearchasaStochasticDecisionProcess.html

    2. The first question suggests that you should seek out heretics and people with expertise who are not experts.

      I think that this follows if there are reputational risks or a major groupthink problem. However, I think that many successful experts would be better at detecting systemic biases than almost all non-experts, and that much of the success of their work rests on this fact.

    3. The second question suggests building structured frameworks for mapping a discipline and its assumptions.

      See Architecting Discovery by Ed Boyden and Adam Marblestone for an example of that kind of a framework used in neuroscience.

    4. Shifting the risk from the performers to the program managers enables DARPA to tackle systemic problems where other models cannot

      I take this to mean personal career risk: the performers have a concrete research plan which is likely to work, but it is not clear whether it will have impact - and impact is what the program manager is measured on.

    5. their ability to move money around quickly

      and it’s hard for the performer to pull funding from the program manager.

    6. Therefore, despite all the talk here and elsewhere about ‘the ARPA Model’ we must keep in mind that we may be attributing more structure to the process than actually exists.

      It also points at empowering and training PMs as a good alternative to creating a new ARPA-style institution.

  8. May 2020
    1. Evolutionary fitness is the opposite

      The argument is misleading here. Calculating the momentum of a flowing river precisely is impossible due to its complexity, while it is possible to make general remarks about specific adaptive traits.

      Perhaps an important difference here is that much of physics is more amenable to approximations.

    1. Computer hardware

      Maybe the possibility of Drexler's CAIS is an argument in favor of increasing computer hardware.

      Given much greater computational resources but little algorithmic progress, specialized algorithms might prevail, and we would find ourselves in the CAIS world rather than the AGI world.

    1. So what does natural selection select for?

      I have some problems with this argument.

      1. This section actually claims that we expect the preference for "having the most offspring" to dominate.
      2. Under sufficiently complex dynamics, or cluelessness about the long-term effects of actions, caring about influencing the future or having the most offspring might be less adaptive than other traits which might be much more important for reasons unknown to us.
    2. because it is instrumentally useful

      Common-sense morality values are also instrumentally useful for group selection. Expanding "consequentialist capabilities" might, symmetrically, also be harmful.

      This is incorrect, though. Expansionism is mostly considered an instrumental value in and of itself by the people, and not by the evolutionary process that designed those people.

    3. They will bargain amongst each other and create a world that is good to live in

      This is supported by arguments like those in Paretotopian Goal Alignment.

    1. Architecting discovery may involve, at its core, asking at least three kinds of question

      This phrasing is a bit misleading, I think. The three questions below are:

      1. how do we pick the class of problem that the tool will solve or the space of hypotheses it will allow users to probe?
      2. how well can you envision or roadmap out the possible solutions so that you can pick the best one possible?
      3. once you have chosen a problem and have a roadmap of the space of possible solutions, how do you choose which path is the best?

      But these are questions about the general process itself. The suggested three questions for taking a topic and architecting discovery there are:

      1. What is the class of problems that the tool will solve? What is the space of hypotheses that it will allow users to probe?
      2. What is the complete set of solutions you can envision, and what is the best roadmap for achieving them?
      3. Among these solutions, considering the roadmap, which is the most promising path?
    1. A tiny portion

      This is a major crux for me. I'd expect it to be the most likely scenario: a convergence to the preferences of the people alive as civilization gets more prosperous.

      Well, I don't really buy this myself. I expect that some economic incentives will persist, perhaps being even more important than today, and will not allow a focus on well-being. Also, something might lock in before we get a hold of our preferences.

    2. For instance, an event that would create 10^25 unhappy beings in a future that already contains 10^35 happy individuals constitutes an s-risk, but not an x-risk.

      I find this extremely important, and it's not something I understood before. I imagined a homogeneous future, in which every life is either good or bad, not one in which there are vast amounts of suffering regardless of what other accomplishments there are. I find compelling the view that we should not allow 10^25 people to live miserable lives in order for us to have 10^35 happy people. Would I be okay with a rich civilization that expands through the use of slavery? Would I be willing to work for civilizational expansion if suffering is likely to increase dramatically with it?

    1. Gloor, 2017
    2. Confronting torture-level suffering

      I found that my intuition (which was initially to take (2)) shifted more toward (1) after viewing some of the videos on suffering in the link above.

      However, I think that the argument here - that torture-level suffering is so horrible and unimaginable - lacks the symmetric intuitions about extreme hedonic states. People go through a lot of terrible suffering in order to get another hit of heroin, and possibly wireheading really can be incredible. People who have had a bad trip still try LSD again because they think it is worth the risk.

    3. person-affecting views,

      The person-affecting view also supports making someone happy at the cost of creating miserable lives, which contradicts the view presented here.

    4. By contrast

      This again can be explained by the intuition that, in absolute value, being miserable counts for much more than being happy.

      Perhaps, though, this intuition is exactly the point.

    5. Two planets

      This may be explained hedonically via scope insensitivity, plus the intuition that being miserable is much more negative than the positive utility of being simply happy.

      If it were a difference of a million to one, instead of a million to a thousand, I'd feel horrible not bringing about the existence of another million happy lives.

  9. Aug 2016
    1. first CONV layer (left), and the 5th CONV layer (right)

      I think it is the other way around

  10. Jul 2016
    1. validation set

      Did they mean the training data?

    2. In practice

      Is it actually used in practice? Is it preferred over neural networks?

    1. valuable chemicals

      It is converted into lactate, which is used predominantly in the bioplastics industry.