10,000 Matching Annotations
  1. Aug 2019
    1. Democratic People's Republic of Korea[a]

      North Korea itself rejects communism.

      “There are two ways of looking at a place: There is what it calls itself, and there is what analysts or journalists want to say a place is,” Owen Miller, who lectures in Korean history and culture at London’s School of Oriental and African Studies (SOAS), told Newsweek.

      “On neither of those counts is North Korea Communist. It doesn’t call itself Communist—it doesn’t use the Korean word for Communist. It uses the word for socialism but decreasingly, less and less over the decades.”

      The state’s official ideology is juche, a Sino-Korean word used in both North and South Korea that roughly translates as “independence, or the independent status of a subject,” according to Miller.

      “Juche is enshrined in North Korea’s constitution, explicated in thousands of propaganda texts and books, while teachers indoctrinate North Korean children with the ideology at an early age.

The concept evolved in the 1950s, in the wake of the Korean War, as North Korea sought to distance itself from the influence of the big socialist powers: Russia and China. However, the concept has a more profound resonance for North Koreans, alluding to the centuries when Korea was a vassal state of the Chinese.

      “When Kim Il Sung started using the word, he was using [it] to refer to this sense of injured pride, going back decades and much further, hundreds of years under Chinese control. He is saying North Korea is going to be an independent nation in the world, independent of other nations,” Miller says.”

      Is North Korea Communist?

    1. The term was coined by Tim Berners-Lee for a web of data (or data web)[3] that can be processed by machines[4]—that is, one in which much of the meaning is machine-readable.

what the semantic web means in relation to machines

    2. The Semantic Web is an extension of the World Wide Web through standards by the World Wide Web Consortium (W3C).[1] The standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF). According to the W3C, "The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries".[2] The Semantic Web is therefore regarded as an integrator across different content, information applications and systems.

      definition of semantic web
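To make "machine-readable meaning" concrete, here is a minimal sketch using rdflib, a common Python library for RDF (my own illustration, not from the article):

```python
# RDF expresses facts as subject-predicate-object triples that software
# can query directly -- the machine-readable "meaning" described above.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF

g = Graph()
tbl = URIRef("https://www.w3.org/People/Berners-Lee/card#i")
g.add((tbl, FOAF.name, Literal("Tim Berners-Lee")))

# A program can now ask structured questions of the data:
for name in g.objects(tbl, FOAF.name):
    print(name)  # -> Tim Berners-Lee
```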

    1. https://astrosociety.org/edu/publications/tnl/23/23.html

      Broken link - is in archive.org though. Here is the quote:

Marsden continued to refine his calculations, and discovered that he could trace Comet Swift-Tuttle's orbit back almost two thousand years, to match comets observed in 188 AD and possibly even 69 BC. The orbit turned out to be more stable than he had originally thought, with the effects of the comet's jets less pronounced. Marsden concluded that it is highly unlikely the comet will be 15 days off in 2126, and he called off his warning of a possible collision. His new calculations show Comet Swift-Tuttle will pass a comfortable 15 million miles from Earth on its next trip to the inner solar system. However, when Marsden ran his orbital calculations further into the future, he found that, in 3044, Comet Swift-Tuttle may pass within a million miles of Earth, a true cosmic "near miss."

Marsden's prediction, and later retraction, of a possible collision between the Earth and the comet highlight the fact that we will most likely have century-long warnings of any potential collision, based on calculations of orbits of known and newly discovered asteroids and comets. Plenty of time to decide what to do.

      https://web.archive.org/web/20130402063233/https://astrosociety.org/edu/publications/tnl/23/23.html

    1. an active supervolcano

It is not a supervolcano. Its VEI (Volcanic Explosivity Index) is variously estimated as 6 or 7; a supervolcano has a VEI of at least 8. The claim is just taken from the title of the New Scientist article, and NS does tend to use hyperbole (exaggeration for emotional effect) sometimes.

      This is a recent paper labeling it as VEI 7

      Pan, B., de Silva, S.L., Xu, J., Chen, Z., Miggins, D.P. and Wei, H., 2017. The VEI-7 Millennium eruption, Changbaishan-Tianchi volcano, China/DPRK: New field, petrological, and chemical constraints on stratigraphy, volcanology, and magma dynamics. Journal of Volcanology and Geothermal Research, 343, pp.45-59.

This 2016 paper calls it VEI ≤ 6

      Despite its historical and geological significance, relatively little is known about Paektu, a volcano that has produced multiple large (volcanic explosivity index ≤ 6) explosive eruptions, including the ME, one of the largest volcanic events on Earth in the last 2000 years

The explosive ME deposited 23 ± 5 km3 dense rock equivalent (DRE) of material emplaced in two chemically distinct phases in the form of ash, pumice, and pyroclastic flow deposits.

Iacovino, K., Ju-Song, K., Sisson, T., Lowenstern, J., Kuk-Hun, R., Jong-Nam, J., Kun-Ho, S., Song-Hwan, H., Oppenheimer, C., Hammond, J.O. and Donovan, A., 2016. Quantifying gas emissions from the “Millennium Eruption” of Paektu volcano, Democratic People’s Republic of Korea/China. Science Advances, 2(11), p.e1600913. Press release:

      Study provides new evidence about gas emissions from ancient North Korean volcanic eruption

USGS definition of a supervolcano:

      The term "supervolcano" implies a volcanic center that has had an eruption of magnitude 8 on the Volcano Explosivity Index (VEI), meaning that at one point in time it erupted more than 1,000 cubic kilometers (240 cubic miles) of material. Eruptions of that size generally create a circular collapse feature called a caldera.

      What is a supervolcano?
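Putting the numbers side by side (my arithmetic; note that VEI is normally reckoned from bulk tephra volume, and the 23 km³ figure above is DRE):

$$ 23 \pm 5\ \mathrm{km^3\ (DRE)} \ll 1000\ \mathrm{km^3\ (VEI\ 8\ threshold)} $$

Even allowing a generous bulking factor from DRE to tephra volume, this is consistent with the VEI 6 to 7 estimates quoted above and well short of VEI 8.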

Again, the NS article is just using hyperbole for a more dramatic headline.

      Andy Coghlan (15 April 2016). "Waking supervolcano makes North Korea and West join forces". NewScientist. Retrieved 17 May 2019.

In your own supervolcano article it is listed as VEI 7, a "super eruption" as your page puts it, but not quite a supervolcano. The page itself explains that a supervolcano is VEI 8 or more.

  2. Jul 2019
    1. Strangelets are small pieces of strange matter, perhaps as small as nuclei. They would be produced when strange stars are formed or collide, or when a nucleus decays

      An excellent cite here for strangelets is the LHC safety review in 2011. It also gives additional details that would be useful for the article and includes a short summary of the state of current research on strangelet production. The supplement to the review describes how the LHC confirmed the emerging picture, which is that strange matter does not form at high energies

Also, just as ice cubes are not produced in furnaces, the high temperatures expected in heavy-ion collisions at the LHC would not allow the production of heavy nuclear matter, whether normal nuclei or hypothetical strangelets.

Review of the Safety of LHC Collisions, LHC Safety Assessment Group, 2011

Implications of LHC heavy ion data for multi-strange baryon production, LHC Safety Assessment Group, 26 Sept 2011

Kahle has been critical of Google's book digitization, especially of Google's exclusivity in restricting other search engines' digital access to the books they archive. In a 2011 talk Kahle described Google's 'snippet' feature as a means of tip-toeing around copyright issues, and expressed his frustration with the lack of a decent loaning system for digital materials. He said the digital transition has moved from local control to central control, non-profit to for-profit, diverse to homogeneous, and from "ruled by law" to "ruled by contract". Kahle stated that even public-domain material published before 1923, and not bound by copyright law, is still bound by Google's contracts and requires permission to be distributed or copied. Kahle reasoned that this trend has emerged for a number of reasons: distribution of information favoring centralization, the economic cost of digitizing books, the issue of library staff without the technical knowledge to build these services, and the decision of the administrators to outsource information services.
Student skills

Typically, literacy in the classroom has focused on the following building blocks: phonemic awareness, phonics, fluency, vocabulary, text comprehension (NEIRTIC, 2004). However, as the electronic age permeates our society, students need to be prepared for jobs that require further literacy skills. Some of these skills include the following (Kinzer, 2003, para. 15):

• Keyboarding
• Layout and design skills for creating presentations and web pages
• Critical thinking about video, still images, audio, text, and interrelationships, and how they jointly convey intended and unintended messages
• Skill in using a variety of software types
• Information gathering, retrieval, and copying into presentation formats
• Scaling images

Internet Workshop - an instructional model that educates students about a newly emerging form of literacy, the Internet. It is good to be aware of the skills that my students will need as young adults applying for jobs.

    1. This is the first time that ALMA has ever observed the surface of a star and this first attempt has resulted in the highest-resolution image of Betelgeuse available.

This is about a decade out of date. There is a higher-resolution image from 2009.

The Spotty Surface of Betelgeuse. Credit: Xavier Haubois (Observatoire de Paris) et al.

The paper, with the original figure, is here:

Haubois, X., Perrin, G., Lacour, S., Verhoelst, T., Meimon, S., Mugnier, L., Thiébaut, E., Berger, J.P., Ridgway, S.T., Monnier, J.D. and Millan-Gabet, R., 2009. Imaging the spotty surface of Betelgeuse in the H band. Astronomy & Astrophysics, 508(2), pp.923-932.

      There are other images of similar resolution. This is an article from 2018.

      Ariste, A.L., Mathias, P., Tessore, B., Lèbre, A., Aurière, M., Petit, P., Ikhenache, N., Josselin, E., Morin, J. and Montargès, M., 2018. Convective cells in Betelgeuse: imaging through spectropolarimetry. Astronomy & Astrophysics, 620, p.A199.

    1. Fortune

      So here, Wikipedia authors cite "Fortune" as a major section of the profile of Machiavelli's life without bothering to mention the specific power and significance of the word "fortuna" within the original text. Seems like a giant omission, even for people (like me) who have limited knowledge of the language of the original text.

    1. Well known potentially hazardous asteroids are normally only a hazard on a time scale of hundreds of years

Many are only potentially hazardous on a timescale of thousands of years or millions of years. For example, Swift-Tuttle's first chance of impact, about 1 in a million, is in 4479.

    1. Thus a wet bulb temperature of 35 °C (95 °F) is the threshold beyond which the body is no longer able to adequately cool itself

Confusingly, this doesn't explain that wet bulb temperatures differ significantly from the heat index. The US heat index is roughly the “perceived heat”, how warm it feels, and is used mainly for public outreach such as heat wave warnings, rather than scientific research.

Sadly, because the relationship depends on radiant heat (as well as humidity), there is no systematic way to convert one to the other. The heat index gives an idea of what the temperature feels like, but how well you can tolerate it may depend on the amount of humidity and how much of the perceived heat is due to radiant heat.

      A wet bulb temperature of 33°C (92°F) corresponds very roughly to a heat index of around 57°C (135°F) in the absence of radiant heat. But with radiant heat the heat index can increase relative to those values and be larger than you’d expect from the wet bulb temperature by over 7°C (11°F) for indoor conditions and over 11°C (18°F) for outdoor conditions with direct sunlight.

      Iheanacho, I., 2014. Can the USA National Weather Service Heat Index Substitute for Wet Bulb Globe Temperature for Heat Stress Exposure Assessment?.
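As a rough illustration of why the two scales can't be interconverted, here is the standard NWS (Rothfusz) heat index regression; note that it takes only air temperature and relative humidity, with no radiant heat term (a sketch, valid roughly for T ≥ 80°F and RH ≥ 40%):

```python
# NWS heat index (Rothfusz regression): a function of air temperature and
# relative humidity only. Radiant heat never appears, which is why no
# fixed conversion to wet bulb (globe) temperature exists.
def heat_index_f(t_f: float, rh_percent: float) -> float:
    t, rh = t_f, rh_percent
    return (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t ** 2
            - 5.481717e-2 * rh ** 2 + 1.22874e-3 * t ** 2 * rh
            + 8.5282e-4 * t * rh ** 2 - 1.99e-6 * t ** 2 * rh ** 2)

print(round(heat_index_f(92.0, 75.0)))  # about 116 (degrees F)
```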

    1. In Hardy's words, "Exposition, criticism, appreciation, is work for second-rate minds. [...] It is a melancholy experience for a professional mathematician to find himself writing about mathematics. The function of a mathematician is to do something, to prove new theorems, to add to mathematics, and not to talk about what he or other mathematicians have done."

      similar to Nassim Taleb's "History is written by losers"

    1. believe that a limited convention is possible.

So does James Kenneth Rogers, attorney at Osborn Maledon, P.A. (more about him), who is cited later in this article.

He makes various arguments for this, mainly that an unlimited convention would defeat the clause's purpose, because States would be reluctant to call such a convention.

However, he acknowledges that the Philadelphia convention in 1787 (not called under Article V) went beyond its own remit, and then he says that the main protection is that 3/4 of States have to support any amendments made by any such convention, which would include at least 22 of the 34 or more States that originally called for it (at most 12 in total can be against any amendment).

      For more details: later annotation in page

    2. The fact that Congress has not called such a convention, and that courts have rejected all attempts to force Congress to call a convention, has been cited as persuasive evidence that Paulsen's view is incorrect

Rogers’ legal opinion about Paulsen’s argument is misparaphrased in this article. He does not use the fact that no convention has been held yet as a reason to suppose that the convention has to be limited; indeed, he acknowledges that the Philadelphia convention in 1787 (although not called under Article V) exceeded its mandate, and cites this as a reasonable concern.

He uses other arguments, mainly that an unlimited convention would defeat the clause's purpose, because States would be reluctant to call one out of fear of what other things it might decide. He also says that if they thought this would happen, the States would immediately rescind their applications, so preventing the convention - something Idaho has already done.

He does say that if the convention were unlimited then all existing applications could be aggregated together to call a single convention to discuss them all, but he does not use the fact that this has not happened as an argument that such a convention is impossible. Rather, he says that the main protection is that 3/4 of States have to support any amendments made by any such convention. This would include a majority of the original 34 or more States that called for it (at most 12 of them could refuse to ratify). The arguments are:

• The aim of the clause is to allow states to circumvent a recalcitrant Congress, so it must allow the States to limit the convention.
• If States were unable to limit the scope of a convention, they would not want to apply for one because of the uncertainties of its results, so the purpose of the clause would be frustrated.

It then mentions this concern:
• If the States can't limit the scope, then all the applications would be counted in aggregate, and on that basis a convention should already be called, as there are requests from more than two thirds of States. It says, however, that if such a convention were about to be called, the States would immediately rescind their applications, giving the example of Idaho, which has done so already out of such concerns.

      It then says

• If the arguments that States can limit scope are valid, then applications for a convention on different subjects should be counted separately. And if the applications are tallied this way, then the convention would be limited.

It gives the example of the Philadelphia Convention of 1787, which exceeded its mandate of revising the Articles of Confederation, to show that there are well-founded concerns that a modern convention with a limited mandate could exceed its original scope.

It says it would be difficult for a government to intervene, as a constitutional convention could conceivably claim independent authority.

However, it goes on to say that any amendments have to be ratified by 3/4 of the States. So, if the convention proposes extra amendments, they would only be accepted if ratified by 38 States. This would mean that most of the States that originally requested the convention would also ratify the amendments, thus legitimizing its actions.

(The maths: 38 out of 50 have to ratify, so up to 12 could refuse to ratify, and a convention requires 2/3 of 50, or 34 States, to be initiated. If the States that don't ratify are all from the original callers, that still leaves 22 ratifying out of the original 34, or over 64% of them.)
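Spelled out:

$$ \left\lceil \tfrac{2}{3} \times 50 \right\rceil = 34 \text{ States to call},\qquad \left\lceil \tfrac{3}{4} \times 50 \right\rceil = 38 \text{ to ratify},\qquad \frac{34 - (50 - 38)}{34} = \frac{22}{34} \approx 65\%. $$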

      The section concludes that

      > The ratification process itself is thus the States' means of enforcing a subject-matter limitation. If the States determine that the convention exceeded its scope, they can refuse to ratify the proposed amendments.


      The part of the passage that was misparaphrased by this article is this one:

      The second argument—that the States have no power beyond initiating a convention—is partially correct. They do, however, have indirect authority to limit the convention. Congress’s obligation to call a convention upon the application of two thirds of the States is mandatory, so it must call the convention that the States have requested. Thus, Congress may not impose its own will on the convention. As argued above, the purpose of the Convention Clause is to allow the States to circumvent a recalcitrant Congress. The Convention Clause, therefore, must allow the States to limit a convention in order to accomplish this purpose. The prospect of a general convention would raise the specter of drastic change and upheaval in our constitutional system. State legislatures would likely never apply for a convention in the face of such uncertainties about its results, especially in the face of a hostile national legislature. [73] States are far more likely to be motivated to call a convention to address particular issues. If the States were unable to limit the scope of a convention, and therefore never applied for one, the purpose of the Convention Clause would be frustrated.

A related concern is whether States’ applications that are limited to a particular subject should be considered jointly regardless of subject or tallied separately by subject matter to reach the two-thirds threshold necessary for the calling of a convention. [74] This is an important question because if all applications are considered jointly regardless of subject matter, Congress may have the duty to call a convention immediately based on the number of presently outstanding applications from states on single issues. [75]

      If the above arguments about the States’ power to limit a convention are valid, however, then applications for a convention for different subjects should be counted separately. This would ensure that the intent of the States’ applications is given proper effect. An application for an amendment addressing a particular issue, therefore, could not be used to call a convention that ends up proposing an amendment about a subject matter the state did not request be addressed. [76]

Footnotes:

73. These fears, however, are mitigated by the States’ own powers over ratification.

74. Paulsen, supra note 3, at 737–43.

75. Id. at 764. Paulsen counts forty-five valid applications as of 1993.

76. If it were established that applications on different topics are considered jointly when determining if the two-thirds threshold has been reached, states would almost certainly rescind their outstanding applications to prevent a general constitutional convention. Some states have already acted based on fears of a general convention. For example, in 1999 the Idaho legislature adopted a resolution rescinding all of its outstanding applications for a constitutional convention. S.C.R. 129, 1999 Leg. (Idaho 1999). Georgia passed a similar resolution in 2004. H.R. 1343, Gen. Assemb. 2004 (Ga. 2004). Both resolutions were motivated by a fear that a convention could exceed its scope and propose sweeping changes to the Constitution.

      pdf here

It is this combination of features that also makes HyperCard a powerful hypermedia system. Users can build backgrounds to suit the needs of some system, say a rolodex, and use simple HyperTalk commands to provide buttons to move from place to place within the stack, or provide the same navigation system within the data elements of the UI, like text fields. Using these features, it is easy to build linked systems similar to hypertext links on the Web.[5] Unlike the Web, programming, placement, and browsing were all done in the same tool. Similar systems have been created for HTML but traditional Web services are considerably more heavyweight.
    1. ; some studies have reported that in adult humans about 700 new neurons are added in the hippocampus every day

      2019 study finds thousands of young neurons in brain tissue through to the ninth decade of life.

      By utilizing highly controlled tissue collection methods and state-of-the-art tissue processing techniques, the researchers found thousands of newly formed neurons in 13 healthy brains from age 43 up to age 87 with a slight age-related decline in neurogenesis (about 30% from youngest to oldest).

      Old Brain, New Neurons? Harvard University press release

New neurons (in red) in brain tissue from a 68-year-old.

Original paper: Moreno-Jiménez, E.P., Flor-García, M., Terreros-Roncal, J., Rábano, A., Cafini, F., Pallas-Bazarra, N., Ávila, J. and Llorens-Martín, M., 2019. Adult hippocampal neurogenesis is abundant in neurologically healthy subjects and drops sharply in patients with Alzheimer’s disease. Nature Medicine, 25(4), p.554.

    1. was nicknamed the annus confusionis ("year of confusion")

It was actually called the ultimus annus confusionis, or the "final year of confusion". Also, the primary source says it was 443 days.

The primary source here is Macrobius, in his Saturnalia 1, 14, 3 (400 AD).

He first describes various Roman theories about when intercalation (the insertion of leap days or months) started, the earliest being an idea from Varro that it was a very ancient law inscribed on a bronze column around 472 BCE.

      He then goes on to say (page 165 and page 166):

There was a time when intercalation was entirely neglected out of superstition, while sometimes the influence of the priests, who wanted the year to be longer or shorter to suit the tax farmers, saw to it that the number of days in the year was now increased, now decreased, and under the cover of a scrupulous precision the opportunity for confusion increased.

1. But Gaius Caesar took all this chronological inconsistency, which he found still ill-sorted and fluid, and reduced it to a regular and well-defined order; in this he was assisted by the scribe Marcus Flavius, who presented a table of the individual days to Caesar in a form that allowed both their order to be determined and, once that was determined, their relative position to remain fixed.

2. When he was on the point of starting this new arrangement, then, Gaius Caesar let pass all the days still capable of creating confusion: when that was done, the final year of confusion was extended to 443 days. Then imitating the Egyptians, who alone know all things concerned with the divine, he undertook to reckon the year according to the sun, which completes its course in 365¼ days.

      The source says

      Caesar called 46 BC the ultimus annus confusionis ("The final year of confusion")

      Roman wits, however, called it the annus confusionis ("Year of Confusion").

      Your ref 2 says

      We think of the calendar as a universal measure of time. It's like a perfect grid that can be extended endlessly into the future. There's a website that tells me my birthday in the year 2128 will fall on a Monday.

But in antiquity, calendars were simply ways of organizing religious festivals, the terms of contracts, and other social arrangements. People knew calendars could be shifted and manipulated - even for political reasons. Priests and officials "kept" the time, and different calendars were in use throughout the world. Calendar time simply wasn't as fixed back then. An ancient calendar was more like a schedule, subject to change and revision.

      So Caesar's reform was all the more remarkable. As both high priest and dictator of Rome, he had the authority to impose a whole new scheme on the Roman world. Cicero joked that this man now wished to control the very stars, which rose according to his new calendar as if by edict. Caesar's calendar still needed some minor adjustments, but Europe never got another jumbo year like 46 BC. And to this day, we are still marching along on Caesar's time.

The Longest Year in History, by University of Houston scholar Richard Armstrong

      Here is an academic secondary source

... our seasons come always at very nearly the same time, as fixed by our calendar, so much so that if there is any variety, we remark on it, and say that spring is late, or autumn early, this year. It needs some little historical knowledge and imagination to remind us of a time when it was not so; when months were lunar, many days were named and not numbered, and the year had so little to do with the seasons that it was quite possible for November or December to arrive before the summer was well over. Yet this was the case in the greatest civilizations of classical antiquity until a comparatively late date. For Rome, the year which we call 46 B.C. is called by Macrobius the last year of the muddled reckoning, annus confusionis ultimus, and it was 445 days long, so much had the nominal dates got behind the real ones; with the next year began the Julian reckoning, albeit with sundry boggles on the part of the Roman officials who did not quite understand it, and long delays before the whole Western world adopted it.

Footnote: Macrobius, Saturnalia 1, 14, 3: no one, except moderns who should know better, ever calls it the annus confusionis simply.

      Rose HJ. The Pre-Caesarian Calendar: Facts and Reasonable Guesses. The Classical Journal. 1944 Nov 1;40(2):65-76.

    1. An Oblivious Tree is a rooted tree with the following property: All the leaves are in the same level. All the internal nodes have degree at most 3. Only the nodes along the rightmost path in the tree may have degree of one.

      Note this is not the definition of the oblivious decision trees in the CatBoost paper.

There, an oblivious decision tree means a tree where the feature used for splitting is the same across all intermediate nodes within the same level of the tree, and the leaves are all at the same level.

      See: https://stats.stackexchange.com/questions/353172/what-is-oblivious-decision-tree-and-why
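A minimal sketch of that CatBoost-style structure (my own illustration, not CatBoost's code): one (feature, threshold) pair per level, so a depth-d tree is just d comparisons whose outcomes form a bit index into 2^d leaves.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObliviousTree:
    features: List[int]      # one feature index per level
    thresholds: List[float]  # one threshold per level
    leaves: List[float]      # 2 ** depth leaf values

    def predict(self, x: List[float]) -> float:
        idx = 0
        for f, t in zip(self.features, self.thresholds):
            idx = (idx << 1) | (x[f] > t)  # every level contributes one bit
        return self.leaves[idx]

# Depth 2: level 0 always splits on feature 1, level 1 always on feature 0.
tree = ObliviousTree(features=[1, 0], thresholds=[0.5, 2.0],
                     leaves=[0.1, 0.2, 0.3, 0.4])
print(tree.predict([3.0, 0.7]))  # bits 1,1 -> leaf index 3 -> 0.4
```

Because every node in a level shares the same test, prediction is branch-free and fast, which is one reason this shape is used.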

    1. He was at first unable to afford the surgery that he needed immediately.

WTH America, seriously? One of the greatest musicians of all time, and he was unable to afford surgery that could save his life?

    1. Its mission is to use biologically-detailed digital reconstructions and simulations of the mammalian brain to identify the fundamental principles of brain structure and function

      Most neuroscientists think this is impossible with current knowledge.

This annotation paraphrases parts of the article in BBC Future, Will we ever ... simulate the human brain?, which is cited as a summary of the issues on page 9 of the 2015 mediation report on the Blue Brain project.

      A billion dollar project claims it will recreate the most complex organ in the human body in just 10 years. But detractors say it is impossible. Who is right?

      Is it even possible to build a computer simulation of the most powerful computer in the world – the 1.4-kg (3 lb) cluster of 86 billion neurons that sits inside our skulls?

      The very idea has many neuroscientists in an uproar, and the HBP’s substantial budget, awarded at a tumultuous time for research funding, is not helping

      The problem is that though neuroscientists have built neural nets since the 1950s, the vast majority treat each neuron as a single abstract point.

Markram wants to treat each neuron as a complex entity, together with the active genes that switch on and off inside it, the 3,000 synapses that let each neuron connect with its neighbours, the ion channels (molecular gates) that let it build up a voltage by moving charged particles in and out across its membrane, and the resulting electrical activity.

Critics say that even building a single neuron model in this way is fiendishly difficult. And we have even less knowledge about how these cells connect.

Markram's idea was to do a complete inventory of which genes are switched on in which cells in which parts of the brain, the "single-cell transcriptome", and then, based on that, he thinks he can recreate the electrical behaviour of each cell and how the neurons' branches grow from scratch.

      Eugene Izhikevich from the Brain Corporation thinks we should be able to build a network with the connectivity and anatomy of a real brain, but that it would just be a fantastically detailed simulation of a dead brain in a vat - that it would not be possible to simulate an active brain.

      Markram himself says that his aim is not to build a brain that could act like us.

      “People think I want to build this magical model that will eventually speak or do something interesting,” says Markram. “I know I’m partially to blame for it – in a TED lecture, you have to speak in a very general way. But what it will do is secondary. We’re not trying to make a machine behave like a human. We’re trying to organise the data.”

Chris Eliasmith from the University of Waterloo, Canada, told BBC Future:

      “The project is impressive but might leave people baffled that someone would spend a lot of time and effort building something that doesn’t do anything,”

      He is involved in the IBM brain simulation called SyNAPSE which also doesn't do very much. He says

      “Markram would complain that those neurons aren’t realistic enough, but throwing a ton of neurons together and approximately wiring them according to biology isn’t going to bridge this gap,”

      Will we ever ... simulate the human brain?

Jetsun Milarepa (Tibetan: རྗེ་བཙུན་མི་ལ་རས་པ, Wylie: rje btsun mi la ras pa, 1028/40–1111/23)[1] was a Tibetan siddha, who famously was a murderer as a young man then turned to Buddhism to become an accomplished buddha despite his past. He is generally considered as one of Tibet's most famous yogis and poets, serving as an example for the Buddhist life. He was a student of Marpa Lotsawa, and a major figure in the history of the Kagyu school of Tibetan Buddhism.[1]

This is a hagiography completed in 1488, three and a half centuries after his death. It was written by an inspirational poet and nyönpa or "religious madman", Gtsang-smyon He-ru-ka. It is a classic of Tibetan literature, but it is not a biography. This article only presents this later account.

The earliest known account of his life is strikingly different. It is attributed to Milarepa's principal disciple, Gampopa, though it is probably lecture notes by one of his students.

In this earliest account, he is not a murderer. There is no mention of him killing anyone with black magic, or of his trials constructing towers under Marpa. And it is his mother who dies when he is young, not his father.

Andrew Quintman, whose thesis and subsequent book were about Milarepa's life, has not attempted to deduce his "real life", though he does say there is good evidence that Milarepa existed as a historical figure.

      For an expanded version of this article with cites, and discussion of the earlier accounts, see Milarepa

    1. A practical example of service design thinking can be found at the Myyrmanni shopping mall in Vantaa, Finland. The management attempted to improve the customer flow to the second floor as there were queues at the landscape lifts and the KONE steel car lifts were ignored. To improve customer flow to the second floor of the mall (2010) Kone Lifts implemented their 'People Flow' Service Design Thinking by turning the Elevators into a Hall of Fame for the 'Incredibles' comic strip characters. Making their Elevators more attractive to the public solved the people flow problem. This case of service design thinking by Kone Elevator Company is used in literature as an example of extending products into services.
    1. The report estimated 86,000 casualties, including 3,500 fatalities, 715,000 damaged buildings, and 7.2 million people displaced, with two million of those seeking shelter, primarily due to the lack of utility services. Direct economic losses, according to the report, would be at least $300 billion

This is not modeling a single event. The cite itself explains that it is for all three segments of the fault hypothetically rupturing as a single fault of magnitude 7.7. In actuality it would be three separate earthquakes.

      The combined rupture of all three segments simultaneously is designed to approximate the sequential rupture of all three segments over time. The magnitude of Mw7.7 is retained for the combined rupture.

      It also explains that these are mainly minor injuries.

Nearly 86,000 total casualties are expected for the 2:00AM event. A large portion of these casualties are minor injuries, approximately 63,300, though 3,500 fatalities are also expected.

It goes on to explain that these are immediate deaths from buildings and bridges:

Those estimates include casualties resulting from structural building and bridge damage only. Therefore, the estimates do not include injuries and fatalities related to transportation accidents, fires, or hazmat exposure. This section deals only with injuries. Fatalities are addressed under mortuary services. The injuries and casualties estimated by the model are only for those that occur at the time of the event. The model does not provide for increases in these numbers that occur post event. For example, those that sustain injuries may die later, or injuries incurred as a result of response activities may result in fatalities.

Under mortuary services it has a table which breaks down the 3,500 by state. That covers the eight states of Missouri, Illinois, Indiana, Kentucky, Tennessee, Alabama, Mississippi and Arkansas, with a total population of 43 million according to the 2000 data they were using. Most are in Tennessee, which had a population of 5.69 million and would have 1,319 fatalities in this scenario. By comparison, the US yearly death rate is 8.1 per thousand, so for Tennessee that is about 45,000 a year.
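Making that comparison explicit (my arithmetic, consistent with the ~45,000 figure above):

$$ 5.69 \times 10^{6} \times \frac{8.1}{1000} \approx 46{,}000 \text{ deaths/year in Tennessee, versus } 1{,}319 \text{ in the scenario.} $$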

      In the conclusion it says

“Some impacts may be mitigated by retrofitting infrastructure in the most vulnerable areas. By addressing infrastructure vulnerability prior to such a catastrophic event, the consequences described in this report may be reduced substantially. The resource gaps and infrastructure damage described in this analysis present significant unresolved strategic and tactical challenges to response and recovery planners. It is highly unlikely that the resource gaps identified can be closed without developing new strategies and tactics and expanded collaborative relationships.”

    1. A 2015 study suggested that the AMOC has weakened by 15-20% in 200 years

      This doesn't seem to have been updated since 2015. The IPCC report in 2018 (chapter 3) says

It is more likely than not that the Atlantic Meridional Overturning Circulation (AMOC) has been weakening in recent decades, given the detection of the cooling of surface waters in the North Atlantic and evidence that the Gulf Stream has slowed since the late 1950s. There is only limited evidence linking the current anomalously weak state of AMOC to anthropogenic warming. It is very likely that the AMOC will weaken over the 21st century. The best estimates and ranges for the reduction based on CMIP5 simulations are 11% (1–24%) in RCP2.6 and 34% (12–54%) in RCP8.5 (AR5). There is no evidence indicating significantly different amplitudes of AMOC weakening for 1.5°C versus 2°C of global warming.

      Hoegh-Guldberg, O., Jacob, D., Taylor, M., Bindi, M., Brown, S., Camilloni, I., Diedhiou, A., Djalante, R., Ebi, K., Engelbrecht, F. and Guiot, K., 2018. Impacts of 1.5 ºC global warming on natural and human systems. Chapter 3, section 3.3.8

    1. In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s.

It took half a century for these disciplines to discern their complementarity!

    1. Apart from a few asteroids whose densities have been investigated,[6] one has to resort to enlightened guesswork.

The field has moved on a lot since then. This gives an idea of the range of values for NEOs:

So, for a 200 meter asteroid the density is between 1 and 3 g/cm³, most likely around 1.75 or so, but with a small chance of an iron meteorite of 6 to 7 g/cm³. For a 20 meter asteroid it's a similar range, but with two very likely densities of around 2.2 and around 2.8 (just going by eye from the diagram in the report). The 100 meter size range is similar. This is based on an analysis by structural type and composition.

Report of the Near-Earth Object Science Definition Team: Update to Determine the Feasibility of Enhancing the Search and Characterization of NEOs, September 2017
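To see why the density guess matters, here is a back-of-envelope impact-energy sketch (my own illustration; the 17 km/s entry speed and spherical shape are assumptions, not from the report):

```python
import math

def impact_energy_megatons(diameter_m: float, density_kg_m3: float,
                           speed_m_s: float = 17_000.0) -> float:
    """Kinetic energy of a spherical impactor, in megatons of TNT."""
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    energy_j = 0.5 * density_kg_m3 * volume * speed_m_s ** 2
    return energy_j / 4.184e15  # joules per megaton of TNT

# A 200 m body at the "most likely" 1.75 g/cm^3 vs a rare iron body at 7:
print(impact_energy_megatons(200, 1750))  # ~250 Mt
print(impact_energy_megatons(200, 7000))  # ~1000 Mt
```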

    1. Authentication (from Greek: αὐθεντικός authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user.

      The assertion does not need to be about an actor's identity per se?

    1. The Battlestar Galactica, an aircraft carrier in space that fought in the earlier war, is in the final stages of being decommissioned and converted to a museum when the attack occurs. During her decades of colonial service the Galactica's computer systems had never been networked so the Galactica is unaffected by the Cylon sabotage.

      Galactica, using old tech, was saved from the Cylon hack.

    1. At the start of the 1970s, The New Communes author Ron E. Roberts classified communes as a subclass of a larger category of Utopias.[5] He listed three main characteristics. Communes of this period tended to develop their own characteristics of theory though, so while many strived for variously expressed forms of egalitarianism, Roberts' list should never be read as typical. Roberts' three listed items were: first, egalitarianism – that communes specifically rejected hierarchy or graduations of social status as being necessary to social order. Second, human scale – that members of some communes saw the scale of society as it was then organized as being too industrialized (or factory sized) and therefore unsympathetic to human dimensions. And third, that communes were consciously anti-bureaucratic.
    1. Although numerous studies point to resistance to some of Mars conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith, and others, all at the same time and in combination.[230] Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly.[21]

The researchers are of the view that their work strongly supports the possibility that terrestrial microbes most likely can adapt physiologically to live on Mars:

      "This work strongly supports the interconnected notions (i) that terrestrial life most likely can adapt physiologically to live on Mars (hence justifying stringent measures to prevent human activities from contaminating / infecting Mars with terrestrial organisms); (ii) that in searching for extant life on Mars we should focus on "protected putative habitats"; and (ii) that early-originating (Noachian period) indigenous Martian life might still survive in such micro-niches despite Mars' cooling and drying during the last 4 billion years"

      de Vera, Jean-Pierre; Schulze-Makuch, Dirk; Khan, Afshin; Lorek, Andreas; Koncz, Alexander; Möhlmann, Diedrich; Spohn, Tilman (2014). "Adaptation of an Antarctic lichen to Martian niche conditions can occur within 34 days". Planetary and Space Science. 98: 182–190. Bibcode:2014P&SS...98..182D. doi:10.1016/j.pss.2013.07.014. ISSN 0032-0633.

    2. Currently, the surface of Mars is bathed with radiation, and when reacting with the perchlorates on the surface, it may be more toxic to microorganisms than thought earlier.[11][12] Therefore, the consensus is that if life exists —or existed— on Mars, it could be found or is best preserved in the subsurface, away from present-day harsh surface processes.

      This is the old view from around 2007. Nowadays the surface is also thought to be of interest for the search for present day life on Mars.

      Cites here are from "A new analysis of Mars “special regions”: findings of the second MEPAG Special Regions Science Analysis Group (SR-SAG2)." 2014

      (see section 2.1, page 891)

      Finding 2-1: Modern martian environments may contain molecular fuels and oxidants that are known to support metabolism and cell division of chemolithoautotrophic microbes on Earth

(See section 3.6, Ionizing radiation at the surface, page 891 of [1]):

Finding 3-8: From MSL RAD measurements, ionizing radiation from GCRs at Mars is so low as to be negligible. Intermittent SPEs can increase the atmospheric ionization down to ground level and increase the total dose, but these events are sporadic and last at most a few (2–5) days. These facts are not used to distinguish Special Regions on Mars.

      Over a 500-year time frame, the martian surface could be estimated to receive a cumulative ionizing radiation dose of less than 50 Gy, much lower than the LD90 (lethal dose where 90% of subjects would die) for even a radiation-sensitive bacterium such as E. coli (LD90 of ~200–400 Gy)

(See section 3.7, Polyextremophiles: combined effects of environmental stressors, of [1]):

Finding 3-9: The effects on microbial physiology of more than one simultaneous environmental challenge are poorly understood. Communities of organisms may be able to tolerate simultaneous multiple challenges more easily than individual challenges presented separately. What little is known about multiple resistance does not affect our current limits of microbial cell division or metabolism in response to extreme single parameters.

      All citing:

      Rummel, J.D., Beaty, D.W., Jones, M.A., Bakermans, C., Barlow, N.G., Boston, P.J., Chevrier, V.F., Clark, B.C., de Vera, J.P.P., Gough, R.V. and Hallsworth, J.E., 2014. A new analysis of Mars “special regions”: findings of the second MEPAG Special Regions Science Analysis Group (SR-SAG2).

    3. The search for evidence of habitability, taphonomy (related to fossils), and organic compounds on Mars is now a primary NASA and ESA objective.

Since the article is about Life on Mars, it should surely mention the first of NASA's four science goals:

      Goal I: determine if Mars ever supported life

      • Objective A: determine if environments having high potential for prior habitability and preservation of biosignatures contain evidence of past life.
• Objective B: determine if environments with high potential for current habitability and expression of biosignatures contain evidence of extant life.

      From: Hamilton, V.E., Rafkin, S., Withers, P., Ruff, S., Yingst, R.A., Whitley, R., Center, J.S., Beaty, D.W., Diniega, S., Hays, L. and Zurek, R., Mars Science Goals, Objectives, Investigations, and Priorities: 2015 Version.

    1. There is an almost universal consensus among scholars that the Exodus story is best understood as myth

This is an oversimplification: it's possible that some of the population of Israelites did come from Egypt, possibly many thousands of them, and that the story has elements from the experiences of those who did.

      "While there is a consensus among scholars that the Exodus did not take place in the manner described in the Bible, surprisingly most scholars agree that the narrative has a historical core, andthat some of the highland settlers came, one wayor another, from Egypt "

In this, I am not referring to the various traditions of Israel’s interaction with Egypt resulting from the era of Egyptian control in Canaan or from some relations with the Hyksos, which found their way into the Bible, but to the possibility that there was a group which fled Egypt, and brought this story of Exodus with it. Though the size of this group is debated, most of the above scholars agree that it was in the range of a few thousands, or even hundreds (some give it more weight, e.g., Hoffmeier 1997). Still, despite the limited size of this group, it appears that during the process of Israel’s ethnogenesis its story became part of the common history of all the Israelites. Most of those who accept some historical core for the story of the Exodus from Egypt, date it to the thirteenth century, at the time of Ramses II, while others date it to the twelfth century, during the time of Ramses III.

Archaeology does not really contribute to the debate over the historicity or even historical background of the Exodus itself, but if there was indeed such a group, it contributed the Exodus story to that of all Israel. While I agree that it is most likely that there was such a group, I must stress that this is based on an overall understanding of the development of collective memory and of the authorship of the texts (and their editorial process). Archaeology, unfortunately, cannot directly contribute (yet?) to the study of this specific group of Israel’s ancestors.

So was this Exodus group also Merneptah’s Israel, or at least part of it? Clearly, if there was an Exodus in the thirteenth century this group of people could have been part of Merneptah’s Israel. However, despite the assumed significance of this group (the Exodus as a "national" epic, more below), it is likely that this group was incorporated at a later stage, only after Merneptah’s time, or at least that it was distinct from Merneptah’s Israel. After all, although this group clearly brought with it some of what became the history of Israel, it wasn’t Merneptah’s Israel, or any "Israel" for that matter. While many scholars agree that the Exodus group brought with it YHWH as a new deity, the name Israel has the component "El," rather than "Ya" or "Yahu." Thus, Israel could [have] preceded the arrival of the Exodus group, and it is likely that the latter was not Israel’s "core" group.

      See also the Wikipedia article

    1. There is no indication that the Israelites ever lived in Ancient Egypt, and the almost universal consensus among scholars is that the Exodus story is best understood as myth.[

This is an oversimplification: it's possible that some of them did, and that the story has elements from the experiences of those who did.

      "While there is a consensus among scholars that the Exodus did not take place in the manner described in the Bible, surprisingly most scholars agree that the narrative has a historical core, andthat some of the highland settlers came, one wayor another, from Egypt "

(The fuller passage is quoted in the previous annotation.)

      See also the Wikipedia article

    1. The curriculum of "The English High School" was clearly established to funnel a certain population of students into positions that were deemed "suitable" for their socioeconomic class. In Jean Anyon's "Social Class and the Hidden Curriculum of Work" she states that social class is perceived as a complex of social relations that one develops as one grows up and acquires certain bodies of knowledge, skills, abilities, and traits. She states that these define our material ties to the world and states an important concern of whether these relationships are developing in children in schools within particular social class contexts. The establishment of the English High School, compared to schools like Boston Latin, shows that these relationships did develop in children depending on the social class context of their school.

It Takes a Nation of Millions to Hold Us Back is the second studio album by American hip hop group Public Enemy. It was released on June 28, 1988.

      here is an annotation

    1. Carbon mineralization

The COSIA Carbon XPRIZE Challenge is a competition to convert CO2 from either a coal or gas power plant into the products with the highest net value. In April 2018, ten finalists were given $5 million each to demonstrate their technologies at large scale in the real world. The winner gets a $7.5 million grand prize, to be announced in March 2020.

Five of the ten are focused on carbon mineralization technology. One of them is a team from Aberdeen that hopes to use CO2 capture to make the entire concrete industry carbon negative.

The Carbon Capture Machine precipitates CO2 into calcium and magnesium carbonates (much like stalactites in caves) as a carbon negative replacement for ground calcium carbonate (GCC), which is needed for concrete. If this works on a commercial scale it can decarbonize the concrete industry, which accounts for 6% of the world's annual CO2 emissions. If they can make it commercially viable, GCC has a market value of $20 billion. [Carbon Upcycling](http://www.co2upcycling.com/) makes new CO2ncrete from CO2 and chemicals, competing directly with the $400 billion concrete industry in places like California, which has a carbon tax and a mandate for low carbon building materials.
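The underlying mineralization step is carbonate precipitation, schematically (a generic carbonation reaction for illustration; the companies' actual process chemistry may differ):

$$ \mathrm{CO_2 + Ca(OH)_2 \rightarrow CaCO_3 + H_2O} $$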

[CarbonCure Technologies](https://www.carboncure.com/) injects CO2 into wet concrete while it is being mixed. They are already in commercial use with 100 installations across the US, retrofitting concrete plants for free and then charging a licensing fee. It may take up to 20 years to be used at scale for reinforced concrete, because that is the required durability testing period.

      For more on this see Between a Rock and Hard Place: Commercializing CO2 Through Mineralization

  3. Jun 2019
Crucially, a node loses its ability to function as soon as the node it is dependent on ceases to function, while it may not be so severely affected by losing a node it is connected to.

But isn't this comparison unfair? I mean, doesn't it actually depend on the number of connectivity and dependency links it has? In other words, it is true that the node ceases to function if it loses the only node it depends on. But wouldn't it be equally dramatic if it lost the only node it is connected to? It seems to me it is all a matter of how many nodes it is connected to and how many it depends on. (See the sketch below.)
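A toy sketch of the asymmetry in question (my own illustration, not from the article): a node dies if any node it depends on dies, but survives losing connectivity links as long as at least one neighbour remains.

```python
from typing import Dict, Set

def surviving(nodes: Set[str],
              connects: Dict[str, Set[str]],
              depends_on: Dict[str, Set[str]],
              failed: Set[str]) -> Set[str]:
    alive = set(nodes) - failed
    changed = True
    while changed:  # propagate the failure cascade to a fixed point
        changed = False
        for n in list(alive):
            lost_dependency = bool(depends_on.get(n, set()) - alive)
            isolated = n in connects and not (connects[n] & alive)
            if lost_dependency or isolated:
                alive.remove(n)
                changed = True
    return alive

nodes = {"a", "b", "c"}
connects = {"a": {"b", "c"}}  # a has two connectivity links...
depends_on = {"a": {"b"}}     # ...but depends on b alone
print(surviving(nodes, connects, depends_on, failed={"c"}))  # {'a', 'b'}
print(surviving(nodes, connects, depends_on, failed={"b"}))  # {'c'}
```

As the annotation suggests, if a had only one connectivity link the two outcomes would be symmetric, so the quoted asymmetry does presuppose multiple connectivity links.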

    1. Its footprints are distinctive

      The extremely short limbs make it impossible for this frog to hop, although it can walk (Boulenger 1907). ...

      This frog has extensive webbing on its feet, in contrast to other members of the genus Breviceps. Carruthers and Passmore (1978) conjecture that the foot webbing enables traction on loose sand, as the frog moves about on the surface of its sand dune habitat at night (based on its distinctive tracks).

      https://amphibiaweb.org/cgi/amphib_query?where-genus=Breviceps&where-species=macrops

    2. The small area of sand dunes often gets a lot of fog, which supplies moisture in an otherwise arid and dry region.

      Uncited sentence. Useful cite:

      Voucher specimens held in museum collections were examined, and demonstrate the northernmost locality in Lüderitz, Namibia, with all 11 localities in white sandy habitat where coastal fog exceeds 100 days per year. The most southerly record from active searches was just south of Kleinzee in South Africa. A new threat to this species is housing development in prime coastal sand dunes.

      Channing, A. and Wahlberg, K., 2011. Distribution and conservation status of the desert rain frog Breviceps macrops. African journal of herpetology, 60(2), pp.101-112.

    1. Contemporary analysis of historical data from the last 11 millennia[12] matches with the indigenous Saptarishi Calendar.[13] The length of the transitional periods between each Yuga is unclear, and can only be estimated based on historical data of past cataclysmic events. Using a 300 year (10% of the length of a particular yuga) period for transitions, Kali Yuga has either ended recently in the past 100 to 200 years, or is to end soon sometime in the next 100 years.

Most Hindus would say that the Kali Yuga ends thousands of years into our future. An earlier version of this page gave the conventional view:

      The Kali Yuga began approximately five thousand years ago, and it has a duration of 432,000 years, leaving us with 427,000 years until the end of the present age.

      https://en.wikipedia.org/w/index.php?title=Kali_Yuga&oldid=889598580#10,000_year_%22Golden_Age%22
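      For reference, the arithmetic behind that conventional figure is simply \(432{,}000 - 5{,}000 = 427{,}000\) years remaining.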

      This edit is based on an article on the Graham Hancock website, which would not normally be regarded as a reliable source on Wikipedia.

      This is about him: https://en.wikipedia.org/wiki/Graham_Hancock

      This is the article they give as a source. https://grahamhancock.com/dmisrab6/

    1. Microsoft Word key combination

      A strange quirk: in Word, the em dash can end up at different vertical positions (centered or slightly below center) depending on what precedes it.

    1. In Chile, the national telecom regulator ruled that this practice violated net neutrality laws and had to end by June 1, 2014.[3][4]

      But "Claro" offered in December 2018 data plans which only allowed access to Facebook, Instagram, Whatsapp and Snapchat.

    1. I know that Wikipedia can provide a quick overview on a topic; however, as a researcher/librarian, I would never suggest to students to add Wikipedia to a list of additional resources. Perhaps some of the referenced resources at the end of the entry might be a better choice.

      In general, I would sort the resources related to the commons into "historical/physical" and "contemporary/philosophical". It was confusing to read an article such as this, as well as some of the others that follow that address the physical environment, cow enclosures, land ownership, etc., mixed in with our discussion of the commons as a philosophical concept related to a more contemporary information society. Of course, I understood the connection from history to present day, and it is an interesting and important one; however, I think a little ordering and some subheadings would lead the reader down the right path more seamlessly.

    1. Jevons received public recognition for his work on The Coal Question (1865), in which he called attention to the gradual exhaustion of Britain's coal supplies and also put forth the view that increases in energy production efficiency leads to more, not less, consumption.[5]:7f, 161f This view is known today as the Jevons paradox, named after him. Due to this particular work, Jevons is regarded today as the first economist of some standing to develop an 'ecological' perspective on the economy.
    1. The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
    1. Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between exploration and exploitation over the data space representation. This strategy manages the compromise by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al.[9] propose a sequential algorithm named Active Thompson Sampling (ATS), which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point's label.

      • Expected model change: label those points that would most change the current model.

      • Expected error reduction: label those points that would most reduce the model's generalization error.

      • Exponentiated Gradient Exploration for Active Learning:[10] the author proposes a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by an optimal random exploration.

      • Membership Query Synthesis: the learner generates its own instance from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and query whether this appendage belongs to an animal or a human. This is particularly useful if the dataset is small.[11]

      • Pool-Based Sampling: instances are drawn from the entire data pool and assigned an informativeness score, a measurement of how well the learner "understands" the data. The system then selects the most informative instances and queries the teacher for the labels.

      • Stream-Based Selective Sampling: each unlabeled data point is examined one at a time, with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint.

      • Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be (see the sketch after this list).

      • Query by committee: a variety of models are trained on the current labeled data and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most.

      • Querying from diverse subspaces or partitions:[12] when the underlying model is a forest of trees, the leaf nodes might represent (overlapping) partitions of the original feature space. This offers the possibility of selecting instances from non-overlapping or minimally overlapping partitions for labeling.

      • Variance reduction: label those points that would minimize output variance, which is one of the components of error.

      • Conformal Predictors: this method predicts that a new data point will have a label similar to old data points in some specified way, and the degree of similarity within the old examples is used to estimate the confidence in the prediction.[13]
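      A minimal sketch of the uncertainty-sampling strategy above, assuming a scikit-learn-style classifier that exposes predict_proba(); the function name and the oracle step are illustrative, not from the article:

      ```python
      import numpy as np

      def least_confident_queries(model, X_pool, n_queries=10):
          """Indices of the pool points whose top-class probability is lowest."""
          proba = model.predict_proba(X_pool)        # shape (n_samples, n_classes)
          confidence = proba.max(axis=1)             # the model's certainty per point
          return np.argsort(confidence)[:n_queries]  # least confident first

      # Typical loop (sketch): ask the teacher/oracle to label these points,
      # add them to the labeled set, refit the model, and repeat.
      # idx = least_confident_queries(model, X_pool)
      # y_new = oracle_labels(X_pool[idx])   # oracle_labels is hypothetical
      ```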
    1. approximately 15,000 light years from Earth

      It is not known how far away it is. If it is in the Outer arm it is around 16,000 light years away (5 kpc) and if in the Perseus arm it is half that distance, 8,000 light years away. There is a supernova remnant that may be associated with it at a distance of around 800 parsecs or 2,600 light years away. The McGill survey estimates 2 kpc or about 6,500 light years. In short, there is considerable uncertainty about its distance.

      The line of sight intercepts the Perseus and Outer arms of the Galaxy, at distances of ~2.5 and ~5 kpc, respectively. In this paper, we assume the distance d = 5 kpc. In addition, there exists a supernova remnant (SNR) G160.9+2.6, ~80′ north of SGR 0501+4516 (Gaensler & Chatterjee 2008; Göğüş et al. 2010). The distance and age of the SNR were estimated as 800±400 pc and 4000–7000 years (Leahy & Tian 2007). Göğüş et al. (2010) proposed that SGR 0501+4516 could be associated with G160.9+2.6.

      Mong, Y.L. and Ng, C.Y., 2018. X-Ray Observations of Magnetar SGR 0501+4516 from Outburst to Quiescence. The Astrophysical Journal, 852(2), p.86.

      For the McGill distance see table 7 of:

      Olausen, S.A. and Kaspi, V.M., 2014. The McGill magnetar catalog. The Astrophysical Journal Supplement Series, 212(1), p.6.
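      For reference, the conversions above follow from \(1\ \mathrm{pc} \approx 3.26\) light years: \(5\ \mathrm{kpc} \approx 16{,}300\) ly, \(2\ \mathrm{kpc} \approx 6{,}500\) ly, and \(800\ \mathrm{pc} \approx 2{,}600\) ly.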

    1. possibly until 1550 BC

      There doesn't seem to be any way to get 1550 BC from the cite. More like 2200 BC: it works out at about 2240 BC ± 40 years.

      We report the youngest radiocarbon determination so far for an identified species of Antillean sloth, 4190 ± 40 yr BP

      In radiocarbon dating, "BP" means years before AD 1950, so \(1950 - 4190 = -2240\), i.e. about 2240 BC.

      Another sloth bone, the youngest mentioned, is still not 1550 BC; it works out at about 1765 BC ± 50 yr.

      Although Woods [1989] reported a “whole bone” date of 3715 ± 50 yr BP for unspecified sloth remains recovered at Trou Wòch Sa Wo in southern Haiti, five different sloth species have been recovered from this cave [MacPhee et al., 2000] and there is thus no way of relating this date to a single taxon as we have done here. In any case, the accuracy of this age estimate should be confirmed, minimally by AMS dating of individual, systematically identified elements.

      Since \(1950 - 3715 = -1765\), there is no obvious way to get 1550 from this either.

      Cite is

      MacPhee, R.D., Iturralde-Vinent, M.A. and Vázquez, O.J., 2007. Prehistoric sloth extinctions in Cuba: Implications of a new “last” appearance date. Caribbean Journal of Science, 43(1), pp.94-99.

    1. Throughout the past two decades, he has been conducting research in the fields of psychology of learning and hybrid neural network (in particular, applying these models to research on human skill acquisition). Specifically, he has worked on the integrated effect of "top-down" and "bottom-up" learning in human skill acquisition,[1][2] in a variety of task domains, for example, navigation tasks,[3] reasoning tasks, and implicit learning tasks.[4] This inclusion of bottom-up learning processes has been revolutionary in cognitive psychology, because most previous models of learning had focused exclusively on top-down learning (whereas human learning clearly happens in both directions). This research has culminated with the development of an integrated cognitive architecture that can be used to provide a qualitative and quantitative explanation of empirical psychological learning data. The model, CLARION, is a hybrid neural network that can be used to simulate problem solving and social interactions as well. More importantly, CLARION was the first psychological model that proposed an explanation for the "bottom-up learning" mechanisms present in human skill acquisition: His numerous papers on the subject have brought attention to this neglected area in cognitive psychology.
    1. Most common web browsers can retrieve files hosted on FTP servers,

      This seems to have been first implemented by Tim Berners-Lee, when he was trying to popularize his World Wide Web and Web browser (according to himself, in his "Weaving the Web" 1999 book).

    1. 2017

      The U.S. Energy Information Administration says that it rose by 2.8% in 2018 but projects that it will decrease in 2019 and 2020. The increase in 2018 was due to a 10% increase in emissions from natural gas; preliminary data puts it 0.4% below the record set in 2017. The high energy consumption in 2018 was largely due to air conditioning demand in the warm weather, and the winter months were also colder. 2019 and 2020 are expected to be milder, which is what leads to the reduced forecast for them. Estimates of industrial production growth and GDP growth also factor into their prediction - a slowdown in GDP but faster industrial growth in 2019, and a slowdown in industrial growth in 2020; industrial production tends to be more energy-intensive than the rest of the economy.

      U.S. energy-related CO2 emissions increased in 2018 but will likely fall in 2019 and 2020

    1. The seasonal low of 1,078.96 feet (328.87 m) in 2017 is close to that experienced in 2014, safely above the drought trigger for now.[27] However, that level is still 36 feet (11 m) below the seasonal low experienced in 2012 and the lake is projected to begin falling again in 2018.[28]

      Seasonal low of 1,076 in 2018. However, there was a big snowmelt in 2019 feeding Lake Powell. As of June 17 it is around 5 feet above its 2017 level. lake level chart

      Colorado had 134% of its normal snowfall in winter 2018, Utah 138%, Wyoming 116%.

      As a result Lake Powell is expected to rise 50 feet in 2019, a gain of 12 million acre-feet, compared with only 4.6 million last year. It expects to release 9 million acre feet from Powell to Mead for the fifth consecutive year.

      Elephant Butte, a reservoir for the Rio Grande in New Mexico, will also be replenished, from 10% to around 30% of capacity.

      It is too early to say if this is the end of the decade-long drought phase. However, it is enough that Arizona, the state with the lowest-priority rights to the water from Lake Mead, is no longer expected to have to cut its share in 2020. That shortage may now be put off until after 2021.

      Snowmelt fills Colorado River and other waterways in U.S. Southwest, easing drought fears, Denver Post, June 13, 2019

    2. Drought and water usage issues

      Out of date - doesn't mention the new drought contingency plan. All seven states signed a drought contingency plan on May 20, 2019, which lasts through 2026 and involves voluntary reductions in water taken from Lake Mead if levels get low.

      One sticking point was the Salton Sea in California, which formed as a result of a failed canal project in the early twentieth century and is now an important stop for migrating birds. The Imperial Irrigation District wanted $200 million in extra funding to help preserve this sea before it would agree to a reduction. But the Metropolitan Water District was able to pledge most of California's voluntary water cuts, which saved the plan.

      They next need to work on a longer term contingency plan for the next 50 years.

      Interior and states sign historic drought agreements to protect Colorado River - press release by the US government under "Reclamation, Managing Water in the West"

      Under the drought plan, states voluntarily would give up water to keep Lake Mead on the Arizona-Nevada border and Lake Powell upstream on the Arizona-Utah border from crashing. Mexico also has agreed to cuts

      The drought contingency plan takes the states through 2026, when existing guidelines expire. The states already are preparing for negotiations that will begin next year for new guidelines.

      The Imperial Irrigation District was written out of California's plan when another powerful water agency, the Metropolitan Water District, pledged to contribute most of the state's voluntary water cuts.

      Imperial had said it would not commit to the drought plan unless it secured $200 million in federal funding to help restore a massive, briny lake southeast of Los Angeles known as the Salton Sea.

      Felicia Fonseca, US official declares drought plan done for Colorado River, Phys.org, March 20, 2019

      For more background

      Despite signs of interstate cooperation, the decline of Lake Mead isn’t near being solved, Michael Hiltzik Feb 08, 2019, Los Angeles Times.

    1. Setbacks

      Doesn't mention the first three rocket failures, though they are covered in the Falcon 1 article.

      "And the reason that I ended up being the chief engineer or chief designer, was not because I want to, it's because I couldn't hire anyone. Nobody good would join. So I ended up being that by default. And I messed up the first three launches. The first three launches failed. Fortunately the fourth launch which was – that was the last money that we had for Falcon 1 – the fourth launch worked, or that would have been it for SpaceX."

      Elon Musk (28 September 2017), Making Life Multiplanetary | 2017 International Astronautical Congress

    1. if a small region of the universe by chance reached a more stable vacuum, this 'bubble' would spread.

      [This section needs cites; I'll add them when I have time.] It should explain that there are two ways a false vacuum can collapse. It can be pushed energetically over the barrier; this was only possible in the early universe, and conditions for it do not exist today. Or it can happen through quantum tunneling. An example to explain quantum tunneling: in principle a ping pong ball inside a vault could spontaneously find itself outside of it just because of quantum position uncertainty, without having to move through the walls. Given infinite time and enough vaults and ping pong balls this eventually has to happen, but it is not a realistic possibility on normal timescales.

      On the molecular scale, quantum tunneling events do happen, and may help explain how we smell and how birds sense the Earth's weak magnetic field well enough to navigate.

      Quantum tunneling of the false vacuum is more like the ping pong ball analogy: extremely unlikely. The latest estimates, based on the properties of the Higgs boson and the top quark, put the odds at googols to one against it having happened at any time since the Big Bang.

      However, in the extreme conditions of the early universe, when the entire observable universe was compressed into a space far smaller than a proton (the nucleus of a hydrogen atom) and at extreme temperatures, it should have happened, through the first method of being pushed over the barrier rather than tunneling through it.

      John Ellis has suggested that this likely means we need new physics to explain why the universe survived that early stage. The alternative he mentions is that we are surrounded by true vacuum in all directions and happen to be in one of the few exceedingly unlikely patches of false vacuum in an infinite universe. Given infinite space-time, even the most improbable things happen somewhere - but he doesn't think this is a likely hypothesis (see 47:40 into this video).

      New physics that could explain this includes

      • Supersymmetry
      • An extra Higgs boson
      • Dark matter that interacts with the Higgs field.

      Another suggestion that's been made is that when a Higgs field collapses this could create a new universe with its own space and time rather than expand into ours.

      There is a possibility that the Higgs field was stable in the early universe due to an interaction with gravity, which would explain why the universe has survived to date cite. That still leaves the question of why it is so close to the boundary between stable and unstable, when it could be completely stable.

    2. Existential threat

      This section is way out of date; most of it seems to have been written before the discovery of the Higgs boson in 2012 or shortly after it. Enough is now known for a rigorous calculation (assuming, of course, that there is no new physics to be found).

      The authors of the peer-reviewed paper update their abstract on arxiv.org from time to time. The originally published value was a future lifetime of the universe of \(10^{139}\) years, with 95% confidence of more than \(10^{58}\) years.

      As of version 4 of their paper, revised 2nd May 2018, it is now \(10^{161}\) years, with 95% confidence of more than \(10^{65}\) years.

      The odds that we have encountered a vacuum collapse already, or that one is on its way (in total over the lifetime of the universe to date), used to be between \(10^{107}\) to 1 against and \(10^{718}\) to 1 against.

      They now say it is between \(10^{367}\) to 1 against and \(10^{1124}\) to 1 against.

      Andreassen, A., Frost, W. and Schwartz, M.D., 2018. Scale-invariant instantons and the complete lifetime of the standard model. Physical Review D, 97(5), p.056006.

    3. Joseph Lykken has said that study of the exact properties of the Higgs boson could shed light on the possibility of vacuum collapse

      This has now been done in a 2018 paper published in Physical Review D cite

      The authors of the paper update their abstract on arxiv.org from time to time. The originally published value was a future lifetime of the universe of \(10^{139}\) years, with 95% confidence of more than \(10^{58}\) years.

      As of version 4 of their paper, revised 2nd May 2018, it is now \(10^{161}\) years, with 95% confidence of more than \(10^{65}\) years.

      The odds that we have encountered a vacuum collapse already, or that one is on its way (in total over the lifetime of the universe to date), used to be between \(10^{107}\) to 1 against and \(10^{718}\) to 1 against.

      They now say it is between \(10^{367}\) to 1 against and \(10^{1124}\) to 1 against.

      Andreassen, A., Frost, W. and Schwartz, M.D., 2018. Scale-invariant instantons and the complete lifetime of the standard model. Physical Review D, 97(5), p.056006.

    4. They argue that due to observer selection effects, we might underestimate the chances of being destroyed by vacuum decay because any information about this event would reach us only at the instant when we too were destroyed.

      Their argument has been completely misunderstood. They are discussing any natural catastrophic events that could destroy Earth. This is a paper from 2005 well before the discovery of the Higgs.

      They mention three possibilities: that a cosmic radiation collision event triggers collapse of the Earth into a black hole, conversion into strange matter, or false vacuum collapse of the entire universe.

      The observer selection effect here is just that we exist; it applies to all three scenarios and is not specific to false vacuum collapse. Indeed, it would apply to any other scenario that could destroy Earth or change it in such a way as to make it impossible for humans to evolve. cite

      Given that life on Earth has survived for nearly 4 billion years (4 Gyr), it might be assumed that natural catastrophic events are extremely rare. Unfortunately, this argument is flawed because it fails to take into account an observation-selection effect, whereby observers are precluded from noting anything other than that their own species has survived up to the point when the observation is made.

      If it takes at least 4.6 Gyr for intelligent observers to arise, then the mere observation that Earth has survived for this duration cannot even give us grounds for rejecting with 99% confidence the hypothesis that the average cosmic neighbourhood is typically sterilized, say, every 1,000 years. The observation-selection effect guarantees that we would find ourselves in a lucky situation, no matter how frequent the sterilization events

    1. The researchers estimated from their observations that there are nearly two Jupiter-mass rogue planets for every star in the Milky Way

      A later 2017 study cast doubt on this result. It used a larger population of microlensing events and found at most one Jupiter-mass rogue planet for every four stars in the Milky Way.

      Mróz, P., Udalski, A., Skowron, J., Poleski, R., Kozłowski, S., Szymański, M.K., Soszyński, I., Wyrzykowski, Ł., Pietrukowicz, P., Ulaczyk, K. and Skowron, D., 2017. No large population of unbound or wide-orbit Jupiter-mass planets. Nature, 548(7666), p.183.

    1. When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed rather than dew.

      Though popular accounts of meteorology sometimes present them as the same thing, the dew point and the frost point differ. The dew point is the temperature at which the air reaches 100% humidity with respect to a liquid water surface; the frost point is the (higher) temperature at which it reaches 100% humidity with respect to an ice surface. The distinction normally doesn't matter much, but it is important for processes in clouds: growth of icy particles is favoured over water droplets when both are possible, because the frost point is at a higher temperature than the dew point.

      I am summarizing what the meteorologist Jeff Haby explains here

      "The dew point is the temperature at which the air is saturated with respect to water vapor over a liquid surface. When the temperature is equal to the dewpoint then the relative humidity is 100%. The common ways for the relative humidity to be 100% is to 1) cool the air to the dewpoint, 2) evaporate moisture into the air until the air is saturated, 3) lift the air until it adiabatically cools to the dew point. "The frost point is the temperature at which the air is saturated with respect to water vapor over an ice surface. It is more difficult more water molecules to escape a frozen surface as compared to a liquid surface since an ice has a stronger bonding between neighboring water molecules. Because of this, the frost point is greater in temperature than the dew point. This fact is important to precipitation growth in clouds. Since the vapor pressure is less over an ice surface as compared to a supercooled liquid surface at the same temperature, when the relative humidity is 100% with respect to water vapor the relative humidity over the ice surface will be greater than 100%. Thus, precipitation growth is favored on the ice particles."

      Frost point and dew point
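      A small numeric illustration of the water/ice difference Haby describes. The Magnus-type coefficients below are one common (Sonntag-style) fit, used here purely as an assumption for illustration; exact constants vary between references:

      ```python
      import math

      def e_sat_water(t_c):
          """Saturation vapour pressure (hPa) over (supercooled) liquid water."""
          return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

      def e_sat_ice(t_c):
          """Saturation vapour pressure (hPa) over an ice surface."""
          return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

      for t in (-5, -10, -20):
          print(t, round(e_sat_water(t), 3), round(e_sat_ice(t), 3))
      # At every sub-zero temperature e_sat_ice < e_sat_water, so air that is
      # saturated with respect to water is supersaturated with respect to ice,
      # which is why ice particles grow at the expense of droplets.
      ```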

    1. Omega Point

      This article is very poor. Teilhard de Chardin's theory is an attempt by a devout Catholic scientist to reconcile religious ideas of the love of God and of teleology - that our life and world have a purpose - with scientific understanding. To leave that aspect out is to miss the entire point of his work. The theory is very influential in Christian theology generally, and especially in Catholic theology, not just in the twentieth century but through to the present.

      This article attempts to treat it as a purely scientific theory, stripping away all religious elements. It cites mainly critics who ridicule the idea that religion is relevant to science and the idea that our universe may have any teleology or purpose. The same problems would arise in an article about Christian ideas of the Resurrection that ignored the theological context. This approach is not used in other articles on Christian theology in Wikipedia.

      Rather than annotate particular points in this article I think it is best to just direct the reader to the entry on him in the French Wikipedia, which is much better, written as theology, as well as some summaries of his work by other authors.


      Point Oméga

      The Omega Point is a dynamic concept created by Pierre Teilhard de Chardin, who gave it the name of the last letter of the Greek alphabet: omega.

      For Teilhard, the Omega Point is the ultimate point of the development of complexity and consciousness towards which the universe evolves (1). According to his theory, set out in The Future of Man and The Human Phenomenon, the universe is constantly evolving towards ever higher degrees of complexity and consciousness (1), the Omega Point being both the culmination and the cause of this evolution (1). In other words, the Omega Point exists in a supremely complex and supremely conscious way, transcending the universe in the making.

      For Teilhard the Omega Point evokes the Christian Logos, that is, Christ, in that it attracts all things to himself and is, according to the Nicene Creed, "God born of God, Light born of Light, true God born of true God", with the indication: "and through him all things were made".

      Subsequently this concept was taken up by other authors, such as John G. Bennett (1965) or Frank Tipler (1994).

      The Omega point has five attributes, which Teilhard details in The Human Phenomenon .

      The five attributes

      In The Human Phenomenon (Le Phénomène humain, 1955), Teilhard describes the five attributes of the Omega Point:

      • It must already exist - only in this way can the evolution of the universe towards higher levels of consciousness be explained.

      • It must be personal - a person, not an abstract idea. The growing complexity of matter has not only led to higher forms of consciousness, but to greater personalization, of which human beings are the highest form; they are fully individualized, free centres of activity. It is in this sense that man is said to be made in the image of God, who is the highest form of personality. Teilhard expressly maintains that at the Omega Point, when the universe becomes one through unification, persons will not be eliminated but super-personalized; personality will be infinitely enriched. As the Omega Point unites creation, the universe grows in complexity and consciousness; and because God, who draws the universe to himself, is a person, the evolution of the universe culminates in personality.

      • It must be transcendent - the Omega Point is not the result of complexity and consciousness; it exists before the evolution of the universe, because it is the cause of the universe's evolution towards greater complexity, consciousness and personality. This essentially means that the Omega Point lies outside the framework within which the universe evolves, for it is by its attraction that the universe tends towards it.

      • It must be autonomous - free from the limitations of space and time.

      • It must be irreversible - that is, it must be capable of being reached.

      [1] Dominique de Gramont, Le Christianisme est un transhumanisme, Paris, Les éditions du cerf, septembre 2017, 365 p. (ISBN 978-2-204-11217-8)

      Oxford Scholarship Online

      This is how the idea is described by Linda Sargent Wood as summarized by Oxford Scholarship Online

      Merging Catholicism and science, Teilhard asserted that evolution was God's ongoing creative act, that matter and spirit were one, and that all was converging into one complete, harmonious whole. Though controversial, his organismic ideas offered an alternative to reductionistic, dualistic, mechanistic evolutionary views. They satisfied many who were looking for ways to reconnect with nature and one another; who wanted to revitalize and make personal the spiritual part of life; and who hoped to tame, humanize, and spiritualize science. In the 1960s many Americans found his book The Phenomenon of Man and other mystical writings appealing. He attracted Catholics seeking to reconcile religion and evolution, and he proved to be one of the most inspirational voices for the human potential movement and New Age religious worshipers. Outlining the contours of Teilhard's holistic synthesis in this era of high scientific achievement helps explain how some Americans maintained a strong religious allegiance.

      Wood, L.S., 2012. A More Perfect Union: Holistic Worldviews and the Transformation of American Culture after World War II. Oxford University Press.

      This is what Pope Benedict says about his idea of the Omega Point:

      “Only where someone values love more highly than life, that is, only where someone is ready to put life second to love, for the sake of love, can love be stronger and more than death. If it is to be more than death, it must first be more than mere life. But if it could be this, not just in intention but in reality, then that would mean at the same time that the power of love had risen superior to the power of the merely biological and taken it into its service. To use Teilhard de Chardin’s terminology, where that took place, the decisive complexity or “complexification” would have occurred; bios, too, would be encompassed by and incorporated in the power of love. It would cross the boundary—death—and create unity where death divides. If the power of love for another were so strong somewhere that it could keep alive not just his memory, the shadow of his “I”, but that person himself, then a new stage in life would have been reached. This would mean that the realm of biological evolutions and mutations had been left behind and the leap made to a quite different plane, on which love was no longer subject to bios but made use of it. Such a final stage of “mutation” and “evolution” would itself no longer be a biological stage; it would signify the end of the sovereignty of bios, which is at the same time the sovereignty of death; it would open up the realm that the Greek Bible calls zoe, that is, definitive life, which has left behind the rule of death. The last stage of evolution needed by the world to reach its goal would then no longer be achieved within the realm of biology but by the spirit, by freedom, by love. It would no longer be evolution but decision and gift in one.”

      Orthodoxy of Teilhard de Chardin: (Part V) (Resurrection, Evolution and the Omega Point)

      Summary by Khan Academy

      His views have also been seen as relevant to modern transhumanists, who want to apply technology to overcome our human limitations. Some of them think that his ideas foreshadowed this.

      A movement known as transhumanism wants to apply technology to overcome human limitations. Followers believe that computers and humans may combine to form a “super brain,” or that computers may eventually exceed human brain capacity. Some transhumanists refer to that future time as the “Singularity.” In his 2008 article “Teilhard de Chardin and Transhumanism,” Eric Steinhart wrote that:

      Teilhard de Chardin was among the first to give serious consideration to the future of human evolution.... [He] is almost certainly the first to describe the acceleration of technological progress to a singularity in which human intelligence will become super intelligence.

      Teilhard challenged theologians to view their ideas in the perspective of evolution and challenged scientists to examine the ethical and spiritual implications of their knowledge. He fully affirmed cosmic and biological evolution and saw them as part of an even more encompassing spiritual evolution toward the goal of ultrahumans and complete divinity. This hypothesis still resonates for some as a way to place scientific fact within an overarching spiritual view of the cosmos, though most scientists today reject the notion that the Universe is moving toward some clear goal.

      Pierre Teilhard de Chardin Paleontologist, Mystic and Jesuit Priest - Khan Academy

      Book review: The Phenomenon of Man by Pierre Teilhard de Chardin

      By Tom Butler-Bowdon

      In a nutshell: By appreciating and expressing your uniqueness, you literally enable the evolution of the world.

      For Teilhard humankind was not the centre of the world but the ‘axis and leading shoot of evolution’. It is not that we will lift ourselves above nature but, in our intellectual and spiritual quests, dramatically raise its complexity and intelligence. The more complex and intelligent we become, the less of a hold the physical universe has on us, he believed.

      Just as space, the stars and galaxies expand ever outwards, the universe is just as naturally undergoing ‘involution’ from the simple to the increasingly complex; the human psyche also develops according to this law. ‘Hominisation’ is what Teilhard called the process of humanity becoming more human, or the fulfilment of its potential. ... Teilhard said as humanity became more self-reflective, able to appreciate its place in space and time, its evolution would start to move by great leaps instead of a slow climb. In place of the glacial pace of physical natural selection, there would be a supercharged refinement of ideas that would eventually free us of physicality altogether. We would move irresistibly toward a new type of existence, at which all potential would be reached. Teilhard called this the ‘omega point’.

      Book review: The Phenomenon of Man by Pierre Teilhard de Chardin

    1. Progress

      Needs a Criticism section.

      The main criticism is that it is not helping us understand how the human brain itself works. From the article by Frégnac and Laurent in Nature in 2014:

      Contrary to public assumptions that the HBP would generate knowledge about how the brain works, the project is turning into an expensive database-management project with a hunt for new computing architectures. (1) The problem is that it is not founded on knowledge of how neurons are connected in the real brain or on experimental data. Most importantly, there are no formulated biological hypotheses for these simulations to test. Instead it is an attempt at simulating, with many hardware neurons, something the researchers hope will resemble the way a human brain works, without any experimental data to guide the experimentation. (1)

      The revised plan advances a concept in which in silico experimentation becomes a “foundational methodology for understanding the brain” (1)

      Shortly after the project started, many European neuroscientists signed an open letter raising the following issues (3):

      a) The project cannot provide understanding of the brain without corrective loops between hypotheses and experimental facts, and it is not guided by any precise hypotheses to be tested and checked.

      b) The model is overoptimistic and wrong. The project should either abandon neurological research and focus only on technological advances, or be split into two projects for the two areas.

      c) It is too expensive, siphoning away important funds from other fundamental research.

      d) It is excessively big, and its coordination mechanisms are unclear.

      This collective wrote “Open message to the European Commission concerning the Human Brain Project” to the European Commission on July 7, 2014.

      A mediation report was published in 2015. This upheld most of the criticisms.

      The mediation committee summarized the disagreement as (2):

      The goal of reconstructing the mouse and human brain in silico and the associated comprehensive bottom-up approach is viewed by one part of the scientific community as being impossible in principle or at least infeasible within the next ten years, while another part sees value not only in making such simulation tools available but also in their development, in organizing data, tools and experts

      They recommended that the goals should be less ambitious, saying:

      Issue: Public announcements by the HBP leadership and by the EC overstated objectives and the possible achievements of the HBP. Unrealistic expectations were raised, such as tools for predictive simulation of the human brain to enable understanding of brain function or to support diagnosis and therapy of neurodegenerative diseases within the course of the project. This resulted in a loss of scientific credibility of the HBP.

      Recommendation: The HBP and the EC should clearly and openly communicate the project’s sharpened mission and objectives. Furthermore, the HBP should systematically create and use opportunities to enter a constructive scientific discourse with the science community, with science policy makers and with the interested public. Ultimately the reputation of the HBP in the science community will rest on the publication of convincing scientific results and the generation of widely used IT platforms. ...

      They recommend that it be split into three or four sub-projects led by PIs with a strong neuroscience background:

      Issue: The absence of systems and cognitive neuroscience subverts the multi-scale and multi-perspective ambitions of the HBP to integrate and validate various approaches to a unifying modelling and simulation platform. It also impairs the validation of other IT platforms developed in the HBP regarding the value added to future neuroscience research and clinical practice.

      Recommendation: The SPs (and the constituent WPs) suggested in the FPA should be consolidated and integrated with a set of new cross-cutting horizontal tasks to form a matrix-type project structure. These cross-cutting activities should be organized in at least 3-4 WPs to address challenging problems of systems and cognitive neuroscience, led by PIs with a strong scientific background in the respective areas. These WPs should be aggregated into a new cross-cutting subproject “Cognitive and Systems Neuroscience: CSN”.

      [1] Frégnac, Y. and Laurent, G., 2014. Neuroscience: Where is the brain in the Human Brain Project?. Nature News, 513(7516), p.27.

      [2] Human Brain Project Mediation Report, Mediator Prof. Dr. Wolfgang Marquardt, March 2015

      [3] This is a paraphrase of the relevant section of: Why do we foster and grant wrong innovative scientific methods? The neuroscientific challenge, Jordi Vallverdu, Autonomous University of Barcelona.

    2. Blue Brain Project

      Page is five years out of date and doesn't include a criticism section. French Wikipedia summarizes the criticisms like this:

      This project, along with the resulting Human Brain Project, is heavily criticized for a variety of reasons, including the scientific strategy adopted and the high cost involved. Launched in 2013, the HBP faced a number of criticisms, including the lack of feasibility evidence from Blue Brain [5]. However, in October 2015, the Blue Brain Project team published in Cell an article describing a simulation of a rat brain, covering 30,000 neurons and 40 million synapses - which did not stop criticism of the overall unrealism of the HBP [5]. Blue Brain (French Wikipedia)

      [5] Kai Kupferschmidt, "Virtual rat brain fails to impress its critics", Science, October 16, 2015, Vol. 350, no. 6258, pp. 263-264; DOI: 10.1126/science.350.6258.263

      See also Theil, S., 2015. Why the Human Brain Project Went Wrong—and How to Fix It. Scientific American, 313(4), pp.36-42.

      Frégnac, Y. and Laurent, G., 2014. Neuroscience: Where is the brain in the Human Brain Project?. Nature News, 513(7516), p.27.

    3. by reverse-engineering mammalian brain circuitry

      One of the main criticisms of this project is that neuroscientists do not have a detailed "map of connections between neurons within and across brain areas that could guide simulations"

      From the beginning, neuroscientists pointed out that large-scale simulations make little sense unless constrained by data, and used to test precise hypotheses. In fact, we lack, among other resources, a detailed 'connectome', a map of connections between neurons within and across brain areas that could guide simulations. There is no unified format for building functional databases or for annotating data sets that encompass data collected under varying conditions. Most importantly, there are no formulated biological hypotheses for these simulations to test

      Frégnac, Y. and Laurent, G., 2014. Neuroscience: Where is the brain in the Human Brain Project?. Nature News, 513(7516), p.27.

    1. commonly recognized conditions such as delusional infestation"

      Note for comparison, Lede for French Wikipedia: Morgellons

      "Morgellons" or "Morgellon's disease" is a controversial medical condition reported in the United States in 2002. It is characterized by dermatological lesions where the patient perceives inert or organic fibers, included or protruding from the skin.

      It is most often considered as a form of parasitic delirium or Ekbom syndrome , a factitious disorder , or a collective syndrome of psychogenic origin. Some authors, however, indicate that Morgellon's disease is a true somatic disease of infectious origin, including Lyme disease .

    2. some people have linked Morgellons "to another illness viewed skeptically by most doctors, chronic Lyme disease"

      The Mayo Clinic also refers to this research, saying cite:

      The research on Morgellons by multiple groups over decades has yielded conflicting results. Multiple studies report a possible link between Morgellons and infection with Borrelia spirochetes. These results contradict [the CDC study]. One of the main proponents of this hypothesis is Marianne J. Middelveen, MDes, a veterinary microbiologist from Alberta, Canada. She made a connection with a disease of cattle called bovine digital dermatitis, which has similar symptoms - and in that case it is well established that microfilaments of keratin and collagen form beneath the skin.

      She analysed filaments that form beneath the skin of sufferers and found that these too are made of keratin and collagen. She also found spirochetes, which are usually associated with Lyme disease in humans.

      The main paper is Middelveen, M.J., Bandoski, C., Burke, J., Sapi, E., Filush, K.R., Wang, Y., Franco, A., Mayne, P.J. and Stricker, R.B., 2015. Exploring the association between Morgellons disease and Lyme disease: identification of Borrelia burgdorferi in Morgellons disease patients. BMC dermatology, 15(1), p.1.

      She, along with a dozen or so other researchers, publishes one or two papers a year on this topic.

      For more background and cites, see also Mystery Of Morgellons - Disease Or Delusion - Scientific Hypothesis Of Connection With Lyme Disease (originated as a Wikipedia article but, on rejection from Wikipedia, was rewritten in a less encyclopedic tone as a blog post)

      Here is Marianne Middelveen talking about her research https://youtu.be/IaxdRvesVfM

      French Wikipedia summarizes her research as:

      A new emerging infection?

      As of 2011, some scientists have been trying to demonstrate the reality of the disease by defining it as a filamentous dermatitis linked to Lyme disease, publishing one to two articles per year.

      The main author of these publications is Marianne J. Middelveen, a veterinarian, who draws a parallel between bovine digital dermatitis (Mortellaro disease) in animals and Morgellons in humans [13]. The bovine disease is a skin infection associated with various pathogens, including spirochaetes and treponemes. In animals the disease is contagious and can present filiform papules.

      Middelveen assumes that Morgellons is a human equivalent of bovine digital dermatitis. She regularly publishes work showing an association between Morgellons and Lyme disease [14][15].

      FOOTNOTES

      1. Marianne J. Middelveen and Raphael B. Stricker, "Filament formation associated with spirochetal infection: a comparative approach to Morgellons disease," Clinical, Cosmetic and Investigational Dermatology, Vol. 4, 2011, pp. 167-177 (ISSN 1178-7015, PMID 22253541, PMCID PMC3257881, DOI 10.2147/CCID.S26183)

      2. Marianne J. Middelveen, Cheryl Bandoski, Jennie Burke and Eva Sapi, "Exploring the association between Morgellons disease and Lyme disease: identification of Borrelia burgdorferi in Morgellons disease patients," BMC Dermatology, Vol. 15, no. 1, February 12, 2015, p. 1 (ISSN 1471-5945, PMID 25879673, PMCID PMC4328066, DOI 10.1186/s12895-015-0023-0); summary in English: "morgellons and lyme", imedecin.com

      3. Jyotsna S. Shah and Raphael B. Stricker, "Detection of tick-borne infection in Morgellons disease patients by serological and molecular technologies", Clinical, Cosmetic and Investigational Dermatology, November 9, 2018

    3. In 2008, The Washington Post reported that Internet discussions about Morgellons include many conspiracy theories about the cause, including biological warfare, nanotechnology, chemtrails and extraterrestrial life.

      The Washington Post article predates the CDC study. It is a summary by a journalist of an internet search of Morgellons discussion blogs.

      Other points in the Washington Post article not mentioned here:

      Robert Bransfield is a New Jersey psychiatrist who studies the connection between infection and mental illness. ... "This isn't delusional," Bransfield says. "Delusions are quite variable. So, one person might have a delusion that the FBI is sending out messages to his dentures. Someone else has a delusion that their next-door neighbor is stealing their mail. But the people who have Morgellons all describe it in the same way. It doesn't have the variability you would see in delusions." ... And what of mass hysteria? Could Morgellons be, in a very real sense, nothing more than an Internet virus that has taken hold in susceptible minds? "I do see suggestion with the Internet, but it's hard to explain it on that alone," Bransfield says. "You can see the fibers. The fibers can't be mass hysteria. You see people describe this who don't have a computer," he says. "It's puzzling. It's hard to make sense out of it. But it's there. "

      It also covers the very early stages of the research into a possible connection with Borrelia burgdorferi

      At about the same time, Leitao, who was trained as a biologist and worked as an electron microscopist; along with Ginger Savely, a nurse practitioner; and Raphael Stricker, a hematologist, published a paper in the American Journal of Clinical Dermatology reporting that 79 out of 80 Morgellons patients they studied also were infected with Borrelia burgdorferi, the tick-borne bacteria that cause Lyme disease.

    4. Morgellons Research Foundation

      As the article says, the Morgellons Research Foundation was the primary patient advocacy group in the 2000s.

      However, it does not mention its successor, the Charles Holman Morgellons Disease Foundation, a 501(c)(3) nonprofit organization committed to "advocacy and philanthropy in the battle against Morgellons Disease"

      This organization holds an annual three-day conference on Morgellons for researchers to discuss their findings. Its main subject of study is a possible connection with chronic Lyme disease.

      There are many peer-reviewed papers on the topic. It is minority-view science, but not fringe.

      All attempts to add a mention of this research to Wikipedia, either on this article or as a separate article (even one not linked from it), are removed. Here are some of the cites.

      Middelveen, Marianne J; Burugu, Divya; Poruri, Akhila; Burke, Jennie; Mayne, Peter J; Sapi, Eva; Kahn, Douglas G; Stricker, Raphael B (2013). "Association of spirochetal infection with Morgellons disease". F1000Research. doi:10.12688/f1000research.2-25.v1. ISSN 2046-1402.

      Middelveen, Marianne J; Bandoski, Cheryl; Burke, Jennie; Sapi, Eva; Filush, Katherine R; Wang, Yean; Franco, Agustin; Mayne, Peter J; Stricker, Raphael B (2015). "Exploring the association between Morgellons disease and Lyme disease: identification of Borrelia burgdorferi in Morgellons disease patients". BMC Dermatology 15 (1). doi:10.1186/s12895-015-0023-0. ISSN 1471-5945.

      Marianne J Middelveen, Raphael B Stricker, Filament formation associated with spirochetal infection: a comparative approach to Morgellons disease, in Clinical, Cosmetic and Investigational Dermatology 2011

      Middelveen, Marianne J.; Burke, Jennie; Sapi, Eva; Bandoski, Cheryl; Filush, Katherine R.; Wang, Yean; Franco, Agustin; Timmaraju, Arun; Schlinger, Hilary A.; Mayne, Peter J. Culture and identification of Borrelia spirochetes in human vaginal and seminal secretions. F1000Research.

      Middelveen, Marianne J.; Rasmussen, Elizabeth H.; Kahn, Douglas G.; Stricker, Raphael B. Morgellons Disease: A Chemical and Light Microscopic Study. Journal of Clinical & Experimental Dermatology Research.

      Shah, Jyotsna S., PhD. "Morgellons Disease – Chronic Form Of Borrelia Infection?"

      Middelveen, M.J. and Stricker, R.B. Morgellons disease: a filamentous borrelial dermatitis. International Journal of General Medicine, Volume 9, DovePress.

    5. Morgellons is poorly understood but the general medical consensus is that it is a form of delusional parasitosis in which individuals have some form of actual skin condition that they believe contains some kind of fibers.

      The cites for this sentence all precede the big 2012 CDC study. Although most say it is a form of delusional parasitosis, one of them says: "The cause, transmission, and treatment are unknown."

      Whether Morgellons disease is a delusional disorder or even a disease has been a mystery for more than 300 years. Symptoms of Morgellons include crawling and stinging sensations, feeling of “bugs” and/or fiber-like material beneath the skin, disabling fatigue, and memory loss. The cause, transmission, and treatment are unknown.

      Simpson, L; Baier, M (August 2009). "Disorder or delusion? Living with Morgellons disease". Journal of Psychosocial Nursing and Mental Health Services. 47 (8): 36–41. doi:10.3928/02793695-20090706-03. PMID 19681520.

    6. Morgellons is poorly characterized but the general medical consensus is that it is a form of delusional parasitosis; the sores are the result of compulsive scratching, and the fibers, when analysed, turn out to originate from textiles.

      The CDC just said they were not able to conclude whether it represents a new condition or wider recognition of delusional parasitosis, and called it an "unexplained dermopathy".

      We were not able to conclude based on this study whether this unexplained dermopathy represents a new condition, as has been proposed by those who use the term Morgellons, or wider recognition of an existing condition such as delusional infestation, with which it shares a number of clinical and epidemiologic features.

      Pearson, M.L., Selby, J.V., Katz, K.A., Cantrell, V., Braden, C.R., Parise, M.E., Paddock, C.D., Lewin-Smith, M.R., Kalasinsky, V.F., Goldstein, F.C. and Hightower, A.W., 2012. Clinical, epidemiologic, histopathologic and molecular features of an unexplained dermopathy. PLoS One, 7(1), p.e29908.

      The Mayo Clinic describes it like this:

      Morgellons disease: Managing an unexplained skin condition

      Morgellons disease is an uncommon, poorly understood condition characterized by small fibers or other particles emerging from skin sores. People with this condition often report feeling as if something were crawling on or stinging their skin.

      Some doctors recognize the condition as a delusional infestation and treat it with cognitive behavioral therapy, antidepressants, antipsychotic drugs and counseling. Others think the symptoms are related to an infectious process in skin cells. Further study is needed.

      This does not amount to a medical consensus that it is delusional parasitosis. Other cites given later in this article predate the CDC study.

    7. CDC investigation

      This section doesn't mention criticisms of the CDC study.

      The main problem they faced is the low prevalence of the disease, only 3.65 cases per 100,000. Their four-year study found only 41 patients with the condition, at more than ten thousand dollars per patient. They also didn't select patients who self-diagnosed as having Morgellons, so it is possible the patients they studied did not think they had the condition.

      Harry Schone summarizes these criticisms in one of the sections of his University College London thesis

      "It is indeed true that the CDC were being cautious, that they found no positive evidence for the claims made by Morgellons sufferers, but it does not mean that the study can go without critical appraisal. Although expensive and lengthy, the research only clinically evaluated 41 people. Furthermore, since the population was selected by criteria other than self-identification it has been argued by critics of the study that some of those included did not have or even consider themselves to have Morgellons. The validity of these criticisms may rest on somewhat pedantic points, but what is certainly true is that an awful lot of reading between the lines has been passed off as something more substantial."

      See Learning from Morgellons, Harry Quinn Schone, Master's thesis for UCL (University College London) - see Harry Schone.

      This is a master's thesis rather than a PhD; however, UCL is one of the most prestigious universities in Europe and the thesis summarizes the concerns of other researchers.

    8. No parasites or mycobacteria were detected in the samples collected from any patients. Most materials collected from participants' skin were composed of cellulose, likely of cotton origin

      In their 2015 paper, Middelveen and her co-researchers describe technological limitations of the CDC study which could explain why it did not find the spirochetes that they are able to identify in Morgellons patients:

      "The search for spirochetal pathogens in that study was limited to Warthin-Starry staining on a small number of tissue samples and commercial two-tiered serological Lyme disease testing as interpreted by the CDC Lyme surveillance criteria. It should be noted that only two of the patients in our study group were positive for Lyme disease based on the CDC Lyme surveillance criteria and yet Borrelia spirochetes were readily detectable in this group of 25 MD patients."

      They attribute their success in detecting Borrelia burgdorferi and closely related spirochetes to several factors:

      • Clear diagnostic criteria for patient selection: fibers visible underneath unbroken skin or embedded in or projecting from skin, documented by a healthcare provider
      • Ability to culture spirochetes in vitro to increase the opportunity of detection
      • High spirochetal load in the lesions, similar to lesions in cattle with BDD
      • Use of molecular hybridization and PCR methods, able to detect spirochetal DNA in the picogram range
    1. As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-PV (e.g. electrical) work, which is often equal to zero (hence ΔG must be negative

      The sign of ΔG determines whether a reaction will happen spontaneously.
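
      As a quick reminder (standard textbook thermodynamics, not from the annotated article): with no non-PV work available, the criterion reduces to the sign of ΔG, in LaTeX:

      \Delta G = \Delta H - T\,\Delta S, \qquad
      \Delta G < 0 \Rightarrow \text{spontaneous}, \quad
      \Delta G = 0 \Rightarrow \text{equilibrium}, \quad
      \Delta G > 0 \Rightarrow \text{non-spontaneous}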

    1. A 2015 review

      This article doesn't seem to have had much editing since 2015. The hypothesis that it is a multifactorial issue is now the scientific consensus.

      For more cites, see the annotation on the last sentence of the lede.

      A 2019 review concludes that the collapse develops through a sequence of steps.

      First, non-specific factors such as climate change, agrochemical use or inadequate food decrease the strength of the colonies.

      Then there are faults in bee management, including depriving bees of too much honey and replacing it with sugary food, inadequate treatment against Varroa, wintering on honey contaminated with insecticides, etc.

      The colonies then become more easily infected, often with nosemosis, Lotmaria infection and American foulbrood infection. Finally, Varroa spreads due to a lack of adequate measures to prevent it.

      Stanimirović, Z., Glavinić, U., Ristanić, M., Aleksić, N., Jovanović, N., Vejnović, B. and Stevanović, J., 2019. Looking for the causes of and solutions to the issue of honey bee colony losses. Acta Veterinaria-Beograd, 69(1), pp.1-31.

      …for bee feeding. The use of supplements with sugar syrup should not be avoided, since they provide sufficient amino acids, peptides, micro- and macroelements which are absent from pure sugar syrup [18]. The use of supplements may prevent energetic, immune and oxidative stress in bees, and thus prevent losses in apiaries [129, 171-174]. The presence of a young, healthy bee queen in the hive guarantees the development of healthy bee colonies and successful beekeeping [131, 175]. Suitable pathogen control in hives, primarily of the bee mite V. destructor, with effective, registered varroacides is also a prerequisite for maintaining bee colonies in a good health condition. In addition, a strong link was detected between colony losses and beekeepers' education and training: professionals were capable of keeping colonies free from diseases, unlike hobbyists [12, 70]. Professionals promptly detected symptoms, especially those of American foul brood or Varroa infestation, and timely applied control measures, contributing to the survival of their colonies. This was the first time that scientists focused attention on the impact of apiculturists and beekeeping practices on colony losses. The same authors commented that the introduction of a bee killer, the Varroa mite, to Europe at the beginning of the 1980s did not result in increased colony losses. This was explained by the fact that beekeepers efficiently adopted measures to combat the mite [12].

      CONCLUSIONS: Scientific consensus has been reached that colony losses (CCD) are a multifactorial issue [3, 4, 6], which follows various conditions, but, according to our observations, it develops through a sequence of steps. Firstly, various non-specific factors (e.g. climate changes, agrochemisation and inadequate food) decrease the strength of the colonies; apitechnical faults (depriving bees of too much honey and a consecutive addition of large quantities of sugary food, inadequate treatments of colonies mainly against V. destructor, high stress and exhausting of bees, wintering colonies on honey contaminated with pesticides – sunflower honey, bad timing for wintering the colonies etc.). Such colonies easily become eligible for bacterial, microsporidial and trypanosomal infections. Manifested nosemosis combined with Lotmaria infection and latent American foulbrood infection, additionally exhaust bee colonies and impair the immune system of the bee [176-179]. Finally, inadequate anti-varroa strategies lead to significant health problems in bees and the spread of viruses for which Varroa is a vector, and/or activator. The whole process is a path prepared for the manifestation of virus infections

    2. A large amount of speculation has surrounded a family of pesticides called neonicotinoids as having caused CCD.

      This page seems not to have been updated much since 2013. There is now scientific consensus that the cause is multifactorial.

      Bee populations are declining in the industrialized world, raising concerns for the sustainable pollination of crops. Pesticides, pollutants, parasites, diseases, and malnutrition have all been linked to this problem. We consider here neurobiological, ecological, and evolutionary reasons why bees are particularly vulnerable to these environmental stressors. Central-place foraging on flowers demands advanced capacities of learning, memory, and navigation. However, even at low intensity levels, many stressors damage the bee brain, disrupting key cognitive functions needed for effective foraging, with dramatic consequences for brood development and colony survival.

      Klein, S., Cabirol, A., Devaud, J.M., Barron, A.B. and Lihoreau, M., 2017. Why bees are so vulnerable to environmental stressors. Trends in Ecology & Evolution, 32(4), pp.268-278.

      The good news is that the past decade has seen plenty of progress in understanding the mystery of Colony Collapse Disorder. The bad news is that we now recognise it as a complex problem with many causes, although that doesn’t mean it is unsolvable.

      For all bees, foraging on flowers is a hard life. It is energetically and cognitively demanding; bees have to travel large distances to collect pollen and nectar from sometimes hard-to-find flowers, and return it all to the nest. To do this they need finely tuned senses, spatial awareness, learning and memory.

      Anything that damages such skills can make bees struggle to find food, or even get lost while trying to forage. A bee that cannot find food and make it home again is as good as dead.

      Because of this, bee populations are very vulnerable to what we call “sublethal stressors” – factors that don’t kill the bees directly but can hamper their behaviour.

      Ten years after the crisis, what is happening to the world’s bees?

      Colonies are often challenged by multiple stressors, which can interact: for example, pesticides can enhance disease transmission in colonies. Colonies may be particularly vulnerable to sublethal effects of pathogens and pesticides since colony functions are compromised whether a stressor kills workers, or causes them to fail at foraging. Modelling provides a way to understand the processes of colony failure by relating impacts of stressors to colony-level functions.

      Barron, A.B., 2015. Death of the bee hive: understanding the failure of an insect society. Current Opinion in Insect Science, 10, pp.45-50.

      A 2019 review concludes that the collapse develops through a sequence of steps; see the quote from and cite to Stanimirović et al. (2019) in the previous annotation.

    1. Alternative shipping routes

      Way out of date, last updated in 2012. As of 2019 the pipelines include:

      • UAE pipeline, 1.5 million bpd (details), upgradeable to a max capacity of 1.8 million bpd (details)
      • East-West pipeline, 2.1 million bpd, capacity 5 million bpd, with plans to expand to 7 million bpd by 2023 (cite), from Saudi Arabia's Eastern Province to Yanbu port on the Red Sea
      • Iraq Pipeline, 1.6 million bpd, through Saudi Arabia, re-opened in 2012 (cite)
      • Kirkuk-Ceyhan pipeline from northern Iraq to Turkey, max capacity 1.6 million bpd, currently running far below capacity at 80,000 to 90,000 bpd; some reports say much lower (cite)

      Oil Export Alternatives to the Strait of Hormuz

    2. A third of the world’s liquefied natural gas and almost 20% of total global oil production passes through the strait,

      Also, about 30% of seaborne traded oil, more than 85% of that for Asia, mainly Japan, India, South Korea and China [details](https://www.reuters.com/article/us-yemen-security-oil-factbox/factbox-middle-east-oil-gas-shipping-risks-alternative-routes-idUSKBN0MM2E720150326)

      2017: 17.2 million bpd; first half of 2018: 17.4 million bpd [details](https://www.reuters.com/article/us-iran-oil-factbox/strait-of-hormuz-the-worlds-most-important-oil-artery-idUSKBN1JV24O) (sea-borne crude and condensate)

      Most of the crude exported from Saudi Arabia, Iran, the United Arab Emirates, Kuwait and Iraq passes through it. It is also the route for nearly all the liquefied natural gas (LNG) from lead exporter Qatar. cite

    1. 5G

      You might think this is the place to look for material on whether there are any possible dangers of 5G. However, in an eccentric decision, editors of this article remove any sections on the topic, as here: Removed Dangers of 5g, with their explanation on the talk page here.

      Wikipedia does have a separate article on the topic of Mobile phone radiation and health, but material from that article is not permitted here and this page doesn't link to it. Sadly that article also has almost nothing on 5g and health.

      In 2011 the WHO / International Agency for Research on Cancer (IARC) classified radio frequency EM fields as possibly carcinogenic to humans (Group 2B).

      If there is a risk, it's a tiny one, of a certain type of brain cancer. You can take measures to avoid the risk, such as not holding a cellphone near your head while downloading large files.

      IARC classifies radiofrequency electromagnetic fields as possibly carcinogenic to humans

      Mayo clinic puts the situation like this:

      The bottom line? For now, no one knows if cellphones are capable of causing cancer. Although long-term studies are ongoing, to date there's no convincing evidence that cellphone use increases the risk of cancer. If you're concerned about the possible link between cellphones and cancer, consider limiting your use of cellphones — or use a speaker or hands-free device that places the cellphone antenna, which is typically in the cellphone itself, away from your head.

      Is there a connection between cellphones and cancer?

      A couple of hundred scientists have signed an appeal to the WHO saying the topic needs further investigation; not 5G particularly, but cell phones and wifi generally.

      The evidence isn't very good yet, but they think there is enough to make it worth investigating on a precautionary basis.

      International Scientists Appeal to U.N. to Protect Humans and Wildlife from Electromagnetic Fields and Wireless Technology

      These scientists are concerned about a very small risk of cancer. Though too small to be noticed directly, even a few deaths in a million is something worth taking precautions to prevent.

      There is a lot of conspiracy theory nonsense on the topic, however. See for instance

    1. Deinococcus radiodurans also failed to grow under low atmospheric pressure, under 0 °C, or in the absence of oxygen

      This is not surprising: it shows that D. radiodurans is an obligate aerobe. It doesn't mention the surprising result, mentioned in this cite, that S. liquefaciens was able to grow under these conditions.

      A more accurate summary of the source would be something like this:

      In other simulations, Serratia liquefaciens strain ATCC 27592 was able to grow at 7 mbar, 0 °C, in CO2-enriched anoxic atmospheres. This was surprising, as it is a generalist that occurs in many terrestrial niches, not an extremophile. Two extremophiles, Deinococcus radiodurans strain R1 and Psychrobacter cryohalolentis strain K5, were both unable to grow in anoxic conditions (making them obligate aerobes), and R1 was also unable to grow below 0 °C or at 7 mbar.

      Source says:

      Only Serratia liquefaciens strain ATCC 27592 exhibited growth at 7 mbar, 0°C, and CO2-enriched anoxic atmospheres ... The growth of S. liquefaciens at 7 mbar, 0°C, and CO2-enriched anoxic atmospheres was surprising since S. liquefaciens is ecologically a generalist that occurs in terrestrial plant, fish, animal, and food niches.

    2. Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere

      This is based on earlier papers that studied dormant life, because at the time it was thought that the present-day Mars surface was too cold and dry for life, but that life could have survived in dormant form from times when the axial tilt varied, the atmosphere thickened and water briefly flowed on Mars. Any such life would be buried deep, because the cumulative effects of ionizing radiation over millions of years can sterilize anything dormant.

      However, if the life can continue to grow, reproduce and repair itself, then over 500 years even E. coli, one of our most radiosensitive microbes, is reduced by only 90%. Levels are similar to those in the interior of the ISS and are not lethal to microbes unless they are dormant for long periods.

      From the MSL RAD measurements, ionizing radiation levels from cosmic radiation are so low as to be negligible. The intermittent solar storms increase the dose only for a few days, and the Martian surface provides enough shielding that the total dose from solar storms is less than double that from cosmic radiation. Over 500 years the Mars surface would receive a cumulative dose of less than 50 Gy, far less than the dose at which 90% of even a radiation-sensitive bacterium such as E. coli would die (LD90 of ~200-400 Gy). These facts are not used to distinguish Special Regions on Mars.

      Cite here
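
      A rough check of the 50 Gy figure (my own arithmetic, assuming the published MSL RAD surface rate of about 0.21 mGy per day, i.e. roughly 76 mGy per year):

      0.076\ \mathrm{Gy/yr} \times 500\ \mathrm{yr} \approx 38\ \mathrm{Gy}, \qquad
      38\ \mathrm{Gy} \ll \mathrm{LD}_{90}(\textit{E. coli}) \approx 200\text{--}400\ \mathrm{Gy}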

      NASA has the search for extant life as one of its two top priorities in the search for life on Mars.

      A special region on Mars, for the purposes of planetary protection, is a region classified by COSPAR where terrestrial organisms are likely to propagate, or interpreted to have a high potential for the existence of extant Martian life forms.

      See

    3. A major goal is to preserve the planetary record of natural processes by preventing human-caused microbial introductions, also called forward contamination.

      Uncited assertion. Not the main goal. A few microbes on the Martian surface would not obscure the planetary record in the frozen regolith. The main goal is to protect future science experiments, so that they don't find Earth microbes when searching for extant Mars organisms.

      Cassie Conley, head of NASA's Office of Planetary Protection, puts it like this: https://youtu.be/qk-Ycp5llEI

      Third video on overview page of the NASA Office of Planetary Protection.

      “So we have to do all of our search for life activities, we have to look for the Mars organisms, without the background, without the noise of having released Earth organisms into the Mars environment”

    1. link the Deccan Traps eruption to the asteroid impact that created the nearly antipodal Chicxulub crater

      This theory of antipodal focusing is disproved, according to the intro of the cited source. The source says that the Deccan Traps were not antipodal to the Chicxulub crater, but separated from it by an epicentral distance of ~130°. It also says that the impactor didn't have enough energy to cause melting at the antipodes. The cite says this disproves the antipodal theory.

      Instead, the new idea presented in the cite is that the impact generated the equivalent of a magnitude 9 earthquake worldwide, and this increased volcanism everywhere, through a now well-established process by which nearby earthquakes can trigger increased volcanism. The Deccan Traps started well before the impact, due to a "plume head" rising through the mantle (something that happens every 20-30 million years), but after the impact the eruptions sped up and their chemistry changed.

      Cite says:

      The possibility that an impact at Cretaceous-Paleogene time caused Deccan volcanism has been investigated since the discovery of the iridium anomaly at Cretaceous-Paleogene boundary, with an emphasis on antipodal focusing of seismic energy. However, the Deccan continental flood basalts were not antipodal to the 66 Ma Chicxulub crater at the time of the impact, but instead separated by an epicentral distance of ~130°. Also, a Chicxulub-size impact does not in any case appear capable of generating a large mantle melting event. Thus, impact-induced partial melting could not have caused the initiation of Deccan volcanism, consistent with the occurrence of Deccan volcanism well before Cretaceous-Paleogene/Chicxulub time.

      Instead, Deccan volcanism is widely thought to represent the initial outburst of a new mantle plume “head” at the beginning of the Réunion hotspot track

      The accompanying press release from UC Berkeley says:

      Michael Manga, a professor in the same department, has shown over the past decade that large earthquakes – equivalent to Japan’s 9.0 Tohoku quake in 2011 – can trigger nearby volcanic eruptions. Richards calculates that the asteroid that created the Chicxulub crater might have generated the equivalent of a magnitude 9 or larger earthquake everywhere on Earth, sufficient to ignite the Deccan flood basalts and perhaps eruptions many places around the globe, including at mid-ocean ridges.

      “It’s inconceivable that the impact could have melted a whole lot of rock away from the impact site itself, but if you had a system that already had magma and you gave it a little extra kick, it could produce a big eruption,” Manga said.

      Similarly, Deccan lava from before the impact is chemically different from that after the impact, indicating a faster rise to the surface after the impact, while the pattern of dikes from which the supercharged lava flowed – “like cracks in a soufflé,” Renne said – are more randomly oriented post-impact.

      “There is a profound break in the style of eruptions and the volume and composition of the eruptions,” said Renne. “The whole question is, ‘Is that discontinuity synchronous with the impact?’”

      Another cite here: The Conversation, which is written by academics and is WP:RS.

      More bad news for dinosaurs: Chicxulub meteorite impact triggered global volcanic eruptions on the ocean floor (2018)

      Our observations suggest the following sequence of events at the end of the Cretaceous period. Just over 66 million years ago, the Deccan Traps start erupting – likely initiated by a plume of hot rock rising from the Earth’s core, similar in some ways to what’s happening beneath Hawaii or Yellowstone today, that impinged on the side of India’s tectonic plate. The mid-ocean ridges and dinosaurs continue their normal activity.

      About 250,000 years later, Chicxulub hits off the coast of what will become Mexico. The impact causes a massive disruption to the Earth’s climate, injecting particles into the atmosphere that will eventually settle into a layer of clay found across the planet. In the aftermath of impact, volcanic activity accelerates for perhaps tens to hundreds of thousands of years. The mid-ocean ridges erupt large volumes of magma, while the Deccan Traps eruptions flood lava across much of the Indian subcontinent.

    1. Sleep: What is the biological function of sleep? Why do we dream? What are the underlying brain mechanisms? What is its relation to anesthesia?

      This may be the biggest problem. What are the factors that increase or decrease the need for sleep? How can we push against natural fatigue and its causes? How can we give people more wakefulness / conscious life per day (without suffering significant debuffs)?

    1. clathrate gun hypothesis

      The lede doesn't mention the major literature review by the USGS in December 2016, which concluded that evidence is lacking for the original hypothesis; the similar conclusion by the Royal Society in 2017 that there is a relatively limited role for climate feedback from dissociation of the methane clathrates; or the conclusion of the CAGE research group (Centre for Arctic Gas Hydrate, Environment and Climate) that the hydrates formed over 6 million years ago and have been slowly releasing methane for 2 million years, independently of warm or cold climates. Details here: Clathrate gun hypothesis

    2. A 2018 published review concluded that the clathrate gun hypothesis remains controversial, but that better understanding is vital.

      This is NOT their conclusion. Their conclusion was that it is unlikely. Quotes from the paper:

      "Although the clathrate gun hypothesis remains controversial (21), a good understanding of how environmental change affects natural CH4 sources is vital in terms of robustly projecting future fluxes under a changing climate."

      Then later:

      "Nevertheless, it seems unlikely that catastrophic, widespread dissociation of marine clathrates will be triggered by continued climate warming at contemporary rates (0.2◦C per decade) during the twenty-first century"

      They did, however, urge caution about extraction of methane clathrates as a fuel, as this could lead to leaks of methane.

      As discussed previously (Section 4.1), the stability of CH4 clathrate deposits may already be at risk from climate change. Accidental or deliberate disturbance, due to fossil fuel extraction, has the potential for extremely high fugitive CH4 losses to the atmosphere.

      For details with more cites, see Clathrate gun hypothesis.

    1. It was estimated in November 2015 that only 18 Jews remain in Syria

      The cite has no mention of the number of Jews left in Syria or the number 18.

      There may be Jews left in Syria, according to the Jerusalem Post.

      First, one member of the family rescued in 2016 is still there, because she was married to a Muslim man and signed conversion papers, though she says she didn't really convert.

      According to Motti Kahana, who engineered the operation, there are no Jews left in Aleppo, aside from one member of the Halabi family, Linda, whose immigration to Israel was denied, citing her conversion to Islam – the source of a dispute between the Jewish Agency and Kahana.

      The latter still sends kosher food to the woman, and maintains that though she signed conversion papers – which is required by Syrian law when marrying a Muslim – she did not really convert.

      Also, another family in Aleppo claims they are Jewish and is asking for aliyah.

      A family from war-torn Aleppo is appealing to the State of Israel for refuge, citing their Jewish heritage, Army Radio reported on Sunday.

      “There is nobody who can help us to get out of this place,” said 30-year-old Razan (real name withheld) in an audio recording translated from Arabic into Hebrew and aired on the radio station. “We are asking that the Israeli government does not abandon us, but helps us get out of here to another country. I ask that the government demands from the entire world to do this. All my love and loyalty is to this religion [Judaism].”

      Experts say there are still some Jews remaining in other parts of Syria. Elizabeth Tzurkov, a Syria researcher at Israeli think tank the Forum for Regional Thinking, told Army Radio: “... a number of Syrians have approached me who are descendants of Jewish women, who converted to Islam or who did not convert, and inquired how they can move to Israel.”

      https://www.jpost.com/Diaspora/Stranded-in-Aleppo-Syrians-claiming-to-be-Jews-seek-aid-from-Israel-483261

    1. More recently, in September 2016, the last Jews of Aleppo were rescued hence ending that last Jewish presence in Aleppo

      There may be Jews left in Syria, according to the Jerusalem Post; see the quotes and the link in the previous annotation.

    1. Perigean spring tide

      NOAA FAQ about Perigean spring tides is a useful source too

      Mr. Woods' book examines the occurrences of coastal flooding through history. What he discovered is that coastal flooding did occur when there was a strong onshore wind, such as a hurricane or nor'easter, which occasionally occurred at the same time as a "perigean spring tide."

      The problem has been that a number of people have misinterpreted the information presented in this book to mean that coastal flooding would occur whenever the "perigean spring tides" occur. This has led to articles published in various media sources that incorrectly predict widespread coastal flooding at the times of the "perigean spring tides," causing needless concern.

      Most people who live along the coastline know that coastal flooding can occur whenever there are strong onshore winds, whether there is a "perigean spring tide" or not. Additionally, this flooding will be worse if the storm strikes around the time of high tide rather than around the time of low tide.

      But in ALL cases, it is the storm winds which cause the coastal flooding, not the tides. Coastal flooding is the result of meteorology (the weather) not astronomy (normal tidal fluctuations). All astronomical considerations are accounted for in the NOS tide and tidal current predictions. https://co-ops.nos.noaa.gov/faq2.html#15

    1. The state is required to obtain at least 33% of its electricity from renewable resources by 2020, and 50% by 2030, excluding large hydro

      Out of date. The cited page now says that under SB 100 California is required to produce 60% renewables by 2030 and all electricity from carbon-free sources by 2045.

    1. The cite is to a preprint, not a WP:RS. 81 is likely a typo for 18. The impactor size is most often given as 10-15 km.

      "Asteroids striking the Earth typically [Minton and Malhotra, 2010] have an impactor density of 2680 kg/m3and an impact velocity of 20 km/s.Assuming these properties, modern scaling relations indicate that a 10–15 km diameter projectile [Collins et al., 2008] created the 170 km diameter Chicxulub crater"

      Parkos, D., Alexeenko, A., Kulakhmetov, M., Johnson, B.C. and Melosh, H.J., 2015. NOx production and rainout from Chicxulub impact ejecta reentry. Journal of Geophysical Research: Planets, 120(12), pp.2152-2168.

    1. Plutonium, like most metals, has a bright silvery appearance at first, much like nickel, but it oxidizes very quickly to a dull gray, although yellow and olive green are also reported.[1][2] At room temperature plutonium is in its α (alpha) form. This, the most common structural form of the element (allotrope), is about as hard and brittle as gray cast iron unless it is alloyed with other metals to make it soft and ductile. Unlike most metals, it is not a good conductor of heat or electricity. It has a low melting point (640 °C) and an unusually high boiling point (3,228 °C).[1] Alpha decay, the release of a high-energy helium nucleus, is the most common form of radioactive decay for plutonium.[3] A 5 kg mass of 239Pu contains about 12.5×10^24 atoms. With a half-life of 24,100 years, about 11.5×10^12 of its atoms decay each second by emitting a 5.157 MeV alpha particle. This amounts to 9.68 watts of power. Heat produced by the deceleration of these alpha particles makes it warm to the touch.

      "Heat produced by the deceleration of these alpha particles makes it warm to the touch."

    1. Heavy water was first produced in 1932, a few months after the discovery of deuterium.[6] With the discovery of nuclear fission in late 1938, and the need for a neutron moderator that captured few neutrons, heavy water became a component of early nuclear energy research. Since then, heavy water has been an essential component in some types of reactors, both those that generate power and those designed to produce isotopes for nuclear weapons. These heavy water reactors have the advantage of being able to run on natural uranium without using graphite moderators that pose radiological[7] and dust explosion[8] hazards in the decommissioning phase. Most modern reactors use enriched uranium with ordinary water as the moderator.
    1. The Elephant’s Foot is the nickname given to a large mass of corium formed during the Chernobyl disaster in April 1986 and presently located in a steam distribution corridor underneath the remains of the reactor. It is currently an extremely deadly radioactive compound, yet its danger has decreased with the decay of its radioactive components.
    1. Camus follows Sartre's definition of the absurd: "That which is meaningless. Thus man's existence is absurd because his contingency finds no external justification".[71] The absurd is created through man's realization, placed as he is into an unintelligent universe, that human values are not founded on a solid external component; or as Camus himself explains, the absurd is the result of the "confrontation between human need and the unreasonable silence of the world".[74] Even though absurdity is inescapable, Camus does not drift towards nihilism. But the realization of absurdity leads to the question: why should someone continue to live? Suicide is an option that Camus firmly dismisses as the renunciation of human values and freedom. Rather, he proposes we accept that absurdity is a part of our lives and live with it.
    2. On the other hand, Camus focused most of his philosophy around existential questions. The absurdity of life, the inevitable ending (death) is highlighted in his acts, his belief that the absurd – life being void of meaning, or man's inability to know that meaning if it were to exist – was something that man should embrace, his anti-Christianity, his commitment to individual moral freedom and responsibility are only a few of the similarities with other existential writers.[69] More importantly, Camus addressed one of the fundamental questions of existentialism: the problem of suicide. He wrote, "There is only one really serious philosophical question, and that is suicide." Camus viewed the question of suicide as arising naturally as a solution to the absurdity of life.[70]
    1. Radioactive decay is a stochastic (i.e. random) process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay,[1][2][3] regardless of how long the atom has existed. However, for a collection of atoms, the collection's expected decay rate is characterized in terms of their measured decay constants or half-lives. This is the basis of radiometric dating. The half-lives of radioactive atoms have no known upper limit, spanning a time range of over 55 orders of magnitude, from nearly instantaneous to far longer than the age of the universe.
    2. Radioactive decay (also known as nuclear decay, radioactivity or nuclear radiation) is the process by which an unstable atomic nucleus loses energy (in terms of mass in its rest frame) by emitting radiation, such as an alpha particle, beta particle with neutrino or only a neutrino in the case of electron capture, or a gamma ray or electron in the case of internal conversion. A material containing such unstable nuclei is considered radioactive. Certain highly excited short-lived nuclear states can decay through neutron emission, or more rarely, proton emission.
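
      A tiny simulation (a generic sketch of my own, not from the cited article) illustrates the point in the first of the two quotes above: each atom's decay is memoryless, yet a large collection reliably halves over one half-life.

      import random

      half_life = 10.0                # arbitrary time units
      p = 1 - 0.5 ** (1 / half_life)  # per-step decay probability per atom

      atoms, t = 100_000, 0
      while atoms > 50_000:           # run until half the sample is gone
          atoms -= sum(random.random() < p for _ in range(atoms))
          t += 1
      print(t)                        # typically ~10 steps, i.e. one half-life
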
    1. The code name "Trinity" was assigned by Robert Oppenheimer, the director of the Los Alamos Laboratory, inspired by the poetry of John Donne. The test was of an implosion-design plutonium device, informally nicknamed "The Gadget", of the same design as the Fat Man bomb later detonated over Nagasaki, Japan, on August 9, 1945. The complexity of the design required a major effort from the Los Alamos Laboratory, and concerns about whether it would work led to a decision to conduct the first nuclear test. The test was planned and directed by Kenneth Bainbridge.
    1. In the Chernobyl disaster, the moderator was not responsible for the primary event. Instead, a massive power excursion during a mishandled test caused the catastrophic failure of the reactor vessel and a near-total loss of coolant supply. The result was that the fuel rods rapidly melted and flowed together while in an extremely-high-power state, causing a small portion of the core to reach a state of runaway prompt criticality and leading to a massive energy release,[22] resulting in the explosion of the reactor core and the destruction of the reactor building. The massive energy release during the primary event superheated the graphite moderator, and the disruption of the reactor vessel and building allowed the superheated graphite to come into contact with atmospheric oxygen. As a result, the graphite moderator caught fire, sending a plume of highly radioactive fallout into the atmosphere and over a very widespread area.
    2. Nuclear graphite for the UK Magnox reactors was manufactured from petroleum coke mixed with coal-based binder pitch heated and extruded into billets, and then baked at 1,000 °C for several days. To reduce porosity and increase density, the billets were impregnated with coal tar at high temperature and pressure before a final bake at 2,800 °C. Individual billets were then machined into the final required shapes.[17] The manufacturing process is designed to ensure uniformity in material properties. Despite this care, recent research using stochastic finite element analysis[18] has shown that tiny spatial variations in material properties may play a significant role in how a graphite component ages.[19] A study carried out in 2016 provides data for the spatial variation of properties such as density and Young's modulus within a typical billet.[14] This information has been used to calibrate random fields for probabilistic simulation.[15]
    1. Despite their name, rare-earth elements are – with the exception of the radioactive promethium – relatively plentiful in Earth's crust, with cerium being the 25th most abundant element at 68 parts per million, more abundant than copper. However, because of their geochemical properties, rare-earth elements are typically dispersed and not often found concentrated in rare-earth minerals; as a result economically exploitable ore deposits are less common.[4] The first rare-earth mineral discovered (1787) was gadolinite, a mineral composed of cerium, yttrium, iron, silicon, and other elements. This mineral was extracted from a mine in the village of Ytterby in Sweden; four of the rare-earth elements bear names derived from this single location.
    1. In most reactor designs, as a safety measure, control rods are attached to the lifting machinery by electromagnets, rather than direct mechanical linkage. This means that in the event of power failure, or if manually invoked due to failure of the lifting machinery, the control rods fall automatically, under gravity, all the way into the pile to stop the reaction. A notable exception to this fail-safe mode of operation is the BWR, which requires hydraulic insertion in the event of an emergency shut-down, using water from a special tank under high pressure. Quickly shutting down a reactor in this way is called scramming.
    2. Chemical elements with a sufficiently high neutron capture cross-section include silver, indium and cadmium. Other candidate elements include boron, cobalt, hafnium, samarium, europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium and lutetium.[1] Alloys or compounds may also be used, such as high-boron steel,[2] silver-indium-cadmium alloy, boron carbide, zirconium diboride, titanium diboride, hafnium diboride, gadolinium nitrate,[3] gadolinium titanate, dysprosium titanate and boron carbide - europium hexaboride composite.[4]
    3. Control rods are usually used in control rod assemblies (typically 20 rods for a commercial PWR assembly) and inserted into guide tubes within a fuel element. A control rod is removed from or inserted into the central core of a nuclear reactor in order to increase or decrease the neutron flux, which describes the number of neutrons that split further uranium atoms. This in turn affects the thermal power, the amount of steam produced and hence the electricity generated.
    4. Control rods are used in nuclear reactors to control the fission rate of uranium and plutonium. They are composed of chemical elements such as boron, silver, indium and cadmium that are capable of absorbing many neutrons without themselves fissioning. Because these elements have different capture cross sections for neutrons of varying energies, the composition of the control rods must be designed for the reactor's neutron spectrum. Boiling water reactors (BWR), pressurized water reactors (PWR) and heavy water reactors (HWR) operate with thermal neutrons, while breeder reactors operate with fast neutrons.
  4. May 2019
    1. sales volume

      "Sales volume is the number of units sold within a reporting period. This figure is monitored by investors to see if a business is expanding or contracting. Within a business, sales volume may be monitored at the level of the product, product line, customer, subsidiary, or sales region."

    2. TI also invented the hand-held calculator in 1967, and introduced the first single-chip microcontroller (MCU) in 1970, which combined all the elements of computing onto one piece of silicon.[10]

      Start of the calculators

    1. Pathogenic amyloids form when previously healthy proteins lose their normal physiological functions and form fibrous deposits in plaques around cells which can disrupt the healthy function of tissues and organs.

      Clusters of these proteins prevent organs from functioning correctly.

    2. Amyloids are aggregates of proteins that become folded into a shape that allows many copies of that protein to stick together, forming fibrils. In the human body, amyloids have been linked to the development of various diseases.

      Amyloids are clusters of proteins that are associated with development of various diseases in humans.

    1. They appear only twice (always plural) in the Tanakh, at Psalm 106:37 and Deuteronomy 32:17; both times, it deals with child or animal sacrifices.[6] Although the word is traditionally derived from the root ŠWD (Hebrew: שוד‎ shûd) that conveys the meaning of "acting with violence" or "laying waste",[7] it was possibly a loan-word from Akkadian, in which the word shedu referred to a protective, benevolent spirit.[8] The word may also derive from the "Sedim, Assyrian guard spirits"[9] as referenced according to lore "Azazel slept with Naamah and spawned Assyrian guard spirits known as sedim".[10] With the translation of Hebrew texts into Greek, under the influence of Zoroastrian dualism, shedim were translated into daimonia with implicit negativity. Otherwise, later in Judeo-Islamic culture, shedim became the Hebrew word for jinn, with a morally ambivalent attitude.
    2. Shedim (Hebrew: שֵׁדִים‎) are spirits or demons in early Jewish mythology. However, they are not necessarily equivalent to the modern connotation of demons as evil entities.[3] Evil spirits were thought of as the cause of maladies; conceptually differing from the shedim,[4] who are not evil demigods but the foreign gods themselves. Shedim are just evil in the sense that they are not God.
    1. The report found that, due to human impact on the environment in the past half-century, the Earth's biodiversity has suffered a catastrophic decline unprecedented in human history

      This is really sad :(

    1. closed when approximately 13,000 workers voted to strike "indefinitely" in protest of a union leader's arrest for calling for an end to military rule in Chile.

      What was the military rule?

    1. Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modelled by a probability distribution that has a fixed set of parameters.[1] Conversely a non-parametric model differs precisely in that the parameter set (or feature set in machine learning) is not fixed and can increase, or even decrease, if new relevant information is collected.[2] Most well-known statistical methods are parametric.[3] Regarding nonparametric (and semiparametric) models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[4]

      Non-parametric vs parametric stats

    1. Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical. It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is called parametric. Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesis non-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termed distribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.

      Non-parametric vs parametric statistics

    2. Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data. As non-parametric methods make fewer assumptions, their applicability is much wider than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust. Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding. The wider applicability and increased robustness of non-parametric tests comes at a cost: in cases where a parametric test would be appropriate, non-parametric tests have less power. In other words, a larger sample size can be required to draw conclusions with the same degree of confidence.

      Non-parametric vs parametric statistics
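
      A minimal illustration of the trade-off described above (my sketch, assuming numpy and scipy are available): the t-test is parametric and assumes normal populations, while the Mann-Whitney U test uses only the rank order of the observations.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      a = rng.normal(0.0, 1.0, size=30)   # sample from N(0, 1)
      b = rng.normal(0.8, 1.0, size=30)   # sample from N(0.8, 1)

      t_stat, t_p = stats.ttest_ind(a, b)                              # parametric
      u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")  # rank-based

      # Both p-values are small here; with heavy-tailed or merely ordinal data,
      # only the rank-based test's assumptions would still hold.
      print(t_p, u_p)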

    1. The concept of data type is similar to the concept of level of measurement, but more specific: For example, count data require a different distribution (e.g. a Poisson distribution or binomial distribution) than non-negative real-valued data require, but both fall under the same level of measurement (a ratio scale).
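
      A quick illustration of the distinction (my own sketch, assuming scipy): counts and non-negative reals sit at the same ratio level of measurement, but call for different distributions.

      from scipy import stats

      # Count data (integers >= 0): a Poisson model is natural.
      print(stats.poisson.pmf(3, mu=2.5))             # P(exactly 3 events), mean 2.5

      # Non-negative real-valued data: a continuous model such as the gamma.
      print(stats.gamma.pdf(3.0, a=2.0, scale=1.25))  # density at 3.0, same mean 2.5
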
    1. In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov.
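
      A small sketch of the memoryless property (a hypothetical two-state weather chain of my own): the next state depends only on the current state, never on how the chain got there.

      import random

      # P(next | current) only; the earlier history is irrelevant (Markov property).
      transitions = {
          "sunny": {"sunny": 0.9, "rainy": 0.1},
          "rainy": {"sunny": 0.5, "rainy": 0.5},
      }

      def step(state):
          r, cum = random.random(), 0.0
          for nxt, p in transitions[state].items():
              cum += p
              if r < cum:
                  return nxt
          return state  # guard against floating-point rounding

      state = "sunny"
      for _ in range(10):
          state = step(state)
          print(state, end=" ")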

    1. Yazidi accounts of creation differ from that of Judaism, Christianity, and Islam and resembles Zoroastrianism[119] or Hinduism. Especially worshipping a holy peacock, Melek Taus in oil lamps is more common in Hinduism. They believe that God first created Tawûsê Melek from his own (God's) illumination (Ronahî) and the other six archangels were created later. God ordered Tawûsê Melek not to bow to other beings. Then God created the other archangels and ordered them to bring him dust (Ax) from the Earth (Erd) and build the body of Adam. Then, God gave life to Adam from his own breath and instructed all archangels to bow to Adam. The archangels obeyed except for Tawûsê Melek. In answer to God, and the seemingly contradictory command, Tawûsê Melek replied, "How can I submit to another being! I am from your illumination while Adam is made of dust." Then, God praised him and made him the leader of all angels and his deputy on the Earth. This probably furthers what some see as a connection to the Islamic Shaytan, as according to the Quran, he too refused to bow to Adam at God's command, though in this case it is seen as being a sign of Shaytan's sinful pride. Hence, the Yazidis believe that Tawûsê Melek is the representative of God on the face of the Earth and comes down to the Earth on the first Wednesday of Nisan (April).
    2. The reason for the Yazidis' reputation of being devil worshipers is connected to the other name of Melek Taus, Shaytan, the same name the Koran has for Satan.[115] Yazidis, however, believe Tawûsê Melek is not a source of evil or wickedness. They consider him to be the leader of the archangels, not a fallen angel.[66][49] The Yazidis of Kurdistan have been called many things, most notoriously 'devil-worshippers,' a term used both by unsympathetic neighbours and fascinated Westerners. This sensational epithet is not only deeply offensive to the Yazidis themselves, but quite simply wrong."[116] Non-Yazidis have associated Melek Taus with Shaitan (Islamic/Arab name) or Satan, but Yazidis find that offensive and do not actually mention that name.[116]
    3. Yazidis are monotheists,[58] believing in one God, who created the world and entrusted it into the care of a Heptad of seven Holy Beings, often known as Angels or heft sirr (the Seven Mysteries). The names of these beings or angels are Azaz'il, Gabra'il (Jabra'il), Mikha'il, Rafa'il (Israfil), Dadra'il, Azrafil and Shamkil (Shemna'il)[113] Preeminent among these is Tawûsê Melek (frequently known as "Melek Taus" in English publications), the Peacock Angel[114][69] (identified with one of these Angels). Tawûsê Melek is often identified by Christians and Muslims with Satan. According to claims in Encyclopedia of the Orient,
    1. Monseigneur de Hemptinne watched Yeke people working at Dikuluwe as late as 1924. They worked in the dry season and stopped when the first rains arrived. The mining camp was near a stream where millet could be planted. Women and children collected malachite from the surface, while men used iron picks to excavate pits and shafts, using fire to crack the rocks when needed. The mines were between 10 metres (33 ft) and 15 metres (49 ft) deep with galleries up to 20 metres (66 ft) long. The ore would be sorted and then taken to a nearby stream for concentration before being smelted

      info about how mines work

    1. El Quiché forms the heartland of the Kʼicheʼ people. In pre-Columbian times, the Kʼicheʼ settlements and influence reached beyond the highlands, including the valley of Antigua and coastal areas in Escuintla.

      .

    2. There is also evidence for a large degree of cultural exchange between the Kʼicheʼ and the people of Central Mexico, and Nahuatl has influenced the Kʼicheʼ language greatly.[5]

      .

  5. Apr 2019
    1. the phrase alludes to influences by Confucianism[2](p10) – in particular, filial piety or loyalty towards the family, corporation, and nation; the forgoing of personal freedom for the sake of society's stability and prosperity; the pursuit of academic and technological excellence; and, a strong work ethic together with thrift

      I think these values might be useful to teach in school. I wonder if many Western colleges and schools have ever taught this to their students.

    2. Proponents of so-called "Asian values", who tend to support Asian-style authoritarian governments,[2](p13) claim these values are more appropriate for the region than Western democracy with its emphasis on individual freedoms.[3] "Asian values" were codified and promoted in the Bangkok Declaration of 1993, which re-emphasized the principles of sovereignty, self-determination, and non-interference in civil and political rights. They included: Preference for social harmony; Concern with socio-economic prosperity and the collective well-being of the community; Loyalty and respect towards figures of authority; Preference for collectivism and communitarianism.

      Now that I think about the times when people in the MTA Evergreen collaboration program told me that I was using male-dominated language, I realize that this is the source of my values, and I find it sad that the Evergreen students never understood it at all.