unitary operator is a surjective bounded operator
Why must unitary operator only be surjective? Why not bijective?
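One standard answer, assuming the definition in question also requires U to preserve the inner product (as the usual Hilbert-space definition does): injectivity then comes for free, so only surjectivity needs to be stated. A sketch of the argument:

```latex
% If U preserves the inner product, it is an isometry:
\lVert Ux\rVert^2 = \langle Ux, Ux\rangle = \langle x, x\rangle = \lVert x\rVert^2 .
% An isometry is automatically injective:
Ux = Uy \;\Rightarrow\; \lVert U(x-y)\rVert = \lVert x-y\rVert = 0 \;\Rightarrow\; x = y .
% Hence a surjective, inner-product-preserving bounded operator is
% already bijective; demanding surjectivity alone is enough.
```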
A 2015 study suggested that the AMOC has weakened by 15-20% in 200 years
This doesn't seem to have been updated since 2015. The IPCC report in 2018 (chapter 3) says:
It is more likely than not that the Atlantic Meridional Overturning Circulation (AMOC) has been weakening in recent decades, given the detection of the cooling of surface waters in the North Atlantic and evidence that the Gulf Stream has slowed since the late 1950s. There is only limited evidence linking the current anomalously weak state of AMOC to anthropogenic warming. It is very likely that the AMOC will weaken over the 21st century. The best estimates and ranges for the reduction based on CMIP5 simulations are 11% (1–24%) in RCP2.6 and 34% (12–54%) in RCP8.5 (AR5). There is no evidence indicating significantly different amplitudes of AMOC weakening for 1.5°C versus 2°C of global warming.
Hoegh-Guldberg, O., Jacob, D., Taylor, M., Bindi, M., Brown, S., Camilloni, I., Diedhiou, A., Djalante, R., Ebi, K., Engelbrecht, F. and Guiot, K., 2018. Impacts of 1.5 ºC global warming on natural and human systems. Chapter 3, section 3.3.8
In 1996 and 1998, a pair of workshops at the University of Glasgow on information retrieval and human–computer interaction sought to address the overlap between these two fields. Marchionini notes the impact of the World Wide Web and the sudden increase in information literacy – changes that were only embryonic in the late 1990s.
it took a half a century for these disciplines to discern their complementarity!
Apart from a few asteroids whose densities have been investigated,[6] one has to resort to enlightened guesswork.
The field has moved on a lot since then. This gives an idea of the range of density values for NEOs:
So, for a 200 meter asteroid the density is between 1 and 3 g/cm³, most likely around 1.75 or so, but with a small chance of an iron meteorite at 6 to 7 g/cm³. For a 20 meter asteroid it’s a similar range but with two very likely densities of around 2.2 and around 2.8 (just going by eye from that diagram). The 100 meter size range is similar.
The analysis is based on structural type and composition.
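Just to make the arithmetic concrete, here is a rough sketch of what those density figures imply for mass, assuming a spherical asteroid and using the eyeballed densities from the comment above:

```python
import math

def asteroid_mass_kg(diameter_m, density_g_cm3):
    """Mass of a spherical asteroid; density in g/cm^3 (1 g/cm^3 = 1000 kg/m^3)."""
    radius = diameter_m / 2
    volume = (4 / 3) * math.pi * radius ** 3   # m^3
    return volume * density_g_cm3 * 1000       # kg

# 200 m asteroid across the plausible density range (7.0 ~ iron meteorite):
for rho in (1.0, 1.75, 3.0, 7.0):
    print(f"rho = {rho} g/cm^3 -> {asteroid_mass_kg(200, rho):.2e} kg")
```

The spread is roughly a factor of seven in mass (and hence impact energy at a given speed), which is why the density guess matters so much.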
Authentication (from Greek: αὐθεντικός authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user.
The assertion does not need to be about an actor's identity per se?
The Battlestar Galactica, an aircraft carrier in space that fought in the earlier war, is in the final stages of being decommissioned and converted to a museum when the attack occurs. During her decades of colonial service the Galactica's computer systems were never networked, so the Galactica is unaffected by the Cylon sabotage.
Galactica, using old tech, was saved from the Cylon hack.
At the start of the 1970s, The New Communes author Ron E. Roberts classified communes as a subclass of a larger category of Utopias.[5] He listed three main characteristics. Communes of this period tended to develop their own characteristics of theory though, so while many strived for variously expressed forms of egalitarianism, Roberts' list should never be read as typical. Roberts' three listed items were: first, egalitarianism – that communes specifically rejected hierarchy or graduations of social status as being necessary to social order. Second, human scale – that members of some communes saw the scale of society as it was then organized as being too industrialized (or factory sized) and therefore unsympathetic to human dimensions. And third, that communes were consciously anti-bureaucratic.
Although numerous studies point to resistance to some of Mars conditions, they do so separately, and none has considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith, and others, all at the same time and in combination.[230] Laboratory simulations show that whenever multiple lethal factors are combined, the survival rates plummet quickly.[21]
The researchers are of the view that their work strongly supports the possibility that terrestrial microbes can adapt physiologically to live on Mars:
"This work strongly supports the interconnected notions (i) that terrestrial life most likely can adapt physiologically to live on Mars (hence justifying stringent measures to prevent human activities from contaminating / infecting Mars with terrestrial organisms); (ii) that in searching for extant life on Mars we should focus on "protected putative habitats"; and (iii) that early-originating (Noachian period) indigenous Martian life might still survive in such micro-niches despite Mars' cooling and drying during the last 4 billion years"
de Vera, Jean-Pierre; Schulze-Makuch, Dirk; Khan, Afshin; Lorek, Andreas; Koncz, Alexander; Möhlmann, Diedrich; Spohn, Tilman (2014). "Adaptation of an Antarctic lichen to Martian niche conditions can occur within 34 days". Planetary and Space Science. 98: 182–190. Bibcode:2014P&SS...98..182D. doi:10.1016/j.pss.2013.07.014. ISSN 0032-0633.
Currently, the surface of Mars is bathed with radiation, and when reacting with the perchlorates on the surface, it may be more toxic to microorganisms than thought earlier.[11][12] Therefore, the consensus is that if life exists —or existed— on Mars, it could be found or is best preserved in the subsurface, away from present-day harsh surface processes.
This is the old view from around 2007. Nowadays the surface is also thought to be of interest for the search for present day life on Mars.
Cites here are from "A new analysis of Mars “special regions”: findings of the second MEPAG Special Regions Science Analysis Group (SR-SAG2)." 2014
(see section 2.1, page 891)
Finding 2-1: Modern martian environments may contain molecular fuels and oxidants that are known to support metabolism and cell division of chemolithoautotrophic microbes on Earth
(see 3.6. Ionizing radiation at the surface, page 891 of [1]).
Finding 3-8: From MSL RAD measurements, ionizing radiation from GCRs at Mars is so low as to be negligible. Intermittent SPEs can increase the atmospheric ionization down to ground level and increase the total dose, but these events are sporadic and last at most a few (2–5) days. These facts are not used to distinguish Special Regions on Mars.
Over a 500-year time frame, the martian surface could be estimated to receive a cumulative ionizing radiation dose of less than 50 Gy, much lower than the LD90 (lethal dose where 90% of subjects would die) for even a radiation-sensitive bacterium such as E. coli (LD90 of ~200–400 Gy)
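The 500-year figure is easy to sanity-check. MSL RAD measured a surface dose rate of roughly 0.21 mGy/day (about 76 mGy/yr) from GCRs; that exact rate is an assumption here, recalled from the published RAD results rather than taken from the paper quoted above:

```python
# Assumed surface dose rate, ~0.21 mGy/day from MSL RAD measurements:
DOSE_RATE_GY_PER_YEAR = 0.076

years = 500
cumulative_dose_gy = DOSE_RATE_GY_PER_YEAR * years
# 38 Gy: under the paper's <50 Gy figure, and far below the
# ~200-400 Gy LD90 quoted for a radiation-sensitive bacterium like E. coli.
print(f"{cumulative_dose_gy:.0f} Gy over {years} years")
```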
(see 3.7. Polyextremophiles: combined effects of environmental stressors, of [1]).
Finding 3-9: The effects on microbial physiology of more than one simultaneous environmental challenge are poorly understood. Communities of organisms may be able to tolerate simultaneous multiple challenges more easily than individual challenges presented separately. What little is known about multiple resistance does not affect our current limits of microbial cell division or metabolism in response to extreme single parameters.
All citing:
Rummel, J.D., Beaty, D.W., Jones, M.A., Bakermans, C., Barlow, N.G., Boston, P.J., Chevrier, V.F., Clark, B.C., de Vera, J.P.P., Gough, R.V. and Hallsworth, J.E., 2014. A new analysis of Mars “special regions”: findings of the second MEPAG Special Regions Science Analysis Group (SR-SAG2).
The search for evidence of habitability, taphonomy (related to fossils), and organic compounds on Mars is now a primary NASA and ESA objective.
Since the article is about Life on Mars, it should surely mention the first of NASA's four science goals:
Goal I: determine if Mars ever supported life
- Objective A: determine if environments having high potential for prior habitability and preservation of biosignatures contain evidence of past life.
- Objective B: determine if environments with high potential for current habitability and expression of biosignatures contain evidence of extant life.
From: Hamilton, V.E., Rafkin, S., Withers, P., Ruff, S., Yingst, R.A., Whitley, R., Center, J.S., Beaty, D.W., Diniega, S., Hays, L. and Zurek, R., Mars Science Goals, Objectives, Investigations, and Priorities: 2015 Version.
There is an almost universal consensus among scholars that the Exodus story is best understood as myth
This is an oversimplification: it's possible that some of the population of Israelites did come from Egypt, possibly many thousands of them, and that the story has elements from the experiences of those who did.
"While there is a consensus among scholars that the Exodus did not take place in the manner described in the Bible, surprisingly most scholars agree that the narrative has a historical core, and that some of the highland settlers came, one way or another, from Egypt"
In this, I am not referring to the various traditions of Israel’s interaction with Egypt resulting from the era of Egyptian control in Canaan or from some relations with the Hyksos, which found their way into the Bible, but to the possibility that there was a group which fled Egypt, and brought this story of Exodus with it. Though the size of this group is debated, most of the above scholars agree that it was in the range of a few thousands, or even hundreds (some give it more weight, e.g., Hoffmeier 1997). Still, despite the limited size of this group, it appears that during the process of Israel’s ethnogenesis its story became part of the common history of all the Israelites. Most of those who accept some historical core for the story of the Exodus from Egypt date it to the thirteenth century, at the time of Ramses II, while others date it to the twelfth century, during the time of Ramses III.
Archaeology does not really contribute to the debate over the historicity or even historical background of the Exodus itself, but if there was indeed such a group, it contributed the Exodus story to that of all Israel. While I agree that it is most likely that there was such a group, I must stress that this is based on an overall understanding of the development of collective memory and of the authorship of the texts (and their editorial process). Archaeology, unfortunately, cannot directly contribute (yet?) to the study of this specific group of Israel’s ancestors.
So was this Exodus group also Merneptah’s Israel, or at least part of it? Clearly, if there was an Exodus in the thirteenth century this group of people could have been part of Merneptah’s Israel. However, despite the assumed significance of this group (the Exodus as a "national" epic, more below), it is likely that this group was incorporated at a later stage, only after Merneptah’s time, or at least that it was distinct from Merneptah’s Israel. After all, although this group clearly brought with it some of what became the history of Israel, it wasn’t Merneptah’s Israel, or any "Israel" for that matter. While many scholars agree that the Exodus group brought with it YHWH as a new deity, the name Israel has the component "El," rather than "Ya" or "Yahu." Thus, Israel could [have] preceded the arrival of the Exodus group, and it is likely that the latter was not Israel’s "core" group.
See also the Wikipedia article
There is no indication that the Israelites ever lived in Ancient Egypt, and the almost universal consensus among scholars is that the Exodus story is best understood as myth.
This is an oversimplification: it's possible that some of them did, and that the story has elements from the experiences of those who did.
See also the Wikipedia article
The curriculum of "The English High School" was clearly established to funnel a certain population of students into positions deemed "suitable" for their socioeconomic class. In "Social Class and the Hidden Curriculum of Work", Jean Anyon describes social class as a complex of social relations that one develops while growing up and acquiring certain bodies of knowledge, skills, abilities, and traits. These relations, she states, define our material ties to the world, and she raises the important concern of whether such relationships develop in children in schools within particular social class contexts. The establishment of the English High School, compared to schools like Boston Latin, shows that these relationships did develop in children depending on the social class context of their school.
Nosedive (Black Mirror)
Now this is a Black Mirror episode that is so close to our current reality.
Another prominent conclusion is that joint asset ownership is suboptimal if investments are in human capital.
Does that have to be the case?
In 2012 the Danish government adopted a plan to increase the share of electricity production from wind to 50% by 2020,[6] and to 84% by 2035
In 2019 Denmark committed to aim for 70% CO2 reductions by 2030 and zero emissions by 2040. See "New Danish government puts climate change centre stage".
Eliminating the fraction of demand that occurs in these spikes eliminates the cost of adding reserve generators, cuts wear and tear and extends the life of equipment, and allows users to cut their energy bills by telling low priority devices to use energy only when it is cheapest.
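The "tell low priority devices to use energy only when it is cheapest" idea can be sketched as a toy scheduler. This is my own illustrative sketch with hypothetical prices, not any particular utility's API; a real system would act on price signals from the grid:

```python
def schedule_deferrable_load(hourly_prices, hours_needed):
    """Pick the cheapest time slots of the day for a deferrable load
    (e.g. water heating or EV charging), returned in chronological order."""
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(ranked[:hours_needed])

# Hypothetical $/kWh prices for six four-hour blocks of a day:
prices = [0.10, 0.08, 0.25, 0.40, 0.30, 0.12]
print(schedule_deferrable_load(prices, 2))  # [0, 1] -- the two cheapest blocks
```

Shifting load out of the expensive blocks is exactly the spike-flattening the paragraph describes: the peak blocks (here 0.40 and 0.30) go unserved by low-priority devices, so less reserve capacity is needed.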
Carbon mineralization
The COSIA Carbon XPRIZE Challenge is a competition to convert CO2 into products with the highest net value, from either a coal or gas power plant. In April 2018, ten finalists were given $5 million each to demonstrate their technologies at large scale in the real world. The winner of the $7.5 million grand prize is to be announced in March 2020.
Five of the ten are focused on carbon mineralization technology. One of them is a team from Aberdeen that hopes to use CO2 capture to make the entire concrete industry carbon negative.
The Carbon Capture Machine precipitates CO2 into calcium and magnesium carbonates (much like stalactites in caves) as a carbon-negative replacement for the ground calcium carbonate (GCC) needed for concrete. If this works on a commercial scale it could decarbonize the concrete industry, which accounts for about 6% of the world's annual CO2 emissions. If they can make it commercially viable, GCC has a market value of $20 billion. [Carbon Upcycling](http://www.co2upcycling.com/) makes new CO2ncrete from CO2 and chemicals, competing directly with the $400 billion concrete industry - in places like California with a carbon tax and a mandate for low-carbon building materials.
[CarbonCure Technologies](https://www.carboncure.com/) injects CO2 into wet concrete while it is being mixed. They are already in commercial use with 100 installations across the US, retrofitting concrete plants for free and then charging a licensing fee. It may take up to 20 years for it to be used at scale for reinforced concrete, because that is the required durability testing period.
For more on this see Between a Rock and a Hard Place: Commercializing CO2 Through Mineralization
The original codename for Kubernetes within Google was Project 7, a reference to the Star Trek character Seven of Nine, a "friendlier" Borg.
Smil notes that as of 2018, coal, oil, and natural gas still supply 90% of the world's primary energy. Despite decades of growth in renewable energy, the world used more fossil fuels in 2018 than in 2000 in percentage terms.
Where the Tibetan highlanders live, the oxygen level is only about 60% of that at sea level
Not cited, could cite:
"At 4000 meters, every lungful of air only has 60% of the oxygen molecules that people at sea level have," said co-author Cynthia Beall of Case Western Reserve University.
Ethiopians and Tibetans thrive in thin air using similar physiology, but different genes
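The 60% figure is roughly what the barometric formula gives. A sketch (the 8,400 m scale height is an approximate textbook value for Earth's atmosphere, and the isothermal-atmosphere assumption is itself a simplification):

```python
import math

SCALE_HEIGHT_M = 8_400  # approximate atmospheric scale height (assumption)

def pressure_ratio(altitude_m):
    """Fraction of sea-level pressure, and hence of O2 molecules per breath,
    under a simple isothermal (exponential) atmosphere model."""
    return math.exp(-altitude_m / SCALE_HEIGHT_M)

print(f"{pressure_ratio(4000):.0%}")  # about 62%, close to the quoted 60%
```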
Crucially, a node loses its ability to function as soon as the node it is dependent on ceases to function, while it may not be so severely affected by losing a node it is connected to.
But isn't this comparison unfair? I mean, doesn't it actually depend on the number of connectivity and dependency links it has? In other words, it is true that the node ceases to function if it loses the only node it depends on. But wouldn't it be equally dramatic if it lost the only node it is connected to? It seems to me it is all a matter of how many nodes it is connected to or depends on.
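The asymmetry the article asserts can be made concrete with a toy model (my own sketch, not taken from the cited work): failure propagates along dependency links, while connectivity links are treated as merely degrading a node, so losing a neighbour does not kill it outright.

```python
def cascade(failed, dependencies):
    """Return all nodes that fail, starting from the set `failed`.

    `dependencies` maps node -> the node it depends on; a node fails as
    soon as its dependency fails. Ordinary connectivity links are
    deliberately ignored here, which is exactly the modelling choice
    the annotation above is questioning.
    """
    dead = set(failed)
    changed = True
    while changed:
        changed = False
        for node, dep in dependencies.items():
            if dep in dead and node not in dead:
                dead.add(node)
                changed = True
    return dead

# A depends on B, B depends on C: killing C takes down everything,
# but killing A (a leaf) harms only A.
deps = {"A": "B", "B": "C"}
print(sorted(cascade({"C"}, deps)))  # ['A', 'B', 'C']
print(sorted(cascade({"A"}, deps)))  # ['A']
```

Whether a single connectivity link should be equally fatal, as the annotation suggests, is then a question about the model's assumptions rather than its mechanics.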
Its footprints are distinctive
The extremely short limbs make it impossible for this frog to hop, although it can walk (Boulenger 1907). ...
This frog has extensive webbing on its feet, in contrast to other members of the genus Breviceps. Carruthers and Passmore (1978) conjecture that the foot webbing enables traction on loose sand, as the frog moves about on the surface of its sand dune habitat at night (based on its distinctive tracks).
https://amphibiaweb.org/cgi/amphib_query?where-genus=Breviceps&where-species=macrops
The small area of sand dunes often gets a lot of fog, which supplies moisture in an otherwise arid region.
Uncited sentence. Useful cite:
Voucher specimens held in museum collections were examined, and demonstrate the northernmost locality in Lüderitz, Namibia, with all 11 localities in white sandy habitat where coastal fog exceeds 100 days per year. The most southerly record from active searches was just south of Kleinzee in South Africa. A new threat to this species is housing development in prime coastal sand dunes.
Channing, A. and Wahlberg, K., 2011. Distribution and conservation status of the desert rain frog Breviceps macrops. African journal of herpetology, 60(2), pp.101-112.
Contemporary analysis of historical data from the last 11 millennia[12] matches with the indigenous Saptarishi Calendar.[13] The length of the transitional periods between each Yuga is unclear, and can only be estimated based on historical data of past cataclysmic events. Using a 300 year (10% of the length of a particular yuga) period for transitions, Kali Yuga has either ended recently in the past 100 to 200 years, or is to end soon sometime in the next 100 years.
Most Hindus would say that the Kali Yuga ends thousands of years in our future. An earlier version of this page gave the conventional view:
The Kali Yuga began approximately five thousand years ago, and it has a duration of 432,000 years, leaving us with 427,000 years until the end of the present age.
https://en.wikipedia.org/w/index.php?title=Kali_Yuga&oldid=889598580#10,000_year_%22Golden_Age%22
This edit is based on an article on the Graham Hancock website - would not normally be regarded as a reliable source in Wikipedia.
This is about him: https://en.wikipedia.org/wiki/Graham_Hancock
This is the article they give as a source. https://grahamhancock.com/dmisrab6/
Microsoft Word key combination
A strange quirk: in Word, the em dash may be typeset at different vertical positions (centered or slightly low) depending on the character that precedes it.
In Chile, the national telecom regulator ruled that this practice violated net neutrality laws and had to end by June 1, 2014.[3][4]
But in December 2018, "Claro" offered data plans which only allowed access to Facebook, Instagram, WhatsApp and Snapchat.
I know that Wikipedia can provide a quick overview of a topic; however, as a researcher/librarian, I would never suggest that students add Wikipedia to a list of additional resources. Perhaps some of the referenced resources at the end of the entry might be a better choice.
In general, I would sort the resources related to the commons into "historical/physical" and "contemporary/philosophical". It was confusing to read an article such as this, as well as some of the others that follow that address the physical environment, cow enclosures, land ownership, etc., mixed in with our discussion of the commons as a philosophical concept related to a more contemporary information society. Of course, I understood the connection from history to present day, and it is an interesting and important one; however, I think a little ordering and some subheadings would lead the reader down the right path more seamlessly.
Jevons received public recognition for his work on The Coal Question (1865), in which he called attention to the gradual exhaustion of Britain's coal supplies and also put forth the view that increases in energy production efficiency lead to more, not less, consumption.[5]:7f, 161f This view is known today as the Jevons paradox, named after him. Due to this particular work, Jevons is regarded today as the first economist of some standing to develop an 'ecological' perspective on the economy.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
- Balance exploration and exploitation: the choice of examples to label is seen as a dilemma between exploration and exploitation over the data space representation. This strategy manages the compromise by modelling the active learning problem as a contextual bandit problem. For example, Bouneffouf et al.[9] propose a sequential algorithm named Active Thompson Sampling (ATS) which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for that point's label.
- Expected model change: label those points that would most change the current model.
- Expected error reduction: label those points that would most reduce the model's generalization error.
- Exponentiated gradient exploration for active learning:[10] a sequential algorithm named exponentiated gradient (EG)-active that can improve any active learning algorithm by an optimal random exploration.
- Membership query synthesis: the learner generates its own instance from an underlying natural distribution. For example, if the dataset consists of pictures of humans and animals, the learner could send a clipped image of a leg to the teacher and query whether this appendage belongs to an animal or a human. This is particularly useful if the dataset is small.[11]
- Pool-based sampling: instances are drawn from the entire data pool and assigned an informativeness score, a measurement of how well the learner "understands" the data. The system then selects the most informative instances and queries the teacher for their labels.
- Stream-based selective sampling: each unlabeled data point is examined one at a time, with the machine evaluating the informativeness of each item against its query parameters. The learner decides for itself whether to assign a label or query the teacher for each datapoint.
- Uncertainty sampling: label those points for which the current model is least certain as to what the correct output should be.
- Query by committee: a variety of models are trained on the current labeled data and vote on the output for unlabeled data; label those points for which the "committee" disagrees the most.
- Querying from diverse subspaces or partitions:[12] when the underlying model is a forest of trees, the leaf nodes might represent (overlapping) partitions of the original feature space. This offers the possibility of selecting instances from non-overlapping or minimally overlapping partitions for labeling.
- Variance reduction: label those points that would minimize output variance, which is one of the components of error.
- Conformal predictors: this method predicts that a new data point will have a label similar to old data points in some specified way, and the degree of similarity among the old examples is used to estimate the confidence in the prediction.[13]
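As a minimal illustration of uncertainty sampling for a binary classifier: the strategy queries the pool example whose predicted probability is closest to 0.5. This is a pure-Python sketch in which the "model" is just a list of hypothetical predicted probabilities, not a real learner:

```python
def most_uncertain(pool_probs):
    """Index of the unlabeled example whose predicted P(class=1)
    is closest to 0.5, i.e. where the model is least certain."""
    return min(range(len(pool_probs)), key=lambda i: abs(pool_probs[i] - 0.5))

# Hypothetical model outputs for five unlabeled pool examples:
probs = [0.95, 0.10, 0.52, 0.80, 0.30]
print(most_uncertain(probs))  # 2 -- label this one and retrain
```

In a real loop one would retrain the model after each queried label and recompute the pool probabilities before selecting the next point.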
approximately 15,000 light years from Earth
It is not known how far away it is. If it is in the Outer arm it is around 16,000 light years away (5 kpc) and if in the Perseus arm it is half that distance, 8,000 light years away. There is a supernova remnant that may be associated with it at a distance of around 800 parsecs or 2,600 light years away. The McGill survey estimates 2 kpc or about 6,500 light years. In short, there is considerable uncertainty about its distance.
The line of sight intercepts the Perseus and Outer arms of the Galaxy, at distances of ∼2.5 and ∼5 kpc, respectively. In this paper, we assume the distance d = 5 kpc. In addition, there exists a supernova remnant (SNR) G160.9+2.6, ∼80′ north of SGR 0501+4516 (Gaensler & Chatterjee 2008; Göğüş et al. 2010). The distance and age of the SNR were estimated as 800±400 pc and 4000–7000 years (Leahy & Tian 2007). Göğüş et al. (2010) proposed that SGR 0501+4516 could be associated with G160.9+2.6.
Mong, Y.L. and Ng, C.Y., 2018. X-Ray Observations of Magnetar SGR 0501+4516 from Outburst to Quiescence. The Astrophysical Journal, 852(2), p.86.
For the McGill distance see table 7 of:
Olausen, S.A. and Kaspi, V.M., 2014. The McGill magnetar catalog. The Astrophysical Journal Supplement Series, 212(1), p.6.
As of March 2016[update], 23 magnetars are known
As of June 2019, 29 are known.
possibly until 1550 BC
There doesn't seem to be any way to get 1550 BC from the cite. It is more like 2200 BC; the date works out to about 2240 BC ± 40 years:
We report the youngest radiocarbon determination so far for an identified species of Antillean sloth, 4190 ± 40 yr BP
The "yr BP" convention counts back from AD 1950, so 1950 − 4190 = −2240, i.e. about 2240 BC.
Another sloth bone, the youngest mentioned, is still not 1550 BC; it works out to about 1765 BC ± 50 yr:
Although Woods [1989] reported a "whole bone" date of 3715 ± 50 yr bp for unspecified sloth remains recovered at Trou Wòch Sa Wo in southern Haiti, five different sloth species have been recovered from this cave [MacPhee et al., 2000] and there is thus no way of relating this date to a single taxon as we have done here. In any case, the accuracy of this age estimate should be confirmed, minimally by AMS dating of individual, systematically identified elements.
1950 − 3715 = −1765, so about 1765 BC, and there is no obvious way to get 1550 BC from this.
Cite is
MacPhee, R.D., Iturralde-Vinent, M.A. and Vázquez, O.J., 2007. Prehistoric sloth extinctions in Cuba: Implications of a new “last” appearance date. Caribbean Journal of Science, 43(1), pp.94-99.
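One wrinkle worth noting: radiocarbon "yr BP" ages are by convention counted back from AD 1950 ("before present" means before 1950), not from the publication year, so the conversion is (ignoring calibration to calendar years, which this sketch does not attempt):

```python
def bp_to_calendar_year(years_bp, datum=1950):
    """Convert an uncalibrated radiocarbon age in yr BP to a calendar year.

    "Before present" conventionally means before AD 1950; negative
    results are years BC.
    """
    return datum - years_bp

print(bp_to_calendar_year(4190))  # -2240, i.e. ~2240 BC
print(bp_to_calendar_year(3715))  # -1765, i.e. ~1765 BC
```

Either way, neither determination gets anywhere near 1550 BC.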
Throughout the past two decades, he has been conducting research in the fields of the psychology of learning and hybrid neural networks (in particular, applying these models to research on human skill acquisition). Specifically, he has worked on the integrated effect of "top-down" and "bottom-up" learning in human skill acquisition,[1][2] in a variety of task domains, for example, navigation tasks,[3] reasoning tasks, and implicit learning tasks.[4] This inclusion of bottom-up learning processes has been revolutionary in cognitive psychology, because most previous models of learning had focused exclusively on top-down learning (whereas human learning clearly happens in both directions). This research culminated in the development of an integrated cognitive architecture that can be used to provide a qualitative and quantitative explanation of empirical psychological learning data. The model, CLARION, is a hybrid neural network that can be used to simulate problem solving and social interactions as well. More importantly, CLARION was the first psychological model that proposed an explanation for the "bottom-up learning" mechanisms present in human skill acquisition: his numerous papers on the subject have brought attention to this neglected area in cognitive psychology.
Most common web browsers can retrieve files hosted on FTP servers,
This seems to have been first implemented by Tim Berners-Lee, when he was trying to popularize his World Wide Web and Web browser (according to himself, in his "Weaving the Web" 1999 book).
2017
The U.S. Energy Information Administration says that it rose by 2.8% in 2018 but projects that it will decrease in 2019 and 2020. The increase in 2018 was due to a 10% increase in emissions from natural gas, and preliminary data puts it 0.4% below the record set in 2007. The high energy consumption in 2018 was largely due to air conditioning demand in the warm weather; the winter months were also colder. 2019 and 2020 are expected to be milder, which is what leads to the reduced forecast for them. Estimates of industrial production growth and GDP growth also factor into the prediction - a slowdown in GDP but faster industrial growth in 2019, and a slowdown in industrial growth in 2020, as industrial production tends to be more energy intensive than the rest of the economy.
U.S. energy-related CO2 emissions increased in 2018 but will likely fall in 2019 and 2020
The seasonal low of 1,078.96 feet (328.87 m) in 2017 is close to that experienced in 2014, safely above the drought trigger for now.[27] However, that level is still 36 feet (11 m) below the seasonal low experienced in 2012 and the lake is projected to begin falling again in 2018.[28]
Seasonal low of 1,076 feet in 2018. However, there was a big snowmelt in 2019 feeding Lake Powell. As of June 17 it is around 5 feet above its 2017 level. lake level chart
Colorado had 134% of its normal snowfall in winter 2018, Utah 138%, Wyoming 116%.
As a result, Lake Powell is expected to rise 50 feet in 2019, a gain of 12 million acre-feet, compared with only 4.6 million last year. Reclamation expects to release 9 million acre-feet from Powell to Mead for the fifth consecutive year.
Elephant Butte, a reservoir on the Rio Grande in New Mexico, will also be replenished, from 10% to around 30% of capacity.
It is too early to say if this is the end of the decade-long drought phase. However, it is enough that Arizona, the state with the lowest-priority rights to the water from Lake Mead, is no longer expected to have to cut its share in 2020. That shortage may now be put off until after 2021.
Snowmelt fills Colorado River and other waterways in U.S. Southwest, easing drought fears, Denver Post, June 13, 2019
Drought and water usage issues
Out of date - doesn't mention the new drought contingency plan. All seven states signed a drought contingency plan on May 20, 2019, which runs through 2026 and involves voluntary reductions in water taken from Lake Mead if the levels get low.
One sticking point was the Salton Sea in California, which formed as a result of a failed canal project in the early twentieth century and is now an important stop for migrating birds. The Imperial Irrigation District wanted extra funding of $200 million to help preserve this sea before it would agree to a reduction, but the Metropolitan Water District was able to pledge most of California's voluntary water cuts, which saved the plan.
They next need to work on a longer-term contingency plan for the next 50 years.
Interior and states sign historic drought agreements to protect Colorado River - press release by US government under "Reclamation, Managing water in the West"
Under the drought plan, states voluntarily would give up water to keep Lake Mead on the Arizona-Nevada border and Lake Powell upstream on the Arizona-Utah border from crashing. Mexico also has agreed to cuts
The drought contingency plan takes the states through 2026, when existing guidelines expire. The states already are preparing for negotiations that will begin next year for new guidelines.
The Imperial Irrigation District was written out of California's plan when another powerful water agency, the Metropolitan Water District, pledged to contribute most of the state's voluntary water cuts.
Imperial had said it would not commit to the drought plan unless it secured $200 million in federal funding to help restore a massive, briny lake southeast of Los Angeles known as the Salton Sea.
Felicia Fonseca, US official declares drought plan done for Colorado River, Phys.org, March 20, 2019
For more background
Despite signs of interstate cooperation, the decline of Lake Mead isn’t near being solved, Michael Hiltzik Feb 08, 2019, Los Angeles Times.
Setbacks
Doesn't mention the first three rocket failures, though they are covered in Falcon 1
"And the reason that I ended up being the chief engineer or chief designer, was not because I want to, it's because I couldn't hire anyone. Nobody good would join. So I ended up being that by default. And I messed up the first three launches. The first three launches failed. Fortunately the fourth launch which was – that was the last money that we had for Falcon 1 – the fourth launch worked, or that would have been it for SpaceX."
Elon Musk (28 September 2017), Making Life Multiplanetary | 2017 International Astronautical Congress
if a small region of the universe by chance reached a more stable vacuum, this 'bubble' would spread.
[this section needs cites, I have them, will add when I have time] Should explain that there are two ways a false vacuum can collapse. It can happen through the field being energetically pushed over the barrier - this was only possible in the early universe, and the conditions for it do not exist today. Or it can happen through quantum tunneling. An example to explain quantum tunneling: in principle a ping pong ball inside a vault could spontaneously find itself outside of it just because of quantum position uncertainty, without having to move through the walls. Given infinite time and enough vaults and ping pong balls this eventually has to happen, but it is not a realistic possibility on normal timescales.
On the molecular scale, quantum tunneling events do happen, and indeed may help explain how we can smell, and how birds are able to sense the Earth's weak magnetic field well enough to navigate.
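The huge gulf between these cases comes from the exponent in the standard WKB tunneling estimate (a textbook formula, given here only to illustrate the scaling, not anything specific to vacuum decay):

\[ P \sim \exp\left( -\frac{2}{\hbar} \int_{x_1}^{x_2} \sqrt{2m\left(V(x)-E\right)}\, dx \right) \]

Here \(m\) is the mass of the object, \(V(x)-E\) is the height of the barrier above its energy, and the integral runs over the classically forbidden region between the turning points \(x_1\) and \(x_2\). Because the mass and the barrier width sit inside the exponent, the probability for a ping pong ball in a vault carries an absurdly large negative exponent, while for an electron crossing a molecular-scale barrier it can be quite ordinary.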
The quantum tunneling of false vacuum collapse is more like the ping pong ball analogy: it is extremely unlikely. The latest estimates, based on the properties of the Higgs boson and the top quark, are that the odds are googols to one against it having happened at any time from the Big Bang through to the present.
However, in the extreme conditions of the early universe it should have happened - back when the entire observable universe was compressed into a space far smaller than a proton (the nucleus of a hydrogen atom) and at extreme temperatures - through the first method, of being pushed over the barrier rather than tunneling through it.
John Ellis has suggested that this likely means we need new physics to explain why the universe survived that early stage. The alternative he mentions is that we are surrounded by true vacuum in all directions and happen to be one of the few exceedingly unlikely patches of false vacuum in an infinite universe. Given infinite space-time, even the most improbable things would happen somewhere - but he doesn't think this is a likely hypothesis (see 47:40 into this video).
New physics that could explain this includes
Another suggestion that's been made is that when the Higgs field collapses it could create a new universe with its own space and time, rather than expanding into ours.
There is a possibility that the Higgs field was stabilized in the early universe by an interaction with gravity, which would explain why the universe has survived to date cite. That still leaves the question of why it is so close to the boundary between stable and unstable, when it could be completely stable.
Existential threat
This section is way out of date; most of it seems to have been written before the discovery of the Higgs boson in 2012 or shortly after it. Enough is now known for a rigorous calculation (assuming, of course, that there is no new physics to be found).
The authors of the peer-reviewed paper update their abstract on arxiv.org from time to time. The original published value was a future lifetime of the universe of \(10^{139} \) years, with 95% confidence of more than \(10^{58} \) years.
As of version 4 of their paper, revised 2nd May 2018, it is now \(10^{161} \) years, with 95% confidence of more than \(10^{65} \) years.
The odds that we have already encountered a vacuum collapse, or that one is on its way (totalled over the lifetime of the universe to date), used to be between \(10^{107} \) to 1 against and \(10^{718} \) to 1 against.
They now say it is between \(10^{367} \) to 1 against and \(10^{1124} \) to 1 against.
Andreassen, A., Frost, W. and Schwartz, M.D., 2018. Scale-invariant instantons and the complete lifetime of the standard model. Physical Review D, 97(5), p.056006.
Joseph Lykken has said that study of the exact properties of the Higgs boson could shed light on the possibility of vacuum collapse
This has now been done in a 2018 paper published in Physical Review D cite
They argue that due to observer selection effects, we might underestimate the chances of being destroyed by vacuum decay because any information about this event would reach us only at the instant when we too were destroyed.
Their argument has been completely misunderstood. They are discussing any natural catastrophic event that could destroy Earth. This is a paper from 2005, well before the discovery of the Higgs.
They mention three possibilities: that a cosmic radiation collision event triggers collapse of the Earth into a black hole, conversion of the Earth into strange matter, or false vacuum collapse of the entire universe.
The observer selection effect here is simply that we exist; it applies to all three scenarios and is not specific to false vacuum collapse. Indeed, it would apply to any other scenario that could destroy Earth or change it in such a way as to make it impossible for humans to evolve. cite
Given that life on Earth has survived for nearly 4 billion years (4 Gyr), it might be assumed that natural catastrophic events are extremely rare. Unfortunately, this argument is flawed because it fails to take into account an observation-selection effect, whereby observers are precluded from noting anything other than that their own species has survived up to the point when the observation is made.
If it takes at least 4.6 Gyr for intelligent observers to arise, then the mere observation that Earth has survived for this duration cannot even give us grounds for rejecting with 99% confidence the hypothesis that the average cosmic neighbourhood is typically sterilized, say, every 1,000 years. The observation-selection effect guarantees that we would find ourselves in a lucky situation, no matter how frequent the sterilization events
The researchers estimated from their observations that there are nearly two Jupiter-mass rogue planets for every star in the Milky Way
A later 2017 study cast doubt on this result. It used a larger population of microlensing events and found at most one Jupiter-mass rogue planet for every four stars in the Milky Way.
Mróz, P., Udalski, A., Skowron, J., Poleski, R., Kozłowski, S., Szymański, M.K., Soszyński, I., Wyrzykowski, Ł., Pietrukowicz, P., Ulaczyk, K. and Skowron, D., 2017. No large population of unbound or wide-orbit Jupiter-mass planets. Nature, 548(7666), p.183.
It also signified the disappearance of an entire mammal family of river dolphins (Lipotidae)
Potentially confusing - may give the impression that it means the extinction of all river dolphins worldwide. There are two other families of river dolphins.
Clearer as
"It also signified the disappearance of one entire river dolphin mammal family (Lipotidae), leaving only two extant families of river dolphins"
expected to be the first UHVDC cable in the United States
Now on hold. [https://www.windpowermonthly.com/article/1460152/hurdles-kill-off-uss-first-hvdc-line-20-years]
When the temperature is below the freezing point of water, the dew point is called the frost point, as frost is formed rather than dew.
Though popular accounts of meteorology sometimes suggest this, the dew point and the frost point differ. The dew point is the temperature at which the air reaches 100% relative humidity with respect to a liquid water surface; the frost point is the (higher) temperature at which it reaches 100% relative humidity with respect to an ice surface. The distinction normally doesn't matter much, but it is important for processes in clouds: growth of ice particles is favoured over water droplets when both are possible, because the frost point is at a higher temperature than the dew point.
I am summarizing what the meteorologist Jeff Haby explains here
"The dew point is the temperature at which the air is saturated with respect to water vapor over a liquid surface. When the temperature is equal to the dewpoint then the relative humidity is 100%. The common ways for the relative humidity to be 100% are to 1) cool the air to the dewpoint, 2) evaporate moisture into the air until the air is saturated, 3) lift the air until it adiabatically cools to the dew point. "The frost point is the temperature at which the air is saturated with respect to water vapor over an ice surface. It is more difficult for water molecules to escape a frozen surface as compared to a liquid surface since ice has stronger bonding between neighboring water molecules. Because of this, the frost point is greater in temperature than the dew point. This fact is important to precipitation growth in clouds. Since the vapor pressure is less over an ice surface as compared to a supercooled liquid surface at the same temperature, when the relative humidity is 100% with respect to water vapor the relative humidity over the ice surface will be greater than 100%. Thus, precipitation growth is favored on the ice particles."
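A quick numerical sketch of the two thresholds, using Magnus-type saturation vapor pressure fits. The coefficients are standard approximate values for illustration, not taken from Haby's article, and the function names are mine:

```python
import math

# Magnus-type fits for saturation vapor pressure (hPa); coefficients are
# common Sonntag-style approximations, accurate to a few percent near 0 C.
def e_sat_water(t_c):
    """Saturation vapor pressure over a liquid water surface (t_c in deg C)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def e_sat_ice(t_c):
    """Saturation vapor pressure over an ice surface (t_c in deg C)."""
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

def dew_point(e_hpa):
    """Temperature at which air with vapor pressure e_hpa saturates over water."""
    x = math.log(e_hpa / 6.112)
    return 243.12 * x / (17.62 - x)

def frost_point(e_hpa):
    """Temperature at which the same air saturates over ice."""
    x = math.log(e_hpa / 6.112)
    return 272.62 * x / (22.46 - x)

e = 2.0  # hPa of water vapor, typical of cold winter air
print(f"dew point:   {dew_point(e):.2f} C")    # about -14.5 C
print(f"frost point: {frost_point(e):.2f} C")  # about -12.9 C, i.e. higher

# At any sub-freezing temperature the saturation pressure over ice is lower
# than over supercooled water - which is why, in a mixed-phase cloud, ice
# crystals grow at the expense of supercooled droplets.
assert e_sat_ice(-10.0) < e_sat_water(-10.0)
```

For the same moisture content the frost point comes out higher than the dew point, exactly as Haby describes, because less vapor pressure is needed to saturate air over ice than over liquid water.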
Omega Point
This article is very poor. Teilhard de Chardin's theory is an attempt by a devout Catholic scientist to reconcile religious ideas of the love of God and of teleology - that our life and world have a purpose - with scientific understanding. To leave that aspect out is to miss the entire point of his work. The theory is very influential in Christian theology generally, and especially in Catholic theology, not just in the twentieth century but through to the present.
This article attempts to treat it as a purely scientific theory stripping away all religious elements. It cites mainly critics who ridicule the idea that religion is relevant to science and the idea that our universe may have any teleology or purpose. There would be the same problems writing an article about Christian ideas of the Resurrection that ignored the theological context. This approach is not used in other articles on Christian theology in Wikipedia.
Rather than annotate particular points in this article I think it is best to just direct the reader to the entry on him in the French Wikipedia, which is much better, written as theology, as well as some summaries of his work by other authors.
The Omega Point is a dynamic concept created by Pierre Teilhard de Chardin, who named it after the last letter of the Greek alphabet, omega.
For Teilhard, the Omega Point is the ultimate point of the development of complexity and consciousness towards which the universe is evolving. According to his theory, set out in The Future of Man and The Human Phenomenon, the universe is constantly evolving towards ever higher degrees of complexity and consciousness, the Omega Point being the culmination but also the cause of this evolution. In other words, the Omega Point exists in a supremely complex and supremely conscious way, transcending the universe in the making.
For Teilhard the Omega Point evokes the Christian Logos, that is, Christ, in that it draws all things to him and is, according to the Nicene Creed, "God from God, Light from Light, true God from true God", with the indication "through him all things were made".
Subsequently this concept was taken up by other authors, such as John G. Bennett (1965) or Frank Tipler (1994).
The Omega Point has five attributes, which Teilhard details in The Human Phenomenon.
The five attributes
In The Human Phenomenon (Le Phénomène humain, 1955), Teilhard describes the five attributes of the Omega Point:
It has always existed - only in this way can the evolution of the universe towards higher levels of consciousness be explained.
It must be personal - a person, and not an abstract idea. The growing complexity of matter has not only led to higher forms of consciousness but also to greater personalization, of which humans are the highest form: they are fully individualized, free centres of activity. It is in this sense that man is said to be made in the image of God, who is the highest form of personality. Teilhard de Chardin expressly maintains that at the Omega Point, when the universe becomes one through unification, we will see not the elimination of persons but their super-personalization. Personality will be infinitely richer. Indeed, the Omega Point unites creation, and as it unites it, the universe becomes more complex and its consciousness increases. Just as God draws the universe to evolve towards ever greater complexity and consciousness and, finally with man, personality, so the Omega Point, attracting the universe to itself, is a person.
It must be transcendent - the Omega Point is not the result of complexity and consciousness; it exists before the evolution of the universe, because the Omega Point is the cause of the universe's evolution towards greater complexity, consciousness and personality. This essentially means that the Omega Point lies outside the framework in which the universe evolves, for it is through its attraction that the universe tends towards it.
It must be autonomous - free from the limitations of space and time.
It must be irreversible - that is, it must necessarily be reached.
This is how the idea is described by Linda Sargent Wood as summarized by Oxford Scholarship Online
Merging Catholicism and science, Teilhard asserted that evolution was God's ongoing creative act, that matter and spirit were one, and that all was converging into one complete, harmonious whole. Though controversial, his organismic ideas offered an alternative to reductionistic, dualistic, mechanistic evolutionary views. They satisfied many who were looking for ways to reconnect with nature and one another; who wanted to revitalize and make personal the spiritual part of life; and who hoped to tame, humanize, and spiritualize science. In the 1960s many Americans found his book The Phenomenon of Man and other mystical writings appealing. He attracted Catholics seeking to reconcile religion and evolution, and he proved to be one of the most inspirational voices for the human potential movement and New Age religious worshipers. Outlining the contours of Teilhard's holistic synthesis in this era of high scientific achievement helps explain how some Americans maintained a strong religious allegiance.
Wood, L.S., 2012. A More Perfect Union: Holistic Worldviews and the Transformation of American Culture after World War II. Oxford University Press.
“Only where someone values love more highly than life, that is, only where someone is ready to put life second to love, for the sake of love, can love be stronger and more than death. If it is to be more than death, it must first be more than mere life. But if it could be this, not just in intention but in reality, then that would mean at the same time that the power of love had risen superior to the power of the merely biological and taken it into its service. To use Teilhard de Chardin’s terminology, where that took place, the decisive complexity or “complexification” would have occurred; bios, too, would be encompassed by and incorporated in the power of love. It would cross the boundary—death—and create unity where death divides. If the power of love for another were so strong somewhere that it could keep alive not just his memory, the shadow of his “I”, but that person himself, then a new stage in life would have been reached. This would mean that the realm of biological evolutions and mutations had been left behind and the leap made to a quite different plane, on which love was no longer subject to bios but made use of it. Such a final stage of “mutation” and “evolution” would itself no longer be a biological stage; it would signify the end of the sovereignty of bios, which is at the same time the sovereignty of death; it would open up the realm that the Greek Bible calls zoe, that is, definitive life, which has left behind the rule of death. The last stage of evolution needed by the world to reach its goal would then no longer be achieved within the realm of biology but by the spirit, by freedom, by love. It would no longer be evolution but decision and gift in one.”
Orthodoxy of Teilhard de Chardin: (Part V) (Resurrection, Evolution and the Omega Point)
His views have also been seen as relevant to modern transhumanists, who want to apply technology to overcome our human limitations. Some of them think that his ideas foreshadowed this.
A movement known as transhumanism wants to apply technology to overcome human limitations. Followers believe that computers and humans may combine to form a “super brain,” or that computers may eventually exceed human brain capacity. Some transhumanists refer to that future time as the “Singularity.” In his 2008 article “Teilhard de Chardin and Transhumanism,” Eric Steinhart wrote that:
Teilhard de Chardin was among the first to give serious consideration to the future of human evolution.... [He] is almost certainly the first to describe the acceleration of technological progress to a singularity in which human intelligence will become super intelligence.
Teilhard challenged theologians to view their ideas in the perspective of evolution and challenged scientists to examine the ethical and spiritual implications of their knowledge. He fully affirmed cosmic and biological evolution and saw them as part of an even more encompassing spiritual evolution toward the goal of ultrahumans and complete divinity. This hypothesis still resonates for some as a way to place scientific fact within an overarching spiritual view of the cosmos, though most scientists today reject the notion that the Universe is moving toward some clear goal.
Pierre Teilhard de Chardin, Paleontologist, Mystic and Jesuit Priest - Khan Academy
By Tom Butler-Bowdon
In a nutshell: By appreciating and expressing your uniqueness, you literally enable the evolution of the world.
For Teilhard humankind was not the centre of the world but the ‘axis and leading shoot of evolution’. It is not that we will lift ourselves above nature but, in our intellectual and spiritual quests, dramatically raise its complexity and intelligence. The more complex and intelligent we become, the less of a hold the physical universe has on us, he believed.
Just as space, the stars and galaxies expand ever outwards, the universe is just as naturally undergoing ‘involution’ from the simple to the increasingly complex; the human psyche also develops according to this law. ‘Hominisation’ is what Teilhard called the process of humanity becoming more human, or the fulfilment of its potential. ... Teilhard said as humanity became more self-reflective, able to appreciate its place in space and time, its evolution would start to move by great leaps instead of a slow climb. In place of the glacial pace of physical natural selection, there would be a supercharged refinement of ideas that would eventually free us of physicality altogether. We would move irresistibly toward a new type of existence, at which all potential would be reached. Teilhard called this the ‘omega point’.
Book review: The Phenomenon of Man by Pierre Teilhard de Chardin
Progress
Needs a Criticism section.
The main criticism is that it is not helping to understand how the human brain itself works. From the article by Frégnac et al. in Nature in 2014:
Contrary to public assumptions that the HBP would generate knowledge about how the brain works, the project is turning into an expensive database-management project with a hunt for new computing architectures. [1] The problem is that it is not founded on knowledge of how neurons are connected in the real brain, or on experimental data. Most importantly, there are no formulated biological hypotheses for these simulations to test. Instead it is an attempt to simulate, using many hardware neurons, something the researchers hope will resemble the way a human brain works, without any experimental data to guide the experimentation. [1]
The revised plan advances a concept in which in silico experimentation becomes a “foundational methodology for understanding the brain” [1]
Shortly after the project started, many European neuroscientists signed an open letter raising the following issues [3]:
a) That the project cannot provide understanding of the brain without corrective loops between hypotheses and experimental facts, and that it is not guided by any precise hypotheses to be tested and checked.
b) That the model is overoptimistic and wrong. The project should either abandon neurological research and focus only on technological advances, or be split into two projects covering the two areas.
c) That it is too expensive, siphoning away important funds from other fundamental research.
d) That it is excessively big, with unclear coordination mechanisms.
This collective wrote “Open message to the European Commission concerning the Human Brain Project” to the European Commission on July 7, 2014.
A mediation report was published in 2015. This upheld most of the criticisms.
The mediation committee summarized the disagreement as [2]:
The goal of reconstructing the mouse and human brain in silico and the associated comprehensive bottom-up approach is viewed by one part of the scientific community as being impossible in principle or at least infeasible within the next ten years, while another part sees value not only in making such simulation tools available but also in their development, in organizing data, tools and experts
They recommended that the goals should be less ambitious, saying:
Issue: Public announcements by the HBP leadership and by the EC overstated objectives and the possible achievements of the HBP. Unrealistic expectations were raised, such as tools for predictive simulation of the human brain to enable understanding of brain function or to support diagnosis and therapy of neurodegenerative diseases within the course of the project. This resulted in a loss of scientific credibility of the HBP.
Recommendation: The HBP and the EC should clearly and openly communicate the project’s sharpened mission and objectives. Furthermore, the HBP should systematically create and use opportunities to enter a constructive scientific discourse with the science community, with science policy makers and with the interested public. Ultimately the reputation of the HBP in the science community will rest on the publication of convincing scientific results and the generation of widely used IT platforms. ...
They recommend that it be split into three or four subprojects, with PIs who have a strong neuroscience background:
Issue: The absence of systems and cognitive neuroscience subverts the multi-scale and multi-perspective ambitions of the HBP to integrate and validate various approaches to a unifying modelling and simulation platform. It also impairs the validation of other IT platforms developed in the HBP regarding the value added to future neuroscience research and clinical practice
Recommendation: The SPs (and the constituent WPs) suggested in the FPA should be consolidated and integrated with a set of new cross-cutting horizontal tasks to form a matrix-type project structure. These cross-cutting activities should be organized in at least 3-4 WPs to address challenging problems of systems and cognitive neuroscience which are led by PIs with a strong scientific background in the respective areas. These WPs should be aggregated to a new cross-cutting subproject “Cognitive and Systems Neuroscience: CSN”.
[1] Frégnac, Y. and Laurent, G., 2014. Neuroscience: Where is the brain in the Human Brain Project?. Nature News, 513(7516), p.27.
[2] Human Brain Project Mediation Report, Mediator Prof. Dr. Wolfgang Marquardt, March 2015
[3] This is a paraphrase of the relevant section of: Why do we foster and grant wrong innovative scientific methods? The neuroscientific challenge, Jordi Vallverdú, Autonomous University of Barcelona
Blue Brain Project
Page is five years out of date and doesn't include criticism section. French Wikipedia summarizes the criticisms like this
This project, along with the resulting Human Brain Project, is heavily criticized for a variety of reasons, including the scientific strategy adopted and the high cost involved. Launched in 2013, the HBP faced a number of criticisms, including the absence of convincing results from Blue Brain [5]. In October 2015, the Blue Brain Project team published in Cell an article describing a simulation of part of a rat brain, covering 30,000 neurons and 40 million synapses - which did not stop criticism of the overall unrealism of the HBP [5]. Blue Brain (French Wikipedia)
[5] Kai Kupferschmidt, "Virtual rat brain fails to impress its critics", Science, October 16, 2015, Vol. 350, no. 6258, pp. 263-264; DOI: 10.1126/science.350.6258.263
See also Theil, S., 2015. Why the Human Brain Project Went Wrong—and How to Fix It. Scientific American, 313(4), pp.36-42.
Frégnac, Y. and Laurent, G., 2014. Neuroscience: Where is the brain in the Human Brain Project?. Nature News, 513(7516), p.27.
by reverse-engineering mammalian brain circuitry
One of the main criticisms of this project is that neuroscientists do not have a detailed "map of connections between neurons within and across brain areas that could guide simulations"
From the beginning, neuroscientists pointed out that large-scale simulations make little sense unless constrained by data, and used to test precise hypotheses. In fact, we lack, among other resources, a detailed 'connectome', a map of connections between neurons within and across brain areas that could guide simulations. There is no unified format for building functional databases or for annotating data sets that encompass data collected under varying conditions. Most importantly, there are no formulated biological hypotheses for these simulations to test
Frégnac, Y. and Laurent, G., 2014. Neuroscience: Where is the brain in the Human Brain Project?. Nature News, 513(7516), p.27.
commonly recognized conditions such as delusional infestation
Note for comparison, Lede for French Wikipedia: Morgellons
"Morgellons" or "Morgellons disease" is a controversial medical condition reported in the United States in 2002. It is characterized by dermatological lesions in which the patient perceives inert or organic fibers, embedded in or protruding from the skin.
It is most often considered a form of delusional parasitosis (Ekbom syndrome), a factitious disorder, or a collective syndrome of psychogenic origin. Some authors, however, argue that Morgellons is a true somatic disease of infectious origin, linked in particular to Lyme disease.
some people have linked Morgellons "to another illness viewed skeptically by most doctors, chronic Lyme disease"
The Mayo Clinic also refers to this research, saying cite:
The research on Morgellons by multiple groups over decades has yielded conflicting results. Multiple studies report a possible link between Morgellons and infection with Borrelia spirochetes. These results contradict [the CDC study]. One of the main proponents of this hypothesis is Marianne J. Middelveen, MDes, a veterinary microbiologist from Alberta, Canada. She made a connection with a disease of cattle called bovine digital dermatitis, which has similar symptoms - and in that case it is well established that microfilaments of keratin and collagen form beneath the skin.
She analysed filaments that form beneath the skin of sufferers and found that these too are made of keratin and collagen. She also found spirochetes, which are usually associated with Lyme disease in humans.
The main paper is Middelveen, M.J., Bandoski, C., Burke, J., Sapi, E., Filush, K.R., Wang, Y., Franco, A., Mayne, P.J. and Stricker, R.B., 2015. Exploring the association between Morgellons disease and Lyme disease: identification of Borrelia burgdorferi in Morgellons disease patients. BMC Dermatology, 15(1), p.1.
She, along with a dozen or so other researchers, publishes one or two papers a year on this topic.
For more background and cites, see also Mystery Of Morgellons - Disease Or Delusion - Scientific Hypothesis Of Connection With Lyme Disease (originated as Wikipedia article but on rejection from Wikipedia, rewritten in less encyclopedic tone as a blog post)
Here is Marianne Middelveen talking about her research https://youtu.be/IaxdRvesVfM
French Wikipedia summarizes her research as
Since 2011, some scientists have been trying to demonstrate the reality of the disease, defining it as a filamentous dermatitis linked to Lyme disease and publishing one or two articles per year.
The main author of these publications is Marianne J. Middelveen, a veterinarian, who draws a parallel between bovine digital dermatitis (Mortellaro disease) in animals and Morgellons in humans [13]. The bovine disease is a skin infection associated with various pathogens, including spirochaetes and treponemes. In animals the disease is contagious and can present with filiform papules.
Middelveen hypothesizes that Morgellons is a human equivalent of bovine digital dermatitis. She regularly publishes work showing an association between Morgellons and Lyme disease [14][15].
FOOTNOTES
Marianne J. Middelveen and Raphael B. Stricker, "Filament formation associated with spirochetal infection: a comparative approach to Morgellons disease," Clinical, Cosmetic and Investigational Dermatology, Vol. 4, 2011, pp. 167-177 (ISSN 1178-7015, PMID 22253541, PMCID PMC3257881, DOI 10.2147/CCID.S26183, read online [archive], accessed March 4, 2019)
Marianne J. Middelveen, Cheryl Bandoski, Jennie Burke and Eva Sapi, "Exploring the association between Morgellons disease and Lyme disease: identification of Borrelia burgdorferi in Morgellons disease patients," BMC Dermatology, Vol. 15, no. 1, February 12, 2015, p. 1 (ISSN 1471-5945, PMID 25879673, PMCID PMC4328066, DOI 10.1186/s12895-015-0023-0, read online [archive], accessed March 4, 2019); summary in English, "morgellons and lyme" [archive], on imedecin.com.
(en) Jyotsna S. Shah and Raphael B. Stricker, "Detection of tick-borne infection in Morgellons disease patients by serological and molecular technologies" [archive], Clinical, Cosmetic and Investigational Dermatology, November 9, 2018 (accessed March 4, 2019)
In 2008, The Washington Post reported that Internet discussions about Morgellons include many conspiracy theories about the cause, including biological warfare, nanotechnology, chemtrails and extraterrestrial life.
The Washington Post article predates the CDC study. This is a summary by a journalist of an internet search of Morgellons discussion blogs.
Other points in the Washington Post article not mentioned here:
Robert Bransfield is a New Jersey psychiatrist who studies the connection between infection and mental illness. ... "This isn't delusional," Bransfield says. "Delusions are quite variable. So, one person might have a delusion that the FBI is sending out messages to his dentures. Someone else has a delusion that their next-door neighbor is stealing their mail. But the people who have Morgellons all describe it in the same way. It doesn't have the variability you would see in delusions." ... And what of mass hysteria? Could Morgellons be, in a very real sense, nothing more than an Internet virus that has taken hold in susceptible minds? "I do see suggestion with the Internet, but it's hard to explain it on that alone," Bransfield says. "You can see the fibers. The fibers can't be mass hysteria. You see people describe this who don't have a computer," he says. "It's puzzling. It's hard to make sense out of it. But it's there. "
It also covers the very early stages of the research into a possible connection with Borrelia burgdorferi.
At about the same time, Leitao, who was trained as a biologist and worked as an electron microscopist; along with Ginger Savely, a nurse practitioner; and Raphael Stricker, a hematologist, published a paper in the American Journal of Clinical Dermatology reporting that 79 out of 80 Morgellons patients they studied also were infected with Borrelia burgdorferi, the tick-borne bacteria that cause Lyme disease.
Morgellons Research Foundation
As the article says, the Morgellons Research Foundation was the primary patient advocacy group in the 2000s.
However it does not mention its successor, the Charles Holman Morgellons Disease Foundation, a 501(c)(3) nonprofit organization committed to "advocacy and philanthropy in the battle against Morgellons Disease".
This organization holds an annual three-day conference on Morgellons for researchers to discuss their findings. Its main subject of study is a possible connection with chronic Lyme disease.
There are many peer reviewed papers on the topic. It is minority view science but not fringe.
All attempts to add a mention of this research to Wikipedia, either in this article or as a separate article (even one not linked to by it), are removed. Here are some of the cites.
Middelveen, Marianne J; Burugu, Divya; Poruri, Akhila; Burke, Jennie; Mayne, Peter J; Sapi, Eva; Kahn, Douglas G; Stricker, Raphael B (2013). "Association of spirochetal infection with Morgellons disease". F1000Research. doi:10.12688/f1000research.2-25.v1. ISSN 2046-1402.
Middelveen, Marianne J; Bandoski, Cheryl; Burke, Jennie; Sapi, Eva; Filush, Katherine R; Wang, Yean; Franco, Agustin; Mayne, Peter J; Stricker, Raphael B (2015). "Exploring the association between Morgellons disease and Lyme disease: identification of Borrelia burgdorferi in Morgellons disease patients". BMC Dermatology 15 (1). doi:10.1186/s12895-015-0023-0. ISSN 1471-5945.
Marianne J Middelveen, Raphael B Stricker, Filament formation associated with spirochetal infection: a comparative approach to Morgellons disease, in Clinical, Cosmetic and Investigational Dermatology 2011
Middelveen, Marianne J; Burke, Jennie; Sapi, Eva; Bandoski, Cheryl; Filush, Katherine R; Wang, Yean; Franco, Agustin; Timmaraju, Arun; Schlinger, Hilary A; Mayne, Peter J. Culture and identification of Borrelia spirochetes in human vaginal and seminal secretions. F1000Research.
Middelveen, Marianne J; Rasmussen, Elizabeth H; Kahn, Douglas G; Stricker, Raphael B. Morgellons Disease: A Chemical and Light Microscopic Study. Journal of Clinical & Experimental Dermatology Research.
Shah, Jyotsna S. "Morgellons Disease – Chronic Form Of Borrelia Infection?"
Middelveen MJ, Stricker RB. Morgellons disease: a filamentous borrelial dermatitis. International Journal of General Medicine, Volume 9, Dove Press.
Morgellons is poorly understood but the general medical consensus is that it is a form of delusional parasitosis in which individuals have some form of actual skin condition that they believe contains some kind of fibers.
The cites for this sentence all precede the big 2012 CDC study. Although most say it is a form of delusional parasitosis, one of them says: "The cause, transmission, and treatment are unknown."
Whether Morgellons disease is a delusional disorder or even a disease has been a mystery for more than 300 years. Symptoms of Morgellons include crawling and stinging sensations, feeling of “bugs” and/or fiber-like material beneath the skin, disabling fatigue, and memory loss. The cause, transmission, and treatment are unknown.
Simpson, L; Baier, M (August 2009). "Disorder or delusion? Living with Morgellons disease". Journal of Psychosocial Nursing and Mental Health Services. 47 (8): 36–41. doi:10.3928/02793695-20090706-03. PMID 19681520.
Morgellons is poorly characterized but the general medical consensus is that it is a form of delusional parasitosis; the sores are the result of compulsive scratching, and the fibers, when analysed, turn out to originate from textiles.
The CDC just said they were not able to conclude whether it represents a new condition or a wider recognition of delusional parasitosis, and called it an "unexplained dermopathy".
We were not able to conclude based on this study whether this unexplained dermopathy represents a new condition, as has been proposed by those who use the term Morgellons, or wider recognition of an existing condition such as delusional infestation, with which it shares a number of clinical and epidemiologic features.
Pearson, M.L., Selby, J.V., Katz, K.A., Cantrell, V., Braden, C.R., Parise, M.E., Paddock, C.D., Lewin-Smith, M.R., Kalasinsky, V.F., Goldstein, F.C. and Hightower, A.W., 2012. Clinical, epidemiologic, histopathologic and molecular features of an unexplained dermopathy. PLoS One, 7(1), p.e29908.
The Mayo Clinic describes it like this:
Morgellons disease: Managing an unexplained skin condition
Morgellons disease is an uncommon, poorly understood condition characterized by small fibers or other particles emerging from skin sores. People with this condition often report feeling as if something were crawling on or stinging their skin.
Some doctors recognize the condition as a delusional infestation and treat it with cognitive behavioral therapy, antidepressants, antipsychotic drugs and counseling. Others think the symptoms are related to an infectious process in skin cells. Further study is needed.
This does not amount to a medical consensus that it is delusional parasitosis. Other cites given later in this article predate the CDC study.
CDC investigation
This section doesn't mention criticisms of the CDC study.
The main problem they faced is the low prevalence of the condition, only 3.65 cases per 100,000. Their four-year study found only 41 patients with the condition, at a cost of more than ten thousand dollars per patient. They also didn't select patients who self-diagnosed as having Morgellons, so it is possible that some of the patients they studied did not think they had the condition.
Harry Schone summarizes these criticisms in one of the sections of his University College London thesis
"It is indeed true that the CDC were being cautious, that they found no positive evidence for the claims made by Morgellons sufferers, but it does not mean that the study can go without critical appraisal. Although expensive and lengthy, the research only clinically evaluated 41 people. Furthermore, since the population was selected by criteria other than self-identification it has been argued by critics of the study that some of those included did not have or even consider themselves to have Morgellons. The validity of these criticisms may rest on somewhat pedantic points, but what is certainly true is that an awful lot of reading between the lines has been passed off as something more substantial."
See Learning from Morgellons, Harry Quinn Schone, master's thesis for UCL (University College London).
This is a master's thesis rather than a PhD; however, UCL is one of the most prestigious universities in Europe, and the thesis summarizes concerns of other researchers.
No parasites or mycobacteria were detected in the samples collected from any patients. Most materials collected from participants' skin were composed of cellulose, likely of cotton origin
In their 2015 paper, Middelveen and her co-researchers describe technological limitations of the CDC study that could explain why it did not find the spirochetes they are able to identify in Morgellons patients:
"The search for spirochetal pathogens in that study was limited to Warthin-Starry staining on a small number of tissue samples and commercial two-tiered serological Lyme disease testing as interpreted by the CDC Lyme surveillance criteria. It should be noted that only two of the patients in our study group were positive for Lyme disease based on the CDC Lyme surveillance criteria and yet Borrelia spirochetes were readily detectable in this group of 25 MD patients."
They attribute their success in detecting Borrelia burgdorferi and closely related spirochetes to several factors
As a necessary condition for the reaction to occur at constant temperature and pressure, ΔG must be smaller than the non-PV (e.g. electrical) work, which is often equal to zero (hence ΔG must be negative).
The sign of ΔG determines whether a process will happen spontaneously.
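The condition described above can be written out explicitly; a minimal sketch of the standard statement, with ΔH and ΔS the enthalpy and entropy changes:

```latex
% At constant temperature T and pressure p, the second law implies,
% for a spontaneous process,
\Delta G \le W_{\mathrm{non\text{-}PV}}
% where W_{non-PV} is the non-pressure-volume (e.g. electrical) work.
% When no non-PV work is available, W_{non-PV} = 0, so
\Delta G < 0 \quad \text{(spontaneous)}, \qquad
\Delta G = 0 \quad \text{(equilibrium)},
% with, at constant temperature,
\Delta G = \Delta H - T\,\Delta S .
```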
A 2015 review
This article doesn't seem to have had much editing since 2015. The hypothesis of a multifactorial issue is now the scientific consensus.
For more cites see the annotation on the last sentence of the lede
A 2019 review concludes that the collapse develops through a sequence of steps.
First, climate change, agrochemicalization or inadequate food decreases the strength of the colonies.
Then there are faults in bee management, including depriving the bees of too much honey and replacing it with sugary food, inadequate treatment against Varroa, wintering on honey contaminated with insecticides, etc.
The colonies then become more easily infected, often with nosemosis, Lotmaria infection and American foulbrood infection. Finally, Varroa spreads due to a lack of adequate measures to prevent it.
Stanimirović, Z., Glavinić, U., Ristanić, M., Aleksić, N., Jovanović, N., Vejnović, B. and Stevanović, J., 2019. Looking for the causes of and solutions to the issue of honey bee colony losses. Acta Veterinaria-Beograd, 69(1), pp.1-31.
... for bee feeding. The use of supplements with sugar syrup should not be avoided, since they provide sufficient amino acids, peptides, micro- and macroelements which are absent from pure sugar syrup [18]. The use of supplements may prevent energetic, immune and oxidative stress in bees, and thus prevent losses in apiaries [129, 171-174]. The presence of a young, healthy bee queen in the hive guarantees the development of healthy bee colonies and successful beekeeping [131, 175]. Suitable pathogen control in hives, primarily of the bee mite V. destructor, with effective, registered varroacides is also a prerequisite for maintaining bee colonies in a good health condition. In addition, a strong link was detected between colony losses and beekeepers' education and training: professionals were capable of keeping colonies free from diseases, unlike hobbyists [12, 70]. Professionals promptly detected symptoms, especially those of American foulbrood or Varroa infestation, and timely applied control measures, contributing to the survival of their colonies. This was the first time that scientists focused attention on the impact of apiculturists and beekeeping practices on colony losses. The same authors commented that the introduction of a bee killer, the Varroa mite, to Europe at the beginning of the 1980s did not result in increased colony losses. This was explained by the fact that beekeepers efficiently adopted measures to combat the mite [12].
CONCLUSIONS
Scientific consensus has been reached that colony losses (CCD) are a multifactorial issue [3, 4, 6], which follows various conditions, but, according to our observations, it develops through a sequence of steps. Firstly, various non-specific factors (e.g. climate changes, agrochemisation and inadequate food) decrease the strength of the colonies; apitechnical faults (depriving bees of too much honey and a consecutive addition of large quantities of sugary food, inadequate treatments of colonies mainly against V. destructor, high stress and exhausting of bees, wintering colonies on honey contaminated with pesticides – sunflower honey, bad timing for wintering the colonies etc.). Such colonies easily become eligible for bacterial, microsporidial and trypanosomal infections. Manifested nosemosis combined with Lotmaria infection and latent American foulbrood infection additionally exhaust bee colonies and impair the immune system of the bee [176-179]. Finally, inadequate anti-varroa strategies lead to significant health problems in bees and the spread of viruses for which Varroa is a vector and/or activator. The whole process is a path prepared for the manifestation of virus infections.
A large amount of speculation has surrounded a family of pesticides called neonicotinoids as having caused CCD.
This page seems to have not been updated much since 2013. There is scientific consensus now that it is multifactorial.
Bee populations are declining in the industrialized world, raising concerns for the sustainable pollination of crops. Pesticides, pollutants, parasites, diseases, and malnutrition have all been linked to this problem. We consider here neurobiological, ecological, and evolutionary reasons why bees are particularly vulnerable to these environmental stressors. Central-place foraging on flowers demands advanced capacities of learning, memory, and navigation. However, even at low intensity levels, many stressors damage the bee brain, disrupting key cognitive functions needed for effective foraging, with dramatic consequences for brood development and colony survival.
Klein, S., Cabirol, A., Devaud, J.M., Barron, A.B. and Lihoreau, M., 2017. Why bees are so vulnerable to environmental stressors. Trends in Ecology & Evolution, 32(4), pp.268-278.
The good news is that the past decade has seen plenty of progress in understanding the mystery of Colony Collapse Disorder. The bad news is that we now recognise it as a complex problem with many causes, although that doesn’t mean it is unsolvable.
For all bees, foraging on flowers is a hard life. It is energetically and cognitively demanding; bees have to travel large distances to collect pollen and nectar from sometimes hard-to-find flowers, and return it all to the nest. To do this they need finely tuned senses, spatial awareness, learning and memory.
Anything that damages such skills can make bees struggle to find food, or even get lost while trying to forage. A bee that cannot find food and make it home again is as good as dead.
Because of this, bee populations are very vulnerable to what we call “sublethal stressors” – factors that don’t kill the bees directly but can hamper their behaviour.
Ten years after the crisis, what is happening to the world’s bees?
Colonies are often challenged by multiple stressors, which can interact: for example, pesticides can enhance disease transmission in colonies. Colonies may be particularly vulnerable to sublethal effects of pathogens and pesticides since colony functions are compromised whether a stressor kills workers, or causes them to fail at foraging. Modelling provides a way to understand the processes of colony failure by relating impacts of stressors to colony-level functions.
Barron, A.B., 2015. Death of the bee hive: understanding the failure of an insect society. Current Opinion in Insect Science, 10, pp.45-50.
Syngenta together with Bayer is challenging this ban in court.
The case was dismissed by the European court on 22nd May 2018 cite
83 million in 2014
90 million in 2017, data from FAOSTAT
Alternative shipping routes
Way out of date, last updated in 2012. As of 2019 the pipelines include:
A third of the world’s liquefied natural gas and almost 20% of total global oil production passes through the strait,
Also, about 30% of seaborne traded oil, more than 85% of that for Asia, mainly Japan, India, South Korea and China [details](https://www.reuters.com/article/us-yemen-security-oil-factbox/factbox-middle-east-oil-gas-shipping-risks-alternative-routes-idUSKBN0MM2E720150326)
2017: 17.2 million bpd; first half of 2018: 17.4 million [details](https://www.reuters.com/article/us-iran-oil-factbox/strait-of-hormuz-the-worlds-most-important-oil-artery-idUSKBN1JV24O) (sea-borne crude and condensate)
Most of the crude exported from Saudi Arabia, Iran, the United Arab Emirates, Kuwait and Iraq passes through it. It is also the route for nearly all the liquefied natural gas (LNG) from lead exporter Qatar. cite
largest crude oil export line.
Currently running at much lower than capacity, at 80,000 to 90,000 bpd; some reports say much lower. details
Mastodon (software)
Stuff
5G
You might think this is the place to look for material on whether there are any possible dangers of 5G. However, in an eccentric decision, editors of this article remove any sections on the topic, as here: Removed Dangers of 5g, with their explanation on the talk page here.
Wikipedia does have a separate article on the topic of Mobile phone radiation and health, but material from that article is not permitted here and this page doesn't link to it. Sadly that article also has almost nothing on 5g and health.
In 2011 the WHO's International Agency for Research on Cancer (IARC) classified radio frequency EM fields as possibly carcinogenic to humans (Group 2B).
If there is a risk it's a tiny one, of a certain type of brain cancer. You can take measures to avoid it, such as not holding a cellphone near your head while downloading large files.
IARC classifies radiofrequency electromagnetic fields as possibly carcinogenic to humans
Mayo clinic puts the situation like this:
The bottom line? For now, no one knows if cellphones are capable of causing cancer. Although long-term studies are ongoing, to date there's no convincing evidence that cellphone use increases the risk of cancer. If you're concerned about the possible link between cellphones and cancer, consider limiting your use of cellphones — or use a speaker or hands-free device that places the cellphone antenna, which is typically in the cellphone itself, away from your head.
A couple of hundred scientists have signed an appeal to the WHO saying the topic needs further investigation; not 5G particularly, but cell phones and Wi-Fi generally.
The evidence isn't very good yet, but they think there is enough to be worth investigating on a precautionary basis.
These scientists are concerned about a very small risk of cancer. Though too small to be noticed in practice, even a few deaths in a million is something worth taking precautions to prevent.
There is a lot of conspiracy theory nonsense on the topic, however. See for instance
Deinococcus radiodurans also failed to grow under low atmospheric pressure, under 0 °C, or in the absence of oxygen
This is not surprising: it shows that D. radiodurans is an obligate aerobe. It doesn't mention the surprising result in this cite that S. liquefaciens was able to grow under these conditions.
A more accurate summary of the source would be something like this:
In other simulations, Serratia liquefaciens strain ATCC 27592 was able to grow at 7 mbar, 0°C, in CO2-enriched anoxic atmospheres. This was surprising, as it is a generalist that occurs in many terrestrial niches, not an extremophile. Two extremophiles, Deinococcus radiodurans strain R1 and Psychrobacter cryohalolentis strain K5, were both unable to grow in anoxic conditions (making them obligate aerobes), and R1 was also unable to grow below 0°C or at 7 mbar.
Source says:
Only Serratia liquefaciens strain ATCC 27592 exhibited growth at 7 mbar, 0°C, and CO2-enriched anoxic atmospheres ... The growth of S. liquefaciens at 7 mbar, 0°C, and CO2-enriched anoxic atmospheres was surprising since S. liquefaciens is ecologically a generalist that occurs in terrestrial plant, fish, animal, and food niches.
Even the hardiest cells known could not possibly survive the cosmic radiation near the surface of Mars since Mars lost its protective magnetosphere and atmosphere
This is based on earlier papers that studied dormant life, because back then researchers thought the present-day Mars surface was too cold and dry for life, but that life could have survived in dormant form from times when the axial tilt varied, the atmosphere thickened and water briefly flowed on Mars. Any such life would be buried deep, because the cumulative effects of ionizing radiation over millions of years can sterilize anything near the surface.
However, if the life can continue to grow, reproduce and heal itself, then over 500 years even E. coli, one of our most radiosensitive microbes, is reduced by only 90%. Levels are similar to the interior of the ISS and are not lethal to microbes unless they are dormant for long periods.
From the MSL RAD measurements, ionizing radiation levels from cosmic radiation are so low as to be negligible. The intermittent solar storms increase the dose only for a few days, and the Martian surface provides enough shielding that the total dose from solar storms is less than double that from cosmic radiation. Over 500 years the Mars surface would receive a cumulative dose of less than 50 Gy, far less than the dose at which 90% of even a radiation-sensitive bacterium such as E. coli would die (LD90 of ~200-400 Gy). These facts are not used to distinguish Special Regions on Mars.
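The dose arithmetic above can be sketched as a back-of-envelope check; the ~0.21 mGy/day surface dose rate is the approximate published MSL RAD figure and should be treated as illustrative, not a precise value:

```python
# Back-of-envelope check of the cumulative dose claim above.
# Assumed surface dose rate: ~0.21 mGy/day (approximate MSL RAD figure).
dose_rate_gy_per_day = 0.21e-3          # Gy/day
years = 500
cumulative_dose_gy = dose_rate_gy_per_day * 365.25 * years  # total over 500 years

ld90_e_coli_gy = (200, 400)             # approximate LD90 range cited for E. coli

print(f"Cumulative 500-year dose: {cumulative_dose_gy:.0f} Gy")
print(f"Fraction of lowest E. coli LD90: {cumulative_dose_gy / ld90_e_coli_gy[0]:.0%}")
```

This gives roughly 38 Gy over 500 years, consistent with the "less than 50 Gy" figure and well below the LD90 range quoted for E. coli.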
Cite here
NASA have the search for extant life as one of their two top priorities for searching for life on Mars.
A special region on Mars for the purposes of Planetary protection is a region classified by COSPAR where terrestrial organisms are likely to propagate, or interpreted to have a high potential for existence of extant Martian life forms
See
A major goal is to preserve the planetary record of natural processes by preventing human-caused microbial introductions, also called forward contamination.
Uncited assertion, and not the main goal. A few microbes on the Martian surface would not obscure the planetary record in the frozen regolith. The main goal is to protect future science experiments, so that they don't find Earth microbes when searching for extant Mars organisms.
Cassie Conley, then NASA's Planetary Protection Officer, puts it like this: https://youtu.be/qk-Ycp5llEI
Third video on overview page of the NASA Office of Planetary Protection.
“So we have to do all of our search for life activities, we have to look for the Mars organisms, without the background, without the noise of having released Earth organisms into the Mars environment”
This entire section is OR and SYNTHESIS. NASA have the search for extant life as one of their two top priorities for searching for life on Mars. See https://encyclopediaofastrobiology.org/wiki/Protecting_Mars_special_regions_with_potential_for_life_to_propagate
link the Deccan Traps eruption to the asteroid impact that created the nearly antipodal Chicxulub crater
This theory of antipodal focusing is disproved, according to the intro of the cited source. The source says that the Chicxulub crater was offset from the antipode by an epicentral distance of ~130°, and that the impactor in any case didn't have enough energy to cause melting at the antipode. The cite says this disproves the antipodal theory.
Instead, the new idea presented in the cite is that the impact generated the equivalent of a magnitude 9 earthquake worldwide, and this caused volcanism to increase everywhere, through a now well-established process by which nearby earthquakes can trigger increased volcanism. The Deccan Traps started well before the impact, due to a rising "plume head" moving up through the mantle, which happens every 20-30 million years, but after the impact the eruptions sped up and their chemistry changed.
Cite says:
The possibility that an impact at Cretaceous-Paleogene time caused Deccan volcanism has been investigated since the discovery of the iridium anomaly at Cretaceous-Paleogene boundary, with an emphasis on antipodal focusing of seismic energy. However, the Deccan continental flood basalts were not antipodal to the 66 Ma Chicxulub crater at the time of the impact, but instead separated by an epicentral distance of ~130°. Also, a Chicxulub-size impact does not in any case appear capable of generating a large mantle melting event. Thus, impact-induced partial melting could not have caused the initiation of Deccan volcanism, consistent with the occurrence of Deccan volcanism well before Cretaceous-Paleogene/Chicxulub time.
Instead, Deccan volcanism is widely thought to represent the initial outburst of a new mantle plume “head” at the beginning of the Réunion hotspot track
The accompanying press release from UC Berkeley says:
Michael Manga, a professor in the same department, has shown over the past decade that large earthquakes – equivalent to Japan’s 9.0 Tohoku quake in 2011 – can trigger nearby volcanic eruptions. Richards calculates that the asteroid that created the Chicxulub crater might have generated the equivalent of a magnitude 9 or larger earthquake everywhere on Earth, sufficient to ignite the Deccan flood basalts and perhaps eruptions many places around the globe, including at mid-ocean ridges.
“It’s inconceivable that the impact could have melted a whole lot of rock away from the impact site itself, but if you had a system that already had magma and you gave it a little extra kick, it could produce a big eruption,” Manga said.
Similarly, Deccan lava from before the impact is chemically different from that after the impact, indicating a faster rise to the surface after the impact, while the pattern of dikes from which the supercharged lava flowed – “like cracks in a soufflé,” Renne said – are more randomly oriented post-impact.
“There is a profound break in the style of eruptions and the volume and composition of the eruptions,” said Renne. “The whole question is, ‘Is that discontinuity synchronous with the impact?’”
Another cite here, "The Conversation", which is written by academics and is WP:RS.
Our observations suggest the following sequence of events at the end of the Cretaceous period. Just over 66 million years ago, the Deccan Traps start erupting – likely initiated by a plume of hot rock rising from the Earth’s core, similar in some ways to what’s happening beneath Hawaii or Yellowstone today, that impinged on the side of India’s tectonic plate. The mid-ocean ridges and dinosaurs continue their normal activity.
About 250,000 years later, Chicxulub hits off the coast of what will become Mexico. The impact causes a massive disruption to the Earth’s climate, injecting particles into the atmosphere that will eventually settle into a layer of clay found across the planet. In the aftermath of impact, volcanic activity accelerates for perhaps tens to hundreds of thousands of years. The mid-ocean ridges erupt large volumes of magma, while the Deccan Traps eruptions flood lava across much of the Indian subcontinent.
Sleep: What is the biological function of sleep? Why do we dream? What are the underlying brain mechanisms? What is its relation to anesthesia?
This may be the biggest problem. What are the factors that increase or decrease the need to sleep? How can we push against natural fatigue and its causes? How can we give people more wakefulness / conscious life per day (without suffering significant debuffs)?
clathrate gun hypothesis
The lede doesn't mention the major literature review by the USGS in December 2016, which concluded that evidence is lacking for the original hypothesis; the similar conclusion by the Royal Society in 2017 that there is a relatively limited role for climate feedback from dissociation of the methane clathrates; or the CAGE research group (Centre for Arctic Gas Hydrate, Environment and Climate), which concluded that the hydrates formed over 6 million years ago and have been slowly releasing methane for 2 million years, independent of warm or cold climate. Details here: Clathrate gun hypothesis
A 2018 published review concluded that the clathrate gun hypothesis remains controversial, but that better understanding is vital.
This is NOT their conclusion. Their conclusion was that it is unlikely. Quotes from the paper:
"Although the clathrate gun hypothesis remains controversial (21), a good understanding of how environmental change affects natural CH4 sources is vital in terms of robustly projecting future fluxes under a changing climate."
Then later:
"Nevertheless, it seems unlikely that catastrophic, widespread dissociation of marine clathrates will be triggered by continued climate warming at contemporary rates (0.2◦C per decade) during the twenty-first century"
They did, however, urge caution about extraction of methane clathrates as a fuel, as this could lead to leaks of methane.
"As discussed previously (Section 4.1), the stability of CH4 clathrate deposits may already be at risk from climate change. Accidental or deliberate disturbance, due to fossil fuel extraction, has the potential for extremely high fugitive CH4 losses to the atmosphere"
For details with more cites Clathrate gun hypothesis
It was estimated in November 2015 that only 18 Jews remain in Syria
Cite has no mention of the number of Jews left in Syria or the number 18.
There may be Jews left in Syria, according to the Jerusalem Post.
First, one member of the family rescued in 2016 is still there because she was married to a Muslim man and signed conversion papers though she says she didn't really convert
According to Motti Kahana, who engineered the operation, there are no Jews left in Aleppo, aside from one member of the Halabi family, Linda, whose immigration to Israel was denied, citing her conversion to Islam – the source of a dispute between the Jewish Agency and Kahana.
The latter still sends kosher food to the woman, and maintains that though she signed conversion papers – which is required by Syrian law when marrying a Muslim – she did not really convert.
Also another family in Aleppo claims they are Jewish asking for aliyah
A family from war-torn Aleppo is appealing to the State of Israel for refuge, citing their Jewish heritage, Army Radio reported on Sunday.
“There is nobody who can help us to get out of this place,” said 30-year-old Razan (real name withheld) in an audio recording translated from Arabic into Hebrew and aired on the radio station. “We are asking that the Israeli government does not abandon us, but helps us get out of here to another country. I ask that the government demands from the entire world to do this. All my love and loyalty is to this religion [Judaism].”
Experts say there are still some Jews remaining in other parts of Syria. Elizabeth Tzurkov, a Syria researcher at Israeli think tank the Forum for Regional Thinking, told Army Radio: “... a number of Syrians have approached me who are descendants of Jewish women, who converted to Islam or who did not convert, and inquired how they can move to Israel.”
More recently, in September 2016, the last Jews of Aleppo were rescued, thus ending the last Jewish presence in Aleppo
The typical tidal range in the open ocean is about 0.6 metres (2 feet)
In the open ocean it can be anything from 0 to around a meter; 0.6 meters is on the high side.
Good map of tidal ranges here https://www.researchgate.net/figure/Global-tidal-ranges-C-2015-NASA-Goddard-Space-Flight-Center-NASA-Jet-Propulsion_fig3_317370107
Perigean spring tide
NOAA FAQ about Perigean spring tides is a useful source too
Mr. Woods' book examines the occurrences of coastal flooding through history. What he discovered is that coastal flooding did occur when there was a strong onshore wind, such as a hurricane or nor'easter, which occasionally occurred at the same time as a "perigean spring tide."
The problem has been that a number of people have misinterpreted the information presented in this book to mean that coastal flooding would occur whenever the "perigean spring tides" occur. This has led to articles published in various media sources that incorrectly predict widespread coastal flooding at the times of the "perigean spring tides," causing needless concern.
Most people who live along the coastline know that coastal flooding can occur whenever there are strong onshore winds, whether there is a "perigean spring tide" or not. Additionally, this flooding will be worse if the storm strikes around the time of high tide rather than around the time of low tide.
But in ALL cases, it is the storm winds which cause the coastal flooding, not the tides. Coastal flooding is the result of meteorology (the weather) not astronomy (normal tidal fluctuations). All astronomical considerations are accounted for in the NOS tide and tidal current predictions. https://co-ops.nos.noaa.gov/faq2.html#15
The state is required to obtain at least 33% of its electricity from renewable resources by 2020, and 50% by 2030, excluding large hydro
Out of date. The cited page now says that under SB 100 California is required to produce 60% renewables by 2030 and all electricity from carbon-free sources by 2045.
The cite is to a preprint, not a WP:RS. 81 is likely a typo for 18. Most often given as 10-15 km.
"Asteroids striking the Earth typically [Minton and Malhotra, 2010] have an impactor density of 2680 kg/m3 and an impact velocity of 20 km/s. Assuming these properties, modern scaling relations indicate that a 10–15 km diameter projectile [Collins et al., 2008] created the 170 km diameter Chicxulub crater"
Parkos, D., Alexeenko, A., Kulakhmetov, M., Johnson, B.C. and Melosh, H.J., 2015. NOx production and rainout from Chicxulub impact ejecta reentry. Journal of Geophysical Research: Planets, 120(12), pp. 2152-2168.
Plutonium, like most metals, has a bright silvery appearance at first, much like nickel, but it oxidizes very quickly to a dull gray, although yellow and olive green are also reported.[1][2] At room temperature plutonium is in its α (alpha) form. This, the most common structural form of the element (allotrope), is about as hard and brittle as gray cast iron unless it is alloyed with other metals to make it soft and ductile. Unlike most metals, it is not a good conductor of heat or electricity. It has a low melting point (640 °C) and an unusually high boiling point (3,228 °C).[1] Alpha decay, the release of a high-energy helium nucleus, is the most common form of radioactive decay for plutonium.[3] A 5 kg mass of 239Pu contains about 12.5×10²⁴ atoms. With a half-life of 24,100 years, about 11.5×10¹² of its atoms decay each second by emitting a 5.157 MeV alpha particle. This amounts to 9.68 watts of power. Heat produced by the deceleration of these alpha particles makes it warm to the touch.
"Heat produced by the deceleration of these alpha particles makes it warm to the touch."
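Those figures are easy to check with a few lines of Python. A sketch (constants rounded; it uses the 5.157 MeV alpha-particle energy alone rather than the full decay Q-value, which is why it lands slightly below the quoted 9.68 W):

```python
# Back-of-envelope check of the plutonium figures quoted above:
# 5 kg of Pu-239, half-life 24,100 years, 5.157 MeV per alpha decay.
import math

AVOGADRO = 6.02214e23          # atoms per mole
MOLAR_MASS_PU239 = 239.052     # g/mol
MEV_TO_JOULES = 1.602177e-13   # J per MeV
YEAR_SECONDS = 365.25 * 86400

mass_g = 5000.0
half_life_s = 24100 * YEAR_SECONDS

atoms = mass_g / MOLAR_MASS_PU239 * AVOGADRO          # ~1.26e25 (12.6 x 10^24)
decay_const = math.log(2) / half_life_s               # per second
decays_per_s = atoms * decay_const                    # ~1.15e13 (11.5 x 10^12)
power_watts = decays_per_s * 5.157 * MEV_TO_JOULES    # ~9.5 W

print(f"atoms: {atoms:.3e}")
print(f"decays/s: {decays_per_s:.3e}")
print(f"power: {power_watts:.2f} W")
```

So the atom count and decay rate match the article; the small gap to 9.68 W is plausibly the recoil and daughter-decay energy not carried by the alpha particle itself.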
Heavy water was first produced in 1932, a few months after the discovery of deuterium.[6] With the discovery of nuclear fission in late 1938, and the need for a neutron moderator that captured few neutrons, heavy water became a component of early nuclear energy research. Since then, heavy water has been an essential component in some types of reactors, both those that generate power and those designed to produce isotopes for nuclear weapons. These heavy water reactors have the advantage of being able to run on natural uranium without using graphite moderators that pose radiological[7] and dust explosion[8] hazards in the decommissioning phase. Most modern reactors use enriched uranium with ordinary water as the moderator.
The Elephant’s Foot is the nickname given to a large mass of corium formed during the Chernobyl disaster in April 1986 and presently located in a steam distribution corridor underneath the remains of the reactor. It is currently an extremely deadly radioactive compound, yet its danger has decreased with the decay of its radioactive components.
The largest known amounts of corium were formed during the Chernobyl disaster.[15] The molten mass of reactor core dripped under the reactor vessel and is now solidified in the form of stalactites, stalagmites, and lava flows; the best-known formation is the "Elephant's Foot," located under the bottom of the reactor in a steam distribution corridor
Corium, also called fuel containing material (FCM) or lava-like fuel containing material (LFCM), is a lava-like material created in the core of a nuclear reactor during a meltdown accident.
Craig Steven Wright
Please leave us in peace
Camus follows Sartre's definition of the absurd: "That which is meaningless. Thus man's existence is absurd because his contingency finds no external justification".[71] The absurd arises from the realization of man, placed in an unintelligible universe, that human values are not founded on a solid external component; or as Camus himself explains, the absurd is the result of the "confrontation between human need and the unreasonable silence of the world".[74] Even though absurdity is inescapable, Camus does not drift towards nihilism. But the realization of absurdity leads to the question: why should someone continue to live? Suicide is an option that Camus firmly dismisses as the renunciation of human values and freedom. Rather, he proposes that we accept absurdity as a part of our lives and live with it.
On the other hand, Camus focused most of his philosophy around existential questions. The absurdity of life, the inevitable ending (death) is highlighted in his acts, his belief that the absurd – life being void of meaning, or man's inability to know that meaning if it were to exist – was something that man should embrace, his anti-Christianity, his commitment to individual moral freedom and responsibility are only a few of the similarities with other existential writers.[69] More importantly, Camus addressed one of the fundamental questions of existentialism: the problem of suicide. He wrote "There is only one really serious philosophical question, and that is suicide" Camus viewed the question of suicide as arising naturally as a solution to the absurdity of life.[70]
Radioactive decay is a stochastic (i.e. random) process at the level of single atoms. According to quantum theory, it is impossible to predict when a particular atom will decay,[1][2][3] regardless of how long the atom has existed. However, for a collection of atoms, the collection's expected decay rate is characterized in terms of their measured decay constants or half-lives. This is the basis of radiometric dating. The half-lives of radioactive atoms have no known upper limit, spanning a time range of over 55 orders of magnitude, from nearly instantaneous to far longer than the age of the universe.
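The "random at the level of single atoms, predictable in aggregate" point can be illustrated with a toy Monte Carlo (illustrative parameters, not any real isotope):

```python
# Each atom decays independently with probability p = 1 - 2**(-dt/half_life)
# per time step, yet the population as a whole closely tracks the
# deterministic curve N0 * 2**(-t / half_life).
import random

random.seed(42)

N0 = 100_000
half_life = 10.0   # arbitrary time units
dt = 1.0
p_decay = 1 - 2 ** (-dt / half_life)

n = N0
for step in range(10):           # simulate one half-life (10 steps)
    n -= sum(1 for _ in range(n) if random.random() < p_decay)

expected = N0 * 2 ** (-10 * dt / half_life)   # = N0 / 2
print(n, expected)  # survivors after one half-life vs. the 50% prediction
```

No individual decay is predictable, but with 100,000 atoms the survivor count lands within a fraction of a percent of the half-life prediction, which is exactly why radiometric dating works on collections of atoms.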
Radioactive decay (also known as nuclear decay, radioactivity or nuclear radiation) is the process by which an unstable atomic nucleus loses energy (in terms of mass in its rest frame) by emitting radiation, such as an alpha particle, beta particle with neutrino or only a neutrino in the case of electron capture, or a gamma ray or electron in the case of internal conversion. A material containing such unstable nuclei is considered radioactive. Certain highly excited short-lived nuclear states can decay through neutron emission, or more rarely, proton emission.
The first atomic bomb was successfully detonated on July 16, 1945, in the Trinity test in New Mexico. Oppenheimer later remarked that it brought to mind words from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds."[2][note 2] In August 1945, the weapons were used in the atomic bombings of Hiroshima and Nagasaki.
"Now I am become Death, the destroyer of Worlds" - Bhagavad Gita
The code name "Trinity" was assigned by Robert Oppenheimer, the director of the Los Alamos Laboratory, inspired by the poetry of John Donne. The test was of an implosion-design plutonium device, informally nicknamed "The Gadget", of the same design as the Fat Man bomb later detonated over Nagasaki, Japan, on August 9, 1945. The complexity of the design required a major effort from the Los Alamos Laboratory, and concerns about whether it would work led to a decision to conduct the first nuclear test. The test was planned and directed by Kenneth Bainbridge.
In the Chernobyl disaster, the moderator was not responsible for the primary event. Instead, a massive power excursion during a mishandled test caused the catastrophic failure of the reactor vessel and a near-total loss of coolant supply. The result was that the fuel rods rapidly melted and flowed together while in an extremely-high-power state, causing a small portion of the core to reach a state of runaway prompt criticality and leading to a massive energy release,[22] resulting in the explosion of the reactor core and the destruction of the reactor building. The massive energy release during the primary event superheated the graphite moderator, and the disruption of the reactor vessel and building allowed the superheated graphite to come into contact with atmospheric oxygen. As a result, the graphite moderator caught fire, sending a plume of highly radioactive fallout into the atmosphere and over a very widespread area.
Nuclear graphite for the UK Magnox reactors was manufactured from petroleum coke mixed with coal-based binder pitch heated and extruded into billets, and then baked at 1,000 °C for several days. To reduce porosity and increase density, the billets were impregnated with coal tar at high temperature and pressure before a final bake at 2,800 °C. Individual billets were then machined into the final required shapes.[17] The manufacturing process is designed to ensure uniformity in material properties. Despite this care, recent research using stochastic finite element analysis[18] has shown that tiny spatial variations in material properties may play a significant role in how a graphite component ages.[19] A study carried out in 2016 provides data for the spatial variation of properties such as density and Young's modulus within a typical billet.[14] This information has been used to calibrate random fields for probabilistic simulation.[15]
Nuclear graphite is any grade of graphite, usually synthetic graphite, specifically manufactured for use as a moderator or reflector within a nuclear reactor. Graphite is an important material for the construction of both historical and modern nuclear reactors, due to its extreme purity and its ability to withstand extremely high temperatures.
Despite their name, rare-earth elements are – with the exception of the radioactive promethium – relatively plentiful in Earth's crust, with cerium being the 25th most abundant element at 68 parts per million, more abundant than copper. However, because of their geochemical properties, rare-earth elements are typically dispersed and not often found concentrated in rare-earth minerals; as a result economically exploitable ore deposits are less common.[4] The first rare-earth mineral discovered (1787) was gadolinite, a mineral composed of cerium, yttrium, iron, silicon, and other elements. This mineral was extracted from a mine in the village of Ytterby in Sweden; four of the rare-earth elements bear names derived from this single location.
The 17 rare-earth elements are cerium (Ce), dysprosium (Dy), erbium (Er), europium (Eu), gadolinium (Gd), holmium (Ho), lanthanum (La), lutetium (Lu), neodymium (Nd), praseodymium (Pr), promethium (Pm), samarium (Sm), scandium (Sc), terbium (Tb), thulium (Tm), ytterbium (Yb), and yttrium (Y).
In most reactor designs, as a safety measure, control rods are attached to the lifting machinery by electromagnets, rather than direct mechanical linkage. This means that in the event of power failure, or if manually invoked due to failure of the lifting machinery, the control rods fall automatically, under gravity, all the way into the pile to stop the reaction. A notable exception to this fail-safe mode of operation is the BWR, which requires hydraulic insertion in the event of an emergency shut-down, using water from a special tank under high pressure. Quickly shutting down a reactor in this way is called scramming.
Chemical elements with a sufficiently high neutron capture cross-section include silver, indium and cadmium. Other candidate elements include boron, cobalt, hafnium, samarium, europium, gadolinium, terbium, dysprosium, holmium, erbium, thulium, ytterbium and lutetium.[1] Alloys or compounds may also be used, such as high-boron steel,[2] silver-indium-cadmium alloy, boron carbide, zirconium diboride, titanium diboride, hafnium diboride, gadolinium nitrate,[3] gadolinium titanate, dysprosium titanate and boron carbide - europium hexaboride composite.[4]
Control rods are usually used in control rod assemblies (typically 20 rods for a commercial PWR assembly) and inserted into guide tubes within a fuel element. A control rod is removed from or inserted into the central core of a nuclear reactor in order to increase or decrease the neutron flux, which describes the number of neutrons that split further uranium atoms. This in turn affects the thermal power, the amount of steam produced and hence the electricity generated.
Control rods are used in nuclear reactors to control the fission rate of uranium and plutonium. They are composed of chemical elements such as boron, silver, indium and cadmium that are capable of absorbing many neutrons without themselves fissioning. Because these elements have different capture cross sections for neutrons of varying energies, the composition of the control rods must be designed for the reactor's neutron spectrum. Boiling water reactors (BWR), pressurized water reactors (PWR) and heavy water reactors (HWR) operate with thermal neutrons, while breeder reactors operate with fast neutrons.
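As a rough illustration of what inserting or withdrawing rods does to the chain reaction, here is a toy discrete-generation model (the numbers are invented, and real reactor kinetics also involve delayed neutrons and feedback):

```python
# Simplified model: the effective multiplication factor k scales the neutron
# population each generation. Absorber (control rods) pushed in lowers k
# below 1 and the chain reaction dies away; rods withdrawn raises k above 1
# and the population grows. Purely illustrative, not reactor physics.

def neutron_population(n0: float, k: float, generations: int) -> float:
    """Neutron count after the given number of generations at fixed k."""
    return n0 * k ** generations

critical = neutron_population(1e6, 1.000, 100)   # rods at criticality: steady
withdrawn = neutron_population(1e6, 1.005, 100)  # rods out: population grows
inserted = neutron_population(1e6, 0.950, 100)   # rods in: population decays

print(f"{critical:.3e} {withdrawn:.3e} {inserted:.3e}")
```

Even a k only 0.5% above 1 compounds into a ~65% rise in 100 generations, which is why fine control of absorber position matters.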
TI store (eCommerce)
expansion
speech synthesizer
artificial production of human speech
Morris Tanenbaum
worked at bell labs and at&t corp
sales volume
"Sales volume is the number of units sold within a reporting period. This figure is monitored by investors to see if a business is expanding or contracting. Within a business, sales volume may be monitored at the level of the product, product line, customer, subsidiary, or sales region."
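A minimal sketch of that definition in Python: unit sales summed per product and per region for a reporting period (the data and field names are invented for illustration):

```python
# Sales volume = units sold in a period, aggregated at whatever level is
# being monitored (product, region, etc.).
from collections import defaultdict

orders = [
    {"product": "widget", "region": "EMEA", "units": 120},
    {"product": "widget", "region": "APAC", "units": 80},
    {"product": "gadget", "region": "EMEA", "units": 45},
    {"product": "widget", "region": "EMEA", "units": 30},
]

volume_by_product = defaultdict(int)
volume_by_region = defaultdict(int)
for order in orders:
    volume_by_product[order["product"]] += order["units"]
    volume_by_region[order["region"]] += order["units"]

print(dict(volume_by_product))  # {'widget': 230, 'gadget': 45}
print(dict(volume_by_region))   # {'EMEA': 195, 'APAC': 80}
```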
TI also invented the hand-held calculator in 1967, and introduced the first single-chip microcontroller (MCU) in 1970, which combined all the elements of computing onto one piece of silicon.[10]
Start of the calculators
Pathogenic amyloids form when previously healthy proteins lose their normal physiological functions and form fibrous deposits in plaques around cells which can disrupt the healthy function of tissues and organs.
Clusters of these proteins prevent organs from functioning correctly.
Amyloids are aggregates of proteins that become folded into a shape that allows many copies of that protein to stick together, forming fibrils. In the human body, amyloids have been linked to the development of various diseases.
Amyloids are clusters of proteins that are associated with development of various diseases in humans.
They appear only twice (always plural) in the Tanakh, at Psalm 106:37 and Deuteronomy 32:17; in both cases the context is child or animal sacrifice.[6] Although the word is traditionally derived from the root ŠWD (Hebrew: שוד shûd), which conveys the meaning of "acting with violence" or "laying waste",[7] it was possibly a loan-word from Akkadian, in which the word shedu referred to a protective, benevolent spirit.[8] The word may also derive from the "Sedim, Assyrian guard spirits"[9] as referenced according to lore "Azazel slept with Naamah and spawned Assyrian guard spirits known as sedim".[10] With the translation of Hebrew texts into Greek, under the influence of Zoroastrian dualism, shedim were translated into daimonia with implicit negativity. Later, in Judeo-Islamic culture, shedim became the Hebrew word for jinn, with a morally ambivalent attitude
Shedim (Hebrew: שֵׁדִים) are spirits or demons in early Jewish mythology. However, they are not necessarily equivalent to the modern connotation of demons as evil entities.[3] Evil spirits were thought of as the cause of maladies; they differ conceptually from the shedim,[4] who are not evil demigods but the foreign gods themselves. Shedim are evil only in the sense that they are not God.
Numerous species, including some Ancylometes, Dolomedes, Megadolomedes, Pardosa, Pirata, Thalassius and others, live above water at the surface, but may actively submerge for a prolonged period of time, are strong swimmers and will catch underwater prey.[4][5][11]
I read an article on spider silk!
Information is the resolution of uncertainty
This.
Great person!
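That is Shannon's framing, and it has a quantitative version: the uncertainty an outcome resolves is measured in bits. A quick sketch:

```python
# Shannon's quantitative version of "information resolves uncertainty":
# learning the outcome of a fair coin flip resolves exactly 1 bit, a fair
# die roll about 2.585 bits, and a certain event resolves nothing.
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

coin = entropy_bits([0.5, 0.5])   # 1.0 bit
die = entropy_bits([1 / 6] * 6)   # ~2.585 bits
certain = entropy_bits([1.0])     # 0.0 bits: nothing to resolve

print(coin, die, certain)
```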
In early occult and spiritualist literature, remote viewing was known as telesthesia and travelling clairvoyance. Rosemary Guiley described it as "seeing remote or hidden objects clairvoyantly with the inner eye, or in alleged out-of-body travel."
The report found that, due to human impact on the environment in the past half-century, the Earth's biodiversity has suffered a catastrophic decline unprecedented in human history
This is really sad :(
closed when approximately 13,000 workers voted to strike "indefinitely" in protest of a union leader's arrest for calling for an end to military rule in Chile.
what was the military rule?
Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modelled by a probability distribution that has a fixed set of parameters.[1] Conversely a non-parametric model differs precisely in that the parameter set (or feature set in machine learning) is not fixed and can increase, or even decrease, if new relevant information is collected.[2] Most well-known statistical methods are parametric.[3] Regarding nonparametric (and semiparametric) models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[4]
Non-parametric vs parametric stats
Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical. It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is called parametric. Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesis non-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termed distribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.
Non-parametric vs parametric statistics
Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data. As non-parametric methods make fewer assumptions, their applicability is much wider than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust. Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding. The wider applicability and increased robustness of non-parametric tests comes at a cost: in cases where a parametric test would be appropriate, non-parametric tests have less power. In other words, a larger sample size can be required to draw conclusions with the same degree of confidence.
Non-parametric vs parametric statistics
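A small pure-Python illustration of the distinction (the star-rating samples are invented): the t statistic leans on means and variances, i.e. distributional structure, while the Mann-Whitney U statistic uses only the ordering of the data, so it applies to ordinal ratings directly.

```python
# Parametric (Student's t) vs non-parametric (Mann-Whitney U) on the same
# two samples of ordinal star ratings.
from statistics import mean, variance

a = [4, 5, 4, 3, 5, 4]   # star ratings, group A
b = [2, 3, 3, 1, 2, 4]   # star ratings, group B

# Parametric: pooled two-sample t statistic (assumes equal variances).
na, nb = len(a), len(b)
pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
t_stat = (mean(a) - mean(b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

# Non-parametric: Mann-Whitney U, counting pairwise wins (ties count 0.5).
u_stat = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

print(f"t = {t_stat:.2f}, U = {u_stat} of {na * nb} pairs")
```

Both statistics point the same way here; the trade-off in the quoted passage shows up in borderline cases, where the rank-based U gives up some power in exchange for not assuming a distributional form.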
The concept of data type is similar to the concept of level of measurement, but more specific: For example, count data require a different distribution (e.g. a Poisson distribution or binomial distribution) than non-negative real-valued data require, but both fall under the same level of measurement (a ratio scale).
A barren, leached cap,
environmental effects
Problems stemming from artisanal mining include disruption of families, mining-related illnesses, environmental damage, child labor, prostitution and rape.
problems
Penrose
Last Friday night is when my pen rose.
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov.
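A toy example of the memoryless property (transition probabilities invented): the next state depends only on the current state, and the long-run behaviour is the same whatever the history.

```python
# Two-state weather Markov chain: step() looks only at the current state,
# never at the path taken to reach it.
import random

random.seed(0)

TRANSITIONS = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def step(state: str) -> str:
    """Sample the next state using only the current state."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against float rounding

# The long-run fraction of sunny days converges to the stationary value 5/6,
# regardless of the starting state; the history is irrelevant.
state, sunny = "sunny", 0
steps = 100_000
for _ in range(steps):
    state = step(state)
    sunny += state == "sunny"

print(f"fraction sunny: {sunny / steps:.3f}")  # close to 5/6 ~ 0.833
```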
Yazidi accounts of creation differ from that of Judaism, Christianity, and Islam and resembles Zoroastrianism[119] or Hinduism. Especially worshipping a holy peacock, Melek Taus in oil lamps is more common in Hinduism. They believe that God first created Tawûsê Melek from his own (God's) illumination (Ronahî) and the other six archangels were created later. God ordered Tawûsê Melek not to bow to other beings. Then God created the other archangels and ordered them to bring him dust (Ax) from the Earth (Erd) and build the body of Adam. Then, God gave life to Adam from his own breath and instructed all archangels to bow to Adam. The archangels obeyed except for Tawûsê Melek. In answer to God, and the seemingly contradictory command, Tawûsê Melek replied, "How can I submit to another being! I am from your illumination while Adam is made of dust." Then, God praised him and made him the leader of all angels and his deputy on the Earth. This probably furthers what some see as a connection to the Islamic Shaytan, as according to the Quran, he too refused to bow to Adam at God's command, though in this case it is seen as being a sign of Shaytan's sinful pride. Hence, the Yazidis believe that Tawûsê Melek is the representative of God on the face of the Earth and comes down to the Earth on the first Wednesday of Nisan (April).
The reason for the Yazidis' reputation of being devil worshipers is connected to the other name of Melek Taus, Shaytan, the same name the Koran has for Satan.[115] Yazidis, however, believe Tawûsê Melek is not a source of evil or wickedness. They consider him to be the leader of the archangels, not a fallen angel.[66][49] "The Yazidis of Kurdistan have been called many things, most notoriously 'devil-worshippers,' a term used both by unsympathetic neighbours and fascinated Westerners. This sensational epithet is not only deeply offensive to the Yazidis themselves, but quite simply wrong."[116] Non-Yazidis have associated Melek Taus with Shaitan (Islamic/Arab name) or Satan, but Yazidis find that offensive and do not actually mention that name.[116]
Yazidis are monotheists,[58] believing in one God, who created the world and entrusted it into the care of a Heptad of seven Holy Beings, often known as Angels or heft sirr (the Seven Mysteries). The names of these beings or angels are Azaz'il, Gabra'il (Jabra'il), Mikha'il, Rafa'il (Israfil), Dadra'il, Azrafil and Shamkil (Shemna'il).[113] Preeminent among these is Tawûsê Melek (frequently known as "Melek Taus" in English publications), the Peacock Angel[114][69] (identified with one of these Angels). Tawûsê Melek is often identified by Christians and Muslims with Satan. According to claims in Encyclopedia of the Orient,
Monseigneur de Hemptinne watched Yeke people working at Dikuluwe as late as 1924. They worked in the dry season and stopped when the first rains arrived. The mining camp was near a stream where millet could be planted. Women and children collected malachite from the surface, while men used iron picks to excavate pits and shafts, using fire to crack the rocks when needed. The mines were between 10 metres (33 ft) and 15 metres (49 ft) deep with galleries up to 20 metres (66 ft) long. The ore would be sorted and then taken to a nearby stream for concentration before being smelted.
info about how mines work
The Katanga, or Shaba, copperbelt in the DRC is a belt about 70 kilometres (43 mi) wide and 250 kilometres (160 mi) long
how large the copperbelt is
Popol Vuh
El Quiché forms the heartland of the Kʼicheʼ people. In pre-Columbian times, the Kʼicheʼ settlements and influence reached beyond the highlands, including the valley of Antigua and coastal areas in Escuintla.
According to the 2011 census, Kʼicheʼ people constituted 11% of the Guatemalan population,
There is also evidence for a large degree of cultural exchange between the Kʼicheʼ and the people of Central Mexico, and Nahuatl has influenced the Kʼicheʼ language greatly.[5]
the phrase alludes to influences by Confucianism[2](p10) – in particular, filial piety or loyalty towards the family, corporation, and nation; the forgoing of personal freedom for the sake of society's stability and prosperity; the pursuit of academic and technological excellence; and, a strong work ethic together with thrift
I think these values might be useful to teach in school. I wonder if many colleges and Western schools ever taught this to their students?
Proponents of so-called "Asian values", who tend to support Asian-style authoritarian governments,[2](p13) claim these values are more appropriate for the region than Western democracy with its emphasis on individual freedoms.[3] "Asian values" were codified and promoted in the Bangkok Declaration of 1993, which re-emphasized the principles of sovereignty, self-determination, and non-interference in civil and political rights. They included: Preference for social harmony; Concern with socio-economic prosperity and the collective well-being of the community; Loyalty and respect towards figures of authority; Preference for collectivism and communitarianism.
Now that I think about the times when people in the MTA Evergreen collaboration program told me that I was using male-dominated language, I realize that this is the source of my values, and I just find it sad that the Evergreen students never understood it at all.
The Piri Reis map is a world map compiled in 1513 from military intelligence by the Ottoman admiral and cartographer Piri Reis (pronounced [piɾi ɾeis]). Approximately one third of the map survives; it shows the western coasts of Europe and North Africa and the coast of Brazil with reasonable accuracy. Various Atlantic islands, including the Azores and Canary Islands, are depicted, as is the mythical island of Antillia and possibly Japan. The map's historical importance lies in its demonstration of the extent of global exploration of the New World by approximately 1510, and in its claim to have used a map of Christopher Columbus, otherwise lost, as a source. Piri also stated that he had used ten Arab sources and four Indian maps sourced from the Portuguese. More recently, the map has been the focus of claims for the pre-modern exploration of the Antarctic coast.
Akhenaten placed much emphasis on the worship of the Egyptian sun, which can be seen in many artistic depictions of a connection between the Pharaoh and his family.[28] Some debate has focused on the extent to which Akhenaten forced his religious reforms on his people.[29] Certainly, as time drew on, he revised the names of the Aten, and other religious language, to increasingly exclude references to other gods; at some point, also, he embarked on the wide-scale erasure of traditional gods' names, especially those of Amun.
Akhenaten tried to shift his culture from Egypt's traditional religion, but the shifts were not widely accepted. After his death, his monuments were dismantled and hidden, his statues were destroyed, and his name excluded from the king lists.[12] Traditional religious practice was gradually restored, and when some dozen years later rulers without clear rights of succession from the 18th Dynasty founded a new dynasty, they discredited Akhenaten and his immediate successors, referring to Akhenaten himself as "the enemy" or "that criminal" in archival records.
He is known for his heavily opinionated editorial column in the school newspaper, in which he writes in all-capital letters to reflect his shrill voice.
Shouty!
Balm of Mecca[edit] Forskal found the plant occurring between Mecca and Medina. He considered it to be the genuine balsam-plant and named it Amyris opobalsamum Forsk. (together with two other varieties, A. kataf Forsk. and A. kafal Forsk.).[4] Its Arabic name is abusham or basham, which is identical with the Hebrew bosem or beshem.[6] Bruce found the plant occurring in Abyssinia.[3] In the 19th century it was discovered in the East Indies also.[4] Linnaeus distinguished two varieties: Amyris gileadensis L. (= Amyris opobalsamum Forsk.), and Amyris opobalsamum L., the variant found by Belon in a garden near Cairo, brought there from Arabia Felix. More recent naturalists (Lindley, Wight and Walker) have included the species Amyris gileadensis L. in the genus Protium.[4] Botanists enumerate sixteen balsamic plants of this genus, each exhibiting some peculiarity.[6] There is little reason to doubt that the plants of the Jericho balsam gardens were stocked with Amyris gileadensis L., or Amyris opobalsamum, which was found by Bruce in Abyssinia, the fragrant resin of which is known in commerce as the "balsam of Mecca".[3] According to De Sacy, the true balm of Gilead (or Jericho) has long been lost, and there is only "balm of Mecca".[6] Newer designations of the balsam plant are Commiphora gileadensis (L.) Christ., Balsamodendron meccansis Gled. and Commiphora opobalsamum.
Cancamon[edit] The lexicographer Bar Seroshewai considered the Arabic dseru (ﺿﺮﻭ), a tree of Yemen known as kamkam (ﮐﻤﮑﺎﻡ) or kankam (ﮐﻨﮑﺎﻡ), Syriac qazqamun (ܩܙܩܡܘܢ), Greek κάγκαμον, Latin cancamum, mentioned by Dioscorides (De materia medica 1.32) and Pliny (Hist. Nat. 12.44; 12.98).[28][30][31] Cancamon has been held for Balsamodendron kataf,[31] but also as Aleurites laccifera (Euphorbiaceae), Ficus spec. (Artocarpeae), and Butea frondosa (Papilionaceae).[32] Sanskrit kunkuma (कुनकुम) is saffron (Crocus sativus).
Pine[edit] The Greek word ῥητίνη, used in the Septuagint for translating tsori, denotes a resin of the pine, especially Pinus maritima (πεύκη).[26][27] The Aramaic tserua (ܨܪܘܐ) has been described as the fruit of Pinus pinea L., but it has also been held for stacte or storax.[28] The Greek ῥητίνη ξηρά is a species of Abietineae Rich.
Terebinth[edit] Bochart strongly contended that the balm mentioned in Jer. 8:22 could not possibly be that of Gilead, and considered it as the resin drawn from the terebinth or turpentine tree.[6] The Biblical terebinth is Hebrew eloh (אֵלׇה), Pistacia terebinthus L.[24] or P. palaestina Boiss.
Zukum[edit] Ödmann and Rosenmüller thought that the pressed juice of the fruit of the zukum-tree (Eleagnus angustifolius L.) or the myrobalanus of the ancients, is the substance denoted; but Rosenmüller, in another place, mentioned the balsam of Mecca (Amyris opobalsamum L.) as being probably the tsori. Zukum oil was in very high esteem among the Arabs, who even preferred it to the balm of Mecca, as being more efficacious in wounds and bruises. Maundrell found zukum-trees near the Dead Sea. Hasselquist and Pococke found them especially in the environs of Jericho. In the 19th century, the only product in the region of Gilead which had any affinity to balm or balsam was a species of Eleagnus
Mastic[edit] Celsius (in Hierobotanicon) identified the tsori with the mastic tree, Pistacia lentiscus L. The Arabic name of this plant is dseri or dseru, which is identical with the Hebrew tsori. Rauwolf and Pococke found the plant occurring at Joppa
Plants
Assuming that the tsori was a plant product, several plants have been proposed as its source.
Tsori[edit] In the Hebrew Bible, the balm of Gilead is tsori or tseri (צֳרִי or צְרִי). It is a merchandise in Gen. 37:28 and Ez. 27:17, a gift in Gen. 43:11, and a medicament (for national disaster, in fig.) in Jer. 8:22, 46:11, 51:8.[11] The Hebrew root z-r-h (צרה) means "run blood, bleed" (of vein), with cognates in Arabic (ﺿﺮﻭ, an odoriferous tree or its gum), Sabaean (צרו), Syriac (ܙܪܘܐ, possibly fructus pini), and Greek (στύραξ, in meaning).[12] The similar word tsori (צֹרִי) denotes the adjective "Tyrean", i.e. from the Phoenician city of Tyre.[13] Many attempts have been made to identify the tsori, but none can be considered conclusive. The Samaritan Pentateuch (Gen. 37:25) and the Syriac bible (Jer. 8:22) translate it as wax (cera). The Septuagint has ῥητίνη, "pine resin". The Arabic version and Castell hold it for theriac. Lee supposes it to be "mastich". Luther and the Swedish version have "salve", "ointment" in the passages in Jer., but in Ezek. 27:17 they read "mastic". Gesenius, Hebrew commentators (Kimchi, Junius, Tremellius, Deodatius), and the Authorized Version (except in Ezek. 27:17, rosin) have balm, balsam, Greek βάλσαμον, Latin opobalsamum.[3]
Balm of Gilead was a rare perfume used medicinally that was mentioned in the Bible, and named for the region of Gilead, where it was produced. The expression stems from William Tyndale's language in the King James Bible of 1611, and has come to signify a universal cure in figurative speech. The tree or shrub producing the balm is commonly identified as Commiphora gileadensis. Some botanical scholars have concluded that the actual source was a terebinth tree in the genus Pistacia.
Gift to King Solomon by the Queen of Sheba
Glamp
Jung was one of the first people to define introversion and extraversion in a psychological context. In Jung's Psychological Types, he theorizes that each person falls into one of two categories, the introvert and the extravert. Jung compares these two psychological types to the ancient archetypes Apollo and Dionysus. The introvert is likened to Apollo, who shines light on understanding. The introvert is focused on the internal world of reflection, dreaming and vision. Thoughtful and insightful, the introvert can sometimes be uninterested in joining the activities of others. The extravert is associated with Dionysus, interested in joining the activities of the world. The extravert is focused on the outside world of objects, sensory perception and action. Energetic and lively, the extravert may lose their sense of self in the intoxication of Dionysian pursuits.[77] Jungian introversion and extraversion are quite different from the modern idea of introversion and extraversion.[78] Modern theories often stay true to behaviourist means of describing such a trait (sociability, talkativeness, assertiveness etc.), whereas Jungian introversion and extraversion are expressed as a perspective: introverts interpret the world subjectively, whereas extraverts interpret the world objectively.
A blockchain,[1][2][3] originally block chain,[4][5] is a growing list of records, called blocks, which are linked using cryptography.[1][6] Each block contains a cryptographic hash of the previous block,[6] a timestamp, and transaction data (generally represented as a Merkle tree).
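The hash-linking the excerpt describes can be sketched in a few lines of Python. This is only an illustrative toy (the names `make_block` and `merkle_root` are my own, and real blockchains add consensus rules, signatures, and fixed binary serialization that are omitted here):

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(transactions):
    """Reduce a list of transaction strings to a single Merkle root hash."""
    level = [sha256(tx.encode()) for tx in transactions] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

def make_block(prev_hash, transactions, timestamp):
    """A block commits to the previous block's hash, a timestamp,
    and the Merkle root of its transaction data."""
    header = {
        "prev_hash": prev_hash,
        "timestamp": timestamp,
        "merkle_root": merkle_root(transactions),
    }
    # Hash the header itself (computed before the "hash" key is added).
    header["hash"] = sha256(json.dumps(header, sort_keys=True).encode())
    return header

# Build a tiny two-block chain.
genesis = make_block("0" * 64, ["alice->bob:5"], 1)
block2 = make_block(genesis["hash"], ["bob->carol:2", "carol->dave:1"], 2)
```

Because each header commits to the previous block's hash, altering any earlier block changes its hash and breaks the link to every block after it, which is what makes the list tamper-evident.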
wikipedia
Sartre argued that a central proposition of Existentialism is that existence precedes essence, which means that the most important consideration for individuals is that they are individuals—independently acting and responsible, conscious beings ("existence")—rather than what labels, roles, stereotypes, definitions, or other preconceived categories the individuals fit ("essence"). The actual life of the individuals is what constitutes what could be called their "true essence" instead of there being an arbitrarily attributed essence others use to define them. Thus, human beings, through their own consciousness, create their own values and determine a meaning to their life.[27]
While the predominant value of existentialist thought is commonly acknowledged to be freedom, its primary virtue is authenticity.[6] In the view of the existentialist, the individual's starting point is characterized by what has been called "the existential attitude", or a sense of disorientation, confusion, or dread in the face of an apparently meaningless or absurd world.[7] Many existentialists have also regarded traditional systematic or academic philosophies, in both style and content, as too abstract and remote from concrete human experience.[8][9]
If in observing the present state of the world and life in general, from a Christian point of view one had to say (and from a Christian point of view with complete justification): It is a disease. And if I were a physician and someone asked me “What do you think should be done?” I would answer, “The first thing, the unconditional condition for anything to be done, consequently the very first thing that must be done is: create silence, bring about silence; God's Word cannot be heard, and if in order to be heard in the hullabaloo it must be shouted deafeningly with noisy instruments, then it is not God’s Word; create silence! Ah, everything is noisy; and just as strong drink is said to stir the blood, so everything in our day, even the most insignificant project, even the most empty communication, is designed merely to jolt the senses and to stir up the masses, the crowd, the public, noise! And man, this clever fellow, seems to have become sleepless in order to invent ever new instruments to increase noise, to spread noise and insignificance with the greatest possible haste and on the greatest possible scale. Yes, everything is soon turned upside-down: communication is indeed soon brought to its lowest point in regard to meaning, and simultaneously the means of communication are indeed brought to their highest with regard to speedy and overall circulation; for what is publicized with such hot haste and, on the other hand, what has greater circulation than---rubbish! Oh, create silence!” Soren Kierkegaard, For Self-Examination 1851 p. 47-48 Hong 1990
How much that is hidden may still reside in a person, or how much may still reside hidden! How inventive is hidden inwardness in hiding itself and in deceiving or evading others, the hidden inwardness that preferred that no one would suspect its existence, modestly afraid of being seen and mortally afraid of being entirely disclosed! Is it not so that the one person never completely understands the other? But if he does not understand him completely, then of course it is always possible that the most indisputable thing could still have a completely different explanation that would, note well, be the true explanation, since an assumption can indeed explain a great number of instances very well and thereby confirm its truth and yet show itself to be untrue as soon as the instance comes along that it cannot explain-and it would indeed be possible that this instance or this somewhat more precise specification could come even at the last moment. Therefore all calm and, in the intellectual sense, dispassionate observers, who eminently know how to delve searchingly and penetratingly into the inner being, these very people judge with such infinite caution or refrain from it entirely because, enriched by observation, they have a developed conception of the enigmatic world of the hidden, and because as observers they have learned to rule over their passions. Only superficial, impetuous passionate people, who do not understand themselves and for that reason naturally are unaware that they do not know others, judge precipitously. Those with insight, those who know never do this. Soren Kierkegaard, Works of Love, (1847) Hong 1995 p. 228-229
This section particularly interests me, this is more or less how my brain operates, the trains of thought, the natural inclination to analyze life by thinking, thinking of others, assumptions I make, others make. What is the truth? Is there a truth?
What I really need is to get clear about what I must do, not what I must know, except insofar as knowledge must precede every act. What matters is to find a purpose, to see what it really is that God wills that I shall do; the crucial thing is to find a truth which is truth for me, to find the idea for which I am willing to live and die.
One must first learn to know himself before knowing anything else (γνῶθι σεαυτόν). Not until a man has inwardly understood himself and then sees the course he is to take does his life gain peace and meaning; only then is he free of that irksome, sinister traveling companion — that irony of life, which manifests itself in the sphere of knowledge and invites true knowing to begin with a not-knowing (Socrates) just as God created the world from nothing. But in the waters of morality it is especially at home to those who still have not entered the tradewinds of virtue. Here it tumbles a person about in a horrible way, for a time lets him feel happy and content in his resolve to go ahead along the right path, then hurls him into the abyss of despair. Often it lulls a man to sleep with the thought, "After all, things cannot be otherwise," only to awaken him suddenly to a rigorous interrogation. Frequently it seems to let a veil of forgetfulness fall over the past, only to make every single trifle appear in a strong light again. When he struggles along the right path, rejoicing in having overcome temptation's power, there may come at almost the same time, right on the heels of perfect victory, an apparently insignificant external circumstance which pushes him down, like Sisyphus, from the height of the crag. Often when a person has concentrated on something, a minor external circumstance arises which destroys everything. (As in the case of a man who, weary of life, is about to throw himself into the Thames and at the crucial moment is halted by the sting of a mosquito.) Frequently a person feels his very best when the illness is the worst, as in tuberculosis. In vain he tries to resist it but he has not sufficient strength, and it is no help to him that he has gone through the same thing many times; the kind of practice acquired in this way does not apply here. (Søren Kierkegaard's Journals & Papers IA Gilleleie, 1 August 1835)
In the early months of 1951, public declarations from Dwight D. Eisenhower and other American military brass followed, to the effect of there being 'a real difference between the German soldier and Hitler and his criminal group.'[11]
The thinking was that the troops and the leader needed to be alienated from one another, and therefore, the troops needn't be brought to trial and would be integrated back into society? In other words, force apologies between the citizenry and the troops? Move on...
Annotations can be considered an additional layer with respect to comments. Comments are published by the same publisher who hosts the original document. Annotations are added on top of that, but may eventually become comments which, in turn, may be integrated into a further version of the document itself.
Can comments (评论) also be annotations (注解)?
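The layering the excerpt describes can be sketched as a toy data model. The class and field names here are illustrative, not any real annotation API:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """The publisher's record: comments live inside it, hosted by the publisher."""
    url: str
    body: str
    comments: list = field(default_factory=list)

@dataclass
class Annotation:
    """An independent record layered on top: it only points at the document,
    so it can be created without the publisher's cooperation."""
    target_url: str   # which document it annotates
    exact: str        # the highlighted text it anchors to
    note: str

doc = Document("https://example.org/post", "Original text of the document.")
ann = Annotation(doc.url, "Original text", "This phrasing could be clearer.")

# An annotation may later be "promoted" into a comment hosted with the document:
doc.comments.append(ann.note)
```

The design point is the direction of the reference: a comment is stored inside the publisher's document, while an annotation holds only a pointer to it, which is why annotations can exist as a separate layer.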
community ban of Abd from English Wikipedia
I've seen this before. Contrary to what the Smith brothers claim, I have not been community banned. This ban is like another that they also cite: if I retire and state that I'm not going to edit any more as long as a certain abusive situation continues, they then ban me. "You can't quit, you're fired!" This is all social dysfunction.
strong consensus
Wikipedia pretends that the community does not vote; rather, decisions are to be made on the strength of arguments, which can be quite subjective. When involvement is considered, the consensus was far weaker than he suggested. What I would say is that the ban was within his discretion as a closer. He was correct that the discussion was a waste of time. Subsequent history proved this, because of something that few, if any, in the discussion seemed to realize: if the goal was to prevent disruption, declaring a ban would not prevent it. I had clearly abandoned any further participation on Wikipedia, but if I did intend to continue editing, being banned would not have any effect. Not being banned might make it easier to revert edits, but nobody seems to have noticed that no reversion was necessary for what I did. So what this discussion did for me was to demonstrate how hopeless the Wikipedia community was. And not just the Wikipedia community; it was the generic wiki community. I did work for some years on Wikiversity, and there were some exciting possibilities there, but it would take a community effort and vigilance to create them, and I never managed to inspire that, though I did accomplish a lot. When I realized that Wikiversity was vulnerable to what the founder of Wikiversity called "Wikipedia Disease" (I called it wiki disease, because it can happen on any wiki without protective structure, and attempts to create protective structure will be resisted by the oligarchy that has formed), I concluded that existing wikis are intrinsically dangerous. Something else is needed. I see no sign that Wikipedia will be able to break the paralysis that increasingly afflicts it.
And I am so glad that I bailed when I did. It was a simple move, and made my life far easier than struggling with that mess.
reinstated via discretionary sanctions
I don't think that was true.
antithetic to the concepts of Wikipedia.
I don't know what talk page, but "Wikipedia" began with a set of ideas and ideals that were not intended to be fixed, hence WP:IAR. However, that's where it went, such that this person could believe that "the concepts of Wikipedia" are a fixed thing. That is the beginning of death, but I do not know how long it will take.
none of his other socks have been blocked
Yes. I had disclosed socks. I don't recall what I said then, but I just checked: there is such a sock, still unblocked. I stopped the experiment because the purpose had been achieved and, yes, it was causing collateral damage. That he thought I was delighting in it was his projection, and that this fellow would say so demonstrates what is all too common. ABF rules.
Support
involved. sad case.
IP editing before and after his block
No, not true. I did not sock to evade blocks. I openly socked, disclosing the edits, initially on my talk page, which was quickly blocked (and, yes, I know the policy and why), then on Wikiversity, on a user study page. The IP socking was from 2 May to 8 May. By the end of that, the enforcement, mostly by T. Canens, as I recall, was becoming draconian, causing collateral damage. I was done; I had collected evidence that these idiots completely overlook, without harming anyone or any content, actually making positive contributions. Then I created one sock. What I wanted to observe was how a neutral editor, carefully avoiding disruption, would be treated. I found out: the old protective policies were dead. But, in any case, I did not disclose that account at first, for obvious reasons. The account EnergyNeutral made 98 edits from May 19 to May 31, and then went on "wikibreak." It was blocked on 3 June. There were no further edits of Wikipedia by me. Calling a period of editing of one week, 19 edits, with a defined purpose, designed to minimize disruption, "extensive editing" was misleading, but misleading evidence and arguments are routine on Wikipedia from administrators and editors in good standing.
There is no adult supervision. JzG, however, was recently reprimanded; his mojo must not be working, and he has been gone for about a month, from previous intense activity. But if history is any guide, he will realize that he can ignore this and carry on as if nothing happened, with maybe only a tiny amount of caution. At this rate, perhaps before he dies, he will stop telling users to fuck off.
block evasion
If block evasion had been my purpose, I would have created an account from the beginning and disguised my IP. Instead, this was all open, and very restricted and time-limited. I never claimed that the admin had no right to block me. In fact, I made it trivially easy to identify my edits.
When a banned user who was popular with the faction made edits under ban that were "harmless spelling corrections," many of the same users going after me thought it was ridiculous to block someone for making harmless edits. I invented self-reversion as a way that he could make those spelling corrections, easily, and without complicating ban enforcement, if the policy had been amended to legitimate this. I proposed it, and the proposal sat there for a time until someone actually used self-reversion, and then the screams rose up: "A ban is a ban is a ban." And to hell with WP:IAR, and to hell with encouraging cooperation.
That user -- it was Science Apologist -- angrily rejected the proposal because "why should he revert a perfectly good edit," and the answer was meaningless to him: "to make ban enforcement uncomplicated by cooperating with the ban." He did not want to cooperate with the ban; he wanted to make any admin who reverted him look silly. I then confronted this and he was site-banned for a time. This guy, though highly disruptive, has a faction that just loves him. ArbCom site-banned him, he came back through a community discussion, and he went right back to his old behavior, which is usually accepted because he has friends.
chronic editing through IP accounts
For a very short time, and in a way designed to cause minimal disruption. There is almost no notice of that. There was only disruption from the enforcement effort, and that only for a short time. This was actually a demonstration of how enforcement can cause more disruption than the original problem, but admins don't want to look at that.
I take words out of context
An experienced Wikipedia administrator does not know how to assume good faith. It is all too common. T. Canens took what Silverseren had written and interpreted it in a way that was not implied.
POV-warring
I never did that.
single sock
He noticed!
trying to point a finger and laugh at him
In fact, fast-forward to 2017-2019: this ban is pointed to as proof of how disruptive I was, by people allied with JzG and that whole faction. There was no necessity for this ban; it did nothing but allow others to claim I was not merely indef blocked, but "banned." It did not prevent one edit.
Support
Very, very involved
an edit-warring block
This would require that I edit war, which is not what I ever did.
s that it enables the formal 3RR shield for reversions.
What is brilliant about this is that the examples cited of bannable behavior were not edits that required any reversions at all, originally because they were self-reverted. Because those edits were also self-identified, ban enforcement was made easier. If these users had actually looked at that experiment with an eye toward seeing whether there was anything of value there, they'd have seen this. If I actually cared about this, I might end up being really pissed and then creating massive disruption. But I don't, and that is not ever what I have done.
Was that lifted from a political "trial" in Maoist China?
No, from a political trial on fascist Wikipedia. Now, here is how this works: a user who was highly involved in conflict with me, who in fact held a grudge and for years afterward commonly blamed me for everything wrong with the cold fusion article, filed a request to consider a ban. His friends piled in, and there were many comments in support from them. That then attracts other users who want to be a part of the community, or who are inclined to agree. Few of these will actually consider evidence; fewer still will consider contrary positions and ideas. So process on Wikipedia (discussions like this, without the protections of evidence collection in an RfC or ArbCom case, and without any protection against presenting even radically false arguments and deceptive evidence) will tend to go with the early responses. Reversal is unusual.
There is no penalty for being part of a pitchfork-wielding mob; I have never seen one, and I have seen maybe hundreds of ban discussions.
And these process defects are larded through Wikipedia. The adhocracy worked wonderfully for quick building of an encyclopedia, but not for making it reliable in any way. POV pushing is allowed if the POV is a popular one, which then pushes the project away from academic neutrality, toward a popular belief that what most editors believe is neutral, and any other POV is not. So you can push a skeptical position, can insult a medical professional as a quack, with no sanction, but if you point out that this is not a neutral comment, you might be dinged for POV-pushing. Identifying administrative abuse was crucial to the development of a neutral project, but the reality is that they shoot the messenger.
Wikiversity to refight old vendettas
No evidence of old vendettas was alleged; what is he talking about?
ignorance of anti-socking policy
I was fully aware of the policy.
COI material
Never on Wikipedia; I was topic-banned before the COI developed.
Support
very, very involved
Support
involved
rest of us to handle it
Why is someone knowing they are right (or incorrectly believing they are right, or appearing to be so) a cause requiring any handling at all?
Support
Very involved, prior conflict.
Ban policy (I checked) still requires a consensus of uninvolved editors. I confronted a rather large faction over administrative abuse. It is utterly unsurprising that they would vote for a ban.
This was, remember, a process in which I was not allowed to defend myself. It really made no difference to me what the outcome was. It did not stop me from editing; what stopped me was my choice not to waste any more time.
Support
very involved.
Support
involved, I think.
I'm not cooperating any more, period.
Yes. I finished some work of personal and larger interest, and then left Wikipedia entirely alone.
These fascists have no idea of how to create cooperation. It never occurs to them to ask a user what their intentions are. When they decide to unblock a user, commonly nothing is put in place to help the user stay unblocked. When ArbCom admonished JzG, there was absolutely no structure to monitor his behavior to ensure that he heeded the warning. Commonly, when a user is sanctioned, no help is put in place to guide them to more productive behavior. No, it's all about punishment, which is called "protective" but which is very inefficient at actual protection. As I wrote, if I had chosen to retaliate, I could have. Scibaby could easily create far more work than it took him to sock, once one learns how to do it. Yes, it may be necessary to block, etc., but for starters, Flagged Revisions would reduce the need for immediate attention. There is no coordination of monitoring of Recent Changes; it is all ad hoc and wastes vast amounts of editor time, probably fifty times the labor compared to what would be necessary with a little focus and a little developed responsibility. Wikipedia will eventually be eaten. That's my prediction. Meanwhile sane people move on, once they see the reality of the cabal. Meanwhile, ah, paid editing. Do they think that won't happen because they declare a policy against it? The system encourages paid editing, in fact, by not creating reliable review. If there were reliable review, there would be no problem with paid editing. It would be like news media prohibiting press releases. The early community had some great ideas, but little long-term vision. Not surprising, in fact.
were I to treat it as a battleground
which I never did.
With due process exhausted, my compliance becomes no longer a matter of obligation
That's correct. It becomes voluntary only. The full statement should be read. I stopped cooperating. However, that does not mean that I would continue to edit Wikipedia. Why should I care about sewage floating in a cesspool? Wikipedia can be beautiful at times, but overall it is not a useful place to work on content. Horrible, actually. Wikipedia process requires a willingness to engage in vastly inefficient process. It took weeks to get consensus for putting in one freaking link because JzG kept reverting it. So I created a review process. A lot of work. It showed that the community wanted the link, and, armed with that, he stopped removing it. Weeks of work for one link. And, of course, eventually it slid into the muck.
What I put in became this: https://en.wikipedia.org/w/index.php?title=Martin_Fleischmann&oldid=357342538#Conference_proceedings
This was not a "peer-reviewed" source; it was Fleischmann's own description of why he had undertaken his work. To use something like that requires consensus, and with substantial effort, consensus was obtained. So, for a time, readers of the article could find that source. One editor removed it in 2016, without discussion, as part of a much more extensive edit. Nobody noticed (because, remember, I was banned, but there were others who had that page watchlisted and who knew).
The edit was generally good. But this editor had no idea why that text was there, and likely did not look at the talk page archive. Wikis are unreliable, unless they have far better protective structure in place.