- Dec 2024
-
www.youtube.com
-
when you want to use Google, you go into Google search, and you type in English, and it matches the English with the English. What if we could do this in FreeSpeech instead? I have a suspicion that if we did this, we'd find that algorithms like searching, like retrieval, all of these things, are much simpler and also more effective, because they don't process the data structure of speech. Instead they're processing the data structure of thought
for - indyweb dev - question - alternative to AI Large Language Models? - Is indyweb functionality the same as Freespeech functionality? - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan - data structure of thought - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
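To make the contrast concrete, here is a minimal, hypothetical sketch of concept-level retrieval: queries and documents are mapped to language-independent concept IDs and matched there instead of string-to-string. The lexicon and concept names are invented for illustration and are not FreeSpeech's actual representation.

```python
# Toy concept-level search: match on concept IDs rather than surface strings.
# The lexicon and concept IDs below are invented for illustration.

LEXICON = {
    "dog": "C_DOG", "perro": "C_DOG",
    "eats": "C_EAT", "come": "C_EAT",
    "food": "C_FOOD", "comida": "C_FOOD",
}

def to_concepts(text: str) -> set[str]:
    """Map a string of words to the set of concepts it expresses."""
    return {LEXICON[w] for w in text.lower().split() if w in LEXICON}

def concept_search(query: str, documents: list[str]) -> list[str]:
    """Return documents that share at least one concept with the query,
    regardless of the surface language either side used."""
    q = to_concepts(query)
    return [d for d in documents if q & to_concepts(d)]

docs = ["the dog eats food", "el perro come comida"]
print(concept_search("dog food", docs))  # matches both, across languages
```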
Tags
- data structure of thought - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
- indyweb dev - question - alternative to AI Large Language Models? - Is indyweb functionality the same as Freespeech functionality? - from TED Talk - YouTube - A word game to convey any language - Ajit Narayanan
-
-
medium.com
-
why is it that we’re not focusing on those movements as the source of our strength and our organizing? It’s because we have a discourse framed around elite policy institutions that make them the primary actors and the coordination of mostly market mechanisms
for - climate crisis - climate communications - large social movements fizzle out - first framing element - elite policy institutions and businesses are seen as the primary actors - Joe Brewer
-
- Nov 2024
-
www.columbia.edu
-
The fossil fuel industry is a significant contributor to the Big Green organizations, and many of these organizations are financially invested in renewables and fossil fuels, so they do not want to see nuclear power as a competitor.
for - climate crisis - large green organizations in bed with fossil fuel companies - to squeeze out nuclear - question - Can Jim Hansen name names? - Jim Hansen
-
-
www.youtube.com
-
Cloud capital Silicon Valley in the United States and finance don't work together. Apple Pay exists, Google Pay exists, so you can pay through Apple Pay, but a large segment of that money goes to Wall Street as a rent. So there is something like a class war, or a feudal war, between the fiefdom of Wall Street on the east coast and the fiefdom of cloud capital on the west coast; they're clashing. That clash doesn't happen in China, because both your finance sector and your big tech or cloud capital sector are under the Communist Party.
for - difference - China - US - tech and finance sector clash - silicon valley profits pay large rent to Wall street - In China, they harmonize - Yanis Varoufakis
-
-
www.youtube.com
-
I would say the epigenetic inheritance that has to occur there and how it occurs must be contributing a very large fraction indeed to the differentiation process
for - answer - Denis Noble - to Michael Levin - question - What percentage of genetic vs non-genetic information passed down to germ line from embryogenesis onwards ? - a very large fraction is epigenetic inheritance indeed.
-
-
www.youtube.com
-
in the data center you're dealing with things at the microsecond or millisecond scale; when you move out to the edges of the network you're dealing with seconds and minutes
for - IPFS - etymology - Inter Planetary - designing to avoid large network delay differences over long distances - Juan Benet
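To see why the "Inter Planetary" framing forces delay-tolerant design, a quick light-delay calculation with approximate distances: even a perfect link to Mars has minutes of one-way latency, so nothing can assume datacenter-style round trips.

```python
# One-way light delay to Mars; distances are rough approximations.
C = 299_792_458          # speed of light, m/s
AU = 1.496e11            # astronomical unit, m

for label, dist_au in [("Mars at closest", 0.38), ("Mars at farthest", 2.67)]:
    one_way_min = dist_au * AU / C / 60
    print(f"{label}: one-way light delay ≈ {one_way_min:.1f} minutes")
# Roughly 3 to 22 minutes one way: interactive request/response is hopeless,
# which is one argument for content addressing and local-first replication.
```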
-
- Oct 2024
-
www.theguardian.com
-
In 2023, soils and land plants absorbed almost no CO2. This collapse of the land sinks, caused above all by droughts and wildfires, was scarcely foreseen at this scale, and it is not clear whether a recovery will follow. It calls into question climate models as well as most national plans for reaching CO2 neutrality, because those rely on natural sinks on land. There are signs that rising temperatures are now also weakening the oceans' capacity to take up CO2. Overview article with links to studies: https://www.theguardian.com/environment/2024/oct/14/nature-carbon-sink-collapse-global-heating-models-emissions-targets-evidence-aoe
Tags
- 2023
- Low latency carbon budget analysis reveals a large decline of the land carbon sink in 2023
- Weakening of the terrestrial carbon sinks
- A warming climate will make Australian soil a net emitter of atmospheric CO2
- French Laboratory of Climate and Environmental Sciences
- Pierre Friedlingstein
- Tim Lenton
- Johan Rockström
- by: Patrick Greenfield
- Andrew Watson
- date::2024-10-14
- Weakening of the marine carbon sinks
- The enduring world forest carbon sink
- Global Carbon Budget
- The role of forests in the EU climate policy: are we on the right track?
- Philippe Ciais
- Impact of high temperature heat waves on ocean carbon sinks: Based on literature analysis perspective
-
-
-
Never before has the CO2 concentration in the atmosphere risen as sharply as in the past year, namely by 3.37 parts per million (ppm). The concentration now stands at 422 ppm. This increase was caused above all by the very low CO2 uptake of ocean and land sinks. https://taz.de/Hiobsbotschaft-fuers-Klima/!6040258/
-
- Sep 2024
-
metagov.org
-
https://metagov.org/projects/koi-pond
Metagov's KOI (Knowledge Organization Infrastructure) is a graph database that supports relationships between knowledge objects, users, and groups within Metagov. via JM
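A minimal sketch of such a graph, with knowledge objects, users, and groups as nodes and typed relationships as edges. The node labels and relation names below are invented for illustration and are not KOI's actual schema.

```python
# Toy knowledge graph: typed edges between users, groups, and documents.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # (subject, relation) -> set of objects

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.edges[(subject, relation)].add(obj)

    def related(self, subject: str, relation: str) -> set:
        return self.edges[(subject, relation)]

kg = KnowledgeGraph()
kg.add("user:alice", "member_of", "group:metagov")
kg.add("user:alice", "authored", "doc:koi-notes")
kg.add("doc:koi-notes", "cites", "doc:governance-paper")
print(kg.related("user:alice", "authored"))  # {'doc:koi-notes'}
```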
-
- Aug 2024
-
arxiv.org
-
for - Indyweb dev - large language model for - constructing causal loop diagrams - System Dynamics Bot - large language model - constructing causal loop diagrams
-
-
www.derstandard.de
-
- Jul 2024
-
-
docdrop.org
-
that's part of the logic of agriculture, isn't it? You have a lot of work to do, so you need a lot of people. But there again you're in a progress trap, or a treadmill: you need more children so you can work more land, and then that more land provides more food, so you have yet more children (00:48:57)
for - progress trap - the agricultural-large family positive feedback loop
progress trap - the agricultural large-family positive feedback loop - see the simulation sketch below
- Interesting to compare modern vs agricultural societies:
    - populations are dropping in most western countries around the contemporary world, yet
    - traditional agricultural societies had large families to tend to the large amount of agricultural work
- There is a potential progress trap in encouraging many large families on a limited land resource:
    - if you have larger families, you can cultivate more land
    - if you cultivate more land, you can have an even larger family
    - until you reach the point when the land has been exhausted and you are forced to reduce the population
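A toy simulation of that reinforcing loop, with invented coefficients: the population compounds each generation until cultivated land hits the finite limit, after which growth stalls.

```python
# Reinforcing loop: more people -> more land worked -> more food -> more
# people, bounded by finite land. All coefficients are invented.
LAND_LIMIT = 1000.0   # total arable land (arbitrary units)
population = 5.0      # initial farming population

for generation in range(12):
    land = min(population * 3.0, LAND_LIMIT)  # land this population can work
    food = land * 1.0                         # food the land yields
    population = food / 1.5                   # people that food can support
    print(f"gen {generation:2d}: land={land:7.1f} population={population:7.1f}")
# The population doubles each generation until land saturates at LAND_LIMIT,
# then stalls: the loop has hit its resource ceiling, i.e. the progress trap.
```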
-
- Jun 2024
-
docdrop.org
-
diluting the general royalty pool
for - progress trap - AI music - dilution of general royalty pool - due to large volume
-
-
www.liberation.fr
-
The global average temperature rose by 0.26 degrees Celsius from 2014 to 2023, considerably more than in the ten years before. The acceleration of global heating makes reaching the 1.5 degree target even harder. The new data were published in a study released for the interim climate conference in Bonn. https://www.liberation.fr/environnement/climat/le-rechauffement-climatique-engendre-par-lhumanite-a-un-rythme-sans-precedent-avertit-une-etude-scientifique-20240605_UP2TYIV67RC6VA4XMKHF45KZ5I/
-
- May 2024
-
meta.stackexchange.com
-
LLMs, by their very nature, don't have a concept of "source". Attribution is pretty much impossible. Attribution only really works if you use language models as "search engine". The moment you start generating output, the source is lost.
-
- Mar 2024
-
research.ibm.com
-
https://research.ibm.com/blog/retrieval-augmented-generation-RAG
PK indicates that folks using footnotes in AI are using RAG methods.
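A minimal sketch of the RAG-with-footnotes pattern: retrieve passages, pass them to the model as context, and cite the passages used. The corpus, the naive overlap scoring, and the stand-in for the model call are all invented for illustration, not any product's API.

```python
# Toy retrieval-augmented generation with source footnotes.
CORPUS = {
    "doc1": "RAG retrieves documents and feeds them to the model as context.",
    "doc2": "Attribution works because each answer cites the passages used.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(CORPUS.items(),
                    key=lambda kv: -len(q & set(kv[1].lower().split())))
    return ranked[:k]

def answer_with_footnotes(query: str) -> str:
    sources = retrieve(query)
    context = "\n".join(f"[{i+1}] {text}" for i, (_, text) in enumerate(sources))
    # A real system would call an LLM here with context + query; this
    # stand-in just shows the prompt structure and the citation trail.
    cites = ", ".join(f"[{i+1}] {doc_id}" for i, (doc_id, _) in enumerate(sources))
    return f"(answer grounded in:\n{context})\nSources: {cites}"

print(answer_with_footnotes("how does RAG attribution work?"))
```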
-
-
media.dltj.org
-
Actually, ChatGPT is INCREDIBLY Useful (15 Surprising Examples) by ThioJoe on YouTube, 8-Feb-2024
- 0:00 - Intro
- 0:28 - An Important Point
- 1:26 - What If It's Wrong?
- 1:54 - Explain Command Line Parameters
- 2:36 - Ask What Command to Use
- 3:04 - Parse Unformatted Data
- 4:54 - Use As A Reverse Dictionary
- 6:16 - Finding Hard-To-Search Information
- 7:48 - Finding TV Show Episodes
- 8:20 - A Quick Note
- 8:37 - Multi-Language Translations
- 9:21 - Figuring Out the Correct Software Version
- 9:58 - Adding Code Comments
- 10:18 - Adding Debug Print Statements
- 10:42 - Calculate Subscription Break-Even
- 11:40 - Programmatic Data Processing
-
- Jan 2024
-
arxiv.org
-
Hubinger et al. "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". arXiv:2401.05566v3, Jan 17, 2024.
Very disturbing and interesting results from a team of researchers from Anthropic and elsewhere.
-
-
cdn.openai.com
-
GPT-4 System Card. OpenAI, March 23, 2023.
-
-
www.technologyreview.com
-
- for: progress trap - AI, carbon footprint - AI, progress trap - AI - bias, progress trap - AI - situatedness
-
- Oct 2023
-
-
Introduction of RoBERTa, an improved pretraining and analysis approach for BERT NLP models.
-
-
arxiv.org
-
Wu, Prabhumoye, Yeon Min, Bisk, Salakhutdinov, Azaria, Mitchell and Li. "SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning". arXiv preprint arXiv:2305.15486v2, May 2023.
-
-
arxiv.org
-
Zecevic, Willig, Singh Dhami and Kersting. "Causal Parrots: Large Language Models May Talk Causality But Are Not Causal". In Transactions on Machine Learning Research, Aug, 2023.
-
-
www.gatesnotes.com
-
"The Age of AI has begun : Artificial intelligence is as revolutionary as mobile phones and the Internet." Bill Gates, March 21, 2023. GatesNotes
-
-
www.inc.com
-
Minda Zetlin. "Bill Gates Says We're Witnessing a 'Stunning' New Technology Age. 5 Ways You Must Prepare Now". Inc.com, March 2023.
-
-
arxiv.org
-
Feng, 2022. "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis"
Shared and found via Gowthami Somepalli (@gowthami@sigmoid.social) on Mastodon: "StructureDiffusion: Improve the compositional generation capabilities of text-to-image #diffusion models by modifying the text guidance by using a constituency tree or a scene graph."
-
-
arxiv.org
-
Training language models to follow instructions with human feedback
Original paper discussing the Reinforcement Learning from Human Feedback (RLHF) algorithm.
-
-
cdn.openai.com
-
GPT-2 Introduction paper
Language Models are Unsupervised Multitask Learners. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019).
-
-
arxiv.org
-
"Attention is All You Need" Foundational paper introducing the Transformer Architecture.
-
-
-
GPT-3 introduction paper
-
-
arxiv.org
-
"Are Pre-trained Convolutions Better than Pre-trained Transformers?"
-
-
arxiv.org
-
LaMDA: Language Models for Dialog Applications
"LaMDA: Language Models for Dialog Applications" Google's introduction of the LaMDA v1 Large Language Model.
-
-
-
Benyamin Ghojogh, Ali Ghodsi. "Attention Mechanism, Transformers, BERT, and GPT: Tutorial and Survey"
-
- Jul 2023
-
arxiv.org
-
LLAMA 2 Release Paper
-
-
arxiv.org
-
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le. "Towards a Human-like Open-Domain Chatbot". Google Research, Brain Team.
Defined the SSA (Sensibleness and Specificity Average) metric for chatbots, which Google's LaMDA paper later extended into SSI.
-
- Apr 2023
-
srush.github.io
-
The Annotated S4: "Efficiently Modeling Long Sequences with Structured State Spaces". Albert Gu, Karan Goel, and Christopher Ré.
A structured state-space approach to sequence modeling, an alternative to transformers
-
-
-
Efficiently Modeling Long Sequences with Structured State Spaces. Albert Gu, Karan Goel, and Christopher Ré. Department of Computer Science, Stanford University.
-
-
-
Bowman, Samuel R. "Eight Things to Know about Large Language Models." arXiv, 2023. https://arxiv.org/abs/2304.00612v1.
Abstract
The widespread public deployment of large language models (LLMs) in recent months has prompted a wave of new attention and engagement from advocates, policymakers, and scholars from many fields. This attention is a timely response to the many urgent questions that this technology raises, but it can sometimes miss important considerations. This paper surveys the evidence for eight potentially surprising such points:
1. LLMs predictably get more capable with increasing investment, even without targeted innovation.
2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment.
3. LLMs often appear to learn and use representations of the outside world.
4. There are no reliable techniques for steering the behavior of LLMs.
5. Experts are not yet able to interpret the inner workings of LLMs.
6. Human performance on a task isn't an upper bound on LLM performance.
7. LLMs need not express the values of their creators nor the values encoded in web text.
8. Brief interactions with LLMs are often misleading.
Found via: Taiwan's Gold Card draws startup founders, tech workers | Semafor
-
-
-
It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.
This isn't true. The Stochastic Parrots paper outlines other avenues for reining in the harms of language models like the GPTs.
-
- Mar 2023
-
arxiv.org
-
Ganguli, Deep, Askell, Amanda, Schiefer, Nicholas, Liao, Thomas I., Lukošiūtė, Kamilė, Chen, Anna, Goldie, Anna et al. "The Capacity for Moral Self-Correction in Large Language Models." arXiv, 2023. https://arxiv.org/abs/2302.07459v2.
Abstract
We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three different experiments, each of which reveal different facets of moral self-correction. We find that the capability for moral self-correction emerges at 22B model parameters, and typically improves with increasing model size and RLHF training. We believe that at this level of scale, language models obtain two capabilities that they can use for moral self-correction: (1) they can follow instructions and (2) they can learn complex normative concepts of harm like stereotyping, bias, and discrimination. As such, they can follow instructions to avoid certain kinds of morally harmful outputs. We believe our results are cause for cautious optimism regarding the ability to train language models to abide by ethical principles.
-
-
www.quantamagazine.org
-
It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
Unexpected emergent abilities in large language models
Larger models can complete tasks that smaller models can't. An increase in complexity can also increase bias and inaccuracies. Researcher Jason Wei has cataloged 137 emergent abilities of large language models.
-
-
web.archive.org
-
Dass das ägyptische Wort p.t (sprich: pet) "Himmel" bedeutet, lernt jeder Ägyptologiestudent im ersten Semester. Die Belegsammlung im Archiv des Wörterbuches umfaßt ca. 6.000 Belegzettel. In der Ordnung dieses Materials erfährt man nun, dass der ägyptische Himmel Tore und Wege hat, Gewässer und Ufer, Seiten, Stützen und Kapellen. Damit wird greifbar, dass der Ägypter bei dem Wort "Himmel" an etwas vollkommen anderes dachte als der moderne westliche Mensch, an einen mythischen Raum nämlich, in dem Götter und Totengeister weilen. In der lexikographischen Auswertung eines so umfassenden Materials geht es also um weit mehr als darum, die Grundbedeutung eines banalen Wortes zu ermitteln. Hier entfaltet sich ein Ausschnitt des ägyptischen Weltbildes in seinem Reichtum und in seiner Fremdheit; und naturgemäß sind es gerade die häufigen Wörter, die Schlüsselbegriffe der pharaonischen Kultur bezeichnen. Das verbreitete Mißverständnis, das Häufige sei uninteressant, stellt die Dinge also gerade auf den Kopf.
Google translation:
Every Egyptology student learns in their first semester that the Egyptian word p.t (pronounced pet) means "heaven". The collection of documents in the dictionary archive comprises around 6,000 document slips. In the ordering of this material one learns that the Egyptian heaven has gates and ways, waters and banks, sides, pillars and chapels. This makes it tangible that the Egyptians had something completely different in mind when they heard the word "heaven" than modern Westerners do, namely a mythical space in which gods and spirits of the dead dwell. The lexicographic analysis of such comprehensive material is therefore about far more than determining the basic meaning of a banal word. Here a slice of the Egyptian worldview unfolds in its richness and its strangeness; and naturally it is precisely the frequent words that designate the key concepts of pharaonic culture. The widespread misconception that what is frequent is uninteresting thus turns things exactly on their head.
This is a fantastic example of context creation for a dead language as well as for creating proper historical context.
-
In looking at the uses of and similarities between Wb and TLL, I can't help but think that these two zettelkasten represented the state of the art for what Large Language Models and some of the ideas behind ChatGPT now do.
-
-
www.inc.com
-
"There is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all."
Is there? By whom? Why industry only and not government, academia and civil society?
-
-
www.federalregister.gov
-
For example, when an AI technology receives solely a prompt [27] from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user.
LLMs meet Copyright guidance
See comparison later in the paragraph to "commissioned artist" and the prompt "write a poem about copyright law in the style of William Shakespeare"
-
-
dl.acm.org
-
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.
Would the argument here for stochastic parrots also potentially apply to or could it be abstracted to Markov monkeys?
-
-
www.nytimes.com
-
L.L.M.s have a disturbing propensity to just make things up out of nowhere. (The technical term for this, among deep-learning experts, is "hallucinating.")
-
-
www.technologyreview.com
- Feb 2023
-
nymag.com
-
More interesting or alarming or hilarious, depending on the interlocutor, is its propensity to challenge or even chastise its users, and to answer, in often emotional language, questions about itself.
Examples of Bing/ChatGPT/Sydney gaslighting users
- Being very emphatic about the current year being 2022 instead of 2023
- How Sydney spied on its developers
- How Sydney expressed devotion to the user and expressed a desire to break up a marriage
-
-
medium.com
-
-
Scaling a single VCS to hundreds of developers, hundreds of millions lines of code, and a rapid rate of submissions is a monumental task. Twitter’s monorepo roll-out about 5 years ago (based on git) was one of the biggest software engineering boondoggles I have ever witnessed in my career. Running simple commands such as git status would take minutes. If an individual clone got too far behind, it took hours to catch up (for a time there was even a practice of shipping hard drives to remote employees with a recent clone to start out with). I bring this up not specifically to make fun of Twitter engineering, but to illustrate how hard this problem is. I’m told that 5 years later, the performance of Twitter’s monorepo is still not what the developer tooling team there would like, and not for lack of trying.
-
In very large code bases, it is likely impossible to make a change to a fundamental API and get it code reviewed by every affected team before merge conflicts force the process to start over again.
-
-
english.stackexchange.com
-
wordcraft-writers-workshop.appspot.com
-
One of the most well-documented shortcomings of large language models is that they can hallucinate. Because these models have no direct knowledge of the physical world, they're prone to conjuring up facts out of thin air. They often completely invent details about a subject, even when provided a great deal of context.
-
The application is powered by LaMDA, one of the latest generation of large language models. At its core, LaMDA is a simple machine — it's trained to predict the most likely next word given a textual prompt. But because the model is so large and has been trained on a massive amount of text, it's able to learn higher-level concepts.
Is LaMDA really able to "learn higher-level concepts" or is it just a large, straightforward information theoretic-based prediction engine?
-
-
-
That's greater than taking all the humans who lived throughout time, multiplied by the number of grains of sand on Earth, multiplied by the number of atoms in the universe.
Wow, this is an excellent statement to help people imagine large numbers
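Rough arithmetic behind the comparison, using commonly cited magnitudes (my estimates, not figures from the annotated article): the product comes out around 10^110.

```python
# Back-of-envelope check with commonly cited estimates.
from math import log10

humans_ever = 1.1e11   # roughly 110 billion humans have ever lived
sand_grains = 7.5e18   # common estimate of grains of sand on Earth
atoms       = 1e80     # common estimate of atoms in the observable universe

product = humans_ever * sand_grains * atoms
print(f"product ≈ 10^{log10(product):.0f}")  # ≈ 10^110
```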
-
-
arxiv.org
-
Shanahan, Murray. "Talking About Large Language Models." arXiv, (2022). https://doi.org/10.48550/arXiv.2212.03551.
Found via Simon Willison.
Abstract
Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are. This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.
-
LLMs are generative mathematical models of the statistical distribution of tokens in the vast public corpus of human-generated text, where the tokens in question include words, parts of words, or individual characters including punctuation marks. They are generative because we can sample from them, which means we can ask them questions. But the questions are of the following very specific kind. "Here's a fragment of text. Tell me how this fragment might go on. According to your model of the statistics of human language, what words are likely to come next?"
LLM definition
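A toy version of the sampling loop the definition describes. The bigram table stands in for a real model's learned statistics and is invented for illustration.

```python
# Sample "what comes next" from a toy next-token distribution.
import random

# P(next_token | current_token): a stand-in for an LLM's learned statistics.
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {".": 1.0},
    "ran": {".": 1.0},
}

def sample_continuation(token: str, max_len: int = 5) -> list[str]:
    out = []
    while token in MODEL and len(out) < max_len:
        nxt, probs = zip(*MODEL[token].items())
        token = random.choices(nxt, weights=probs)[0]  # sample, don't argmax
        out.append(token)
    return out

print("the", " ".join(sample_continuation("the")))  # e.g. "the cat sat ."
```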
-
-
arstechnica.com
-
The breakthroughs are all underpinned by a new class of AI models that are more flexible and powerful than anything that has come before. Because they were first used for language tasks like answering questions and writing essays, they’re often known as large language models (LLMs). OpenAI’s GPT3, Google’s BERT, and so on are all LLMs. But these models are extremely flexible and adaptable. The same mathematical structures have been so useful in computer vision, biology, and more that some researchers have taken to calling them "foundation models" to better articulate their role in modern AI.
Foundation Models in AI
Large language models, more generally, are "foundation models". They got the "large language" name because language tasks were where they were first applied.
-
- Jan 2023
-
inst-fs-iad-prod.inscloudgate.net
-
"Talking About Large Language Models" by Murray Shanahan
-
- Dec 2022
-
docdrop.org
-
If we can't get food services to them, it becomes easier to break those large cities up into smaller communities that are more decentralized.
!- Futures Thinking : Maslow's Hierarchy framing for Food - may need to break up large cities to a network of smaller, decentralized communities, each responsible for their own food production
-
- Nov 2022
-
ecampusontario.pressbooks.pub
-
Large Group Discussions
Having large group discussions has several benefits for both lecturers and students. It requires greater involvement from the students than a typical lecture would need. It offers a low-pressure setting for evaluating learner knowledge and exemplifies the value of cooperation and shared knowledge building. Clear instructions are the first step in a successful large group discussion. These should include strategies to start conversation, encourage task understanding, and give enough time for summary and review.
Start by clearly defining the activity, such as how much time will be allotted for discussion and summarizing. Leaving enough time for summarizing at the end of a large group discussion is important. Before your session ends, make sure you have gone over your main points.
-
- Aug 2022
-
www.nytimes.com
-
Bloom, J., & Cobey, S. (2021, December 12). Opinion | A Scientist’s Guide to Understanding Omicron. The New York Times. https://www.nytimes.com/2021/12/12/opinion/covid-omicron-data.html
-
-
docs.gitlab.com
-
Epics, issues, requirements, and others all have similar but just subtle enough differences in common interactions that the user needs to hold a complicated mental model of how they each behave.
-
-
a16zcrypto.com
-
Nevertheless, designers can limit the value of attacks by limiting the scope of what governance can do
Semi-DAO?
-
- Dec 2021
-
www.theguardian.com
-
Press, A. (2021, December 18). Court rules Biden’s vaccine mandate for large employers can take effect. The Guardian. https://www.theguardian.com/us-news/2021/dec/18/court-rules-bidens-vaccine-mandate-for-large-employers-can-take-effect
Tags
- vaccine
- government
- COVID-19
- vaccine mandate
- requirement
- policy
- Omicron
- USA
- lang:en
- is:news
- large employer
- variant
- Biden
- protection
-
- Nov 2021
-
askubuntu.com
-
Perhaps not a good idea, in general, to use a random PPA for such sprawling software as a browser. Auditability near zero even if it is open source.
-
- Sep 2021
-
www.mdpi.com
-
Such scaled-up communication and collaboration processes would also require meta-design principles to collaboratively construct the required design rationale, media and environments [23].
-
Etzioni astutely observed that all communities have a serious defect: they exclude. To prevent communities from over-excluding, they should be able to maintain some limitations on membership, yet at the same time greatly restrict the criteria that communities may use to enforce such exclusivity. He therefore proposed the idea of “megalogues”: society-wide dialogues that link many community dialogues into one, often nation-wide conversation [7].
-
- May 2021
-
-
Government to launch 40,000 person daily contact testing study. (n.d.). GOV.UK. Retrieved 13 May 2021, from https://www.gov.uk/government/news/government-to-launch-40000-person-daily-contact-testing-study
-
- Mar 2021
-
www.thelancet.com
-
Mathur, Rohini, Laura Bear, Kamlesh Khunti, and Rosalind M. Eggo. ‘Urgent Actions and Policies Needed to Address COVID-19 among UK Ethnic Minorities’. The Lancet 396, no. 10266 (12 December 2020): 1866–68. https://doi.org/10.1016/S0140-6736(20)32465-X.
-
-
medium.com
-
When you look inside a node_modules directory, there’s likely hundreds if not thousands of packages, even for a relatively basic application.
-
-
www.chevtek.io
-
How are hundreds of dependencies and 28,000 files for a blank project template anything but overly complicated and insane?
-
-
sebotero.github.io
-
Bernheim, B. D., Buchmann, N., Freitas-Groff, Z., & Otero, S. (2020). The Effects of Large Group Meetings on the Spread of COVID-19: The Case of Trump Rallies. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3722299
-
- Feb 2021
-
www.cnbc.com
-
The incident demonstrates a type of power that Amazon wields almost uniquely because so many companies rely on it to deliver computing and data storage.
-
- Jan 2021
-
forums.theregister.com
-
See also BMW and Tesla owners. If Tesla does become the largest US-based carmaker, many of the buyers will, I'm sure, think of reasons to move onto something else.
-
definite good news, as it will hopefully have a ripple effect on crappy chipset makers, getting them to design and test their hardware with Linux properly, for fear of losing all potential business from Lenovo.
-
I suppose it means 2 things, first, you get official support and warranty, and second, the distros will be Secure Boot approved in the UEFI, instead of distro makers having to figuratively ask Microsoft for pretty please permission.
-
- Oct 2020
-
www.basefactor.com
-
If you want to implement a form with a superb User Experience, you have to take care of many variables:
Tags
- can't keep entire system in your mind at once (software development) (scope too large)
- user experience
- form design
- easy to get wrong
- a lot of things to consider
- difficult/hard problem
- too hard/difficult/much work to expect end-developers to write from scratch (need library to do it for them)
-
-
-
One of the primary tasks of engineers is to minimize complexity. JSX changes such a fundamental part (syntax and semantics of the language) that the complexity bubbles up to everything it touches. Pretty much every pipeline tool I've had to work with has become far more complex than necessary because of JSX. It affects AST parsers, it affects linters, it affects code coverage, it affects build systems. That's tons and tons of additional code that I now need to wade through and mentally parse and ignore whenever I need to debug or want to contribute to a library that adds JSX support.
Tags
- can't keep entire system in your mind at once (software development) (scope too large)
- implementation complexity
- infectious problem
- unintended consequence
- mental bandwidth
- far-reaching consequences
- engineering (general)
- too complicated
- the cost of changing something
- engineers
- mentally filter/ignore
- avoid complexity
- fundamental
- semantics (of programming language)
- syntax
- high-cost changes
- primary task/job/responsibility
- complexity
-
-
covid-19.iza.org
-
IZA – Institute of Labor Economics. ‘COVID-19 and the Labor Market’. Accessed 6 October 2020. https://covid-19.iza.org/publications/dp13690/.
-
- Sep 2020
-
github.com
-
I forgot to mention in the original issue way back that I have a lot of data. Like 1 to 3 MB that is being passed around via export let foo.
-
-
discuss.rubyonrails.org
-
I just created an empty Rails project, and `find node_modules -print | wc -l` gives me 18366 files!
-
-
github.com
-
Since re-rendering in Svelte happens at a more granular level than the component, there is no artificial pressure to create smaller components than would be naturally desirable, and in fact (because one-component-per-file) there is pressure in the opposite direction. As such, large components are not uncommon.
-
-
-
In my projects on Svelte, we adhere to a "budget" of 200 LOC per component, styles included. If a component exceeds this limit, we just move the styles out into a separate file using svelte-preprocess.
-
-
refactoring.guru
-
Eliminating needless classes frees up operating memory on the computer—and bandwidth in your head.
-
-
svelte.dev
-
Your styles are scoped to the component. No more leakage, no more unpredictable cascade.
-
It's fashionable to dislike CSS. There are lots of reasons why that's the case, but it boils down to this: CSS is unpredictable. If you've never had the experience of tweaking a style rule and accidentally breaking some layout that you thought was completely unrelated — usually when you're trying to ship — then you're either new at this or you're a much better programmer than the rest of us.
-
It gets worse when you're working on a team. No-one dares touch styles authored by someone else, because it's often unclear what they're doing, what markup they apply to, and what disasters will unfold if you remove them. The consequence of all this is the append-only stylesheet. There's no way of knowing which code can safely be removed, so it's common to undo some existing style with another, more specific style — even on relatively small projects.
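A sketch of the mechanism behind that scoping guarantee, assuming it works by rewriting selectors at compile time with a per-component class (conceptually what Svelte does; the exact output format below is invented):

```python
# Rewrite each selector with a class unique to this component, so its rules
# can only ever match the component's own markup. Illustrative only.
import hashlib

def scope_css(rules: dict[str, str], component_source: str) -> dict[str, str]:
    # Derive a stable per-component tag from the component's source text.
    tag = "s-" + hashlib.sha1(component_source.encode()).hexdigest()[:6]
    return {f"{selector}.{tag}": body for selector, body in rules.items()}

rules = {"p": "color: red;", ".title": "font-weight: bold;"}
print(scope_css(rules, "<p class='title'>hello</p>"))
# e.g. {'p.s-1a2b3c': 'color: red;', '.title.s-1a2b3c': 'font-weight: bold;'}
```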
-
- Jun 2020
-
signal.org
-
Some large tech behemoths could hypothetically shoulder the enormous financial burden of handling hundreds of new lawsuits if they suddenly became responsible for the random things their users say, but it would not be possible for a small nonprofit like Signal to continue to operate within the United States. Tech companies and organizations may be forced to relocate, and new startups may choose to begin in other countries instead.
-
- May 2020
-
ai.googleblog.com
-
Tsitsulin, A. & Perozzi B. Understanding the Shape of Large-Scale Data. (2020 May 05). Google AI Blog. http://ai.googleblog.com/2020/05/understanding-shape-of-large-scale-data.html
-
- Apr 2020
-
github.com
-
Other sites could absolutely spend time crawling for new lists of breached passwords and then hashing and comparing against their own. However this is an intensive process and I'm sure both Facebook and Google have a team dedicated to account security with functions like this.
-
Ultimately it comes down to how much time and money you can dedicate to keeping your users' accounts secure versus how important it is to do so. Google and Facebook accounts sit at the centre of many users' internet lives and would be devastating to lose. Same for most email accounts.
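A site that doesn't want to run its own breach-crawling pipeline can instead query Have I Been Pwned's Pwned Passwords range API, which is built for exactly this and, thanks to k-anonymity, only ever sees the first five hex characters of the password's SHA-1. A sketch per HIBP's public docs, not hardened code:

```python
# Check a password against known breaches via the HIBP range API.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]          # only the prefix is sent
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(breach_count("password123"))  # nonzero: it appears in known breaches
```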
-
-
accessmedicine.mhmedical.com
-
Patients with persistent pneumothorax, large air leaks after tube thoracostomy, or difficulty ventilating should undergo fiber-optic bronchoscopy to exclude a tracheobronchial injury or presence of a foreign body.
-
- Mar 2020
-
www.iubenda.com
-
Furthermore, one should also consider that publishers, a category including natural persons and SMEs, are often the "weaker" party in this context. Conversely, third parties are usually large companies of substantial economic import that work as a rule with several publishers, so that one publisher may often have to deal with a considerable number of third parties.
-
- Jan 2019
-
blog.acolyer.org
-
For large-scale software systems, Van Roy believes we need to embrace a self-sufficient style of system design in which systems become self-configuring, self-healing, self-adapting, and so on. The system has components as first-class entities (specified by closures) that can be manipulated through higher-order programming. Components communicate through message-passing. Named state and transactions support system configuration and maintenance. On top of this, the system itself should be designed as a set of interlocking feedback loops.
This is aimed at System Design, from a distributed systems perspective.
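A minimal sketch of that style under stated assumptions: components are first-class closures over their own named state, they communicate only by message passing, and a monitor closes a feedback loop over a worker. The queue wiring and thresholds are invented for illustration, not Van Roy's own code.

```python
# Components as closures, message passing via queues, one feedback loop.
import queue

def make_worker(inbox: queue.Queue, outbox: queue.Queue):
    state = {"rate": 10}                   # named state, private to the closure
    def step():
        kind, value = inbox.get()
        if kind == "tune":                 # apply feedback from the monitor
            state["rate"] = value
        outbox.put(("load", state["rate"]))
    return step

def make_monitor(inbox: queue.Queue, outbox: queue.Queue):
    def step():
        kind, load = inbox.get()
        if kind == "load" and load > 5:    # observe, then push a correction
            outbox.put(("tune", load - 2))
    return step

to_worker, to_monitor = queue.Queue(), queue.Queue()
worker = make_worker(to_worker, to_monitor)
monitor = make_monitor(to_monitor, to_worker)

to_worker.put(("tune", 10))
for _ in range(4):                         # the loop settles as load drops
    worker()
    monitor()
    print("pending feedback:", list(to_worker.queue))
```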
-
- Oct 2017
-
flt.flippedlearning.org
-
In this FLT article, I am introducing a new pedagogy I call the Pedagogy of Retrieval. This is the pedagogy I use to try to interrupt the automatic use of lower potential learning strategies in my flipped classrooms at The University of Texas at Austin, and it is built on the collective body of research and efforts of my colleagues mentioned above.
-
- Jun 2017
-
impact.oregonstate.edu
-
Clark's work with developmental math is part of a bigger transformation going on at Oregon State. A three-year, $515,000 initiative funded by an Association of Public and Land-grant Universities (APLU) grant is enabling educators to overhaul eight high-enrollment general education courses with adaptive and interactive learning systems.
-
- Mar 2017
-
en.wikipedia.org
-
for not very large numbers
Would an approach using the Sieve of Eratosthenes work better for very large numbers? Or would the best shot be a probabilistic primality test?
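One plausible answer, sketched below: a sieve is the right tool when you need all primes up to a modest bound, while for a single very large candidate a probabilistic test such as Miller-Rabin is the usual choice. The round count here is a conventional default, not a tuned value.

```python
# Sieve of Eratosthenes for "all primes up to n" vs Miller-Rabin for
# "is this one huge n prime?"
import random

def sieve(n: int) -> list[int]:
    """All primes <= n; memory and time grow with n, so only for small n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return [i for i, ok in enumerate(is_prime) if ok]

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: fast even for huge n, with tiny error probability."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

print(sieve(30))                     # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(is_probable_prime(2**61 - 1))  # True: 2^61 - 1 is a Mersenne prime
```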
-
- Jan 2014
-
onlinelibrary.wiley.com
-
The creation and exploitation of large-scale quantitative atlases will lead to a more precise understanding of development.
large-scale quantitative atlases lead to more precise understanding
-
Just as comprehensive datasets of genomic sequence have revolutionalized biological discovery, large-scale quantitative measurements of gene expression and morphology will certainly be of great assistance in enabling computational embryology in the future. Such datasets will form the essential basis for systems level, computational models of molecular pathways and how gene expression concentrations and interactions alter to drive changes in cell shape, movement, connection, and differentiation. In this review, we discuss the strategies and methods used to generate such datasets.
-