- Mar 2023
-
www.filmsforaction.org
-
The European materialist tradition of despiritualizing the universe is very similar to the mental process which goes into dehumanizing another person. And who seems most expert at dehumanizing other people? And why? Soldiers who have seen a lot of combat learn to do this to the enemy before going back into combat. Murderers do it before going out to commit murder. Nazi SS guards did it to concentration camp inmates. Cops do it. Corporation leaders do it to the workers they send into uranium mines and steel mills. Politicians do it to everyone in sight. And what the process has in common for each group doing the dehumanizing is that it makes it all right to kill and otherwise destroy other people. One of the Christian commandments says, "Thou shalt not kill," at least not humans, so the trick is to mentally convert the victims into nonhumans. Then you can proclaim violation of your own commandment as a virtue.
- Despiritualization
- Definition
- Comment
- This is a salient term for modernity's propensity for objectification, which uses language and acculturation to remove the sacred from lived experience:
- Means says that the European materialist tradition of despiritualizing the universe is very similar to the mental process which goes into dehumanizing another person. This is also the process of despiritualizing nature so that we can plunder her. It is the raison d'être for objectifying nature in the scientific / technological / industrialist / globalist capitalist supply chain. Means provided some examples:
- Soldiers do it to kill an enemy soldier.
- We do it to eat food.
- Murderers do it before killing.
- Nazi SS guards did it to inmates.
- Cops do it to those they arrest.
- Corporation leaders do it to their workers and to the environment.
- Politicians do it to everyone.
- Factory farming takes away the individuality and recognition of each unique, living being and commodifies them all by replacing each life with the generic label "food" (author's addition).
- Means says that for each group doing the despiritualization, it makes it all right to kill and otherwise destroy other people/species.
- Means further says:
- One of the Christian commandments says, "Thou shalt not kill," at least not humans,
- so the trick is to mentally convert the victims into nonhumans.
- Then you can proclaim violation of your own commandment as a virtue.
- In most indigenous traditions, if not all, prayer is given before a meal.
- Prayer can be seen as a spiritualization practice that recognizes that we, as participants in life, must take some other life in order to sustain our own.
- It is the practice of recognizing the built-in cruelty of life.
- If taking another living being's life is the ultimate transgression, and we must commit that murderous act many times a day in order to survive, meal prayer establishes a direct connection with the individual plant or animal that has made the ultimate sacrifice and has forfeited its life so that we may continue ours.
- Meal prayer is therefore, in the context of Deep Humanity practice, a BEing journey of continuous gratitude.
-
- Feb 2023
-
docdrop.org
-
morton suggests our worlds are perforated there they're not 00:03:54 complete in them in and of themselves so this sensory screen actually has holes in it and the worlds of others are also perforated and they these worlds are 00:04:07 constantly leaking into and out of one another so it's not that the sensory screen isn't there it's not that the ego isn't projecting but the screen has 00:04:17 holes in it and we're all inextricably bound up our fragile worlds are constantly overlapping with one another bumping into one another
- Morton suggests our worlds are perforated and constantly leaking into each other
- Comment
- this is equivalent to the SRG / Deep Humanity idea of the multi-meaningverse
-
-
linkingmanifesto.org
-
Manifesto for Ubiquitous Linking
Some interesting early signers here... Brett Terpstra, Frode Alexander Hegland, Mark Bernstein (Tinderbox)...
-
- Jan 2023
-
docdrop.org
-
human-devised measures of time hold within them powerful political and economic forces they track people within the patterns of activity they become habituated to machine time measured and parceled out by industrial society
!- comment : Deep conditioning
-
-
humansandnature.org
-
Regarding climate change, it is as if humanity stands poised before two buttons: one is an economic and cultural reset, while the other triggers a self-destruct sequence. As a community of nations, we can’t seem to agree on which is which. Or, even if we did, we don’t seem to have the collective political will to stop those who seem intent on pushing the self-destruct button—in order, they say, to protect our liberty.
!- comment : the need to spiral towards an INCLUSIVE sacred - science and religion are not opposites, but seek the sacred from different avenues - humanity has collectively evolved towards this polycrisis and fragmented worldviews must find their common human denominators and unite in an INCLUSIVE global commons and citizenship
-
The old humanisms have not so much died as faded away. Alarms about the danger of climate change have been sounded now for so long that urgency is also fading, not the objective reasons for urgency—they burn more brightly than ever—but the willingness or even capacity in many societies to feel it.[19] Indeed, countless Americans won’t submit to a Covid-19 vaccine even though this fundamental gesture of solidarity makes one’s life not merely safer, but better. But as Secretary General Guterres reminds us, there will be no vaccine for climate change. Americans might not take it even if there were. Where can we find the subjunctive politics we need?
!- comment : Old Humanism - Deep Humanity is the intentional examination of the deepest assumptions of our humanity, to exclude nothing and include all the contradictions of a consciousness examining its social reality - The sacred is our birthright, but it has been abandoned, leaving us in the lurch. A world lived without a living, breathing wisdom of the sacred is a dead world, and that is the world we have created
-
The posture of democratic citizenship is avowal of rights and obligations of membership in a civic community. The rationale for this is the moral and political goodness of a civic way of living and the shared promise of human self-realization through interdependence. As such it is the exemplary, most inclusive form of membership; it is a precondition for the sustainability in the modern secular era of other expressions of membership in our lives—social, economic, kinship, familial, and intimate.[17] Again, citizenship avows—makes a vow, takes on a trust—on behalf of a future of moral and political potential toward which it is reasonable to strive. Citizenship is iterative and ongoing; it provides continuity and provokes innovation; each generation of democratic citizens begins a new story of the demos and continues an ongoing one.[18]
!- key finding : citizenship is a trusteeship - in which the individual takes on responsibility to participate in upholding the mutually agreed principles and promises leading to collective human self-realization - the individual works with others to collectively realize this dream which affects all individuals within the group
!- implement : TPF / DH / SRG - implement this education program globally as part of Stop Reset Go / Deep Humanity training that recognizes the individual / collective entanglement and include it in the Tipping Point Festival as well
-
Let me pose the question in the following way: Is the condition of autonomia fulfilled or undermined by the condition of sumbiōsis? Could it be that autos and sumbios—the most fully realized, best self and the companion—are two sides of the same coin; that is to say, entangled?
!- comment : autonomy and symbiosis entangled - this goes to the heart of Deep Humanity, the entangled individual / collective
-
Such relational practices of recognition avow that concern and respect are due to others as persons of inherent, not simply instrumental, worth.
!- inherent worth : each person is sacred !- comment : treating ALL human (and non-human) beings as sacred and not just transactional or instrumental is a key starting point - practice of Deep Humanity
-
new way of seeing could lead to loss of dignity, oppression, and even greater inequality; there are many historical examples of that.[10] But there is also the open horizon of new ways of being that are more humane, more authentic, more just. This horizon is what political theorist William Connolly refers to when he says: “Today perhaps it is wise to try to transfigure the old humanisms that have played important roles in Euro-American states into multiple affirmations of entangled humanism in a fragile world.”[11]
!- quotable : William Connolly !- comment - Deep Humanity?
Tags
- citizen education
- entangled autonomy / symbiosis
- instrumental worth
- Tipping Point Festival
- Deep Humanity
- entangled individual/civilizational journey
- William Connolly
- quotable sentence
- DH
- inclusive global commons
- sacred
- inherent worth
- entangled individual / collective
- transactional worth
- citizen empowerment
- Stop Reset Go
- TPF
- polycrisis
-
-
aeon.co
-
The deep Anthropocene: A revolution in archaeology has exposed the extraordinary extent of human influence over our planet's past and its future
!- Title : The deep Anthropocene - A revolution in archaeology has exposed the extraordinary extent of human influence over our planet’s past and its future !- Author : Lucas Stephens - researcher at archaeoGLOBE project
-
-
inst-fs-iad-prod.inscloudgate.net
-
"Talking About Large Language Models" by Murray Shanahan
-
-
docdrop.org
-
he regards 00:01:35 the idea of isolated individual as a myth what interested david much more was a dialogue he believed that it is only in dialogue in the class of opinions where answers are formed and how human consciousness is born we humans according to david are the product of our social relationship that's why it was so important for him to be involved in a situation in which 00:02:03 people think and act collectively and david grabber foundation will follow the same path
!- David Graeber Foundation : hosts Fight Club - this talk is on Debt with guests Michael Hudson and Thomas Piketty - David regarded the isolated individual as a myth - human consciousness is a product of social relationships
!- isolated individual mythology : comment - Deep Humanity praxis is aligned, seeing the deep entanglement between the individual and the collective(s) the individual is embedded within
-
-
www.imperial.ac.uk
-
We know the information. But information is not changing our minds. Most people make decisions on the basis of feelings, including the most important decisions in life – what football team you support, who you marry, which house you live in. That is how we make choices.” “Thought is at the basis of our feelings, and before we have ideas we have feelings that lead to those ideas. So how do we change minds? A change in feelings changes minds.”
!- "So how do we change minds? A change in feeling changes minds" : Comment - Brian Eno's comment is very well aligned with Deep Humanity praxis, which can be summed up as: The heart feels, the mind thinks, the body acts, an impact appears in our shared reality. - Also see the related story: - Storytelling will save the Earth: https://hyp.is/go?url=https%3A%2F%2Fwww.wired.com%2Fstory%2Fenvironment-climate-change-storytelling%2F&group=world
-
-
-
Success in handling the human element, like success in handling the materials element, depends upon knowledge of the element itself and knowledge as to how it can best be handled.
-
- Dec 2022
-
docdrop.org
-
i'm going to be doing a powerpoint presentation for which i apologize because i know you're probably sick and tired of these in the zoom world but we do need um to do that in order to 00:09:49 make things work
!- limitations of : current presentation technology !- question : why are people tired of PowerPoint presentations? - possibly because they are not truly interactive and are simplex (one-direction) communication - an alternative technology model is offered by Indyweb, which is based on the people-centered, interpersonal ecosystem founded on Deep Humanity principles of the individual / collective entanglement - The Indyweb / Deep Humanity model articulates a new language that is more aligned to a person without a self: it recognizes the human being (noun) as a process (verb) related to the entangled individual / collective
-
-
docdrop.org
-
you're then talking about 15 to 25 cuts and Emissions a year on year for 00:33:27 developed countries which sounds impossible um but if we started earlier it would have been much simpler to do um but the equity part I think gives us real scope there because within our countries there are huge differences in 00:33:40 in our emissions um if we wanted to live on Paris we're going to need to reconsider what does growth mean what's progress what is development we have to ask these sorts of questions about our society and we don't have a long time to answer them
!- key point : developed countries faced with 15 to 25 percent annual decarbonization - unheard of, but we left it too late with our decades of procrastination, and we are still procrastinating in the same way!
-
-
docdrop.org
-
I don't know how this will look like. What I do think is it will come to cultural identity. What is the cultural identity? And that's what we will all gravitate to, and we'll gravitate.
!- future global fragmentation : by culture - Michaux believes people will fragment in the future along cultural boundaries as we move through a tumultuous transition. This makes sense as ingroups will naturally form - this should be further explored to understand the implications: - will we get political polarization? At what level? National, regional, city / community scale? - what implications will this have on cooperation and sharing? Will it create policy gridlock? Will it become even more urgent to educate everyone on a Deep Humanity type of open praxis that finds common human denominators (CHD)?
-
- Nov 2022
-
nlp.seas.harvard.edu
-
Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence.
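For concreteness, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. It is illustrative only: the function and variable names (self_attention, w_q, w_k, w_v) are my own, and the Annotated Transformer itself implements the same idea in PyTorch with multi-head attention and masking.

    import numpy as np

    def self_attention(x, w_q, w_k, w_v):
        """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) learned projections."""
        q, k, v = x @ w_q, x @ w_k, x @ w_v  # queries, keys, values all come from the SAME sequence
        d_k = q.shape[-1]
        scores = q @ k.T / np.sqrt(d_k)      # relate every position to every other position
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
        return weights @ v                   # each output is a weighted mix of the whole sequence

    # toy usage: 4 tokens, model width 8, head width 4
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    out = self_attention(x, rng.normal(size=(8, 4)), rng.normal(size=(8, 4)), rng.normal(size=(8, 4)))
    print(out.shape)  # (4, 4)

The "intra" part of the quote is visible in the scores line: queries and keys are both computed from the single input sequence, so the attention weights relate its positions to one another before the output representation is formed.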
-
- Oct 2022
-
stedolan.github.io
-
Multiplying two objects will merge them recursively: this works like addition but if both objects contain a value for the same key, and the values are objects, the two are merged with the same strategy.
Unfortunately, it doesn't merge/concatenate arrays. Sometimes that's what you want (you want the 2nd value to override the 1st), but sometimes not.
If you want it to concatenate instead, here are some workarounds:
-
https://stackoverflow.com/questions/53661930/jq-recursively-merge-objects-and-concatenate-arrays
-
If you only need/want to concatenate for some fixed list of keys, you could do it more simply like this (though it gets repetitive if you have to repeat it for each key you want it for):
⟫ jq -n '[{hosts: ["a"]}, {hosts: ["b"]}] | .[]' | jq -s '.[0] * .[1] * {hosts: (.[0].hosts + .[1].hosts)}'
{
  "hosts": [
    "a",
    "b"
  ]
}
-
-
- Sep 2022
-
docdrop.org
-
the human brain I've argued for at least two million years has co-evolved with the emergence of these distributed networks and it can't realize its design 00:02:13 potential is to say we wouldn't even be speaking for example until it is immersed in such a network these networks themselves 00:02:24 generate complex cognitive structures which were connected to and which reformat our our brains and therefore the brains task is is very complex we have to assimilate the structures of 00:02:37 culture and manage them and I'm going to argue that a lot of our most complex thinking strategies are actually culturally imposed in the starting point 00:02:51 of the human journey
!- for : individual / collective gestalt - In Deep Humanity praxis, the individual / collective gestalt is fundamental - the individual is enmeshed and entangled with culture before birth - culture affects the individual and the individual affects culture in entangled feedback loops
-
-
Local file
-
On this road we encounter the psychological obstacles to adopting new thinking as recognizable staging posts along the road: denial, anger, bargaining, depression and, finally, acceptance.
!- similar to : Mortality Salience - grieving of the loss of a loved one - grieving the future loss of one's own life - Ernest Becker is relevant - Denial of Death, Death Terror !- aligned : Deep Humanity
-
This book takes an entirely fresh approach by focusing on globalization's inner aspects – the way we think and feel about it as individuals and as cultures and how it impedes our ability to solve global problems.
!- aligned : Deep Humanity - Let's see exactly how Simpol Inner aspects match up to Deep Humanity inner aspects
-
-
www.dailymaverick.co.za
-
Describing himself as a “messenger from the past”, Berger says that this discovery destroyed the preconceptions of a progressive, linear development of humans from apelike ancestors to what we are now. H. naledi is now dated at between 236,000 and 335,000 years old and was, therefore, a contemporary of Homo sapiens at that stage, which proves that a small-brained hominid was living side by side with its large-brained cousin, who is supposed to represent the apotheosis of sentient beings.
!- for : Deep Humanity - intriguing result with important implications on cultural evolution
-
-
www.youtube.com
-
once in a while you get a cop out at kerpow is out of that world it's actually an escape from that world
!- similar to : Deep Humanity - Stop Reset Go Deep Humanity praxis is observing the Kerpows of being human - It is the examination of all the assumptions we use but never question in our daily life - such as our use of symbols, pragmatic self/other dualism and our personal mortality
-
-
deepai.org
-
Text To Image
-
- Aug 2022
-
www.uml-diagrams.org
-
Local file
-
We might represent the deep structure in this sample case by formula 1, and the surface structure by formula 2, where paired brackets are labeled to show the category of phrase that they bound.
surface structure vs. deep structure
-
- Jul 2022
-
bafybeiac2nvojjb56tfpqsi44jhpartgxychh5djt4g4l4m4yo263plqau.ipfs.dweb.link
-
we term these individually constructed networks by the aggregate name personware. Serving as a medium between the individual and the social world, personware provides a self-reinforced and self-cohered narrative of the individual and its relationships with society. It is both the sense-maker and the sense being made of social reality entangled into an interactive autopoietic construct. It maintains a personal line of continuity that interfaces with the broader societal threads by means of concrete intentional cognitive selections. These cognitive selections determine how individual minds represent (encode) the state of affairs of the world in language, how they communicate these representations and how they further decode received communications into an understanding of the state of affairs of the world that eventually trigger actions in the world and further cognitive selections. At moments of decision, that is, attempting to make a choice to affect the world, the human is thus more often than not symbolically pre-situated. He enacts a personal narrative of which he is hardly the author and to which almost every decision is knitted in.
!- definition : personware
- individually constructed network of relationships and social systems that
- provides a self-reinforced, self-cohered narrative of the individual and its relationship with society
- Metaphorically, conceive of personware as a suit we don based on years and decades of social conditioning
- "Personware" is a good word to use in the SRG / DH framework that views the individual human organism's life journey as a deeply entangled individual AND collective journey, or entangled individual/civilizational journey
- From the SRG/DH perspective the individual human organism is always on an entangled dual journey - from birth to death within a biological body and as part of a much longer civilizational journey since the beginning of modern humans (or even further back)
- Individuals make intentional cognitive selections
- Individual minds encode the state of affairs of the world via a combination of cognitive experience and language
- Individual minds share their understanding of the world through outgoing language communication
- Individual minds decode and store incoming information
-
Can they reshape the contours and boundaries of their social situations instead of being shaped by them?
!- key insight : can an individual reshape the contours of their social situations instead of being shaped by them? * This realization would open up the door to authentic inner transformation * This is an important way to describe the discovery of personal empowerment and agency via realization of the bare human spirit, the "thought sans image"
-
Consequently, the shape of the gridlock [9], in which further progression towards an ever-greater executive capacity given to a selected group of institutions has become nearly impossible, is not an anomaly to be overcome. The gridlock is the only configuration in which the global system could have settled. It is the configuration any system is bound to adopt when it is composed of a multitude of differently positioned, differently oriented, heterogeneous decision-makers, operating in different dimensions and scales, none of which universally dominant and all are co-dependent and constrained by others.
!- question : governance gridlock of disparate actors
- This claim seems to make common sense but is it universally true?
- It would be useful for the authors to frame it in Stop Reset Go (SRG) / Deep Humanity (DH) epistemological framework of a multi-meaningverse of human actors in a world of self/other to explain the misunderstandings that lead to potential gridlock
- The SRG/DH framing of the multi-meaningverse employs the concept of Husserl's/Kraus's Lebenswelt (lifeworld) / Lebenslage (life conditions): https://hyp.is/go?url=https%3A%2F%2Fbafybeicyqgzvzf7g3zprvxebvbh6b4zpti5i2m2flbh4eavtpugiffo5re.ipfs.dweb.link%2F08_article_kraus.pdf&group=world
-
The Human Takeover: A Call for a Venture into an Existential Opportunity
- Title: The Human Takeover: A Call for a Venture into an Existential Opportunity
- Author: Marta Lenartowicz, David R. Weinbaum, Francis Heylighen, Kate Kingsbury and Tjorven Harmsen
- Date: 5 April, 2018
Tags
- insight
- Future of the web
- bare human spirit
- governance gridlock
- individual governance agency
- Deep Humanity
- entangled individual/civilizational journey
- inner transformation
- Freedom
- Bjorn Kraus
- DH
- personware
- dual journey
- Edmund Husserl
- reshape social contours
- multi-meaningverse
- SRG
- lebenswelt
- Stop Reset Go
- Future of the Internet
- lebenslage
- thought sans image
-
-
bafybeihfoajtasczfz4s6u6j4mmyzlbn7zt4f7hpynjdvd6vpp32zmx5la.ipfs.dweb.link
-
The projected timing of climate departure from recent variability
- Title: The projected timing of climate departure from recent variability
- Author: Camilo Mora et al.
- Date: 2013*
-
-
-
What happens if the world gets too hot for animals to survive? By Matthew Huber | July 20, 2022
- Title: What happens if the world gets too hot for animals to survive?
- Author: Matthew Huber
- Date: July 20, 2022
-
-
bafybeicuq2jxzrw7omddwzohl5szkqv6ayjiubjy3uopjh5c3cghxq6yoe.ipfs.dweb.link
-
Open-Ended Intelligence
- Title: Open Ended Intelligence
- Author: David R. Weinbaum (Weaver)
- Date: Feb 15, 2018
-
-
bafybeicyqgzvzf7g3zprvxebvbh6b4zpti5i2m2flbh4eavtpugiffo5re.ipfs.dweb.link
-
The Life We Live and the Life We Experience: Introducing the Epistemological Difference between “Lifeworld” (Lebenswelt) and “Life Conditions” (Lebenslage)
- Title: The Life We Live and the Life We Experience: Introducing the Epistemological Difference between “Lifeworld” (Lebenswelt) and “Life Conditions” (Lebenslage)
- Author: Bjorn Kraus
- Date: 2015
- Source: https://d-nb.info/1080338144/34
- Annotation status: incomplete
-
-
gist.github.com
-
5.12 Be cautious about trusting AI without having deep understanding.
-
-
journals.sagepub.com
-
Is our planet doubly alive? Gaia, globalization, and the Anthropocene’s planetary superorganisms
Title: Is our planet doubly alive? Gaia, globalization, and the Anthropocene’s planetary superorganisms Author: Shoshitaishvili, Boris Date: 25 April, 2022
-
-
en.wikipedia.org
-
Intentional programming
Title: Intentional Programming Author: https://en.wikipedia.org/w/index.php?title=Intentional_programming&action=history Date: 2021
-
-
bafybeifajt2qvaapl2vgek66uqcx2fe3cmgmhiw3i5ex6otvfvyqdnc2ty.ipfs.dweb.link
-
The trajectory of the Anthropocene: The Great Acceleration
- Title: The trajectory of the Anthropocene: The Great Acceleration
- Author: Steffen, Will; Broadgate, Wendy; Deutsch, Lisa; Gaffney, Owen and Ludwig, Cornelia
- Date: 2015
-
-
ernestbecker.org
-
The Denial of Death and the Practice of Dying
- Title: The Denial of Death and the Practice of Dying
- Author: Hughes, Glenn
- Date:?
-
-
bafybeicho2xrqouoq4cvqev3l2p44rapi6vtmngfdt42emek5lyygbp3sy.ipfs.dweb.link
-
Mind outside Brain: a radically non-dualist foundation for distributed cognition
- Title: Mind outside Brain: a radically non-dualist foundation for distributed cognition
- Author: Heylighen, Francis & Beigi, Shima
- Date: 2016
Tags
- SRG
- Heylighen
- distributed cognition
- Francis Heylighen
- nonduality
- Stop Reset Go
- Deep Humanity
- Shima Beigi
- nondual
- DH
-
-
docdrop.org
-
so first i'm going to really focus on that allure of immediacy and then move into this kind of arc from yamaka through yogachara and into zen and my aim is going to be 00:09:49 um to show you that i think the buddhist tradition gets the all of these issues roughly right that is i'm not simply going to be characterizing what buddhists say about this i'm actually defending it and i think that we can 00:10:02 therefore learn a great deal about subjectivity through very careful attention to the multiple ways in which buddhist philosophers have considered this issue so i'm going to try to be shedding light 00:10:13 on contemporary debates as well by attention to buddhist resources
For Deep Humanity open praxis, we can learn from these compelling philosophical findings from Buddhism and remix them in a form that is authentic to the source but makes it more widely accessible to non-Buddhists.
The key distinction Jay is trying to convey is that our sense of immediacy, and its allure, stands in contrast to the complex and opaque mediating mechanisms that are responsible for us perceiving the world the way we do and cognizing / feeling about the world the way we do.
-
cognitive illusion and immediate experience perspectives 00:01:44 from buddhist philosophy
Title: cognitive illusion and immediate experience perspectives from buddhist philosophy Author: Jay L. Garfield Year: 2022
This is a very important talk outlining a number of key concepts that Stop Reset Go and Deep Humanity are built upon and also a rich source of BEing Journeys.
In brief, this talk outlines key humanistic (discoverable by a modern human being regardless of any cultural, gender, class, etc difference) concepts of Buddhist philosophy that SRG / DH embeds into its framework to make more widely accessible..
The title of the talk refers to the illusions that our own cognition produces of both outer and inner appearances, because the mechanisms that produce them are opaque to us. Their immediacy makes them feel as if they are real.
If what we sense and think is real is an illusion, then what is real? "Real" in this case implies ultimate truth. As we will see, Nagarjuna denies any argument that claims to capture the ultimate. What is left after such a complete denial? Still, something persists.
-
-
-
so this is white light passing through a dispersive prison and this is a visible spectrum from about 420 nanometers in the violet through 500 nanometers and 00:00:18 the green 580 yellow 610 and orange and 650 red and some of the slides that have this along the bottom axis so how dependent I'll be in color what do you 00:00:30 think we depend on color a lot a little lots okay
- Title: How do we see colours?
- Author: Andrew Stockman
- Date: 2016
Many different color illusions. Good to mine for BEing Journeys.
-
-
-
I want to start with a game. Okay? And to win this game, all you have to do is see the reality that's in front of you as it really is, all right? So we have two panels here, of colored dots. And one of those dots is the same in the two panels. And you have to tell me which one. Now, I narrowed it down to the gray one, the green one, and, say, the orange one. 00:00:41 So by a show of hands, we'll start with the easiest one. Show of hands: how many people think it's the gray one? Really? Okay. How many people think it's the green one? And how many people think it's the orange one? Pretty even split. Let's find out what the reality is. Here is the orange one. (Laughter) Here is the green one. And here is the gray one. 00:01:16 (Laughter) So for all of you who saw that, you're complete realists. All right? (Laughter) So this is pretty amazing, isn't it? Because nearly every living system has evolved the ability to detect light in one way or another. So for us, seeing color is one of the simplest things the brain does. And yet, even at this most fundamental level, 00:01:40 context is everything. What I'm going to talk about is not that context is everything, but why context is everything. Because it's answering that question that tells us not only why we see what we do, but who we are as individuals, and who we are as a society.
- Title: Optical illusions show how we see
- Author: Beau Lotto
- Date: 8 Oct, 2009
The opening title is very pithy:
No one is an outside observer of nature, each of us is defined by our ecology.
We need to unpack the full depth of this sentence.
Seeing is believing. This is more true than we think. Our eyes trick us into seeing the same color as different ones depending on the context. Think about the philosophical implications of this simple finding. What does this tell us about "objective reality"? Colors that we would compare as different in one circumstance are the same in another.
Evolution helps us do this for survival.
-
-
docdrop.org
-
so here's a straightforward question what color are the strawberries in this photograph the red right wrong those strawberries are gray if you don't 00:00:12 believe me we look for one of the reddest looking patches on this image cut it out now what color is that it's great right but when you put it back on 00:00:25 the image it's red again it's weird right this illusion was created by a Japanese researcher named Akiyoshi Kitaoka and it hinges on something called color constancy it's an incredible visual 00:00:39 phenomenon by which the color of an object appears to stay more or less the same regardless of the lighting conditions under which you see it or the lighting conditions under which your brain thinks you're seeing it
Title: Why your brain thinks these strawberries are red Author: WIRED Date: 2022
Color Constancy
Use this for BEing journey
-
-
bafybeibbaxootewsjtggkv7vpuu5yluatzsk6l7x5yzmko6rivxzh6qna4.ipfs.dweb.link
-
Mobilization Systems: technologies for motivating and coordinating human action
Title: Mobilization Systems: technologies for motivating and coordinating human action Authors: Heylighen, Francis; Kostov, Iavor; Kiemen, Mixel Date: 2014
-
-
www.thegreatsimplification.com
-
Peter Whybrow: “When More is Not Enough”
Title: Peter Whybrow: “When More is Not Enough” Author: Nate Hagens Guest: Peter Whybrow, psychiatrist, neuroscientist, and author Date: 6 July, 2022
-
-
bylinetimes.com
-
We Need to Stop Pretending we can Limit Global Warming to 1.5°C
Title: We Need to Stop Pretending we can Limit Global Warming to 1.5°C Author: James Dyke Date: 6 July 2022
-
-
-
you are probably somewhat unfamiliar with the term biosemiotics is not in widespread use um and but it represents a very very 00:00:17 important reference point when we come to theories of embodied cognition the founder of biosemiotics is typically held to be jacob von xcool 00:00:29 biosemiotics is a field within the broader domain of semiotics which considers the manner in which meaning arises through various forms of mediation such as signs indices indexes 00:00:42 symbols and the like
Title: Introduction to Umwelt theory and Biosemiotics Author
-
-
docdrop.org
-
so that's me trying to do a synoptic integration of all of the four e-cognitive science and trying to get it 00:00:12 into a form that i think would help make make sense to people of the of cognition and also in a form that's helpful to get them to see what's what we're talking about when i'm talking about the meaning 00:00:25 that's at stake in the meaning crisis because it's not sort of just semantic meaning
John explains how the 4 P's originated as a way to summarize and present, in a palatable form, the cognitive science “4E” approach to cognition - that cognition does not occur solely in the head, but is also embodied, embedded, enacted, or extended by way of extra-cranial processes and structures.
-
-
www.judithragir.org
-
Dogen and Nagarjuna’s Tetralemma #6 of 21
Title: Dogen and Nagarjuna’s Tetralemma #6 of 21 Author: Judith Ragir Date: 2017 URL: http://www.judithragir.org/2017/08/dogen-nagarjunas-tetralemma-6/
-
When we see the world from the vantage point of all-at-oneness, always right here, we can be said to be like a pearl in a bowl. Flowing with every turn without any obstructions or stoppages coming from our emotional reactions to different situations. This is a very commonly used image in Zen — moving like a pearl in a bowl. As usual, our ancestors comment on this phrase, wanting to break open our solidifying minds even more. Working from Dogen’s fascicle Shunju, Spring and Autumn, we have an example of opening up even the Zen appropriate phrase — a pearl in a bowl. Editor of the Blue Cliff Record Engo ( Yuan Wu) wrote: A bowl rolls around a pearl, and the pearl rolls around the bowl. The absolute in the relative and the relative in the absolute. Dogen: The present expression “a bowl rolls around a pearl” is unprecedented and inimitable, it has rarely been heard in eternity. Hitherto, people have spoken only as if the pearl rolling in the bowl were ceaseless.
This is like the observation I often make in Deep Humanity and which is a pith BEing Journey
When we move is it I who goes from HERE to THERE? Or am I stationary, like the eye of the hurricane spinning the wild world of appearances to me and surrounding me?
I am like the gerbil running in a cage, spinning appearances towards me but never moving an inch. I move while I am still. The bowl revolves around this pearl.
-
The absolute in the relative and the relative in the absolute
Title: The absolute in the relative and the relative in the absolute Author: Judith Ragir Date: ?
-
I have written previously about the issue of “both”. In one sense, interdependence and total dynamic working implies that everything is both form and emptiness simultaneously. But the problem is that you can’t PERCEIVE both form and emptiness at the same time. They are both there supporting each other but our discriminative thought can only see one or the other; like the front and back foot in walking, like the old lady and young lady optical illusion, or the front and back of a hand. Both sides are always there. We have a whole hand but you can only see either the front of the hand or the back of the hand in a single moment.
This needs unpacking and can be a good Deep Humanity BEing journey exercise, as all of this can be.
Tags
- Engo
- Nagarjuna
- Spring and Autumn
- Tetralemma
- Catuskoti
- absolute
- Diamond Sutra
- Book of Serenity
- Gerbil
- Deep Humanity
- loom
- relative
- eye of the hurricane
- Buddhism
- paradox
- DH
- Judith Ragir
- Blue Cliff Record
- SRG
- Shunju
- Middle Way
- Zen
- Gerbil wheel
- BEing Journey
- Stop Reset Go
- Mulamadhyamakakarika
- Dogen
- Pearl in a bowl
-
-
bafybeifum5ioeus3y3hl4lqdwclgxpd6in4muleocuhsk3jev2rd7j3hpu.ipfs.dweb.link
-
THE LOGIC OF THE CATUSKOTI
Title: THE LOGIC OF THE CATUSKOTI Author: GRAHAM PRIEST Year: 2010
-
-
historyofphilosophy.net
-
Jan Westerhoff on Nāgārjuna
Title: JAN WESTERHOFF ON NĀGĀRJUNA Author: Adamson, Peter & Negary, Jardin (???) Date: 23 July 2017
-
-
historyofphilosophy.net
-
No Four Ways About It: Nāgārjuna’s Tetralemma
Title: 46. NO FOUR WAYS ABOUT IT: NĀGĀRJUNA’S TETRALEMMA Author:
insightful explanation of Nagarjuna's tetralemma!
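For readers new to the term, the tetralemma (catuṣkoṭi) enumerates four stances towards any proposition P; the schematic below is my own summary of the standard presentation (as formalized, for example, in Priest's paper above), not a quote from the episode:

    (1) P              affirmation
    (2) ¬P             negation
    (3) P ∧ ¬P         both
    (4) ¬(P ∨ ¬P)      neither

Nāgārjuna's characteristic move, discussed in the episode, is to reject all four corners when they are asserted of ultimate reality.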
-
-
theblacksheepagency.com
-
Understanding our situatedness, blowing up assumptions. What are the things your brain has been conditioned to believe as “true”? What should you re-examine, pull apart and re-assemble with intention?
Title: Understanding our situatedness, blowing up assumptions What are the things your brain has been conditioned to believe as “true”? What should you re-examine, pull apart and re-assemble with intention? Author: Laird, Katie
-
-
Local file
-
there has been a tendency in popular discussion to confuse “deep structure” with “generative grammar” or with “universal grammar.” And a number of professional linguists have repeatedly confused what I refer to here as “the creative aspect of language use” with the recursive property of generative grammars, a very different matter.
Noam Chomsky felt that there was a tendency for people to confuse the ideas of deep structure with the ideas of either generative grammar or universal grammar. He also thought that professional linguists confused what he called "the creative aspect of language use" with the recursive property of generative grammars.
-
-
newhumanist.org.uk
-
"Ignorance really is blissful, especially for the powerful" Q&A with Linsey McGoey, author of "The Unknowers: How Strategic Ignorance Rules the World".
Title: "Ignorance really is blissful, especially for the powerful" Q&A with Linsey McGoey, author of "The Unknowers: How Strategic Ignorance Rules the World".
-
-
-
we're going to talk in this series 00:01:10 about a series of papers that i just published in the in the journal sustainability that that series is titled science driven societal transformation
Title: Science-driven Societal Transformation, Part 1, 2 and 3 John Boik, Oregon State University John's Website: https://principledsocietiesproject.org/
Intro: A society can be viewed as a superorganism that expresses an intrinsic purpose of achieving and maintaining vitality. The systems of a society can be viewed as a societal cognitive architecture. The goal of the R&D program is to develop new, integrated systems that better facilitate societal cognition (i.e., learning, decision making, and adaptation). Our major unsolved problems, like climate change and biodiversity loss, can be viewed as symptoms of dysfunctional or maladaptive societal cognition. To better solve these problems, and to flourish far into the future, we can implement systems that are designed from the ground up to facilitate healthy societal cognition.
The proposed R&D project represents a partnership between the global science community, interested local communities, and other interested parties. In concept, new systems are field tested and implemented in local communities via a special kind of civic club. Participation in a club is voluntary, and only a small number of individuals (roughly, 1,000) is needed to start a club. No legislative approval is required in most democratic nations. Clubs are designed to grow in size and replicate to new locations exponentially fast. The R&D project is conceptual and not yet funded. If it moves forward, transformation on a near-global scale could occur within a reasonable length of time. The R&D program spans a 50 year period, and early adopting communities could see benefits relatively fast.
-
-
bafybeiapea6l2v2aio6hvjs6vywy6nuhiicvmljt43jtjvu3me2v3ghgmi.ipfs.dweb.link
-
Pervasive human-driven decline of life on Earth points to the need for transformative change
Title: Pervasive human-driven decline of life on Earth points to the need for transformative change
-
-
www.sciencedaily.com
-
From giant elephants to nimble gazelles: Early humans hunted the largest available animals to extinction for 1.5 million years, study finds
Title: From giant elephants to nimble gazelles: Early humans hunted the largest available animals to extinction for 1.5 million years, study finds
Annotation of source paper preview: https://hyp.is/go?url=https%3A%2F%2Fwww.sciencedirect.com%2Fscience%2Farticle%2Fabs%2Fpii%2FS0277379121005230&group=world
-
-
www.sciencedirect.com
-
Levantine overkill: 1.5 million years of hunting down the body size distribution
Authors: Jacob Dembitzer, Ran Barkai, Miki Ben-Dor, Shai Meiri
Title: Levantine overkill: 1.5 million years of hunting down the body size distribution
-
-
-
Ronald Wright: Can We Still Dodge the Progress Trap? Author of 2004’s ‘A Short History of Progress’ issues a progress report.
Title: Ronald Wright: Can We Still Dodge the Progress Trap? Author of 2004’s ‘A Short History of Progress’ issues a progress report.
Ronald Wright is the author of the 2004 "A Short History of Progress" and popularized the term "Progress Trap" in the Martin Scorsese-produced 2011 documentary based on Wright's book, called "Surviving Progress". Earlier researchers such as Dan O'Leary investigated this idea in earlier works such as "Escaping the Progress Trap": http://www.progresstrap.org/content/escaping-progress-trap-book
-
-
report.ipcc.ch
-
Chapter 5: Demand, services and social aspects of mitigation
Public Annotation of IPCC Report AR6 Climate Change 2022 Mitigation of Climate Change WGIII Chapter 5: Demand, Services and Social Aspects of Mitigation
NOTE: Permission was given by one of the lead authors, Felix Creutzig, to annotate, with the caveat that there may be minor changes in the final version.
This annotation explores the potential of mass mobilization of citizens and the commons to effect dramatic demand side reductions. It leverages the potential agency of the public to play a critical role in rapid decarbonization.
-
- Jun 2022
-
-
Social tipping dynamics for stabilizing Earth’s climate by 2050
-
-
besjournals.onlinelibrary.wiley.com
-
The experts were asked to independently provide a comprehensive list of levers and leverage points for global sustainability, based on the potential for disproportionate effects to address and reverse the deterioration of nature while meeting societal needs. They were asked to consider actions by the full range of possible actors, and both top-down and bottom-up effects across various sectors. The collection of all responses became our initial set of levers and leverage points. Ensuing processes were then informed by five linked conceptualizations of transformative change identified by the experts (Chan et al., 2019): ● Complexity theory and leverage points of transformation (Levin et al., 2013; Liu et al., 2007; Meadows, 2009); ● Resilience, adaptability and transformability in social–ecological systems (Berkes, Colding, & Folke, 2003; Folke et al., 2010); ● A multi-level perspective for transformative change (Geels, 2002); ● System innovations and their dynamics (Smits, Kuhlmann, & Teubal, 2010; OECD, 2015) and ● Learning sustainability through ‘real-world experiments’ (Geels, Berkhout, & van Vuuren, 2016; Gross & Krohn, 2005; Hajer, 2011).
Set of levers and leverage points identified by the authors.
Creating an open public network for radical collaboration, which we will call the Indyweb, can facilitate bottom-up engagement, both to educate the public about these levers and to serve as an application space for crowdsourcing local instantiations of these levers.
An Indyweb that is in the form of an interpersonal space in which each individual is the center of their data universe, and in which they can see all the data from their diverse digital interactions across the web and in real life all consolidated in one place offers a profound possibility for both individual and collective learning. Such an Indyweb would bring the relational nature of the human being, the so called "human INTERbeing" alive, and would effortlessly emerge the human INTERbeing explicitly as the natural form merely from its daily use. One can immediately see the relational nature of individual learning, how it is so entangled with collective learning, and would be reinforced with each social interaction on the web or in real life. This is what is needed to track both individual inner transformation (IIT) as well as collective outer transformation (COT) towards a rapid whole system change mobilization. Accelerated by a program of open access Deep Humanity (DH) knowledge that plumbs the very depth of what it is to be human, this can accelerate the indirect drivers of change and provide practical tools for granular monitoring of both IIT and COT.
Could we use AI to search for levers and leverage points?
-
An important step towards such widespread changes in action would be to unleash latent capabilities and relational values (including virtues and principles regarding human relationships involving nature, such as responsibility, stewardship and care
Practices such as open source Deep Humanity praxis focusing on inner transformation can play a significant role.
-
Embracing visions of a good life that go beyond those entailing high levels of material consumption is central to many pathways. Key drivers of the overexploitation of nature are the currently popular vision that a good life involves happiness generated through material consumption [leverage point 2] and the widely accepted notion that economic growth is the most important goal of society, with success based largely on income and demonstrated purchasing power (Brand & Wissen, 2012). However, as communities around the world show, a good quality of life can be achieved with significantly lower environmental impacts than is normal for many affluent social strata (Jackson, 2011; Røpke, 1999). Alternative relational conceptions of a good life with a lower material impact (i.e. those focusing on the quality and characteristics of human relationships, and harmonious relationships with non-human nature) might be promoted and sustained by political settings that provide the personal, material and social (interpersonal) conditions for a good life (such as infrastructure, access to health or anti-discrimination policies), while leaving to individuals the choice about their actual way of living (Jackson, 2011; Nussbaum, 2001, 2003). In particular, status or social recognition need not require high levels of consumption, even though in some societies, status is currently related to consumption (Røpke, 1999).
A redefinition of a good life that decouples it from materialism is critical to lowering carbon emissions. Practices such as open source Deep Humanity praxis focusing on inner transformation can play a significant role.
-
trust in neighbours, access to care, opportunities for creative expression, recognition
Practices such as open source Deep Humanity praxis focusing on inner transformation can play a significant role.
-
Given that the fate of nature and humanity depends on transformative change of the human enterprise (IPBES, 2019a, 2019b), indirect drivers clearly play a central role.
Key statement supporting transformative change of indirect drivers - ie. Deep Humanity and other work to bring about inner transformation
-
Levers and leverage points for pathways to sustainability
Leverage points to study for Rapid Whole System Change
Tags
- indirect drivers
- collective outer transformation
- access to care
- IIT
- Deep Humanity
- inner transformation
- new definition of a good life
- interpersonal computing
- levers
- DH
- leverage point
- lever
- indyweb
- individual inner transformation
- recognition
- human INTERbeing
- SRG
- COT
- redefinition of a good life
- intangible goals
- creative expression
- community engagement
- Stop Reset Go
-
-
-
if the process of seeing differently is the process of first and foremost having awareness of the fact that everything you do has an assumption 00:00:14 figure out what those are and by the way the best person to reveal your own assumptions to you is not yourself it's usually someone else hence the power of diversity the importance of diversity 00:00:26 because not only does that diversity reveal your own assumptions to you but it can also complexify your assumptions right because we know from complex systems theory that the best solution is most likely to 00:00:40 exist within a complex search space not a simple search space simply because of statistics right so whereas a simple search space is more adaptable it's more easily to adapt it's 00:00:52 less likely to contain the best solution so what we really want is a diversity of possibilities a diversity of assumptions which diverse groups for instance enable
From a Stop Reset Go Deep Humanity perspective, social interactions with greater diversity allow multi-meaningverses to interact, so that the salience landscapes of each conversant can meet. Since each life is unique, the diversity of perspectival knowing allows strengths to overlap weaknesses, and different perspectives can yield novelty. The diversity of ideas encounter each other like diversity in a gene pool, evolving more offspring which may randomly have greater fitness to the environment.
The Johari window is a direct consequence of this diversity of perspectives, this converged multi-meaningverse of the Lebenswelt.
-
-
-
the inter-connectedness of the crises we face climate pollution biodiversity and 00:07:54 inequality require our change require a change in our exploitative relationship to our planet to a more holistic and caring one but that can only happen with a change in our behavior
As per IPCC AR6 WGIII, Chapter 5 outlines for the first time the enormous mitigation potential of the social aspects of mitigation - such as behavioral change - which can add up to 40 percent of mitigation. This also harkens back to Donella Meadows' leverage points, which identify shifts in worldviews, paradigms and value systems as the most powerful leverage points in system change.
Stop Reset Go advocates that humanity build an open source, open access praxis for Deep Humanity, to understand the depths of what it means to be a living and dying human being in the context of an entwined culture. Sharing best practices and constantly crowdsourcing the universal and salient aspects of our common humanity can help rapidly transform the inner space of each human INTERbeing, which can powerfully influence outer (social) transformation.
-
- May 2022
-
report.ipcc.ch
-
Demand-side solutions require both motivation and capacity for change (high confidence). Motivation by individuals or households worldwide to change energy consumption behaviour is generally low. Individual behavioural change is insufficient for climate change mitigation unless embedded in structural and cultural change. Different factors influence individual motivation and capacity for change in different demographics and geographies. These factors go beyond traditional socio-demographic and economic predictors and include psychological variables such as awareness, perceived risk, subjective and social norms, values, and perceived behavioural control. Behavioural nudges promote easy behaviour change, e.g., “improve” actions such as making investments in energy efficiency, but fail to motivate harder lifestyle changes. (high confidence) {5.4}
We must go beyond behavioral nudges to make significant gains in demand-side solutions. This requires an integrated strategy of inner transformation based on the latest research in trans-disciplinary fields such as psychology, sociology, anthropology, neuroscience and behavioral economics, among others.
-
-
www.livescience.com
-
Chinese scientists call for plan to destroy Elon Musk's Starlink satellites
This is another example that our culture has reached an inflection point, when we begin to divert precious time and material resources to conflict, because we are not wise enough to cooperate, instead of to the more urgent problems affecting us all.
The journey of civilization to a technological modernity places us in a precarious position. The fundamental misunderstandings arising from a toxic mix of different political, religious and cultural ideologies threaten to destabilise the human project. We are spending ever increasing resources on defensive and offensive technologies to protect our ingroup against perceived outgroups, instead of on technologies for defending against the destruction of the global commons which we ourselves have brought about and which threatens our entire species.
By so doing, we create a self-reinforcing feedback loop of antagonism which increases the likelihood of violence.
This underscores the urgency for deep inner transformation, tapping into our Deep Humanity to mitigate the social antagonism that is so destructive to global society.
-
-
www.usmcu.edu
-
new way of being
It may be that our civilization must undergo a transformation process that places less emphasis on intelligence and more intelligence on wisdom. That wisdom is intimately bound with rediscovering the essence of what it is to be a living and dying human being. The enormous polycrisis of the Anthropocene leads us to the inescapable conclusion that human intelligence alone is insufficient to lead to a holistic wellbeing within the biosphere. Insofar as the biosphere is a vast interconnected mutually supporting web of life, the overconsumption by one species, modern humans has upset the balance of the biosphere.
A new way of being requires fundamental collective reassessment of what it is to be a living and dying being. The intelligence alone of our species has led to an extreme imbalance of the natural world, whose blowback we are now beginning to experience. The blind, recursive application of intelligence has led to greater and greater separation from nature to the point of the present polycrisis. As a species, we can only distance ourselves apart from nature to an extreme extent when nature reminds us we are NOT separate from her. She is now violently reminding us that we ARE a part of nature. A new way of being is to reconcile technology, that pushes the limits of intelligence alone, with ourselves as being a product of nature herself.
Deep Humanity is the name we give to that open collective process which reminds us that we are a product of nature in every way; it is a journey to reconnect with nature. BEing journeys are crowdsourced processes for rediscovering our deep connection with nature through participatory, compelling, interactive, immersive explorations of what it is to be a living and dying human being.
-
OP VAK is a concerted effort to make the hyperthreat visible and knowable across the broad spectrum of society. This has practical, educational aspects, including increasing CEC literacy and improving ecoproduct and services labeling. It also links to the integration of CEC into the remit of mainstream intelligence agencies. To address sensory and affective knowing, as well as the deep framing and meaning-making dimension of hyperthreat “knowing,” it will partner with the communications, arts, and humanities sectors in line with the “60,000 artists” concept.24 It will also harness the potential of virtual reality technologies, which have already proven effective in CEC communication.25 Finally, it will involve fast-tracking relevant research and improved mechanisms for conveying and sharing research and knowledge.
A Deep Humanity open access education program in museums, workshops and at public festivals can use the tool of compelling, engaging, interactive BEing Journeys to make the invisible hyperthreat visible.
-
Here, in PLAN E, the concept of entangled security translates this idea into meaning that humanity itself can make a great sudden leap.
An open access Deep Humanity education program, whose core principles are continuously improved through crowdsourcing, can teach the constructed nature of reality, especially using compelling BEing Journeys. This inner transformation can rapidly create the nonlinear paradigm, worldview and value shifts that Donella Meadows identified as the greatest leverage points in system change.
-
What prospects are there to reconfigure great powers’ approach to geopolitical security in a way that aids containment of the hyperthreat? Possible angles include:
Othering needs to be critically examined through a Deep Humanity lens so that we can begin to see ourselves as one united but diverse human family instead of multiple fractured families.
-
Given wide-ranging concerns about globalization, the performance of international organizations, and perceptions that the so-called “liberal rules-based order” holds lingering colonial power dimensions, an overarching conclusion is that the post-World War II global architecture, designed before the advent of CEC or the internet, is outdated and ripe for redesign.16 A new neutral rules-based order could be established, one that is based on ecological survival and safe Earth requirements. Akin to the 2015 Paris Agreement, this might be acceptable to all nations because all are threatened by the CEC hyperthreat. It is an approach that builds on environmental peacekeeping rationale.
Again, like the above point, some kind of global Deep Humanity training that results in gaining appreciation of the Common Human Denominators (CHD) is critical for open communications and finding common ground for dialogue.
-
A global ceasefire could be declared for between 2022 and 2030 to enable all nations to undertake an emergency hyper-response.
State-level government officials, along with business leaders, would need to undergo some kind of global, open, Deep Humanity type education to begin to shift their inner worldviews, paradigms and value systems, since business lobbies exert a very powerful controlling influence on governments.
Of course, this would be easier if there were a concerted global effort to nominate proactive, empathetic, ecocivilization- and social-justice-minded women to positions of power.
-
Second, acknowledging increased affective insecurity and that heightened vulnerability and fear will be a factor, great efforts must be made to bolster the care, support and protection provided to people.
Mortality salience for the masses: operationalizing terror management theory (TMT) and Deep Humanity BEing Journeys that take individuals into the depths of their humanity will play a critical role in making sense of the times we are in. They can contextualize the fear of death triggered by unstable circumstances and ameliorate those fears with the wisdom that comes from a living comprehension of the sacredness of our life and eventual death.
-
A stretch target set for the second half of the twenty-first century is for it to be a time in which humanity has gained knowledge, experience, and confidence in dealing with an entangled security environment and coexisting with the hyperthreat. The collective global effort and learning during phases 1–4 will have allowed ingenious solutions for interdependence to emerge. It will be a time of flourishing invention and inspiration.
A critical part of Deep Humanity is the elucidation of progress traps, the unintended consequences of progress. There is an urgent need to advocate for an entirely new human science discipline on progress traps, because the polycrisis can be seen and critically explained through a progress trap lens.
Progress traps emerge from the unbridgeable gap between finite, reductionist human knowledge and the fractally infinite patterns of the universe and reality, which exist at all scales and dimensions.
The failure to gain a system-level understanding of this has led to the premature global scaling of technologies whose unintended consequences emerge only after global markets have been established, creating a conflict of interest between biospheric wellbeing and individual profit.
A systematic study of progress traps has rich data to draw from. Ever since the Industrial Revolution, there have been good records of scientific ideas, their associated engineering and technological exploitation, and the subsequent news media reports of their phase-delayed unintended consequences. Applying AI and a big-data scientometric approach to these records could yield patterns in how progress traps emerge (a minimal sketch of one such lag analysis appears below). From this, our scientific-technological-industrial-capitalist framework can be modified to include improved regulatory mechanisms, informed by progress trap research, that systematically grade the risk factors of any new technology. Such risk categorization could require technologies to accumulate aggregate knowledge over different time scales before full commercialization, with grades ranging from years to decades and even centuries.
All future technology innovations must pass through these systematic, evidence-based regulatory barriers before they can be introduced into widespread commercial use.
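To make the notion of phase-delayed unintended consequences concrete, here is a minimal, hypothetical sketch of the kind of lag analysis a scientometric study might start from: it estimates the delay, in years, between a technology's publication curve and the curve of subsequent harm reports. The column names, the synthetic data, and the implied 15-year lag are all assumptions for illustration; a real study would require curated corpora and far more careful statistics.

```python
# Hypothetical sketch: estimating the "phase delay" between a technology's
# uptake (publication counts) and news reports of its unintended consequences.
# All column names and the synthetic dataset are illustrative assumptions.
import numpy as np
import pandas as pd

def estimate_consequence_lag(df: pd.DataFrame, max_lag_years: int = 50) -> int:
    """Return the lag (in years) maximizing the correlation between yearly
    publication counts and harm-report counts appearing `lag` years later."""
    pubs = df["publication_count"].to_numpy(dtype=float)
    harms = df["harm_report_count"].to_numpy(dtype=float)
    best_lag, best_corr = 0, -np.inf
    for lag in range(0, max_lag_years + 1):
        if lag >= len(pubs) - 2:
            break
        # Compare publications in year t with harm reports in year t + lag.
        a, b = pubs[: len(pubs) - lag], harms[lag:]
        if a.std() == 0 or b.std() == 0:
            continue
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

if __name__ == "__main__":
    # Synthetic example: harm reports echo publications roughly 15 years later.
    years = np.arange(1900, 2021)
    pubs = np.exp(-((years - 1950) ** 2) / 200.0)
    harms = np.exp(-((years - 1965) ** 2) / 200.0)
    df = pd.DataFrame(
        {"year": years, "publication_count": pubs, "harm_report_count": harms}
    )
    print("Estimated lag:", estimate_consequence_lag(df), "years")
```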
-
Attributes such as hope, heroism, humor, humanity, hospitality, and honor will be critical.
Early education in open access Deep Humanity praxis, starting in 2022, will be critical to prepare future generations to cope with the future shock to come.
-
It is anticipated that this period will address the harder aspects of global transition, in terms of technology, infrastructure, and social behavior change. As initial enthusiasm may have waned, a stoic approach will be required, refreshing the workforce and dealing with more dangerous hyperthreat actions.
It is clear that through such a massive and unprecedented transition, a whole-being approach must be adopted. This means dealing with the inner transformation of the individual in addition to the outer transformation. The hyperthreat increases each individual's mortality salience, their awareness of their own death. As cultural anthropologist Ernest Becker noted in The Denial of Death, our fear of death is normatively suppressed as a compromised coping mechanism. When extreme weather, food shortages, war and pandemics become an unrelenting onslaught, however, we have no escape from mortality, as the threat to our lives will be broadcast relentlessly through mass media. Inner transformation must accompany the outer transformation in order for the general population to emotionally cope with the enormous stress. Deep Humanity (DH) is conceived as an open praxis to assist with the inner transformation that will be needed for mental and emotional wellbeing during these trying times to come.
Tags
- paradigm shift
- politician education
- mortality salience
- donella meadows
- progress trap
- Deep Humanity
- political inner transformation
- terror management theory
- denial of death
- inner transformation
- DH
- individual inner transformation
- CHD
- BEing journeys
- OP VAK
- ernest becker
- value system
- unintended consequences
- BEing Journey
- BEing journey
- PLAN E
- leverage points
- worldviews
- fear of death
- Common Human Denominators
- TMT
Annotators
URL
-
- Apr 2022
-
winnielim.org winnielim.org
-
We have to endlessly scroll and parse a ton of images and headlines before we can find something interesting to read.
The randomness of interesting tidbits in a social media scroll helps to put us in a state of flow. We get small hits of dopamine from finding interesting posts that fill in the gaps between the boring bits, and suddenly find we've lost the day. As a result, an endless scroll of varying quality can make one feel productive when in fact a reasonably large proportion of one's time is spent on useless and uninteresting content.
This effect may be amplified even further when the feed is curated algorithmically and the dopamine hits become more frequent. Potentially worse, the insight found in most social feeds is very shallow and rarely ever deep. One is almost never invited to delve further to find new insights.
How might a social media stream of content be leveraged to help people read more interesting and complex content? Could putting Jacques Derrida's texts into a social media-like framing create this? Then one could reply to the text by sentence or paragraph with one's own notes. This is similar to the user interface of Hypothes.is, but Hypothes.is has a more traditional reading interface compared to the social media space. What if one interspersed multiple authors in short threads? What other methods might work to "trick" the human mind into having more fun and finding flow in deeper and more engaged reading states? (A toy sketch of such a text-as-feed structure follows below.)
Link this to the idea of fun in Sönke Ahrens' How to Take Smart Notes.
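Purely as an illustration of the idea above, here is a minimal, hypothetical sketch of a "text-as-feed" structure: a long text split into paragraph-sized posts that readers can reply to individually. None of this reflects Hypothes.is's actual API or any real micropub client; all names are invented.

```python
# Toy sketch of framing a long text as a social-media-like feed: each
# paragraph becomes a "post" that readers can reply to with their own notes.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    author: str
    text: str
    replies: list["Post"] = field(default_factory=list)

def text_to_feed(author: str, text: str) -> list[Post]:
    """Split a text into paragraph-sized posts, preserving reading order."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [Post(post_id=i, author=author, text=p) for i, p in enumerate(paragraphs)]

def reply(feed: list[Post], post_id: int, author: str, note: str) -> None:
    """Attach a reader's note to a specific paragraph-post."""
    feed[post_id].replies.append(
        Post(post_id=len(feed[post_id].replies), author=author, text=note)
    )

if __name__ == "__main__":
    source = "First paragraph of a dense text.\n\nSecond paragraph, denser still."
    feed = text_to_feed("Derrida", source)
    reply(feed, 1, "reader", "This is where the argument turns.")
    for post in feed:
        print(post.author, ":", post.text, "| replies:", len(post.replies))
```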
-
-
github.com github.com
-
Instead read this gems brief source code completely before use OR copy the code straight into your codebase.
Tags
- copy and paste programming
- software development: use of libraries vs. copying code into app project
- learning by reading the source
- having a deep understanding of something
- software development: use of libraries: only use if you've read the source and understand how it works
- read the source code
Annotators
URL
-
-
joanakompa.com joanakompa.com
-
David Chalmer’s beautiful metaphor of the ‘Extended Mind’ (Chalmers, 1998). Chalmers promotes the idea that media, such as, e.g., smartphones, have already begun to function as an extension to our mind, allowing us to navigate and manage an increasingly complex world
The extended mind of Chalmers is like the expansion of the sensory bubble in the Stop Reset Go / Deep Humanity framing. It can also be seen as an extension of our Umwelt (Uexküll).
-
- Mar 2022
-
docdrop.org docdrop.org
-
You know, when you look at the real power balance, if the Europeans stick together, if the Americans and the Europeans stick together and stop this culture war and stop tearing themselves apart, they have absolutely nothing to fear -- the Russians or anybody else.
Indeed, if we can unite ALL cultures together through the Common Human Denominators (CHD) that are the hallmark of being human, this is the cultural shift that needs to happen to navigate the existential polycrisis we now face. Deep Humanity praxis is a framework for exactly this.
Within the diversity of cultural lenses are common human denominators that unite all of the subgroups of Homo sapiens: Left vs. Right, Russian elites vs. Ukraine and the West, straight vs. LGBTQ+, West vs. Arabic, Black vs. White.
-
- Feb 2022
-
www.udacity.com www.udacity.com
- Jan 2022
-
scattered-thoughts.net scattered-thoughts.net
-
Exposing myself to addictive interactions trained me to self-interrupt - whenever I encountered a difficult decision or a tricky bug I would find myself switching to something easier and more immediately rewarding. Making progress on hard problems is only possible if I don't allow those habits to be reinforced.
Highlighting this, but really the whole section is almost perfectly written. Hardest is achieving your desired inner discipline and then having to fight with people who don't understand this shit (because their performance never matters, or they don't give a damn).
-
-
github.com github.com
-
having inconsistencies when all the "subtle" conditions were met is unfriendly. it requires the user to have much deeper understanding of the nuances of the language.
-
- Dec 2021
-
media-exp1.licdn.com media-exp1.licdn.com
-
Deep learning: A definition
-
- Nov 2021
-
docdrop.org docdrop.org
-
Professional musicians, concert pianists get to know this instrument deeply, intimately. And through it, they're able to create with sound in a way that just dazzles us, and challenges us, and deepens us. But if you were to look into the mind of a concert pianist, and you used all the modern ways of imaging it, an interesting thing that you would see 00:11:27 is how much of their brain is actually dedicated to this instrument. The ability to coordinate ten fingers. The ability to work the pedal. The feeling of the sound. The understanding of music theory. All these things are represented as different patterns and structures in the brain. And now that you have that thought in your mind, recognize that this beautiful pattern and structure of thought in the brain 00:11:52 was not possible even just a couple hundred years ago. Because the piano was not invented until the year 1700. This beautiful pattern of thought in the brain didn't exist 5,000 years ago. And in this way, the skill of the piano, the relationship to the piano, the beauty that comes from it was not a thinkable thought until very, very recently in human history. 00:12:17 And the invention of the piano itself was not an independent thought. It required a depth of mechanical engineering. It required the history of stringed instruments. It required so many patterns and structures of thought that led to the possibility of its invention and then the possibility of the mastery of its play. And it leads me to a concept I'd like to share with you guys, which I call "The Palette of Being." 00:12:44 Because all of us are born into this life having available to us the experiences of humanity that has come so far. We typically are only able to paint with the patterns of thoughts and the ways of being that existed before. So if the piano and the way of playing it is a way of being, this is a way of being that didn't exist for people 5,000 years ago. 00:13:10 It was a color in the Palette of Being that you couldn't paint with. Nowadays if you are born, you can actually learn the skill; you can learn to be a computer scientist, another color that was not available just a couple hundred years ago. And our lives are really beautiful for the following reason. We're born into this life. We have the ability to go make this unique painting with the colors of being that are around us at the point of our birth. 00:13:36 But in the process of life, we also have the unique opportunity to create a new color. And that might come from the invention of a new thing. A self-driving car. A piano. A computer. It might come from the way that you express yourself as a human being. It might come from a piece of artwork that you create. Each one of these ways of being, these things that we put out into the world 00:14:01 through the creative process of mixing together all the other things that existed at the point that we were born, allow us to expand the Palette of Being for all of society after us. And this leads me to a very simple way to go frame everything that we've talked about today. Because I think a lot of us understand that we exist in this kind of the marvelous universe, 00:14:30 but we think about this universe as we're this tiny, unimportant thing, there's this massive physical universe, and inside of it, there's the biosphere, and inside of that, that's society, and inside of us, we're just one person out of seven billion people, and how can we matter? 
And we think about this as like a container relationship, where all the goodness comes from the outside to the inside, and there's nothing really special about us. 00:14:56 But the Palette of Being says the opposite. It says that the way that we are in our lives, the way that we affect our friends and our family, begin to change the way that they are able to paint in the future, begins to change the way that communities then affect society, the way that society could then affect its relationship to the biosphere, and the way that the biosphere could then affect the physical planet 00:15:21 and the universe itself. And if it's a possible thing for cyanobacteria to completely transform the physical environment of our planet, it is absolutely a possible thing for us to do the same thing. And it leads to a really important question for the way that we're going to do that, the manner in which we're going to do that. Because we've been given this amazing gift of consciousness.
The Palette of Being is a very useful idea that is related to Cumulative Cultural Evolution (CCE) and autopoiesis. From CCE, humans are able to pass on new ideas from one generation to the next, made possible by the tool of inscribed language.
Peter Nonacs' group at UCLA, as well as Stuart West at Oxford, research Major Evolutionary Transitions (MET). West elucidates that modern hominids integrate the remnants of four major stages of MET that have occurred over deep time. Amanda Robins, a researcher in Nonacs' group, posits that our species of modern hominids is undergoing a Major Systems Transition (MST), due specifically to our development of inscribed language.
CCE gives rise to new technologies that shape our human environments on time frames far faster than biological evolutionary timeframes. New human experiences are created which have never before been encountered by human brains, and these feed back to affect our biological evolution as well, in the process of gene-culture coevolution (GCC), also known as Dual Inheritance theory. In this way, CCE and GCC are entangled. "Gene–culture coevolution is the application of niche-construction reasoning to the human species, recognizing that both genes and culture are subject to similar dynamics, and human society is a cultural construction that provides the environment for fitness-enhancing genetic changes in individuals. The resulting social system is a complex dynamic nonlinear system. " (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3048999/)
This metaphor of experiences constituting different colors on a Palette of Being is a powerful one that can contextualize human experiences from a deep time framework. One could argue that language usage automatically forces us into an anthropomorphic lens, for sophisticated language usage at the level of humans appears to be unique amongst our species. Within that constraint, the Palette of Being still provides us with a less myopic, less immediate and arguably less anthropomorphic view of human experience. It is philosophically problematic, however, in the sense that we can speculate about nonhuman modalities of being but never truly experience them. Philosopher Thomas Nagel wrote his classic paper "What it's like to be a bat" to illustrate this problem of experiencing the other. (https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf)
We can also leverage the Palette of Being in education. Deep Humanity (DH) BEing Journeys are a new kind of experiential, participatory contemplative practice and teaching tool designed to deepen our appreciation of what it is to be human. The polycrisis of the Anthropocene, especially the self-induced climate crisis and the Covid-19 pandemic have precipitated the erosion of stable social norms and reference frames, inducing another crisis, a meaning crisis. In this context, a re-education of embodied philosophy is seen as urgent to make sense of a radically shifting human reality.
Different human experiences presented as different colors of the Palette of Being situate our crisis in a larger context. One important Deep Humanity BEing journey that can help contextualize and make sense of our experiences is language. Once upon a time, language did not exist. As it gradually emerged, this color came to be added to our Palette of Being, and shaped the normative experiences of humanity in profound ways. Such profound shifts, lost over deep time, come to be taken for granted by modern conspecifics. When such particular colors of the Palette of Being are not situated in deep time and crisis ensues, that loss of contextualization and situatedness can be quite disruptive, de-centering, confusing and alienating.
Being aware of the colors in the Palette can help us shed light on the amazing aspects that culture has invisibly transmitted to us, helping us not take them for granted, and re-establish a sense of awe about our lives as human beings.
-
That's a picture of it in the background. And this organism has the special trick that we call "photosynthesis," the ability to go take energy from the sun and transform carbon dioxide into oxygen. And over the course of billions of years, so starting from two and a half billion years ago, little by little these bacteria spread across the planet 00:07:08 and converted all that carbon dioxide in the air into the oxygen that we now have. And it was a very slow process. First, they had to saturate the seas, then they had to saturate the oxygen that the earth would absorb, and only then, finally, could oxygen begin to build up in the atmosphere. So you see, just after about 900 million years ago, oxygen starts to build up in the atmosphere. And about 600 million years ago, something really amazing happens. 00:07:35 The ozone layer forms from the oxygen that has been released in the atmosphere. And it sounds like a small deal, like we talked about the ozone a couple decades ago, but it actually turns out that before the ozone layer existed, earth was not really able to sustain complex, multicellular life. We had single-celled organisms, we had a couple of simple, multicellular organisms, but we didn't really have anything like you or me. 00:07:59 And shortly after the ozone layer came into place, the earth was able to sustain complex multicellular life. There was a Cambrian explosion of life in the seas. And the first plants got onto land. In fact, there was actually no life on land ahead of that. Another way to see this is, this is kind of a chart of pretty much most of the animals that you guys are familiar with. 00:08:24 And right at the bottom in time is the formation of the ozone layer. Like nothing that you are familiar with today could exist without the contributions of these tiny organisms over those billions of years. And where are they now? Well actually, they never really left us. The direct descendants of the cyanobacteria were eventually captured by plants. And they're now called chloroplasts. 00:08:49 So this is a zoom-in of a plant leaf - and we probably ate some of these guys today - where tons of little chloroplasts are still trapped - contributing photosynthesis and making energy for the plants that continue to be the other half of our lungs on earth. And in this way, our breaths are very deeply united. Every out-breath is mirrored by the in-breath of a plant,
This would be nice to turn into a science lesson or to represent this in an experiential, participatory Deep Humanity BEing Journey. To do this, it would be important to elucidate the series of steps leading from one stated result to the next, showing how scientific methodology works to link the series of interconnected ideas together. Especially important is the science that glues everything together over deep time.
-
today I'm here to describe that everything really is connected, 00:02:02 and not in some abstract, esoteric way but in a very concrete, direct, understandable way. And I am going to do that with three different stories: a story of the heart, a story of the breath, and a story of the mind.
These three are excellent candidates for a multimedia Stop Reset Go (SRG) Deep Humanity (DH) BEing Journey.
It is relevant to introduce another concept that provides insight into a further aspect required for engaging a non-scientific audience: language.
The audience is important! BEing Journeys must take that into consideration. We can bias the presentation by implicit assumptions. How can we take those implicit assumptions into consideration and thereby expand the audience?
For a non-scientific audience, these arguments may not be so compelling. In this case, it is important to demonstrate how science can lead us to make such astounding predictions of times and space not directly observable to normative human perception.
Tags
- Amanda Robins
- we are all connected
- GCC
- Deep Humanity
- Major Evolutionary Transition
- Thomas Nagel
- peter nonacs
- What it's like to be a bat
- DH
- palette of being
- deep time
- SRG
- MET
- Major System Transition
- Stuart West
- MST
- big history
- BEing Journey
- Stop Reset Go
- CCE
- Gene culture coevolution
- Cumulative Cultural Evolution
Annotators
URL
-
-
www.newyorker.com www.newyorker.com
-
Since around 2010, Morton has become associated with a philosophical movement known as object-oriented ontology, or O.O.O. The point of O.O.O. is that there is a vast cosmos out there in which weird and interesting shit is happening to all sorts of objects, all the time. In a 1999 lecture, “Object-Oriented Philosophy,” Graham Harman, the movement’s central figure, explained the core idea:The arena of the world is packed with diverse objects, their forces unleashed and mostly unloved. Red billiard ball smacks green billiard ball. Snowflakes glitter in the light that cruelly annihilates them, while damaged submarines rust along the ocean floor. As flour emerges from mills and blocks of limestone are compressed by earthquakes, gigantic mushrooms spread in the Michigan forest. While human philosophers bludgeon each other over the very possibility of “access” to the world, sharks bludgeon tuna fish and icebergs smash into coastlines.We are not, as many of the most influential twentieth-century philosophers would have it, trapped within language or mind or culture or anything else. Reality is real, and right there to experience—but it also escapes complete knowability. One must confront reality with the full realization that you’ll always be missing something in the confrontation. Objects are always revealing something, and always concealing something, simply because they are Other. The ethics implied by such a strangely strange world hold that every single object everywhere is real in its own way. This realness cannot be avoided or backed away from. There is no “outside”—just the entire universe of entities constantly interacting, and you are one of them.
Object Oriented Ontology - Objects are always revealing something, and always concealing something, simply because they are Other. ... There is no "outside" - just the entire universe of entities constantly interacting, and you are one of them.
This needs to be harmonized with the Stop Reset Go (SRG) complementary Human Inner Transformation (HIT) and Social Outer Transformation (SOT) strategy.
-
The next day, I assumed we would begin our quest to find signs of hyperobjects in and around the city of Houston
This would make an excellent Stop Reset Go (SRG) Deep Humanity (DH) BEing Journey
-
-
www.annualreviews.org www.annualreviews.org
-
Recent research suggests that globally, the wealthiest 10% have been responsible for as much as half of the cumulative emissions since 1990 and the richest 1% for more than twice the emissions of the poorest 50% (2).
Even more recent research adds to this:
See the annotated Oxfam report: Linked In from the author: https://hyp.is/RGd61D_IEeyaWyPmSL8tXw/www.linkedin.com/posts/timgore_inequality-parisagreement-emissionsgap-activity-6862352517032943616-OHL- Annotations on full report: https://hyp.is/go?url=https%3A%2F%2Foxfamilibrary.openrepository.com%2Fbitstream%2Fhandle%2F10546%2F621305%2Fbn-carbon-inequality-2030-051121-en.pdf&group=__world__
and the annotated Hot or Cool report: https://hyp.is/KKhrLj_bEeywAIuGCjROAg/hotorcool.org/hc-posts/release-governments-in-g20-countries-must-enable-1-5-aligned-lifestyles/ https://hyp.is/zo0VbD_bEeydJf_xcudslg/hotorcool.org/hc-posts/release-governments-in-g20-countries-must-enable-1-5-aligned-lifestyles/
This suggests that perhaps the failure of the COP meetings may be partially due to focusing on the wrong level and demographics. The top 1% and 10% live in every country, yet the wealthy class is not a focus area of COP negotiations per se; the COP meetings are focused on nation states. Interventions targeting this demographic may be better suited to the scale of individuals or civil society.
Many studies show there are no extra gains in happiness beyond a certain point of material wealth, and point to the harmful impacts of wealth accumulation, known as affluenza, and show many health effects: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1950124/, https://theswaddle.com/how-money-affects-rich-people/, https://www.marketwatch.com/story/the-dark-reasons-so-many-rich-people-are-miserable-human-beings-2018-02-22, https://www.nbcnews.com/better/pop-culture/why-wealthy-people-may-be-less-successful-love-ncna837306, https://www.apa.org/research/action/speaking-of-psychology/affluence,
A Human Inner Transformation approach based on an open source praxis called Deep Humanity is one example of helping to transform affluenza and leveraging it to accelerate transition.
Anderson has contextualized the scale of such an impact in his other presentations but not here. A recent example is the temporary emission decreases due to covid 19. A 6.6% global decrease was determined from this study: https://www.nature.com/articles/d41586-021-00090-3#:~:text=After%20rising%20steadily%20for%20decades,on%20daily%20fossil%20fuel%20emissions. with the US contributing 13% due to lockdown impacts on vehicular travel (both air and ground). After the pandemic ends, experts expect a strong rebound effect.
-
A final cluster gathers lenses that explore phenomena that are arguably more elastic and with the potential to both indirectly maintain and explicitly reject and reshape existing norms. Many of the topics addressed here can be appropriately characterized as bottom-up, with strong and highly diverse cultural foundations. Although they are influenced by global and regional social norms, the expert framing of institutions, and the constraints of physical infrastructure (from housing to transport networks), they are also domains of experimentation, new norms, and cultural change. Building on this potential for either resisting or catalyzing change, the caricature chosen here is one of avian metaphor and myth: the Ostrich and Phoenix cluster. Ostrich-like behavior—keeping heads comfortably hidden in the sand—is evident in different ways across the lenses of inequity (Section 5.1), high-carbon lifestyles (Section 5.2), and social imaginaries (Section 5.3), which make up this cluster. Yet, these lenses also point to the power of ideas, to how people can thrive beyond dominant norms, and to the possibility of rapid cultural change in societies—all forms of transformation reminiscent of the mythological phoenix born from the ashes of its predecessor. It is conceivable that this cluster could begin to redefine the boundaries of analysis that inform the Enabler cluster, which in turn has the potential to erode the legitimacy of the Davos cluster. The very early signs of such disruption are evident in some of the following sections and are subsequently elaborated upon in the latter part of the discussion.
The bottom-up nature of this cluster makes it the focus area for civil society movements, human inner transformation (HIT) approaches and cultural methodologies.
Changing the mindset or paradigm from which the system arises is the most powerful place to intervene in a system as Donella Meadows pointed out decades ago in her research on system leverage points: https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/
The sleeping giant of billions of potential change actors remains dormant. How do we awaken and mobilize them? If we can do this, it could constitute the emergence of a third, as yet unidentified, actor in system change.
The Stop Reset Go (SRG) initiative is focused on this thematic lens, bottom-up, rapid whole system change, with Deep Humanity (DH) as the open-source praxis to address the needed shift in worldview advocated by Meadows. One of the Deep Humanity programs is based on addressing the psychological deficits of the wealthy, and transforming them into heroes for the transition, by redirecting their WEALTH-to-WELLth.
There are a number of strategic demographics that can be targeted in methodical evidence-based ways. Each of these is a leverage point and can bring about social tipping points.
A number of 2021 reports characterize the outsized impact of the top 1% and top 10% of humanity. Unless their luxury, high ecological footprint behavior is reined in, humanity won't stand a chance. Annotation of Oxfam report: https://hyp.is/go?url=https%3A%2F%2Foxfamilibrary.openrepository.com%2Fbitstream%2Fhandle%2F10546%2F621305%2Fbn-carbon-inequality-2030-051121-en.pdf&group=__world__ Annotation of Hot or Cool report: https://hyp.is/go?url=https%3A%2F%2Fhotorcool.org%2Fhc-posts%2Frelease-governments-in-g20-countries-must-enable-1-5-aligned-lifestyles%2F&group=__world__
-
Fundamental features of human psychology can constrain the perceived personal relevance and importance of climate change, limiting both action and internalization of the problem. Cognitive shortcuts developed over millennia make us ill-suited in many ways to perceiving and responding to climate change (152), including a tendency to place less emphasis on time-delayed and physically remote risks and to selectively downplay information that is at odds with our identity or worldview (153). Risk perception relies on intuition and direct perceptual signals (e.g., an immediate, tangible threat), whereas for most high-emitting households in the Global North, climate change does not present itself in these terms, except in the case of local experiences of extreme weather events. Where strong concern does exist, this tends to be linked to care for others (154) combined with knowledge about the causes and possible consequences of climate change (155).
This is indeed a problematic feature of human evolution. It has given rise to what Timothy Morton refers to as "hyperobjects", objects of such vastness in space and time that it defeats human mechanisms of perception at local scale: https://www.upress.umn.edu/book-division/books/hyperobjects https://hyp.is/xROjpD_jEey4a6-Urbh4_Q/www.newyorker.com/culture/persons-of-interest/timothy-mortons-hyper-pandemic
This psychological constraint is worth demonstrating to individuals to illustrate how we construct our values and responses. These constraints can be demonstrated in a vivid way within the context of Deep Humanity open source praxis BEing journeys.
As in the New Yorker interview with Morton, we can take Deep Humanity participants on BEing journeys, walkabouts to identify hyperobjects.
Hyperobjects are also cognitive and highly abstract in nature. This is a clue to another idea that could shed light on the problem of appreciating hyperobjects such as climate change, and that is Jakob von Uexküll's idea of the Umwelt:
"Uexküll was particularly interested in how living beings perceive their environment(s). He argued that organisms experience life in terms of species-specific, spatio-temporal, 'self-in-world' subjective reference frames that he called Umwelt (translated as surrounding-world,[9] phenomenal world,[10] self-world,[10] environment[11] - lit. German environment). These Umwelten (plural of Umwelt) are distinctive from what Uexküll termed the "Umgebung" which would be the living being's surroundings as seen from the likewise peculiar perspective or Umwelt of the human observer. Umwelt may thus be defined as the perceptual world in which an organism exists and acts as a subject. By studying how the senses of various organisms like ticks, sea urchins, amoebae, jellyfish and sea worms work, he was able to build theories of how they experience the world. Because all organisms perceive and react to sensory data as signs, Uexküll argued that they were to be considered as living subjects. This argument was the basis for his biological theory in which the characteristics of biological existence ("life") could not simply be described as a sum of its non-organic parts, but had to be described as subject and a part of a sign system.
The biosemiotic turn in Jakob von Uexküll's analysis occurs in his discussion of the animal's relationship with its environment. The Umwelt is for him an environment-world which is (according to Giorgio Agamben), "constituted by a more or less broad series of elements [called] "carriers of significance" or "marks" which are the only things that interest the animal". Agamben goes on to paraphrase one example from Uexküll's discussion of a tick, saying,
"...this eyeless animal finds the way to her watchpoint [at the top of a tall blade of grass] with the help of only its skin’s general sensitivity to light. The approach of her prey becomes apparent to this blind and deaf bandit only through her sense of smell. The odor of butyric acid, which emanates from the sebaceous follicles of all mammals, works on the tick as a signal that causes her to abandon her post (on top of the blade of grass/bush) and fall blindly downward toward her prey. If she is fortunate enough to fall on something warm (which she perceives by means of an organ sensible to a precise temperature) then she has attained her prey, the warm-blooded animal, and thereafter needs only the help of her sense of touch to find the least hairy spot possible and embed herself up to her head in the cutaneous tissue of her prey. She can now slowly suck up a stream of warm blood."[12]
Thus, for the tick, the Umwelt is reduced to only three (biosemiotic) carriers of significance: (1) The odor of butyric acid, which emanates from the sebaceous follicles of all mammals, (2) The temperature of 37 degrees Celsius (corresponding to the blood of all mammals), (3) The hairiness of mammals." ( From Wikipedia: https://en.wikipedia.org/wiki/Jakob_Johann_von_Uexk%C3%BCll)
The human Umwelt limits us to a relatively small range of sensed signs; CO2 molecules are not among them. We rely on scientific narratives, but these are far removed from direct sensing, residing in the field of conceptualization and abstraction. We have evolved to respond to danger that is sensed, less so to danger that is conceptualized.
-
-
hotorcool.org hotorcool.org
-
This report is an essential companion for policymakers working at the intersection of society and climate change.”
Policy alone may not be sufficient to change this deeply ingrained luxury lifestyle. It may require deep and meaningful education of one's deeper humanity, leading to a shift in worldviews and value systems that deprioritizes materially luxurious lifestyles in favor of redistributing that wealth to build the future wellbeing ecocivilization. Transform the wealthy into the heroes of the transition. Shaming them and labeling them as victims will only create distance. Rather, the most constructive approach is a positive one that shifts our own perspective from holding them as villains to heroes.
-
Dr. Lewis Akenji, the lead author of the report says: “Talking about lifestyle changes is a hot-potato issue to policymakers who are afraid to threaten the lifestyles of voters. This report brings a science based approach and shows that without addressing lifestyles we will not be able to address climate change.”
This underscores the critical nature of dealing with the cultural shift of luxury lifestyle. It is recognized as a "hot potato" issue, which implies policy change may be slow and difficult.
Policy changes and new legal tools are ways to force an unwilling individual or group into a behavior change.
A more difficult but potentially more effective way to achieve this cultural shift is based on Donella Meadows' leverage points: https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/ which identifies the top leverage point as: The mindset or paradigm out of which the system — its goals, power structure, rules, its culture — arises.
The Stop Reset Go (SRG) open collective project applies the Deep Humanity (DH) Human Inner Transformation (HIT) process to effect impactful Social Outer Transformation (SOT). This is based on the inner-to-outer flow: The heart feels, the mind thinks, the body acts and a social impact manifests in our shared, public collective human reality.
Meadows top leverage point identifies narratives, stories and value systems that are inner maps to our outer behavior as critical causal agents to transform.
We need to take a much deeper look at the psyche of the luxury lifestyle. Philosopher David Loy has done extensive research on this already: https://www.davidloy.org/media.html
Loy is a Buddhist scholar, but Buddhist philosophy can be understood secularly and across all religions.
Loy cites the work of cultural anthropologist Ernest Becker, especially his groundbreaking Pulitzer-prize-winning book: The Denial of Death. Becker wrote:
"Man is literally split in two: he has an awareness of his own splendid uniqueness in that he sticks out of nature with a towering majesty, and yet he goes back into the ground a few feet in order to blindly and dumbly rot and disappear forever. It is a terrifying dilemma to be in and to have to live with. The lower animals are, of course, spared this painful contradiction, as they lack a symbolic identity and the self-consciousness that goes with it. They merely act and move reflexively as they are driven by their instincts. If they pause at all, it is only a physical pause; inside they are anonymous, and even their faces have no name. They live in a world without time, pulsating, as it were, in a state of dumb being. This is what has made it so simple to shoot down whole herds of buffalo or elephants. The animals don't know that death is happening and continue grazing placidly while others drop alongside them. The knowledge of death is reflective and conceptual, and animals are spared it. They live and they disappear with the same thoughtlessness: a few minutes of fear, a few seconds of anguish, and it is over. But to live a whole lifetime with the fate of death haunting one's dreams and even the most sun-filled days—that's something else."
But Loy goes beyond mortality salience and strikes to the heart of the psychological construction of the Self, which is at the root of the crisis exacerbated by our consumption and materialism.
To reach the wealthy in a compassionate manner, we must recognize that the degree of wealth and materialist accumulation may be in many cases proportional to the anxiety of dying, the anxiety of the groundlessness of the Self construction itself.
Helping all humans to liberate from this anxiety is monumental, and also applies to the wealthy. The release of this anxiety will naturally result in breaking through the illusion of materialism, seeing its false promises.
Those of the greatest material wealth are often also of the greatest spiritual poverty. As we near the end of our lives, materialism's promise may begin to lose its luster and our deepest unanswered questions begin to regain prominence.
At the end of the day, policy change may only effect so much change. What is really required is a reeducation campaign that results in voluntary behavior change that significantly reduces high impact luxury lifestyles. An exchange for something even more valued is a potential answer to this dilemma.
-
- Oct 2021
-
www.csmonitor.com www.csmonitor.com
-
A recent survey found that only 14% of people they surveyed in the United States talk about climate change. A previous Yale study found that 35% either discuss it occasionally or hear somebody else talk about it. Those are low for something that over 70% of people are worried about.
Conversation is not happening! There is a leverage point in holding open conversations where we come to understand the language of different cultural groups. Finding common ground, the common human denominators (CHD) between polarized groups, is the lynchpin.
-
For a talk at one conservative Christian college, Dr. Hayhoe – an atmospheric scientist, professor of political science at Texas Tech University, and the chief scientist for The Nature Conservancy – decided to emphasize how caring about climate change is in line with Christian values and, ultimately, is “pro-life” in the fullest sense of that word. Afterward, she says, people “were able to listen, acknowledge it, and think about approaching [climate change] a little differently.”
We often talk about the same things, share the same values, have the same common human denominators, but couched in different language. It is critical to get to the root of what we have in common in order to establish meaningful dialogue.
-
Climate scientist Katharine Hayhoe stresses the need for finding shared values, rather than trying to change someone’s mind, as a basis for productive conversations
What first appears as difference may actually emerge from consciousnesses that have more in common than one first realizes. Finding the common ground, what we refer to within the open source Deep Humanity praxis as the common human denominators (CHD), becomes the critical climate change communication leverage point for establishing genuine communication channels between politically polarized groups.
This is aligned to the Stop Reset Go project and its open source offshoot, Deep Humanity praxis that seeks conversations and personal and collective journeys to appreciate Common Human Denominators that are salient for all participants. It also underscores the value of integrating with the Indieverse Knowledge system, with its focus on symathessy embedded directly into its codebase.
-
I was speaking in Iowa, and I was asked, “How do you talk to people in Iowa about polar bears?” I said, “You don’t; you talk to them about corn.” If we begin a conversation with someone with something we already agree on, then the subtext is: “You care about this, and I care too. We have this in common.”
This stresses the importance of applying Deep Humanity wisely by finding the most compelling, salient and meaningful common human denominators appropriate for each conversational context. Which group are we interacting with? What are the major landmarks embedded in THEIR salience landscape?
The BEing journeys we craft will only be meaningful and impactful if they are appropriately matched to the cultural context.
The whole mind-body understanding of how we cognitively construct our reality, via Deep Humanity BEing journeys, can help shift our priorities.
-
I am frequently shamed for not doing enough. Some of that comes from the right side of the [political] spectrum, but increasingly a larger share of that shaming comes from people at the opposite end of the spectrum, who are so worried and anxious about climate impacts that their response is to find anyone who isn’t doing precisely what they think they should be doing and shame them.
Love, or recognizing the other person in the other tribe as sacred, is going to connect with that person because, after all, all of us are human INTERbeings, and love is the affective variable that connects us while shame is a variable that DISconnects us. Love is, in fact, one of our most powerful common human denominators.
Tags
- Common human denominators
- climate change communication
- Climate communication
- Deep humanity
- common denominators
- deep humanity
- climate communication
- stop rest go
- katherine hayhoe
- climate conversation
- climate change conversations
- CHD
- effective climate communication
- common human denominator
- indieverse
- symathessy
- polarization
- BEing journeys
- Shaming
- Depolarization
- human interbeing
- Katherine Hayhoe
Annotators
URL
-
-
bafybeiery76ov25qa7hpadaiziuwhebaefhpxzzx6t6rchn7b37krzgroi.ipfs.dweb.link bafybeiery76ov25qa7hpadaiziuwhebaefhpxzzx6t6rchn7b37krzgroi.ipfs.dweb.link
-
Fundamental features of human psychology can constrain the perceived personal relevance and importance of climate change, limiting both action and internalization of the problem. Cognitive shortcuts developed over millennia make us ill-suited in many ways to perceiving and responding to climate change (152), including a tendency to place less emphasis on time-delayed and physically remote risks and to selectively downplay information that is at odds with our identity or worldview (153). Risk perception relies on intuition and direct perceptual signals (e.g., an immediate, tangible threat), whereas for most high-emitting households in the Global North, climate change does not present itself in these terms, except in the case of local experiences of extreme weather events.
This psychological constraint is worth demonstrating to individuals to illustrate how we construct our values and responses. These constraints can be demonstrated in a vivid way within the context of Deep Humanity BEing journeys.
-
A final cluster gathers lenses that explore phenomena that are arguably more elastic and with the potential to both indirectly maintain and explicitly reject and reshape existing norms. Many of the topics addressed here can be appropriately characterized as bottom-up, with strong and highly diverse cultural foundations.
The bottom-up nature of this cluster makes it the focus area for civil society movements, inner transformation approaches and cultural methodologies. Changing the mindset or paradigm from which the system arises is the most powerful place to intervene in a system as Donella Meadows pointed out decades ago in her research on system leverage points: https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/
The Stop Reset Go initiative is focused on this thematic lens, bottom-up, rapid whole system change, with Deep Humanity as the open-source praxis to address the needed shift in worldview. One of the Deep Humanity programs is based on addressing the psychological deficits of the wealthy, and transforming them into heroes for the transition, by redirecting their WEALTH-to-WELLth.
-
Recent research suggests that globally, the wealthiest 10% have been responsible for as much as half of the cumulative emissions since 1990 and the richest 1% for more than twice the emissions of the poorest 50% (2).
This suggests that perhaps the failure of the COP meetings may be partially due to focusing on the wrong level and demographics. The top 1% and 10% live in every country, yet the wealthy class is not a focus area of COP negotiations per se. Interventions targeting this demographic may be better suited to the scale of individuals or civil society.
Many studies show there are no extra gains in happiness beyond a certain point of material wealth, and point to the harmful impacts of wealth accumulation, known as affluenza, and show many health effects: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1950124/, https://theswaddle.com/how-money-affects-rich-people/, https://www.marketwatch.com/story/the-dark-reasons-so-many-rich-people-are-miserable-human-beings-2018-02-22, https://www.nbcnews.com/better/pop-culture/why-wealthy-people-may-be-less-successful-love-ncna837306, https://www.apa.org/research/action/speaking-of-psychology/affluence,
A Human Inner Transformation approach based on an open source praxis called Deep Humanity is one example of helping to transform affluenza and leveraging it to accelerate the transition.
-
-
goertzel.org goertzel.org
-
With the aid of the concept of opposing pairs of magnetic poles, we can clearly contribute in a significant way to our expression and understanding of basic relationships in the overall magnetic field. We are proposing to look at soma and significance in a similar way. That is to say, we regard them as two aspects introduced at an arbitrary conceptual cut in the flow of the field of reality as a whole. These aspects are distinguished only in thought, but this distinction helps us to express and understand the whole flow of reality.
When Bohm writes "These aspects are distinguished only in thought, but this distinction helps us to express and understand the whole flow of reality", it reveals the true nature of words: they only ever reveal aspects of the whole. They are NOT the whole. Hence, as linguistic animals, we are constantly dissecting parts of the whole of reality.
Deep Humanity open-source praxis linguistic BEing journeys can be designed to reveal this ubiquitous aspectualizing nature of language, to help us all better understand what we are as linguistic beings.
-
- Aug 2021
-
awarm.space awarm.space
-
I like the differentiation that Jared has made here on his homepage with categories for "fast" and "slow".
It's reminiscent of the system 1 (fast) and system 2 (slow) ideas behind Kahneman and Tversky's work in behavioral economics. (See Thinking, Fast and Slow.)
It's also interesting in light of this tweet which came up recently:
I very much miss the back and forth with blog posts responding to blog posts, a slow moving argument where we had time to think.
— Rachel Andrew (@rachelandrew) August 22, 2017
Because the Tweet was shared out of context several years later, someone (accidentally?) replied to it as if it were contemporaneous. When called out for not watching the date of the post, their reply was "you do slow web your way…"
This gets one thinking. Perhaps it would help more people's contextual thinking if more sites specifically labeled their posts as fast and slow (or gave a 1-10 rating?). Sometimes the length of a response is an indicator of the thought put into it, though not always, as there's also the oft-quoted aphorism: "If I Had More Time, I Would Have Written a Shorter Letter".
The ease of use of the UI on Twitter seems to broadly make it a platform for "fast" posting which can often cause ruffled feathers, sour feelings, anger, and poor communication.
What if there were posting UIs (or micropub clients) that would hold onto your responses for a few hours, days, or even a week and then remind you about them after that time had passed to see if they were still worth posting? This is a feature based on Abraham Lincoln's idea of a "hot letter" or angry letter, which he advised people to write often, but never send.
Where is the social media service for hot posts that save all your vituperation, but don't show them to anyone? Or which maybe posts them anonymously?
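One possible shape for this, sketched purely as an illustration (not any real micropub client or service API): a queue that holds drafts for a cooling-off period and only resurfaces them for review afterwards. The class and field names, and the 24-hour default, are assumptions.

```python
# Rough sketch of a "hot letter" posting queue: drafts are held for a
# cooling-off period and only surfaced for review after that delay.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Draft:
    text: str
    created_at: datetime
    cool_off: timedelta = timedelta(hours=24)

    def ready_for_review(self, now: datetime) -> bool:
        """A draft is only shown back to its author after the cooling-off period."""
        return now - self.created_at >= self.cool_off

class HotLetterQueue:
    def __init__(self) -> None:
        self.drafts: list[Draft] = []

    def write(self, text: str, now: datetime, cool_off_hours: int = 24) -> None:
        # Nothing is published here; the draft just goes into the queue.
        self.drafts.append(Draft(text, now, timedelta(hours=cool_off_hours)))

    def review(self, now: datetime) -> list[Draft]:
        """Return drafts whose cooling-off period has elapsed, so the author
        can decide to publish, edit, or quietly delete them."""
        return [d for d in self.drafts if d.ready_for_review(now)]

if __name__ == "__main__":
    queue = HotLetterQueue()
    queue.write("An angry reply I probably shouldn't send.", datetime(2021, 8, 1, 9, 0))
    # A day later, the draft resurfaces for a calmer second look.
    for draft in queue.review(datetime(2021, 8, 2, 10, 0)):
        print("Still worth posting?", draft.text)
```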
The opposite of some of this are the partially baked or even fully thought out posts that one hears about anecdotally, but which the authors say they felt weren't finished and thus didn't publish. Wouldn't it be better to hit publish on these than those nasty quick replies? How can we create UI for this?
I saw a sitcom a few years ago where a girl admonished her friend (an oblivious boy) for liking really old Instagram posts of a girl he was interested in. She said that deep-liking old photos was an obvious and overt sign of flirting.
If this is the case then there's obviously a social standard of sorts for this, so why not hold your tongue in the meanwhile, and come up with something more thought out to send your digital love to someone instead of providing a (knee-)jerk reaction?
Of course now I can't help but think of the annotations I've been making in my copy of Lucretius' On the Nature of Things. Do you suppose that Lucretius knows I'm in love?
-
- Jun 2021
-
www.theatlantic.com www.theatlantic.com
-
Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
I like this concept of deep reading.
Compare/contrast with close reading and distant reading.
What other types of reading might we imagine?
-
The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
But are Google's tools really making us more productive thinkers? One might argue that it's attempting to do all the work for us and take the process of thought out altogether. We're just rats in a maze hitting a bar to get the food pellet.
What if the end is a picture of us as the people on the space ship at the end of WALL-E? What if it's keeping us from thinking?
What if it's making us more shallow thinkers rather than deep thinkers?
Cross reference P.M. Forni.
-
-
stackoverflow.com stackoverflow.com
-
Programmers should be encouraged to understand what is correct, why it is correct, and then propagate.
new tag?:
- understand why it is correct
Tags
- programming languages: learning/understanding the subtleties
- quotable
- having a deep understanding of something
- annotation meta: may need new tag
- combating widespread incorrectness/misconception by consistently doing it correctly
- programming: understand the language, don't fear it
- good advice
- spreading/propagating good ideas
Annotators
URL
-
- May 2021
-
80000hours.org 80000hours.org
-
career capital
You must first generate this capital by becoming good at something rare and valuable. It is something that makes you hard to replace and is therefore the result of putting effort into developing skills that differentiate you from others.
Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World (1 edition). Grand Central Publishing.
-
-
syslog.ravelin.com syslog.ravelin.com
-
Before we dive into the details of the actual migration, let’s discuss the theory behind it.
-
-
www.opendemocracy.net www.opendemocracy.net
- Apr 2021
-
yashuseth.blog yashuseth.blog
-
tutorial article on BERT
-
-
linusakesson.net linusakesson.net
-
placesjournal.org placesjournal.org
-
If we accept the idea that the entire surface of the earth is migratory, then why not landscapes in particular? A landscape — as a scene, landschap, ecosystem, and socio-political territory — is a material assembly of moving entities, a dynamic medium which changes in quality and structure through the aggregate movements or actions of the things that constitute it.
-
- Mar 2021
-
arxiv.org arxiv.org
-
Kozlowski, Diego, Jennifer Dusdal, Jun Pang, and Andreas Zilian. ‘Semantic and Relational Spaces in Science of Science: Deep Learning Models for Article Vectorisation’. ArXiv:2011.02887 [Physics], 5 November 2020. http://arxiv.org/abs/2011.02887.
-
-
github.com github.com
-
Right now major changes require a deep and broad understanding of the codebase and how things get done.
-
- Feb 2021
-
moritz.digital moritz.digital
-
The RECALL Augmenting Memory architecture. It can, for example, help users restore context before their next conference or class. The student, walking to a lecture, could be primed with a summary of it through his smart glasses, surfacing relevant information. The description of the "Memory vault" in this architecture exhibits a high similarity to Vannevar Bush's Memex.
It's these deep learning breakthroughs that now make a lot of these memex and semantic web technologies accessible. This is a note I also referenced in the SWTs whitepaper for Cortex. Great to see Moritz and RemNote picking up on this change as well.
Tags
Annotators
URL
-
-
stackoverflow.com stackoverflow.com
-
Although one thing you want to avoid is using frames in such a manner that the content of the site is in the frame and a menu is outside of the frame. Although this may seem convenient, all of your pages become unbookmarkable.
-
-
stackoverflow.com stackoverflow.com
-
Iframes can have similar issues as frames and inconsiderate use of XMLHttpRequest: They break the one-document-per-URL paradigm, which is essential for the proper functioning of the web (think bookmarks, deep-links, search engines, ...).
-
The most striking such issue is probably that of deep linking: It's true that iframes suffer from this to a lesser extent than frames, but if you allow your users to navigate between different pages in the iframe, it will be a problem.
-
- Jan 2021
-
arxiv.org arxiv.org
-
during the backward pass, feedback connections are used in concert with forward connections to dynamically invert the forward transformation, thereby enabling errors to flow backward
Tags
Annotators
URL
-
-
blogs.scientificamerican.com blogs.scientificamerican.com
-
Johnson: Earlier I interviewed you about patrilocal residence patterns and how that alters women’s sexual choices. In contrast, matrilocal societies are more likely to be egalitarian. What are the factors that lead to the differences between these two systems?Hrdy: I think in societies where women have more say, and that does tend to be in societies that are matrilocal and with matrilineal descent or where, as it is among many small scale hunter-gatherers, you have porous social boundaries and flexible residence patterns. If I had to say what kind of residence patterns our ancestors had it would have been very flexible, what Frank Marlowe calls multilocal.
Matrilocality, matrilineality, and egalitarianism.
-
- Dec 2020
-
sites.research.google sites.research.google
-
www.unicef.org www.unicef.org
-
Appeal highlights
Gives an obvious area for facts (logos), which can also build ethos by showing how much information they have
-
UNICEF’s Humanitarian Action for Children appeal helps support the agency’s work as it provides conflict- and disaster-affected children with access to water, sanitation, nutrition, education, health and protection services. Read more about this year’s appeal here.
Gives UNICEF's overall mission, to show its wide range of efforts
-
Key planned results for 2020
Shows what they want to achieve
-
Funding requirements for 2020
Shows how much money is needed to achieve those results
-
- Nov 2020
-
stackoverflow.com stackoverflow.com
-
For anyone interested in reading more about it, Stack Overflow user kangax has written an incredibly in-depth blog post about the delete statement on their blog, Understanding delete. It is highly recommended.
-
- Oct 2020
-
stackoverflow.com stackoverflow.com
-
However, this will only walk the object shallowly. To do it deeply, you can use recursion:
-
Note that if you're willing to use a library like lodash/underscore.js, you can use _.pick instead. However, you will still need to use recursion to filter deeply, since neither library provides a deep filter function.
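For illustration, a rough sketch of the recursive deep-filter idea in Python; this is an analogue of the pattern described above, not a port of lodash or underscore.js:

```python
# Keep only the keys for which a predicate returns True, descending into
# nested dicts so the filter applies deeply, not just at the top level.
def deep_filter(obj, keep):
    """Return a copy of a nested dict keeping only keys where keep(key, value) is True."""
    if not isinstance(obj, dict):
        return obj  # leaves (numbers, strings, lists, ...) are returned as-is
    filtered = {}
    for key, value in obj.items():
        if keep(key, value):
            # Recurse so that nested dicts are filtered too.
            filtered[key] = deep_filter(value, keep)
    return filtered

record = {
    "id": 1,
    "_debug": True,
    "shipping": {"_debug": False, "address": {"street": "Main St", "_debug": 0}},
}
print(deep_filter(record, lambda k, v: not k.startswith("_")))
# {'id': 1, 'shipping': {'address': {'street': 'Main St'}}}
```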
-
-
final-form.org final-form.org
-
Note that the fields are kept in a flat structure, so a "deep" field like "shipping.address.street" will be at the key "shipping.address.street", with the dots included.
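A small sketch of how such dotted flat keys might be produced from a nested structure; this is illustrative only, not Final Form's own implementation:

```python
# Flatten a nested dict into a single dict whose keys are dot-separated paths,
# so shipping -> address -> street becomes the literal key "shipping.address.street".
def flatten(values, prefix=""):
    """Flatten a nested dict into a flat dict with dot-separated keys."""
    flat = {}
    for key, value in values.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # descend, extending the dotted path
        else:
            flat[path] = value
    return flat

print(flatten({"shipping": {"address": {"street": "Main St"}}}))
# {'shipping.address.street': 'Main St'}
```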
-
-
basarat.gitbook.io basarat.gitbook.io
-
-
More in-depth examples definitely sound like a good idea. I've seen cookbooks quite a few times already and they are always helpful.
-
- Sep 2020
-
www.newscientist.com www.newscientist.com
-
Lu, D. (n.d.). AI can edit video in real time to sync new audio to people’s lips. New Scientist. Retrieved September 14, 2020, from https://www.newscientist.com/article/2254326-ai-can-edit-video-in-real-time-to-sync-new-audio-to-peoples-lips/
-
-
artbreeder.com artbreeder.com
-
-
daverupert.com daverupert.com
-
There’s a lot of value in slow thinking. You use the non-lizard side of your brain. You make more deliberate decisions. You prioritize design over instant gratification. You can “check” your gut instincts and validate your hypothesis before incurring mountains of technical debt.
Slow thinking is comparable to Deep Work.
-
- Jun 2020
-
-
Murphy, C., Laurence, E., & Allard, A. (2020). Deep learning of stochastic contagion dynamics on complex networks. ArXiv:2006.05410 [Cond-Mat, Physics:Physics, Stat]. http://arxiv.org/abs/2006.05410
-
- May 2020
-
www.typescriptlang.org www.typescriptlang.org
-
We’ve dived deep here, with a series of different pull requests that optimize certain pathological cases involving large unions, intersections, conditional types, and mapped types.
-
-
support.google.com support.google.com
-
In depth
-
-
developer.android.com developer.android.com
-
Digital Asset Links protocol treats subdomains in your intent filters as unique, separate hosts
-
-
developer.android.com developer.android.com
-
Android App Links on Android 6.0 (API level 23) and higher allow an app to designate itself as the default handler of a given type of link
Tags
Annotators
URL
-
-
epjdatascience.springeropen.com epjdatascience.springeropen.com
-
Vilella, S., Paolotti, D., Ruffo, G. et al. News and the city: understanding online press consumption patterns through mobile data. EPJ Data Sci. 9, 10 (2020). https://doi.org/10.1140/epjds/s13688-020-00228-9
-
-
www.sciencedirect.com www.sciencedirect.com
-
This is done by Tsinghua University.
-
- Apr 2020
-
-
Punn, N. S., Sonbhadra, S. K., & Agarwal, S. (2020). COVID-19 Epidemic Analysis using Machine Learning and Deep Learning Algorithms [Preprint]. Health Informatics. https://doi.org/10.1101/2020.04.08.20057679
-
- Mar 2020
-
researchworkspace.com researchworkspace.com
-
Teledyne
I can make comments here; Teledyne is a high-tech-sounding name.
-
- Jan 2020
-
www.diplomaticourier.com www.diplomaticourier.com
-
This is just the tip of the innovation iceberg in a new deep-truth reality that is here today
In a moment of crisis for truth and trust, it is encouraging to encounter the term deep-truth; it may offer a term that is both accessible and powerful in advocating not just against what we despise but for what we hope to see in a better world.
-
-
- Dec 2019
-
-
In 2016, the journal Science published a remarkable bit of insight: It's possible to reduce prejudice, and sway opinions on anti-transgender legislation, with one 10-minute conversation. What's more, the researchers found that the change of heart can last at least three months and is resistant to anti-transgender attack ads. It worked because the canvassers in the study did a simple thing: they listened.
deep canvassing
-
- Nov 2019
-
gist.github.com gist.github.com
-
They called it a deep dive here:
Originally published in the react-playbook.
-
- Oct 2019
-
neuralnetworksanddeeplearning.com neuralnetworksanddeeplearning.com
-
As a prototype it hits a sweet spot: it's challenging - it's no small feat to recognize handwritten digits - but it's not so difficult as to require an extremely complicated solution, or tremendous computational power. Furthermore, it's a great way to develop more advanced techniques, such as deep learning. And so throughout the book we'll return repeatedly to the problem of handwriting recognition. Later in the book, we'll discuss how these ideas may be applied to other problems in computer vision, and also in speech, natural language processing, and other domains.Of course, if the point of the chapter was only to write a computer program to recognize handwritten digits, then the chapter would be much shorter! But along the way we'll develop many key ideas about neural networks, including two important types of artificial neuron (the perceptron and the sigmoid neuron), and the standard learning algorithm for neural networks, known as stochastic gradient descent. Throughout, I focus on explaining why things are done the way they are, and on building your neural networks intuition. That requires a lengthier discussion than if I just presented the basic mechanics of what's going on, but it's worth it for the deeper understanding you'll attain. Amongst the payoffs, by the end of the chapter we'll be in position to understand what deep learning is, and why it matters.PerceptronsWhat is a neural network? To get started, I'll explain a type of artificial neuron called a perceptron. Perceptrons were developed in the 1950s and 1960s by the scientist Frank Rosenblatt, inspired by earlier work by Warren McCulloch and Walter Pitts. Today, it's more common to use other models of artificial neurons - in this book, and in much modern work on neural networks, the main neuron model used is one called the sigmoid neuron. We'll get to sigmoid neurons shortly. But to understand why sigmoid neurons are defined the way they are, it's worth taking the time to first understand perceptrons.So how do perceptrons work? A perceptron takes several binary inputs, x1,x2,…x1,x2,…x_1, x_2, \ldots, and produces a single binary output: In the example shown the perceptron has three inputs, x1,x2,x3x1,x2,x3x_1, x_2, x_3. In general it could have more or fewer inputs. Rosenblatt proposed a simple rule to compute the output. He introduced weights, w1,w2,…w1,w2,…w_1,w_2,\ldots, real numbers expressing the importance of the respective inputs to the output. The neuron's output, 000 or 111, is determined by whether the weighted sum ∑jwjxj∑jwjxj\sum_j w_j x_j is less than or greater than some threshold value. Just like the weights, the threshold is a real number which is a parameter of the neuron. To put it in more precise algebraic terms: output={01if ∑jwjxj≤ thresholdif ∑jwjxj> threshold(1)(1)output={0if ∑jwjxj≤ threshold1if ∑jwjxj> threshold\begin{eqnarray} \mbox{output} & = & \left\{ \begin{array}{ll} 0 & \mbox{if } \sum_j w_j x_j \leq \mbox{ threshold} \\ 1 & \mbox{if } \sum_j w_j x_j > \mbox{ threshold} \end{array} \right. \tag{1}\end{eqnarray} That's all there is to how a perceptron works!That's the basic mathematical model. A way you can think about the perceptron is that it's a device that makes decisions by weighing up evidence. Let me give an example. It's not a very realistic example, but it's easy to understand, and we'll soon get to more realistic examples. Suppose the weekend is coming up, and you've heard that there's going to be a cheese festival in your city. 
You like cheese, and are trying to decide whether or not to go to the festival. You might make your decision by weighing up three factors: Is the weather good? Does your boyfriend or girlfriend want to accompany you? Is the festival near public transit? (You don't own a car). We can represent these three factors by corresponding binary variables x1,x2x1,x2x_1, x_2, and x3x3x_3. For instance, we'd have x1=1x1=1x_1 = 1 if the weather is good, and x1=0x1=0x_1 = 0 if the weather is bad. Similarly, x2=1x2=1x_2 = 1 if your boyfriend or girlfriend wants to go, and x2=0x2=0x_2 = 0 if not. And similarly again for x3x3x_3 and public transit.Now, suppose you absolutely adore cheese, so much so that you're happy to go to the festival even if your boyfriend or girlfriend is uninterested and the festival is hard to get to. But perhaps you really loathe bad weather, and there's no way you'd go to the festival if the weather is bad. You can use perceptrons to model this kind of decision-making. One way to do this is to choose a weight w1=6w1=6w_1 = 6 for the weather, and w2=2w2=2w_2 = 2 and w3=2w3=2w_3 = 2 for the other conditions. The larger value of w1w1w_1 indicates that the weather matters a lot to you, much more than whether your boyfriend or girlfriend joins you, or the nearness of public transit. Finally, suppose you choose a threshold of 555 for the perceptron. With these choices, the perceptron implements the desired decision-making model, outputting 111 whenever the weather is good, and 000 whenever the weather is bad. It makes no difference to the output whether your boyfriend or girlfriend wants to go, or whether public transit is nearby.By varying the weights and the threshold, we can get different models of decision-making. For example, suppose we instead chose a threshold of 333. Then the perceptron would decide that you should go to the festival whenever the weather was good or when both the festival was near public transit and your boyfriend or girlfriend was willing to join you. In other words, it'd be a different model of decision-making. Dropping the threshold means you're more willing to go to the festival.Obviously, the perceptron isn't a complete model of human decision-making! But what the example illustrates is how a perceptron can weigh up different kinds of evidence in order to make decisions. And it should seem plausible that a complex network of perceptrons could make quite subtle decisions: In this network, the first column of perceptrons - what we'll call the first layer of perceptrons - is making three very simple decisions, by weighing the input evidence. What about the perceptrons in the second layer? Each of those perceptrons is making a decision by weighing up the results from the first layer of decision-making. In this way a perceptron in the second layer can make a decision at a more complex and more abstract level than perceptrons in the first layer. And even more complex decisions can be made by the perceptron in the third layer. In this way, a many-layer network of perceptrons can engage in sophisticated decision making.Incidentally, when I defined perceptrons I said that a perceptron has just a single output. In the network above the perceptrons look like they have multiple outputs. In fact, they're still single output. The multiple output arrows are merely a useful way of indicating that the output from a perceptron is being used as the input to several other perceptrons. 
It's less unwieldy than drawing a single output line which then splits.Let's simplify the way we describe perceptrons. The condition ∑jwjxj>threshold∑jwjxj>threshold\sum_j w_j x_j > \mbox{threshold} is cumbersome, and we can make two notational changes to simplify it. The first change is to write ∑jwjxj∑jwjxj\sum_j w_j x_j as a dot product, w⋅x≡∑jwjxjw⋅x≡∑jwjxjw \cdot x \equiv \sum_j w_j x_j, where www and xxx are vectors whose components are the weights and inputs, respectively. The second change is to move the threshold to the other side of the inequality, and to replace it by what's known as the perceptron's bias, b≡−thresholdb≡−thresholdb \equiv -\mbox{threshold}. Using the bias instead of the threshold, the perceptron rule can be rewritten: output={01if w⋅x+b≤0if w⋅x+b>0(2)(2)output={0if w⋅x+b≤01if w⋅x+b>0\begin{eqnarray} \mbox{output} = \left\{ \begin{array}{ll} 0 & \mbox{if } w\cdot x + b \leq 0 \\ 1 & \mbox{if } w\cdot x + b > 0 \end{array} \right. \tag{2}\end{eqnarray} You can think of the bias as a measure of how easy it is to get the perceptron to output a 111. Or to put it in more biological terms, the bias is a measure of how easy it is to get the perceptron to fire. For a perceptron with a really big bias, it's extremely easy for the perceptron to output a 111. But if the bias is very negative, then it's difficult for the perceptron to output a 111. Obviously, introducing the bias is only a small change in how we describe perceptrons, but we'll see later that it leads to further notational simplifications. Because of this, in the remainder of the book we won't use the threshold, we'll always use the bias.I've described perceptrons as a method for weighing evidence to make decisions. Another way perceptrons can be used is to compute the elementary logical functions we usually think of as underlying computation, functions such as AND, OR, and NAND. For example, suppose we have a perceptron with two inputs, each with weight −2−2-2, and an overall bias of 333. Here's our perceptron: Then we see that input 000000 produces output 111, since (−2)∗0+(−2)∗0+3=3(−2)∗0+(−2)∗0+3=3(-2)*0+(-2)*0+3 = 3 is positive. Here, I've introduced the ∗∗* symbol to make the multiplications explicit. Similar calculations show that the inputs 010101 and 101010 produce output 111. But the input 111111 produces output 000, since (−2)∗1+(−2)∗1+3=−1(−2)∗1+(−2)∗1+3=−1(-2)*1+(-2)*1+3 = -1 is negative. And so our perceptron implements a NAND gate!The NAND example shows that we can use perceptrons to compute simple logical functions. In fact, we can use networks of perceptrons to compute any logical function at all. The reason is that the NAND gate is universal for computation, that is, we can build any computation up out of NAND gates. For example, we can use NAND gates to build a circuit which adds two bits, x1x1x_1 and x2x2x_2. This requires computing the bitwise sum, x1⊕x2x1⊕x2x_1 \oplus x_2, as well as a carry bit which is set to 111 when both x1x1x_1 and x2x2x_2 are 111, i.e., the carry bit is just the bitwise product x1x2x1x2x_1 x_2: To get an equivalent network of perceptrons we replace all the NAND gates by perceptrons with two inputs, each with weight −2−2-2, and an overall bias of 333. Here's the resulting network. 
Note that I've moved the perceptron corresponding to the bottom right NAND gate a little, just to make it easier to draw the arrows on the diagram: One notable aspect of this network of perceptrons is that the output from the leftmost perceptron is used twice as input to the bottommost perceptron. When I defined the perceptron model I didn't say whether this kind of double-output-to-the-same-place was allowed. Actually, it doesn't much matter. If we don't want to allow this kind of thing, then it's possible to simply merge the two lines, into a single connection with a weight of -4 instead of two connections with -2 weights. (If you don't find this obvious, you should stop and prove to yourself that this is equivalent.) With that change, the network looks as follows, with all unmarked weights equal to -2, all biases equal to 3, and a single weight of -4, as marked: Up to now I've been drawing inputs like x1x1x_1 and x2x2x_2 as variables floating to the left of the network of perceptrons. In fact, it's conventional to draw an extra layer of perceptrons - the input layer - to encode the inputs: This notation for input perceptrons, in which we have an output, but no inputs, is a shorthand. It doesn't actually mean a perceptron with no inputs. To see this, suppose we did have a perceptron with no inputs. Then the weighted sum ∑jwjxj∑jwjxj\sum_j w_j x_j would always be zero, and so the perceptron would output 111 if b>0b>0b > 0, and 000 if b≤0b≤0b \leq 0. That is, the perceptron would simply output a fixed value, not the desired value (x1x1x_1, in the example above). It's better to think of the input perceptrons as not really being perceptrons at all, but rather special units which are simply defined to output the desired values, x1,x2,…x1,x2,…x_1, x_2,\ldots.The adder example demonstrates how a network of perceptrons can be used to simulate a circuit containing many NAND gates. And because NAND gates are universal for computation, it follows that perceptrons are also universal for computation.The computational universality of perceptrons is simultaneously reassuring and disappointing. It's reassuring because it tells us that networks of perceptrons can be as powerful as any other computing device. But it's also disappointing, because it makes it seem as though perceptrons are merely a new type of NAND gate. That's hardly big news!However, the situation is better than this view suggests. It turns out that we can devise learning algorithms which can automatically tune the weights and biases of a network of artificial neurons. This tuning happens in response to external stimuli, without direct intervention by a programmer. These learning algorithms enable us to use artificial neurons in a way which is radically different to conventional logic gates. Instead of explicitly laying out a circuit of NAND and other gates, our neural networks can simply learn to solve problems, sometimes problems where it would be extremely difficult to directly design a conventional circuit.Sigmoid neuronsLearning algorithms sound terrific. But how can we devise such algorithms for a neural network? Suppose we have a network of perceptrons that we'd like to use to learn to solve some problem. For example, the inputs to the network might be the raw pixel data from a scanned, handwritten image of a digit. And we'd like the network to learn weights and biases so that the output from the network correctly classifies the digit. 
To see how learning might work, suppose we make a small change in some weight (or bias) in the network. What we'd like is for this small change in weight to cause only a small corresponding change in the output from the network. As we'll see in a moment, this property will make learning possible. Schematically, here's what we want (obviously this network is too simple to do handwriting recognition!): If it were true that a small change in a weight (or bias) causes only a small change in output, then we could use this fact to modify the weights and biases to get our network to behave more in the manner we want. For example, suppose the network was mistakenly classifying an image as an "8" when it should be a "9". We could figure out how to make a small change in the weights and biases so the network gets a little closer to classifying the image as a "9". And then we'd repeat this, changing the weights and biases over and over to produce better and better output. The network would be learning.
The problem is that this isn't what happens when our network contains perceptrons. In fact, a small change in the weights or bias of any single perceptron in the network can sometimes cause the output of that perceptron to completely flip, say from $0$ to $1$. That flip may then cause the behaviour of the rest of the network to completely change in some very complicated way. So while your "9" might now be classified correctly, the behaviour of the network on all the other images is likely to have completely changed in some hard-to-control way. That makes it difficult to see how to gradually modify the weights and biases so that the network gets closer to the desired behaviour. Perhaps there's some clever way of getting around this problem. But it's not immediately obvious how we can get a network of perceptrons to learn.
We can overcome this problem by introducing a new type of artificial neuron called a sigmoid neuron. Sigmoid neurons are similar to perceptrons, but modified so that small changes in their weights and bias cause only a small change in their output. That's the crucial fact which will allow a network of sigmoid neurons to learn.
Okay, let me describe the sigmoid neuron. We'll depict sigmoid neurons in the same way we depicted perceptrons: Just like a perceptron, the sigmoid neuron has inputs, $x_1, x_2, \ldots$. But instead of being just $0$ or $1$, these inputs can also take on any values between $0$ and $1$. So, for instance, $0.638\ldots$ is a valid input for a sigmoid neuron. Also just like a perceptron, the sigmoid neuron has weights for each input, $w_1, w_2, \ldots$, and an overall bias, $b$. But the output is not $0$ or $1$. Instead, it's $\sigma(w \cdot x + b)$, where $\sigma$ is called the sigmoid function (incidentally, $\sigma$ is sometimes called the logistic function, and this new class of neurons called logistic neurons; it's useful to remember this terminology, since these terms are used by many people working with neural nets, but we'll stick with the sigmoid terminology), and is defined by:
\begin{eqnarray}
\sigma(z) \equiv \frac{1}{1+e^{-z}}. \tag{3}
\end{eqnarray}
To put it all a little more explicitly, the output of a sigmoid neuron with inputs $x_1, x_2, \ldots$, weights $w_1, w_2, \ldots$, and bias $b$ is
\begin{eqnarray}
\frac{1}{1+\exp(-\sum_j w_j x_j - b)}. \tag{4}
\end{eqnarray}
At first sight, sigmoid neurons appear very different to perceptrons. The algebraic form of the sigmoid function may seem opaque and forbidding if you're not already familiar with it. In fact, there are many similarities between perceptrons and sigmoid neurons, and the algebraic form of the sigmoid function turns out to be more of a technical detail than a true barrier to understanding.
To understand the similarity to the perceptron model, suppose $z \equiv w \cdot x + b$ is a large positive number. Then $e^{-z} \approx 0$ and so $\sigma(z) \approx 1$. In other words, when $z = w \cdot x + b$ is large and positive, the output from the sigmoid neuron is approximately $1$, just as it would have been for a perceptron. Suppose on the other hand that $z = w \cdot x + b$ is very negative. Then $e^{-z} \rightarrow \infty$, and $\sigma(z) \approx 0$. So when $z = w \cdot x + b$ is very negative, the behaviour of a sigmoid neuron also closely approximates a perceptron. It's only when $w \cdot x + b$ is of modest size that there's much deviation from the perceptron model.
What about the algebraic form of $\sigma$? How can we understand that? In fact, the exact form of $\sigma$ isn't so important - what really matters is the shape of the function when plotted. [Figure: plot of the sigmoid function, $z$ from $-4$ to $4$, values from $0.0$ to $1.0$.] This shape is a smoothed out version of a step function. [Figure: plot of the step function on the same axes.]
If $\sigma$ had in fact been a step function, then the sigmoid neuron would be a perceptron, since the output would be $1$ or $0$ depending on whether $w \cdot x + b$ was positive or negative (actually, when $w \cdot x + b = 0$ the perceptron outputs $0$, while the step function outputs $1$, so strictly speaking we'd need to modify the step function at that one point - but you get the idea). By using the actual $\sigma$ function we get, as already implied above, a smoothed out perceptron. Indeed, it's the smoothness of the $\sigma$ function that is the crucial fact, not its detailed form. The smoothness of $\sigma$ means that small changes $\Delta w_j$ in the weights and $\Delta b$ in the bias will produce a small change $\Delta \mbox{output}$ in the output from the neuron. In fact, calculus tells us that $\Delta \mbox{output}$ is well approximated by
\begin{eqnarray}
\Delta \mbox{output} \approx \sum_j \frac{\partial \, \mbox{output}}{\partial w_j} \Delta w_j + \frac{\partial \, \mbox{output}}{\partial b} \Delta b, \tag{5}
\end{eqnarray}
where the sum is over all the weights, $w_j$, and $\partial \, \mbox{output} / \partial w_j$ and $\partial \, \mbox{output} / \partial b$ denote partial derivatives of the output with respect to $w_j$ and $b$, respectively. Don't panic if you're not comfortable with partial derivatives! While the expression above looks complicated, with all the partial derivatives, it's actually saying something very simple (and which is very good news): $\Delta \mbox{output}$ is a linear function of the changes $\Delta w_j$ and $\Delta b$ in the weights and bias. This linearity makes it easy to choose small changes in the weights and biases to achieve any desired small change in the output.
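To make the comparison concrete, here is a small numeric sketch (my own illustration, not the book's code) that passes the same weighted input $z = w \cdot x + b$ through a hard-threshold perceptron and through the sigmoid of Equation (3), using the NAND weights and bias ($-2$, $-2$, $3$) given earlier in the chapter:

```python
# Comparing the perceptron's hard threshold with the sigmoid's smooth output on
# the same weighted input z = w.x + b, using the NAND weights from the text.
import math

def weighted_input(w, x, b):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perceptron_output(z):
    """Hard threshold: 1 if z > 0, else 0 (the step behaviour)."""
    return 1 if z > 0 else 0

def sigmoid_output(z):
    """Smooth version: sigma(z) = 1 / (1 + e^(-z)), as in Equation (3)."""
    return 1.0 / (1.0 + math.exp(-z))

w, b = (-2, -2), 3  # the NAND perceptron: two inputs weighted -2, bias 3
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    z = weighted_input(w, x, b)
    print(x, "-> perceptron:", perceptron_output(z), " sigmoid:", round(sigmoid_output(z), 3))
# (1, 1) gives z = -1: the perceptron flips hard to 0, while the sigmoid drops
# smoothly to about 0.269, so small changes in w or b move it only a little.
```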
So while sigmoid neurons have much of the same qualitative behaviour as perceptrons, they make it much easier to figure out how changing the weights and biases will change the output. If it's the shape of $\sigma$ which really matters, and not its exact form, then why use the particular form used for $\sigma$ in Equation (3)? In fact, later in the book we will occasionally consider neurons where the output is $f(w \cdot x + b)$ for some other activation function $f(\cdot)$. The main thing that changes when we use a different activation function is that the particular values for the partial derivatives in Equation (5) change. It turns out that when we compute those partial derivatives later, using $\sigma$ will simplify the algebra, simply because exponentials have lovely properties when differentiated. In any case, $\sigma$ is commonly-used in work on neural nets, and is the activation function we'll use most often in this book.
How should we interpret the output from a sigmoid neuron? Obviously, one big difference between perceptrons and sigmoid neurons is that sigmoid neurons don't just output $0$ or $1$. They can have as output any real number between $0$ and $1$, so values such as $0.173\ldots$ and $0.689\ldots$ are legitimate outputs. This can be useful, for example, if we want to use the output value to represent the average intensity of the pixels in an image input to a neural network. But sometimes it can be a nuisance. Suppose we want the output from the network to indicate either "the input image is a 9" or "the input image is not a 9". Obviously, it'd be easiest to do this if the output was a $0$ or a $1$, as in a perceptron. But in practice we can set up a convention to deal with this, for example, by deciding to interpret any output of at least $0.5$ as indicating a "9", and any output less than $0.5$ as indicating "not a 9". I'll always explicitly state when we're using such a convention, so it shouldn't cause any confusion.
Exercises
Sigmoid neurons simulating perceptrons, part I: Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, $c > 0$. Show that the behaviour of the network doesn't change.
Sigmoid neurons simulating perceptrons, part II: Suppose we have the same setup as the last problem - a network of perceptrons. Suppose also that the overall input to the network of perceptrons has been chosen. We won't need the actual input value, we just need the input to have been fixed. Suppose the weights and biases are such that $w \cdot x + b \neq 0$ for the input $x$ to any particular perceptron in the network. Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant $c > 0$.
Show that in the limit as c→∞c→∞c \rightarrow \infty the behaviour of this network of sigmoid neurons is exactly the same as the network of perceptrons. How can this fail when w⋅x+b=0w⋅x+b=0w \cdot x + b = 0 for one of the perceptrons? The architecture of neural networksIn the next section I'll introduce a neural network that can do a pretty good job classifying handwritten digits. In preparation for that, it helps to explain some terminology that lets us name different parts of a network. Suppose we have the network: As mentioned earlier, the leftmost layer in this network is called the input layer, and the neurons within the layer are called input neurons. The rightmost or output layer contains the output neurons, or, as in this case, a single output neuron. The middle layer is called a hidden layer, since the neurons in this layer are neither inputs nor outputs. The term "hidden" perhaps sounds a little mysterious - the first time I heard the term I thought it must have some deep philosophical or mathematical significance - but it really means nothing more than "not an input or an output". The network above has just a single hidden layer, but some networks have multiple hidden layers. For example, the following four-layer network has two hidden layers: Somewhat confusingly, and for historical reasons, such multiple layer networks are sometimes called multilayer perceptrons or MLPs, despite being made up of sigmoid neurons, not perceptrons. I'm not going to use the MLP terminology in this book, since I think it's confusing, but wanted to warn you of its existence.The design of the input and output layers in a network is often straightforward. For example, suppose we're trying to determine whether a handwritten image depicts a "9" or not. A natural way to design the network is to encode the intensities of the image pixels into the input neurons. If the image is a 646464 by 646464 greyscale image, then we'd have 4,096=64×644,096=64×644,096 = 64 \times 64 input neurons, with the intensities scaled appropriately between 000 and 111. The output layer will contain just a single neuron, with output values of less than 0.50.50.5 indicating "input image is not a 9", and values greater than 0.50.50.5 indicating "input image is a 9 ". While the design of the input and output layers of a neural network is often straightforward, there can be quite an art to the design of the hidden layers. In particular, it's not possible to sum up the design process for the hidden layers with a few simple rules of thumb. Instead, neural networks researchers have developed many design heuristics for the hidden layers, which help people get the behaviour they want out of their nets. For example, such heuristics can be used to help determine how to trade off the number of hidden layers against the time required to train the network. We'll meet several such design heuristics later in this book. Up to now, we've been discussing neural networks where the output from one layer is used as input to the next layer. Such networks are called feedforward neural networks. This means there are no loops in the network - information is always fed forward, never fed back. If we did have loops, we'd end up with situations where the input to the σσ\sigma function depended on the output. That'd be hard to make sense of, and so we don't allow such loops.However, there are other models of artificial neural networks in which feedback loops are possible. These models are called recurrent neural networks. 
The idea in these models is to have neurons which fire for some limited duration of time, before becoming quiescent. That firing can stimulate other neurons, which may fire a little while later, also for a limited duration. That causes still more neurons to fire, and so over time we get a cascade of neurons firing. Loops don't cause problems in such a model, since a neuron's output only affects its input at some later time, not instantaneously.Recurrent neural nets have been less influential than feedforward networks, in part because the learning algorithms for recurrent nets are (at least to date) less powerful. But recurrent networks are still extremely interesting. They're much closer in spirit to how our brains work than feedforward networks. And it's possible that recurrent networks can solve important problems which can only be solved with great difficulty by feedforward networks. However, to limit our scope, in this book we're going to concentrate on the more widely-used feedforward networks.A simple network to classify handwritten digitsHaving defined neural networks, let's return to handwriting recognition. We can split the problem of recognizing handwritten digits into two sub-problems. First, we'd like a way of breaking an image containing many digits into a sequence of separate images, each containing a single digit. For example, we'd like to break the imageinto six separate images, We humans solve this segmentation problem with ease, but it's challenging for a computer program to correctly break up the image. Once the image has been segmented, the program then needs to classify each individual digit. So, for instance, we'd like our program to recognize that the first digit above,is a 5.We'll focus on writing a program to solve the second problem, that is, classifying individual digits. We do this because it turns out that the segmentation problem is not so difficult to solve, once you have a good way of classifying individual digits. There are many approaches to solving the segmentation problem. One approach is to trial many different ways of segmenting the image, using the individual digit classifier to score each trial segmentation. A trial segmentation gets a high score if the individual digit classifier is confident of its classification in all segments, and a low score if the classifier is having a lot of trouble in one or more segments. The idea is that if the classifier is having trouble somewhere, then it's probably having trouble because the segmentation has been chosen incorrectly. This idea and other variations can be used to solve the segmentation problem quite well. So instead of worrying about segmentation we'll concentrate on developing a neural network which can solve the more interesting and difficult problem, namely, recognizing individual handwritten digits.To recognize individual digits we will use a three-layer neural network: The input layer of the network contains neurons encoding the values of the input pixels. As discussed in the next section, our training data for the network will consist of many 282828 by 282828 pixel images of scanned handwritten digits, and so the input layer contains 784=28×28784=28×28784 = 28 \times 28 neurons. For simplicity I've omitted most of the 784784784 input neurons in the diagram above. 
The input pixels are greyscale, with a value of 0.00.00.0 representing white, a value of 1.01.01.0 representing black, and in between values representing gradually darkening shades of grey.The second layer of the network is a hidden layer. We denote the number of neurons in this hidden layer by nnn, and we'll experiment with different values for nnn. The example shown illustrates a small hidden layer, containing just n=15n=15n = 15 neurons.The output layer of the network contains 10 neurons. If the first neuron fires, i.e., has an output ≈1≈1\approx 1, then that will indicate that the network thinks the digit is a 000. If the second neuron fires then that will indicate that the network thinks the digit is a 111. And so on. A little more precisely, we number the output neurons from 000 through 999, and figure out which neuron has the highest activation value. If that neuron is, say, neuron number 666, then our network will guess that the input digit was a 666. And so on for the other output neurons.You might wonder why we use 101010 output neurons. After all, the goal of the network is to tell us which digit (0,1,2,…,90,1,2,…,90, 1, 2, \ldots, 9) corresponds to the input image. A seemingly natural way of doing that is to use just 444 output neurons, treating each neuron as taking on a binary value, depending on whether the neuron's output is closer to 000 or to 111. Four neurons are enough to encode the answer, since 24=1624=162^4 = 16 is more than the 10 possible values for the input digit. Why should our network use 101010 neurons instead? Isn't that inefficient? The ultimate justification is empirical: we can try out both network designs, and it turns out that, for this particular problem, the network with 101010 output neurons learns to recognize digits better than the network with 444 output neurons. But that leaves us wondering why using 101010 output neurons works better. Is there some heuristic that would tell us in advance that we should use the 101010-output encoding instead of the 444-output encoding?To understand why we do this, it helps to think about what the neural network is doing from first principles. Consider first the case where we use 101010 output neurons. Let's concentrate on the first output neuron, the one that's trying to decide whether or not the digit is a 000. It does this by weighing up evidence from the hidden layer of neurons. What are those hidden neurons doing? Well, just suppose for the sake of argument that the first neuron in the hidden layer detects whether or not an image like the following is present:It can do this by heavily weighting input pixels which overlap with the image, and only lightly weighting the other inputs. In a similar way, let's suppose for the sake of argument that the second, third, and fourth neurons in the hidden layer detect whether or not the following images are present:As you may have guessed, these four images together make up the 000 image that we saw in the line of digits shown earlier:So if all four of these hidden neurons are firing then we can conclude that the digit is a 000. Of course, that's not the only sort of evidence we can use to conclude that the image was a 000 - we could legitimately get a 000 in many other ways (say, through translations of the above images, or slight distortions). 
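For concreteness, a minimal sketch of a forward pass through the 784-15-10 architecture described above, with untrained, randomly initialised weights; this is my own illustration of the shapes involved, not the book's code or a trained classifier:

```python
# Forward pass through a 784 (input pixels) -> 15 (hidden) -> 10 (output) network
# using sigmoid activations and random, untrained weights.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 15, 10]
weights = [rng.standard_normal((n_out, n_in)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((n_out, 1)) for n_out in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(a):
    """Propagate a (784, 1) column of pixel intensities through each layer."""
    for w, b in zip(weights, biases):
        a = sigmoid(w @ a + b)
    return a

x = rng.random((784, 1))          # stand-in for a 28x28 greyscale image
output = feedforward(x)
print(output.shape)               # (10, 1): one activation per digit 0-9
print(int(np.argmax(output)))     # the network's guess: the most active output neuron
```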
But it seems safe to say that at least in this case we'd conclude that the input was a 000.Supposing the neural network functions in this way, we can give a plausible explanation for why it's better to have 101010 outputs from the network, rather than 444. If we had 444 outputs, then the first output neuron would be trying to decide what the most significant bit of the digit was. And there's no easy way to relate that most significant bit to simple shapes like those shown above. It's hard to imagine that there's any good historical reason the component shapes of the digit will be closely related to (say) the most significant bit in the output.Now, with all that said, this is all just a heuristic. Nothing says that the three-layer neural network has to operate in the way I described, with the hidden neurons detecting simple component shapes. Maybe a clever learning algorithm will find some assignment of weights that lets us use only 444 output neurons. But as a heuristic the way of thinking I've described works pretty well, and can save you a lot of time in designing good neural network architectures.Exercise There is a way of determining the bitwise representation of a digit by adding an extra layer to the three-layer network above. The extra layer converts the output from the previous layer into a binary representation, as illustrated in the figure below. Find a set of weights and biases for the new output layer. Assume that the first 333 layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least 0.990.990.99, and incorrect outputs have activation less than 0.010.010.01. Learning with gradient descentNow that we have a design for our neural network, how can it learn to recognize digits? The first thing we'll need is a data set to learn from - a so-called training data set. We'll use the MNIST data set, which contains tens of thousands of scanned images of handwritten digits, together with their correct classifications. MNIST's name comes from the fact that it is a modified subset of two data sets collected by NIST, the United States' National Institute of Standards and Technology. Here's a few images from MNIST: As you can see, these digits are, in fact, the same as those shown at the beginning of this chapter as a challenge to recognize. Of course, when testing our network we'll ask it to recognize images which aren't in the training set!The MNIST data comes in two parts. The first part contains 60,000 images to be used as training data. These images are scanned handwriting samples from 250 people, half of whom were US Census Bureau employees, and half of whom were high school students. The images are greyscale and 28 by 28 pixels in size. The second part of the MNIST data set is 10,000 images to be used as test data. Again, these are 28 by 28 greyscale images. We'll use the test data to evaluate how well our neural network has learned to recognize digits. To make this a good test of performance, the test data was taken from a different set of 250 people than the original training data (albeit still a group split between Census Bureau employees and high school students). This helps give us confidence that our system can recognize digits from people whose writing it didn't see during training.We'll use the notation xxx to denote a training input. It'll be convenient to regard each training input xxx as a 28×28=78428×28=78428 \times 28 = 784-dimensional vector. 
Each entry in the vector represents the grey value for a single pixel in the image. We'll denote the corresponding desired output by $y = y(x)$, where $y$ is a $10$-dimensional vector. For example, if a particular training image, $x$, depicts a $6$, then $y(x) = (0, 0, 0, 0, 0, 0, 1, 0, 0, 0)^T$ is the desired output from the network. Note that $T$ here is the transpose operation, turning a row vector into an ordinary (column) vector.
What we'd like is an algorithm which lets us find weights and biases so that the output from the network approximates $y(x)$ for all training inputs $x$. To quantify how well we're achieving this goal we define a cost function (sometimes referred to as a loss or objective function; we use the term cost function throughout this book, but you should note the other terminology, since it's often used in research papers and other discussions of neural networks):
\begin{eqnarray}
C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x) - a \|^2. \tag{6}
\end{eqnarray}
Here, $w$ denotes the collection of all weights in the network, $b$ all the biases, $n$ is the total number of training inputs, $a$ is the vector of outputs from the network when $x$ is input, and the sum is over all training inputs, $x$. Of course, the output $a$ depends on $x$, $w$ and $b$, but to keep the notation simple I haven't explicitly indicated this dependence. The notation $\| v \|$ just denotes the usual length function for a vector $v$. We'll call $C$ the quadratic cost function; it's also sometimes known as the mean squared error or just MSE. Inspecting the form of the quadratic cost function, we see that $C(w,b)$ is non-negative, since every term in the sum is non-negative. Furthermore, the cost $C(w,b)$ becomes small, i.e., $C(w,b) \approx 0$, precisely when $y(x)$ is approximately equal to the output, $a$, for all training inputs, $x$. So our training algorithm has done a good job if it can find weights and biases so that $C(w,b) \approx 0$. By contrast, it's not doing so well when $C(w,b)$ is large - that would mean that $y(x)$ is not close to the output $a$ for a large number of inputs. So the aim of our training algorithm will be to minimize the cost $C(w,b)$ as a function of the weights and biases. In other words, we want to find a set of weights and biases which make the cost as small as possible. We'll do that using an algorithm known as gradient descent.
Why introduce the quadratic cost? After all, aren't we primarily interested in the number of images correctly classified by the network? Why not try to maximize that number directly, rather than minimizing a proxy measure like the quadratic cost? The problem with that is that the number of images correctly classified is not a smooth function of the weights and biases in the network. For the most part, making small changes to the weights and biases won't cause any change at all in the number of training images classified correctly. That makes it difficult to figure out how to change the weights and biases to get improved performance. If we instead use a smooth cost function like the quadratic cost it turns out to be easy to figure out how to make small changes in the weights and biases so as to get an improvement in the cost.
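A quick numeric sketch of Equation (6) on two toy output vectors (my own example values, not from the book), showing that the cost is small when the network's output is close to the desired $y(x)$ and much larger when it is not:

```python
# Quadratic cost C = (1/2n) * sum over training inputs of ||y(x) - a||^2.
import numpy as np

def quadratic_cost(desired_outputs, network_outputs):
    """Mean squared error over the training inputs, with the 1/2 factor from Equation (6)."""
    n = len(desired_outputs)
    return sum(np.linalg.norm(y - a) ** 2 for y, a in zip(desired_outputs, network_outputs)) / (2 * n)

# Desired output for a "6": a 10-dimensional vector with a 1 in position 6.
y_six = np.zeros(10)
y_six[6] = 1.0

a_good = np.full(10, 0.05)   # network nearly certain it's a 6
a_good[6] = 0.9
a_bad = np.full(10, 0.1)     # network confidently wrong (thinks it's a 3)
a_bad[3] = 0.8

print(quadratic_cost([y_six], [a_good]))  # small cost (about 0.016)
print(quadratic_cost([y_six], [a_bad]))   # much larger cost (about 0.76)
```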
That's why we focus first on minimizing the quadratic cost, and only after that will we examine the classification accuracy.
Even given that we want to use a smooth cost function, you may still wonder why we choose the quadratic function used in Equation (6). Isn't this a rather ad hoc choice? Perhaps if we chose a different cost function we'd get a totally different set of minimizing weights and biases? This is a valid concern, and later we'll revisit the cost function, and make some modifications. However, the quadratic cost function of Equation (6) works perfectly well for understanding the basics of learning in neural networks, so we'll stick with it for now.
Recapping, our goal in training a neural network is to find weights and biases which minimize the quadratic cost function $C(w, b)$. This is a well-posed problem, but it's got a lot of distracting structure as currently posed - the interpretation of $w$ and $b$ as weights and biases, the $\sigma$ function lurking in the background, the choice of network architecture, MNIST, and so on. It turns out that we can understand a tremendous amount by ignoring most of that structure, and just concentrating on the minimization aspect. So for now we're going to forget all about the specific form of the cost function, the connection to neural networks, and so on. Instead, we're going to imagine that we've simply been given a function of many variables and we want to minimize that function. We're going to develop a technique called gradient descent which can be used to solve such minimization problems. Then we'll come back to the specific function we want to minimize for neural networks.
Okay, let's suppose we're trying to minimize some function, $C(v)$. This could be any real-valued function of many variables, $v = v_1, v_2, \ldots$. Note that I've replaced the $w$ and $b$ notation by $v$ to emphasize that this could be any function - we're not specifically thinking in the neural networks context any more. To minimize $C(v)$ it helps to imagine $C$ as a function of just two variables, which we'll call $v_1$ and $v_2$. What we'd like is to find where $C$ achieves its global minimum. Now, of course, for the function plotted above, we can eyeball the graph and find the minimum. In that sense, I've perhaps shown slightly too simple a function! A general function, $C$, may be a complicated function of many variables, and it won't usually be possible to just eyeball the graph to find the minimum.
One way of attacking the problem is to use calculus to try to find the minimum analytically. We could compute derivatives and then try using them to find places where $C$ is an extremum. With some luck that might work when $C$ is a function of just one or a few variables. But it'll turn into a nightmare when we have many more variables. And for neural networks we'll often want far more variables - the biggest neural networks have cost functions which depend on billions of weights and biases in an extremely complicated way.
Using calculus to minimize that just won't work!

(After asserting that we'll gain insight by imagining $C$ as a function of just two variables, I've turned around twice in two paragraphs and said, "hey, but what if it's a function of many more than two variables?" Sorry about that. Please believe me when I say that it really does help to imagine $C$ as a function of two variables. It just happens that sometimes that picture breaks down, and the last two paragraphs were dealing with such breakdowns. Good thinking about mathematics often involves juggling multiple intuitive pictures, learning when it's appropriate to use each picture, and when it's not.)

Okay, so calculus doesn't work. Fortunately, there is a beautiful analogy which suggests an algorithm which works pretty well. We start by thinking of our function as a kind of a valley. If you squint just a little at the plot above, that shouldn't be too hard. And we imagine a ball rolling down the slope of the valley. Our everyday experience tells us that the ball will eventually roll to the bottom of the valley. Perhaps we can use this idea as a way to find a minimum for the function? We'd randomly choose a starting point for an (imaginary) ball, and then simulate the motion of the ball as it rolled down to the bottom of the valley. We could do this simulation simply by computing derivatives (and perhaps some second derivatives) of $C$ - those derivatives would tell us everything we need to know about the local "shape" of the valley, and therefore how our ball should roll.

Based on what I've just written, you might suppose that we'll be trying to write down Newton's equations of motion for the ball, considering the effects of friction and gravity, and so on. Actually, we're not going to take the ball-rolling analogy quite that seriously - we're devising an algorithm to minimize $C$, not developing an accurate simulation of the laws of physics! The ball's-eye view is meant to stimulate our imagination, not constrain our thinking. So rather than get into all the messy details of physics, let's simply ask ourselves: if we were declared God for a day, and could make up our own laws of physics, dictating to the ball how it should roll, what law or laws of motion could we pick that would make it so the ball always rolled to the bottom of the valley?

To make this question more precise, let's think about what happens when we move the ball a small amount $\Delta v_1$ in the $v_1$ direction, and a small amount $\Delta v_2$ in the $v_2$ direction. Calculus tells us that $C$ changes as follows:
$$\Delta C \approx \frac{\partial C}{\partial v_1} \Delta v_1 + \frac{\partial C}{\partial v_2} \Delta v_2. \tag{7}$$
We're going to find a way of choosing $\Delta v_1$ and $\Delta v_2$ so as to make $\Delta C$ negative; i.e., we'll choose them so the ball is rolling down into the valley. To figure out how to make such a choice it helps to define $\Delta v$ to be the vector of changes in $v$, $\Delta v \equiv (\Delta v_1, \Delta v_2)^T$, where $T$ is again the transpose operation, turning row vectors into column vectors. We'll also define the gradient of $C$ to be the vector of partial derivatives, $\left(\frac{\partial C}{\partial v_1}, \frac{\partial C}{\partial v_2}\right)^T$.
We denote the gradient vector by $\nabla C$, i.e.:
$$\nabla C \equiv \left( \frac{\partial C}{\partial v_1}, \frac{\partial C}{\partial v_2} \right)^T. \tag{8}$$
In a moment we'll rewrite the change $\Delta C$ in terms of $\Delta v$ and the gradient, $\nabla C$. Before getting to that, though, I want to clarify something that sometimes gets people hung up on the gradient. When meeting the $\nabla C$ notation for the first time, people sometimes wonder how they should think about the $\nabla$ symbol. What, exactly, does $\nabla$ mean? In fact, it's perfectly fine to think of $\nabla C$ as a single mathematical object - the vector defined above - which happens to be written using two symbols. In this point of view, $\nabla$ is just a piece of notational flag-waving, telling you "hey, $\nabla C$ is a gradient vector". There are more advanced points of view where $\nabla$ can be viewed as an independent mathematical entity in its own right (for example, as a differential operator), but we won't need such points of view.

With these definitions, the expression (7) for $\Delta C$ can be rewritten as
$$\Delta C \approx \nabla C \cdot \Delta v. \tag{9}$$
This equation helps explain why $\nabla C$ is called the gradient vector: $\nabla C$ relates changes in $v$ to changes in $C$, just as we'd expect something called a gradient to do. But what's really exciting about the equation is that it lets us see how to choose $\Delta v$ so as to make $\Delta C$ negative. In particular, suppose we choose
$$\Delta v = -\eta \nabla C, \tag{10}$$
where $\eta$ is a small, positive parameter (known as the learning rate). Then Equation (9) tells us that $\Delta C \approx -\eta \nabla C \cdot \nabla C = -\eta \|\nabla C\|^2$. Because $\| \nabla C \|^2 \geq 0$, this guarantees that $\Delta C \leq 0$, i.e., $C$ will always decrease, never increase, if we change $v$ according to the prescription in (10). (Within, of course, the limits of the approximation in Equation (9).) This is exactly the property we wanted! And so we'll take Equation (10) to define the "law of motion" for the ball in our gradient descent algorithm.
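As a quick numerical sanity check, here is a tiny Python sketch (not from the book) that takes one step $\Delta v = -\eta \nabla C$ on a toy two-variable function of my own choosing and compares the actual change in $C$ with the prediction $-\eta \|\nabla C\|^2$ from Equation (9).

```python
import numpy as np

def C(v):
    """A toy two-variable 'valley': C(v1, v2) = v1^2 + 3*v2^2."""
    return v[0] ** 2 + 3 * v[1] ** 2

def grad_C(v):
    """Gradient of the toy C: (dC/dv1, dC/dv2) = (2*v1, 6*v2)."""
    return np.array([2 * v[0], 6 * v[1]])

eta = 0.01                      # learning rate
v = np.array([1.0, -2.0])       # starting position of the "ball"
delta_v = -eta * grad_C(v)      # Equation (10): step against the gradient

print(C(v + delta_v) - C(v))            # actual change in C (negative)
print(-eta * np.sum(grad_C(v) ** 2))    # predicted -eta * ||grad C||^2
```

The two printed numbers are both negative and agree more and more closely as $\eta$ shrinks, which is exactly the approximation Equation (9) describes.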
That is, we'll use Equation (10) to compute a value for $\Delta v$, then move the ball's position $v$ by that amount:
$$v \rightarrow v' = v - \eta \nabla C. \tag{11}$$
Then we'll use this update rule again, to make another move. If we keep doing this, over and over, we'll keep decreasing $C$ until - we hope - we reach a global minimum.

Summing up, the way the gradient descent algorithm works is to repeatedly compute the gradient $\nabla C$, and then to move in the opposite direction, "falling down" the slope of the valley. We can visualize it like this: [plot of successive gradient-descent steps moving down the valley toward the minimum]. Notice that with this rule gradient descent doesn't reproduce real physical motion. In real life a ball has momentum, and that momentum may allow it to roll across the slope, or even (momentarily) roll uphill. It's only after the effects of friction set in that the ball is guaranteed to roll down into the valley. By contrast, our rule for choosing $\Delta v$ just says "go down, right now". That's still a pretty good rule for finding the minimum!

To make gradient descent work correctly, we need to choose the learning rate $\eta$ to be small enough that Equation (9) is a good approximation. If we don't, we might end up with $\Delta C > 0$, which obviously would not be good! At the same time, we don't want $\eta$ to be too small, since that will make the changes $\Delta v$ tiny, and thus the gradient descent algorithm will work very slowly. In practical implementations, $\eta$ is often varied so that Equation (9) remains a good approximation, but the algorithm isn't too slow. We'll see later how this works.

I've explained gradient descent when $C$ is a function of just two variables. But, in fact, everything works just as well even when $C$ is a function of many more variables. Suppose in particular that $C$ is a function of $m$ variables, $v_1, \ldots, v_m$. Then the change $\Delta C$ in $C$ produced by a small change $\Delta v = (\Delta v_1, \ldots, \Delta v_m)^T$ is
$$\Delta C \approx \nabla C \cdot \Delta v, \tag{12}$$
where the gradient $\nabla C$ is the vector
$$\nabla C \equiv \left(\frac{\partial C}{\partial v_1}, \ldots, \frac{\partial C}{\partial v_m}\right)^T. \tag{13}$$
Just as for the two variable case, we can choose
$$\Delta v = -\eta \nabla C, \tag{14}$$
and we're guaranteed that our (approximate) expression (12) for $\Delta C$ will be negative.
This gives us a way of following the gradient to a minimum, even when $C$ is a function of many variables, by repeatedly applying the update rule
$$v \rightarrow v' = v - \eta \nabla C. \tag{15}$$
You can think of this update rule as defining the gradient descent algorithm. It gives us a way of repeatedly changing the position $v$ in order to find a minimum of the function $C$. The rule doesn't always work - several things can go wrong and prevent gradient descent from finding the global minimum of $C$, a point we'll return to explore in later chapters. But, in practice gradient descent often works extremely well, and in neural networks we'll find that it's a powerful way of minimizing the cost function, and so helping the net learn.

Indeed, there's even a sense in which gradient descent is the optimal strategy for searching for a minimum. Let's suppose that we're trying to make a move $\Delta v$ in position so as to decrease $C$ as much as possible. This is equivalent to minimizing $\Delta C \approx \nabla C \cdot \Delta v$. We'll constrain the size of the move so that $\| \Delta v \| = \epsilon$ for some small fixed $\epsilon > 0$. In other words, we want a move that is a small step of a fixed size, and we're trying to find the movement direction which decreases $C$ as much as possible. It can be proved that the choice of $\Delta v$ which minimizes $\nabla C \cdot \Delta v$ is $\Delta v = -\eta \nabla C$, where $\eta = \epsilon / \|\nabla C\|$ is determined by the size constraint $\|\Delta v\| = \epsilon$. So gradient descent can be viewed as a way of taking small steps in the direction which does the most to immediately decrease $C$.

Exercises

- Prove the assertion of the last paragraph. Hint: If you're not already familiar with the Cauchy-Schwarz inequality, you may find it helpful to familiarize yourself with it.
- I explained gradient descent when $C$ is a function of two variables, and when it's a function of more than two variables. What happens when $C$ is a function of just one variable? Can you provide a geometric interpretation of what gradient descent is doing in the one-dimensional case?

People have investigated many variations of gradient descent, including variations that more closely mimic a real physical ball. These ball-mimicking variations have some advantages, but also have a major disadvantage: it turns out to be necessary to compute second partial derivatives of $C$, and this can be quite costly. To see why it's costly, suppose we want to compute all the second partial derivatives $\partial^2 C / \partial v_j \partial v_k$. If there are a million such $v_j$ variables then we'd need to compute something like a trillion (i.e., a million squared) second partial derivatives (actually, more like half a trillion, since $\partial^2 C / \partial v_j \partial v_k = \partial^2 C / \partial v_k \partial v_j$ - still, you get the point)! That's going to be computationally costly. With that said, there are tricks for avoiding this kind of problem, and finding alternatives to gradient descent is an active area of investigation. But in this book we'll use gradient descent (and variations) as our main approach to learning in neural networks.

How can we apply gradient descent to learn in a neural network?
The idea is to use gradient descent to find the weights $w_k$ and biases $b_l$ which minimize the cost in Equation (6). To see how this works, let's restate the gradient descent update rule, with the weights and biases replacing the variables $v_j$. In other words, our "position" now has components $w_k$ and $b_l$, and the gradient vector $\nabla C$ has corresponding components $\partial C / \partial w_k$ and $\partial C / \partial b_l$. Writing out the gradient descent update rule in terms of components, we have
$$w_k \rightarrow w_k' = w_k - \eta \frac{\partial C}{\partial w_k}, \tag{16}$$
$$b_l \rightarrow b_l' = b_l - \eta \frac{\partial C}{\partial b_l}. \tag{17}$$
By repeatedly applying this update rule we can "roll down the hill", and hopefully find a minimum of the cost function. In other words, this is a rule which can be used to learn in a neural network.

There are a number of challenges in applying the gradient descent rule. We'll look into those in depth in later chapters. But for now I just want to mention one problem. To understand what the problem is, let's look back at the quadratic cost in Equation (6). Notice that this cost function has the form $C = \frac{1}{n} \sum_x C_x$, that is, it's an average over costs $C_x \equiv \frac{\|y(x)-a\|^2}{2}$ for individual training examples. In practice, to compute the gradient $\nabla C$ we need to compute the gradients $\nabla C_x$ separately for each training input, $x$, and then average them, $\nabla C = \frac{1}{n} \sum_x \nabla C_x$. Unfortunately, when the number of training inputs is very large this can take a long time, and learning thus occurs slowly.

An idea called stochastic gradient descent can be used to speed up learning. The idea is to estimate the gradient $\nabla C$ by computing $\nabla C_x$ for a small sample of randomly chosen training inputs. By averaging over this small sample it turns out that we can quickly get a good estimate of the true gradient $\nabla C$, and this helps speed up gradient descent, and thus learning.

To make these ideas more precise, stochastic gradient descent works by randomly picking out a small number $m$ of randomly chosen training inputs. We'll label those random training inputs $X_1, X_2, \ldots, X_m$, and refer to them as a mini-batch. Provided the sample size $m$ is large enough we expect that the average value of the $\nabla C_{X_j}$ will be roughly equal to the average over all $\nabla C_x$, that is,
$$\frac{\sum_{j=1}^m \nabla C_{X_j}}{m} \approx \frac{\sum_x \nabla C_x}{n} = \nabla C, \tag{18}$$
where the second sum is over the entire set of training data.
Swapping sides we get
$$\nabla C \approx \frac{1}{m} \sum_{j=1}^m \nabla C_{X_j}, \tag{19}$$
confirming that we can estimate the overall gradient by computing gradients just for the randomly chosen mini-batch. To connect this explicitly to learning in neural networks, suppose $w_k$ and $b_l$ denote the weights and biases in our neural network. Then stochastic gradient descent works by picking out a randomly chosen mini-batch of training inputs, and training with those,
$$w_k \rightarrow w_k' = w_k - \frac{\eta}{m} \sum_j \frac{\partial C_{X_j}}{\partial w_k}, \tag{20}$$
$$b_l \rightarrow b_l' = b_l - \frac{\eta}{m} \sum_j \frac{\partial C_{X_j}}{\partial b_l}, \tag{21}$$
where the sums are over all the training examples $X_j$ in the current mini-batch. Then we pick out another randomly chosen mini-batch and train with those. And so on, until we've exhausted the training inputs, which is said to complete an epoch of training. At that point we start over with a new training epoch.

Incidentally, it's worth noting that conventions vary about scaling of the cost function and of mini-batch updates to the weights and biases. In Equation (6) we scaled the overall cost function by a factor $\frac{1}{n}$. People sometimes omit the $\frac{1}{n}$, summing over the costs of individual training examples instead of averaging. This is particularly useful when the total number of training examples isn't known in advance. This can occur if more training data is being generated in real time, for instance. And, in a similar way, the mini-batch update rules (20) and (21) sometimes omit the $\frac{1}{m}$ term out the front of the sums. Conceptually this makes little difference, since it's equivalent to rescaling the learning rate $\eta$. But when doing detailed comparisons of different work it's worth watching out for.

We can think of stochastic gradient descent as being like political polling: it's much easier to sample a small mini-batch than it is to apply gradient descent to the full batch, just as carrying out a poll is easier than running a full election. For example, if we have a training set of size $n = 60{,}000$, as in MNIST, and choose a mini-batch size of (say) $m = 10$, this means we'll get a factor of $6{,}000$ speedup in estimating the gradient!
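To make the update rules (20) and (21) concrete, here is a schematic Python sketch of one epoch of mini-batch stochastic gradient descent. It is not the book's program; `sgd_epoch` and `grad_Cx` are names of my own choosing, and `grad_Cx` is a stand-in for whatever computes the per-example gradients (in the book's program that job will be done by backpropagation).

```python
import random
import numpy as np

def sgd_epoch(params, training_data, grad_Cx, eta, m):
    """One epoch of mini-batch stochastic gradient descent.

    `params` is a list of NumPy arrays (e.g. the weights and biases),
    `training_data` a list of (x, y) pairs, and `grad_Cx(params, x, y)`
    returns the per-example gradients as a list of arrays matching
    `params`. Implements the updates (20)-(21): average the per-example
    gradients over each mini-batch and step against that average.
    """
    random.shuffle(training_data)
    for k in range(0, len(training_data), m):
        batch = training_data[k:k + m]
        # Sum gradients over the mini-batch, one accumulator per parameter.
        grad_sums = [np.zeros_like(p) for p in params]
        for x, y in batch:
            for g_sum, g in zip(grad_sums, grad_Cx(params, x, y)):
                g_sum += g
        # params <- params - (eta / batch size) * summed gradient
        for p, g_sum in zip(params, grad_sums):
            p -= (eta / len(batch)) * g_sum
    return params

# Toy usage: fit y = w*x with per-example cost Cx = (y - w*x)^2 / 2,
# whose gradient with respect to w is -(y - w*x)*x.
rng = np.random.default_rng(0)
data = [(x, 2.0 * x) for x in rng.standard_normal(200)]
w = [np.array([0.0])]                      # a single "weight" parameter
grad = lambda params, x, y: [np.array([-(y - params[0][0] * x) * x])]
for _ in range(5):
    sgd_epoch(w, data, grad, eta=0.1, m=10)
print(w[0])                                 # approaches [2.]
```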
Of course, the estimate won't be perfect - there will be statistical fluctuations - but it doesn't need to be perfect: all we really care about is moving in a general direction that will help decrease $C$, and that means we don't need an exact computation of the gradient. In practice, stochastic gradient descent is a commonly used and powerful technique for learning in neural networks, and it's the basis for most of the learning techniques we'll develop in this book.

Exercise

- An extreme version of gradient descent is to use a mini-batch size of just 1. That is, given a training input, $x$, we update our weights and biases according to the rules $w_k \rightarrow w_k' = w_k - \eta \, \partial C_x / \partial w_k$ and $b_l \rightarrow b_l' = b_l - \eta \, \partial C_x / \partial b_l$. Then we choose another training input, and update the weights and biases again. And so on, repeatedly. This procedure is known as online, on-line, or incremental learning. In online learning, a neural network learns from just one training input at a time (just as human beings do). Name one advantage and one disadvantage of online learning, compared to stochastic gradient descent with a mini-batch size of, say, 20.

Let me conclude this section by discussing a point that sometimes bugs people new to gradient descent. In neural networks the cost $C$ is, of course, a function of many variables - all the weights and biases - and so in some sense defines a surface in a very high-dimensional space. Some people get hung up thinking: "Hey, I have to be able to visualize all these extra dimensions". And they may start to worry: "I can't think in four dimensions, let alone five (or five million)". Is there some special ability they're missing, some ability that "real" supermathematicians have? Of course, the answer is no. Even most professional mathematicians can't visualize four dimensions especially well, if at all. The trick they use, instead, is to develop other ways of representing what's going on. That's exactly what we did above: we used an algebraic (rather than visual) representation of $\Delta C$ to figure out how to move so as to decrease $C$. People who are good at thinking in high dimensions have a mental library containing many different techniques along these lines; our algebraic trick is just one example. Those techniques may not have the simplicity we're accustomed to when visualizing three dimensions, but once you build up a library of such techniques, you can get pretty good at thinking in high dimensions. I won't go into more detail here, but if you're interested then you may enjoy reading this discussion of some of the techniques professional mathematicians use to think in high dimensions. While some of the techniques discussed are quite complex, much of the best content is intuitive and accessible, and could be mastered by anyone.

Implementing our network to classify digits

Alright, let's write a program that learns how to recognize handwritten digits, using stochastic gradient descent and the MNIST training data. We'll do this with a short Python (2.7) program, just 74 lines of code! The first thing we need is to get the MNIST data. If you're a git user then you can obtain the data by cloning the code repository for this book,

git clone https://github.com/mnielsen/neural-networks-and-deep-learning.git

If you don't use git then you can download the data and code here. Incidentally, when I described the MNIST data earlier, I said it was
-
-
-
The Gram matrix must be normalized by dividing each element by the total number of elements in the matrix (see the sketch below).
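The note above presumably refers to the Gram (feature-correlation) matrices used in neural style transfer. Here is a minimal NumPy sketch of one such normalization; the function name, the `(channels, height, width)` shape, and the choice of divisor are my assumptions, and implementations differ on whether to divide by the feature map's element count or by the Gram matrix's own size.

```python
import numpy as np

def normalized_gram(features):
    """Gram matrix of a feature map, divided by a total element count.

    `features` has shape (channels, height, width). Each Gram entry is
    the dot product of two flattened filter responses; dividing by the
    number of elements keeps the scale comparable across layer sizes.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # one row per filter
    gram = f @ f.T                   # shape (channels, channels)
    return gram / f.size             # divisor convention: c * h * w

# Toy usage: a 4-filter, 5x5 feature map.
print(normalized_gram(np.random.randn(4, 5, 5)).shape)  # (4, 4)
```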
true, after downsampling your gradient will get smaller on later layers
-
-
www.lifeworth.com www.lifeworth.com
-
Restoration asks us “what can we bring back to help us with the coming difficulties and tragedies?”
-
Relinquishment asks us “what do we need to let go of in order to not make matters worse?”
-
Resilience asks us “how do we keep what we really want to keep?”
-
The third area can be called “restoration.” It involves people and communities rediscovering attitudes and approaches to life and organisation that our hydrocarbon-fuelled civilisation eroded.
-
second area of this agenda, which I have named “relinquishment.” It involves people and communities letting go of certain assets, behaviours and beliefs where retaining them could make matters worse.
-
we can conceive of resilience of human societies as the capacity to adapt to changing circumstances so as to survive with valued norms and behaviours.
-
- Sep 2019
-
github.com github.com
-
Deep Learning for Search - teaches you how to leverage neural networks, NLP, and deep learning techniques to improve search performance. (2019)
Relevant Search: with applications for Solr and Elasticsearch - demystifies relevance work. Using Elasticsearch, it teaches you how to return engaging search results to your users, helping you understand and leverage the internals of Lucene-based search engines. (2016)
-
Elasticsearch with Machine Learning (English translation) by Kunihiko Kido
Recommender System with Mahout and Elasticsearch
-
- Jun 2019
-
-
- May 2019
-
www.andrew.cmu.edu www.andrew.cmu.edu
-
- Apr 2019
-
arxiv.org arxiv.org
-
- Mar 2019
-
www.phontron.com www.phontron.com
-
-
www.comp.nus.edu.sg www.comp.nus.edu.sg
-
-
stacks.stanford.edu stacks.stanford.edu
-
NEURAL READING COMPREHENSION AND BEYOND
-
-
csus-dspace.calstate.edu csus-dspace.calstate.edu
-
DEEP LEARNING WITH CONVOLUTIONAL NEURAL NETWORKS FOR IMAGE RECOGNITION: STEP-BY-STEP PROCESS FROM PREPARATION TO GENERALIZATION
-
-
scholarworks.sjsu.edu scholarworks.sjsu.edu
-
Deep Learning for Chatbots
-
-
stacks.stanford.edu stacks.stanford.edu
-
EFFICIENT METHODS AND HARDWARE FOR DEEP LEARNING
Deep Compression" can reduce the model sizeby 18?to 49?without hurting the prediction accuracy. We also discovered that pruning and thesparsity constraint not only applies to model compression but also applies to regularization, andwe proposed dense-sparse-dense training (DSD), which can improve the prediction accuracy for awide range of deep learning models. To efficiently implement "Deep Compression" in hardware,we developed EIE, the "Efficient Inference Engine", a domain-specific hardware accelerator thatperforms inference directly on the compressed model which significantly saves memory bandwidth.Taking advantage of the compressed model, and being able to deal with the irregular computationpattern efficiently, EIE improves the speed by 13?and energy efficiency by 3,400?over GPU
-
-
cjc.ict.ac.cn cjc.ict.ac.cn
-
深度文本匹配综述 (A Survey of Deep Text Matching)
-
-
github.com github.com
-
all kinds of text classification models and more with deep learning
-
-
arxiv.org arxiv.org
-
A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification
-
-
www.ijcai.org www.ijcai.org
-
Differentiated Attentive Representation Learning for Sentence Classification
-
-
arxiv.org arxiv.org
-
A Gentle Tutorial of Recurrent Neural Network with Error Backpropagation
-
-
arxiv.org arxiv.org
-
A Tutorial on Deep Latent Variable Models of Natural Language
-
-
arxiv.org arxiv.org
-
BACKGROUND: ATTENTION NETWORKS
-
-
-
To the best of our knowledge, there has not been any other work exploring the use of attention-based architectures for NMT
At the time, no one else had used attention-based architectures for machine translation.
-
-
github.com github.com
-
-
-
LSTM Derivations
-
-
jalammar.github.io jalammar.github.io
-
Great blog!
-
-
arxiv.org arxiv.org
-
One of the challenges of deep learning is that the gradients with respect to the weights in one layer are highly dependent on the outputs of the neurons in the previous layer, especially if these outputs change in a highly correlated way. Batch normalization [Ioffe and Szegedy, 2015] was proposed to reduce such undesirable "covariate shift". The method normalizes the summed inputs to each hidden unit over the training cases. Specifically, for the $i$th summed input in the $l$th layer, the batch normalization method rescales the summed inputs according to their variances under the distribution of the data
Batch normalization was introduced to address the strong dependence between a neuron's inputs and the values computed in the previous layer. Computing the required expectations would need all training samples, which is clearly impractical, so the statistics are instead computed over the current training mini-batch. But this shifts the limitation onto the mini-batch size and makes the method hard to apply to RNNs. Hence the need for Layer Normalization.
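To illustrate the contrast described above, here is a minimal NumPy sketch of the core layer-normalization step: statistics are taken over the features of a single example rather than over a mini-batch. The function name and the toy data are my own; the paper's full method also wires the learned `gain` and `bias` into the network's layers.

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """Layer normalization of one example's summed inputs `a`.

    The mean and standard deviation are computed over this single
    example's features (per layer), not over a mini-batch, so the
    operation behaves identically at batch size 1 and can be applied
    at every time step of an RNN.
    """
    mu = a.mean()
    sigma = a.std()
    return gain * (a - mu) / (sigma + eps) + bias

# Toy usage: 8 summed inputs for one example in one layer.
a = np.random.randn(8) * 3.0 + 1.0
print(layer_norm(a, gain=np.ones(8), bias=np.zeros(8)))
```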
-
Layer Normalization
-
-
arxiv.org arxiv.org
-
NEURAL MACHINE TRANSLATION BY JOINTLY LEARNING TO ALIGN AND TRANSLATE
-
-
emnlp2014.org emnlp2014.org
-
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
-
-
arxiv.org arxiv.org
-
A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING
-