- Jan 2024
-
www.everylearnereverywhere.org
-
flexibility
Flexibility in response to learners' needs is a strategy: when to schedule office hours, when to send class messages, due dates/times, etc.
-
“the practice of purposefully involving minoritized communities throughout a design process with the goal of allowing their voices to directly affect how the solution will address the inequity at hand.”
Including the voices of minoritized communities is a practice for embedding equity.
-
- Jun 2023
-
workcred.org
-
approaches to align data analytics microcredentials with undergraduate experiences;
Embedded microcredentials (MCs).
-
-
credentialengine.org
-
SECURE, DIGITAL STUDENT RECORDS
Access to student records is an equity issue. It's now been said. Out loud. Regardless of how essential that revenue is, maintaining barriers to accessing student records amounts to institutional efforts to perpetuate inequities.
-
We now have the capacity to ensure that all possible pathways – and the essential information about all the providers, credentials, skills, assessments, quality indicators, outcome measures, transfer values, and links to job skills critical to understanding and building those pathways – can be made fully open, transparent and interoperable so that a new generation of tools to custom pathways to meet everyone’s individual need
There is a lot in this little paragraph, and a big point not to miss is the call-out of "individual need." There will be dashboards and other tools that purport to serve learners/earners with comprehensive data about the possible pathways that are open to their successful futures. A harmful trap that we can anticipate many falling into, however, will be generalized data that fails to leverage "nearest neighbor" practices, which provide users with data based on the outcomes experienced by people with characteristics similar to their own. For example, if a specific pathway has great outcomes that are disproportionately enjoyed by White males under 45 who already work in that industry, then the generalized data may be misleading to a career-changing Black woman in her early 60s who is investigating the next steps in her journey.
-
- May 2023
-
www.rand.org
-
It is also important to note that this positive evidence for low-income certificate-earners stands in contrast to findings for other historically underserved groups; studies indicate that individuals of color and older individuals go on to stack credentials at lower rates and see smaller earnings gains relative to White individuals and younger individuals (Bohn and McConville, 2018; Bohn, Jackson and McConville, 2019; Daugherty et al., 2020; Daugherty and Anderson, 2021). Although we suspect many low-income individuals are also individuals of color, the findings suggest that there are inequities within stackable credential pipelines that might be more strongly tied to race, ethnicity, and age than to socioeconomic status. It is also possible that many low-income individuals never complete a first certificate and thus do not enter a stackable credential pathway.
-
Important note on Equity: The positive findings for credential-stacking among low-income individuals stand in contrast to findings for other historically underserved populations, such as older learners and individuals of color, which show some evidence indicating lower rates of stacking and lower returns from stacking relative to younger individuals and White individuals.
-
- Apr 2023
-
clementneo.com
-
It seems like the neuron basically adds the embedding of “ an” to the residual stream, which increases the output probability for “ an” since the unembedding step consists of taking the dot product of the final residual with each token.
This cleared the dust from my eyes in understanding what the MLP layer does
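To make the mechanics concrete, here is a tiny numpy sketch (toy dimensions and a made-up token id, not the post's actual model) of how writing a token's embedding direction into the residual stream raises that token's logit at the unembedding step, assuming tied embedding/unembedding and ignoring the final layer norm:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size = 64, 1000                  # toy sizes, not GPT-2's
W_U = rng.normal(size=(d_model, vocab_size))    # unembedding matrix (also treated as the embedding here)
an_id = 42                                      # hypothetical id for the " an" token

residual = rng.normal(size=d_model)             # final residual stream at some position
logits_before = residual @ W_U                  # dot product with every token's unembedding vector

# The neuron's effect as described in the quote: add the " an" embedding direction to the residual.
residual = residual + 3.0 * W_U[:, an_id]
logits_after = residual @ W_U

print(logits_before[an_id], logits_after[an_id])  # the " an" logit increases
```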
-
- Mar 2023
-
journals.sagepub.com
-
‘networked accumulation’ platform firms (hereafter NAPFs) rely on existing or easily replaceable assets with minimal infrastructure, as distinct to platform firm models that extend or complement transport and accommodation infrastructures through the acquisition of their own fleets of vehicles or suites of properties (Stehlin et al., 2020). NAPFs typically launch local services under a cloud of ‘regulatory indeterminacy’ (Stehlin et al., 2020, 1256), relying on being “simultaneously embedded and disembedded from the space-times they mediate” (Graham, 2020, 454)
The thread in this article is the paradoxical relationship to local place: using it instrumentally to turn it into space.
This also highlights the grey regulatory zone that is typical here.
-
- Dec 2022
-
eddesignlab.org
-
same groups that are going to make the scale happen can also perpetuate the inequities. We have to be asking the right questions with the right stakeholders to ensure that we are not recreating another inequitable system that marginalizes the people we are trying to support
Holly Custard of Strada
-
Employer associations drive consistent skills language across job postings in their sectors
-
(L)earners can make themselves visible to employers around the country and around the globe by “opting in” to digital sector recruiting networks for internships, gigs, and full-time jobs
What equity looks like
-
- Feb 2022
-
www.workcred.org
-
To ensure an equitable and inclusive participation, Workcred invited executive directors and directors of certification that represented certification bodies based on selected criteria—whether their certification(s) could be aligned at the cognitive content level of a bachelor’s degree, whether the organization participates in Workcred’s Credentialing Body Advisory Council, and if they are accredited by a third party. For purposes of this project, accreditation served as a proxy for the industry value of the certification. Select employers in industries related to the focus of a convening were also invited to participate.
Trust: in investigating how to embed credentials that will be trusted, leaders convened participants whose affiliations might add trust to the effort. Meta. Also, interesting detail in the Change Management approach.
-
- Aug 2021
-
arxiv.org
-
-
colah.github.io
-
t-SNE visualizations of word embeddings.
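As a rough illustration of the technique the post visualizes, here is a minimal scikit-learn sketch; the words and random vectors are toy stand-ins that you would replace with real word embeddings:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Toy stand-ins; in practice these would be real word vectors (word2vec, GloVe, ...).
words = ["river", "bank", "money", "water", "loan", "cash"]
rng = np.random.default_rng(0)
X = rng.normal(size=(len(words), 50))

# Project to 2-D; perplexity must be smaller than the number of points.
coords = TSNE(n_components=2, perplexity=3, init="pca", random_state=0).fit_transform(X)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), w in zip(coords, words):
    plt.annotate(w, (x, y))
plt.show()
```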
-
-
mccormickml.com
-
The second-to-last layer is what Han settled on as a reasonable sweet-spot.
Pretty arbitrary choice
-
- Mar 2021
-
psyarxiv.com
-
Lindow, Mike, David DeFranza, Arul Mishra, and Himanshu Mishra. ‘Scared into Action: How Partisanship and Fear Are Associated with Reactions to Public Health Directives’. PsyArXiv, 12 January 2021. https://doi.org/10.31234/osf.io/8me7q.
-
- Oct 2020
-
facebook.github.io
-
ECMAScript 6th Edition (ECMA-262) introduces template literals which are intended to be used for embedding DSL in ECMAScript.
-
-
raw.githubusercontent.com
-
<Playground> ```html filename=index.html
-
- Sep 2020
-
-
Ehlert, A., Kindschi, M., Algesheimer, R., & Rauhut, H. (2020). Human social preferences cluster and spread in the field. Proceedings of the National Academy of Sciences, 117(37), 22787–22792. https://doi.org/10.1073/pnas.2000824117
-
- May 2020
-
psyarxiv.com
-
Golino, H., Christensen, A. P., Moulder, R. G., Kim, S., & Boker, S. M. (2020, April 14). Modeling latent topics in social media using Dynamic Exploratory Graph Analysis: The case of the right-wing and left-wing trolls in the 2016 US elections. https://doi.org/10.31234/osf.io/tfs7c
-
-
github.com
-
import scipy.spatial

for query, query_embedding in zip(queries, query_embeddings):
    # cosine distance between this query vector and every embedding in the corpus
    distances = scipy.spatial.distance.cdist([query_embedding], corpus_embeddings, "cosine")[0]
How to calculate cosine distance between a vector and a corpus.
-
-
mccormickml.com
-
simple approach is to average the second to last hidden layer of each token producing a single 768 length vector
A proposed way to obtain a single vector for the whole sentence.
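A minimal sketch of that averaging idea, assuming the Hugging Face transformers library and bert-base-uncased (the tutorial's own code may use a different API version):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("Here is the sentence I want an embedding for.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states           # tuple: embedding layer + 12 encoder layers
second_to_last = hidden_states[-2]              # shape: [1, num_tokens, 768]
sentence_vector = second_to_last.mean(dim=1).squeeze()  # average over tokens -> [768]
print(sentence_vector.shape)                    # torch.Size([768])
```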
-
It is worth noting that word-level similarity comparisons are not appropriate with BERT embeddings because these embeddings are contextually dependent, meaning that the word vector changes depending on the sentence it appears in. This allows wonderful things like polysemy so that e.g. your representation encodes river “bank” and not a financial institution “bank”, but makes direct word-to-word similarity comparisons less valuable. However, for sentence embeddings similarity comparison is still valid such that one can query, for example, a single sentence against a dataset of other sentences in order to find the most similar. Depending on the similarity metric used, the resulting similarity values will be less informative than the relative ranking of similarity outputs since many similarity metrics make assumptions about the vector space (equally-weighted dimensions, for example) that do not hold for our 768-dimensional vector space.
Thoughts on similarity comparison for word and sentence level embeddings.
-
For out of vocabulary words that are composed of multiple subword and character-level embeddings, there is a further issue of how best to recover this embedding. Averaging the embeddings is the most straightforward solution (one that is relied upon in similar embedding models with subword vocabularies like fasttext), but summation of subword embeddings and simply taking the last token embedding (remember that the vectors are context sensitive) are acceptable alternative strategies.
Strategies for how to get an embedding for an OOV word
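A small sketch of those strategies for one out-of-vocabulary word, again assuming Hugging Face transformers; the WordPiece split shown in the comment is what I would expect for "embeddings", not something taken from the article:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("The word embeddings are contextual.", return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
# Expected WordPiece pieces for "embeddings": 'em', '##bed', '##ding', '##s'
positions = [i for i, t in enumerate(tokens) if t in ("em", "##bed", "##ding", "##s")]

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]   # [num_tokens, 768]

word_vec_avg = hidden[positions].mean(dim=0)        # strategy 1: average the pieces
word_vec_sum = hidden[positions].sum(dim=0)         # strategy 2: sum the pieces
word_vec_last = hidden[positions[-1]]               # strategy 3: take the last piece only
```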
-
It should be noted that although the [CLS] acts as an “aggregate representation” for classification tasks, this is not the best choice for a high quality sentence embedding vector. According to BERT author Jacob Devlin: “I’m not sure what these vectors are, since BERT does not generate meaningful sentence vectors. It seems that this is doing average pooling over the word tokens to get a sentence vector, but we never suggested that this will generate meaningful sentence representations.”
About [CLS] token not being a good quality sentence level embedding :O
-
In order to get the individual vectors we will need to combine some of the layer vectors…but which layer or combination of layers provides the best representation?
Strategies for aggregating the information from 12 layers
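Two of the combinations typically compared are concatenating and summing the last four layers. Here is a self-contained toy sketch with random tensors shaped like the [12 x 1 x 22 x 768] object described in the next excerpt:

```python
import torch

# Random stand-in for the tuple of hidden states: 12 layers, batch of 1, 22 tokens, 768 features.
hidden_states = tuple(torch.randn(1, 22, 768) for _ in range(12))

token_i = 5  # any token position

# Strategy 1: concatenate the last four layers -> a 3072-dim vector for this token.
concat_vec = torch.cat([layer[0, token_i] for layer in hidden_states[-4:]], dim=0)

# Strategy 2: sum the last four layers -> a 768-dim vector for this token.
sum_vec = torch.stack([layer[0, token_i] for layer in hidden_states[-4:]]).sum(dim=0)

print(concat_vec.shape, sum_vec.shape)  # torch.Size([3072]) torch.Size([768])
```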
-
This object has four dimensions, in the following order:
- The layer number (12 layers)
- The batch number (1 sentence)
- The word / token number (22 tokens in our sentence)
- The hidden unit / feature number (768 features)
That’s 202,752 unique values just to represent our one sentence!
Expected dimensionality for a sentence embedding
-
BERT offers an advantage over models like Word2Vec, because while each word has a fixed representation under Word2Vec regardless of the context within which the word appears, BERT produces word representations that are dynamically informed by the words around them.
Advantage of BERT embedding over word2vec
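A quick way to see that contextuality in practice, assuming Hugging Face transformers (this example is mine, not the quoted article's): compare the vector for "bank" in two different sentences.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def bank_vector(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return hidden[tokens.index("bank")]

v_river = bank_vector("He sat on the bank of the river.")
v_money = bank_vector("She deposited money at the bank.")

cos = torch.nn.functional.cosine_similarity(v_river, v_money, dim=0)
print(cos.item())  # noticeably below 1.0: the same word gets different vectors in different contexts
```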
-
-
jalammar.github.io
-
BERT for feature extraction: The fine-tuning approach isn’t the only way to use BERT. Just like ELMo, you can use the pre-trained BERT to create contextualized word embeddings. Then you can feed these embeddings to your existing model – a process the paper shows yield results not far behind fine-tuning BERT on a task such as named-entity recognition.
How to extract embeddings from BERT
-
-
blog.usejournal.com
-
([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks.
Aggregate sequence representation? Does it mean it is the sentence embedding?
-
-
-
I extracted embeddings from a pytorch model (pytorch_model.bin file). The code to extract is pasted here. It assumes the embeddings are stored with the name bert.embeddings.word_embeddings.weight.
How to extract raw BERT input embeddings? Those are not context aware.
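The extraction code itself is not in this excerpt; here is a minimal sketch of what it presumably does (the file name and state-dict key come from the quote, the transformers alternative at the end is my addition):

```python
import torch

# Pull the static input-embedding matrix out of a downloaded pytorch_model.bin checkpoint.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
word_embeddings = state_dict["bert.embeddings.word_embeddings.weight"]  # [vocab_size, 768]
print(word_embeddings.shape)  # e.g. torch.Size([30522, 768]) for bert-base-uncased

# Equivalent via the transformers library, without touching the raw file:
# from transformers import BertModel
# emb = BertModel.from_pretrained("bert-base-uncased").embeddings.word_embeddings.weight
```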
-
-
towardsdatascience.com
-
about 30,000 vectors or embeddings (we can train the model with our own vocabulary if needed – though this has many factors to be considered before doing so, such as the need to pre-train model from scratch with the new vocabulary). These vectors are referred to as raw vectors/embeddings in this post to distinguish them from their transformed counterparts once they pass through the BERT model. These learned raw vectors are similar to the vector output of a word2vec model — a single vector represents a word regardless of its different meanings or senses. For instance, all the different senses/meanings (cell phone, biological cell, prison cell) of a word like “cell” is combined into a single vector.
BERT offers two kinds of embeddings:
- raw embeddings, similar to word2vec: a single vector represents a word regardless of its different meanings or senses
- context-aware embeddings: the vectors after they pass through the model
-
- Oct 2019
-
www.gatsbyjs.org
-
MDX is a superset of Markdown. It allows you to write JSX inside markdown. This includes importing and rendering React components!
-
- Sep 2019
-
developers.googleblog.com
-
Text embedding models convert any input text into an output vector of numbers, and in the process map semantically similar words near each other in the embedding space: Figure 2: Text embeddings convert any text into a vector of numbers (left). Semantically similar pieces of text are mapped nearby each other in the embedding space (right). Given a trained text embedding model, we can directly measure the associations the model has between words or phrases. Many of these associations are expected and are helpful for natural language tasks. However, some associations may be problematic or hurtful. For example, the ground-breaking paper by Bolukbasi et al. [4] found that the vector-relationship between "man" and "woman" was similar to the relationship between "physician" and "registered nurse" or "shopkeeper" and "housewife"
love that Big Lebowski reference
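To make the "measure the associations" point concrete, here is a small gensim sketch probing the kind of vector relationship Bolukbasi et al. describe; gensim and the GloVe download are my choices, not the blog post's:

```python
import gensim.downloader as api

# Illustrative only; downloads pre-trained GloVe vectors via gensim's dataset API.
vectors = api.load("glove-wiki-gigaword-100")

# What relates to "physician" the way "woman" relates to "man"?
print(vectors.most_similar(positive=["physician", "woman"], negative=["man"], topn=5))
```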
-
- Jan 2019
-
www.bitwig.com
-
Grid devices can be nested or layered along with other devices and your plug-ins,
Thanks to training on Cycling ’74 Max, I had a kind of micro-epiphany about encapsulation a year or so ago. Nesting devices in one another sounds like a convenience, but there’s a rather deep effect on workflow when you start arranging things in this way: you don’t have to worry about the internals of a box/patcher/module/device if you really know what you can expect out of it. Though some may take this for granted (after all, other modular systems have had it for quite a while), there’s something profound about getting modules that can include other modules. Especially when some of these are third-party plugins.
-
- Aug 2018
-
scholarlykitchen.sspnet.org
-
Publishers and other sites can include a simple line of javascript to enable annotation by default across their content.
Publishers and platform hosts who want to learn more about embedding annotations can learn more about best practices here.
-
- Apr 2017
-
www.tensorflow.org
-
Word2vec is a particularly computationally-efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Section 3.1 and 3.2 in Mikolov et al.).
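A minimal sketch of training both flavors, using gensim rather than the TensorFlow tutorial's own code, so it only illustrates the two model variants, not that page's implementation:

```python
from gensim.models import Word2Vec

# Tiny toy corpus: a list of tokenized sentences.
sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["word", "embeddings", "map", "words", "to", "dense", "vectors"],
]

cbow = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)      # Continuous Bag-of-Words
skipgram = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)  # Skip-Gram

print(skipgram.wv["fox"].shape)  # (50,)
```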
-