- Jun 2021
-
files.slack.com files.slack.com
-
rlkj <- function (K, eta = 1) {
what does this function do? what is the point of it?
-
-
forum.effectivealtruism.org forum.effectivealtruism.org
-
Account for a positive correlation between the growth premium and population growth, as people are more likely to move to a fast growing city
Is this different from 5?
-
-
adv-r.had.co.nz adv-r.had.co.nz
-
Binary search is particularly useful for this. To do a binary search, you repeatedly remove half of the code until you find the bug. This is fast because, with each step, you reduce the amount of code to look through by half.
I think I came up with this on my own, it seemed obvious. Anyone else?
-
-
adv-r.had.co.nz adv-r.had.co.nz
-
Welcome
@peterhurford's gist on how to read this efficiently
-
-
daaronr.github.io daaronr.github.io
-
Citr package (addin) for RStudio
This is now outdated, as there is now citation support within Rstudio, at least in the visual mode
-
- May 2021
-
r4ds.had.co.nz r4ds.had.co.nz
-
sim %>% mutate(sims = invoke_map(f, params, n = 10)) #> # A tibble: 3 x 3
invoke_map is deprecated. How to use the newer syntax?
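A hedged sketch of the replacement purrr suggests (map2() plus exec(), splicing each params list with !!!), assuming the sim tibble from this chapter with columns f and params:

```r
library(dplyr)
library(purrr)
library(tibble)

sim <- tribble(
  ~f,      ~params,
  "runif", list(min = -1, max = 1),
  "rnorm", list(sd = 5),
  "rpois", list(lambda = 10)
)

# invoke_map(f, params, n = 10) becomes, roughly:
sim %>% mutate(sims = map2(f, params, ~ exec(.x, !!!.y, n = 10)))
```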
-
One restriction of summarise() is that it only works with summary functions that return a single value. That means that you can’t use it with functions like quantile() that return a vector of arbitrary length:
you can use it, but it adds rows
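A quick illustration of the "it adds rows" point (in dplyr 1.1+ you also get a message suggesting reframe() for multi-row summaries):

```r
library(dplyr)

mtcars %>%
  group_by(cyl) %>%
  summarise(q = quantile(mpg, c(0.25, 0.5, 0.75)))
# returns three rows per value of cyl rather than one
```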
-
workflow for managing many models,
We saw a workflow for managing many models (see the sketch after this list):
- split the data apart
- run the same model for each of the groups
- save this all in a single organized tibble
- report and graph the results in different ways
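A minimal sketch of that workflow, using the gapminder data as in the chapter (column and model choices follow the book's example):

```r
library(tidyverse)
library(gapminder)
library(broom)

by_country <- gapminder %>%
  group_by(country, continent) %>%
  nest() %>%                                                   # 1. split the data apart
  mutate(model = map(data, ~ lm(lifeExp ~ year, data = .x)))   # 2. same model for each group
                                                               # 3. all stored in one organized tibble
by_country %>%                                                 # 4. report/graph the results
  mutate(glance = map(model, glance)) %>%
  unnest(glance) %>%
  arrange(r.squared)
```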
-
broom::glance)
Glance gets some key elements of the models' outputs, I guess.
-
by_country %>% mutate(glance = map(model, broom::glance)) %>% unnest(glance)
Note that unnest seems to spread out the elements of the glance output into columns, but as these are specific to each country, it doesn't add more rows (while unnesting resids would do so).
-
resids
second arg refers to 'the thing that's being unnested' I guess
-
resids = map2(data, model, add_residuals)
how does this syntax work? How do data and model end up referring to columns in the by_country tibble? Because it's inside the mutate(), I guess, so the data frame is 'implied'.
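A tiny illustration of the two pieces at work (plain purrr/dplyr behaviour, nothing specific to the book):

```r
library(purrr)

# map2() simply walks two lists in parallel:
map2(list(1, 2), list(10, 20), ~ .x + .y)   # list(11, 22)

# Inside mutate(), bare names are looked up in the tibble first ("data masking"),
# so `data` and `model` resolve to the list-columns by_country$data and by_country$model.
```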
-
-
stackoverflow.com stackoverflow.com
-
do(map_dfr(.$fit, tidy, .id = "dataset"))
do(map_dfr(.$fit, broom::tidy, .id = "dataset"))
-
mutate(fit = flatten(pmap(.l = list(.f = funcs, .formulas = models, data = dat), .f = modelr::fit_with)))
this actually 'runs' the model. Check out the element [[4]][1] of the object defined here
-
-
r4ds.had.co.nz r4ds.had.co.nz
-
You can also use an integer to select elements by position: x <- list(list(1, 2, 3), list(4, 5, 6), list(7, 8, 9)) x %>% map_dbl(2) #> [1] 2 5 8
select same element across each list
-
map_*() uses … ([dot dot dot]) to pass along additional arguments to .f each time it’s called:
this is big
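For example (the na.rm argument is passed through ... to mean() on every call):

```r
library(purrr)

x <- list(c(1, 2, NA), c(4, NA, 6))
map_dbl(x, mean)                 # NA  NA
map_dbl(x, mean, na.rm = TRUE)   # 1.5 5.0
```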
-
-
en.wikipedia.org en.wikipedia.org
-
The immediate consequence of the exogeneity assumption is that the errors have mean zero: \(E[\varepsilon] = 0\), and that the regressors are uncorrelated with the errors: \(E[X^T\varepsilon] = 0\).
remember we are talking about the errors in the true equation not the estimated residuals ... the latter are set orthogonal to the X's as part of the minimization problem.
-
OLS estimation can be viewed as a projection onto the linear space spanned by the regressors. (Here each of \(X_1\) and \(X_2\) refers to a column of the data matrix.)
the diagram is not so clear. So the \(X\hat{\beta}\) is some vector in \(x_1, x_2\) space, but what exactly does it represent?
Perhaps it would be the line in the direction along which we get the greatest predicted increase (per distance) in the y variable... but so what?
-
applicable
what do you mean 'applicable'?
-
This is illustrated at the right.
illustration missing
-
In other words, the gradient equations at the minimum can be written as: \((\mathbf{y} - X\hat{\boldsymbol{\beta}})^T X = 0\).
this comes from a standard first order condition in vector calculus
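The step, sketched (standard matrix calculus, nothing beyond the definitions in the article):

$$S(\beta) = (y - X\beta)^T (y - X\beta) = y^T y - 2\beta^T X^T y + \beta^T X^T X \beta$$
$$\frac{\partial S}{\partial \beta} = -2X^T y + 2X^T X \hat{\beta} = 0 \;\Rightarrow\; X^T(y - X\hat{\beta}) = 0 \;\Leftrightarrow\; (y - X\hat{\beta})^T X = 0$$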
-
when y is projected orthogonally onto the linear subspace spanned by the columns of X.
Here I believe they mean ...
the goal is to minimize the 'length' of the residual vector, where the 'length' is defined by squaring all the residuals and adding them up
this goal is attained (derived through vector calculus) when 'y is projected orthogonally onto the linear subspace spanned by the column.'
But, suppose there are 2 predictor variables, age and height. The linear subspace spanned by columns of X will typically represent (e.g.) all values of age and height (including negative and ridiculously large ones.)
So what does it mean 'when \(\hat{y}\) is projected orthogonally onto this subspace'?
-
. Thus, the residual vector y − Xβ
vector of residuals of dimension equal to the number of observations
-
ing from the moment conditions \(\mathrm{E}\big[\, x_i (y_i - x_i^T \beta) \,\big] = 0\).
rem: \(y_i - x_i^{T}\beta\) is the distance between the "predicted" (or 'projected') and actual y. It is a distance or difference in the y (outcome) dimension only. I.e., (in 2 dimensions) the 'vertical distance'.
This is not the same as the Euclidean distance (L2 norm) between the observation in x,y space and the 'prediction plane' -- the latter is orthogonal by definition.
Here we are saying that the sum of the 'prediction errors' (the vertical distances) weighted by the values of each x is set to be zero, and this must hold for all x's. Note that each prediction error is signed (it can be positive or negative), which is what allows the weighted terms to cancel. The orthogonality condition is saying 'get these weighted vertical distances to sum to zero please'.
This condition arises as a result of the previous 'minimize the sum of squared deviations' problem.
-
In particular, this assumption implies that for any vector-function ƒ, the moment condition E[ƒ(xi)·εi] = 0 will hold.
but we sometimes use other fancier moment conditions in Econometrics IIRC.
-
The OLS estimator \(\hat{\beta}\) in this case can be interpreted as the coefficients of vector decomposition of \(\hat{y} = Py\) along the basis of X.
as in principal-component analysis
-
-
en.wikipedia.org en.wikipedia.org
-
In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), which is a common measure of estimator quality, of the fitted values of a dependent variable. In the Bayesian setting, the term MMSE more specifically refers to estimation with quadratic loss function. In such case, the MMSE estimator is given by the posterior mean of the parameter to be estimated. Since the posterior mean is cumbersome to calculate, the form of the MMSE estimator is usually constrained to be within a certain class of functions. Linear MMSE estimators are a popular choice since they are easy to use, easy to calculate, and very versatile. It has given rise to many popular estimators such as the Wiener–Kolmogorov filter and Kalman filter.
And remember that the Max Likelihood estimator is setting the parameters (betas) so as to 'maximize the likelihood of the data given these parameters, assuming (e.g.) a normally distributed error structure'.
In contrast, the Bayesian 'estimator' (really the posterior) will consider the likelihood of different parameters given the data (the 'likelihood') and the prior (normal) distribution.
-
- Apr 2021
-
www2.hawaii.edu www2.hawaii.edu
-
save persons’ factor scores, and
This is what I need!
-
Principal axis factoring is not to be confused with principal components analysis (PCA), which strictly speaking is not a type of common factor analysis because it generates components rather than factors. Unlike factors, components include the unique variances of the observed variables. This is similar to how we calculate composite scores in classical test theory (though PCA finds the best possible solution to reducing variables, with their error, into composites). PCA is appropriate when we are not interested in assuming that there is an underlying latent variable such as a construct corresponding with the components. When we assume that the factors do represent latent traits, common factor analysis is more appropriate than PCA. Because we are interested in measuring latent traits or constructs, we do not use PCA in this chapter.
PCA versus EFA
-
library(psych) fa(r = , nfactors = , n.obs = , fm = , rotate = ) The first argument, which can replace the r =, is for the data, whether it be a data frame of the raw data or a correlation or covariance matrix.
So whatever this is doing it's a function of the covariance matrix and the sample size
-
-
blogs.worldbank.org blogs.worldbank.org
-
but in order to account for correlations, the current best-practice approach is to follow Katz, Kling and Liebman (2007) in calculating bootstrapped estimates of adjusted p-values using a modification of the free step-down algorithm of Westfall and Young (1993).
outdated?
-
-
happygitwithr.com happygitwithr.com
-
Do a bit more work. Re-check that your project is still in a functional state. Commit again but this time amend your previous commit. RStudio offers a check box for “Amend previous commit” or in the shell: git commit --amend --no-edit The --no-edit part retains the current commit message of “WIP”. Don’t push! Your history now looks like this: A -- B -- C -- WIP* but the changes associated with the WIP* commit now represent your last two commits, i.e. all the accumulated changes since state C. Keep going like this. Let’s say you’ve finally achieved
repeated 'amending commits' to avoid a clutter of commits. You can't push in the interim
-
-
happygitwithr.com happygitwithr.com
-
git checkout issue-5 git reset HEAD^
Note this does not remove changes in the files, it just rolls back the 'commit' that was made
-
git commit --all -m "WIP" git checkout master Then when you come back to the branch and continue your work, you need to undo the temporary commit by resetting your state. Specifically, we want a mixed reset. This is “working directory safe”, i.e. it does not affect the state of any files. But it does peel off the temporary WIP commit. Below, the reference HEAD^ says to roll the commit state back to the parent of the current commit (HEAD). git checkout issue-5 git reset HEAD^
quick switches between branches without losing content
-
-
happygitwithr.com happygitwithr.com
-
Check the remote was cloned successfully:
need to cd into dir first
-
Unless you use the GitHub API, most of the GitHub bits really have to be done from the browser.
some day I will learn to use this
-
- Mar 2021
-
journal.r-project.org journal.r-project.org
-
. Bilder and Loughin (2007) introduced a flexible loglinear modeling approach that allows researchers to consider alternative association structures somewhere between SPMI and complete dependence. Within this framework, a model under SPMI is given as \(\log(\mu_{ab(ij)}) = \gamma_{ij} + \eta^W_{a(ij)} + \eta^Y_{b(ij)}\)
association under independence (as a linear model). Probability is the product, so log probability is the sum of the log probabilities
-
asymptotic distribution is a linear combination of independent \(\chi^2_1\) random variables (Bilder and Loughin, 2004)
but with a different asymmetric distribution
-
a test for simultaneous pairwise marginal independence (SPMI), involves determining whether each \(W_1, \ldots, W_I\) is pairwise independent of each \(Y_1, \ldots, Y_J\). Our MI.test() function calculates their modified Pearson statistic
it sums the Chi-sq over pairwise combinations
-
Examining all possible combinations of the positive/negative item responses between MRCVs is the preferred way to display and subsequently analyze MRCV data.
rather bulky
-
-
-
3 How EAs get Involved in EA
If I annotate here, where does it go?
-
- Feb 2021
-
systematicreviewsjournal.biomedcentral.com systematicreviewsjournal.biomedcentral.com
-
To deal with this, we organised all of the factors into six overarching categories, comprising three barriers and three facilitators: 1. Difficulties in accessing evidence (six studies) 2. Challenges in understanding the evidence (three studies) 3. Insufficient resources (six studies) 4. Knowledge sharing and ease of access (six studies) 5. Professional advisors and networks (three studies) 6. A broader definition of what counts as credible evidence and better standardisation of reporting (three studies).
barriers and facilitators organised - seems to miss psychological factors?
-
-
giving-evidence.com giving-evidence.com
-
They run conjoint analysis: in which customers are offered goods with various combinations of characteristics and price – maybe a pink car with a stereo for £1,000, a pink car without a stereo for £800, a blue car for £1,100 and a blue car without a stereo for £950 – to identify how much customers value each characteristic.
But these are usually (always) hypothetical choices, I believe.
-
Let me tell you a story. Once upon a time, researcher Dean Karlan was investigating microloans to poor people in South Africa, and what encourages people to take them. He sent people flyers with various designs and offering loans at various rates and sizes. It turns out that giving consumers only one choice of loan size, rather than four, increased their take-up of loans as much as if the lender had reduced the interest rate by about 20 percent. And if the flyer features a picture of a woman, people will pay more for their loan – demand was as high as if the lender had reduced the interest rate by about a third. Nobody would say in a survey or interview that they would pay more if a flyer has a lady on it. But they do. Similarly, Nobel Laureate Daniel Kahneman reports that, empirically, people are more likely to believe a statement if it is written in red than in green. But nobody would say that in a survey, not least because we don’t know it about ourselves.
on self-reported motivations
-
-
towardsdatascience.com towardsdatascience.com
-
do(cluster_summary = summary(.))
do() was old dplyr syntax, since superseded by list-columns with nest() + map() (or by group_modify()): more consistent, but more verbose
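One hedged sketch of a modern equivalent (the names df and cluster stand in for whatever the article's data are actually called):

```r
library(dplyr)
library(tidyr)
library(purrr)

# old:  df %>% group_by(cluster) %>% do(cluster_summary = summary(.))
# newer (list-columns via nest() + map(); group_modify() is another option):
df %>%
  group_by(cluster) %>%
  nest() %>%
  mutate(cluster_summary = map(data, summary))
```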
-
gives us the best segmentation possible.
that's a bit strong
-
Just like K-means and hierarchical algorithms go hand-in-hand with Euclidean distance, the Partitioning Around Medoids (PAM) algorithm goes along with the Gower distance.
why can't I do hierarchical with Gower distance?
-
The silhouette width is one of the very popular choices when it comes to selecting the optimal number of clusters. It measures the similarity of each point to its cluster, and compares that to the similarity of the point with the closest neighboring cluster. This metric ranges between -1 to 1, where a higher value implies better similarity of the points to their clusters.
This is under- explained.
Silhouette width of each obs: Scaled measure of dissimilarity from (nearest) neighbor cluster relative to dissimilarity from own cluster.
-
library(cluster)
gower_df <- daisy(german_credit_clean, metric = "gower", type = list(logratio = 2))
The code needs a preceding line, mutate_if(is.character, as.factor), to avoid an error.
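Putting that together, a sketch of the fixed pipeline (assuming german_credit_clean is the data frame from the article):

```r
library(cluster)
library(dplyr)

gower_df <- german_credit_clean %>%
  mutate_if(is.character, as.factor) %>%                 # daisy() wants factors, not character columns
  daisy(metric = "gower", type = list(logratio = 2))
```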
-
We find that the variable amount needs a log transformation due to the positive skew in its distribution.
just by visual inspection?
the others DON'T all seem normally distributed to me
-
e details about the mathematics of Gower distance are quite complicated and left out for another article.
I want to know
-
Clustering datasets having both numerical and categorical variables
discusses the vignette I used before more completely
-
-
www.datanovia.com www.datanovia.com
-
For each observation \(i\), calculate the average dissimilarity \(a_i\) between \(i\) and all other points of the cluster to which \(i\) belongs. For all other clusters \(C\), to which \(i\) does not belong, calculate the average dissimilarity \(d(i,C)\) of \(i\) to all observations of \(C\). The smallest of these \(d(i,C)\) is defined as \(b_i = \min_C d(i,C)\). The value of \(b_i\) can be seen as the dissimilarity between \(i\) and its “neighbor” cluster, i.e., the nearest one to which it does not belong. Finally the silhouette width of the observation \(i\) is defined by the formula: \(S_i = (b_i - a_i)/\max(a_i, b_i)\).
Silhouette width of each obs: Scaled measure of dissimilarity from (nearest) neighbor cluster relative to dissimilarity from own cluster.
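In code, a sketch (assuming the gower_df dissimilarity object from earlier; k = 3 is just for illustration):

```r
library(cluster)

pam_fit <- pam(gower_df, diss = TRUE, k = 3)
pam_fit$silinfo$avg.width   # average silhouette width for this choice of k
```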
-
Average silhouette method
this is not really an explanation!
-
The total WSS measures the compactness of the clustering and we want it to be as small as possible.
as small as possible (within sample) for a given number of clusters
-
To avoid distortions caused by excessive outliers, it’s possible to use PAM algorithm, which is less sensitive to outliers.
another solution to outliers?
-
Next, the wss (within sum of square) is drawn according to the number of clusters. The location of a bend (knee) in the plot is generally considered as an indicator of the appropriate number of clusters.
need more explanation here. What is the value of this "within sum of square" and why does a 'bend' lead to the appropriate number
-
K-means algorithm can be summarized as follows:
1. Specify the number of clusters (K) to be created (by the analyst)
2. Select randomly k objects from the dataset as the initial cluster centers or means
3. Assign each observation to its closest centroid, based on the Euclidean distance between the object and the centroid
4. For each of the k clusters, update the cluster centroid by calculating the new mean values of all the data points in the cluster. The centroid of the kth cluster is a vector of length p containing the means of all variables for the observations in the kth cluster; p is the number of variables.
5. Iteratively minimize the total within sum of squares. That is, iterate steps 3 and 4 until the cluster assignments stop changing or the maximum number of iterations is reached. By default, the R software uses 10 as the default value for the maximum number of iterations.
the implicit claim is that this 'mean-finding' procedure will minimise the sum of squared distances
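A minimal sketch of the procedure and of the within-sum-of-squares it minimizes (base stats::kmeans on a built-in dataset, not the tutorial's data); this also produces the "elbow" plot asked about above:

```r
df  <- scale(USArrests)   # standardize so every variable gets equal weight
km  <- kmeans(df, centers = 4, nstart = 25)
km$tot.withinss           # the total within-cluster sum of squares being minimized

# the "elbow": total WSS for K = 1..10
wss <- sapply(1:10, function(k) kmeans(df, centers = k, nstart = 25)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "Number of clusters K",
     ylab = "Total within-cluster sum of squares")
```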
-
to use correlation distance, the data are input as z-scores.
normalization to weigh each dimension the same
-
-
en.wikipedia.org en.wikipedia.org
-
A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts.
But what if the traits you are trying to measure are actually correlated in the real world?
-
-
en.wikipedia.org en.wikipedia.org
-
The remaining term, \(1/(1 - R_j^2)\), is the VIF. It reflects all other factors that influence the uncertainty in the coefficient estimates. The VIF equals 1 when the vector \(X_j\) is orthogonal to each column of the design matrix for the regression of \(X_j\) on the other covariates. By contrast, the VIF is greater than 1 when the vector \(X_j\) is not orthogonal to all columns of the design matrix for the regression of \(X_j\) on the other covariates. Finally, note that the VIF is invariant to the scaling of the variables
VIF interpretation
-
It turns out that the square of this standard error, the estimated variance of the estimate of βj, can be equivalently expressed as: \(\widehat{\operatorname{var}}(\hat{\beta}_j) = \frac{s^2}{(n-1)\,\widehat{\operatorname{var}}(X_j)} \cdot \frac{1}{1 - R_j^2}\), where \(R_j^2\) is the multiple R² for the regression of Xj on the other covariates (a regression that does not involve the response variable Y). This identity separates the influences of several distinct factors on the variance of the coefficient estimate: s²: greater scatter in the data around the regression surface leads to proportionately more variance in the coefficient estimates; n: greater sample size results in proportionately less variance in the coefficient estimates; \(\widehat{\operatorname{var}}(X_j)\): greater variability in a particular covariate leads to proportionately less variance in the corresponding coefficient estimate. The remaining term, \(1/(1 - R_j^2)\), is the VIF. It reflects all other factors that influence the uncertainty in the coefficient estimates
a useful decomposition of the variance of the estimated coefficient
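A quick sketch of the last factor 'by hand', checked against the car package (mtcars used purely for illustration):

```r
m  <- lm(mpg ~ wt + hp + disp, data = mtcars)
r2 <- summary(lm(wt ~ hp + disp, data = mtcars))$r.squared  # R_j^2: wt regressed on the other covariates
1 / (1 - r2)        # VIF for wt, by hand
car::vif(m)["wt"]   # same value from the car package
```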
-
-
danielmiessler.com danielmiessler.com
-
Summary: Algorithms to Live By
these annotations look like a great resource
-
-
maxkasy.github.io maxkasy.github.io
-
When treatment assignment takes place in waves, it is natural to adapt Thompson sampling by assigning a non-random number \(p_{dt} N_t\) of observations in wave \(t\) to treatment \(d\), in order to reduce randomness. The remainder of observations are assigned randomly so that expected shares remain equal to \(p_{dt}\).
not sure what this means
-
-
en.wikipedia.org en.wikipedia.org
-
\(Q = \frac{12n}{k(k+1)} \sum_{j=1}^{k} \left( \bar{r}_{\cdot j} - \frac{k+1}{2} \right)^2\). Note
Q is something that will increase the more a certain wine tends to be ranked systematically lower or higher than average
-
is the rank of \(x_{ij}\)
Just rank the 'scores' of the wines within each rater
-
Find the values \(\bar{r}_{\cdot j} = \frac{1}{n} \sum_{i=1}^{n} r_{ij}\)
average rank of wine j across all raters
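Base R runs this whole test directly; a sketch with toy ratings (rows are raters/blocks, columns are wines/groups; the numbers are made up purely for illustration):

```r
ratings <- matrix(c(5, 7, 6,
                    4, 8, 5,
                    6, 9, 7,
                    5, 6, 4),
                  nrow = 4, byrow = TRUE,
                  dimnames = list(paste0("rater", 1:4), paste0("wine", 1:3)))
friedman.test(ratings)
```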
-
- Jan 2021
-
-
For some reason I'm having trouble commenting on particular parts of this page with hypothesis
-
-
daaronr.github.io daaronr.github.io
-
Definitions
@Jasonschukraft wrote:
Not sure where to put this comment, but how are you thinking about uncertainty about effectiveness? There's a small pool of donors who deny that GiveWell has identified the most effective global poverty/health charities because (e.g.) GiveWell is too focused on "randomista" interventions and doesn't give enough weight to "systematic" interventions.
-
Individual donors, governments and firms demonstrate substantial generosity (e.g., UK charity represents 0.5-1% of GDP, US charity around 2% of GDP).
Things to emphasize, from Jason Shukraft conversation.
Do the ‘masses of donors’ matter, or only the multimillionaire response? Do the average person's small donations add up? Also, knowing more about how average people respond to analytical information (in an other-regarding/social context) will inform how to influence good long-term decision-making; likewise how to get the USDA to care about animals/WAW, and government to care about the long term.
-
-
daaronr.github.io daaronr.github.io
-
how people react to the presentation of charity-effectiveness information.
@JasonSchukraft wrote:
Maybe. I suppose it depends on our goals. Do we want people to give to top charities for the right reason (i.e., because those charities are effective) or do we just want people to give to top charities, simpliciter? If the latter, then maybe it doesn't matter how people react to effectiveness information; we should just go with whatever marketing strategy maximizes donations.
Tags
Annotators
URL
-
- Dec 2020
-
daaronr.github.io daaronr.github.io
-
Beem101: Project, discussion of research
I was asked about the 'structure' of the project. This depends on the option, on your topic choice, and on how you wish to pursue it. Nonetheless, a rough structure might look like the following:
Across the topics (more or less... it depends on the project option and topic)
- Introduce the topic, model, question, overview of what you are going to do, and why this is relevant and interesting (some combination of this)
The Economic theory/theories and model(s) presented
with reference to academic authors (papers textbooks)
using formal (maths) modeling, giving at least one simple but formal presentation, and explaining it clearly and in your own voice (remember to explain what all variables mean),
considering the assumptions and simplifications of the model, the 'Economic tool/fields considered' (e.g., optimisation, equilibrium)
Sensitivity of the 'predictions' to the assumptions
The justification for these assumptions
Relationship between this model and your (applied) topic or focus... are the assumptions relevant, what are the 'predictions' etc.
- The application or real world example:
- Explain it in practical terms and what the 'issues and questions are' (possibly engaging the previous literature a bit, but not too much)
- describe and express it formally
relate it to the model/theory and apply the model theory to your real world example
Try to 'model it' and derive 'results' or predictions, continually justifying the application of the model to the example
- Presenting and assessing the insights from the model for the application and vice/versa
- considering the relevance and sensitivity
- what alternative models might be applied, how might it be adjusted
- Discuss 'what modeling and theory achieved or did not achieve here'
Note that "2" could come before or after "3" ... you can present the application first, or the model first... (or there might even be a way to go between the two, presenting one part of each)
-
- Oct 2020
-
globalprioritiesinstitute.org globalprioritiesinstitute.org
-
pure’ altruism or ‘warm glow’ altruism (Andreoni 1990; Ashraf and Bandiera 2017)
This classification is often misunderstood and misused. The Andreoni 'Warm Glow' paper was meant to consider a fairly simple general question about giving overall, not to unpick psychological motivations.
-
The Global Priorities Institute’s vision and mission
Intending to read this and add comments when I have a chance
-
-
en.wikipedia.org en.wikipedia.org
-
Formula: The Y-intercept of the SML is equal to the risk-free interest rate. The slope of the SML is equal to the market risk premium and reflects the risk return tradeoff at a given time: \(\mathrm{SML}: E(R_i) = R_f + \beta_i [E(R_M) - R_f]\), where: \(E(R_i)\) is the expected return on the security, \(E(R_M)\) is the expected return on the market portfolio M, \(\beta_i\) is the nondiversifiable or systematic risk, \(R_M\) is the market rate of return, and \(R_f\) is the risk-free rate.
The key equation ... specifying risk vs return
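A quick worked number (illustrative values only): with a risk-free rate of 2%, an expected market return of 8%, and \(\beta_i = 1.5\),

$$E(R_i) = R_f + \beta_i [E(R_M) - R_f] = 0.02 + 1.5 \times (0.08 - 0.02) = 0.11 = 11\%$$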
-
The Y-intercept of the SML is equal to the risk-free interest rate. The slope of the SML is equal to the market risk premium and reflects the risk return tradeoff at a given time: \(\mathrm{SML}: E(R_i) = R_f + \beta_i [E(R_M) - R_f]\), with the same notation as above.
This is one statement of the key relationship.
The point is that the market will have a single tradeoff between unavoidable (nondiversifiable) risk and return.
Assets' returns must reflect this, according to the theory. Their prices will be bid up (or down) until this is the case ... the 'arbitrage' process.
Why? Because (assuming borrowing/lending at a risk-free rate) any investor can achieve a particular return for a given risk level simply by buying the 'diversified market basket' and leveraging it (for more risk) or investing the remainder in the risk-free asset (for less risk). (And she can do no better than this.)
-
This abnormal extra return above the market's return at a given level of risk is what is called the alpha.
this is why you hear the stock-touts bragging about their 'alpha'
-
-
en.wikipedia.org en.wikipedia.org
-
Capital asset pricing model
please read this article
-
quantity beta (β)
You hear about this 'beta' all the time as the measure of 'the correlation of the risk of an asset with the representative market basket'...
but confusingly, \(\beta\) is used to represent the slope of the expected return of an asset as this risk increases.
-
systematic risk (beta) t
The concept of "systematic risk" is crucial in order to understand the CAPM. This relates to the risk of an 'optimally diversified portfolio'
-
-
en.wikipedia.org en.wikipedia.org
-
If the fraction \(q\) of a one-unit (e.g. one-million-dollar) portfolio is placed in asset X and the fraction \(1-q\) is placed in Y, the stochastic portfolio return is \(qx + (1-q)y\). If \(x\) and \(y\) are uncorrelated, the variance of portfolio return is \(\text{var}(qx + (1-q)y) = q^2\sigma_x^2 + (1-q)^2\sigma_y^2\). The variance-minimizing value of \(q\) is \(q = \sigma_y^2/[\sigma_x^2 + \sigma_y^2]\), which is strictly between 0 and 1. Using this value of \(q\) in the expression for the variance of portfolio return gives the latter as \(\sigma_x^2\sigma_y^2/[\sigma_x^2 + \sigma_y^2]\), which is less than what it would be at either of the undiversified values \(q = 1\) and \(q = 0\) (which respectively give portfolio return variance of \(\sigma_x^2\) and \(\sigma_y^2\)). Note that the favorable effect of diversification on portfolio variance would be enhanced if \(x\) and \(y\) were negatively correlated but diminished (though not eliminated) if they were positively correlated.
Key building block formulae.
Start with 'what happens to the variance when we combine two assets (uncorrelated with same expected return)'
What are the variance minimizing shares and what is the resulting variance of the portfolio.
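The derivation, sketched (just the first-order condition in \(q\)):

$$\frac{d}{dq}\left[q^2\sigma_x^2 + (1-q)^2\sigma_y^2\right] = 2q\sigma_x^2 - 2(1-q)\sigma_y^2 = 0 \;\Rightarrow\; q^* = \frac{\sigma_y^2}{\sigma_x^2 + \sigma_y^2}$$
$$\text{var}^* = (q^*)^2\sigma_x^2 + (1-q^*)^2\sigma_y^2 = \frac{\sigma_x^2 \sigma_y^2}{\sigma_x^2 + \sigma_y^2}$$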
-
Similarly, a 1985 book reported that most value from diversification comes from the first 15 or 20 different stocks in a portfolio.[6]
the conventional wisdom is that there are sharply diminishing returns to this diversification
-
-
bookdown.org bookdown.org
-
d(p)=(209000-130p)
a simple demand function ('price-response function')
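A quick sketch of what you can do with it, e.g. find the revenue-maximizing price (my own illustration, not from the book):

```r
d       <- function(p) 209000 - 130 * p   # the price-response function
revenue <- function(p) p * d(p)
optimize(revenue, interval = c(0, 209000 / 130), maximum = TRUE)
# analytically, revenue is maximized at p = 209000 / (2 * 130), about 803.85
```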
-
CLV Formula
customer lifetime value formula
-
-
daaronr.github.io daaronr.github.io
-
“Sue’s mother” \(R_a\) “Sue’s lecturer in the UK” \(\rightarrow\) false (so it’s not ‘transitive’)
I think this is where Andrea meant to ask her question:
I wanted to ask how is this a false statement? I want to clarify. Is it that she is a mother and this does not relate with her being a lecturer in the UK? From my understanding the theory of transitivity means that there is consistency, hence from the first statement to the last it would make sense…
-
intend
I have a video. Need to add it!
-
-
daaronr.github.io daaronr.github.io
-
(Highly optional): Properties of binary relations - O-R problem 1a.
I went over this in the 16 October Q&A. Available to Exeter students HERE: https://web.microsoftstream.com/video/c2e218a8-0632-4d86-8ad2-d0ab7b70ebfb
Tags
Annotators
URL
-
-
daaronr.github.io daaronr.github.io
-
Students
A household chooses how to invest ... to lay aside money for future consumption... which asset to buy To store this value and hopefully get “high payoffs” with little risk
-
-
-
We say that \(u\) is ‘a utility function for \(\succsim\)’.
Does "u is a utility function for \(\succsim\)" mean that the utility function 'represents' \(\succsim\)?
-
-
daaronr.github.io daaronr.github.io
-
Differentiating this wrt \(I\) yields Engel aggregation:
TODO: make video of this
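The step, sketched (standard demand theory, starting from the budget constraint; \(s_i\) is the budget share and \(\eta_{iI}\) the income elasticity):

$$\sum_i p_i x_i(p, I) = I \;\Rightarrow\; \sum_i p_i \frac{\partial x_i}{\partial I} = 1 \;\Leftrightarrow\; \sum_i s_i\,\eta_{iI} = 1, \quad \text{where } s_i = \frac{p_i x_i}{I}, \;\; \eta_{iI} = \frac{\partial x_i}{\partial I}\frac{I}{x_i}$$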
-
- Sep 2020
-
github.com github.com
-
direct
what is meant by 'direct?'
-
-
rtcharity.org rtcharity.org
-
Past Projects
These are not all 'past'; the survey continues
-
- Aug 2020
-
forum.effectivealtruism.org forum.effectivealtruism.org
-
That's because cause prioritization research is extremely difficult, not because no one has thought to do this.
Yeah, I thought the same
-
4. It is difficult to find cause neutral funding.I think funders like to choose their cause and stick with it so there is a lack of cause neutral funding.
A good point!
-
Growth and the case against randomista development,
I would say this one raised a lot of questions but didn't provide definitive answers
-
me that when reading the GPI research agenda, the economics parts read like it was written by philosophers.
I would agree with this
-
(Also, I have never worked in academia so there may be theories of change in the academic space that others could identify.)
There are some explicit 'Impact targets' in the REF, and pots of ESRC funding for 'impact activities'.
In general I don't think we believe that our 'publications' will themselves drive change. It's more like publications $$\rightarrow$$ status $$\rightarrow$$ influence policymakers
-
But for a new organisation to solely focus on doing the research that they believed would be most useful for improving the world it is unclear what the theory of change would be.
I'm not quite sure how this is differentiated from 'for a big funder'
-
I think that people are hesitant to do something new if they think it is being done, and funders want to know why the new thing is different so the abundance of organisations that used to do cause prioritisation research or do research that is a subcategory of cause prioritisation research limits other organisations from starting up.
Very good point. I think this happens in a lot of spheres.
-
Theoretical cause selection beyond speculation. Evidence of how to reason well despite uncertainty and more comparisons of different causes.
I also think this may have run into some fundamental obstacles.
-
more consideration of second order effect
super hard to measure
-
Let me give just one example, if you look at best practice in risk assessment methodologies[5] it looks very different from the naive expected value calculations used in EA
I agree somewhat, but I'm not sure if the 'risk-assessment methodologies' are easily communicated, nor if they apply to the EA concerns here.
-
theorists
here you are equating 'theorists' with long-termists
-
e. From my point of view, I could save a human life for ~£3000. I don’t want to let kids die needlessly if I can stop it. I personally think that the future is really important but before I drop the ball on all the things I know will have an impact it would be nice to have:
Reasonable statement of 'risk-aversion over the impact that I have'
-
(There could be experimental hits based giving.)
what does this mean?
-
Now let’s get a bit more complicated and do some more research and find other interventions and consider long run effects and so on”. There could be research looking for strong empirical evidence into:the second order or long run effects of existing interventions.how to drive economic growth, policy change, structural changes, and so forth.
These are just extremely difficult to do/learn about. Economists, political scientists, and policy analysts have been debating these for centuries. I'm not sure there are huge easy wins here.
-
Looking around it feels a like there is a split down the middle of the EA community:[4] On the one hand you have the empiricals: those who believe that doing good is difficult, common sense leads you astray and to create change we need hard data, ideally at least a few RCTs.On the other side are the theorists: those who believe you just need to think really hard and to choose a cause we need expected value calculations and it matters not if calculations are highly uncertain if the numbers tend to infinity.Personally I find myself somewhat drawn to the uncharted middle ground.
I agree that much of the most valuable work doesn't fall into either camp
-
Post community building I moved back into policy and most recently have found myself in the policy space, building support for future generations in the UK Parliament. Not research. Not waiting. But creating change.
This sounds a little self-aggrandizing. I don't think it was meant in such a way, though
-
The case of the missing cause prioritisation research
Putting in some Hypothes.is comments. Curious if others like this tool.
-
We theoretically expect and empirically observe impact to be “heavy tailed” with some causes being orders of magnitude more impactful
What are these 'theoretical' reasons we should expect this? Remind me.
-
-
daaronr.github.io daaronr.github.io
-
Students: please propose some of these as a Hypothes.is comment HERE.
Add some examples here, please.
Tags
Annotators
URL
-
-
daaronr.github.io daaronr.github.io
-
well
What do you mean "Wel;"
$$x^2=4$$
-
How individuals interact with one another, and the consequences of this (Game theory and mechanism design/agency problems)
What does this mean? Does it mean \(x^2=4\)
-
-
-
sometimes put together as a measure like (Cohen's) 'd' of 'effect relative to noise': effect size/SD
-
- Jul 2020
-
daaronr.github.io daaronr.github.io
-
This relies heavily on:
also raw html code
-
- Jun 2020
-
rethinkpriorities.freshteam.com rethinkpriorities.freshteam.com
-
We’re backed by Open Philanthropy, Effective Altruism Funds, and viewers like you.
The funders
-
-
bookdown.org bookdown.org
-
In typical meta-analyses, we do not have the individual data for each participant available, but only the aggregated effects, which is why we have to perform meta-regressions with predictors on a study level
But in principle we could do more if we had the raw data? This would then be a standard regression with an interaction and a study level 'random effect', I guess.
-
-
bookdown.org bookdown.org
-
Same is the case once we detect statistical heterogeneity in our fixed-effect-model meta-analysis, as indicated by
I think empirically I-sq will always exceed 0. It's a matter of degree.
-
-
handbook-5-1.cochrane.org handbook-5-1.cochrane.org
-
A useful statistic for quantifying inconsistency is \(I^2 = \frac{Q - df}{Q} \times 100\%\), where Q is the chi-squared statistic and df is its degrees of freedom (Higgins 2002, Higgins 2003). This describes the percentage of the variability in effect estimates that is due to heterogeneity rather than sampling error (chance).
I-sq measure of heterogeneity
-
- May 2020
-
www.openbookpublishers.com www.openbookpublishers.com
-
MODELS IN MICROECONOMIC THEORY
Commenting as a placeholder. Hope to use this in teaching soon.
-
-
daaronr.github.io daaronr.github.io
-
wasting
test comment -- I wouldn't say 'wasting'
Tags
Annotators
URL
-
-
bookdown.org bookdown.org
-
We can use the ecdf function to implement the ECDF in R, and then check the probability of our pooled effect being smaller than 0.30. The code looks like this.
should put this first and the plot afterwards
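A minimal sketch of that step (assuming, as in the guide, that post.samples$smd holds the posterior draws of the pooled effect; adjust the name to your own object):

```r
smd_ecdf <- ecdf(post.samples$smd)
smd_ecdf(0.30)   # Pr(pooled effect < 0.30) under the posterior
```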
-
We see that the posterior distributions follow a unimodal, and roughly normal distribution, peaking around the values for μμ\mu and ττ\tau we saw in the output.
Consider: why are the peaks not exactly these values? Mean versus mode, I guess.
-
By using the ranef function, we can also extract the estimated deviation of each study’s “true” effect size from the pooled effect: ranef(m.brm) ## $Author ## , , Intercept ## ## Estimate Est.Error Q2.5 Q97.5 ## Call et al. 0.07181028
these are measures of deviations. But they don't exactly equal the difference between the input effect size and the estimated pooled effect size. I assume that somewhere this estimates a true effect for each study which 'averages towards the mean' following some criteria.
-
0.09
Is this like a measure of the standard deviation of the estimated intercept?
-
Please be aware that Bayesian methods are much more computationally intensive compared to the standard meta-analytic techniques we covered before; it may therefore take a few minutes until the sampling is completed.
I found it was the compiling of the C++ that took a bit of time
-
m.brm <- brm(TE|se(seTE) ~ 1 + (1|Author), data = ThirdWave, prior = priors, iter = 4000)
Here R asks me to install tools and opens this link: https://www.cnet.com/how-to/install-command-line-developer-tools-in-os-x/
But I don't know which tools I need to install
-
In this example, I will use my ThirdWave dataset, which contains data of a real-world meta-analysis investigating the effects of “Third-Wave” psychotherapies in college students. The data is identical to the madata dataset we used in Chapter 4.
Again, Bayesian analysis only seems to need the right summary stats, not the raw data
-
-
r4ds.had.co.nz r4ds.had.co.nz
-
using a sophisticated algorithm
Is OLS such a sophisticated algorithm?
-
-
adv-r.hadley.nz adv-r.hadley.nz
-
call2() is often convenient to program with,
why?
-
lobstr::ast(f1(f2(a, b), f3(1, f4(2))))
I'm having trouble seeing the point of this.
-
f <- expr(f(x = 1, y = 2)) # Add a new argument f$z <- 3 f #> f(x = 1, y = 2, z = 3)
You can 'add an argument' to an expression
-
function specifically designed to capture user input in a function argument: enexpr()
I think I need a more concrete example here
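A small illustration (plain rlang usage, not tied to the book's running example):

```r
library(rlang)

capture_it <- function(x) enexpr(x)   # quote what the *user* typed for the argument
capture_it(a + b)                     # the unevaluated expression a + b
expr(a + b)                           # expr() gives the same result, but only for code typed right here
```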
-
expr() lets you capture code that you’ve typed
but what do you do with it?
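Two common uses, sketched (base R / rlang behaviour):

```r
library(rlang)

e <- expr(mean(x, na.rm = TRUE))   # captured, not run
x <- c(1, 2, NA)
eval(e)        # 1.5 -- evaluate it later, possibly in another environment
e[[1]]         # `mean` -- or inspect and modify it like a list
```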
-
-
adv-r.hadley.nz adv-r.hadley.nz
-
Note that when you attach another package with library(), the parent environment of the global environment changes:
Installed packages are 'between' the global and base environments. But when you create a new environment with the env command it is 'after' (a child of) the global environment?
-
Unlike lists, setting an element to NULL does not remove it, because sometimes you want a name that refers to NULL. Instead, use env_unbind():
setting a list element to null removes it
-
But you can’t use [[ with numeric indices, and you can’t use [:
no 'element number'
-
Only one environment doesn’t have a parent: the empty environment.
poor guy
-
The current environment, or current_env() is the environment in which code is currently executing. When you’re experimenting interactively, that’s usually the global environment, or global_env(). The global environment is sometimes called your “workspace”, as it’s where all interactive (i.e. outside of a function) computation takes place.
this is super important
-
env_print() which gives us a little more information:
env_print() to see the parent and 'bindings' of an environment
-
e1$d <- e1
referring to or setting a list element with "$" ... it can also contain itself. mind blower
-
-
adv-r.hadley.nz adv-r.hadley.nz
-
Advanced R
Is this book dynamically updated?
-
-
eml.berkeley.edu eml.berkeley.edu
-
Strong evidence for the perils of underpowered practice
-
-
www.replicationmarkets.com www.replicationmarkets.com
-
Replication is testing the same claims using data that was not used in the original study. That required some changes from us. Starting in Round 6, Replication Markets will no longer distinguish between “data replication” and “direct replication.”
But what if it is impossible to find data 'not used in the original study' that is still a direct test of the claims?
-
-
-
It has been argued that a good approach is to use weakly informative priors (Williams, Rast, and Bürkner 2018). Weakly informative priors can be contrasted with non-informative priors.
!
-
integrate prior knowledge and assumptions when calculating meta-analyses.
including uncertainty over methodological validity?
-
-
bookdown.org bookdown.org
-
It can either be stored as the raw data (including the Mean, N, and SD of every study arm) Or it only contains the calculated effect sizes and the standard error (SE).
note that this process does not 'dig in' to the raw data, it just needs the summary statistics
-
-
bookdown.org bookdown.org
-
meta and metafor package which do most of the heavy lifting, there are still some aspects of meta-analyses in the biomedical field and psychology which we consider important, but are not easy to do in R currently, particularly if you do not have a programming or statistics background. To fill this gap, we developed the dmetar package, which serves as the companion R package for this guide. The dmetar package has its own documentation, which can be found here. Functions of the dmetar package provide additional functionality for the meta and metafor packages (and a few other, more advanced packages), w
dmetar package
-
- Apr 2020
-
cran.r-project.org cran.r-project.org
-
set_variable_labels(s1 = "Sex", s2 = "Yes or No?")
Adding variable labels with pipe
-
Adding variable labels using pipe
-
-
bookdown.org bookdown.org
-
preview_chapter()
when I try this I get
Error in files2[[format]] : attempt to select less than one element in get1index
However, I'm also not able to use the knit function, only the 'build' function
-
- Mar 2020
-
r4ds.had.co.nz r4ds.had.co.nz
-
But if you end up with a very long series of chained if statements, you should consider rewriting. One useful technique is the switch() function. It allows you to evaluate selected code based on position or name. #> function(x, y, op) { #> switch(op, #> plus = x + y, #> minus = x - y, #> times = x * y, #> divide = x / y, #> stop("Unknown op!") #> ) #> }
switch is great!
-
-
-
The second type of tutorial provides much richer feedback and assessment, but also requires considerably more effort to author. If you are primarily interested in this sort of tutorial, there are many features in learnr to support it, including exercise hints and solutions, automated exercise checkers, and multiple choice quizzes with custom feedback.
full-blown course/learning materials
-
There are two main types of tutorial documents: Tutorials that are mostly narrative and/or video content, and also include some runnable code chunks. These documents are very similar to package vignettes in that their principal goal is communicating concepts. The interactive tutorial features are then used to allow further experimentation by the reader. Tutorials that provide a structured learning experience with multiple exercises, quiz questions, and tailored feedback. The first type of tutorial is much easier to author while still being very useful. These documents will typically add exercise = TRUE to selected code chunks, and also set exercise.eval = TRUE so the chunk output is visible by default. The reader can simply look at the R code and move on, or play with it to reinforce their understanding.
the easier kind of tutorial... just content with some code chunks (some pre-populated with code) the user can play with
-
-
bookdown.org bookdown.org
-
button “Run Document” in RStudio, or call the function rmarkdown::run() on this Rmd file
Hitting the button worked for me; the script did not
-
-
www.sciencedirect.com www.sciencedirect.com
-
image conscience donors
they meant 'image-conscious'
-
-
www.nytimes.com www.nytimes.com
-
First, many health experts, including the surgeon general of the United States, told the public simultaneously that masks weren’t necessary for protecting the general public and that health care workers needed the dwindling supply. This contradiction confuses an ordinary listener. How do these masks magically protect the wearers only and only if they work in a particular field?
exactly what I was thinking
-
-
www.the-brights.net www.the-brights.net
-
These results are in line with predictions, such that in those cases in which a consequentialist judgment does not clearly violate fairness-based principles about respecting others and not treating them as mere means, people do not infer that the agent is necessarily an untrustworthy social partner
but isn't it still a consequentialist judgement?!
-
We reasoned that if deontological agents are preferred over consequentialist agents because they are perceived as more committed to social cooperation, such preferences should be lessened if consequentialist agents reported their judgments as being very difficult to make, indicating some level of commitment to cooperation (Critcher, Inbar, & Pizarro, 2013). From the process dissociation perspective (Conway & Gawronski, 2013), a person who reports that it is easy to make a characteristically consequentialist judgment can be interpreted as being high in consequentialism
I'm not sure I understand or like this approach. Couldn't it just be seen as merely a stronger consequentialism if they had no doubts? And is it even a meaningful distinction ... can I like the 'presence of cold' versus the 'absence of heat'.
-
In contrast to the previous studies, for the switch dilemma, consequentialist agents were rated to be no less moral (Z = 0.73, p = .47, d = 0.10) or trustworthy (Z = 1.87, p = .06, d = 0.26) than deontological agents.
To me, this seems to weigh against their main claim. In the one case in which a majority favored the consequentialist choice, the consequentialists are not disfavored! They are really playing this down. Am I missing something?
-
Despite the general endorsement many people have that “ends do not justify means,” people do typically judge that sacrificing the one man by diverting the train is less morally wrong than sacrificing the man by using his body to stop the train (Foot, 1967; Greene et al., 2001).
How is this 'despite'? It doesn't seem to be in contradiction.
-
The switch case differs from the footbridge case in two critical ways
But it is still in the domain of HARMING people (more versus fewer).
-
The only difference is that Adam does not push the large man, but instead pushes a button that opens a trapdoor that causes the large man to fall onto the tracks.
Meh. This difference hardly seems worth bothering with.
-
The amount of money participants transferred to the agent (from $0.00 to $0.30) was used as an indicator of trustworthiness, as was how much money they believed they would receive back from the agent (0% to 100%)
Note that this is a very small stake. (And was it even perhaps hypothetical?)
-
However, the data did not support a mere similarity effect: Our results were robust to controlling for participants’ own moral judgments, such that participants who made a deontological judgment (the majority) strongly preferred a deontological agent, whereas participants who made a consequentialist judgment (the minority) showed no preference between the two
But this is a lack of a result in the context of a critical underlying assumption. Yes, the results were 'robust', but could we really be statistically confident that this was not driving the outcome? How tight are the error bounds?
-
However, the central claims behind this account—that people who express deontological moral intuitions are perceived as more trustworthy and favored as cooperation partners—have not been empirically investigated.
Here is where the authors claim their territory.
-
the typical deontological reason for why specific actions are wrong is that they violate duties to respect persons and honor social obligations—features that are crucial when selecting a social partner. An individual who claims that stealing is always morally wrong and believes themselves morally obligated to act in accordance with this duty seems much less likely to steal from me than an individual who believes that stealing is sometimes morally acceptable depending on the consequences. Actors who express characteristically deontological judgments may therefore be preferred to those expressing consequentialist judgments because these judgments may be more reliable indicators of stable cooperative behavior.
Key point.. deontological ethics signals stable cooperative behavior
-
First, deontologists’ prohibition of certain acts or behaviors may serve as a relevant cue for inferring trustworthiness, because the extent to which someone claims to follow rule or action-based judgments may be associated with the reliability of their moral behavior. One piece of preliminary evidence for this comes from a study showing that agents willing to punish third parties who violate fairness principles are trusted more, and actually are more trustworthy (Jordan, Hoffman, Bloom, & Rand, 2016).
But couldn't this punishment be seen as utilitarian... as it promotes the general social good?
-
One approach to explaining why moral intuitions often align with deontology comes from mutualistic partner choice models of the evolution of morality. These models posit a cooperation market such that agents who can be relied upon to act in a mutually beneficial way are more likely to be chosen as cooperation partners, thus increasing their own fitness
this is the key theoretical argument
-
intriguingly
let the reader decide whether it is intriguing, please.
-
And recent theoretical work has demonstrated that “cooperating without looking”—that is, without considering the costs and benefits of cooperation—is a subgame perfect equilibrium (Hoffman, Yoeli, & Nowak, 2015). Therefore, expressing characteristically deontological judgments could constitute a behavior that enhances individual fitness in a cooperation market because these judgments are seen as reliable indicators of a specific valued behavior—cooperation
Is this relevant to the idea that '(advocating) Effective giving is a bad signal'?
Does utilitarian decision-making in 'good space' contradict this?
I'm not convinced. An 'excuse not to do something' is not the same as a 'choice to be effective'.
-
Across 5 studies, we show that people who make characteristically deontological judgments are preferred as social partners, perceived as more moral and trustworthy, and are trusted more in economic games.
But this does NOT hold in the switching case/switching study
-
-
citeseerx.ist.psu.edu citeseerx.ist.psu.edu
-
Table 3 also suggests that conditional norm enforcement is more pronounced among the population with intermediate and high levels of education. This finding is consistent with the observation that conditional cooperation is particularly robust in lab experiments with student subject pools (see Gächter, 2007). The data further show that females tend to be more inclined to sanction, in particular deviations from the strong norms. In contrast, employed respondents are less engaged in sanctioning. All other socioeconomic characteristics do not show a clear
demographic breakdown of survey responses ... evidence
-
In a national survey conducted in Austria, respondents were confronted with eight different ‘incorrect behaviors’, including tax evasion, drunk driving, fare dodging or skiving off work. Respondents were then asked how they would react if an acquaintance followed such behavior. The response categories cover positive reactions – like approval (Rege and Telle, 2004) – as well as negative reactions like cooling down the contact or expressing disapproval
below... targeted to be nationally representative.
Tags
Annotators
URL
-
- Feb 2020
-
daaronr.github.io daaronr.github.io
-
A dissertation or final-year project allows you to explore your aptitude for, and interest in doing economic research
This should be a separate bullet point. This is big. If you are going to do postgraduate study it WILL involve research.
Aside from the academic track, much professional work involves research.
-
-
www1.essex.ac.uk www1.essex.ac.uk
-
James, Gareth; Witten, Daniela; Hastie, Trevor; Tibshirani, Robert. (2013) An introduction to statistical learning: with applications in R, New York: Springer. vol. Springer texts in statistics
This would seem to overlap the ML module ?
-
-
www1.essex.ac.uk www1.essex.ac.uk
-
- construct factorial experiments in blocks;
Did they get into power calculation and design efficiency? This seems more general statistics and less experimetrics. OK, it doesn't say 'design'
-
-
www1.essex.ac.uk www1.essex.ac.uk
-
Overleaf /LaTex
Not sure students need to know too much latex anymore… markdown/r-md is a lot simpler and using it with css and html bits is very flexible. (although it still helps to know how to code maths in Latex)
-
-
declaredesign.org declaredesign.org
-
f you can avoid assigning subjects to treatments by cluster, you should.
Sometimes clustered assignment is preferable if mixing treatments in a cluster --> contaminated treatments (e.g., because participants communicate)
-
fit_simple <- lm(Y_simple ~ Z_simple, data=hec)
'regress' the outcome on the treatment. Yields the ATE even with heterogeneity, if treatment is equiprobable.
-
This complication is typically addressed in one of two ways: “controlling for blocks” in a regression context, or inverse probability weights (IPW), in which units are weighted by the inverse of the probability that the unit is in the condition that it is in.
I don't think these are equivalent. I believe only the latter recovers the ATE under heterogeneity... but this is just my memory.
-
The gains from a blocked design can often be realized through covariate adjustment alone.
I believe Athey and Heckman come out strongly in favor of blocking instead of covariate adjustment.
-
Of course, such heterogeneity could be explored if complete random assignment had been used, but blocking on a covariate defends a researcher (somewhat) against claims of data dredging.
A preregistration plan can accomplish this without any cost.
-
In this simulation complete random assignment led to a -0.59% decrease in sampling variability. This decrease was obtained with a small design tweak that costs the researcher essentially nothing.
This is not visible in the html. You specified too few digits.
Also, the results would be more striking if you had a smaller data set.
-
with(hec, mean(Y1 - Y0))
ATE with heterogeneity?
-
# Reveal observed potential outcomes
He means 'the outcome observed given random assignment'
-
when deploying a survey experiment on a platform like Qualtrics, simple random assignment is the only possibility due to the inflexibility of the built-in random assignment tools.
That's not entirely true
-
Since you need to know N beforehand in order to use simple_ra(), it may seem like a useless function.
this is a confusing sentence
-
depending on the random assignment, a different number of subjects might be assigned to each group.
In large samples this won't usually matter much... but still worth avoiding, to make power as high as possible.
-
Y0 <- rnorm(n = N,mean = (2*as.numeric(Hair) + -4*as.numeric(Eye) + -6*as.numeric(Sex)), sd = 5)
linear heterogeneity of baseline and of TE
-
hec <- within(hec,{
why does he use 'within' rather than mutate?
-
-
community.spotify.com community.spotify.com
-
slipstream42 (2017-07-31): another csv export link, it is quite nice: https://rawgit.com/watsonbox/exportify/master/exportify.html (code on github)
works great
-
-
www.vox.com www.vox.com
-
As you can see, having one fewer child still comes out looking like a solid way to reduce carbon emissions — but it’s absolutely nowhere near as effective as it first seemed. It no longer dwarfs the other options. On this model, instead of having one fewer kid, you can skip a couple of transatlantic flights and you’ll save the same amount of carbon. That seems like a way more manageable sacrifice if you’re a young person who longs to be a parent.
Even if I believed the highly optimistic predictions of very strong climate policy in the USA (which I don't), having one fewer child still reduces emissions each year more than twice as much as living car free or avoiding a trans-atlantic flight every year.
And they state it as "instead of having one fewer kid, you can skip a couple of transatlantic flights and you’ll save the same amount of carbon." ... but this requires each parent to forgo 2 transatlantic flights they would have taken every year for the rest of their life, if I understand correctly.
-
-
r4ds.had.co.nz r4ds.had.co.nz
-
commas <- function(...) stringr::str_c(..., collapse = ", ")
no braces needed for function on a single line
Tags
Annotators
URL
-
-
r4ds.had.co.nz r4ds.had.co.nz
-
it’s the same as the input!
because we want to modify columns in place
-
Compute the mean of every column in mtcars.
output <- vector("double", ncol(mtcars))  # 1. output
for (i in seq_along(mtcars)) {            # 2. sequence
  output[[i]] <- mean(mtcars[[i]])        # 3. body
}
Tags
Annotators
URL
-
-
www.fmassari.com www.fmassari.com
-
The rational expectation and the learning-from-price literatures argue that equilibrium prices are accurate because they reveal and aggregate the information of all market participants. The Market Selection Hypothesis, MSH, proposes instead that prices become accurate because they eventually reflect only the beliefs of the most accurate agent. The Wisdom of the Crowd argument, WOC, however suggests that market prices are accurate because individual, idiosyncratic errors are averaged out by the price formation mechanism
Three models (arguments for) drivers of market efficiency
-
-
developers.facebook.com developers.facebook.com
-
external fundraising page
What is meant by 'external'?
-
Fundraising Dashboard / Participant Center Visited When a person visits their fundraising dashboard or participant center
who is the 'person' visiting the dashboard here?
-
Fundraising Page Created / Registration Complete Upon completion of the last step of the registration flow that creates a fundraising page
by whom? which ones can be detected?
-