464 Matching Annotations
  1. Mar 2019
  2. Jan 2019
    1. the strongest first factor accounted for 86.3% of observed variable variance

      I suspect that this factor was so strong because it consisted of only four observed variables, and three of them were written measures of verbal content. All of the verbal variables correlate r = .72 to .89. Even the "non-verbal" variable (numerical ability) correlates r = .72 to .81 with the other three variables (Rehna & Hanif, 2017, p. 25). Given such strong correlations, a very strong first factor is almost inevitable.

    2. The weakest first factor accounted for 18.3% of variance

      This factor may be weak because the sample consists of Sudanese gifted children, which may have restricted the range of correlations in the dataset.

  3. Dec 2018
    1. The Doomsday argument

      The Doomsday argument (DA) is a probabilistic argument that claims to predict the number of future members of the human species given only an estimate of the total number of humans born so far. Simply put, it says that supposing that all humans are born in a random order, chances are that any one human is born roughly in the middle.

      From Wikipedia, Doomsday argument
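
      A rough sketch of the arithmetic behind the argument (my own illustration, assuming the commonly cited estimate of roughly 100 billion humans born so far): if your birth rank is uniformly random among all humans who will ever live, then with 95% confidence you are not among the first 5%, so the total number of humans is at most 20 times the count so far.

```python
# Doomsday argument sketch: if N humans have been born so far and our
# birth rank is uniform over all humans who will ever live, then with
# confidence c the total number of humans is at most N / (1 - c).
def doomsday_upper_bound(born_so_far, confidence=0.95):
    return born_so_far / (1 - confidence)

N = 100e9  # assumed estimate of humans born so far
print(round(doomsday_upper_bound(N)))  # 2000000000000, i.e. about 2 trillion
```

      The numbers here are illustrative; the argument's force (and its many criticisms) lie in the uniform-rank assumption, not the arithmetic.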

  4. Nov 2018
    1. Online Options Give Adults Access, but Outcomes Lag

      This article explores the drivers that increase and improve online learning success among adults. State-by-state data along with federal statistics support the conclusions presented.

      Roughly 13% of all undergraduates are full-time online students, and between 2012 and 2017 the number of online students grew by 11 percent, to about 2.25 million. The article presents a map with state-by-state statistics, and the information provided can help individual schools plan their growth.

      RATING: 4/5 (rating based upon a score system 1 to 5, 1= lowest 5=highest in terms of content, veracity, easiness of use etc.)

  5. Sep 2018
    1. predictive analysis

      Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.

  6. Apr 2018
  7. Mar 2018
  8. Feb 2018
  9. Jan 2018
  10. www.laurenbcollister.com
    1. This was the number of pages being searched by Google when we were putting the book together. Have a look at the figure published on the bottom of the Google home page to see what it is now.

      This cute service is no longer supported. However, at the end of 2016, the estimate was over 130 trillion pages.

  11. Dec 2017
  12. Nov 2017
    1. pairwise overlaps using Fisher’s test and mutual exclusion (Leiserson, M.D.M., Reyna, M.A., and Raphael, B.J. A weighted exact test for mutually exclusive mutations in cancer. Bioinformatics. 2016; 32: i736–i745)
  13. Oct 2017
    1. One of the main ways computers are changing the textual humanities is by mediating new connections to social science. The statistical models that help sociologists understand social stratification and social change haven’t in the past contributed much to the humanities, because it’s been difficult to connect quantitative models to the richer, looser sort of evidence provided by written documents.

      DH as moving English more toward the statistical...

  14. May 2017
    1. If we want to understand the effects of global warming or whether the economy is headed for a recession

      The class on The Rhetorical Situation brought up discussion on the evolving notion of "weather" as a changeable, even rhetorical, thing. Moving to integrate database and narrative as symbionts makes a connection between data and delivery/appeal.

  15. Apr 2017
  16. Mar 2017
    1. I never regret the eleven months which hardened my resolve, to go beyond 98 'Nos' to get to the precious, unexpected 'Yes's'. I was nobody, I was selling nothing, I could be nobody selling anything.

      Numbers

      Statistics

      Alienation

  17. Feb 2017
    1. That two dice marked in the common way will turn up seven, is thrice as probable as that they will turn up eleven, and six times as probable as that they will turn up twelve

      D&D has made me embarrassingly good at estimating probable outcomes of Platonic dice in my head.
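
      A quick enumeration (my own check, not from the text) confirms the quoted ratios: of the 36 equally likely outcomes of two dice, six sum to seven, two to eleven, and one to twelve.

```python
from collections import Counter

# Enumerate all 36 equally likely outcomes of two six-sided dice.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
print(counts[7], counts[11], counts[12])  # 6 2 1

# Seven is 3x as probable as eleven and 6x as probable as twelve.
assert counts[7] == 3 * counts[11] == 6 * counts[12]
```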

    2. In moral reasoning we ascend from possibility, by an insensible gradation, to probability, and thence, in the same manner, to the summit of moral certainty.

      I believe Campbell addresses some of the uncertainty of inductive reasoning here. The phrase "insensible gradation" seems meaningful: how we move from a possibility to moral certainty is fundamentally difficult in a way Hume cannot accept. But Campbell explains in this section many of the difficulties involved, and how induction remains usable for moral judgment.

      On the same note, I come back to Bayesian probabilities, wondering whether Campbell knew about them, and how they transfer statistical, mathematical knowledge toward determining whether a hypothesis is true. Once again, though, I'm hesitant that I'll exceed my grasp of stats if I talk too much about it.

    3. The course of nature will be the same tomorrow that it is today; or, the future will resemble the past"

      Apparently, this is a surprisingly successful rationale for meteorology. If you just assume "tomorrow's weather will resemble today's," you'll end up more right than not, and can actually beat some meteorologists. Then again, Jim Flowers and the KMTV Accu-Weather Forecast might have just been terrible.

  18. Oct 2016
  19. Sep 2016
    1. According to the language periodical Språktidningen, ‘hen’ was by 2014 used once in the Swedish media for every 300 uses of ‘hon’ or ‘han’, up from one in every 13,000 in 2011

      Increasing rate of usage of hen vs. hon or han: 1/13,000 in 2011; 1/300 in 2014.

  20. May 2016
    1. the algorithm was somewhat more accurate than a coin flip

      In machine learning it's also important to evaluate not just against random, but against how well other methods (e.g. parole boards) do. That kind of analysis would be nice to see.

  21. Mar 2016
  22. Feb 2016
    1. 3,068 adults in August 2014, found that 72 percent of Americans reported feeling stressed about money at least some of the time during the past month. Twenty-two percent said that they experienced extreme stress about money during the past month (an 8, 9 or 10 on a 10-point scale, where 1 is “little or no stress” and 10 is “a great deal of stress”). For the majority of Americans (64 percent), money is a somewhat or very significant source of stress, but especially for parents and younger adults (77 percent of parents, 75 percent of millennials [18 to 35 years old] and 76 percent of Gen Xers [36 to 49 years old]).

      Along the lines of the first paragraph, but with percentages. Almost three quarters of Americans (72 percent, in a roughly 3,000-person survey) feel stressed about money at least some of the time each month, and 22 percent report extreme stress; the rates are highest among parents, millennials, and Gen Xers. I'll incorporate this into my paper by using these statistics to show how money is a major source of stress for adults.

    1. He expects that the logging project near Quimby’s land will likely generate about $755,250 at the state’s average sale price, $50.35 per cord of wood. The land has about 1,500 harvestable acres that contain about 30 cords of wood per acre, or 45,000 cords, but only about a third of that will be cut because the land is environmentally sensitive, Denico said. The Bureau of Parks and Lands expects to generate about $6.6 million in revenue this year selling about 130,000 cords of wood from its lots, Denico said. Last year, the bureau generated about $7 million harvesting about 139,000 cords of wood. The Legislature allows the cutting of about 160,000 cords of wood on state land annually, although the LePage administration has sought to increase that amount.
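
      The revenue figure in the excerpt can be checked directly from the other numbers it quotes (a sketch; the variable names are mine):

```python
acres = 1500            # harvestable acres
cords_per_acre = 30
price_per_cord = 50.35  # state's average sale price, dollars

# Only about a third of the 45,000 cords will be cut,
# because the land is environmentally sensitive.
cords_cut = acres * cords_per_acre / 3  # 15,000 cords
revenue = cords_cut * price_per_cord
print(round(revenue, 2))  # 755250.0 -- matches the quoted $755,250
```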
    1. From 1926 until the early 1950s, US military aircraft relied on a "one size fits all" design based on average measurements of hundreds of male pilots.

      But a 1950 study by Lt. Gilbert Daniels showed that out of 4,063 airmen, not even one was average in all ten measurements. They started designing cockpits and controls to be adjustable. Accidents decreased, and pilot performance increased.

      Standardized education makes the same mistake.

    2. The science of the individual relies on dynamic systems theory rather than group statistics. Its research methodology is characterized by “analyze, then aggregate” (analyze each subject separately, then combine individual patterns into collective understanding) rather than “aggregate, then analyze” (derive group statistics based on aggregate data, then use these statistics to evaluate and understand individuals).

      A mathematical psychologist at Penn State University, Molenaar extended ergodic theory to prove that it was not mathematically permissible to use assessment instruments based on group averages to evaluate individuals.

      A Manifesto on Psychology as Idiographic Science, Peter Molenaar

    1. Great explanation of 15 common probability distributions: Bernoulli, Uniform, Binomial, Geometric, Negative Binomial, Exponential, Weibull, Hypergeometric, Poisson, Normal, Log Normal, Student's t, Chi-Squared, Gamma, Beta.

  23. Jan 2016
    1. 50 Years of Data Science, David Donoho, 2015, 41 pages

      This paper reviews some ingredients of the current "Data Science moment", including recent commentary about data science in the popular media, and about how/whether Data Science is really different from Statistics.

      The now-contemplated field of Data Science amounts to a superset of the fields of statistics and machine learning which adds some technology for 'scaling up' to 'big data'.

    1. P(B|E) = P(B) X P(E|B) / P(E), with P standing for probability, B for belief and E for evidence. P(B) is the probability that B is true, and P(E) is the probability that E is true. P(B|E) means the probability of B if E is true, and P(E|B) is the probability of E if B is true.
    2. The probability that a belief is true given new evidence equals the probability that the belief is true regardless of that evidence times the probability that the evidence is true given that the belief is true divided by the probability that the evidence is true regardless of whether the belief is true. Got that?
    3. Initial belief plus new evidence = new and improved belief.
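
      The update rule quoted above is easy to run with concrete numbers. A minimal sketch (the figures are invented for illustration: prior P(B) = 0.2, a test that yields the evidence 90% of the time when B is true and 10% of the time when it is false):

```python
def bayes_update(p_b, p_e_given_b, p_e_given_not_b):
    """Posterior P(B|E) via Bayes' rule, with P(E) from total probability."""
    p_e = p_b * p_e_given_b + (1 - p_b) * p_e_given_not_b
    return p_b * p_e_given_b / p_e

# Invented numbers: weak prior belief, fairly diagnostic evidence.
posterior = bayes_update(p_b=0.2, p_e_given_b=0.9, p_e_given_not_b=0.1)
print(round(posterior, 3))  # 0.692 -- the evidence roughly triples the belief
```

      Initial belief (0.2) plus new evidence yields the new and improved belief (about 0.69), just as the annotation says.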
    1. This criterion is not based on any specific shape of the dose-response relationship.

      I would expect that the relationship must be monotonic to support the causal hypothesis.

    1. paradox of unanimity - Unanimous or nearly unanimous agreement doesn't always indicate the correct answer. If agreement is unlikely, it indicates a problem with the system.

      Witnesses who only saw a suspect for a moment are not likely to be able to pick them out of a lineup accurately. If several witnesses all pick the same suspect, you should be suspicious that bias is at work. Perhaps these witnesses were cherry-picked, or they were somehow encouraged to choose a particular suspect.
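
      A toy calculation (my own, with an assumed per-witness accuracy) shows why unanimity is suspicious here: if each of ten independent witnesses picks the right suspect only 48% of the time, honest unanimous agreement should happen well under one time in a thousand.

```python
n_witnesses = 10
p_correct = 0.48  # assumed accuracy for a suspect seen only briefly

# Probability that all independent witnesses correctly agree.
p_all_agree_correct = p_correct ** n_witnesses
print(f"{p_all_agree_correct:.6f}")  # 0.000649
```

      So when unanimity appears anyway, a common cause (bias, cherry-picking, coaching) is the more plausible explanation than ten independent correct identifications.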

  24. Oct 2015
    1. In 1930 its population was 112,000. Today it is 36,000. The halcyon talk of “interracial living” is dead. The neighborhood is 92 percent black. Its homicide rate is 45 per 100,000—triple the rate of the city as a whole. The infant-mortality rate is 14 per 1,000—more than twice the national average.

      These are some intense statistics. It'd be interesting to compare them to other cities in the area.

  25. Aug 2015
  26. Jul 2015
  27. Feb 2015
    1. The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance)

      Why the population variance \(\sigma^2\) (divisor \(n\)) is not equal to the sample variance \(s^2\) (divisor \(n-1\))

    2. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.
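
      A minimal numerical illustration of Bessel's correction (my own example data):

```python
def variance(xs, ddof=0):
    """Variance with divisor n - ddof; ddof=1 applies Bessel's correction."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - ddof)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance(data))          # divisor n: 4.0
print(variance(data, ddof=1))  # divisor n - 1: ~4.571, slightly larger
```

      Dividing by \(n-1\) inflates the estimate just enough to offset the fact that deviations are measured from the sample mean rather than the true mean.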
    1. Suppose the value of the coefficient of variation for wages is 10% and the value for kilograms of meat is 25%. This means that the wages of workers are consistent. Their wages are close to the overall average of their wages. But the families consume meat in quite different quantities. Some families use very small quantities of meat and some others use large quantities of meat. We say that there is greater variation in their consumption of meat. The observations about the quantity of meat are more dispersed or more variant.

      Interpretation of Relative Deviation Coefficient
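
      The coefficient of variation behind that interpretation is just the standard deviation divided by the mean, which makes dispersion comparable across different units. A sketch with invented data:

```python
import statistics

def coefficient_of_variation(xs):
    """Relative dispersion: population standard deviation over the mean."""
    return statistics.pstdev(xs) / statistics.mean(xs)

wages = [95, 100, 105, 100, 100]  # invented: tightly clustered around 100
meat_kg = [1, 5, 2, 9, 3]         # invented: widely dispersed around 4

print(f"{coefficient_of_variation(wages):.2%}")    # 3.16% -- consistent
print(f"{coefficient_of_variation(meat_kg):.2%}")  # 70.71% -- highly variant
```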

  28. Nov 2013
    1. In its space-time representation (Ogata, 1998), the ETAS model is a temporal marked point process model, and a special case of a marked Hawkes process, with conditional intensity function \(\lambda(t, x, y \mid H_t) = \mu(x, y) + \sum_{t_i < t} \kappa(m_i)\, g(t - t_i)\, f(x - x_i, y - y_i \mid m_i)\)

      Testing out PDF annotation that also include LaTeX rendered formulas.

  29. Sep 2013
    1. Hence the man who makes a good guess at truth is likely to make a good guess at probabilities

      At first, I didn't like this quote, then I thought back to good ol' Oakley's stats class. We make scientific theories based on what idea is most likely to happen (we reject/do not reject the null hypothesis, but we do not say we accept the null hypothesis). Science: putting me in my place since I had a place to be put.