469 Matching Annotations
  1. May 2021
  2. Apr 2021
    1. The complement of an event \(A\) in a sample space \(S\), denoted \(A^c\), is the collection of all outcomes in \(S\) that are not elements of the set \(A\). It corresponds to negating any description in words of the event \(A\).


      The complement of an event \(A\) consists of all outcomes of the experiment that do not result in event \(A\).

      Complement formula:

      $$P(A^c)=1-P(A)$$
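
      A quick worked example (the numbers are my own, not from the source): if \(P(A)=0.3\), then

      $$P(A^c)=1-P(A)=1-0.3=0.7$$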

    1. Jeremy Faust MD MS (ER physician) on Twitter: “Let’s talk about the background risk of CVST (cerebral venous sinus thrombosis) versus in those who got J&J vaccine. We are going to focus in on women ages 20-50. We are going to compare the same time period and the same disease (CVST). DEEP DIVE🧵 KEY NUMBERS!” / Twitter. (n.d.). Retrieved April 15, 2021, from https://twitter.com/jeremyfaust/status/1382536833863651330

    1. Dr Lea Merone MBChB (hons) MPH&TM MSc FAFPHM Ⓥ. ‘I’m an Introvert and Being Thrust into the Centre of This Controversy Has Been Quite Confronting. I’ve Had a Little Processing Time Right Now and I Have a Few Things to Say. I Won’t Repeat @GidMK and His Wonderful Thread but I Will Say 1 This Slander of Us Both Has Been 1/n’. Tweet. @LeaMerone (blog), 29 March 2021. https://twitter.com/LeaMerone/status/1376365651892166658.

  3. Mar 2021
    1. Ashish K. Jha, MD, MPH. (2020, December 12). Michigan vs. Ohio State Football today postponed due to COVID But a comparison of MI vs OH on COVID is useful Why? While vaccines are coming, we have 6-8 hard weeks ahead And the big question is—Can we do anything to save lives? Lets look at MI, OH for insights Thread [Tweet]. @ashishkjha. https://twitter.com/ashishkjha/status/1337786831065264128

    1. Stefan Simanowitz. (2020, November 14). “Sweden hoped herd immunity would curb #COVID19. Don’t do what we did” write 25 leading Swedish scientists “Sweden’s approach to COVID has led to death, grief & suffering. The only example we’re setting is how not to deal with a deadly infectious disease” https://t.co/azOg6AxSYH https://t.co/u2IqU5iwEn [Tweet]. @StefSimanowitz. https://twitter.com/StefSimanowitz/status/1327670787617198087

  4. Feb 2021
    1. A fairly comprehensive list of problems and limitations that are often encountered with data as well as suggestions about who should be responsible for fixing them (from a journalistic perspective).

  5. Jan 2021
  6. Dec 2020
    1. In addition, for music and movies, we also normalize the resulting scores (akin to "grading on a curve" in college), which prevents scores from clumping together.
    1. Stuart Ritchie [@StuartJRitchie] (2020) This encapsulates the problem nicely. Sure, there’s a paper. But actually read it & what do you find? p-values mostly juuuust under .05 (a red flag) and a sample size that’s FAR less than “25m”. If you think this is in any way compelling evidence, you’ve totally been sold a pup. Twitter. Retrieved from: https://twitter.com/StuartJRitchie/status/1305963050302877697

    1. Inferential statistics are the statistical procedures that are used to reach conclusions about associations between variables. They differ from descriptive statistics in that they are explicitly designed to test hypotheses.

      Inferential statistics, unlike descriptive statistics, are used specifically to test hypotheses.

  7. Nov 2020
  8. Oct 2020
    1. CDC reverses course on testing for asymptomatic people who had Covid-19 contact

      Take Away

      Transmission of viable SARS-CoV-2 can occur even from an infected but asymptomatic individual. Some people never become symptomatic; that group usually becomes non-infectious 14 days after initial infection. In persons who do develop symptoms, SARS-CoV-2 RNA can be detected 1 to 2 days prior to symptom onset. (1)

      The Claim

      Asymptomatic people who had SARS-CoV-2 contact should be tested.

      The Evidence

      Yes, this is a reversal of August 2020 advice. What is the importance of asymptomatic testing?

      Studies show that asymptomatic individuals have infected others prior to displaying symptoms. (1)

      According to the CDC’s September 10th 2020 update approximately 40% of infected Americans are asymptomatic at time of testing. Those persons are still contagious and are estimated to have already transmitted the virus to some of their close contacts. (2)

      In a report appearing in the July 2020 Journal of Medical Virology, 15.6% of SARS-CoV-2-positive patients in China were asymptomatic at the time of testing. (3)

      Asymptomatic infection also varies by age group as older persons often have more comorbidities causing them to be susceptible to displaying symptoms earlier. A larger percentage of children remain asymptomatic but are still able to transmit the virus to their contacts. (1) (3)

      Transmission modes

      Droplet transmission is the primary proven mode of transmission of the SARS-CoV-2 virus, although it is believed that touching a contaminated surface and then touching mucous membranes (for example, the mouth and nose) can also transmit the virus. (1)

      It is still unclear how big or small a dose of exposure to viable viral particles is needed for transmission; more research is needed to elucidate this. (1)

      Citations

      (1) https://www.who.int/news-room/commentaries/detail/transmission-of-sars-cov-2-implications-for-infection-prevention-precautions

      (2) https://www.cdc.gov/coronavirus/2019-ncov/hcp/planning-scenarios.html

      (3) He J, Guo Y, Mao R, Zhang J. Proportion of asymptomatic coronavirus disease 2019: A systematic review and meta-analysis. J Med Virol. 2020;1–11. https://doi.org/10.1002/jmv.26326

  9. Sep 2020
    1. The lowest value for false positive rate was 0.8%. Allow me to explain the impact of a false positive rate of 0.8% on Pillar 2. We return to our 10,000 people who’ve volunteered to get tested, and the expected ten with virus (0.1% prevalence or 1:1000) have been identified by the PCR test. But now we’ve to calculate how many false positives are to accompany them. The shocking answer is 80. 80 is 0.8% of 10,000. That’s how many false positives you’d get every time you were to use a Pillar 2 test on a group of that size.

      Take Away: The exact frequency of false positive test results for COVID-19 is unknown. Real world data on COVID-19 testing suggests that rigorous testing regimes likely produce fewer than 1 in 10,000 (<0.01%) false positives, orders of magnitude below the frequency proposed here.

      The Claim: The reported numbers for new COVID-19 cases are overblown due to a false positive rate of 0.8%

      The Evidence: In this opinion article, the author correctly conveys the concern that for large testing strategies, case rates could become inflated if there is (a) a high false positive rate for the test and (b) there is a very low prevalence of the virus within the population. The false positive rate proposed by the author is 0.8%, based on the "lowest value" for similar tests given by a briefing to the UK's Scientific Advisory Group for Emergencies (1).

      In fact, the briefing states that, based on another analysis, among false positive rates for 43 external quality assessments, the interquartile range for false positive rate was 0.8-4.0%. The actual lowest value for false positive rate from this study was 0% (2).

      An upper limit for false positive rate can also be estimated from the number of tests conducted per confirmed COVID-19 case. In countries with low infection rates that have conducted widespread testing, such as Vietnam and New Zealand, at multiple periods throughout the pandemic they have achieved over 10,000 tests per positive case (3). Even if every single positive was false, the false positive rate would be below 0.01%.

      The prevalence of the virus within a population being tested can affect the positive predictive value of a test, which is the likelihood that a positive result is due to a true infection. The author here assumes the current prevalence of COVID-19 in the UK is 1 in 1,000, so the expected rate of true positive results is 0.1%. Data from the University of Oxford and the Global Change Data Lab show that the current (Sept. 22, 2020) share of daily COVID-19 tests that are positive in the UK is around 1.7% (4). Therefore, based on real-world data, the probability that a patient who tests positive actually has the disease (the positive predictive value) is about 99.4%.
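
      A small back-of-the-envelope sketch of this arithmetic in Python (my own framing of the numbers above; it treats every positive as either a true infection or a false positive, i.e. assumes near-perfect sensitivity for simplicity):

        # Real-world upper bound discussed above
        positivity_rate = 0.017        # ~1.7% of UK daily tests positive (Sept. 22, 2020)
        false_positive_rate = 0.0001   # <1 in 10,000 tests, the upper-bound estimate above

        # Share of positive results that reflect true infections (positive predictive value)
        ppv = (positivity_rate - false_positive_rate) / positivity_rate
        print(f"PPV = {ppv:.1%}")      # about 99.4%

        # Contrast: the opinion article's scenario of 10,000 tests, 0.1% prevalence, 0.8% FPR
        true_pos = 10_000 * 0.001      # 10 expected true positives
        false_pos = 10_000 * 0.008     # 80 expected false positives
        print(f"PPV = {true_pos / (true_pos + false_pos):.1%}")  # about 11% under those assumptions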

      Sources: (1) https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/895843/S0519_Impact_of_false_positives_and_negatives.pdf

      (2) https://www.medrxiv.org/content/10.1101/2020.04.26.20080911v3.full.pdf+html

      (3) https://ourworldindata.org/coronavirus-data-explorer?yScale=log&zoomToSelection=true&country=USA~DEU~IND~ITA~AUS~VNM~FIN~NZL~GBR&region=World&testsPerCaseMetric=true&interval=smoothed&aligned=true&smoothing=7&pickerMetric=location&pickerSort=asc

      (4) https://ourworldindata.org/coronavirus-data-explorer?zoomToSelection=true&country=USA~DEU~IND~ITA~AUS~VNM~FIN~NZL~GBR&region=World&positiveTestRate=true&interval=smoothed&aligned=true&smoothing=7&pickerMetric=location&pickerSort=asc

    1. H not

      I'm sorry but this is kind of lazy from the author. Either write H0, \(H_0\), or "H naught". "H not" sounds like you're saying H "not" (negation).

  10. Aug 2020
  11. Jul 2020
  12. Jun 2020
    1. higher when Ericksen conflict was present (Figure 2A)

      Yeah, in single neurons you can show the detection of general conflict this way, and it was not partitionable into different responses...

    2. G)

      Very clear effect! Suspicious? How exactly did they even select the pseudo-populations? It's not clear to me from the methods.

    3. pseudotrial vector x

      one trial for all different neurons in the current pseudopopulation matrix?

    4. The separating hyperplane for each choice i is the vector \(a\) that satisfies [equation not rendered in the extract], meaning that \(\beta_i\) is a vector orthogonal to the separating hyperplane in neuron-dimensional space, along which position is proportional to the log odds of that correct response: this is the coding dimension for that correct response

      Makes sense: if \(\beta\) is proportional to the log-odds of a correct response, \(a\) is the hyperplane that provides the best cutoff, which must be orthogonal; the dot product of two orthogonal vectors is 0.

    5. X is the trials by neurons pseudopopulation matrix of firing rates

      So these pseudopopulations were random agglomerates of single neurons that were recorded, so many fits for random groups, and the best were kept?

    6. Within each neuron, we calculated the expected firing rate for each task condition, marginalizing over distractors, and for each distractor, marginalizing over tasks.

      Distractor = specific stimulus / location (e.g. '1' or 'left')?

      Task = conflict condition (e.g. Simon or Ericksen)?

    7. condition-averaged within neurons (9 data points per neuron, reflecting all combinations of the 3 correct responses, 3 Ericksen distractors, and 3 Simon distractors)

      How do all combinations of 3 responses lead to only 9 data points per neuron? 3x2x2 = 12.

  13. May 2020
    1. For comparisons between 3 or more groups that typically employ analysis of variance (ANOVA) methods, one can use the Cumming estimation plot, which can be considered a variant of the Gardner-Altman plot.

      Cumming estimation plot

    2. Efron developed the bias-corrected and accelerated bootstrap (BCa bootstrap) to account for the skew whilst obtaining the central 95% of the distribution.

      Bias-corrected and accelerated bootstrap (BCa bootstrap) deals with skewed sample distributions. However, it must be noted that it "may not give very accurate coverage in a small-sample non-parametric situation" (simply said, take caution with small datasets)

    3. We can calculate the 95% CI of the mean difference by performing bootstrap resampling.

      Bootstrap - simple but powerful technique that creates multiple resamples (with replacement) from a single set of observations, and computes the effect size of interest on each of these resamples. It can be used to determine the 95% CI (Confidence Interval).

      We can use bootstrap resampling to obtain a measure of precision and confidence about our estimate. It gives us 2 important benefits:

      1. Non-parametric statistical analysis - no need to assume a normal distribution of our observations; thanks to the Central Limit Theorem, the resampling distribution of the effect size will approach normality
      2. Easy construction of the 95% CI from the resampling distribution. For 1000 bootstrap resamples of the mean difference, the 25th and 975th values (of the sorted resamples) can be used as the boundaries of the 95% CI.

      Bootstrap resampling can be used for an example like the one sketched below; computers can easily perform 5000 resamples:
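
      A minimal sketch of such a bootstrap (toy data and variable names are my own, not the article's code):

        import numpy as np

        rng = np.random.default_rng(0)
        control = rng.normal(10, 2, size=30)     # toy observations, group 1
        treatment = rng.normal(11, 2, size=30)   # toy observations, group 2

        # 5000 resamples (with replacement) of the mean difference
        boot_diffs = []
        for _ in range(5000):
            c = rng.choice(control, size=control.size, replace=True)
            t = rng.choice(treatment, size=treatment.size, replace=True)
            boot_diffs.append(t.mean() - c.mean())

        # central 95% of the resampling distribution = 95% CI
        lo, hi = np.percentile(boot_diffs, [2.5, 97.5])
        print(f"mean difference: {treatment.mean() - control.mean():.2f}, 95% CI: [{lo:.2f}, {hi:.2f}]")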

  14. Apr 2020
    1. the limitations of the PPS

      Limitations of the PPS:

      1. Slower than correlation
      2. Score cannot be interpreted as easily as the correlation (it doesn't tell you anything about the type of relationship). PPS is better for finding patterns and correlation is better for communicating found linear relationships
      3. You cannot compare the scores for different target variables in a strict math way because they're calculated using different evaluation metrics
      4. There are some limitations of the components used under the hood
      5. You have to perform forward and backward selection in addition to feature selection
    2. How to use the PPS in your own (Python) project

      Using PPS with Python

      • Download ppscore: pip install ppscore
      • Calculate the PPS for a given pandas dataframe:
        import ppscore as pps
        pps.score(df, "feature_column", "target_column")
        
      • Calculate the whole PPS matrix:
        pps.matrix(df)
        
    3. The PPS clearly has some advantages over correlation for finding predictive patterns in the data. However, once the patterns are found, the correlation is still a great way of communicating found linear relationships.

      PPS:

      • good for finding predictive patterns
      • can be used for feature selection
      • can be used to detect information leakage between variables
      • interpret the PPS matrix as a directed graph to find entity structures

      Correlation:

      • good for communicating found linear relationships
    4. Let’s compare the correlation matrix to the PPS matrix on the Titanic dataset.

      Comparing correlation matrix and the PPS matrix of the Titanic dataset:

      findings about the correlation matrix:

      1. Correlation matrix is smaller because it doesn't work for categorical data
      2. Correlation matrix shows a negative correlation between TicketPrice and Class. PPS shows TicketPrice to be a strong predictor of Class (0.9 PPS), but not the other way around (a ticket of 5000-10000$ most likely means the highest class, but the class alone cannot determine the ticket price)

      findings about the PPS matrix:

      1. First row of the matrix tells you that the best univariate predictor of the column Survived is the column Sex (Sex was dropped for correlation)
      2. TicketID uncovers a hidden pattern, as well as its connection with the TicketPrice

    5. Let’s use a typical quadratic relationship: the feature x is a uniform variable ranging from -2 to 2 and the target y is the square of x plus some error.

      In this scenario:

      • we can predict y using x
      • we cannot predict x using y, as x might be negative or positive (for y=4, x=2 or -2)
      • the correlation is 0, both from x to y and from y to x, because correlation is symmetric (relationships are more often asymmetric!). However, the PPS from x to y is 0.88 (not 1 because of the added error)
      • the PPS from y to x is 0 because there is no relationship by which y can predict x knowing only its own value (a quick numerical check is sketched below)
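
      A quick numerical check of this scenario (toy data of my own, not the article's code):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-2, 2, 1000)
        y = x ** 2 + rng.normal(0, 0.2, 1000)   # quadratic relationship plus some error

        # Pearson correlation is close to 0, even though x predicts y well
        print(np.corrcoef(x, y)[0, 1])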

    6. how do you normalize a score? You define a lower and an upper limit and put the score into perspective.

      Normalising a score:

      • you define a lower and an upper limit to put the score into perspective
      • upper limit can be F1 = 1, and a perfect MAE = 0
      • lower limit depends on the evaluation metric and your data set. It's the value that a naive predictor achieves
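
      One way to write such a normalization (my own formalization of the limits described here, using the metrics introduced in the next notes; the exact formula used by ppscore may differ):

      $$\text{PPS}_\text{regression} = 1-\frac{\text{MAE}_\text{model}}{\text{MAE}_\text{naive}}, \qquad \text{PPS}_\text{classification} = \frac{F1_\text{model}-F1_\text{naive}}{1-F1_\text{naive}}$$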
    7. For a classification problem, always predicting the most common class is pretty naive. For a regression problem, always predicting the median value is pretty naive.

      What is a naive model:

      • predicting common class for a classification problem
      • predicting median value for a regression problem
    8. Let’s say we have two columns and want to calculate the predictive power score of A predicting B. In this case, we treat B as our target variable and A as our (only) feature. We can now calculate a cross-validated Decision Tree and calculate a suitable evaluation metric.

      If the target (B) variable is:

      • numeric - we can use a Decision Tree Regressor and calculate the Mean Absolute Error (MAE)
      • categoric - we can use a Decision Tree Classifier and calculate the weighted F1 (or ROC)
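
      A rough sketch of this idea in Python (toy data; my own reimplementation of the described steps, not the ppscore library's actual code):

        import numpy as np
        import pandas as pd
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        df = pd.DataFrame({"A": rng.uniform(-2, 2, 500)})
        df["B"] = df["A"] ** 2 + rng.normal(0, 0.2, 500)   # B depends on A

        # cross-validated MAE of a Decision Tree that predicts B from A alone
        mae_model = -cross_val_score(
            DecisionTreeRegressor(random_state=0),
            df[["A"]], df["B"],
            cv=4, scoring="neg_mean_absolute_error",
        ).mean()

        # naive baseline: always predict the median of B
        mae_naive = (df["B"] - df["B"].median()).abs().mean()

        # normalized score: 0 = no better than naive, 1 = perfect predictions
        pps_A_to_B = max(0.0, 1 - mae_model / mae_naive)
        print(round(pps_A_to_B, 2))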
    9. More often, relationships are asymmetric

      a column with 3 unique values will never be able to perfectly predict another column with 100 unique values. But the opposite might be true

    10. there are many non-linear relationships that the score simply won’t detect. For example, a sinus wave, a quadratic curve or a mysterious step function. The score will just be 0, saying: “Nothing interesting here”. Also, correlation is only defined for numeric columns.

      Correlation:

      • doesn't work with non-linear data
      • doesn't work for categorical values


    1. Suppose you have only two rolls of dice. then your best strategy would be to take the first roll if its outcome is more than its expected value (ie 3.5) and to roll again if it is less.

      Expected payoff of a dice game:

      Description: You have the option to throw a die up to three times. You will earn the face value of the die. You have the option to stop after each throw and walk away with the money earned. The earnings are not additive. What is the expected payoff of this game?

      Rolling twice: $$\frac{1}{6}(6+5+4) + \frac{1}{2}3.5 = 4.25.$$

      Rolling three times: $$\frac{1}{6}(6+5) + \frac{2}{3}4.25 = 4 + \frac{2}{3}$$
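
      A small sketch of the optimal-stopping recursion behind these numbers (my own code, not from the source):

        def expected_payoff(rolls_left):
            # expected value of the optimal strategy with a fair six-sided die
            if rolls_left == 1:
                return 3.5  # must accept whatever the final roll shows
            keep_or_continue = expected_payoff(rolls_left - 1)
            # keep the current face only if it beats the value of rolling again
            return sum(max(face, keep_or_continue) for face in range(1, 7)) / 6

        print(expected_payoff(2))  # 4.25
        print(expected_payoff(3))  # 4.666..., i.e. 4 + 2/3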

    1. Therefore, \(E_n=2^{n+1}-2=2(2^n-1)\)

      Simplified formula for the expected number of tosses (e) to get n consecutive heads (n≥1):

      $$e_n=2(2^n-1)$$

      For example, the expected number of tosses to get 5 consecutive heads is 62:

      $$e_5=2(2^5-1)=62$$


      We can also start with the longer analysis of the six possible scenarios:

      1. If we get a tail immediately (probability 1/2) then the expected number is e+1.
      2. If we get a head then a tail (probability 1/4), then the expected number is e+2.
      3. If we get two heads then a tail (probability 1/8), then the expected number is e+3.
      4. If we get three heads then a tail (probability 1/16), then the expected number is e+4.
      5. If we get four heads then a tail (probability 1/32), then the expected number is e+5.
      6. Finally, if our first 5 tosses are heads, then the expected number is 5.

      Thus:

      $$e=\frac{1}{2}(e+1)+\frac{1}{4}(e+2)+\frac{1}{8}(e+3)+\frac{1}{16}(e+4)+\frac{1}{32}(e+5)+\frac{1}{32}(5)$$

      which solves to \(e=62\).

      We can also generalise the formula to:

      $$e_n=\frac{1}{2}(e_n+1)+\frac{1}{4}(e_n+2)+\frac{1}{8}(e_n+3)+\frac{1}{16}(e_n+4)+\cdots+\frac{1}{2^n}(e_n+n)+\frac{1}{2^n}(n)$$
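
      A quick check of the closed form (a small sketch of my own, not from the original source), using the recurrence \(e_1=2\), \(e_k=2e_{k-1}+2\):

        def expected_tosses(n):
            # after e_{k-1} tosses (on average) we have k-1 heads in a row;
            # one more toss either finishes the run (heads) or forces a restart (tails)
            e = 0.0
            for _ in range(n):
                e = 2 * e + 2
            return e

        print(expected_tosses(5))   # 62.0
        print(2 * (2 ** 5 - 1))     # 62, matching the closed form above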

    1. Repeated measures involves measuring the same cases multiple times. So, if you measured the chips, then did something to them, then measured them again, etc it would be repeated measures. Replication involves running the same study on different subjects but identical conditions. So, if you did the study on n chips, then did it again on another n chips that would be replication.

      Difference between repeated measures and replication

  15. Mar 2020
    1. This volume of paper should be the same as the coaxial plug of paper on the roll.

      Calculating the volume of the paper roll:

      $$Lwt = \pi w(R^2 - r^2)$$

      where \(L\) = length of the paper, \(w\) = width of the paper, \(t\) = thickness, \(R\) = outer radius, \(r\) = inner radius. And that simplifies into a formula for \(R\):

      $$R = \sqrt{\frac{Lt}{\pi}+r^2}$$
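
      A quick numerical check of the formula (my own sketch, using the typical roll figures quoted in the notes below: outer diameter 5.25", tube diameter 1.6", length roughly 2,000"; the thickness is derived from those numbers rather than measured):

        import math

        R, r, L = 5.25 / 2, 1.6 / 2, 2000.0    # inches
        t = math.pi * (R**2 - r**2) / L         # implied sheet thickness, ~0.0098"

        R_check = math.sqrt(L * t / math.pi + r**2)
        print(round(t, 4), round(R_check, 3))   # R_check reproduces the 2.625" outer radius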

    2. This shows the nonlinear relationship and how the consumption accelerates. The first 10% used makes just a 5% change in the diameter of the roll. The last 10% makes an 18.5% change.

      Consumption of a toilet paper roll has a nonlinear relationship between the:

      • y-axis (outer Radius of the roll (measured as a percentage of a full roll))
      • x-axis (% of the roll consumed)
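
      A small sketch of that shape (my own code, based on the volume formula above and the typical dimensions quoted below; the exact percentages depend on how the change is measured, so this only illustrates the accelerating decrease):

        import math

        R, r = 5.25 / 2, 1.6 / 2     # full-roll and tube radii, inches
        area = R**2 - r**2           # paper cross-section, proportional to its volume

        for used in [0.0, 0.1, 0.5, 0.9, 1.0]:
            radius = math.sqrt(r**2 + (1 - used) * area)
            print(f"{used:.0%} used -> outer radius at {radius / R:.1%} of a full roll")
        # the radius barely changes over the first 10% but drops fast over the last 10%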
    1. Toilet paper is typically supplied in rolls of perforated material wrapped around a central cardboard tube. There’s a little variance between manufacturers, but a typical roll is approximately 4.5” wide with a 5.25” external diameter, and a central tube of diameter 1.6”. Toilet paper is big business (see what I did there?) Worldwide, approximately 83 million rolls are produced per day; that’s a staggering 30 billion rolls per year. In the USA, about 7 billion rolls a year are sold, so the average American citizen consumes two dozen rolls a year (two per month). Americans use 24 rolls per capita a year of toilet paper. Again, it depends on the thickness and luxuriousness of the product, but the perforations typically divide the roll into approximately 1,000 sheets (for single-ply), or around 500 sheets (for double-ply). Each sheet is typically 4” long so the length of a (double-ply) toilet roll is approximately 2,000” or 167 feet (or less, if your cat gets to it).

      Statistics on the type and use of toilet paper in the USA.

      1" (inch) = 2.54 cm

    1. In the interval scale, there is no true zero point or fixed beginning. Interval variables do not have a true zero even if one of the values carries the name “zero.” For example, with temperature in degrees Fahrenheit there is no point that represents a complete absence of temperature: zero degrees F does not mean the complete absence of temperature. Since the interval scale has no true zero point, you cannot calculate ratios. For example, it makes no sense to say that the ratio of 90 to 30 degrees F is the same as the ratio of 60 to 20 degrees; a temperature of 20 degrees is not twice as warm as one of 10 degrees.

      Interval data:

      • show not only order and direction, but also the exact differences between the values
      • the distances between each value on the interval scale are meaningful and equal
      • no true zero point
      • no fixed beginning
      • no possibility to calculate ratios (you can only add and subtract)
      • e.g.: temperature in Fahrenheit or Celsius (but not Kelvin), or IQ test scores
    2. As the interval scales, Ratio scales show us the order and the exact value between the units. However, in contrast with interval scales, Ratio ones have an absolute zero that allows us to perform a huge range of descriptive statistics and inferential statistics. The ratio scales possess a clear definition of zero. Any types of values that can be measured from absolute zero can be measured with a ratio scale. The most popular examples of ratio variables are height and weight. In addition, one individual can be twice as tall as another individual.

      Ratio data is like interval data, but with:

      • absolute zero
      • possibility to calculate ratio (e.g. someone can be twice as tall)
      • possibility to not only add and subtract, but multiply and divide values
      • e.g.: weight, height, the Kelvin scale (50 K is twice as hot as 25 K)
    1. when AUC is 0.5, it means model has no class separation capacity whatsoever.

      If AUC = 0.5

    2. ROC is a probability curve and AUC represents the degree or measure of separability. It tells how capable the model is of distinguishing between classes.

      ROC & AUC

    3. In a multi-class model, we can plot N AUC ROC curves for N classes using the One vs. All methodology. For example, if you have three classes named X, Y and Z, you will have one ROC for X classified against Y and Z, another ROC for Y classified against X and Z, and a third one for Z classified against X and Y.

      Using AUC ROC curve for multi-class model
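
      A minimal sketch of the One vs. All approach with scikit-learn (toy data and model of my own, not from the article):

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import label_binarize

        X, y = make_classification(n_samples=600, n_classes=3, n_informative=6, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)

        y_bin = label_binarize(y_te, classes=[0, 1, 2])      # one indicator column per class
        for k in range(3):
            auc_k = roc_auc_score(y_bin[:, k], probs[:, k])  # class k vs. the rest
            print(f"class {k} vs rest: AUC = {auc_k:.2f}")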

    4. When AUC is approximately 0, the model is actually reciprocating the classes: it is predicting the negative class as positive and vice versa

      If AUC = 0

    5. An AUC near 1 means the model has a good measure of separability.

      If AUC = 1

    1. Softmax turns arbitrary real values into probabilities

      Softmax function -

      • outputs of the function are in range [0,1] and add up to 1. Hence, they form a probability distribution
      • the calculation involves \(e\) (the mathematical constant) and operates on \(n\) numbers: $$s(x_i) = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}}$$
      • the bigger the value, the higher its probability
      • lets us answer classification questions with probabilities, which are more useful than simpler answers (e.g. binary yes/no)
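
      A minimal sketch in Python (my own implementation of the formula above):

        import numpy as np

        def softmax(x):
            # subtract the max for numerical stability; it cancels out in the ratio
            e = np.exp(np.asarray(x) - np.max(x))
            return e / e.sum()

        probs = softmax([-1.0, 0.0, 3.0, 5.0])
        print(probs, probs.sum())   # values in [0, 1] that sum to 1; larger inputs get larger probabilities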
    1. 1. Logistic regression IS a binomial regression (with logit link), a special case of the Generalized Linear Model. It doesn't classify anything *unless a threshold for the probability is set*. Classification is just its application. 2. Stepwise regression is by no means a regression. It's a (flawed) method of variable selection. 3. OLS is a method of estimation (among others: GLS, TLS, (RE)ML, PQL, etc.), NOT a regression. 4. Ridge, LASSO - it's a method of regularization, NOT a regression. 5. There are tens of models for the regression analysis. You mention mainly linear and logistic - it's just the GLM! Learn the others too (link in a comment). STOP with the "17 types of regression every DS should know". BTW, there're 270+ statistical tests. Not just t, chi2 & Wilcoxon

      5 clarifications of common misconceptions shared in data science cheatsheets on LinkedIn

    1. An exploratory plot is all about you getting to know the data. An explanatory graphic, on the other hand, is about telling a story using that data to a specific audience.

      Exploratory vs Explanatory plot

  16. Feb 2020
    1. (Recurring research shows that 85-90 percent of teenage boys commit crimes, everything from shoplifting up to robbery and murder, and this regardless of whether they have an immigrant background or not. Around 97-98 percent of these boys then become well-behaved, working adult citizens, who complain about youth crime.)
  17. Jan 2020
  18. Dec 2019
    1. The average IQs of adopted children in lower and higher socioeconomic status (SES) families were 85 (SD = 17) and 98 (SD = 14.6), respectively, at adolescence (mean age = 13.5 years)

      I'm looking for the smallest standard deviation in an adopted sample to compare the average difference to that of identical twins. This study suggests that the SD in adoption is identical to the SD in the general population. This supports the idea that lower SD in adopted identical twins is entirely down to genes (or, in principle, prenatal environment).

      Note that this comment is referring to this Reddit inquiry.

    1. If you are running a regression model on data that do not have explicit space or time dimensions, then the standard test for autocorrelation would be the Durbin-Watson test.

      Durbin-Watson test?
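
      A minimal sketch with statsmodels (toy data of my own, just to show the call):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(0)
        x = rng.normal(size=100)
        y = 2 * x + rng.normal(size=100)

        model = sm.OLS(y, sm.add_constant(x)).fit()
        print(durbin_watson(model.resid))   # values near 2 suggest little autocorrelation in the residuals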

  19. Nov 2019
    1. For most of the twentieth century, Census Bureau administrators resisted private-sector intrusion into data capture and processing operations, but beginning in the mid-1990s, the Census Bureau increasingly turned to outside vendors from the private sector for data capture and processing. This privatization led to rapidly escalating costs, reduced productivity, near catastrophic failures of the 2000 and 2010 censuses, and high risks for the 2020 census.

      Parallels to ABS in Australia

    1. This booklet tells you how to use the R statistical software to carry out some simple analyses that are common in analysing time series data.

      what is time series?

    1. Top 20 topic categories.

      Immigration, Guns, Education: that's exactly what I chose for my three comment letters. I think this result is also influenced by the media; every day these three areas are the main subjects covered in the media. Ten years ago the results would have shown different areas.

  20. Aug 2019
    1. On public transport ridership in the EU

      A screenshot is needed

    1. i.e. decision tree split: entropy maximum / information maximum at a 0.5:0.5 split

  21. Jul 2019
    1. In statistical testing, we structure experiments in terms of null & alternative hypotheses. Our test will have the following hypothesis schema: $$H_0: \mu_{\text{treatment}} \le \mu_{\text{control}}$$ $$H_A: \mu_{\text{treatment}} > \mu_{\text{control}}$$ Our null hypothesis claims that the new shampoo does not increase wool quality. The alternative hypothesis claims the opposite; new shampoo yields superior wool quality.

      hypothesis schema; statistics
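
      A minimal sketch of this schema as a one-sided two-sample t-test (toy numbers of my own, not the article's data; the alternative= argument assumes SciPy 1.6+):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        control = rng.normal(10.0, 1.0, 30)     # wool quality, control shampoo
        treatment = rng.normal(10.6, 1.0, 30)   # wool quality, new shampoo

        # H0: mu_treatment <= mu_control   vs   HA: mu_treatment > mu_control
        t_stat, p_value = stats.ttest_ind(treatment, control, alternative="greater")
        print(f"t = {t_stat:.2f}, one-sided p = {p_value:.3f}")   # small p -> reject H0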

  22. Jun 2019
    1. Ministries will be involved in close monitoring and supervision of the field work to ensure data quality and good coverage. This is the first time that the rigours of monitoring and supervision of field work exercised in NSS will be leveraged for the Economic Census so that results of better quality would be available for creation of a National Statistical Business Register. This process has been catalysed by the establishment of a unified National Statistical Office (NSO).  
  23. May 2019
    1. It’s as if they’d been “describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot,” Alexander wrote.
    1. Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modelled by a probability distribution that has a fixed set of parameters.[1] Conversely a non-parametric model differs precisely in that the parameter set (or feature set in machine learning) is not fixed and can increase, or even decrease, if new relevant information is collected.[2] Most well-known statistical methods are parametric.[3] Regarding nonparametric (and semiparametric) models, Sir David Cox has said, "These typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".[4]

      Non-parametric vs parametric stats

    1. Statistical hypotheses concern the behavior of observable random variables.... For example, the hypothesis (a) that a normal distribution has a specified mean and variance is statistical; so is the hypothesis (b) that it has a given mean but unspecified variance; so is the hypothesis (c) that a distribution is of normal form with both mean and variance unspecified; finally, so is the hypothesis (d) that two unspecified continuous distributions are identical. It will have been noticed that in the examples (a) and (b) the distribution underlying the observations was taken to be of a certain form (the normal) and the hypothesis was concerned entirely with the value of one or both of its parameters. Such a hypothesis, for obvious reasons, is called parametric. Hypothesis (c) was of a different nature, as no parameter values are specified in the statement of the hypothesis; we might reasonably call such a hypothesis non-parametric. Hypothesis (d) is also non-parametric but, in addition, it does not even specify the underlying form of the distribution and may now be reasonably termed distribution-free. Notwithstanding these distinctions, the statistical literature now commonly applies the label "non-parametric" to test procedures that we have just termed "distribution-free", thereby losing a useful classification.

      Non-parametric vs parametric statistics

    2. Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data. As non-parametric methods make fewer assumptions, their applicability is much wider than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust. Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding. The wider applicability and increased robustness of non-parametric tests comes at a cost: in cases where a parametric test would be appropriate, non-parametric tests have less power. In other words, a larger sample size can be required to draw conclusions with the same degree of confidence.

      Non-parametric vs parametric statistics

    1. The concept of data type is similar to the concept of level of measurement, but more specific: For example, count data require a different distribution (e.g. a Poisson distribution or binomial distribution) than non-negative real-valued data require, but both fall under the same level of measurement (a ratio scale).
    1. Even if Muslims were hypothetically behind every single one of the 140,000 terror attacks committed worldwide since 1970, those terrorists would represent barely 0.009 percent of global Islam

      This is a veryyy relevant statistic, thank god.

    2. That is, deaths from terrorism account for 0.025 of the total number of murders, or 2.5%

      Irrelevant statistics IMO

    3. American Muslims have killed less than 0.0002 percent of those murdered in the USA during this period

      selection of detail

    4. How many people did toddlers kill in 2013? Five, all by accidentally shooting a gun

      selection of detail of outlandish statistic to emphasise main point

    5. you actually have a better chance of being killed by a refrigerator falling on you

      selection of detail of outlandish statistic to emphasise main point

    6. Since 9/11, Muslim-American terrorism has claimed 37 lives in the United States, out of more than 190,000 murders during this period

      stats

    7. Approximately 60 were carried out by Muslims. In other words, approximately 2.5% of all terrorist attacks on US soil between 1970 and 2012 were carried out by Muslims.

      stats

    8. 94 percent of the terror attacks were committed by non-Muslims

      stats

    9. Muslim terrorists were responsible for a meagre 0.3 percent of EU terrorism during those years.

      stats

    10. in 2013, there were 152 terrorist attacks in Europe. Only two of them were “religiously motivated”, while 84 were predicated on ethno-nationalist or separatist beliefs

      stats

    11. in the 4 years between 2011 and 2014 there were 746 terrorist attacks in Europe. Of these, only eight were religiously-inspired, which is 1% of the total

      stats

    12. official data from Europol

      Stats

    1. info-request

      What is the current price of cyber insurance? Has it gone up in price?

    2. info-request

      Looking for statistics on the number of cybercrime prosecutions over time.