335 Matching Annotations
  1. Nov 2020
    1. Let’s fit regression line to our model:

      plot() combined with abline() (or lines() on fitted values) draws regression lines in base graphics.

      • Can a regression line be added to a ggplot?
      • Can R² be printed on the plot? (see the sketch below)
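
      A minimal sketch answering both questions, assuming a data frame df with numeric columns x and y (hypothetical names): geom_smooth(method = "lm") adds the fitted line, and annotate() prints an R² computed with lm().

      library(ggplot2)

      # df is a hypothetical data frame with columns x and y
      fit <- lm(y ~ x, data = df)
      r2  <- summary(fit)$r.squared

      ggplot(df, aes(x, y)) +
        geom_point() +
        geom_smooth(method = "lm", se = FALSE) +           # regression line
        annotate("text", x = min(df$x), y = max(df$y),     # print R² in a corner
                 label = paste0("R^2 = ", round(r2, 3)), hjust = 0)
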
  2. Oct 2020
    1. You need to get out of the habit of thinking using quotes is ugly. Not using them is ugly! Why? Because you've created a function that can only be used interactively - it's very difficult to program with it. – hadley

      Does Hadley still stand by this statement now that tidy evaluation exists, as discussed in the article <Do you need tidyeval>?
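
      For context, a minimal sketch of the tidy-evaluation style that question is about; the function name my_summary and the chosen columns are illustrative, not from the source.

      library(dplyr)

      # unquoted column names can be passed straight through with {{ }} (rlang >= 0.4)
      my_summary <- function(data, group_var, value_var) {
        data %>%
          group_by({{ group_var }}) %>%
          summarise(mean_value = mean({{ value_var }}, na.rm = TRUE))
      }

      my_summary(mtcars, cyl, mpg)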

    1. In practice, functional programming is all about hiding for loops, which are abstracted away by the mapper functions that automate the iteration.
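
      A small sketch of that idea, assuming purrr is installed: map_dbl() hides the explicit for loop (the column means of mtcars are just an illustration).

      library(purrr)

      # explicit for loop
      means <- numeric(ncol(mtcars))
      for (i in seq_along(mtcars)) means[i] <- mean(mtcars[[i]])

      # the same iteration hidden behind a mapper function
      means <- map_dbl(mtcars, mean)
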
    1. All figures were created using R Statistical Computing Software version 3.6.3 (R Core Team, 2020), relying primarily on the dplyr package (Wickham et al., 2015) for data manipulation and the ggplot2 package (Wickham 2016) for plotting. The code used to create each figure can be found at https://github.com/mkc9953/SARS-CoV-2-WW-EPI/tree/master.
  3. Sep 2020
    1. The neighbour‐joining tree was prepared with the R package {Ape} (Paradis, Claude, & Strimmer, 2004) and visualized using the R package {ggtree} (Yu, Smith, Zhu, Guan, & Lam, 2017).
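
      Roughly how such a tree is built and drawn, as a hedged sketch: nj() from {ape} takes a distance matrix, and ggtree() from {ggtree} plots the resulting phylo object. The distance matrix below is a toy stand-in for real alignment-derived distances.

      library(ape)
      library(ggtree)

      # toy distance matrix; in practice this comes from a sequence alignment
      d <- dist(matrix(rnorm(20), nrow = 5,
                       dimnames = list(paste0("taxon", 1:5), NULL)))
      tree <- nj(d)                  # neighbour-joining tree ({ape})
      ggtree(tree) + geom_tiplab()   # visualisation ({ggtree})
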
  4. Aug 2020
  5. Jul 2020
  6. Jun 2020
    1. How to prevent the environment from being “invalidated”? Docker containers (Rocker).

      Rocker

    2. SAS, R, Stata, SPSS may return different results even for quantiles, or due to floating number representation! The results should be maximally close to each other, but what about resampling methods (SAS and R gives different random numbers for the same seed)?

      Different results between SAS, R, Stata, SPSS
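
      R alone illustrates the quantile problem: quantile() implements nine definitions and its default (type = 7) is only one of them, so cross-package comparisons should pin down the definition explicitly. Which type matches your SAS/Stata/SPSS settings needs to be checked against their documentation.

      x <- c(1, 2, 3, 4, 10)

      quantile(x, probs = 0.25)            # R default, type = 7
      quantile(x, probs = 0.25, type = 2)  # a different, equally valid definition
      quantile(x, probs = 0.25, type = 6)  # yet another; results can differ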

    3. 99.9% open-source. 0.1% is licensed (free for non-commercial use)

      License of libraries in R

    4. Status of R on the Clinical Research market
      • In general bioscience and academia, R (the successor of S) has over the years built its position as one of the industry standards
      • In clinical research, however, SAS reigns par excellence
      • Pharmaceutical companies, CROs and even the FDA do use R “internally”, but they resist (or hesitate) to use it in submissions (to the FDA)
      • Clinical Programmer or Biostatistician ≝ SAS Programmer. Period
    5. Differences in

      Differences between R and SAS:

      • origin of dates (see the sketch below)
      • default contrasts
      • sums of squares used
      • calculation of quantiles
      • generation of random numbers
      • implementation of advanced models
      • representation of floating-point numbers
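
      The date-origin difference, for example, bites when importing numeric SAS dates into R: R counts days from 1970-01-01 while SAS counts from 1960-01-01, so the origin has to be set explicitly (the SAS value below is illustrative).

      as.Date(0, origin = "1970-01-01")          # R's day zero: "1970-01-01"

      sas_days <- 21915                          # a numeric SAS date value
      as.Date(sas_days, origin = "1960-01-01")   # convert using SAS's origin
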
    6. To speed up the process without sacrificing accuracy, the team also uses Revolution R analytic products

      Revolution R

    1. In most programming languages, you can only access the values of a function’s arguments. In R, you can also access the code used to compute them. This makes it possible to evaluate code in non-standard ways: to use what is known as non-standard evaluation
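
      A tiny illustration of what “accessing the code” means: substitute() captures the unevaluated expression passed to a function, something standard evaluation would never see (the function show_code is made up for the example).

      show_code <- function(x) {
        expr <- substitute(x)       # the code used to compute the argument
        cat("you wrote:", deparse(expr), "\n")
        eval(expr, parent.frame())  # ...and it can still be evaluated normally
      }

      show_code(1 + 2 * 3)
      #> you wrote: 1 + 2 * 3
      #> [1] 7
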
  7. May 2020
    1. You can create estimation plots here at estimationstats.com, or with the DABEST packages which are available in R, Python, and Matlab.

      You can create estimation plots with:

      • estimationstats.com
      • the DABEST packages (available in R, Python, and Matlab)

  8. Apr 2020
    1. Pharma, which is one of the biggest, richest, most rewarding and promising industries in the world. Especially now, when the pharmaceutical industry, including the FDA, allows R to be used the domain occupied in 110% by SAS.

      The pharma industry is one of the most rewarding industries, especially now that the FDA allows R in a domain long dominated by SAS

    2. CR is one of the most controlled industries in this world. It's insanely conservative in both used statistical methods and programming. Once a program is written and validated, it may be used for decades. There are SAS macros written in 1980 working still by today without any change. That's because of brilliant backward compatibility of the SAS macro-language. New features DO NOT cause the old mechanisms to be REMOVED. It's here FOREVER+1 day.

      Clinical Research is highly conservative, so validated SAS macros stay usable for decades. Unfortunately, the same cannot be said of R

  9. Mar 2020
    1. Thanks to ggforce, you can enhance almost any ggplot by highlighting data groupings, and focusing attention on interesting features of the plot
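
      A hedged sketch of two ggforce helpers, using the built-in iris data: geom_mark_ellipse() highlights groupings and facet_zoom() focuses attention on a subset.

      library(ggplot2)
      library(ggforce)

      ggplot(iris, aes(Petal.Length, Petal.Width, colour = Species)) +
        geom_mark_ellipse(aes(label = Species)) +   # highlight each grouping
        geom_point() +
        facet_zoom(x = Species == "versicolor")     # zoom in on one subset
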
    1. Descriptive Statistic

      R provides a wide range of functions for obtaining summary statistics. One method of obtaining descriptive statistics is to use the sapply( ) function with a specified summary statistic.
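
      For example, sapply() over the columns of a data frame returns one statistic per column (mtcars is just the built-in example data):

      sapply(mtcars, mean)                    # mean of every column
      sapply(mtcars, quantile, probs = 0.5)   # extra arguments are passed through
      summary(mtcars$mpg)                     # the all-in-one alternative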

    1. dplyr in R also lets you use a different syntax for querying SQL databases like Postgres, MySQL and SQLite, which is also in a more logical order
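
      A small sketch of that workflow with an in-memory SQLite database, assuming the RSQLite package is available (the table name and columns are illustrative): dplyr verbs are translated to SQL lazily, and show_query() reveals the generated statement.

      library(DBI)
      library(dplyr)
      library(dbplyr)

      con <- dbConnect(RSQLite::SQLite(), ":memory:")
      dbWriteTable(con, "mtcars", mtcars)

      tbl(con, "mtcars") %>%
        filter(cyl == 4) %>%
        summarise(avg_mpg = mean(mpg)) %>%
        show_query()          # prints the SQL that dplyr generated

      dbDisconnect(con)
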
    1. We save all of this code, the ui object, the server function, and the call to the shinyApp function, in an R script called app.R

      The same basic structure for all Shiny apps:

      1. ui object.
      2. server function.
      3. call to the shinyApp function.

      ---> examples <---

    2. ui

      UI example of a Shiny app (check the code below)

    3. server

      Server example of a Shiny app (check the code below):

      • random distribution is plotted as a histogram with the requested number of bins
      • code that generates the plot is wrapped in a call to renderPlot
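
      A minimal app.R sketch matching the three-part structure above, adapted from the standard introductory Shiny example: a slider in the ui, a renderPlot() call in the server that draws a histogram with the requested number of bins, and the closing shinyApp() call.

      library(shiny)

      ui <- fluidPage(
        sliderInput("bins", "Number of bins:", min = 1, max = 50, value = 30),
        plotOutput("distPlot")
      )

      server <- function(input, output) {
        output$distPlot <- renderPlot({
          x    <- faithful$waiting               # example data (Old Faithful waiting times)
          bins <- seq(min(x), max(x), length.out = input$bins + 1)
          hist(x, breaks = bins, col = "darkgray", border = "white")
        })
      }

      shinyApp(ui = ui, server = server)
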
    4. I want to get the selected number of bins from the slider and pass that number into a python method and do some calculation/manipulation (return: “You have selected 30bins and I came from a Python Function”) inside of it then return some value back to my R Shiny dashboard and view that result in a text field.

      Using Python scripts inside R Shiny (in 6 steps):

      1. In ui.R create a text output: textOutput("textOutput") (after plotOutput()).
      2. In server.R create the handler: output$textOutput <- renderText({ }).
      3. Create python_ref.py and put the Python function in it (see the sketch below).
      4. Import the reticulate library: library(reticulate).
      5. Call source_python("python_ref.py") to make the Python function available in R.
      6. Make sure you have these files in your directory:
      • app.R
      • python_ref.py, and that you've imported the reticulate package into the R environment and sourced the script inside your R code.

      Hit run.
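
      A hedged sketch of those six steps in a single app.R; the Python function name describe_bins and the message text are illustrative. The Python file is written from R here only to keep the sketch self-contained; normally python_ref.py lives as its own file next to app.R.

      library(shiny)
      library(reticulate)

      # step 3: create python_ref.py with the Python function
      writeLines(
        "def describe_bins(bins):
            return 'You have selected ' + str(bins) + ' bins and I came from a Python function'",
        "python_ref.py"
      )
      source_python("python_ref.py")   # step 5: makes describe_bins() callable from R

      ui <- fluidPage(
        sliderInput("bins", "Number of bins:", min = 1, max = 50, value = 30),
        plotOutput("distPlot"),
        textOutput("textOutput")       # step 1
      )

      server <- function(input, output) {
        output$distPlot <- renderPlot({
          hist(faithful$waiting, breaks = input$bins)
        })
        output$textOutput <- renderText({       # step 2
          describe_bins(as.integer(input$bins))  # the Python function, called from R
        })
      }

      shinyApp(ui = ui, server = server)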

    5. Currently Shiny is far more mature than Dash. Dash doesn’t have a proper layout tool yet, and also not build in theme, so if you are not familiar with Html and CSS, your application will not look good (You must have some level of web development knowledge). Also, developing new components will need ReactJS knowledge, which has a steep learning curve.

      Shiny > Dash:

      • Dash isn't as mature yet
      • Shiny offers many more layout options, whereas in Dash you have to hand-write HTML and CSS
      • developing new components in Dash requires ReactJS knowledge (which has a steep learning curve)
    6. You can host standalone apps on a webpage or embed them in R Markdown documents or build dashboards. You can also extend your Shiny apps with CSS themes, Html widgets, and JavaScript actions.

      Typical tools used for working with Shiny

    7. You can either create a single R file named app.R containing two separate components (ui and server), or create two R files named ui.R and server.R

  10. Feb 2020
  11. Dec 2019
    1. “A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).”

      What is valence in music according to Spotify?

  12. Nov 2019
  13. Oct 2019
  14. Jun 2019
  15. varsellcm.r-forge.r-project.org
    1. missing values are managed, without any pre-processing, by the model used to cluster with the assumption that values are missing completely at random.

      VarSelLCM package

  16. May 2019
    1. Some of the best and cheapest tombstones come from India. In 2013 India produced 35,342 million tons of granite, making it the world’s largest producer

      This is interesting to me because I guess I never really thought about where tombstones come from; I just knew that they came engraved, and I never thought about who had to do the engraving.

  17. Apr 2019
  18. Feb 2019
    1. Network centralization

      # degree centralization of graph g with igraph
      degree.cent <- centr_degree(g, mode = "all")
      degree.cent$res               # degree of each vertex
      degree.cent$centralization    # graph-level centralization score
      degree.cent$theoretical_max   # maximum possible score for a graph of this size

  19. Dec 2018
      I came across this via cran.r-project.org; it was referred to by a computer scientist at an NIH lecture. It might be an interesting source for seeing code-sharing norms and practices.

  20. Sep 2018
  21. May 2018
    1. Hi there, please check out the recently updated SAS training and tutorial course, which covers SAS and its integration with R; please go through the link:

      https://www.youtube.com/watch?v=IOxaKq4lB-0

  22. Mar 2018
  23. Feb 2018
    1. In the six states that prohibit ex-felons from voting, one in four African-American men is permanently disenfranchised.
  24. Jan 2018
  25. Dec 2017
  26. Nov 2017
  27. Oct 2017
  28. Aug 2017
  29. Jul 2017
  30. Jun 2017
  31. May 2017
    1. National Research Council
      The National Research Council (NRC) is an organization within the Government of Canada dedicated to research and development. Today, the NRC works with members of the Canadian industry to provide meaningful research and development for many different types of products. The areas of research and development that the NRC participates in include aerospace, aquatic and crop resource development, automotive and surface transportation, construction, energy, mining, and environment, human health therapeutics, information and communications technologies, measurement science standards, medical devices, astronomy and astrophysics, ocean, coastal, and river engineering, and security and disruptive technologies. The NRC employs scientists, engineers, and business experts. The mission of the NRC is as follows: “Working with clients and partners, we provide innovation support, strategic research, scientific and technical services to develop and deploy solutions to meet Canada's current and future industrial and societal needs.” The main values of the NRC include impact, accountability, leadership, integrity, and collaboration. The most recent success stories of the NRC include research regarding “green buildings,” math games, mechanical insulation, and many more (Government of Canada 2017). Here is a link to their achievement page where these stories and more are posted: http://www.nrc-cnrc.gc.ca/eng/achievements/index.html. Here is a link to the NRC webpage: http://www.nrc-cnrc.gc.ca/eng/index.html.  
      

      References

      Government of Canada. 2017. National Research Council Canada. May 5. Accessed May 8, 2017. http://www.nrc-cnrc.gc.ca/eng/index.html.

  32. Feb 2017
    1. Reciprocity

      This one was easy! Getting a good directed network into R to play around with and modify was... way harder than getting this info.

      reciprocity(g, ignore.loops = TRUE)

      There is also a mode argument: with mode = "ratio", (unordered) vertex pairs are classified into three groups: (1) not connected, (2) non-reciprocally connected, (3) reciprocally connected. The result is the size of group (3) divided by the sum of the sizes of groups (2) and (3).
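
      A tiny worked example of the two modes (the graph itself is made up):

      library(igraph)

      # A <-> B is reciprocated, A -> C is not
      g <- make_graph(c("A","B", "B","A", "A","C"), directed = TRUE)

      reciprocity(g)                  # default: share of edges that are reciprocated (2/3)
      reciprocity(g, mode = "ratio")  # reciprocally connected pairs / connected pairs (1/2)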

    2. Centralization

      Centralization interests me for analyzing discussion forums--are there key players, and do these key players show higher degrees of cognitive presence?

      Calculating centralization by number of connections seems quite straightforward in R: centralization.degree()

    3. Clustering

      I am very interested in clustering measures, because I plan to analyze data from a Slack group that I am a part of, where I suspect there are many subgroups who only interact with each other.

      After looking around for some different clustering algorithms, I found the "cluster_label_prop" function in the igraph package, which seems to do what I would like to do. To summarize, this function automatically detects groups within a network by initially labeling every node with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities.

      There seem to be many different ways to define clustering though, so I am sure that I will need to do more research on the topic of clustering as I move forward with my research project.
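
      A short sketch of label propagation on a toy graph with two obvious subgroups (the graph is made up):

      library(igraph)

      # two dense cliques joined by a single bridge edge
      g <- make_full_graph(5) + make_full_graph(5)
      g <- add_edges(g, c(1, 6))

      communities <- cluster_label_prop(g)
      membership(communities)   # community label assigned to every node
      sizes(communities)        # how many nodes ended up in each community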

    1. See http://kateto.net/network-visualization↩

      This is incredible! Thank you for sharing this link.

    1. THE WESTERN LAND, nervous under the beginning change. The Western States, nervous as horses before a thunder storm

      Steinbeck groups the Western states together in one entity that feels and experiences the same nervous energy, like that of horses. The repetition of this idea throughout Ch. 14 serves to underscore the unity of these states as a single group separate and distinct from the rest of the country.

    1. Despite many editors being unpaid or poorly remunerated for their work, plant scientist Jaime A. Teixeira da Silva believes they “should be held accountable” if authors are made to wait for an “excessive or unreasonable amount of time” before a decision is made on their research.

      How would this be enforced exactly?

    1. The crawler represented a third option: a way to figure out how humans work.

      Good way to look at it.

  33. Jan 2017
    1. marine species that calcify have survived through millions of years when CO2 was at much higher levels

      Some calcifying species were indeed abundant in the Cretaceous, a time at which the atmospheric CO2 concentration was high. However, seawater alkalinity was also high due to intense weathering on land. Hence, the concentration of carbonate ions (CO3, which controls calcification) was elevated. That compensation does not happen today and will not happen in the near future because total alkalinity does not change significantly on time scales of centuries. There is ample evidence in the literature for that.

  34. Jun 2016
  35. Mar 2016
    1. Ensures that vital information is provided to educators, families, students, and communities through annual statewide assessments that measure students' progress toward those high standards.

      Naturally, not every student is capable of reaching those high standards, but ability-based grouping will help those students reach those standards.

  36. Feb 2016
    1. When students are grouped by ability, then collaborative work becomes important because this type of learning environment is heavily dependent on team work.

      This prevents the one or two "smart" students in the group from doing all the work because all the students are on the same academic level.

    2. Students can move at their own pace: When students are grouped together based on skill level, the pressure is lessened of when the topic must be covered.

      This is probably the most apparent benefit to ability based grouping.

    1. Between-class grouping - a school's practice of separating students into different classes, courses, or course sequences (curricular tracks) based on their academic achievement

      This is how I envision the education system should look.

    2. Within-class grouping - a teacher's practice of putting students of similar ability into small groups usually for reading or math instruction
    3. Proponents of ability grouping say that the practice allows teachers to tailor the pace and content of instruction much better to students' needs and, thus, improve student achievement.
  37. Dec 2015
    1. Considered by the beef industry to be an impressive innovation, lean finely textured beef is made from the remnant scraps of cattle carcasses that were once deemed too fatty to go into human food.

      The textured beef is made of just scraps and waste that was not going to be put into food.

  38. Aug 2015
    1. R Grouping functions: sapply vs. lapply vs. apply vs. tapply vs. by vs. aggregate

      Whenever I want to do something "map"py in R, I usually try to use a function in the apply family. (Side question: I still haven't learned plyr or reshape -- would plyr or reshape replace all of these entirely?) However, I've never quite understood the differences between them [how {sapply, lapply, etc.} apply the function to the input/grouped input, what the output will look like, or even what the input can be], so I often just go through them all until I get what I want. Can someone explain how to use which one when?

      My current (probably incorrect/incomplete) understanding is:

      • sapply(vec, f): input is a vector; output is a vector/matrix, where element i is f(vec[i]) [giving you a matrix if f has a multi-element output]
      • lapply(vec, f): same as sapply, but output is a list?
      • apply(matrix, 1/2, f): input is a matrix; output is a vector, where element i is f(row/col i of the matrix)
      • tapply(vector, grouping, f): output is a matrix/array, where an element in the matrix/array is the value of f at a grouping g of the vector, and g gets pushed to the row/col names
      • by(dataframe, grouping, f): let g be a grouping; apply f to each column of the group/dataframe; pretty print the grouping and the value of f at each column
      • aggregate(matrix, grouping, f): similar to by, but instead of pretty printing the output, aggregate sticks everything into a dataframe

      very useful article on apply functions in r
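
      A quick side-by-side of the functions the question compares, using built-in data so the output shapes are easy to inspect:

      sapply(1:3, function(i) i^2)            # simplified result: a numeric vector
      lapply(1:3, function(i) i^2)            # same computation, but returns a list
      apply(as.matrix(mtcars), 2, mean)       # over matrix margins (2 = columns)
      tapply(mtcars$mpg, mtcars$cyl, mean)    # a vector summarised within groups
      by(mtcars, mtcars$cyl, colMeans)        # a whole data frame per group
      aggregate(mpg ~ cyl, data = mtcars, FUN = mean)   # grouped results as a data frame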

  39. Jun 2015
    1. download and install the ACS package in addition to going to requesting a secret key

      Troubleshooting csv file - step 1.
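
      Presumably the setup looks roughly like this (hedged; check the acs package documentation for the exact call), where "YOUR_KEY" is the secret key requested from the Census Bureau:

      install.packages("acs")
      library(acs)
      api.key.install(key = "YOUR_KEY")   # stores the Census API key for later sessions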

  40. Jan 2015
    1. R requires forward slashes (/) not back slashes (\) when specifying a file location even if the file is on your hard drive.
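
      For example (the path is illustrative):

      read.csv("C:/Users/me/data/file.csv")      # forward slashes work, even on Windows
      read.csv("C:\\Users\\me\\data\\file.csv")  # backslashes must be doubled (escaped)
      # read.csv("C:\Users\me\data\file.csv")    # error: single backslashes are treated as escapes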