165 Matching Annotations
  1. Oct 2019
    1. A Million Brains in the Cloud

      Arno Klein and Satrajit S. Ghosh published this research idea in 2016 and opened it to review. In fact, you could review their abstract directly in RIO, but for the MOOC activity "open peer review" we want you to read and annotate their proposal using this Hypothes.is layer. You can add annotations simply by highlighting a section that you want to comment on, or by adding a page note saying in a few sentences what you think of their ideas. You can also reply to comments that your peers have already made. Please sign up for Hypothes.is and join the conversation!

  2. Sep 2019
    1. Transparent Review in Preprints will allow journals and peer review services to show peer reviews next to the version of the manuscript that was submitted and reviewed.

      A subtle but important point here is that when the manuscript is a preprint, there are two public-facing documents being tied together: the "published" article and the preprint. The review-as-annotation becomes the cross-member in that document association.

    1. I am writing this review for the Drummond and Sauer comment on Mathur and VanderWeele (2019). To note, I am familiar with the original meta-analyses considered (one of which I wrote), the Mathur and VanderWeele (henceforth MV2019) article, and I’ve read both Drummond and Sauer’s comment on MV2019 and Mathur’s review of Drummond and Sauer’s comment on MV2019 (hopefully that wasn’t confusing). On balance, I think Drummond and Sauer’s (henceforth DSComment) comment under review here is a very important contribution to this debate. I tended to find DSComment to be convincing and was comparatively less convinced by Mathur’s review or, indeed, MV2019. I hope my thoughts below are constructive.

      It’s worth noting that MV2019 suffered from several primary weaknesses. Namely:

      1. It didn't really tell us anything we didn't already know, namely that near-zero effect sizes are common for meta-analyses in violent video game research.
      2. MV2019, aside from one brief statement as DSComment notes, neglected the well-known methodological issues that tend to spuriously increase effect sizes (unstandardized aggression measures, self-ratings of violent game content, identified QRPs in some studies such as the Singapore dataset, etc.). This resulted in a misuse of meta-analytic procedures.
      3. MV2019 naïvely interprets (as does Mathur’s review of DSComment) near-zero effect sizes as meaningful, despite numerous reasons not to do so given concerns of false positives.
      4. MV2019, for an ostensible compilation of meta-analyses, curiously neglects other meta-analyses, such as those by John Sherry or Furuyama-Kanamori & Doi (2016).

      At this juncture, publication bias, particularly for experimental studies, has been demonstrated pretty clearly (e.g. Hilgard et al., 2017). I have two comments here. First, MV2019 offered a novel and not well-tested alternative approach to bias (highlighted again by Mathur's review); however, I did not find the arguments convincing, as this approach appears extrapolative and produces results that simply aren't true. For instance, the argument that 100% of effect sizes in Anderson 2010 are above 0 is quickly falsified merely by looking at the reported effect sizes in the studies included, at least some of which are below .00. Therefore, this would appear to clearly indicate some error in the procedure of MV2019.

      Further, we don't need statistics to speculate about publication bias in Anderson et al. (2010), as there are actual specific examples of published null studies missed by Anderson et al. (see Ferguson & Kilburn, 2010). Moreover, the publication of null studies in the years immediately following (e.g. von Salisch et al., 2011) indicates that Anderson's search for unpublished studies was clearly biased (indeed, I had unpublished data at that time but was not asked by Anderson and colleagues for it). So there's no need at all for speculation, given that we have actual examples of missed studies, and a fair number of them.

      It might help to highlight also that traditional publication bias techniques are probably only effective with small-sample experimental studies. For large-sample correlational/longitudinal studies, effect sizes tend to be a bit more homogeneous, hovering close to zero. In such studies an accumulation of p-values near .05 is unlikely, given their much greater power than small studies. Relatively simple QRPs can make p-values jump rapidly from non-significance to something well below .05. Thus, traditional publication bias procedures may return null results for this pool of studies even though QRPs, and thus publication bias, have taken place.
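
      To make this concrete, here is a minimal simulation sketch (my own illustration, not part of the signed review): a pool of large-sample null studies in which each analyst reports only the most significant of five correlated outcome measures, one simple QRP. The exposure, the outcomes, and all parameter values are invented for illustration.

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n, n_outcomes, n_studies, r = 3000, 5, 1000, 0.5

      # outcome measures correlate with each other but not with the exposure
      cov = np.full((n_outcomes, n_outcomes), r) + (1 - r) * np.eye(n_outcomes)

      best_p = np.empty(n_studies)
      for i in range(n_studies):
          x = rng.standard_normal(n)  # hypothetical exposure; true effect is zero
          y = rng.multivariate_normal(np.zeros(n_outcomes), cov, size=n)
          p = [stats.pearsonr(x, y[:, j])[1] for j in range(n_outcomes)]
          best_p[i] = min(p)  # the QRP: report only the best-looking outcome

      print(f"share of studies reporting p < .05: {(best_p < .05).mean():.0%}")  # well above 5%
      ```

      Because every simulated study has the same large sample size, a funnel-plot asymmetry test has nothing to detect, yet the share of nominally significant studies is inflated well above the 5% that honest reporting of a single outcome would produce.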

      It might also help to note that meta-analyses with weak effects are very fragile to unreported null studies, which probably exist in greater numbers (particularly for large-n studies) than would be indicated by publication bias techniques.

      I agree with Mathur's comment about experiments not always offering the best evidence, given lack of generalizability to real-world aggression (indeed, that's been a long-standing concern). However, it might help DSComment to note that, by this point, the pool of evidence least likely to find effects is probably longitudinal studies. I've got two preregistered longitudinal analyses of existing datasets myself (here I want to make clear that citing my work is by no means necessary for my positive evaluation of any revisions to DSComment), and there are other fine studies (such as Lobel et al., 2017; Breuer et al., 2015; Kuhn et al., 2018; von Salisch et al., 2011; etc.). The authors may also want to note Przybylski and Weinstein (2019), which offers an excellent example of a preregistered correlational study.

      Indeed, in a larger sense, as far as evidence goes, DSComment could highlight recent preregistered evidence from multiple sources (McCarthy et al., 2016; Hilgard et al., 2019; Przybylski & Weinstein, 2019; Ferguson & Wang, 2019; etc.). This would seem to be the most crucial evidence and, aside from one excellent correlational study (Ivory et al.), all of the preregistered results have been null. Even if we think the tiny effect sizes in existing metas provide evidence in support of hypotheses (and we shouldn't), these preregistered studies suggest we shouldn't trust even those tiny effects to be "true."

      The weakest aspect of MV2019 was the decision to interpret near-zero effects as meaningful. Mathur argues that tiny effects can be important once spread over a population. However, this is merely speculation, and there's no data to support it. It's kind of a truthy thing scholars tend to say defensively when confronted by the possibility that effect sizes don't support their hypotheses. By making this argument, Mathur invites an examination of population data, where convincing evidence (Markey, Markey & French, 2015; Cunningham et al., 2016; Beerthuizen, Weijters & van der Laan, 2017) shows that violent game consumption is associated with reduced violence in society. Granted, some may express caution about looking at societal-level data, but here is where scholars can't have it both ways: one can't make claims about societal-level effects and then not want to look at the societal data. Such arguments make unfalsifiable claims and are unscientific in nature.

      The other issue is that this line of argument makes effect sizes irrelevant. If we’re going to interpret effect sizes no matter how near to zero as hypothesis supportive, so long as they are “statistically significant” (which, given the power of meta-analyses, they almost always are), then we needn’t bother reporting effect sizes at all. We’re still basically slaves to NHST, just using effect sizes as a kind of fig leaf for the naked bias of how we interpret weak results.

      Also, that’s just not how effect sizes work. They can’t be sprinkled like pixie dust over a population to make them meaningful.

      As DSComment points out, effect sizes this small have a high potential for Type 1 error. Funder and Ozer (2019) recently contributed to this discussion in a way I think was less than helpful (to be very clear, I respect Funder and Ozer greatly but disagree with many of their comments on this specific issue). Yet, as they note, interpretation of tiny effects is based on such effects being "reliable," a condition clearly not in evidence for violent game research, given the now extensive literature on the systematic methodological flaws in that literature.

      In her comment Dr. Mathur dismisses the comparison with ESP research, but I disagree with (or dismiss?) this dismissal. The fact that effect sizes in meta-analyses for violent game research are identical to those for “magic” is exactly why we should be wary of interpreting such effect sizes as hypothesis supportive. Saying violent game effects are more plausible is irrelevant (and presumably the ESP people would disagree). However, the authors of DSComment might strengthen their argument by noting that some articles have begun examining nonsense outcomes within datasets. For example, in Ferguson and Wang (2019) we show that the (weak and in that case non-significant) effects for violent game playing are no different in predicting aggression than nonsense variables (indeed, the strongest effect was for the age at which one had moved to a new city). Orben and Przybylski (2019) do something similar and very effective with screen time. Point being, we have an expanding literature to suggest that the interpretation of such weak effects is likely to lead us to numerous false positive errors.

      The authors of DSComment might also note that MV2019 commit a fundamental error of meta-analysis, namely assuming that the “average effect size wins!” When effect sizes are heterogeneous (as Mathur appears to acknowledge unless I misunderstood) the pooled average effect size is not a meaningful estimator of the population effect size. That’s particularly true given GIGO (garbage in, garbage out). Where QRPs have been clearly demonstrated for some studies in this realm (see Przybylski & Weinstein, 2019 for some specific examples of documentation involving the Singapore dataset), the pooled average effect size, however it is calculated, is almost certainly a spuriously high estimate of true effects.

      DSComment could note that other issues such as citation bias are known to be associated with spuriously high effect sizes (Ferguson, 2015), another indication that researcher behaviors are likely pulling effect sizes above the actual population effect size.

      Overall, I don't think MV2019 were very familiar with this field, and they appear unaware of the serious methodological errors endemic in much of the literature, which pull effect sizes spuriously high. In the end, they really didn't say anything we didn't already know (the effect sizes across metas tend to be near zero), and their interpretation of these near-zero effect sizes was incorrect.

      With that in mind, I do think DSComment is an important part of this debate and is well worth publishing. I hope my comments here are constructive.

      Signed, Chris Ferguson

    2. [This was a peer review for the journal "Meta-Psychology", and I am posting it via hypothes.is at the journal's suggestion.]

      I thank the authors for their response to our article. For full disclosure, I previously reviewed an earlier version of this manuscript. The present version of the manuscript shows improvement, but does not yet address several of my substantial concerns, each of which I believe should be thoroughly addressed if a revision is invited. My concerns are as follows:

      1.) The publication bias corrections still rely on incorrect statistical reasoning, and using more appropriate methods yields quite different conclusions.

      Regarding publication bias, the first analysis of the number of expected versus observed p-values between 0.01 and 0.05 that is presented on page 3 (i.e., “Thirty nine…should be approximately 4%”) cannot be interpreted as a test of publication bias, as described in my previous review. The p-values would only be uniformly distributed if the null were true for every study in the meta-analysis. If the null does not hold for every study in the meta-analysis, then we would of course expect more than 4% of the p-values to fall in [0.01, 0.05], even in the absence of any publication bias. I appreciate that the authors have attempted to address this by additionally assessing the excess of marginal p-values under two non-null distributions. However, these analyses are still not statistically valid in this context; they assume that every study in the meta-analysis has exactly the same effect size (i.e., that there is no heterogeneity), which is clearly not the case in the present meta-analyses. Effect heterogeneity can substantially affect the distribution and skewness of p-values in a meta-analysis (see Johnson & Yuan, 2007). To clarify the second footnote on page 3, I did not suggest this particular analysis in my previous review, but rather described why the analysis assuming uniformly distributed p-values does not serve as a test of publication bias.
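
      The heterogeneity point is easy to verify by simulation. The sketch below (an illustrative toy with assumed parameter values, not an analysis from either manuscript) generates a literature with heterogeneous true effects and no publication bias at all; far more than 4% of its p-values still land in [0.01, 0.05].

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      k, n = 10_000, 200                      # number of studies; subjects per study
      theta = rng.normal(0.15, 0.10, size=k)  # heterogeneous true effects (Cohen's d)
      se = np.full(k, np.sqrt(4 / n))         # rough SE of d with two equal arms
      d_hat = rng.normal(theta, se)           # observed effects; nothing is censored
      p = 2 * stats.norm.sf(np.abs(d_hat) / se)

      print(f"share of p-values in [.01, .05]: {((p >= .01) & (p <= .05)).mean():.1%}")
      # under a global null (theta = 0 for every study) this share would be ~4%
      ```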

      I would instead suggest conducting publication bias corrections using methods that accommodate heterogeneity and allow for a realistic distribution of effects across studies. We did so in the Supplement of our PPS piece (https://journals.sagepub.com/doi/suppl/10.1177/1745691619850104) using a maximum-likelihood selection model that accommodates normally-distributed, heterogeneous true effects and essentially models a discontinuous “jump” in the probability of publication at the alpha threshold of 0.05. These analyses did somewhat attenuate the meta-analyses’ pooled point estimates, but suggested similar conclusions to those presented in our main text. For example, the Anderson (2010) meta-analysis had a corrected point estimate among all studies of 0.14 [95% CI: 0.11, 0.16]. The discrepancy between our findings and Drummond & Sauer’s arises partly because the latter analysis focuses only on pooled point estimates arising from bias correction, not on the heterogeneous effect distribution, which is the very approach that we described as having led to the apparent “conflict” between the meta-analyses in the first place. Indeed, as we described in the Supplement, publication bias correction for the Anderson meta-analyses still yields an estimated 100%, 76%, and 10% of effect sizes above 0, 0.10, and 0.20 respectively. Again, this is because there is substantial heterogeneity. If a revision is invited, I would (still) want the present authors to carefully consider the issue of heterogeneity and its impact on scientific conclusions.
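
      For intuition about the selection mechanism being described, here is a toy simulation (a hedged sketch with made-up parameters, not the maximum-likelihood selection model fit in the Supplement). Publication probability drops discontinuously at p = .05, and a naive pooled estimate computed from the published studies alone overshoots the true mean effect.

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      k = 5_000
      theta = rng.normal(0.10, 0.10, size=k)  # heterogeneous true effects
      se = rng.uniform(0.05, 0.20, size=k)    # varying study precision
      d_hat = rng.normal(theta, se)
      p = 2 * stats.norm.sf(np.abs(d_hat) / se)

      # the "jump": nonsignificant studies are published less often
      prob_publish = np.where(p < .05, 1.0, 0.3)
      published = rng.random(k) < prob_publish

      naive = np.average(d_hat[published], weights=1 / se[published] ** 2)
      print(f"true mean effect: {theta.mean():.3f}")
      print(f"naive inverse-variance estimate from published studies: {naive:.3f}")
      ```

      A selection model works in the opposite direction: given the published effects and an assumed publication rule of this form, it estimates the underlying effect distribution by maximum likelihood.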

      2.) Experimental studies do not always yield higher-quality evidence than observational studies.

      Additionally, the authors focus only on the subset of experimental studies in Hilgard’s analysis. Although I agree that “experimental studies are the best way to completely eliminate uncontrolled confounds”, it is not at all clear that experimental lab studies provide the overall strongest evidence regarding violent video games and aggression. Typical randomized studies in the video game literature consist, for example, of exposing subjects to violent video games for 30 minutes, then immediately having them complete a lab outcome measure operationalizing aggression as the amount of hot sauce a subject chooses to place on another subject’s food. It is unclear to what extent one-time exposures to video games and lab measures of “aggression” have predictive validity for real-world effects of naturalistic exposure to video games. In contrast, a well-conducted case-control study with appropriate confounding control, assessing violent video game exposure in subjects with demonstrated violent behavior versus those without, might in fact provide stronger evidence for societally relevant causal effects (e.g., Rothman et al., 2008).

      3.) Effect sizes are inherently contextual.

      Regarding the interpretation of small effect sizes, we did indeed state several times in our paper that the effect sizes are “almost always quite small”. However, to universally dismiss effect sizes of less than d = 0.10 as less than “the smallest effect size of practical importance” is too hasty. Exposures, such as violent video games, that have very broad outreach can have substantial effects at the population level when aggregated across many individuals (VanderWeele et al., 2019). The authors are correct that small effect sizes are in general less robust to potential methodological biases than larger effect sizes, but to reiterate the actual claim we made in our manuscript: “Our claim is not that our re-analyses resolve these methodological problems but rather that widespread perceptions of conflict among the results of these meta-analyses—even when taken at face value without reconciling their substantial methodological differences—may in part be an artifact of statistical reporting practices in meta-analyses.” Additionally, the comparison to effect sizes for psychic phenomena does not strike me as particularly damning for the violent video game literature. The prior plausibility that psychic phenomena exist is extremely low, as the authors themselves describe, and it is surely much lower than the prior plausibility that video games might increase aggressive behavior. Extraordinary claims require extraordinary evidence, so any given effect size for psychic phenomena is much less credible than for video games.

      Signed, Maya B. Mathur, Department of Epidemiology, Harvard University

      References

      Johnson, Valen, and Ying Yuan. "Comments on 'An exploratory test for an excess of significant findings' by J.P.A. Ioannidis and T.A. Trikalinos." Clinical Trials 4.3 (2007): 254.

      Rothman, K. J., Greenland, S., & Lash, T. L. (2008). Modern epidemiology (Vol. 3). Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.

      VanderWeele, T. J., Mathur, M. B., & Chen, Y. (2019). Media portrayals and public health implications for suicide and other behaviors. JAMA Psychiatry.

  3. Aug 2019
  4. Jul 2019
  5. Apr 2019
  6. Feb 2019
    1. Interactions of tomato and Botrytis genetic diversity: Parsing the contributions of host differentiation, domestication and pathogen variation

      This article has a Peer Review Report

  7. Jan 2019
    1. Web annotation, for example, is catching on as a new mode of collaboration, peer review, and other research functions.

      And the combination of community feedback on preprints with traditional and post-publication peer review through collaborative annotation is catching on with a variety of publishers. See InReview by BMC and ResearchSquare. Also COS preprint servers such as SocArXiv and PsyArXiv.

    1. Dian found many weaknesses when reviewing the incoming research proposals. Many of the proposed ideas were insufficiently creative and current. Some were merely duplications or recycled versions of previous research.

      Are the results of this review open to the public and also shared with the researchers? Forgive me if I am mistaken, but I submit a proposal to Kemristekdikti every year, and I have never received the complete review results.

    1. The Receptor-like Pseudokinase GHR1 Is Required for Stomatal Closure

      Please find a Peer Review Report here.

      The report shows the major requests for revision and author responses. Minor comments for revision and miscellaneous correspondence are not included. The original format may not be reflected in this compilation, but the reviewer comments and author responses are not edited, except to correct minor typographical or spelling errors that could be a source of ambiguity.

  8. Oct 2018
  9. Jun 2018
    1. or at least they pretend

      I don't think we're pretending. I know I'm not!

    2. Senior colleagues indicate that I should not have to balance out publishing in “traditional, peer-reviewed publications” as well as open, online spaces.

      Do your colleagues who read your work, annotate it, and comment on it not count as peer-review?

      Am I wasting my time by annotating all of this? :) (I don't think so...)

  10. May 2018
    1. “OER are not typically counted toward research requirements, because they are seen as lacking the vetting process that comes with, for example, peer-reviewed articles.”
  11. Mar 2018
    1. In what appears to be a first, a U.S. court is forcing a journal publisher to breach its confidentiality policy and identify an article's anonymous peer reviewers.

      Wow. This could have a chilling effect on reviews for certain subjects.

    1.  By asking my students to craft and peer-review multiple-choice questions based on the concepts covered that week (and scaffolding this process over the semester)

      This paragraph shows the peer review "ingredient," one of the 8 ingredients of open pedagogy, by having students collaborate in the evaluation process.

  12. Feb 2018
    1. Behind all the things on the panel is a pinkish/peach  layer.

      I would avoid using the word "things."

    2. Lettering It spells out the word Mitchell David Mucha M.D., in white with a pinkish/peach boarder. The stitching on the words is very rough. The letters are very huge and take up a majority of the space on the panel.

      I would maybe try to add transition sentences to move more smoothly between ideas.

    3. idea of the Stethoscope was from René Théophile Hyacinthe Laënnec

      Good job including links and images on the page to give the reader a fuller experience.

    4. This long part of it is the same color as the bag except it has more of  rough touch to it.

      There are a couple of grammar errors. Here, there should be an "a" before "rough": "...more of a rough touch to it."

    5. About The Panel

      Before describing the panel, I would give the audience some background information on The Quilt, such as its founding or what each panel means.

    6. The orginal Doctor’s Bag was the Gladstone which was made in the mid nineteen century by J.G. Beard. It was used for house visits to patients house. The contents inside of the bag is medical tools like stethoscope, clinical thermometer and tongue depressor some form of illumination, such as a torch, plessor, ophthalmoscope and auriscope; a test tube or two; and bottles of Benedict’s reagent and acetic acid to complete the kit (RACGP).

      Good job giving some background information on the Doctor's Bag.

    7. The items that I will be describing from the panel is a doctor bag, stethoscope, the colors, and finally the lettering. I will be describing them in the same order I have mentioned them.

      I would avoid writing in the first person (i.e., using "I").

    1. The top center panel, belonging to Eddie (no last name reported), has a mosaic background of 6″x 6″ burgundy, soft dusty rose, light bubblegum pink, and sapphire blue squares. His name is then sewn in large, cursive lettering across the top left half of the panel.

      This description makes me feel as if I'm looking at the panel. Love it

    2. Since the panels would be featured in the Quilt as a visual memorial and not as a blanket, I wondered why the panels that were predominantly paintings were not made of canvas fabric instead of fabrics associated with apparel, or at least primed with some kind of Gesso to preserve the piece. I am by no means an expert, but as an artist who has experimented with different mediums on both primed and un-primed fabrics, I can attest for the value of using the right mediums on their respective materials. Though I am sure acceptable fabric paints were mostly used, I could tell where they were not.

      Great connections between your experience as an artist and what you've observed from the quilt. I enjoy the objectivity rather than simply taking the panel for what it is.

    3. Although tied by a similar tragedy, each panel exhumes individuality through applying different artistic methods.

      The individuality of the panels is nicely described and understood. From the previous description, there are distinct differences yet similarities in the pieces.

    4. Each panel is made of a soft fabric and sewn onto a large, 12’x 12′ piece of ivory linen fabric.

      This first description of the panel gives brief but detailed imagery. Very good.

    5. Block #621

      Great pictures, but the page could be more multimodal as a whole.

    1. the rainbow in the body of the image is shown vibrantly in the majority of picture which by its colorful characteristics

      Was there any writing? Any markings? Differences in stitches? Add more details, even mentioning the lack of detail (e.g., stating the panel has no visible markings), to give the reader a fuller image.

    2. Another important visual element that I detected within this image was the blue bracelet circling the shiny arm holding the dog.

      Were there any additional objects (letters, pictures, notes, etc.) that came with the panel? Maybe there will be some clues as to what the dog or bracelet means.

    3. This gives the rainbow an ever greater meaning, maybe Jimmy had an aspiration for music and pursued a career in one, or possibly just has a respect for the fine musical arts.

      From reading this I can tell you are really starting to question and investigate this panel, as well as Jimmy Popejoy.

    4. hings within the panel that I took notice of immediately was the dog that was held by the shiny arm.

      I like that you started with the thing that grabbed your attention most. However, I think you should first introduce The AIDS Quilt so that the reader will understand what it is.

    5. The panel base is mostly a pure, bloody, vibrant red color. This is also in the material of somewhat a shiny, soft, velvet material that vividly gives the panel some extravagant flare.

      Love the word choice, the description brings a vivid image to mind and flows smoothly.

  13. Nov 2017
    1. Media Maker Spaces is an exploration for them experience a creating, editing, storing. publishing and streaming media to their peers and to their immediate context.

      Seems quite related to #CollectionAndIdentity

    1. Festivals and rituals - Living archives of the memories

      Seems quite related to #CollectionAndIdentity

    1. An excellent commentary on what ails our current peer review system and how an alternative quality assurance system might work in academia.

  14. Oct 2017
    1. review and critique each other’s work.

      This is the process of replying to annotations. But annotation can also be leveraged for peer review of student writing.

    1. Brianna: I had a negative experience where, in my master’s, my supervisor encouraged me to submit one of my papers to a journal for publication. I just submitted the paper to a journal as a course paper without making any changes, not even changing the title page. The journal told me to re-submit with revisions, but I thought that it was a rejection, and I stopped the process—it was intimidating. I thought being involved in a journal where I know some of the people and they won’t just get an online e-mail response from editors would be helpful

      Misunderstanding revise and resubmit; misunderstanding the difference between a student paper and an article.

  15. Sep 2017
    1. The problems here stem from a lack of comprehensiveness, interoperability, and critical mass uptake as the de facto platform for PPPR. The result of this is a mess of different platforms having different types of commentary on different articles, or sometimes the same ones, none of which can be viewed easily in a single, standardised way. That doesn’t seem very efficient.

      This is really key.

  16. Jun 2017
    1. protected platform whereby many expert reviewers could read and comment on submissions, as well as on fellow reviewers’ comments

      Conduct pre-peer review during manuscript development on a web platform. That is what is happening at Therapoid.net.

    2. intelligent crowd reviewing

      Crowdsourcing review? Pre-peer review as a precursor to a preprint server.

  17. Apr 2017
    1. p. 1

      Peer review is a mechanism, then, for quality control; it protects us from contamination by error and poor argument, and affords us truth or contributions to attaining truth.

    2. Shatz, David. 2004. Peer Review: A Critical Inquiry. Issues in Academic Ethics. Lanham, Md: Rowman & Littlefield.

    1. The Effects of the Built Environment on Child Friendliness and Obesity: Analysis on Auburn Ave

      Charmaine's claim isn't clear at this very moment, but she provides a lot of background information on her location that I didn't know about. If she specifically points out her claim, then this analysis would be good.

    1. Conclusion

      Claim: By investing in communities through new businesses and innovations, the population demographic may experience a drastic change from low income residents to high income residents. This then leads to other benefits such as a lower crime rate.

      Evidence: The author uses a chart to display the distribution of income when looking at ages in the Ponce de Leon area (secondary source).

      Overall, I really enjoyed reading your BEA. I felt that it was professionally crafted with the rhetorical situation kept in mind. One addition I would suggest would be looking also at the negative aspects of gentrification, such as the displacement of residents who resided in the area before the process of gentrification began. By doing this, it would add to your credibility by recognizing and addressing the opposing position.

      GREAT JOB!!!!!

    1. Description of Ponce City Market

      Like the last page, the current description of Ponce City Market provides strong evidence for the claim of how gentrification can improve an area in various ways. Be sure to explicitly state your claim to reinforce your ideas. Think about once again addressing the questions as well.

      The pictures may also need to be cited if not personally captured.

    1. Description of Krog Street Market

      Claim: The introduction claim needs to be incorporated into this description, because it directly provides evidence for such ideas.

      Evidence: By using a highly detailed description as well as pictures, it's clear that the Krog Street Market has experienced obvious improvements. Use the questions suggested at the beginning to further emphasize how much the area has improved for the better!

    1. Demographics of Ponce de Leon Area

      Claim: Like previous pages, the claim was not explicitly stated, but the information implied a change in population over the years.

      Evidence: The chart was utilized in this case from outside sources cited both at the bottom of the page and within the conclusion. As I previously mentioned, I would include more about income, education, and business statistics, because I feel that this would relate better to your claim in the introduction.

    1. Demographics of Krog Street Area

      Once again, this is great evidence for your claim in the introduction.

      Claim: Your introduction claim of gentrification could really be used here to drive the point home. (The claim was implied but not stated.)

      Evidence: For evidence, secondary sources were utilized to describe the changing demographic. I would suggest discussing in further detail the categories of income and business. These categories would provide a great deal of validity to your claim, should you include it.

      BE SURE TO CITE YOUR CHART

    1.  (“725 Ponce” 2015)

      I would personally consider putting the full citation at the bottom of each page. I feel that this would be more helpful and easier to access for your audience.

    2. History of Ponce City Market

      Wow! This is great information. You did a great job constructing the history of Ponce in a way that is easy for the reader to understand and enjoy. I also really like the vintage pictures!

      Claim: Like the previous section on the history of Krog Street Market, there is no claim physically present in this paragraph. To me, it is easily understood to be in support of the introduction claim, but you may want to explicitly state how Ponce City Market relates to the claim for the purposes of the project.

      Evidence: The evidence is taken from secondary source cited both at the bottom of the page and the conclusion. Once again, I would consider addressing the questions I suggested in the previous annotation as a way to address the claim implied.

    1. History of Krog Street Market

      I really enjoyed reading the history of this particular location. Surprisingly, I actually remember when this property was purchased and the media attention that followed on the local news channels.

      Claim: There wasn't a claim explicitly stated in this paragraph but the claim in the introduction was obviously implied.

      Evidence: This paragraph mainly consisted of secondhand information cited in the conclusion. I personally believe that learning about the history of the market and the surrounding area adds to your authenticity and credibility.

      It may be helpful to restate the claim and provide further evidence, such as what the area looked like before versus years after the purchase. Was it run down? Also, relating back to the introduction, what were the crime rates at this location before and after? What was the local economy like before the recent innovations? Overall, I feel that this is a strong paragraph that could be improved by relating more to the claims in the introduction.

    1. Introduction

      I really enjoyed your introduction. I feel that discussing what a built environment is and how it came about improves the readability for your audience. Further, your introduction highlights your claims about gentrification in Atlanta! I'm excited to continue reading!

      Claim: When lower income neighborhoods experience gentrification, the population demographic undergoes drastic change (i.e. higher incomes). In turn, this may lead to positive benefits for the economy as well as the safety and happiness of residents.

      Evidence: There is no evidence in this specific paragraph, but I assume that the other pages on Ponce City Market and Krog Street Market will act as examples. Maybe give a brief introduction of both in order to provide evidence in this paragraph.

    1. Thesis: The rhetoric of the built environment of Atlanta shows that racial discrimination, white flight, car dominated transportation network, and segregation by race and class have caused Atlanta to have the highest income inequality ratio in the country, and the same factors that led to severe income inequality in Atlanta are perpetuating the problem today.

      The author makes the claim that Atlanta has a predominantly car-based transportation system and that race, class, and racial discrimination have been determinants of income inequality in Atlanta, stating it to be the highest in the country. Providing data or a referenced source would help to give clarity to Atlanta's ratio compared to all other cities in the U.S.

      The author's thesis is very thought-provoking and descriptive, but could be broken up into two or three sentences.

    2. Built Environment Analysis (DRAFT)

      Overall, the author does a great job of providing claims and arguments, but the draft in totality is not complete and needs the inclusion of citations from sources and data, graphs, etc. to provide evidence for the claims that are being made, and also for general reference when mentioning ratios and numbers. Also, the author did not incorporate multiple modes of presentation. To meet the requirement, the author should provide photographs, videos, charts, etc., which will also help to give the stated claims and arguments a varied perspective.

    3. As a result of a long history of white flight and racial discrimination, Atlanta’s transportation network is predominately designed for travel by car. Consistent public transportation is present downtown and in the immediately surrounding areas. Evidence:

      This section makes the claim that Atlanta has an intentionally designed, predominantly car-based transportation system and that it has a connection to past racial segregation.

    4. Cost of living map and MARTA map side by side

      Listed here is a description of two photographs that are to be compared; however, there are no photographs posted to compare. The author should provide those images for reference and comparison, as well as a description of what is being compared and why.

    5. The neighborhood one grows up in has been shown to impact their chances for upward economic mobility, therefore gentrification and neighborhoods segregated by class perpetuate income inequality.

      The author makes the claim that the neighborhood of one's upbringing has a direct correlation to one's future economic standing. Examples of how this statement may be accurate should be provided in this area.

    6. The Fading American Dream: Trends in Absolute Income Mobility Since 1940” by Raj Chetty and Nathaniel Hendren “All Cities Are Not Created Unequal” by Alan Berube

      The author provides a list of evidence-based sources but does not provide key details on how the sources support the claim. The author needs to provide citations and references from the sources to support the claim that is being made.

    7. history of white flight

      Listed here is a term that was unfamiliar; research shows that there is a book with this title, White Flight by Kevin M. Kruse: http://press.princeton.edu/titles/8043.html

      The author could provide a brief description of the term, because it will enhance this specific claim, and also include the link to the book's overview and how it possibly relates to the claim.

    8. While there is some public transportation for people living further away from the center of the city, the current accommodations are insufficient for people without cars.

      The author makes the claim that existing public transportation does not adequately serve persons without cars, while acknowledging its benefits to persons located somewhat far from the center of Atlanta.

    9. The trend of Atlanta’s middle and upper classes moving out to the suburbs is shifting, and these groups are beginning to move back into the city. Therefore, neighborhoods are being gentrified to meet the growing demand.

      The author makes the claim that Atlanta neighborhoods are becoming more gentrified due to the influx of middle-class persons/families.

    10. Low-income residents that have settled close to the city, along public transportation routes, are having to move further out because the gentrification of neighborhoods raises the cost of housing.

      The author's claim is that gentrification and higher property taxes in Atlanta neighborhoods have caused persons/families of lower incomes to leave the city, allowing middle-class persons/families to move in. Specific examples of this scenario in specific Atlanta areas should be incorporated into the author's claim and provided for reference.

    11. The quality and quantity of public transportation decreases as you move further away from the center of the city. Consequently, those living in poverty who have relocated further away from the city are in a worse situation because they do not have the same amenities available to them.

      This claim explains the disadvantages that lower-income persons/families face when leaving areas that provide abundant access to public transportation and moving to areas with little public transportation that is prone to be of lower quality. An example of this case should be provided in this area, comparing public transportation between a city area and an area where it is lacking.

    12. “Atlanta: Unsafe at any Speed: Transit Fatality Raises Issues of Race, Poverty and Transportation Justice” by Laurel Paget-Seekins “Health Impact Assessment of the Atlanta Beltline” by Catherine Ross

      This is a list of the sources that the author included to support the claim, but there are no specific details provided to explain how or why they benefit the argument being made.

      The author should provide direct, specific evidence from the listed sources to support the claim.

    13. “Using Vehicle Value as a Proxy for Income: A Case Study on Atlanta’s I-85 HOT Lane” by Sara Khoeini and Randall Guensler “Atlanta: Unsafe at any Speed: Transit Fatality Raises Issues of Race, Poverty and Transportation Justice” by Laurel Paget-Seekins “The Human Scale” by Andreas Mol Dalsgaard

      This is a list of the evidence-based sources for the author's claim. The author should add links to the sources and provide specific evidence and citations from the sources that pertain to the claim that is being made.

    14. Photo of bench in Little Five Points

      The author makes note of a photograph that is not posted, so there needs to be a photograph in this area, as well as a description of its relation to the claim.

    15. “Atlanta: Unsafe at any Speed: Transit Fatality Raises Issues of Race, Poverty and Transportation Justice” by Laurel Paget-Seekins “How Cities Use Design to Drive Homeless People Away” by Robert Rosenberger

      Here are the sources provided to support the author's claim, but the evidence to back up the claim is not provided.

    16. “CHANGING BOHEMIA Little Five Points, a Haven of Counterculture, Faces Gentrification and Dissension” by Melissa Turner “Health Impact Assessment of the Atlanta Beltline” by Catherine Ross

      Sources for this claim focus on the Atlanta Beltline and the Little Five Points area; however, the author does not include any major points or references to enhance the claim about gentrification in Atlanta areas directly. An incorporation of in-text citations is needed.

    17. “Atlanta: Unsafe at any Speed: Transit Fatality Raises Issues of Race, Poverty and Transportation Justice” by Laurel Paget-Seekins “The Human Scale” by Andreas Mol Dalsgaard

      Provided here is a list of evidence-based sources to reiterate the author's claim.

      Again, the author should add links to the sources and provide specific evidence and citations from the sources that pertain to the claim being made.

  18. Mar 2017
    1. ittle direct indication that the Trump administration or congressional leaders known for attacking scientific research on climate change and human health are looking to exploit reproducibility campaigns as a political opportunity.

      Easy to connect these dots though.

    1. Eve Marder, a neurobiologist at Brandeis University and a deputy editor at eLife, says that around one third of reviewers under her purview sign their reviews.

      Perhaps these could routinely become page notes?

    2. If Kriegeskorte is invited by a journal to write a review, first he decides whether he’s interested enough to review it. If so, he checks whether there’s a preprint available—basically a final draft of the manuscript posted publicly online on one of several preprint servers like arxiv and biorxiv. This is crucial. Writing about a manuscript that he’s received in confidence from a journal editor would break confidentiality—talking about a paper before the authors are ready. If there’s a preprint, great. He reviews the paper, posts to his blog, and also sends the review to the journal editor.

      Interesting workflow and within his rights.

    3. The tweet linked to the blog of a neuroscientist named Niko Kriegeskorte, a cognitive neuroscientist at the Medical Research Council in the UK who, since December 2015, has performed all of his peer review openly.

      Interesting...

  19. Feb 2017
    1. object you are photographing b

      "Maybe give some examples..."

    2. I hope to show with this tutorial, however, that the Dino-Lite Premier AM-311S has the potential to create useful models at an affordable price.

      "Good introduction of scope..."

    3. capture images for processing

      "Maybe be more specific..." (generic commentary)

    1. The struggle between Whewell and Lubbock represented two distinct visions of what a referee might be. Whewell was the authoritative generalist, glancing down on the landscape of knowledge. He was unconcerned with — and probably not in a position to critique — the details. Such referees were, according to the Royal Society's president, “Elevated by their character and reputation above the influence of personal feelings of rivalry or petty jealousy”4. Lubbock was a younger specialist, Airy's equal. This allowed him to take a fine-tooth comb to Airy's arguments; it also put him in the position of reviewing a direct competitor.

      Two versions of what a review is.

    1. Pivotal roles are played by three enzymes (phosphofructokinase (PFK), pyruvate kinase (PK) and phosphofructokinase/fructose-2,6-bisphosphatase (PFKFB)) through their inhibition or activation by three reaction intermediates (fructose-1,6-bisphosphate (F16BP), fructose-2,6-bisphosphate (F26BP), and phosphoenolpyruvate (PEP)) in glycolysis. These enzymes have multiple isoforms (PFKL/M/P, PKM1/M2/L/R and PFKFB1-4) which are subjected to contrasting allosteric regulations [9–11]. Each isoform, therefore, affects the glycolytic activity in a distinct manner. All three isoforms of PFK are activated by F6P and F26BP [12], but only PFKM and PFKL are activated by F16BP [13–15]. PFKFB is a bifunctional enzyme whose kinase and bisphosphatase domains catalyze the formation and hydrolysis reaction of F26BP, respectively [9,16]. Isozymes of PFKFB differ in their kinase and phosphatase activities as well as in their sensitivity to feedback inhibition by phosphoenolpyruvate (PEP) [17–19]. Thus, each isozyme of PFKFB has a profoundly distinct capacity in modulating PFK activity. Pyruvate kinase (PK) in mammalian systems is encoded by two genes that can produce two isoforms each. Except for the PKM1 isoform, the other three isoforms of PK, PKM2, PKL and PKR, are activated by F16BP to varying extents [11]. The M2 isoform of PK, in addition to activation by F16BP, is also under the control of a host of allosteric modulators including serine, succinylaminoimidazolecarboxamide ribose-5-phosphate (SAICAR) and phenylalanine among others [

      Need a figure presenting the regulation network.

  20. Jan 2017
  21. Oct 2016
  22. Aug 2016
    1. Clarity on what qualifies as a respected preprint

      Why should we respect a preprint? I'm not sure that we should respect anonymously peer reviewed journal articles as much as we do. It's important to remain critical, and I worry that trying to put a veneer of 'respectability' over preprints is not as helpful as expecting people to read them to judge content.

  23. Jul 2016
    1. Page 62

      Borgman discussing the purpose of peer review

      Pre-publication mechanisms serve as expert filters on what becomes part of the scholarly record, thereby winnowing down researchers' reading lists.

    2. Page 60

      The use of a publication form as a proxy measure for the quality of research productivity has distorted the peer-review system so severely that some consider it broken. Peer reviewing is an expensive process, requiring considerable time and attention from editors, editorial board members, and other reviewers. Top journals in the sciences and medicine put fewer than half of the submitted papers through a full peer review process, rejecting the remainder on an initial editorial review, and ultimately publish 6 to 10% of the total submissions. Particularly in the sciences, researchers are under so much pressure to place papers in top-tier journals that they submit them to the same journals, whether or not the content is appropriate.

  24. Jun 2016
    1. No Bias, No Merit: The Case against Blind Submission

      Fish, Stanley. 1988. “Guest Column: No Bias, No Merit: The Case against Blind Submission.” PMLA 103 (5): 739–48. http://www.jstor.org/stable/462513.

      An interesting essay in the context in which I'm reading it (alongside Foucault's "What Is an Author?") in preparation for a discussion of scientific authorship.

      Among the interesting things about it is the way it encapsulates a distinction between the humanities and sciences in method (though Fish doesn't see it, and it comes back to bite him in the Sokal affair). What Frye thinks is important because he is an author-function in Foucault's terms, i.e. a discourse initiator to whom we return for new insight.

      Fish cites Peters and Ceci 1982 on peer review, and sides with those who argue that ethos should count in review of science as well.

      Also interesting as an illustration of how much the field changed, from New Criticism in the 1970s (when the first draft was written) until "now," i.e. 1989, when political criticism is the norm.

    2. Nevertheless, there were a few who questioned that definition of fairness and challenged the assumption that it was wrong for reviewers to take institutional affiliation and history into consideration. "We consider a result from a scientist who has never before been wrong much more seriously than a similar report from a scientist who has never before been right. . . . It is neither unnatural nor wrong that the work of scientists who have achieved eminence through a long record of important and successful research is accepted with fewer reservations than the work of less eminent scientists" (196). "A reviewer may be justified in assuming at the outset that [well-known] people know what they are doing" (211). "Those of us who publish establish some kind of track record. If our papers stand the test of time . . . it can be expected that we have acquired expertise in scientific methodology" (244). (This last respondent is a woman and a Nobel laureate.)

      Fish reporting on the minority in response to Peters and Ceci, who argued that track records should count in peer review of science.

    3. A similar point is made by some of the participants in a discussion of peer review published in the Behavioral and Brain Sciences: An International Journal of Current Research and Theory with Open Peer Commentary (5 [1982]: 187-255). The occasion was the report of research conducted by D. P. Peters and S. J. Ceci. Peters and Ceci had taken twelve articles published in twelve different journals, altered the titles, substituted for the names of the authors fictitious names identified as researchers at institutions no one had ever heard of (because they were made up), and resubmitted the articles to the journals that had originally accepted them. Three of the articles were recognized as resubmissions, and of the remaining nine, eight were rejected. The response to these results ranged from horror ("It puts at risk the whole conceptual framework within which we are accustomed to make observations and construct theories" [245]) to "so what else is new."

      Peters & Ceci 1982 comes up!

    4. Predictably, Schaefer's statement provoked a lively exchange in which the lines of battle were firmly, and, as I will argue, narrowly, drawn. On the one hand those who agreed with Schaefer feared that a policy of anonymous review would involve a surrender "to the spurious notions about objectivity and absolute value that . . . scientists and social scientists banter about"; on the other hand those whose primary concern was with the fairness of the procedure believed that "[j]ustice should be blind" ("Correspondence" 4). Each side concedes the force of the opposing argument: the proponents of anonymous review admit that impersonality brings its dangers, and the defenders of the status quo acknowledge that it is important to prevent "extraneous considerations" from interfering with the identification of true merit (5).

      Discussion of debate at MLA about plan to introduce blind submission to PMLA and comparison with sciences and social sciences.

    1. ouble-blind) peer review became an established component of the post-war scientific bureaucracy (Chubin & Hackett, 1990, pp. 19–24)

      history of peer review


  25. Apr 2016
    1. Does peer review work? Is peer review broken? The vast majority of authors believe it improves their final work, and since it’s evolving from this solid base, it’s clearly not broken. But before we can have a useful discussion about its purpose and effectiveness, we need to agree on which approach to peer review we’re talking about, then whether our expectations of it are reasonable and accurate.
    2. Here are some variables around peer-review we have to understand before we know what kind of peer review we're actually talking about:

       - Is it blinded? If it is blinded, is it single-blinded or double-blinded?
       - Is there statistical or methodological review in addition to external peer-review?
       - Are the peer reviewers truly experts in the field or a more general assemblage of individuals?
       - What are the promises and goals of the peer review process?
       - What type of disclosure of financial or other potential competing interests is made? Are reviewers aware of these?
       - Is there a senior editor of some sort involved along with outside peer reviewers?
       - Is the peer-review "inherited" from another body, such as a committee or a preceding journal process (e.g., in "cascading" title situations or when expert panels have been involved)?
       - Are there two tiers of peer review within the same journal's practices?
       - Is the peer-review done at the article level or at the corpus level (as happens with some supplements)?
       - Is plagiarism-detection software used as part of the process?
       - Are figures checked for manipulation?
       - Is the peer reviewer graded by a senior editor as part of an internal evaluation and improvement process?
    1. White (1984, cited by Vaughan, 1991) reported on a study conducted at California State University in which two essays were tucked into a huge sample of essays and read a year apart by the same readers using a 6-point scale. The reading a year later produced scores that were identical to the first in only 20 per cent of the cases. The scores differed by one point or less in 58 per cent of cases and 2 points or less in 83 per cent of the cases. As White points out, a 1-point difference is generally considered unproblematic, but on a 6-point scale the difference between a 3 and a 4 is the difference between a pass and a fail. Obviously, then, changes in examiner severity/leniency over-time have implications for maintaining standards, and must be monitored. Research has been conducted into variations in examiner severity/leniency during the marking of a particular allocation of scripts, a marking period, and over more extended periods of time.

      intrarater agreement was only 20% (identical scores on rereading a year later)

    2. According to Stemler, consistency estimates of interrater reliability assume that it is not necessary for judges to share a common meaning of the rating scale, so long as each judge is consistent in their classifications.

      Wittgenstein's beetle in a box

    3. (2004) notes that most research papers describe interrater reliability as though it is a single, universal concept. He argues this practice is imprecise and potentially misleading. The specific type of interrater reliability being discussed should be indicated. He categorises the most common statistical methods for reporting interrater reliability into one of three classes: consensus estimates; consistency estimates; and measurement estimates.

      Stemler 2004
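
      A minimal sketch of that distinction (my own illustration with invented scores, not data from the cited studies): two raters separated by a constant one point on a 6-point scale show zero consensus (exact agreement) but perfect consistency (correlation).

      ```python
      import numpy as np

      rng = np.random.default_rng(3)
      rater_a = rng.integers(2, 7, size=200)  # scores from 2 to 6 on a 6-point scale
      rater_b = rater_a - 1                   # rater B is uniformly one point harsher

      consensus = (rater_a == rater_b).mean()            # consensus: exact agreement
      consistency = np.corrcoef(rater_a, rater_b)[0, 1]  # consistency: correlation

      print(f"consensus (exact agreement): {consensus:.0%}")    # 0%
      print(f"consistency (correlation):   {consistency:.2f}")  # 1.00
      ```

      By a consistency standard the raters agree perfectly; by a consensus standard they never agree. On White's 6-point scale (above), that constant one-point gap can be the difference between a pass and a fail.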

    1. in the latter both the wide differential in manuscript rejection rates and the high correlation between referee recommendations and editorial decisions suggests that reviewers and editors agree more on acceptance than on rejection.

      In "specific and focussed" fields, the agreement tends to be more on acceptance than rejection.

    2. In the former there is also much more agreement on rejection than acceptance

      In "general and diffuse" fields, there is more agreement on paper rejection than in "specific and focussed."

    3. Referees of grant proposals agree much more about what is unworthy of support than about what does have scientific value.

      Grant referees are better at agreeing on inadequate work than adequate