93 Matching Annotations
  1. Last 7 days
  2. Jun 2020
  3. May 2020
  4. Apr 2020
  5. Dec 2019
    1. By the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs, * * * I rushed out of the room. Page 43

      This epigraph to the frontispiece of the 1831 edition quotes from Book I, chapter 4, p. 43 of the original print edition: the scene in which the Creature comes alive in Victor's laboratory. The frontispiece depicts the Creature's birth and was engraved for the 1831 edition by William Chevalier, adapting a painted illustration by Theodor von Holst. This picture appears on our interface.


      Unlike the three-volume 1818 edition, the 1831 revision was published in a single volume (with chapter renumbering and extensive revision) in Colburn and Bentley's "Standard Novels" series. Outside London, the novel was published as a standalone volume--not a part of the London-based "Standard Novels" series--in Edinburgh and Dublin.

    3. The day of my departure at length arrived Page 31.

      This epigraph appears underneath an illustration on the novel's first 1831 title page, facing the frontispiece; it too was engraved by William Chevalier after a painting by Theodor von Holst, Colburn and Bentley's illustrators for the "Standard Novels" series. The epigraph refers readers to chapter 3, page 31, in which Victor first departs the family home to attend the University of Ingolstadt--and to study there the sciences that will motivate him to create the Creature. The illustration shows Elizabeth Lavenza standing in the doorway of their home, smiling, as Victor steps into the street.

    4. the preface. As far as I can recollect, it was entirely written by him.

      The 1818 edition of Frankenstein was published anonymously, and early readers and reviewers often attributed it to Percy Shelley, not Mary. According to Mary in 1831 (and subsequent scholarship), Percy did largely write the 1818 "Preface," and Charles Robinson now estimates that Percy contributed between 4,000 and 5,000 words to Mary's 72,000-word manuscript. See Robinson, ed., The Original Frankenstein; or, the Modern Prometheus, by Mary Shelley (with Percy Shelley) [New York: Vintage Classics, 2009].

    5. illustrated

      The 1831 edition has two illustrations, the frontispiece and the picture on the first title page, crafted by T. Holst and W. Chevalier, painter and engraver for Colburn and Bentley's "Standard Novels" series.


      The first title page in the 1831 edition lists only a short version of the original title--Frankenstein--and Mary Shelley's name as author. A second title page following this one gives the original full title and identifies Mary only as "The Author of The Last Man, Perkin Warbeck, &c. &c." This format was typical of the book series in which the 1831 edition of the novel appeared: Henry Colburn's and Richard Bentley's "Standard Novels." All books in the series have well-illustrated frontispieces and first title pages, followed by a more detailed title page without illustrations. It was this publishing format that launched Frankenstein as a widely reprinted popular novel.


      The 1823 edition's title page differs almost entirely from the 1818 original title page. The title is the same, but for the first time the 1823 edition lists Mary Shelley as the novel's author. (Though it does not list William Godwin, her father, as the editor responsible for the minor revisions in this edition.) The page also shows that the 1823 edition appears in two volumes, not three as in 1818. The new publisher for the novel is G. and W.B. Whittaker.


      As in the 1818 edition, this title appears on its own page. It is followed by a page listing the printer Thomas Davison, and then by the full title page.

  6. Nov 2019
  7. Sep 2019
  8. Jul 2019
    1. I am a researcher working on topics related to subjective well-being (sometimes also called happiness).

      I should preface by saying that I have relatively modest training in statistics, and the arguments put forth in this paper are quite out of my depth. For example, I had not heard of things like first-order stochastic dominance before reading this paper. I hope that being open about the things I might be ignorant of can be a path for me to develop a deeper understanding of the concerns raised in the paper.
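      Since first-order stochastic dominance came up as unfamiliar, here is a minimal sketch of the idea (my own illustration, not taken from the paper): for distributions over ordered categories, A first-order stochastically dominates B when A's cumulative distribution lies at or below B's everywhere, i.e., A puts more mass on the higher categories.

```python
# First-order stochastic dominance (FOSD) for discrete distributions over
# ordered categories 0..K. This is a toy illustration, not the paper's code.

def cdf(pmf):
    """Cumulative distribution from a list of category probabilities."""
    total, out = 0.0, []
    for p in pmf:
        total += p
        out.append(total)
    return out

def fosd(a, b, tol=1e-12):
    """True if distribution a first-order stochastically dominates b:
    a's CDF is at or below b's CDF at every category."""
    return all(ca <= cb + tol for ca, cb in zip(cdf(a), cdf(b)))

# Hypothetical answers on a 0/1/2 happiness scale.
group_a = [0.1, 0.3, 0.6]   # more mass on the top category
group_b = [0.3, 0.4, 0.3]

print(fosd(group_a, group_b))  # True: A dominates B
print(fosd(group_b, group_a))  # False
```

The appeal of FOSD, as I understand it, is that it is a claim about the whole distribution rather than a single summary number like the mean.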

      I think (though I could well be wrong) the paper is saying that on an ordinal measure like happiness, groups and individuals differ in their 'standard' for reporting happiness (e.g., what it takes to push my happiness from 0 to 1 is different from what pushes your happiness from 0 to 1). This makes comparing 'latent' (or true levels of) happiness across groups difficult, if not impossible.

      Put differently, if I report a 1 and you report a 0, I cannot be certain that I am happier than you. It could be that my standard for reporting a 1 is lower than yours. The authors show that shifting this standard around changes inferences about 'true' happiness.
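      To make this concrete for myself, here is a toy model of my own (not the authors' setup): two people whose latent happiness is on the same underlying scale, but who apply different personal thresholds when turning it into a 0/1 report.

```python
# Toy model (my own, not from the paper): an ordinal report is 1 if latent
# happiness clears a personal threshold ("standard"), else 0.

def report(latent, threshold):
    """Ordinal report: 1 if latent happiness exceeds the personal standard."""
    return 1 if latent > threshold else 0

# You are actually (latently) happier than me...
latent_me, latent_you = 0.5, 0.6

# ...but my standard is low and yours is high, so the reports reverse us.
me = report(latent_me, threshold=0.4)    # low standard -> reports 1
you = report(latent_you, threshold=0.7)  # high standard -> reports 0

print(me, you)  # 1 0: the ordinal reports reverse the latent ordering
```

The point of the toy is just that without knowing the thresholds, the ordinal reports alone cannot recover the latent ordering.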

      I think this is an important point. Happiness researchers have grappled with this to some degree, though from a more abstract perspective rather than a statistical/mathematical one. For example, one hypothesis about how people report life satisfaction is that they compare their life to an ideal life (here, the ideal life sets the standard; i.e., two people with exactly the same life can have different levels of life satisfaction because they have different ideas about the ideal life). Related research on social comparison could be interpreted as moving the standard for happiness higher (instead of lowering 'true' happiness). In contrast, things like gratitude may lead to higher happiness ratings because they lower the happiness standard (instead of increasing 'true' happiness). The set-point hypothesis can be interpreted as 1) people fully adapting their 'true' happiness back to baseline levels after experiencing major life events, or 2) people creating a new happiness standard after experiencing a major life event.

      This paper prompts me to think harder about happiness measures. It could well be the case that the standard people set for their happiness level (a cognitive process?) may be just as important as 'true' happiness itself.

  9. Apr 2019
  10. Mar 2019
    1. To investigate whether and how user data are shared by top rated medicines related mobile applications (apps) and to characterise privacy risks to app users, both clinicians and consumers.

      "24 of 821 apps identified by an app store crawling program. Included apps pertained to medicines information, dispensing, administration, prescribing, or use, and were interactive."

  11. Jan 2019
    1. This information was not explicitly stated in either article, but the sample and community description makes it clear that the participants of these studies are the same people, though the sample sizes differ slightly (ns = 85 and 86). However, this redundancy did not produce any analysis problems because the correlation matrix in the Grigorenko et al. (2001) article was not positive definite.

      The duplication of data across articles and the non-positive definite dataset have never been fully explained. In light of Sternberg's history of self-plagiarism (see link below), this is troubling.


  12. Mar 2018
    1. The serendipity of networked practice together with a heightened attention to the importance of protecting the place of human interaction in education resulted in many conference presentations and publications

      Reflective practice, research, publication

  13. Dec 2017
  14. Oct 2017
  15. Sep 2017
    1. The problems here stem from a lack of comprehensiveness, interoperability, and critical mass uptake as the de facto platform for PPPR. The result of this is a mess of different platforms having different types of commentary on different articles, or sometimes the same ones, none of which can be viewed easily in a single, standardised way. That doesn’t seem very efficient.

      This is really key.

  16. Apr 2017
    1. Samson-Steinbach Delphine, Legeai Fabrice, Karsenty Emmanuelle et al. (2003) GénoPlante-Info (GPI): a collection of databases and bioinformatics resources for plant genomics. Nucleic Acids Res., 31, 179–182.

      Link to the article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC165507/

      (open access)

  17. Feb 2017
    1. Pivotal roles are played by three enzymes (phosphofructokinase (PFK), pyruvate kinase (PK) and phosphofructokinase/fructose-2,6-bisphosphatase (PFKFB)) through their inhibition or activation by three reaction intermediates (fructose-1,6-bisphosphate (F16BP), fructose-2,6-bisphosphate (F26BP), and phosphoenolpyruvate (PEP)) in glycolysis. These enzymes have multiple isoforms (PFKL/M/P, PKM1/M2/L/R and PFKFB1-4) which are subjected to contrasting allosteric regulations [9–11]. Each isoform, therefore, affects the glycolytic activity in a distinct manner. All three isoforms of PFK are activated by F6P and F26BP [12], but only PFKM and PFKL are activated by F16BP [13–15]. PFKFB is a bifunctional enzyme whose kinase and bisphosphatase domains catalyze the formation and hydrolysis reaction of F26BP, respectively [9,16]. Isozymes of PFKFB differ in their kinase and phosphatase activities as well as in their sensitivity to feedback inhibition by phosphoenolpyruvate (PEP) [17–19]. Thus, each isozyme of PFKFB has a profoundly distinct capacity in modulating PFK activity. Pyruvate kinase (PK) in mammalian systems is encoded by two genes that can produce two isoforms each. Except for the PKM1 isoform, the other three isoforms of PK, PKM2, PKL and PKR, are activated by F16BP to varying extents [11]. The M2 isoform of PK, in addition to activation by F16BP, is also under the control of a host of allosteric modulators including serine, succinylaminoimidazolecarboxamide ribose-5-phosphate (SAICAR) and phenylalanine among others [

      Need a figure presenting the regulation network.

  18. Jan 2017
    1. Belinda Cleary For Daily Mail Australia

      Who is this author? Does she have an area of expertise that's related to this story? How would you find out? What is this source and what are its biases? How would you find out?

  19. Aug 2016
    1. Page XVIII

      Borgman notes that no social framework exists for data that is comparable to the framework that exists for analysis. Cf. Kitchin 2014, who argues that pre-big-data, we privileged analysis over data to the point that we threw the data away afterwards. This is what creates the holes in our archives.

      She notes that these capabilities [of data management] must be compared to the remarkably stable scholarly communication system in which they exist. The reward system continues to be based on publishing journal articles, books, and conference papers. Peer review legitimizes scholarly work. Competition and cooperation are carefully balanced. The means by which scholarly publishing occurs is in an unstable state, but the basic functions remain relatively unchanged. While capturing and managing the "data deluge" is a major driver of scholarly infrastructure developments, no framework for data exists that is comparable to that for publishing.

  20. Jun 2016
    1. The Future of Publications in the Humanities

      Fuchs, Milena Žic. 2014. “The Future of Publications in the Humanities: Possible Impacts of Research Assessment.” In New Publication Cultures in the Humanities: Exploring the Paradigm Shift, edited by Péter Dávidházi, 147–71. Amsterdam University Press. http://books.google.ca/books/about/New_Publication_Cultures_in_the_Humaniti.html?hl=&id=4ffcoAEACAAJ.

  21. Apr 2016
    1. Does peer review work? Is peer review broken? The vast majority of authors believe it improves their final work, and since it’s evolving from this solid base, it’s clearly not broken. But before we can have a useful discussion about its purpose and effectiveness, we need to agree on which approach to peer review we’re talking about, then whether our expectations of it are reasonable and accurate.
    2. Here are some variables around peer-review we have to understand before we know what kind of peer review we’re actually talking about: Is it blinded? If it is blinded, is it single-blinded or double-blinded? Is there statistical or methodological review in addition to external peer-review? Are the peer reviewers truly experts in the field or a more general assemblage of individuals? What are the promises and goals of the peer review process? What type of disclosure of financial or other potential competing interests is made? Are reviewers aware of these? Is there a senior editor of some sort involved along with outside peer reviewers? Is the peer-review “inherited” from another body, such as a committee or a preceding journal process (e.g., in “cascading” title situations or when expert panels have been involved)? Are there two tiers of peer review within the same journal’s practices? Is the peer-review done at the article level or at the corpus level (as happens with some supplements)? Is plagiarism-detection software used as part of the process? Are figures checked for manipulation? Is the peer reviewer graded by a senior editor as part of an internal evaluation and improvement process?
  22. Feb 2016
    1. 44-45 Ingelfinger rule: won't publish articles that have been presented, discussed with reporters, or published in any form elsewhere--including data. Once a paper is under consideration and production, it can't be discussed with reporters.

      This clearly harms science in the interest of journals.

  23. Jul 2015
    1. Moving Museum Catalogues Online: An Interim Report from the Getty Foundation ("The Online Scholarly Catalogue Initiative")

      2012 interim report from the Getty Foundation regarding their activities moving towards digital publishing

      • does this deal with issues of fair use, permissions, and copyright?
  24. May 2015
    1. Author and peer reviewer anonymity haven’t been shown to have an overall benefit, and they may cause harm. Part of the potential for harm is if journals act as though it’s a sufficiently effective mechanism to prevent bias.
    2. Peer reviewers were more likely to substantiate the points they made (9, 14, 16, 17) when they knew they would be named. They were especially likely to provide extra substantiation if they were recommending an article be rejected, and they knew their report would be published if the article was accepted anyway (9, 15).