  1. May 2016
    1. However, this approach has to face the potential limitations that have been widely discussed in the past, e.g., McGath (2013) stressed how new formats develop and existing ones evolve officially and unofficially thus making the maintenance of the registry challenging.

      Any solution that does not recognize this will fail. Scientists have to be able to utilize the latest technology.

    2. This situation largely restricts the facilities that a repository can offer to support data publication.

      Which is why, if there is a specialized repository, it is generally better to submit the data there. Of course, that often involves more effort.

    3. Independently of file formats, repositories have some limitations on allowed file sizes. They tend to have an upper bound limit yet are open to negotiate extensions to this limit with additional costs (cf. Sec. 4.4). Dryad allows uploading of no more than 10 GB of material for a single publication; 3TU.Datacentrum supports the upload of datasets up to 4 GB; Zenodo currently accepts files up to 2 GB although it reports that the current infrastructure has been tested with 10 GB files; Figshare enables users to store up to 1 GB data in their private space with files up to 250 MB each.

      Remains a challenge for some types of data. But, then again, we've always limited the size of publications to certain contexts: a journal article is of a certain size; if it is larger, it becomes a book and is handled differently. So perhaps the fact that a single repository cannot handle all data sizes is to be expected.
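      The limits quoted above can be captured in a small lookup table; a minimal sketch, using only the repository names and figures from the quoted passage (the function name is mine, and these quotas change over time, so treat the values as a snapshot rather than a reference):

```python
# Quoted upper bounds on upload size, in bytes (snapshot from the passage above).
# Dryad's limit is per publication; Zenodo's and Figshare's are per file.
REPO_LIMITS = {
    "Dryad": 10 * 1024**3,           # 10 GB per publication
    "3TU.Datacentrum": 4 * 1024**3,  # 4 GB per dataset
    "Zenodo": 2 * 1024**3,           # 2 GB per file (infrastructure tested to 10 GB)
    "Figshare": 250 * 1024**2,       # 250 MB per file (1 GB total private space)
}

def fits(repo: str, size_bytes: int) -> bool:
    """Return True if an upload of the given size is within the repo's quoted limit."""
    return size_bytes <= REPO_LIMITS[repo]
```

      A 1 GB file would clear Zenodo's per-file limit but not Figshare's, which illustrates the point in the comment: no single repository handles all data sizes.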

    4. For content format, selected repositories somehow neglect it and describe how the dataset is organised through dataset documentation (cf. Sec. 4.2).

      Thus leading to criticism of generalist repositories

    5. The repository exposing the largest variety of subjects is Dryad where a total of 19,829 distinct subjects have been used to characterise its 9,676 datasets.

      Interesting, because I thought Dryad focused more on earth and evolutionary sciences.

    6. The selected repositories have published a total of 336,647 datasets (Tab. 2). The large majority of such datasets – 85% circa – has been published in the last three years, namely 33% circa of datasets have been published in 2013, 29% circa in 2014, and more than 22% in 2015

      Does, perhaps, suggest that data sharing is picking up. I wonder, then, whether we should use the term "data publishing" instead of data sharing, as it is perhaps a more accurate and less loaded term.

    7. 320,415

      Dwarfs all the others. But I wonder at the average size of the data set here?

    8. This study embraces the definition of (research) data given by Borgman (2015), i.e., “entities used as evidence of phenomena for the purpose of research or scholarship”, and uses “dataset” to refer to the unit of data subject of the data publishing activity, no matter how many files it materialises (Renear et al., 2010). This “dataset” definition includes the term “data package” as adopted by Dryad to mean a set of data files associated with a publication, as well as “dataset” and “fileset” as used by Figshare to indicate data (the former) and a group of multiple files citable as a single object (the latter).

      Reasonable definitions of data and data set.

    9. Scientific data repositories are often proposed as instruments for supporting data publishing as they provide facilities for all the different players involved in this process.

      I think they do more than just "support data publishing", I think they "publish data", i.e., they are the publishers.

    10. They are called to implement systematic data stewardship practices thus to foster adequate scientific datasets collection, curation, preservation, long term availability, dissemination and access.

      Nice encapsulation of the role of a data repository.

    1. “Our ability to understand what to build is so far behind what we can build,” said Dr. Minshull,

      I tend to agree with this statement, particularly if one is going to synthesize "an original bacterial genome".

    2. They were therefore not supposed to discuss the idea publicly before publication.

      Since when can't you discuss results at conferences that are subsequently published?

    1. A positive correlation suggests that the new indicator at least partially reflects the quality that the better known indicator signifies.

      Presuming that the previous indicator indicates quality, which is somewhat hard to argue.

    1. you are booted out of foster care and their argument is based on some of the research that you are now citing that these young people are not really ready for the adult world.

      Saying that your brain isn't fully developed does not equate with "young people are not really ready for the adult world". One might argue that being in the "real world" will help tune any subsequent brain development appropriately for real world situations.

    1. In general, we intend the software citation principles to cover the minimum of what is necessary for software citation for the purpose of software identification.

      We have taken the same tack in data citation: we stick to just the metadata that is required for citation, not for what is required for reproducibility or understanding the software. The analogy we use is the article citation: the metadata required for citation does not include all the details of what is in the paper.

    2. any other research products

      Should be: "other research products" or "any other research product"

  2. Apr 2016
    1. Prevalence estimates suggest that 1.5–3.0% of the population will develop bipolar disorder,1,2 which is the sixth leading cause of disability worldwide.

      I didn't realize it was that common

    1. They focus more and more precious institutional energy on reaching a platform’s audience rather than their own, and their voice changes.

      Publishers that lose their identity will cease to exist as they are no longer necessary. So shouldn't that counteract this tendency?

    2. The idea that Facebook and its ilk could act as information gatekeepers is also a bleak prospect.

      The difference between a platform and a publisher, I think. Publishers may be selective about what they publish. A good argument for Facebook also to be careful.

    1. The problem with cooking up a system is that it trades the creative contributions of thousands of individuals for the more refined and articulate plan of a small number of elite advocates. If the advocates were not as accomplished as they are, it would be easy to dismiss any proposed system out of hand. But intelligence is a great seductress; it slyly leads us to assume that being smart and being right are the same thing. Meanwhile, the evidence to the contrary is messy and contradictory.

      Nicely stated.

    1. Set up common e-infrastructures

      Or make better use of commercial ones

    2. Data sharing and stewardship is the default approach for all publicly funded research

      Open by default

    3. Open science also increases business opportunities.

      Has this been quantified anywhere? It makes sense that it would, but has it been directly studied?

    1. Make persistent annotations in texts by highlighting passages, adding notes, and tagging with keywords. Enable conversations within peer groups, journal and book clubs, classrooms and amongst the general public. Add context and depth to works and value to collections with annotations from authoritative sources and expert knowledge. Keep resources more current with the ability to update fixed content without sacrificing the version of record. Share passages, bookmarks, and digital citations with communities of readers or more broadly. Support pre- and post-publication peer review and commenting

      Maybe make the spacing between bullets slightly larger so it is easier to read the list (and fill up some of the empty space).

    1. Genius seems destined to favor those with too much. Want to show off how devastatingly smart you are by taking down someone else? Genius is the tool you need.

      Again, I find this a very odd reaction.

    2. what starts out as a desire to be heard often quickly morphs into an unreasonably angry demand to be recognized

      That is the problem.

    3. What’s more: To those who are already relentlessly criticized in comments and on social media, Genius’s granular focus can itself feel threatening—as though commenters already have bats, and someone handed them knives. 

      That is an interesting reaction. Coming from the scholarly community, I think granular feedback is a good idea, as it precisely targets what is being annotated and therefore critiqued rather than painting with broad strokes. Perhaps all scholars are pedants.

    1. Zhiyong Lu Discovering Biomedical Semantic Relations in PubMed Queries for Database Curation and Literature Retrieval

      I've always thought that this was a gold mine.

    2. GO annotation in InterPro: why stability does not indicate accuracy in a sea of changing annotations

      Annotation is a dynamic process; it is never done because knowledge is not static.

    3. Tonia Korves Exploring human-machine cooperative curation

      The future of curation: partnership with machines.

    4. crowd curation

      You can annotate with pictures too: From Wikipedia:

    1. Large datasets allow for analytic flexibility, and it is all too tempting to trawl a dataset for “significant” associations.

      But closed data doesn't solve this problem

    2. But there’s no doubt that damage was done.

      I don't think that closing data makes faulty analyses any less likely, based on the fact that the original paper that reported the vaccination-autism findings was in the literature for so long.

    1. A study in 2014 sought data from 217 studies published between 2000 and 2013. But the team could secure only 40% of what they requested, and responses varied according to the requester's seniority3.

      That is interesting.

    2. that citations for software or data have little currency in academia

      Not yet

    3. Researchers also point to the time sink that is involved in preparing data for others to view. Once the data and associated materials appear in a repository, answering questions and handling complaints can take many hours.

      Again, even more reason to elevate the credit one gets for sharing data. We might also ask why there isn't a similar burden in answering questions about published papers.

    4. “Everybody has a scary story about someone getting scooped,”

      OK. This is the problem right here. The person who publishes the data deserves full academic credit for the finding. If data are hard to acquire, the person who published the data deserves more credit than the person who analyzed it. If the data are easy to acquire or are generated automatically, then the person who analyzes it probably deserves more credit than the generator of the data.

    5. But many young researchers, especially those who have not been mentored in open science, are uncertain about whether to share or to stay private.

      To me, this is the most important next phase. We have to be trained in how to do Open Science, including proper behavior.

    6. A few psychology journals have created incentives to increase interest in reproducible science — for example, by affixing an 'open-data' badge to articles that clearly state where data are available. According to social psychologist Brian Nosek, executive director of the Center for Open Science, the average data-sharing rate for the journal Psychological Science, which uses the badges, increased tenfold to 38% from 2013 to 2015.

      Interesting statistic.

    7. On the other hand, scientists disagree about how much and when they should share data, and they debate whether sharing it is more likely to accelerate science and make it more robust, or to introduce vulnerabilities and problems.
    1. APP/PS1 and APP/PS1;Abca7−/−

      Need to get the RRID's for this one, although it looks like this needs to be registered.

  3. Mar 2016
    1. Ontoquest: A system that provides powerful yet easy-to-use query and reasoning utilities.

      We no longer use Ontoquest, correct?

    1. The chimps are seen on camera picking up stones, then lobbing them at the trees while letting out a "long-distance pant hoot vocalization." It's been mostly males observed engaging in this behavior (and only in West Africa), though there have been some females and juvenile chimps that have also taken part.

      Sounds a lot like my brothers. I don't think it is mysterious at all!

    1. The commoners who participate are just as importantly the commons, making it a dynamic and evolving eco-system.

      Love this phrase! A prevailing sense at this workshop was that not just the PhD's inhabit the commons; everyone does.

    1. In 2000, Roy Fielding published his PhD thesis Architectural Styles and the Design of Network-based Software Architectures – summarised by its mantra “hypermedia as the engine of application state“, and effectively establishing the Representational State Transfer (REST) as the software architectural style of the Web.

      Nice to have the originators acknowledged.

    2. While keeping a clear distinction between data and metadata seems like a good idea, in practice it can be quite hard as one researcher’s metadata is another researchers data. 

      Again, over-specifying is the kiss of death in these things. Better to make it simpler for the tagger and use other means for differentiating.

    3. So the LSID design suffered with a requirements for resolution that made it trickier (or at least gave the impression of being trickier) to use and mint, but in the end only the identifier bit of LSID was used.

      Again, why we kept the resolution services separate from the identifiers.

    4. Many large data providers in life sciences did not adopt LSIDs, probably because it meant too many changes to their architecture. Thus the exposure, knowledge and skills around LSIDs did not propagate to the masses.

      When we were designing the Resource Identification Initiative, we deliberately made sure that we didn't overload the requirements with technology. See report of pilot project as well.

  4. Feb 2016
    1. Hypothes.is offers an optional overlay and enables permissionless annotation by 3rd parties unaffiliated with either the content consumer or the content provider. Crucially, this bypasses much of the friction associated with working with publishers to provide an annotation overlay, although a consortium of publishers is working closely with hypothes.is

      Perhaps add this sentence in front of this section: "While deceptively simple in concept, the potential of Hypothes.is to provide a dynamic unifying layer across biomedicine is significant."

      Just feel we should put some punch in there.

    2. The hypothes.is service currently offers an overlay display where these annotations are placed on HTML or PDF views of documents.

      Perhaps also add: Hypothes.is annotations are designed for the web: they are interactive, sharable and globally searchable.
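      The "globally searchable" point can be illustrated against the public Hypothes.is search API; a minimal sketch, assuming the documented `/api/search` endpoint and its `uri` and `limit` query parameters (the target URI is a placeholder):

```python
import json
import urllib.parse
import urllib.request

# Public Hypothes.is search endpoint (no authentication needed for public annotations).
API = "https://api.hypothes.is/api/search"

def build_search_url(uri: str, limit: int = 20) -> str:
    """Build a query URL asking for annotations anchored to a given document URI."""
    return API + "?" + urllib.parse.urlencode({"uri": uri, "limit": limit})

def fetch_annotations(uri: str) -> list:
    """Fetch public annotations on a document; returns the 'rows' list from the JSON reply."""
    with urllib.request.urlopen(build_search_url(uri)) as resp:
        return json.load(resp)["rows"]
```

      Because annotations live in this web-addressable layer rather than inside any one document viewer, any client can query them this way.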

    3. First, large portions of the biomedical scholarly literature are completely inaccessible to automated, large scale analysis.

      Perhaps add a sentence: potentially large portions of the biomedical literature have been annotated, either by humans or automated algorithms, but these annotations are not easily shared, nor are they available for global analysis. Rather, each is locked in an individual silo.

    4. Chrome plugin

      Perhaps just "browser plug in" as we technically have bookmarklets for Safari and Firefox.

    5. associated with text

      Probably should mention that it is a web tool up front.

    1. libre access is better than gratis access

      As I read the distinction in Wikipedia, this sentence doesn't make sense. Gratis = free of charge; libre = unrestricted use regardless of charge. Example given: free beer (gratis) vs free speech (libre).

    1. We know that what is being proposed challenges a hierarchical arrangement of monumental proportions, a status system that is firmly fixed in the consciousness of the present faculty and the academy's organizational policies and practices. What is being called for is a broader, more open field where these different forms of scholarship can interact, inform, and enrich one another, and faculty can follow their interests, build on their strengths, and be rewarded for what they spend most of their scholarly energy doing.

      Good charge for Re-imagining scholarship

    2. This fourth dimension of scholarship has an integrity of its own, but is deeply embedded in the other three forms-the advancement, integration, and application of knowledge. In addition, the scholarship for teaching has three distinct elements: first, the synoptic capacity, the ability to draw the strands of a field together in a way that provides both coherence and meaning, to place what is known in context and open the way for connection to be made between the knower

      Yes, that is what I meant by my comment above.

    3. We want to challenge this understanding and argue that quality teaching requires substantive scholarship that builds on, but is distinct from original research, and that this scholarly effort needs to be honored and rewarded.

      Yes. I would argue that the best teaching is, in fact, integrating knowledge.

    1. Contrary to the strict ensheathment of BG on molecular layer synapses, astrocyte processes in the granular layer border small groups of granule cells and completely enwrap the glomeruli, the synaptic structures comprising mossy fibre rosettes, Golgi neuron boutons and granule cell dendrites, but fail to penetrate within these junctional complexes

      Has been known for a while.

    2. Compared to BG, very little is known about the function of velate protoplasmic astrocytes of the granular layer.

      Yes, we never really talk of them.

    3. It is now well established that the expression of scaffolding properties in BG is regulated by granule cells.

      makes sense

    4. The scaffolding phenotype is actually induced, or re-induced, and maintained by migrating neurons themselves, who regulate the expression of a specific set of molecules, needed to establish contact and to direct navigation

      Like that term: scaffolding phenotype

    5. While a purely radial trajectory may be consistent with BG-guided navigation, time-lapse video microscopy has revealed that the migration of molecular layer interneurons is actually more complex and involves multiple radial and tangential phases

      As in cortex

    6. Following the discovery that molecular layer interneurons do not originate from the external granular layer, but derive from VN progenitors that continue to divide in the PWM (Hallonet and Le Douarin, 1993, Zhang and Goldman, 1996a and Maricich and Herrup, 1999), it has been proposed that young basket and stellate cells migrate radially up to the edge of the EGL and become progressively integrated in the expanding molecular layer (e.g. Yamanaka et al., 2004).

      migration of interneurons

    7. On the whole, these findings indicate that the role of BG in granule cell migration is to provide an adhesive substrate and a conducive scaffold.

      Just like we thought

    8. Disruption of this interplay leads to a characteristic phenotype (Fig. 3B and D), in which: (i) the folia are formed but fuse to each other along the cortical surface; (ii) granule cells are decreased in number and remain ectopically located along the fused folia and in the molecular layer; (iii) Purkinje cells are preserved, but the final monolayer arrangement is not achieved.

      That is really interesting. The surface of the brain isn't formed.

    9. BG is necessary to initiate the foliation process, which is abortive if the radial glia to BG transition is prevented by interfering with the execution of the cell-intrinsic ontogenetic program (Hoser et al., 2007).

      No foliation without transition of radial glia to BG, but normal layering still disrupted if astrocytes are killed after birth.

    10. The most comprehensive model of cerebellar foliation posits that localized changes in the behaviour of granule cell progenitors, at precise spots along the external granular layer, induce the anchoring of nearby Purkinje cells to the underlying white matter, thus defining the base of the fissures (Sudarov and Joyner, 2007).

      Interesting

    11. the Purkinje cell plate

      several cells thick at first

    12. Throughout these developmental phases, radial glia and BG provide trails to guide the migration of different cell types, but also regulate the directional elongation of axons and dendrites.

      Essential for the overall patterning of the cerebellum

    13. On the whole, available data indicate that BG morphogenesis requires tight and timely regulated interactions with the surrounding cerebellar microenvironment. Impairment of these regulatory mechanisms results in the acquisition of a stellate morphology, which may thus represent a default differentiation pathway for cerebellar astroglial precursors.

      Then again...

    14. However, no clear transition to a multipolar stellate morphology occurs as for PTEN deletion.

      Perhaps not

    15. It is thus plausible that the PI3K/AKT pathways, normally antagonised by PTEN, actively promote the acquisition of multipolar phenotypes in astroglial precursors at the expense of the polarized morphology of BG.

      Suggests they arise from the same precursors?

    16. Few cell-intrinsic determinants are known that take part in cerebellar astroglial differentiation.

      Mostly extrinsic factors

    17. If endfoot anchorage is defective, as a consequence of abnormal membrane composition or altered functioning of BG, fibre formation and orientation are perturbed and BG cell bodies translocate to the molecular layer, thereby severely disrupting cerebellar foliation and layering (see Section 3.2).

      proper morphology depends on physical anchoring.

    18. Differently from radial glia that possesses a single basal branch, BG cells exhibit multiple ascending processes (usually three to six per cell) crossing the molecular layer and forming palisades parallel to the long axis of the folium. In the rat, the multiple fibres emerge after birth, increase in number until the end of the first postnatal week and then decrease, in parallel with the expansion and reduction of the EGL (Shiga et al., 1983b), suggesting that granule cells contribute to regulate BG process formation.

      Distinct from radial glia, but produced from radial glia

    19. When added to the cultures, cerebellar neurons reduced astroglial proliferation, induced polarized BG-like or multipolar shapes instead of flattened morphologies, and down-regulated specific antigens, revealing that neurons are key regulators of cerebellar astroglial differentiation.

      neurons regulate differentiation of astrocytes

    20. Birthdating studies consistently indicate that most cerebellar astrocytes are generated during late embryonic and postnatal development (Miale and Sidman, 1961, Altman and Bayer, 1997 and Sekerková et al., 2004).

      Gliogenesis and neurogenesis both postnatal

    21. and fate mapping analyses using inducible reporter genes expressed in radial glia essentially confirmed the VN origin of cerebellar astrocytes (Mori et al., 2006 and Sudarov et al., 2011).

      Ventricular neuroepithelium

    22. In spite of the morphological variety of the astrocytes that populate the different subdivisions of the mature cerebellum, in the following sections devoted to the ontogenesis of cerebellar astroglia we will exclusively refer to BG and parenchymal astrocytes (the latter comprising all the other categories).

      We recognize that there are many types, but tend to focus only on the Bergmann glia.

    23. The latter cells were later called “Golgi epithelial cells” (Palay and Chan-Palay, 1974), but they are commonly known as Bergmann glia (BG), and here we shall use this term.

      Golgi epithelial cells = Bergmann glia

    1. As scientists, we need to define our culture and take ownership in developing a system for communicating research results that best suits our needs as well as the needs of the public.

      The journals are just selling us what we are willing to buy. We should not be ceding so much power to them.

    2. A preprint server provides a solution for improving the ease and speed of communicating a paper, but it does not necessarily address the escalating amount of data needed for publications in journals (Fig. 1).

      Allowing the journal to act as agent rather than gatekeeper is a model that should be explored. That is, if the journals identify a promising study that they want for their journal, they should be able to elevate that study. That in itself would be an honor, but it would not be the means by which the study or even the pieces of the study first appear. That is, the journal does not determine what gets into the body of scholarly biology.

    3. Submission to a preprint repository allows a paper to be seen and evaluated by colleagues and search/grant committees immediately after its completion.

      Many institutions also make this type of service available.

    4. However, preprints in biology have not achieved a critical mass for takeoff. Last year, for example, bioRxiv received 888 preprints compared with 97,517 for arXiv, even though many more papers are published in the life sciences.

      Sobering statistic. Whatever solutions we propose for biomedicine, they generally are not adopted.

    5. Thus, the story of DNA, like a Charles Dickens novel, came out in installments.
    6. One reason is that nonelite journals want to improve their status and, as a consequence, strive to be selective and seek more mature stories.

      As an editor in chief, I can tell you this "pull" is very real. But I don't believe that it is good for science to have the same bar for everything.

    7. Thus, authors feel as though they are held hostage, fearful that their paper will not be accepted if they do not comply with most, if not all, of the requests.

      Although "fear" is not enough of an excuse. Authors are free to push back and a good editor will let them.

    8. With these market forces at work and a positive feedback loop between journal editors and reviewers, the expectations for publication have ratcheted up insidiously over the past few decades.

      So much for "Don't publish, release"

    9. A “high impact” result constitutes one important criterion for publication. However, a second and increasingly important benchmark is having a very well-developed or “mature” research story, which effectively translates into more experiments and more data.

      Note the use of the word "story"

    10. Over the past 30 y, the US scientific workforce (e.g., postdoctoral fellows and graduate students) has increased by almost threefold (5, 6), fueled, in part, by the doubling of the NIH budget between 1998 and 2003.

      biomedical workforce participation

    1. Although online platforms such as PubMed Commons offer a convenient way to comment on published papers, they do not include a mediating role for journal editors, and the comments are not incorporated into the literature. Posted concerns are rarely prominent on journals' websites and are not cross-referenced in any useful way. As a result, readers may assume that a flawed paper is correct, potentially leading to misinformed decisions in science, patient care and public policy.

      Argument for an independent channel that sits on top of the literature, like Hypothes.is

    2. For one article that we believed contained an invalidating error, our options were to post a comment in an online commenting system or pay a 'discounted' submission fee of US$1,716. With another journal from the same publisher, the fee was £1,470 (US$2,100) to publish a letter

      I think this is one of the most outrageous statements I have ever heard.

    1. Think about that. Is it right that almost anyone with a laptop can benefit from the hard work of the original researchers?

      If it leads to better outcomes, YES! It shouldn't be about fairness to the researchers, who collected their data, one presumes, not by promising NIH and the taxpayers that they would become famous, but by promising that they would cure a disease. So if re-analysis by others accelerates either positive or negative findings, then it is a win for science.

    1. As science itself becomes a body of data that we can analyze and study, there are staggeringly large opportunities for improving the accuracy and validity of science, through the scientific study of data analysis
    2. Such coping consumes our time and energy, deforms our judgements about what is appropriate, and holds us back from data analysis strategies that we would otherwise eagerly pursue

      A step backwards for data science?

    3. Census data are roughly the scale of today's big data; but they have been around more than 200 years

      Point well taken.

    4. Statistics is the least important part of data science

      That does seem worrisome, given how badly we are doing in the area of reproducible science.

  5. Jan 2016
    1. collection, management, processing, analysis, visualization, and interpretation of vast amounts of heterogeneous data associated with a diverse array of ... applications

      But surely the new data science also concerns itself with the computational infrastructure and visualization much more so than traditional statistics?

    2. though, the new initiatives steer away from close involvement with academic statistics departments.

      That is interesting that they are viewed as separate. But I think that the new data science is viewed differently than simply the collection of a data set and its analysis in the context of an experimental design.

    3. This chosen superset is motivated by commercial rather than intellectual developments.

      The new data science departments do seem to have that flavor.

    1. Similar attention must be devoted to stressors and threats to science that arise in response to research that is considered inconvenient.

      I agree, but the downsides of open science should not be used as an excuse to decrease transparency.

    2. Journals and professional societies should condemn specious calls for retraction. Journals and institutions can also publish threats of litigation, and use sunlight as a disinfectant.

      Agree strongly.

    3. Publication retractions have historically been reserved for cases of fraud or grave errors

      And I favor keeping it that way.

    4. All who participate in post-publication review should identify themselves.

      It also helps to enforce a better standard of behavior. You can criticize politely.

    5. symmetrical standards of openness

      Ah, ignore my previous comment.

    6. They suspect that requestors will cherry-pick data to discredit reasonable conclusions.

      But if that is done in a transparent way, then wouldn't the results of the cherry-pickers be subject to the same type of scrutiny? What better way to discredit the discreditors?

    7. limited consent

      Is this acceptable anymore though?

    8. Calls for a data set that ignore its open availability (including limitations agreed on during publication, where applicable) could suggest harassment.

      Well then, half of the requests I get from certain Research Social Networking sites (who shall remain nameless) are harassment, as they are usually already available through open access.

    9. Discredit inconvenient results.

      But this happens in non-transparent science as well; we just have no way of easily finding out.

    10. Impugn scientists’ integrity (when data is already available); biased re-analyses.

      Rather than forbidding this, shouldn't we be developing "replication etiquette"? I will have to look up who first used that phrase.

    11. reputable scholarship

      But isn't that the way scientific revolutions are born?

    12. But as scientists work to boost rigour, they risk making science more vulnerable to attacks.

      I'm not sure how I feel about this statement. Why should science be invulnerable to attacks?

    13. Endless information requests, complaints to researchers' universities, online harassment, distortion of scientific findings and even threats of violence

      Would be nice to know how common these are relative to the amount of science research published.

    1. Searches will be conducted in CINAHL, CENTRAL, EMBASE, MEDLINE, Neuroscience Information Framework (NIF), PEDro, PsycINFO, PubMed, Scopus, and Web of Science databases.

      Intent to use NIF as an information source for systematic review

    1. The Neuroscience Information Framework (http://neuinfo.org/) lists over 2500 different databases with relevance for neuroscience.

      Use of NIF Registry

    1. The neuroscience community has also been instrumental in promoting the use of a standardized reporting system for research reagents, including and especially Abs, through the Neuroscience Information Framework (www.neuinfo.org), which serves as a prominent portal to the Antibody Registry (www.antibodyregistry.org), a database of over 2.4 million Abs, each of which has a unique identifier (e.g., ‘AB_1234567’), to ensure the unambiguous description of any particular Ab. This unique identifier is then built into the larger Research Resource Identification Initiative (www.scicrunch.org/resources) or RRID (Table 2) [51] such that each Antibody Registry unique identifier becomes a corresponding RRID identifier (e.g., ‘AB_1234567’ becomes ‘RRID: AB_1234567’).
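
      The identifier convention quoted above ('AB_1234567' becomes 'RRID: AB_1234567') is mechanical enough to sketch in code. A minimal, illustrative helper, assuming only the identifier shapes shown in the passage (the function name is mine, not part of the Antibody Registry):

```python
def to_rrid(ab_id: str) -> str:
    """Prefix an Antibody Registry identifier (e.g. 'AB_1234567')
    with the RRID scheme, as described in the quoted passage."""
    if not ab_id.startswith("AB_"):
        raise ValueError("expected an Antibody Registry ID like 'AB_1234567'")
    return f"RRID: {ab_id}"
```
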
    1. Along with the data, we provide a data dictionary of terms. In the dictionary, standard descriptions for which ontologies exist are used. We searched the following sources for ontology: NeuroLEX (http://neurolex.org/wiki/Main_Page), a semantic wiki for terms used in neuroscience; the Neuroscience Information Framework (http://www.neuinfo.org/), a dynamic resource of web-based neuroscience data, materials, and tools (NeuroLEX terms are actually published in NIF); and NCI Metathesaurus (http://ncimeta.nci.nih.gov/ncimbrowser/), a biomedical terminology database for translational and basic research. A detailed list of these terms is presented in Table 3, and examples include “socioeconomic status,” “SAPS” and “cognitive assessment.” For descriptions without a standard ontology, such as “working memory” or “global rating of hallucinations,” we plan to work with NeuroLEX to arrive at standard definitions. The current version of the data dictionary can be downloaded through the data portal website, described below.

      Use of Neurolex for terms and reconciliation of terms

    1. C57BL/6 male mice (27 total) from the National Institute on Aging representing three age groups (nine mice per comparative age group)—mature (5 months old), old (12 months old), and aged (24 months old)—using the Neuroscience Information Framework (NIF; http://neuinfo.org) [24]

      Used NIF standards

    1. The Neuroscience Information Framework (https://www.neuinfo.org) provides links into a very large amount of material on the web that relates to neuroscience. Its searching mechanism is sophisticated, and ontology based, making it a very effective start point.

      Nice plug for NIF

    1. We concluded that axonal morphometric features are the strongest morphological predictors of interneuron families.
    2. http://www.columbia.edu/cu/biology/faculty/yuste/petilla/index.html).

      Article on the Petilla naming conventions was published in 2008: http://www.nature.com/nrn/journal/v9/n7/full/nrn2402.html

    1. In addition to the above-mentioned discrepancies, we noticed that several connection patterns we observed in mature neocortex appeared to be different from that in the developing neocortex.

      Connectivity patterns not the same across adult and developing brain, as we know from earlier periods of development.

    2. SOM-Cre driver line (SOM-IRES-Cre) (31) labels a population of neurons that can be grouped into distinct types both functionally

      Somatostatin cells heterogeneous

    3. The connectivity of each type was not random, but highly predictable, and each morphologically distinct type of neuron had its own characteristic input-output connectivity profile, connecting with other constituent neuronal types with varying degrees of specificity in postsynaptic targets, layer location, and synaptic characteristics

      If you are classifying on the basis of axonal morphology, then this is not a surprise.

    4. whereas some types were not found in any of the PV+, SOM+, or VIP+ Cre driver lines (table S5).

      Don't have markers for all interneurons

    5. In contrast to L23 and L5, a considerable proportion of L1 interneurons were unlabeled (~26%, n = 25/95), and, interestingly, all unlabeled interneurons were SBC-like cells.

      But not entire population in layer 1.

    6. All unlabeled neurons recorded from L23 (n = 120) and L5 (n = 105) in these transgenic mice were morphologically and electrophysiologically confirmed as pyramidal neurons, and none were interneurons, suggesting that indeed the entire population of GABAergic interneurons in L23 and L5 was labeled in these mice.

      Rather an expansive claim based on 225 neurons.

    7. Neurons within a type also tended to have the same dendritic arborization pattern and electrophysiological properties, but as for L23 interneurons, these properties were often not cell type–specific (figs. S3, C and F, and S4).

      So, dendritic patterns not unique but axonal patterns unique.

    8. The remaining five types have not been previously described in L5, and we named them as follows: neurogliaform cells (L5NGCs), basket cells (L5BCs), shrub cells (SCs), horizontally elongated cells (HECs), and deep-projecting cells (DCs)

      New classifications for layer 5 interneurons

    9. L23 neurogliaform cells

      One assumes from the name that these are a subtype of neurogliaform cell.

    10. L23 Martinotti cells

      Layer-based classification

    11. A detailed description of the morphology, firing patterns, and intrinsic membrane properties of these two major types of L1 neurons can be found in the supplementary text.

      But is the data publicly available?

    12. However, many of them (~60%) had atypical axonal projection patterns compared to those previously described for SBCs, and their axon arborized mostly within L1, with only one or two side branches extending to deep layers (not deeper than L4). Despite their variable axonal projection patterns, non-neurogliaform L1 neurons shared similar dendritic and electrophysiological features (tables S1 and S2) and similar connectivity profiles (table S3) that correspond to SBCs in rat somatosensory cortex (5, 12). We thus refer to this group as SBC-like cells

      Qualification: SBC-like cells

    13. neurogliaform cells
    14. The morphologies of interneurons were highly diverse, whereas the morphologies of pyramidal neurons in L23 and L5 were relatively uniform (for a discussion of the morphological diversity of pyramidal cells, see the supplementary text).

      Well, of course.

    15. carried out in juvenile animals due to the technical difficulties of preparing high-quality slices from adult tissue

      Our information is so incomplete.

    16. Maturation of GABAergic interneurons takes longer than for pyramidal cells, and their continuous development throughout adolescence into adulthood often further obscures our understanding (7, 8). Therefore, it is imperative to study the mature neocortex to gain a true understanding of interneuron cell types in the neocortex.

      I guess that makes sense given their tangential migration.

    1. “It’s an experiment; no one has ever done this before

      Well, actually the Allen Brain Institute has, but perhaps that is a special case as it did not have an academic model. Kudos to MNI though.

    1. Only four (1.5%) clearly claimed or were inferred to be replication efforts trying to validate previous knowledge.

      Very small number of replication studies.

    2. In this survey, we assessed the current status of reproducibility and transparency addressing these indicators in a random sample of 441 biomedical journal articles published in 2000–2014. Only one study provided a full protocol and none made all raw data directly available.

      Wow! I am frankly surprised that none of the data were available.

    1. D1 and D2 dopamine receptor eGFP BAC transgenic mice were used, between postnatal days 20–35 (developed by the GENSAT).

      GENSAT mice should have RRID's

    1. One's initial thought might be that the systems approach is more useful than the topographic, but from a broad functional perspective this is not strictly true.

      We teach neuroanatomy from the topological point of view and systems neuroscience from the systems point of view. Both are valid.

    2. By contrast, there are at least five different theoretical frameworks for grouping or arranging these ten basic parts (and many more variations on the basic themes; Fig. 2), and it is doubtful that there are compelling reasons at this stage for adopting one particular scheme

      In other words, the significance of these parts, if any, beyond their utility as a reference system for location in the brain is still under debate. I distinguish between the "neuroinformatics of neuroanatomy" and the science of neuroanatomy.

    3. (ignoring disputes about exactly what they are called, and where their borders are placed)

      This is the crux of the matter. The borders are somewhat arbitrary, but the identifiability of the structures is not.

    4. All that remain of this model (except for MacLean's triune brain concept48; Fig. 2) are the constantly used, basically inaccurate, terms neocortex and neostriatum.

      But as we shall see, it is very difficult to dislodge terms that refer to something useful, even if they are conceptually fuzzy.

    5. Omitting details, this line of thinking led to designations like paleo-, archi- and neothalamus; and paleo-, archi- and neocerebellum (see Ref. 45). It has had very little influence on contemporary neuroscience and the basic connectional principles that it rested on almost a century ago have not been borne out (for example, that almost all sensory inputs to the pallium of fish and amphibians are inevitably olfactory)4

      But I would say that it has had a lot of influence on psychology and comparative behavior.

    6. According to this model, brain evolution took place in a geological fashion by the addition of strata.

      Perhaps also influenced by our increased understanding of geology around that time?

    7. brief, disruptive life

      i.e., they contributed mightily to the nomenclature mess!

    8. Luys and Meynert suggested that sensory information travels up dorsal regions of the neuraxis on route to the cerebral hemispheres, whereas motor information leaving the hemispheres takes a more ventral route.

      See above.

    9. Subsequently, the question of where to place the rostral end of the limiting sulcus has generated endless and unresolved controversy, although His himself placed it at the optic chiasm in the 1895 ‘Basle Nomina Anatomica’ (BNA), an ‘official’ tabulation of anatomical nomenclature (refs 39 and 62)

      The question is whether this rough dorsal = sensory and ventral = motor carries rostrally into the forebrain. When I was taught neuroanatomy, I remember the argument that the dorsal thalamus is generally sensory and the hypothalamus visceral motor. So this organizational principle could still hold.

    10. The great Baer has the distinction of having provided simple, descriptive names (Fig. 2) for the embryonic brain vesicles first identified by Malpighi (see above), and for demonstrating5 that these vesicles are probably common to all vertebrates.

      In my opinion, the easiest way to understand adult neuroanatomy. So bravo, Dr. Baer!

    11. refined by division of the central trunk into a brainstem part, generating the cranial nerves, and a spinal cord part, generating the spinal nerves (refs 31 and 32)

      So the segmental model holds in some parts of the CNS

    12. but the important point here is that Malpighi discovered a fundamental transverse organization of the neural tube, and thus, presumably, of the adult CNS

      Which lays the foundation for our current system of understanding brain organization.

    13. Willis inexplicably reversed this

      For the better, I think!

    14. Based on the origin of regularly spaced nerve pairs, Vesalius was able to escape from the all too easy convention (advocated by Galen and still common today) of dividing what we call the CNS into a part within the skull and a part within the vertebral column (brain as a whole and spinal cord, respectively).

      One might claim that this convention has stuck because it is useful.

    15. As with all such ambiguities, the meaning of a specific example of the word can only be inferred from its context, which is a crucial problem today because keywords with more than one meaning might be used (without context) for database queries.

      On the other hand, if we can make some of the terminology more machine processable, it becomes possible to use informatics to determine what a term means by examining how it is used. While we currently lack the ability to say that one usage is more correct than another, it should give us the capacity to examine each concept for how fuzzy or precise its definition is. If it is precise, then a "signature" of that concept should be present in the data that would allow us to automatically validate "correct" use.
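
      As a toy illustration of that idea, one could compare the context words surrounding different usages of an ambiguous term; distinct senses should leave distinct "signatures." Everything here (the sample sentences and the choice of 'brainstem') is invented for illustration:

```python
from collections import Counter

def context_profile(sentences, term, window=5):
    """Count words co-occurring with `term` within a small window.
    Distinct usages of an ambiguous term should yield distinct profiles."""
    profile = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            if w == term:
                lo, hi = max(0, i - window), i + window + 1
                profile.update(words[lo:i] + words[i + 1:hi])
    return profile

narrow = ["the brainstem contains the midbrain pons and medulla"]
broad = ["the brainstem here includes the diencephalon"]
# Comparing the two profiles hints at how consistently the term is used.
```
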

    16. The precise meaning of certain terms such as ‘brainstem’, ‘basal ganglia’, and ‘cerebrum’ has never been entirely clear to me, although I usually have not worried too much about it.

      Those of us in informatics have worried about it and it turns out, that they are conceptually "weak" terms. The groupings of structures are what I call a "weak partonomy" that generally lacks any clear or consistent criteria for what should be included or not. Thus, many of these aggregate terms are convenient, but of somewhat arbitrary functional significance.

    17. what are the basic parts of the brain, or even more to the point, what do we mean by the word ‘brain’ itself?

      Such questions become more crucial in the information age where we have the capacity to search and aggregate across mounds of data.

  6. Dec 2015
    1. easier data publication mechanisms, including better integration with data acquisition instrumentation, so that the process becomes automated.

      We are seeing some integration with GitHub, for example, and I think Zenodo and Open Science Framework support this as well.

    2. fail-safe versioning

      I don't think Google Docs does this appropriately or well, but I think some of the other platforms are better in that regard.

    3. better systems to permit collaborative work by geographically distributed colleagues;

      Certainly Google Docs and some of the new on-line authoring tools. Way more prominent and developed than in 2011.

    4. the emergence of data repositories within which datasets have globally unique identifiers and explicit links to journal articles, which by necessity provide some form of attribution and provenance information;

      They are still emerging, and our current data citation pilot project will help this along, as will work in ELIXIR and bioCADDIE.

    5. There is a temporal aspect to research and the scholarly lifecycle that also needs to be recorded, either within research objects or between research objects, and that should also be capable of being reproduced.

      Is this referring to provenance? If so, there is the W3C PROV standard, correct?
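
      If it is the W3C PROV model, a minimal record in the spirit of its PROV-JSON serialization looks roughly like the following sketch (the ex: entity and activity names are invented for illustration):

```python
# A derived figure generated by an analysis run that used the raw data.
# Top-level keys mirror PROV-JSON ("entity", "activity", "used",
# "wasGeneratedBy"); the ex: identifiers are illustrative only.
prov_record = {
    "entity": {
        "ex:raw_data": {"prov:type": "dataset"},
        "ex:figure_3": {"prov:type": "plot"},
    },
    "activity": {
        "ex:analysis_run": {"prov:startTime": "2011-06-01T09:00:00"},
    },
    "used": {
        "_:u1": {"prov:activity": "ex:analysis_run", "prov:entity": "ex:raw_data"},
    },
    "wasGeneratedBy": {
        "_:g1": {"prov:activity": "ex:analysis_run", "prov:entity": "ex:figure_3"},
    },
}
```
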

    1. and we aim to deliver that capability

      Perhaps link to the roadmap or our new public trello cards?

    2. David Kennedy is a neurobiologist who periodically reviews the literature in his field and extracts findings, which are structured interpretations of statements in scientific papers.

      Perhaps add a sentence after the opening that indicates this is a fairly general type of use case in science.

    1. Criticism is essential to the scientific process. It's what drives us to hold ourselves to better standards, forces us to continually improve what we do, and question what we think we already know.

      Plus, almost everything we know is wrong. So we must question or we would still be talking about the aether.

    2. Sometimes the criticisms that offend us the most are the ones that we need to hear the most.

      Wise

    3. But many scientists do not want to have to defend their work against an army of internet commenters.

      I think that post publication peer review should not be anonymous.

    4. They'll just see that it exists, and count that as a strike against the author.

      That's what I was originally worried about when I started this blog. That Dr. Sarkar was let go because of rumor.

    5. If the paper goes out with your name on it, you should be able to verify every single piece of data in it and take responsibility for it.

      I agree. Authors fight for the senior author position. It should mean something. And the cases above suggest that the senior author, if not responsible for the misrepresentation of data, did not look over it carefully. I can believe that anyone can have a student or post doc in the lab who engages in fraud. But this many examples over this many years?

    6. Does this mean Dr Sarkar is guilty of misconduct ?

      Well, I started reading this with an open mind and concern for Dr. Sarkar. But after seeing the evidence, it seems that there is a repeated pattern of misconduct, yes, misconduct: these were not errors but deliberate attempts to mislead, or sloppiness on a scale such that nothing in the papers can be believed.

    7. The undeclared Splicing of different gels together, making them seem like they've been run side by side with the same exposure.

      That's not an error; that's fraud.

    8. But, if this culture of criticism is to be successful, then it needs a solution to the second problem. Those who critique science need to be protected, otherwise they will never be able to speak freely about a paper’s problems. This is why scientific peer review is conducted anonymously, and why any website attempting post publication review also needs to guarantee the anonymity of its user base.

      Actually, I'm not sure that I agree with this statement. We should be able to speak freely about a study and we should be able to respond to these criticisms. And we should do it openly. What we need to figure out is how to handle such critiques and what the proper etiquette for both making and receiving such criticisms needs to be. If our work gets trashed anonymously and we lose a job because of it, how is that fair?

    9. The criticisms they provide can be used to improve the work, and to make it suitable for publication.

      Yes. The crux of the problem is whether peer review is used to make a paper better for publication or is used as a filter to keep certain papers out.

    10. That such a highly regarded journal could fumble its peer review so disastrously provided more evidence that post publication peer review is crucial to science.

      Yes it is. And that makes the University of Mississippi's actions even more disturbing.

    11. With these allegations lodged in a public space and presented directly to colleagues here (I am not sure of the scope of the anonymous distribution), to move forward would jeopardize our research enterprise and my own credibility"

      That's a rather remarkable thing to say.

  7. Nov 2015
    1. The adaptive response (conditioning) to environmental stressors evokes evolutionarily conserved programs in uni- and multicellular organisms that result in increased fitness and resistance to stressor induced injury. Although the concept of conditioning has been around for a while, its translation into clinical ther

      Can we see this annotation in Hypothes.is?

    1. We must send down the main-top-sail yard, sir. The band is working loose and the lee lift is half-stranded. Shall I strike it, sir?"

      Starbuck is still trying to do business as usual

    1. As in the hurricane that sweeps the plain, men fly the neighborhood of some lone, gigantic elm, whose very height and strength but render it so much the more unsafe, because so much the more a mark for thunderbolts; so at those last words of Ahab's many of the mariners did run from him in a terror of dismay.

      Homeric. He is the target of nemesis.

    2. Mene, Mene, Tekel Upharsin
    3. "Yes, yes, round the Cape of Good Hope is the shortest way to Nantucket," soliloquized Starbuck suddenly, heedless of Stubb's question. "The gale that now hammers at us to stave us, we can turn it into a fair wind that will drive us towards home. Yonder, to windward, all is blackness of doom; but to leeward, homeward- I see it lightens up there; but not with the lightning."

      This way lies madness and a storm.

    1. Indeed, one might go a step further and contend that what Ishmael repeatedly refers to as the whale’s appalling demonic whiteness signals the author’s stand against his nation’s racist practices.

      Clearly, the author is forgetting the chapter "The Whiteness of the whale" : "Though in many natural objects, whiteness refiningly enhances beauty, as if imparting some special virtue of its own ...though this preeminence in it applies to the human race itself, giving the white man ideal mastership over every dusky tribe;"

    2. But the novel, like all great works of art, grows on you.

      I actually think you have to read it at least twice to appreciate it. The first time I read it, because I knew the story so well, I kept waiting for the encounter and ultimate battle. So the chapters in between (almost the entire book) seemed like a distraction. The second time, I wasn't anticipating, so I just enjoyed (not always-whaling is brutal)-the in between parts.

    3. Despite the fact that her founders had promised liberty and freedom for all,

      That is not fair to the founders, many of whom recognized the evils of slavery, but lacked the political means in the 18th century to stop it.

    4. Not only is he funny, wise, and bighearted, he is the consummate survivor, for it is he and he alone who lives to tell about Ahab’s encounter with the White Whale.

      Was just reading that the original version published in England left out the epilogue, so critics were very harsh about the fact that Ishmael was narrating events from beyond the grave.

  8. app.waiverforever.com
    1. 3.2 Coursera will be entitled to recoup the entire Advance amount by withholding future Revenue Share payable under the Addendum to University until the amount of the Advance has been fully paid back to Coursera from University’s portion of the Revenue Share for the RFP Approved Content.

      So it's really an advance.

    1. RuBisCO is thus the major conduit through which life on Earth is energised

      Rather a poetic thought

    1. Information

      Many types of sensitive information in academia, not just PID. Need to think about unanticipated future use of data.

    2. One of the Committee’s early challenges was to distinguish the intertwined concepts of autonomy privacy, information privacy, and information security from one another, name them and define them: • Autonomy privacy is an individual’s ability to conduct activities without concern of or actual observation. • Information privacy is the appropriate protection, use, and dissemination of information about individuals. • Information security is the protection of information resources from unauthorized access, which could compromise their confidentiality, integrity, and availability.

      Good summary of different types of privacy.

    1. And then I told people I had read Moby-Dick. That seemed the point.

      Yes, that is why I read it, although many years after college. But it did lead to the founding of our bookclub in 1999 because I wanted to discuss it with somebody. We have finally gotten around to it in 2015. It's even better the second time.

    1. “Next, scientists simply modify their study’s goals to align with the vision of potential funders and wait for several months to hear back. At this point—should this step be successful, of course—they can move on to the experimental stage, and then to analysis.”

      Ah, for the good old days when all you needed to do good science was a rich patron.

    1. As part of the fellowship

      Need a comma after this phrase.

    2. on the information provided on the form below and on the basis of the need requested

      "based on need, as determined by the information to be provided in the form below".