- Mar 2023
-
www.nature.com www.nature.com
-
An alternative, more-challenging solution would be to establish a specialized software journal (or subsection of an existing journal) that includes update articles
That cedes the credit system to the publishing industry. The key is to develop a promotion metric based on the use of the software by others, which can be tracked through published studies and other metrics, not to create an artificial article type for software maintenance. For software to be used, it has to be maintained.
-
Citations and grants for software are tangible credits for first versions of software, but may not be sufficient incentive for continuing maintenance.
But why not? The impact of scientific software is in its use. It cannot be used if it is not maintained. So tracking use is critical and could serve as a credit model.
-
We must implement new policies to align academic career goals with scientific goals.
Hear, hear!
-
-
www.wsj.com www.wsj.com
-
They grade those companies from A (totally halted Russian engagement or completely exited Russia) to F (continuing business as usual in Russia).
That seems a major source of bias, as their grading system is also a value judgement; it seems they prefer one outcome over the other.
-
- Mar 2021
-
iopscience.iop.org iopscience.iop.org
-
While DOIs are often used to link to and identify first-class research objects, e.g., research articles or data sets, MAST and AAS are in agreement that data DOIs should not be treated as first-class citable references on their own, and thus should not show up independently in the bibliography.
Differs from the data citation principle recommendations.
-
- Apr 2020
-
www.nature.com www.nature.com
-
Beyond proper collection, annotation, and archival, data stewardship includes the notion of ‘long-term care’ of valuable digital assets, with the goal that they should be discovered and re-used for downstream investigations, either alone, or in combination with newly generated data.
Definition of stewardship.
-
- Jul 2019
-
www.sciencedirect.com www.sciencedirect.com
-
Sharing of information: the physician informs the family about the disease, treatment options, and prognosis with and without treatment. The family informs the physician about the patient's values and preferences. Building up rapport with the relatives and showing empathy is essential.
Underline "essential" five times.
-
However, surrogates still predicted patients' preferences better than physicians.
That is the critical point. The surrogates have to take into account a lot of factors, beyond what the physician knows, e.g., surviving spouses, children, pets, finances; all of these go into the equation.
-
In a review of 17 studies with 151 hypothetical scenarios describing severe diseases of different kinds [7], surrogates incorrectly predicted the patients' end-of-life treatment preferences in one third of the cases.
That's interesting.
-
There are several limitations to surrogate decision making. First, families are often stressed and distracted and might therefore have a reduced ability to make any decision.
That is paternalistic.
-
severe or moderately severe disability.
Needs to be defined for different age populations. Being unable to drive or to dress without assistance means something different in an 85-year-old than in a 50-year-old.
-
-
www.labfolder.com www.labfolder.com
-
Less than 10% of researchers are using an electronic lab notebook today. However, in my mind there is no doubt that the digital trend will accelerate and that we’ll see a lot more of the EL
Would be nice to know where this figure came from.
-
- May 2019
-
www.ndexbio.org www.ndexbio.org
-
NDEx lets you specify Licenses and Request DOIs for your networks to include in grant proposals or publications thus enabling papers to link directly to your data.
-
-
www.home.ndexbio.org www.home.ndexbio.org
-
can upload networks via their custom scripts using the NDEx REST API
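For reference, a rough sketch of what such a custom upload script could look like, using plain `requests` against the NDEx REST API. The endpoint path and the multipart field name are my assumptions based on the v2 API documentation, so check the current docs before relying on this.

```python
# Hedged sketch: uploading a CX-format network to NDEx over its REST API.
# The endpoint URL and the "CXNetworkStream" field name are assumptions
# based on the NDEx v2 REST documentation, not verified here.
import requests

NDEX_URL = "https://www.ndexbio.org/v2/network"  # assumed v2 endpoint

def upload_cx_network(cx_path, username, password):
    """Upload a CX file; per the v2 docs the response body should be the new network's URI."""
    with open(cx_path, "rb") as fh:
        files = {"CXNetworkStream": ("network.cx", fh, "application/json")}  # assumed field name
        resp = requests.post(NDEX_URL, files=files, auth=(username, password))
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(upload_cx_network("my_network.cx", "user", "secret"))  # placeholder credentials
```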
-
-
www.mousephenotype.org www.mousephenotype.org
-
REST API documentation for Genotype associated phenotype calls
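A hypothetical example of pulling genotype-associated phenotype calls from this API in Python; the Solr-style endpoint and the field names are assumptions on my part and should be checked against the documentation on the page.

```python
# Hedged sketch of querying the IMPC genotype-phenotype service.
# Endpoint path and document fields are assumptions, not verified here.
import requests

IMPC_GP = "https://www.ebi.ac.uk/mi/impc/solr/genotype-phenotype/select"  # assumed endpoint

def phenotype_calls(gene_symbol, rows=20):
    params = {"q": f"marker_symbol:{gene_symbol}", "wt": "json", "rows": rows}
    resp = requests.get(IMPC_GP, params=params)
    resp.raise_for_status()
    docs = resp.json()["response"]["docs"]  # standard Solr JSON response layout
    return [(d.get("marker_symbol"), d.get("mp_term_name")) for d in docs]

print(phenotype_calls("Akt2"))  # placeholder gene symbol
```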
-
-
github.com github.com
-
Deleting Data Collections
No persistence policy.
-
Require DOI?: Set this to true in order to generate a permanent identifier for this collection (Please set this to true for collections related to publications).
-
-
www.metabolomicsworkbench.org www.metabolomicsworkbench.org
-
The use of the common metabolite names in the RefMet database is strongly encouraged in order to be able to compare and contrast metabolite data across different experiments and studies.
-
- Apr 2019
- Feb 2019
-
www.pnas.org www.pnas.org
-
Although this possibility remains to be examined systematically, the few pieces of evidence available in the literature suggest that synaptic density is constant across species
I find that surprising. Are numbers of spines similar across species?
-
Nevertheless, we are so convinced of our primacy that we carry it explicitly in the name given by Linnaeus to the mammalian order to which we belong—Primata, meaning “first rank,” and we are seemingly the only animal species concerned with developing entire research programs to study itself.
This is a marvelous sentence!
-
- Jan 2019
-
www.nature.com www.nature.com
-
The h-index varies by field: life scientists top out at 200; physicists at 100 and social scientists at 20–30 (ref. 8). It is database dependent: there are researchers in computer science who have an h-index of around 10 in the Web of Science but of 20–30 in Google Scholar9.
Very important to see how statistics vary across different databases.
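A small worked example (mine, not from the article) of why the database matters: the h-index is just the largest h such that h papers have at least h citations each, so it can only ever count the citations a given database actually indexes.

```python
# Toy h-index calculation over two hypothetical per-paper citation lists
# for the same researcher, as indexed by two different databases.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    # counts is descending, so the number of positions i with counts[i-1] >= i is h
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

web_of_science = [45, 30, 12, 9, 7, 4, 2, 1]        # hypothetical counts
google_scholar = [60, 41, 20, 15, 12, 9, 8, 6, 3]   # same papers, more sources indexed

print(h_index(web_of_science))  # 5
print(h_index(google_scholar))  # 7
```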
-
5) Allow those evaluated to verify data and analysis.
And to benefit from their efforts to improve others' data.
-
Recent commercial entrants should be held to the same standards; no one should accept a black-box evaluation machine.
This is an important point.
-
Interest in the journal impact factor grew steadily after 1995 (see 'Impact-factor obsession').
At last. I was trying to figure out when we started to care so much about the impact factor. When I was in my early career stage, in the '80s and '90s, the term was never used. We all knew the high-impact journals in our fields without any numbers.
-
- Dec 2018
-
www.collabra.org www.collabra.org
-
PMid: 25271090
This is not the correct PMID; it should be PMID 26428912.
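One quick way to check what a PMID actually points to is NCBI's E-utilities esummary endpoint; a minimal sketch of my own (error handling omitted):

```python
# Look up the article title for a PMID via NCBI E-utilities (esummary, JSON output).
import requests

ESUMMARY = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pmid_title(pmid):
    params = {"db": "pubmed", "id": str(pmid), "retmode": "json"}
    data = requests.get(ESUMMARY, params=params).json()
    return data["result"][str(pmid)].get("title", "<no record>")

for pmid in (25271090, 26428912):
    print(pmid, "->", pmid_title(pmid))
```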
-
- Aug 2018
-
www.simonsfoundation.org www.simonsfoundation.org
-
“It’s very unlikely that [the U.S. National Institutes of Health] would have gambled on it.”
Ouch!
-
-
science.sciencemag.org science.sciencemag.org
-
Will this become a cheap pretense used to justify budget reduction in experimental basic neuroscience? It seems indeed easier in terms of budget control to turn scientists into high-tech engineers rather than to fund basic research on a wider spectrum with reduced short-term impact.
I know that this is always raised as a reason not to share data (the argument being that we would then never generate any new data), but the point of the Ferguson article was not to diminish the role of the individual investigator, but rather to elevate it by showing that you can make big data from small data, and that in some cases, e.g., spinal cord injury, it may be better to do it that way than just funding ever larger studies.
-
for modalities (vision) less adapted to its behavioral repertoire and, more obviously still, for higher cognitive functions
Thank you. Someone has to say it.
-
-
www.simonsfoundation.org www.simonsfoundation.org
-
The RNN data resembled the animals’ neural activity, suggesting that the prefrontal cortex uses the same approach.
Careful!
-
What neuroscientists do know is that the brain makes these adaptive decisions quickly — faster than it takes to alter the structure of neural circuits. “So something about the dynamics of the neural circuit must change,” says David Sussillo, a research scientist at Google and an investigator with the Simons Collaboration on the Global Brain (SCGB).
-
-
grants.nih.gov grants.nih.gov
-
Development or extension of tools to link different types of data relevant to the BRAIN Initiative. These tools could: allow searches across multiple data repositories for data relevant to a researcher. However, tools that focus on the development of broad ontologies will not be responsive and will be withdrawn prior to peer review. Applications that focus on the development of a narrow ontology for a particular purpose are an acceptable component of an application.
-
-
www.cell.com www.cell.com
-
Upon finding a similarity between humans and another species, it is often claimed that the feature is conserved from that species to humans; to paraphrase Aristotle, however, two species do not a phylogenetic comparison make.
Very true. The decrease in comparatively trained anatomists and other neuroscientists has been a detriment to the field.
-
In 1929, at a time before there were designer organisms, August Krogh [13Krogh, A.The progress of physiology.Am. J. Physiol. 1929; 90: 243-251Google Scholar] wrote what has come to be called Krogh’s principle: “For a large number of problems there will be some animal of choice or a few such animals on which it can be most conveniently studied”.
Neuroscientists would do well to go back to that.
-
-
www.nature.com www.nature.com
-
Reflections on the Decline of Science in England: And on Some of Its Causes.
Nothing new under the sun
-
for “excellence”
I think if you substitute "sensational" for "excellence", the article would ring truer. These journals, etc., aren't selecting for excellence, they're selecting for novelty and interest.
-
“Having to sensationalize and embellish impact claims was seen to have become a normalized and necessary, if regretful, aspect of academic culture and arguably par for the course in applying for competitive research funds”
This is a remarkable statement. The embellishment part and the sensationalizing I get, but the lying?
-
Articles that are initially rejected and then go on to be published to great acclaim or even just in journals of a similar or higher ranking represent what are in essence false negatives in our ability to assess “excellence.”
See comment above.
-
the basic sense that journal peer review is a gatekeeper that is frequently circumvented remains.
I agree with this statement, but it does not consider that many scientists are asked to review for "goodness of fit" for a particular journal.
-
show that in terms of citation metrics the most novel work is systematically undervalued over the time frames that conventional measures use,
I believe this to be true, which is why we have no good measures of impact.
-
it turns out, appear to be particularly poor at recognizing a given instance of “excellence” when they see it, or, if they think they do, getting others to agree with them.
I think this needs some supporting evidence.
-
As with most problems in scholarly communication, the challenge with peer review is therefore not technical but social.
I have come to disagree with the notion that the problems in scholarly communication are merely social and not technological. The question is not whether such technology exists; it is whether it is usable by those who are creating knowledge.
-
- Jul 2018
-
journals.plos.org journals.plos.org
-
and interactions with societal stakeholders in defining research questions
Researching what is important to the public
-
Within evidence-based medicine, systematic reviews are considered stronger evidence than individual studies.
But you can't do systematic reviews without individual studies.
-
‘we should be able to improve research if we reward scientists specifically for adopting behaviours that are known to improve research’.
Very Skinnerian!
-
We interpreted that several of the documents pointed to a disconnect between the production of research and the needs of society (i.e., productivity may lack translational impact and societal added value).
Again, without a strong statement in the Principles of the Scholarly Commons regarding the compact between researchers and society, none of the principles or rules will make sense, as they don't address this issue.
-
A burgeoning number of scientific leaders believe the current system of faculty incentives and rewards is misaligned with the needs of society and disconnected from the evidence about the causes of the reproducibility crisis and suboptimal quality of the scientific publication record.
Good quote
-
-
bcdc.us.aldryn.io bcdc.us.aldryn.io
-
these data
What data? I think I would beef up this paragraph a bit, as the preceding sentence just says "a comprehensive reference". So I would draw the relationship between the reference and the mention of data here.
-
R24
I would change to "Data", as again, I'm not sure how much meaning R24 has with the public or even much of the research community.
-
including U01, U19 data centers, R24 data archives
Will this be understandable to the public? I'm not sure this is necessary.
-
-
scicrunch.org scicrunch.org
-
Joining ODC-SCI
Ask James how to insert anchors
-
2
Ask James how I fix this.
-
-
www.frontiersin.org www.frontiersin.org
-
Naming Convention
A similar proposal was made in Eckers et al (2017)
-
-
www.sciencedirect.com www.sciencedirect.com
-
Finally, the importance of establishing a common cell type nomenclature across species cannot be overstated. This should best be done through a community effort. Here we would like to suggest the following considerations. The nomenclature could follow a hierarchical order, starting at the highest level: the species, then the brain region annotated based on a unified anatomical reference atlas system with cross-correlations among species, and then the cell type as defined by a multimodal feature set (including locational, molecular, morphological, physiological, and ontological features).
More or less what was proposed in Hamilton et al.
-
- Apr 2018
-
www.cell.com www.cell.com
-
The individually distorted caricatures here, and in the caricature world in general, do not contradict or match the reality but are similar enough to be shared and communicated with others.
My basic premise about human language. We just need enough specificity for shared action.
-
Thus, we can conclude that although the world is very complex, it is well structured and thus not so difficult to understand as it would be if comprised of totally independent elements. Dimension reduction—description in terms of a few summarizing features—can strikingly decrease the complexity of both the physical world and brain functions (Pang et al., 2016). Since ecologically valid stimuli often elicit stronger brain responses than do artificial and static stimuli, the difficulties of using naturalistic stimuli in laboratories of human brain imaging may have been overemphasized.
And in animal studies too
-
-
www.braininitiative.nih.gov www.braininitiative.nih.gov
-
participation of a statistician or theorist in collaboration with an experimentalist
Can explore pre-registration
-
but they should also permit the study of rich behaviors appropriate to the species
Most important part of the BRAIN initiative
-
-
www.sciencedirect.com www.sciencedirect.com
-
The results suggest that synaptic connectivity diverged during evolution while behavior was conserved.
Bravo Paul!
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
Many scientists who work on model organisms, including both of us, have been known to contrive a connection to human disease to boost a grant or paper. It's fair: after all, the parallels are genuine, but the connection is often rather indirect.
Very honest
-
but so are an implausible 22% of all chemicals tested.
Why is that implausible?
-
Ironically, this hypothesis has not been questioned as hypotheses should be questioned in science, hence our calling it an overarching hypothesis
Oh, I think many of us have questioned it. Just in private.
-
If a modality consistently fails to make accurate predictions then the modality cannot be said to be predictive simply because it occasionally forecasts a correct answer.
Tell that to people on the evening news.
-
others have argued that words must have meaning in science and in fact these meanings separate science from pseudoscience
-
-
en.wikipedia.org en.wikipedia.org
-
For example, funding agencies may be more interested in performance measures related to the translation of team research findings to practical applications, whereas team researchers may use the number of publications produced and amount grant funding obtained to gauge the success of a team science endeavor. In addition, the method of evaluation and metrics of success may vary at different points during the team research project. Short-term measures may include indicators of synergistic output, whereas long-term measures may be related to the impact of the research on the evolution of a discipline or the development of public policy.
Complexities of measuring effectiveness of team science from different perspectives.
-
-
arxiv.org arxiv.org
-
A dataset alone, without accompanying documentation of the research methods by which it was created, analysis and interpretation of the findings, and associated context such as instruments, models, and software, may be little more than a string of numbers. The better documented and curated, the more useful any given set of data will be to others.
Again, what I would consider to be the minimum for publishing data.
-
The promises of open access to research data are vast, although mired in hyperbole
Yes, I could concede that. We're on the ascending phase of the technology hype curve.
-
a publication in the formal sense of peer review,
Is that really the definition of publishing, though? I can publish works that have not undergone peer review by making them public. In my view, it is the preparation of the data so that it can be understood and re-used that is at the center of data publishing. Some sort of QC is required, but is it peer review?
-
The other problem is conflating data release with “data publishing,” which has become popular terminology.
They are not the same thing.
-
-
wiki.nci.nih.gov wiki.nci.nih.gov
-
Common Data Elements (CDEs) are standardized terms for the collection and exchange of data. CDEs are metadata; they describe the type of data being collected, not the data itself. A basic example of metadata is the question presented on a form, "Patient Name," whereas an example of data would be "Jane Smith."
Definition of CDE.
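A toy illustration of that metadata/data split, in the spirit of the example above (the field names here are mine, not from any CDE specification):

```python
# A CDE describes the element being collected (metadata); a record holds
# the values actually collected on a form that uses the CDE (data).
patient_name_cde = {
    "name": "Patient Name",
    "definition": "Full legal name of the patient as recorded at registration",
    "data_type": "text",
    "max_length": 100,
    "permissible_values": None,  # free text; a coded CDE would list allowed values here
}

record = {"Patient Name": "Jane Smith"}  # the data captured against that CDE
```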
-
-
blog.dshr.org blog.dshr.org
-
You’ll basically never need a so-called “permissioned blockchain.” If you have a trusted third party, you don’t need any sort of blockchain — and in real-world use cases, you’ll almost always have a trusted third party, or central administrator of some sort.
Which is largely the case in academic and medical infrastructure.
-
-
journalologik.uk journalologik.uk
-
-
insights.uksg.org insights.uksg.org
-
Beyond a shared, but quite vague, notion of using digital and networked technologies better, FORCE11 seeks ways to support the ideas and projects that can bubble up when disparate groups of people come together. My view of what makes FORCE11 unique and valuable is that it is a kind of social infrastructure that can support this.
Nicely said. Without an organization like FORCE to convene and then provide the platform for the work to continue beyond the ad hoc meetings, such consensus is generally not achieved and acted upon.
-
What I will say is that top-down initiatives led by people who are already well known are easier to resource than platforms to support innovation initiated by those we do not know, or are designed to help us to solve problems that we have not yet realized we have.
It is ironic that almost every major funder wants to promote interdisciplinary science and collaboration, yet they won't (or can't) fund the places that incubate this type of work, because the outcomes are not specified in advance and there may not actually be any at all. Those to whom the funders are accountable, e.g., board members, patrons, taxpayers, don't like anything this open-ended and non-committal.
-
One of my frustrations has been that funding to collate what we have learned and turn it into systems, guidelines and templates that could be used by others has been difficult or impossible to find.
I give Helmsley a lot of credit for funding the scholarly commons group to do just that.
-
The FORCE conference has become an important part of the calendar for many different communities, and is unusual in the diversity of different groups that it brings together.
Perhaps why people who attend it for the first time are often confused about what FORCE11 is all about. This confusion has been expressed to me several times by colleagues. Perhaps it is just the nature of the beast; interdisciplinary work is hard. But I maintain that having a platform like FORCE for these different communities is necessary and unique.
-
it is worth noting that the Manifesto did not grapple with issues of structural power, diversity or geographical inclusion, assuming as many of us did at that time, that the utopian vision it outlined would solve all of those problems.
The Manifesto was indeed technologically focused. The Scholarly Commons working group and project at FORCE11 later on became very much concerned with these issues.
-
-
grants.nih.gov grants.nih.gov
-
To make data FAIR through use of a shared virtual space to store and work with biomedical research data and analytical tools.
If big data is the "oil" of the 21st century, then metadata is the refinery. Little mention of metadata in this document.
-
scientists across a wide array of fields said they spend most of their work time (about 80 percent) doing what they least like to do: collecting existing data sets and organizing data.
I'm not sure that is accurate. Most scientists I know like to gather data.
-
- Mar 2018
-
grants.nih.gov grants.nih.gov
-
Implementation Tactics: Promote community development and adoption of uniform standards for data indexing, citation, and modification-tracking (provenance)
If NIH doesn't support the community databases then I don't see where this is going to happen.
-
By separating the evaluation and funding for tool development and dissemination from support for databases and knowledgebases, innovative new tools and methods should rapidly overtake and supplant older, obsolete ones
I don't understand the rationale for this statement. Why does separating funding for databases and tools lead to better tools?
-
Longer-term: Expand NIH Data Commons to allow submission, open sharing, and indexing of individual, FAIR datasets.
Again, the NIH is pretending they haven't paid for quite a few of these already. What are they planning on doing with these?
-
NIH will create an environment in which individual laboratories can link datasets to publications in the NCBI's PubMed Central publication database
Throwing out the contributions of the individuals who have been maintaining databases for specialized data types. Do not undermine the efforts of these databases.
-
NIH's strategic approach will move toward a common architecture, infrastructure, and set of tools upon which individual Institutes and Centers (ICs) and scientific communities will build and tailor for specific needs
This seems highly unlikely given the diversity of data. Might it not end up hurting more than helping?
-
adopt and adapt tools
If NIH data is to be FAIR, then these tools have to be built on open data standards.
-
most of them linked to research grant mechanisms that prioritized innovation and hypothesis testing over user service, utility, access, or efficiency.
Indeed. And continue to do so. But many are also run by academics and their career paths will not be determined by service.
-
NIH will develop strategies to link high-value NIH data systems, building a framework to ensure they can be used together rather than existing as isolated data silos (see text box, below, "Biomedical Data Translator"). A key goal is to promote expanded data sharing to benefit not only biomedical researchers but also policymakers, funding agencies, professional organizations, and the public.
But need to support those fields, e.g., neuroscience that are not well supported by the large NCBI databases.
-
In subsequent years, NIH's needs have evolved, and as such the agency has established a new position to advance NIH data science across the extramural and intramural research communities. The inaugural NIH Chief Data Strategist, in close collaboration with the NIH Scientific Data Council and NIH Data Science Policy Council, will guide the development and implementation of NIH's data-science activities and provide leadership within the broader biomedical research data ecosystem. This new leadership position will also forge partnerships outside NIH's boundaries, including with other federal and international funding agencies and with the private sector to ensure synergy and efficiency, and prevent unnecessary duplication of efforts. As a result of the rapid pace of change in biomedical research and information technology, several pressing issues related to the data-resource ecosystem confront NIH and other components of the biomedical research community, including:
- The growing costs of managing data could diminish NIH's ability to enable scientists to generate data for understanding biology and improving health.
- The current data-resource ecosystem tends to be "siloed" and is not optimally integrated or interconnected.
- Important datasets exist in many different formats and are often not easily shared, findable, or interoperable.
- Historically, NIH has often supported data resources using funding approaches designed for research projects, which has led to a misalignment of objectives and review expectations.
- Funding for tool development and data resources has become entangled, making it difficult to assess the utility of each independently and to optimize value and efficiency.
- There is currently no general system to transform, or harden, innovative algorithms and tools created by academic scientists into enterprise-ready resources that meet industry standards of ease of use and efficiency of operation.
As a public steward of taxpayer funds, NIH must think and plan carefully to ensure that its resources are spent efficiently toward extracting the most benefit from its investments.
Finally thinking in terms of ROI
-
-
pkp.sfu.ca pkp.sfu.ca
-
Most do not contribute anything back to PKP
Did the report consider why this is the case?
-
Two major society publishers with many high-profile journals.
Does not jibe with the "46 articles per year".
-
We have been successful as a research group, but our research does not always make its way into our offerings.
It is very difficult to be both a research and service entity. They require different mindsets. It is very hard for a small non-profit to straddle both, in my opinion.
-
Developing tools, documentation, and service packages to make it easier for existing OJS2 users to migrate to OJS3, so they can enjoy its full benefits.
But it seems above that it is OJS --> OJS2/3 that is required. Am I misunderstanding?
-
an average of 46 articles per year
Does this suggest that no major journals are using this platform?
-
at least at least
Duplicated
-
-
ec.europa.eu ec.europa.eu
-
In order to avoid lock-in by individual service providers, the EOSC should foster fair competition of public, PPP and private providers on clear value propositions of highly professional services.
Principles of open infrastructures
-
- Feb 2018
-
www.force11.org www.force11.org
-
open by default
-
-
-
Thus, reactive astrocytes seem to contribute to the inhibition of neurogenesis from transplanted stem cells and inhibit axonal regeneration. The general concept evolving is that in early stages of injury, reactive astrocytes are needed to limit tissue damage including scar formation, which provides beneficial effects initially in shielding the brain, whereas the consequences of persistent reactive astrocytosis can be harmful
The bad
-
Experimentally ablating all reactive astrocytes that proliferate in response to injury highlights the beneficial role of reactive gliosis. Specifically, it resulted in a pronounced increase in tissue damage, lesion size and neuronal loss in various mouse models of injury
The good.
-
Thus, protoplasmic astrocytes have the capacity to resume cell proliferation and re-express proteins present in radial glia at earlier developmental stages or adult NSCs.
We carry the potential of radial glia within us.
-
-
thehaguedeclaration.com thehaguedeclaration.com
-
5. INNOVATION AND COMMERCIAL RESEARCH BASED ON THE USE OF FACTS, DATA, AND IDEAS SHOULD NOT BE RESTRICTED BY INTELLECTUAL PROPERTY LAW
Should make sure this is prominently featured in the commons materials as it is an important point that CC-NC is not commons compliant.
-
3. LICENSES AND CONTRACT TERMS SHOULD NOT RESTRICT INDIVIDUALS FROM USING FACTS, DATA AND IDEAS Generally, licences and contract terms that regulate and restrict how individuals may analyse and use facts, data and ideas are unacceptable and inhibit innovation and the creation of new knowledge and, therefore, should not be adopted. Similarly, it is unacceptable that technical measures in digital rights management systems should inhibit the lawful right to perform content mining.
Link to this in the Matrix. Critically missing in most discussions of open access.
-
-
www.physiology.org www.physiology.org
-
If each of these astrocyte subsets serves a specific functional role, identifying the signals or cell types regulating this subset may allow selectively influencing specific types of reactive astrocyte functions and not others. Given that reactive astrocytes with NSC properties seemingly exert beneficial functions as discussed above (235, 242, 255),
May be the reason why the evidence is conflicting about whether reactive astrocytes after injury promote regeneration or inhibit it.
-
This is important as activated, reactive astrocytes surround the injury site in the mammalian CNS (see below), while such cells are absent upon injury in species with radial glial cells instead of astrocytes.
So what advantage did astrocytes give us, given what we gave up by losing radial glia?
-
The adjacent neurogenic SEZ region still supplies large numbers of neuroblasts migrating to the injury site (274), demonstrating persistent failure of these to survive and integrate even after the acute reaction has vanished.
Evidence that the glial scar inhibits brain healing.
-
Thus the hypothalamus certainly contains cells with NSC properties, while neurogenesis at resting state in the adult brain appears to be rather low (Figure 3A). Importantly, however, it can be activated by metabolic stimuli (93, 213), prompting the important question about the functional consequences of this induced adult neurogenesis.
I didn't realize that the hypothalamus was also a site of adult neurogenesis.
-
Summing up, many vertebrates are equipped with widespread adult NSCs that resemble the NSCs in the developing brain, either the neuroepithelial or the radial glial cells. This has important consequences for the reaction after brain injury, not only in regard to the repertoire of glial cells reacting to injury which is obviously different from mammalian brains (e.g., the zebrafish CNS largely lacks parenchymal astrocytes; see, e.g., Ref. 14) but also in regard to the capacity to replace degenerated neurons that is largely lacking in the mammalian CNS.
Do we know in the wild that these vertebrates recover from brain injuries?
-
Thus the lineage of NG2-expressing cells in the developing neural tube is not restricted to oligodendrocyte cells, and the term OPCs is not appropriate.
Aha! This seems to be the key to the confusion. So is it that all OPCs are NG2-glia but not all NG2-glia are OPCs?
-
Intriguingly, the remaining OPCs that were largely ventrally derived could readily replenish the depleted pool of dorsally derived OPCs and give rise to mature oligodendrocytes that could fully substitute the missing cell population
Further evidence that these cells are more promiscuous.
-
Intriguingly, these differences between OPCs and astrocyte progenitors at early postnatal stages anticipate their respective behavior in the adult brain after injury, with astrocytes remaining very stationary with virtually no migration towards the injury site (8, 280), as opposed to glial progenitors of the OPC lineage that readily migrate and accumulate around the injury site (see below and Figure 4).
That's very interesting. So astrocytes are regional and OPCs are migrants.
-
So, neighboring astrocytes do not invade and compensate for the partial loss of astrocytes within a specific CNS region, highlighting the important concept of regional diversity of astrocytes, similar and/or in relation to the regional diversity of neuronal subtypes.
Also perhaps why astrocytes exclude other astrocytes from their territory.
-
Given this functional importance of the long radial processes, it was difficult to perceive that these cells should divide at the same time despite their DNA synthesis as demonstrated by [3H]thymidine incorporation
Answers question above.
-
further extend their radial processes reaching out up to several millimeters in the human cerebral cortex.
That's impressive. I never realized that glial cells could be so large. Rivals neurons.
-
How can such a cell divide or are astrocytes with stem cell functions a special kind? In the most extreme case, glial cells with stem or progenitor cell function may have just been mistaken for glial cells due to the expression of “a few markers” and there would be a deep divide between a mature glial cell and any type of stem or progenitor cell.
I always assumed that astrocytes like neurons did not de-differentiate and re-enter the cell cycle.
-
This prompts a short comment on the confusing use of the term oligodendrocyte precursor/progenitor cells. The term progenitor refers to a proliferating cell, while precursor refers to an immature state, which is why we use the term oligodendrocyte progenitor cell as we are referring to proliferating cells.
This is a helpful distinction.
-
The axon is indeed a further site of neural computation, rather than a sheer propagating cable.
That is a proper way to look at almost anything in the nervous system.
-
the NG2-glia
Should be considered a distinct class of glia.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
We apologize to the authors of papers we could not include in this Review owing to space limitations.
Why citation analysis is problematic.
-
Another question to investigate is whether the complex axonal mRNA repertoire is a property of individual axons or a collective property of diverse axons.
Good question
-
Intriguingly, there is evidence suggesting that axonal protein synthesis might be augmented after nerve injury by an unconventional mechanism of transcellular ribosomal delivery from glial cells
Now that is cool. Another idea that has been floating around for a long time: glial-axon transfer.
-
As local mRNA translation mediates adaptive responses to extracellular signals, it is not surprising that mRNA translation can occur even in mature axons, especially during plastic responses such as regeneration.
Nothing in neurobiology makes more sense than to have axons synthesize their own proteins. But just because it makes sense, doesn't mean it works that way!
-
Tyrosine hydroxylase mRNAs can be detected in the striatum by reverse transcription-polymerase chain reaction and in situ hybridization and is decreased by pharmacological lesion to the nigrostriatal pathway
Some evidence from in vivo work. Assume this is in adults?
-
Nerve terminals that secrete neurotransmitters remotely from their cell bodies might benefit from local mRNA translation.
Yes, I have always thought so. But does nature think so?
-
This finding is in accordance with studies using mollusc neurons, which show that presynaptic protein synthesis89 is required for synapse formation
Does seem that most studies of axonal protein synthesis still involve invertebrate, peripheral or cultured neurons.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
INCF for Fiji Image Processing School.
Shows the difficulty in tracking impact.
-
-
gist.github.com gist.github.com
-
13452119
This paper is from 1957 and has no DOI.
-
-
www.newyorker.com www.newyorker.com
-
The fear with these billionaire donors is that they’ll fund junky science, wasting money and time,”
Ummmm. Have you been reading about the reproducibility problems in government-funded science?
-
This seems like a very peculiar institutional and organizational form to champion in a democratic society.
The key word is "democratic". Why wouldn't we want to have different models to take on different challenges?
-
-
-
machine learning amateurs
Again, you just admitted this is not just a mistake of amateurs.
-
For example, overfitting is a common machine learning mistake made by even experienced data scientists.
Rampant in the scientific community.
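A minimal, self-contained illustration of the mistake (a toy scikit-learn example, nothing to do with the article's data): an unconstrained model looks nearly perfect on the data it was fit to and noticeably worse on held-out data.

```python
# Overfitting in miniature: fit an unconstrained decision tree and compare
# accuracy on the training data vs a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=None, random_state=0)  # no depth limit
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # close to 1.0
print("test accuracy: ", model.score(X_test, y_test))    # noticeably lower
```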
-
Just as these tools turned knowledge workers into amateur presenters and financial analysts, the ongoing democratization of machine learning invites them to become amateur data scientists. But as data and smarter algorithms proliferate enterprise-wide, how sustainable will that be?
"Amateur data scientists". That is an interesting phrase.
-
Passive AI/ML, by contrast, means the algorithms largely determine people’s parameters and processes for getting the job done. The software is in charge; the machines tell the humans what to do. Machines rule.
Less appreciated, I think.
-
- Jan 2018
-
-
But more and more, colleges and universities are getting rid of their botany programs, either by consolidating them with zoology and biology departments, or eliminating them altogether because of a lack of faculty, funds or sometimes interest. And at the same time, many trained botanists in federal agencies, such as the Bureau of Land Management, are nearing retirement age, and those agencies are clamoring for new talent.
Good example of a domain, like neuroanatomy, where the knowledge is disappearing and we are not training new faculty.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
data or metadata
I actually think the entire use of the term data by the ontology community is incorrect or perhaps incomplete. To me data are measurements, not statements about results.
-
When should new knowledge be qualified for a new term?
I find this an absolutely remarkable statement. The answer is: when one is created. No one has control over what new words enter the language. New terms do not come from the creators of dictionaries; they come from people, who create terms as they are needed.
-
New knowledge requires new vocabularies, but when do we stop?
Why would you think you would ever stop?
-
-
www.ebi.ac.uk www.ebi.ac.uk
-
give semantics to your data
May not be obvious to everyone what this means.
-
-
scholarlykitchen.sspnet.org scholarlykitchen.sspnet.org
-
There is no Totalitarian lock-in, there may be no Authoritarian lock-in. But there is Democratic lock-in, where the independent choice of an individual leads inexorably to a stem-to-stern series of services, all provided by the same company.
I don't see the difference between this and Apple, though, in a sense.
-
There is no Manichean struggle in scholarly communications with Good fighting Evil, though to look at some library blogs or the effluvium from SPARC you would think that the devil had found a home on the boards of directors of the larger commercial firms. What we have instead is a suite of activities that are generated from the ground up, with each participant looking out for their own interests, as we would expect, and as we do ourselves in our personal dealings. The scholarly community has no unambiguous borders, despite cries to build a wall to keep out the commercial immigrants, and even capitalism itself. No one speaks for the community as a whole. Regarding lock-in, what we have is a situation where vendors want it and customers don’t. This is the natural order of things. Let the battle begin.
These are very important points that are often forgotten about almost everything in academia, especially: No one speaks for the community as a whole.
-
-
dknet.org dknet.org
-
(kristenjensencv AT gmail.com
I don't think she works here anymore
-
dkNET contributes to the SciCrunch Registry, a dynamic database of research resources (databases, data sets, software tools, materials and services) of interest to and produced by biomedical researchers.
Update this text.
-
Suggest a resource (resources include software, organizations, databases, etc). Organisms and antibodies should not be submitted to the resource registry.
Remove. It's confusing.
-
For all other resource types, e.g., software tools, services
Redundant. Remove.
-
Bulk Resources
Move to the bottom and change text to "add resources in bulk, please contact..."
-
Resource
Should say "Digital resources".
-
Step 2. Enter the resource into the SciCrunch Registry
This should not say SciCrunch Registry. It should say: Choose a Resource Type
-
- Dec 2017
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
It depends primarily on better research and better statistical analysis, which will be brought about through improved education and training.
Also depends on more robust and rigorous science and public availability of data underlying experiments.
-
-
journals.plos.org journals.plos.org
-
Indeed, in journals that had an impact factor greater than ten, almost twice as many papers used incorrect statistics or failed to report statistics (10/107; 95% CI 5.2%–16.4%) as reported statistics correctly (3/69; 95% CI 1.5%–12.0%).
Wow. This surprises me.
-
This suggests that authors, referees, and editors generally are ignoring guidelines, and the editorial endorsement is yet to be effectively implemented.
Rather discouraging. Hope 3 years after this has been published the situation has changed.
-
-
www.humancellatlas.org www.humancellatlas.org
-
20 to 55 years
For brain, may be too old?
-
both isolated and in their tissue context, from major tissues and systems from healthy research participants of both genders (Section 2;Table 1). It will combine single-cell profiling of dissociated cells and single-nucleus profiling of frozen samples with spatial analysis of cells in the context of tissues
Would really help this group, I think, to have a presentation from a developmental biologist who can show how the body plan develops.
-
The HCA will be a foundation for biological research and medicine: a comprehensive reference map of the types and properties of all human cells and a basis for understanding and monitoring health and diagnosing and treating disease
Science enabler, not science itself. Important to keep that in mind.
-
We do not yet comprehensively know our cells—how they are defined by their molecular products, how they vary across tissues, systems, and organs, and how they influence health and disease
So it's not just the brain!
-
-
elifesciences.org elifesciences.org
-
Biomedical scientists have invested significant effort into making it easy to perform lots of experiments quickly and cheaply. These “high throughput” methods are the workhorses of modern “systems biology” efforts. However, we simply cannot perform an experiment for every possible combination of different cell type, genetic mutation and other conditions. In practice this has led researchers to either exhaustively test a few conditions or targets, or to try to pick the experiments that best allow a particular problem to be explored. But which experiments should we pick? The ones we think we can predict the outcome of accurately, the ones for which we are uncertain what the results will be, or a combination of the two?
A synopsis of our current methods for setting up big data projects
-
- Nov 2017
-
github.com github.com
-
gender
Sex (if you mean biological sex); gender is a social construct!
-
-
scholarlykitchen.sspnet.org scholarlykitchen.sspnet.org
-
However, it is not the sites that post these papers, it is academics themselves.
Yes, this is the important part.
-
We see this at work in the Kudos survey where 83% of academics felt that copyright policies should be respected, but at the same time 63% felt that despite such policies, academics should be allowed to post their papers on SNS.
That is correct. Because I think copyright to us means that no one can use our work in another article or another purpose without permission (as in the selling example above). We have always given copies of our papers to colleagues when they asked for them. So posting to SNS is no different.
-
Thus, you have: 1) practices that are legal under copyright but are contrary to scholarly culture; 2) practices that are accepted scholarly culture, but are not supported by copyright; and 3) practices in the middle where copyright supports or overlaps with scholarly culture. An example of 1) might be the taking of a CC-BY licensed work and selling it: definitely legal but definitely contrary to accepted scholarly norms. An example of 2) might be attributing 500 authors on a journal paper. Copyright law has clear guidelines as to what constitutes authorship and you’d struggle to argue that 500 individuals were joint authors (and therefore copyright owners) of 5,000 words. However, it is accepted scholarly culture to attribute large research groups on research papers. An example of 3) might be where a work is plagiarized (infringement of accepted scholarly culture) and copyright law allows the copyright owner to bring a court case based on infringement of copyright.
This is a very helpful paragraph in laying out the issues.
-
There are many commentators who put academics’ reluctant copyright transfer activity down to a desperation to get published at any cost. There is no doubt some truth in this.
I would say it is more likely indifference than desperation.
-
-
journals.plos.org journals.plos.org
-
Table 1. List of resources currently available as Community Resources and abbreviations used in text.
The authors apologize for incorrectly identifying the National Mouse Metabolic Phenotyping Centers in Table 1, row 9. We have submitted a correction to PLoS but they have elected not to correct it or formally note it as it does not change the results of the article. But to our colleagues at MMPC, we are very sorry!
-
- Oct 2017
-
cdn.substance.io cdn.substance.io
-
The eventual goal is to use Texture as an integral building block in modern and customised end-to-end publishing systems, where the document sits in the center (single-source) and is edited by all involved parties (author, editor, reviewer) in a collaborative way.
Should target automated deposition within pre-print archives.
-
Texture is designed to be embeddable. It provides a set of components that can be configured to fit different integration scenarios. For example, Texture will be soon available as a stand-alone desktop application (using Electron), ready to read and write JATS XML files from local filesystems. In contrast, Open Journal Systems (OJS) will integrate Texture into their web-based journal management software, where documents are read from and saved to a database.
This would be a welcome development.
-
Similarly, references can be initially tagged as mixed citations, but at a later stage, Texture can connect to third party APIs (e.g. DataCite, CrossRef) and convert them into fully structured element citations.
Very nice example; can we even do better than this?
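As a sketch of what that third-party lookup could look like, here is a minimal query of my own against the public CrossRef works endpoint, which returns structured metadata for a DOI that an editor like Texture could use to build an element citation (the DOI below is only a placeholder):

```python
# Fetch structured citation metadata for a DOI from the CrossRef REST API.
import requests

def crossref_metadata(doi):
    resp = requests.get(f"https://api.crossref.org/works/{doi}")
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": msg.get("title", [""])[0],
        "container": msg.get("container-title", [""])[0],
        "year": msg.get("issued", {}).get("date-parts", [[None]])[0][0],
        "authors": [a.get("family") for a in msg.get("author", [])],
    }

print(crossref_metadata("10.1371/journal.pbio.1002456"))  # placeholder DOI
```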
-
-
www.elsevier.com www.elsevier.com
-
any country that moves to gold open access first would need to pay to broadcast its articles
But aren't gold open access journals published by these same publishers? Why do they need to broadcast their articles?
-
Another reason APCs would rise is that the money flowing into the current system from outside the academic research community – i.e., journal subscriptions from industry – is estimated to be about 25 percent of the total. In a “pay-to-publish model,” systemic costs would need to be borne by the academic research community rather than shared with industry. This is because the costs of publishing in a gold OA system are covered entirely by those who publish articles – the academic research community – and not spread among consumers including the commercial sector, which accesses large amounts of research but publishes comparatively little. These two points have not been addressed in discussions to date but need to be worked through if gold open access is to be a viable, long-term solution globally.
This is an interesting point that I have not heard raised before. Would like to have this verified.
-
-
www.wsj.com www.wsj.com
-
AI companies are now targeting everything from criminal justice to health care. But we need much more research about how these systems work before we unleash them on our most sensitive social institutions.
Typical irrational exuberance when a new technology comes out followed by the inevitable crash. X-rays for measuring shoe size anyone?
-
Today’s AI is extraordinarily powerful when it comes to detecting patterns but lacks social and contextual awareness.
Danger of purely data driven decisions without human knowledge to challenge it.
-
As one RAND study showed, Chicago’s algorithmic “heat list” system for identifying at-risk individuals failed to significantly reduce violent crime and also increased police harassment complaints by the very populations it was meant to protect.
That's interesting; I would have thought that prediction is AI's strong suit.
-
-
www.pnas.org www.pnas.org
-
Principle 3. If publicly accessible repositories for data have been agreed on by a community of researchers and are in general use, the relevant data should be deposited in one of the repositories by the time of publication.
Map to the Repositories principle for the Scholarly Commons
-
-
github.com github.com
-
see episode XXXXX) probably know the problems likely to be encountered (TODO: could be an exercise).
Think about this.
-
-
link.springer.com link.springer.com
-
any health-specific domain expertise
There is Google Health. I know they've recruited some top people and it's been underway a few years.
-
A way of conceptualizing our way out of a single provider solution by a powerful first-mover is to think about datasets as public resources, with attendant public ownership interests.
Support for the commons, even with sensitive patient data.
-
If we are to see the true promise of artificial intelligence, a much more positive solution would be to heavily constrain the dataset and to introduce a competitive, open process for simultaneous technology development by a range of private, public, and private-public providers.
Something to consider.
-
It is important to note that, while giving DeepMind access to NHS data does not in principle preclude the same access being given to other companies in future, the willingness to recreate work, and ability to catch up, will diminish over time. Already, anecdotally, startups are reluctant to move in places where DeepMind has started deploying its immense resources.
These statements should be supported. I don't find them convincing.
-
DeepMind did not have the requisite approvals for research from the Health Research Authority (HRA) and, in the case of identifiable data in particular, the Confidentiality Advisory Group (CAG)
That seems rather shocking. Was this not considered research? See below. This was considered health care, even though they really didn't seem to know how to do it and the results were unreliable and untested. So if it wasn't research, it should have been.
-
Part of how this would be achieved technically, he indicated, was by making patient health data repurposable through an application programming interface termed FHIR (Fast Healthcare Interoperability Resources; pronounced ‘fire’); an open, extensible standard for exchanging electronic health records.
I wonder if UC uses this
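For anyone unfamiliar with FHIR, a minimal sketch of the REST pattern it standardizes: resources such as Patient are exposed at predictable URLs and returned as JSON. The base URL below is the public HAPI FHIR test server, used purely as an example and not an NHS endpoint.

```python
# Fetch the first Patient resource from a public FHIR R4 test server.
import requests

FHIR_BASE = "https://hapi.fhir.org/baseR4"  # public test server, example only

def first_patient():
    resp = requests.get(f"{FHIR_BASE}/Patient", params={"_count": 1},
                        headers={"Accept": "application/fhir+json"})
    resp.raise_for_status()
    bundle = resp.json()                      # FHIR searches return a Bundle
    entries = bundle.get("entry", [])
    return entries[0]["resource"] if entries else None

patient = first_patient()
if patient:
    print(patient.get("resourceType"), patient.get("id"))
```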
-
The document, which is not legally binding, talks about plans for DeepMind to develop new systems for Royal Free as part of a “broad ranging, mutually beneficial partnership… to work on genuinely innovative and transformational projects” [37].
Yet, earlier in the story, it said the Royal approached Google and not the other way around. Given the nature of what they claim they were doing, that seems odd to me.
-
Drawing boundaries around the patients who are in a direct care relationship is not likely to be as clean as saying that it extends only to those who contract AKI, since the purpose of the app also includes monitoring
Yes, that seems to be the point. If the algorithm is monitoring, then wouldn't it need to monitor everyone?
-
As DeepMind has acknowledged, “the national algorithm can miss cases of AKI, can misclassify their severity, and can label some as having AKI when they don’t” [30].
How does it do relative to clinicians?
-
Why DeepMind, an artificial intelligence company wholly owned by data mining and advertising giant Google, was a good choice to build an app that functions primarily as a data-integrating user interface, has never been adequately explained by either DeepMind or Royal Free.
Yes, that is a very good question.
-
Beloved in the UK, the NHS is a key part of the national identity.
Is the NHS really beloved in the UK?
-
It also elaborates on the problematic basis on which data was shared by Royal Free, namely, the assertion that DeepMind maintains a direct care relationship with every patient in the Trust
Yes, I would say that is problematic.
-
the article assesses the first year of a deal between Google DeepMind and the Royal Free London NHS Foundation Trust, which involved the transfer of identifiable patient records across the entire Trust, without explicit consent, for the purpose of developing a clinical alert app for kidney injury.
Without explicit consent? I hope there was at least some boilerplate consent somewhere that said the NHS can sell your data to third parties.
-
-
github.com github.com
-
Types of repositories: A variety of types of repositories are available, from specialized repositories developed around a specific domain, e.g., NITRC-IR, openfMRI, to general repositories that will take all domains and most data types, e.g.,Figshare, Dryad, OSF, DataVerse, Zenodo (Table 2). Many research institutions are maintaining data repositories for their researchers as well (e.g., University of California DASH).
Need to settle on a list of repositories.
-
-
github.com github.com
-
e.g., NITRC-IR, openfMRI, to general repositories that will take all domains and most data types, e.g.,Figshare, Dryad, OSF, DataVerse, Zenodo (Table 2).
Need to create a list of acceptable neuroimaging repositories for people to deposit their data.
-
(see Exercise 1: Finding a data repository for your data).
Need to create this or provide a link
-
-
-
F1: (meta) data are assigned globally unique and persistent identifiers
-
-
www.reproducibleimaging.org www.reproducibleimaging.org
-
modulde
Typo
-
-
www.force11.org www.force11.org
-
Object
Organize according to entity, research object, Other
-
Comments/Persistent Identifier Systems
Add another column: Governing document
-
- Sep 2017
-
www.uniprot.org www.uniprot.org
-
MLRGSARTYW
Annotating a protein sequence just to show that I can.
-
-
www.theatlantic.com www.theatlantic.com
-
He has previously shown that decapitated flatworms can retain their memories after they regrow a new brain, clearly showing that memory doesn’t depend on neurons.
But neurons aren't only in the brain.
-
-
-
“convey the patient’s full emotional state, especially the suffering resulting from the turmoil the person is undergoing”
Very true. The simple cartoons are almost insulting to someone in severe pain, I would think.
-
challenging clinicians to “look attentively enough and acknowledge that there is something more than the scientific list of signs to account for” and to “go beyond the set of nociceptive and affective features coming together in a specific but quite mechanistic configuration” when trying to understand another’s pain.
True of almost any type of disability.
-
-
www.ncbi.nlm.nih.gov www.ncbi.nlm.nih.gov
-
The Maf-family leucine zipper transcription factor NRL is essential for rod photoreceptor development and functional maintenance in the mammalian retina.
This is a test annotation
-
-
www.mendeley.com www.mendeley.com
-
Hong Hao1,{, Shob
I made this at Mendeley
-
-
watermark.silverchair.com watermark.silverchair.com
-
novel Reep6 isoform (termed Reep6.1)
Local annotation
-
The Maf-family leucine zipper transcription factor NRL is essential for rod photoreceptor development and functional maintenance in the mammalian retina.
Test annotation from Maryann
-
-
www.thenewatlantis.com www.thenewatlantis.com
-
She thinks that much of neuroscience has been seduced by what she terms the "dogma" of reductionism. "Everyone is convinced that if you can find the genetic molecular explanation for something now then you understand it and hence you can fix it, even though there is literally no evidence for this."
And they've been pursuing this doggedly for decades
-
But if your constituency, to use Marqusee’s term, is society, not scientists, then the choice of what data and knowledge you need has to be informed by the real-world context of the problem to be solved.
The motivation for the scholarly commons.
-
Like Visco and Fitzpatrick, Marqusee thinks that the absence of such accountability has led to “a system which produces far too many publications” and has “too many mouths to feed.”
Yes! I think the doubling of the NIH budget was one of the worst things that happened to biomedicine.
-