1,755 Matching Annotations
  1. May 2015
    1. “Why did we not at least have honesty? Why was there not some integrity with this when it came to the human life? Why could decent human beings do this to other human beings?”

      Science has to answer this question.

    1. 68,000 postdocs

      More importantly, do we have the oversight and training necessary for these postdocs? The recent failures in statistics and power analysis suggest that the universities do not have the capacity to train their graduate students or postdocs.

    2. It’s hard not to imagine that all of that competition is pushing some scientists to cut corners, or worse, so they can publish in the top journals they need to in order to earn tenure.

      I always said that the doubling of the NIH budget was the worst thing that could have happened to biomedical science.

    3. For a decade, science magazines have documented trends in employment of PhDs. According to an article last year in Times Higher Education, for example, there are as many as 68,000 postdocs in the United States at the moment – up from about 28,000 in 1979.

      Wow. That is a huge increase in the number of postdocs.

    1. Citation: Ioannidis JPA (2014) How to Make More Published Research True. PLoS Med 11(10): e1001747. doi:10.1371/journal.pmed.1001747
    1. The estimate that 85% of research is wasted referred only to activities prior to the point of publication. Much waste clearly occurs after publication: from poor access, poor dissemination, and poor uptake of the findings of research.

      Good quote

    1. One problem in finding out why GFT has run amok is that Google has never disclosed which 45 search terms it uses, nor how it weights them, to generate its forecast.

      Lack of transparency!

    2. The discovery has led them to warn of "big data hubris" in which organisations or companies give too much weight to analyses which are inherently flawed – but whose flaws are not easily revealed except through experience.

      I like this phrase: big data hubris

    1. Google Flu Trends now provides a prospective view of current influenza search patterns throughout the United States

      But I believe that this finding has not held up. Need to track down.

    1. INTEGRATED AGING STUDIES DATABANK AND REPOSITORY (IASDR)

      Check to see if in SciCrunch

    1. Research publication can both communicate and miscommunicate

      Great quote; of course, this statement is true of all communications. We have entire industries devoted to spin.

    1. To address this, the NIH is contemplating modifying the format of its 'biographical sketch' form, which grant applicants are required to complete, to emphasize the significance of advances resulting from work in which the applicant participated, and to delineate the part played by the applicant. Other organizations such as the Howard Hughes Medical Institute have used this format and found it more revealing of actual contributions to science than the traditional list of unannotated publications.

      Another role for annotation

    2. restrictions on the length of methods sections have been abolished to ensure the reporting of key methodological details; authors use a checklist to facilitate the verification by editors and reviewers that critical experimental design features have been incorporated into the report, and editors scrutinize the statistical treatment of the studies reported more thoroughly with the help of statisticians.

      But this needs to be an interactive process. The fact is that much is discovered after publication, and inconsistencies, omissions, and improvements need to be added.

    3. Consequently, we are reaching out broadly to the research community, scientific publishers, universities, industry, professional organizations, patient-advocacy groups and other stakeholders to take the steps necessary to reset the self-corrective process of scientific inquiry. Journals should be encouraged to devote more space to research conducted in an exemplary manner that reports negative findings, and should make room for papers that correct earlier work.
    4. If sufficiently meritorious applications to develop the DDI are received, a funding award of up to three years in duration will be made by September 2014.

      After I retire, I will have very pointed things to say about this program.

    5. the key publications on which the application is based (which may or may not come from the applicant's own research efforts). This question will be particularly important when a potentially costly human clinical trial is proposed, based on animal-model results. If the antecedent work is questionable and the trial is particularly important, key preclinical studies may first need to be validated independently.

      But if they appear in top-rated journals, how is this one reviewer going to be able to make this decision when experts could not?

    6. This will be incorporated into the mandatory training on responsible conduct of research for NIH intramural postdoctoral fellows later this year.

      Can't just be for postdocs; must be for all scientists. But beyond that, if research universities are supposed to be training our scientists, why is this problem endemic?

    7. and withhold details from publication or describe them only vaguely to retain a competitive edge

      This statement is a bit unfair. The Vasilevsky paper shows that scientists are not supplying enough detail to identify reagents used, but that is because the reporting standards are outmoded. This issue is being addressed in the Resource Identification Initiative, and authors are complying.

    8. Science has long been regarded as 'self-correcting', given that it is founded on the replication of earlier work. Over the long term, that principle remains true. In the shorter term, however, the checks and balances that once ensured scientific fidelity have been hobbled. This has compromised the ability of today's researchers to reproduce others' findings.
    1. “People overestimate what knowledge can do for you,” he said with a shrug.

      That is a remarkable admission for someone doing what he is doing. But it is the crux of the argument, in many cases. It's why whole body scans fell out of favor so fast.

    2. swelling in his belly

      That's not minor

    3. Federal patient privacy rules under the Health Insurance Portability and Accountability Act don’t apply to most of the information the gadgets are tracking. Unless the data is being used by a physician to treat a patient, the companies that help track a person’s information aren’t bound by the same confidentiality, notification and security requirements as a doctor’s office or hospital. That means the data could theoretically be made available for sale to marketers, released under subpoena in legal cases with fewer constraints — and eventually worth billions to private companies that might not make the huge data sets free and open to publicly funded researchers.

      That is quite remarkable; why wouldn't it be covered under HIPAA?

    4. In March, when Apple announced its ResearchKit initiative to allow people to share their information with researchers working on projects in asthma, heart disease, diabetes, breast cancer and Parkinson’s through various apps, more than 41,000 people volunteered within the first five days.

      We are all our own clinical trial

    5. “Getting the data is much easier than making it useful,” said Deborah Estrin, a professor of computer science and public health at Cornell University.

      I should make this my signature line

    6. More sophisticated tools in development, such as a smartphone app that analyzes a bipolar person’s voice to predict a manic episode, and injectables and implants that test the blood, offer greater medical benefit but also pose greater risks.

      I would argue in the case of the bipolar person, the risk is worth it; in the case of the healthy person, it may drive them into mental illness. Obsession.

    7. Some physicians, academics and ethicists criticize the utility of tracking as prime evidence of the narcissism of the technological age — and one that raises serious questions about the accuracy and privacy of the health data collected, who owns it and how it should be used.

      In truth, I happen to agree. Being obsessed with oneself should be one of the 7 deadly sins. Perhaps it is?

    1. (According to Nature, a third of all studies never even get cited, let alone repeated.)

      Statistic on the number of papers cited. Wish this article would give references!

    2. Because most of these studies were randomized controlled trials—the “gold standard” of medical evidence—they tended to have a significant impact on clinical practice, and led to the spread of treatments such as hormone replacement therapy for menopausal women and daily low-dose aspirin to prevent heart attacks and strokes. Nevertheless, the data Ioannidis found were disturbing: of the thirty-four claims that had been subject to replication, forty-one per cent had either been directly contradicted or had their effect sizes significantly downgraded.

      Here is the cost right here!

    3. “It’d be really great if the initial studies gave us an accurate summary of things. But they don’t. And so what happens is we waste a lot of money treating millions of patients and doing lots of follow-up studies on other themes based on results that are misleading.”

      Need to be able to reference these statements

    4. Palmer has since documented a similar problem in several other contested subject areas. “Once I realized that selective reporting is everywhere in science, I got quite depressed,” Palmer told me. “As a researcher, you’re always aware that there might be some nonrandom patterns, but I had no idea how widespread it is.” In a recent review article, Palmer summarized the impact of selective reporting on his field: “We cannot escape the troubling conclusion that some—perhaps many—cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a-priori beliefs often repeated.”

      Look this article up

    5. Sterling saw that if ninety-seven per cent of psychology studies were proving their hypotheses, either psychologists were extraordinarily lucky or they published only the outcomes of successful experiments.

      Or, we already know everything we need to know!
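      For scale, a quick back-of-envelope binomial calculation (my own arithmetic, not from the article) shows just how "extraordinarily lucky" that would be: even if every hypothesis tested were true and every study had a generous 80% power, a 97% positive rate across 100 studies is wildly improbable.

      ```python
      # Hedged back-of-envelope check: probability that at least 97 of 100
      # studies reach significance if all tested hypotheses are true and
      # each study has 80% power. The numbers are illustrative assumptions.
      from math import comb

      power, n, k = 0.80, 100, 97
      p = sum(comb(n, i) * power**i * (1 - power)**(n - i)
              for i in range(k, n + 1))
      print(f"P(>= {k} significant out of {n}) = {p:.1e}")  # ~6e-07
      ```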

    6. Ronald Fisher, who picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier.

      Unbelievable!

    7. The bias was first identified by the statistician Theodore Sterling, in 1959, after he noticed that ninety-seven per cent of all published psychological studies with statistically significant data found the effect they were looking for.

      It is rather outrageous that we've known about this since 1959 and have done nothing about it.

    8. For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.

      Interesting, but again points to a real need to accelerate this process.

    9. “Unfortunately, I couldn’t find the effect,” he said. “But the worst part was that when I submitted these null results I had difficulty getting them published.

      Important point to make again in the proposal as to why negative results are difficult to publish

    10. “One of my mentors told me that my real mistake was trying to replicate my work. He told me doing that was just setting myself up for disappointment.”

      Whoa! That is truly disturbing!

    11. But while Schooler was publishing these results in highly reputable journals, a secret worry gnawed at him: it was proving difficult to replicate his earlier findings. “I’d often still see an effect, but the effect just wouldn’t be as strong,” he told me. “It was as if verbal overshadowing, my big new idea, was getting weaker.”

      Another case for annotation

    12. Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

      Ouch!

    13. The test of replicability, as it’s known, is the foundation of modern research.

      Nice quote

    1. Given the idiosyncrasies of lab practices, that’s a concentrated risk profile. Wait for more labs to repeat the work, or conduct a full lab notebook audit

      Full lab notebook audit

    2. Academic investigators directly or indirectly pressured their labs to publish sensational “best of all experimental” results rather than the average or typical study; The “special sauce” of the author’s lab – how the experiment was done, what serum was used, what specific cells were played with, etc. – led to a local optimum of activity in the paper that can’t be replicated elsewhere and isn’t broadly applicable; or, Systemically ignoring contradictory data in order to support the lab’s hypothesis, often leading to discounting conflicting findings as technical or reagent failures.

      Good material for the proposal

    3. Only positive findings are typically published, not negative ones.

      Cost of not publishing negative results

    4. The company spent $5M or so trying to validate a platform that didn’t exist

      Costs of irreproducibility

    1. However, with reasonable efforts (sometimes the equivalent of 3–4 full-time employees over 6–12 months), we have frequently been unable to reconfirm published data

      Estimate of time spent on an experiment. Wondering whether having a communication channel from the article to the author would help? Almost a "live help" function.

    2. McDonald, R. J., Cloft, H. J. & Kallmes, D. F. Fate of submitted manuscripts rejected from the American Journal of Neuroradiology: outcomes and commentary. Am. J. Neuroradiol. 28, 1430–1434 (2007)
    3. Schroter, S, et al. What errors do peer reviewers detect, and does training improve their ability to detect them? J. R. Soc. Med. 101, 507–514 (2008)

      Look this one up

    4. or insufficient description of materials and methods

      For RRID proposal

    5. Our findings are mirrored by 'gut feelings' expressed in personal communications with scientists from academia or other companies, as well as published observations.

      Wouldn't it be great if these gut feelings were actually annotations?

    6. Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility.

      This seems to be at least one reproducible result!

    7. Interestingly, a transfer of the models — for example, by changes in the cell lines or assay formats — was not crucial for the discrepancies that were detected.

      That's actually a rather odd result, depending on the nature of the experiment.

    8. However, validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced.
    1. including the caudate nucleus, the putamen and the globus pallidus), 6, thalamus, 7, choroid plexus, and the optic chiasm (not shown).

      I'm annotating this in the HTML version in a new window.

    2. Segmentation of T1-weighted image using FreeSurfer. T1-weighted images were segmented using FreeSurfer in order to create masks defining 1, cerebellar cortex, 2, cerebellar white matter, 3,

      Try annotating this figure in the pop-up window; see if the figure displays.

    1. or the data showed inconsistencies that led to project termination

      Wondering whether annotations on pop-up figures are anchored to the same article.

    1. As part of the CCDB project, we are developing coordinate systems for individual neuron types, so that subcellular data can be placed in a spatial context relative to the other cellular components.

      We got farthest with this goal using the Subcellular Anatomy Ontology and also the Whole Brain Catalog. The latter project allowed docking of subcellular structures into a cellular scaffold, although the project is no longer active.

    2. These projects include the Mouse Atlas Project at the University of California, Los Angeles

      See the NIF Registry for a complete list of tools; many of these projects are still in existence.

    3. The Smart Atlas is a java-based GIS tool currently built on top of a commercially available brain atlas

      Again, due to various combinations of technology, licensing issues, and politics, this tool (which was far and away my favorite) never made it into full production.

    4. Each CCDB concept is mapped to its corresponding UMLS ID.

      Again, this was something that was never done completely nor particularly well. When the developers would update the database, they would often lose the mappings. At the time, we also never thought in URIs, just unique IDs. Took me a long time to learn the difference and I'm still not sure that one is better than the other.

    5. Because the set of operations on a tree is well understood in computer science, this models a single neuron well enough to enable questions like "Find the diameter distribution of the third-order branches of those Purkinje neurons that have more than one primary branch". Unlike searching on descriptive attributes, which requires access to an explicit representation in the schema, a user can potentially query for any property that can be computed from a tree structure.

      One of my greatest disappointments in the CCDB was that we never fully implemented the unique data types in the production database. They remained, unfortunately, just demonstrations. I learned a valuable lesson about using technology that was experimental (I think it was a new feature in Oracle) and in working with computer scientists. Computer scientists need to develop new cutting edge technology for their career advancement; they are less interested in all the hard work that goes into implementing these features in a production system. But biologists need stability. I no longer make this mistake, but it was a hard lesson to learn!
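      To make the quoted example concrete, here is a minimal sketch in Python (my own illustration, not the CCDB's actual Oracle data types) of how a tree representation supports that Purkinje-cell query; all names are hypothetical.

      ```python
      # Illustrative tree model of a dendritic arbor; not the CCDB schema.
      from dataclasses import dataclass, field

      @dataclass
      class Branch:
          diameter: float                      # branch diameter, e.g., in microns
          children: list = field(default_factory=list)

      @dataclass
      class Neuron:
          name: str
          primary_branches: list               # first-order branches off the soma

      def branches_at_order(neuron, order):
          """Order 1 = primary branches; order 2 = their children; and so on."""
          level = list(neuron.primary_branches)
          for _ in range(order - 1):
              level = [child for b in level for child in b.children]
          return level

      def third_order_diameters(purkinje_cells):
          """The quoted query: diameter distribution of third-order branches
          of Purkinje neurons with more than one primary branch."""
          return [b.diameter
                  for n in purkinje_cells
                  if len(n.primary_branches) > 1
                  for b in branches_at_order(n, 3)]
      ```

      The point of the quote survives the toy example: any property computable from the tree can be queried, without needing an explicit column for it in the schema.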

    6. CCDB; www.ncmir.ucsd.edu/CCDB

      The updated link is: http://ccdb.ucsd.edu. However, CCDB has merged with the Cell Image Library as of 2014.

    7. However, relatively few databases have investigated more rigorous modeling of complex imaging data, so that its content is exposed to direct query.

      I believe that this is still largely true, although the Allen Brain Atlas has certainly gone the farthest at a production level to create spatially referenced and queryable data.

    8. To quote from the NIH data-sharing policy guidelines: "...sharing data reinforces open scientific inquiry, encourages diversity of analysis and opinion, promotes new research, makes possible the testing of new or alternative hypotheses and methods of analysis, supports studies on data collection methods and measurement, facilitates the education of new researchers, enables the exploration of topics not envisioned by the initial investigators, and permits the creation of new datasets when data from multiple sources are combined".

      For some examples that show this to be true, see our updated commentary: http://www.nature.com/neuro/journal/v17/n11/full/nn.3838.html

    9. Biomedical Informatics Research Network (BIRN).

      The original BIRN project described here ended in 2008, although a version of it continued and may still be active.

    10. Kotter
    11. These range from sociological hurdles involved in sharing hard-won data that has not yet been fully utilized, to technological problems involved in representing, storing and accessing large amounts of non-standardized, complex data.

      Amazing that 12 years later, these issues are still the same.

    12. National Partnerships for Advanced Computational Infrastructure (NPACI; www.npaci.edu)

      Also no longer in existence: the URL no longer works.

    13. Human Brain Project

      The HBP was discontinued somewhere around 2005. Many of the projects that it originally supported, however, continue to exist. The URL listed is no longer functional.

    1. Therefore, Congress should create a Center for Patient Safety that would set national safety goals and track progress in meeting them;

      This suggestion does not inspire confidence. Why do we need to set national safety goals? Shouldn't the goal be mistake-free health care?

    2. One of the report’s main conclusions is that the majority of medical errors do not result from individual recklessness or the actions of a particular group--this is not a “bad apple” problem. More commonly, errors are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them.

      I suppose that is comforting. Same is true in science, I think, although some would argue that the perverse incentives lead to less than optimal process.

    3. The Quality of Health Care in America Committee of the Institute of Medicine (IOM) concluded that it is not acceptable for patients to be harmed by the health care system that is supposed to offer healing and comfort--

      Not to be too cynical, but "duh"!!!!

    1. A tool to improve reproducibility of data intensive science, recording progress as you work with tools such as R and Python.
    1. “.. the majority of medical errors do not result from individual recklessness or the actions of a particular group–this is not a “bad apple” problem. More commonly, errors are caused by faulty systems, processes, and conditions that lead people to make mistakes or fail to prevent them.”

      Great quote for reproducibility

    2. Of course I do not knowingly have any mistakes in print. But I could have a mistake out there I don’t know about.

      The importance of post-publication annotation. To me, this is the critical missing piece.

    3. The reason my syllogism doesn’t eliminate science as a paragon of correctness is that – contrary to the popular view about lone geniuses – science is not about individuals or single papers. It is about the community and the total body of evidence. One individual can be right, wrong, a crack-pot, a genius, mistaken, right for the wrong reasons, and etc. But the community as a whole (given time) checks each other and identifies wrong ideas and mistakes.

      The essence of science

  2. Apr 2015
    1. Outside the triple, information is lost and a literal is just data without any meaning.

      That does seem to be a problem.

    2. They cannot be subjects in RDF triples – they are always the objects used to describe a resource.
    3. Literals are nodes in an RDF graph, used to identify values such as numbers and dates by means of a lexical representation.

      Yeah! At last a definition I can understand!
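      A minimal sketch with rdflib (my example, not from the tutorial being annotated) of the two rules above: a literal takes its meaning from the triple around it, and it can only occupy the object position.

      ```python
      # Hypothetical example: the EX namespace and property names are made up.
      from rdflib import Graph, Literal, Namespace
      from rdflib.namespace import XSD

      EX = Namespace("http://example.org/")
      g = Graph()

      # The literal 12.5 sits in the object position; the subject and
      # predicate (URIs) are what give it meaning.
      g.add((EX.neuron1, EX.somaDiameter, Literal(12.5, datatype=XSD.double)))

      for s, p, o in g:
          print(s, p, o)
      # Detached from the triple, 12.5 is just data; inside it, it is the
      # soma diameter of a specific resource. Per the RDF spec, a literal
      # may never be the subject of a triple.
      ```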

    1. We also hope that neuroscientists will see the value of knowledge frameworks as a critical part of neuroscience in the digital age and actively participate in the refinement and utilization of these frameworks to advance the practice of neuroscience.

      Still waiting!

    2. The new SAO imports multiple NIF modules: NIF Cell, Molecule, Anatomy and Subcellular structure, directly from NIF, rather than recreating them. Instead, the SAO confines itself to providing the relationships among these classes, e.g., Subcellular Structure is located in Brain Region; Subcellular structure has part Molecule. Subcellular structure is part of Cell.

      I had always hoped to redo the SAO, but never got around to it.

    3. For example, the SAO was imported to cover subcellular structures in the nervous system

      We've subsequently done quite a few more upgrades to NIFSTD, e.g., replacing SAO with GO, and our gross anatomy module with UBERON.

    4. NIFSTD (http://purl.org/nif/ontology/nif.owl)

      Can also be found in BioPortal

    5. the Neuroscience Information Framework (NIF)

      Still running strong in 2015!

    6. The SAO has been deployed through Jinx, a segmentation program designed principally for looking at micrographs that are the result of electron tomography experiments (Martone et al., 2008).

      This tool is no longer available. It was a good idea, but never performed well enough for regular use. We also found that researchers did not want to go through the trouble of creating structured annotations. We did not have autocomplete available at the time, and one had to do a lot of manual look-up. The results were really nice though...

    7. Cell Centered Database (CCDB; http://ccdb.ucsd.edu),

      The Cell Centered Database has merged with the Cell Image Library, although the website is still functional.

    8. Subcellular Anatomy Ontology

      The Subcellular Anatomy Ontology has now been subsumed into the Gene Ontology Cell Component ontology (Roncaglia et al., 2013).

    9. We then create a class “GABAergic neuron” defined by a restriction that states “it is any neuron that has neurotransmitter GABA”. A reasoner then will classify all neurons for which this condition holds under that class (see Figure 5; Larson et al., 2007).

      See also Imam et al. 2012 for a more extensive example in NIFSTD.
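      For readers who want the pattern spelled out, here is a hedged sketch with owlready2 (my own illustration; the class and property names are made up, not NIFSTD identifiers):

      ```python
      from owlready2 import Thing, ObjectProperty, get_ontology

      onto = get_ontology("http://example.org/neuro.owl")  # hypothetical IRI

      with onto:
          class Neuron(Thing): pass
          class Neurotransmitter(Thing): pass
          class GABA(Neurotransmitter): pass

          class has_neurotransmitter(ObjectProperty):
              domain = [Neuron]
              range  = [Neurotransmitter]

          # Defined class: "any neuron that has neurotransmitter GABA".
          class GABAergicNeuron(Neuron):
              equivalent_to = [Neuron & has_neurotransmitter.some(GABA)]

          # A primitive class asserted to satisfy the condition...
          class BasketCell(Neuron): pass
          BasketCell.is_a.append(has_neurotransmitter.some(GABA))

      # ...gets classified under GABAergicNeuron once a reasoner runs:
      #   from owlready2 import sync_reasoner
      #   with onto: sync_reasoner()   # bundled HermiT reasoner; needs Java
      #   assert GABAergicNeuron in BasketCell.ancestors()
      ```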

    10. When annotating data, the essential point is not whether the researcher agrees wholeheartedly with the entity definition, but that the definition is clear and can be applied correctly.

      Reflects a certain amount of naivete.

    11. Our goal at first was not, therefore, to encapsulate within the ontology everything that we know about biological systems, but rather to create a structure that enabled clear communication about data

      I still maintain that this is a good rule of thumb.

    12. Through these relationships, the SAO allows us to relate macromolecules to subcellular structures, parts of cells to a whole cell or to higher-order brain structures.

      These relationships were not subsumed by GO Cell Component.

    13. that takes advantage of first-order logic

      Not sure that this statement is correct; that is, does an ontology require the use of first-order logic?

    14. experimental methodologies tend to reveal only a limited aspect of nervous system organization

      "tend"? I wonder why I used such a weak word here. We do not have any experimental methodology that reveals a complete picture of nervous system organization.

    1. The premise that neuroscience will benefit from routine and universal data sharing has been around since the early days of the Internet.

      I annotated this in Chrome via the VPN.

    2. The launch of the US BRAIN and European Human Brain Projects coincides with growing international efforts toward transparency and increased access to publicly funded research in the neurosciences.

      Annotation works with the Hypothes.is bookmark.

    1. This quick conversion of cells allows for the rapid testing of any number of potential treatments for a disease.

      Of course, this statement presupposes that the neuron itself is a primary target of the disease process.

    2. Neuroprotective effects

      I don't think that this is the correct subject heading, as I would expect that it would refer to the neuroprotective effects of cholinergic neurons. I would probably label this section as "Trophic factors" or something to that effect.

    3. Basal forebrain cholinergic neurons are homologous

      Better word is "homogeneous", although I'm not sure that this statement is well supported.

    4. dendritic fields that project into almost all layers of the cortical region

      This statement is not correct. Should be "axonal projections", not dendritic fields.

    1. One of the bigger changes going from engineer to manager was to redefine what I meant by the question: how are we going to do this? As an engineer I would deconstruct that question to ask what is the software we need to build, and the technical barriers we need to remove, to achieve our goals. As a manager I would deconstruct that question to ask what is the process by which we achieve our goals.

      Nice characterization of the "engineer vs scientist" divide as well. Add to the scientist persona the idea of constant experimentation and thinking out loud, and, perhaps too, a tolerance for failure that the engineer cannot afford.

    1. Now some news organizations are instead placing higher value on being right even if that means not being first in reporting a story.

      I have seen no evidence that this is true; in breaking stories, all sorts of misinformation is put out very quickly by all the major news outlets, just to be first it seems.

    1. There is now a strong body of evidence showing failure to comply with results-reporting requirements across intervention classes, even in the case of large, randomised trials [3–7]. This applies to both industry and investigator-driven trials.

      Compliance not mechanism

    2. “the registration of all interventional trials is a scientific, ethical, and moral responsibility”

      World Health Organization's statement

    1. Patient groups, lastly, could write open letters to all companies and researchers withholding methods and results of trials on treatments taken by their members, represent their constituencies by holding individuals to reasonable account, and again help improve compliance.

      Hmmm. Perhaps annotation would be a better mechanism.

    2. This negates a key defence commonly cited by trialists and sponsors when facing calls for greater transparency: that journals reject “negative” results. All trials can now be reported, immediately, using clinicaltrials.gov as a first or last resort, if the trialist is willing. The question remains: how can we ensure this is done?

      Raises the question of whether regulatory agencies could use annotations, as part of Resource Watch, to check whether data that should have been released actually was.

    3. However, it is worth noting that academic journal publication may ultimately prove to be a red herring, as an indicator of transparency. Academic publishing decisions can be arbitrary, and introduce lengthy delays in access to knowledge. Furthermore, there is a growing body of evidence demonstrating that journals often fall short of the basic expected standards for reporting of clinical trials. It is commonplace to find that primary outcomes have been switched, for example [7]; findings are routinely “spun” [8]; and compliance with reporting standards such as CONSORT is highly variable. When compared with the long and formal structured Clinical Study Reports created for all industry-sponsored trials, academic papers have been shown to be incomplete and inconsistent [9].

      Another damning statement.

    4. Anyone withholding the methods and results of a clinical trial is already in breach of multiple codes and regulations, including the Declaration of Helsinki, various promises from industry and professional bodies, and, in many cases, the United States Food and Drug Administration (FDA) Amendment Act of 2007. Indeed, a recently published cohort study of trials in clinicaltrials.gov found that more than half had failed to post results; and even though the FDA is entitled to issue fines of $10,000 a day for transgressions, no such fines have ever been levied [3].

      Sticks don't work if they aren't used. I find this rather disturbing.

    5. The best currently available evidence shows that the methods and results of clinical trials are routinely withheld from doctors, researchers, and patients [2–5], undermining our best efforts at informed decision making.
    1. This week there was an amazing landmark announcement from the World Health Organisation: they have come out and said that everyone must share the results of their clinical trials, within 12 months of completion, including old trials (since those are the trials conducted on currently used treatments).
    1. a review system so flawed that a fair review is not possible,

      This is my feeling as well. But I am wondering what specifically is behind this statement.

    2. Good people are leaving academic science, forced out by a lack of money, inequity in decision making, and hypocrisy in career recognition and advancement. Many are tired of playing a game whose rules change before you even know what the rules are.

      I certainly feel this way. Maybe I just couldn't cut it, but I felt I had a role to play. It was just that I could no longer stand trying to keep that role supported.

    3. by a seriously flawed reviewing system

      Hear, hear.

    1. You know what chimpanzees don't like? Getting filmed by a drone without their permission, that's what.

      Must add this to my list of favorites.

    1. In contrast to traditional assumptions about decision-related attitude change, more recent models of cognitive dissonance suggest that the psychological distress associated with cognitive dissonance can begin to be resolved rapidly,

      Test

    1. "The margins of manuscripts often contain medieval and early modern reactions to the text, and these can cast light on what our ancestors thought about what they were reading," Williams explained. "The 'Black Book' was particularly heavily annotated before the end of the 16th century."

      Great quote about annotation; as far back as the 13th century.

    1. dster CFTC Contributor CFTC Apple iOS 8.3 Has Nasty New Bug Gordon Kelly Contributor Apple iOS 8.3 Admits To Massive Problem Gordon Kelly Contributor Apple Releases i

      Well, I don't have Touch ID.

    2. Gordon Kelly Contributor I write about technology's biggest companies full bio → Opinions expressed by Forbes Contributors are their own. FOLLOW Comment Now Follow Comments CFTC​Voice: The Top 5 Signs Of A Fraudster CFTC Contributor CFTC Apple iOS 8.3 Has Nasty New Bug Gordon Kelly Contributor Apple iOS 8.3 Admits To Massive Problem Gordon Kelly Contributor Apple Releases iOS 8.3 Update, Beats Mystery Continues Gordon Kelly Contributor iPhone 6 Vs Galaxy S6 And Galaxy S6 Edge: Samsung Gatecrashes Apple Gordon Kelly Contributor TECH 4/09/2

      Whoa, that's creepy.

    1. Led Zeppelin was the definitive heavy metal band. It wasn't just their crushingly loud interpretation of the blues -- it was how they incorporated mythology, mysticism, and a variety of other genres (most notably world

      Should I expect to be able to annotate Pandora?


    1. As structuring knowledge for machines is akin to writing code, many classes and relationships in ontologies make little inherent sense to a domain scientist, yet most ontology editing tools expose this level of complexity.

      First-pass knowledge extraction interface. Getting scientists to semi-structure their information when they can.

    2. First, the domain is a poor candidate because the domain of all entities relevant to neurobiological function is extremely large, highly fragmented into separate subdisciplines, and riddled with lack of consensus (Shirky, 2005).

      Probably a good thing to add to the Complex Data integration workshop write up

    1. He had a Chinese characteristic, which was that when something bad happened, he smiled.

      I wonder if this is still true?

    2. Today it’s a famous course, but in those days it was a laughable idea, alarmingly American.

      Great quote, although I'm not sure why this idea is "alarmingly American"

    1. Although formal ontologies can be readily processed by computers, their complexity discourages casual human use.

      Yes, indeed.

    2. Keeping type-maps separate from the events allows the user to edit, combine, or eliminate tags based on the application.

      I think that this type of approach will be necessary for tagging methods.

    3. In semi-structured tagging, users select tags from a tag hierarchy, but may add tags within the hierarchy as needed. By reusing existing tags, users gain the structural benefits of ontologies while still retaining the flexibility of open tagging

      Yes, I believe that this is the best compromise.
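      As a thought experiment, the compromise might look something like this sketch (entirely my own; not from the paper): new tags must attach to an existing node, so the hierarchy stays coherent while users remain free to extend it.

      ```python
      # Illustrative semi-structured tag hierarchy; all names hypothetical.
      class TagHierarchy:
          def __init__(self):
              self.parent = {"root": None}     # tag -> parent tag

          def add_tag(self, tag, parent="root"):
              """New tags must attach to an existing tag in the hierarchy."""
              if parent not in self.parent:
                  raise KeyError(f"unknown parent tag: {parent}")
              self.parent.setdefault(tag, parent)   # reusing a tag is a no-op

          def path(self, tag):
              """The structural payoff: every tag resolves to a full path."""
              chain = []
              while tag is not None:
                  chain.append(tag)
                  tag = self.parent[tag]
              return "/".join(reversed(chain))

      tags = TagHierarchy()
      tags.add_tag("stimulus")                     # reuse a curated tag...
      tags.add_tag("auditory", parent="stimulus")  # ...or refine it in place
      print(tags.path("auditory"))                 # root/stimulus/auditory
      ```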

    1. Visualization and Modeling Laboratory research projects integrate machine learning, visualization, statistical analysis and advanced data-handling to understand large scale data sets. We focus on building tools that are usable not only in our lab, but for other researchers. Here is a brief overview of some of our projects:

      Should probably add these to the SciCrunch Registry

    1. MSeqDR: the Mitochondrial Disease Sequence Data Resource Consortium

      Add this to the SciCrunch Registry

    1. This Web portal provides users with a flexible and expandable suite of resources to enable variant-, gene-, and exome-level sequence analysis in a secure, Web-based, and user-friendly fashion. Users can also elect to share data with other MSeqDR Consortium members, or even the general public, either by custom annotation tracks or through the use of a convenient distributed annotation system (DAS) mechanism

      Need to look into the DAS and portals like this that are annotating sequences.
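      As a reminder to myself of what a classic DAS call looks like, here is a rough sketch (DAS 1.x as I recall it; the server URL and data source below are placeholders, not the actual MSeqDR endpoint):

      ```python
      import requests
      import xml.etree.ElementTree as ET

      # Hypothetical DAS source; servers expose /das/<source>/features.
      base = "http://example.org/das/hg19"
      resp = requests.get(f"{base}/features", params={"segment": "chrM:1,16569"})
      resp.raise_for_status()

      # A DAS features response is XML (DASGFF); each FEATURE element is one
      # annotation on the requested segment.
      root = ET.fromstring(resp.content)
      for feature in root.iter("FEATURE"):
          print(feature.get("id"), feature.get("label"))
      ```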

    1. He worked for Gunner A. Olsen Corp. out of New York City building radio towers for the government and built nuclear power plants.

      This is my uncle's business. Am marking this for my mother.

    1. This memorial statue is of a young man, William (Bill) Spenncke, from Glen Cove, who fought in

      Finally tracked down the information on the Glen Cove doughboy statue, the one my brother and I call the "Mourning Doughboy".

    1. The Egyptians made their share of mistakes so I’m entitled to mine.

      I'm sorry. I would have thought that you would have learned something after 5000 years of trying.

    1. Gunnar A. Olsen of the Belmont Electric Corp.

      Gunnar A. Olsen was my uncle. He had his own corporation, the Gunnar A. Olsen Corporation. He may have been subcontracted by Belmont Electric, but he was not part of that company.

    1. The specialist benefits most from the increased dissemination of dark data; therefore, why not liberate the research report from its review-like wrapper, perhaps even from the main text altogether? We could strip the paper down to its minimal components: the methods, the data and enough well-chosen keywords to enable the manuscript to come up in response to relevant searches.

      Yes! We should optimize the container for the content!

    2. http://www.arjournals.com/ojs/

      Does not look heavily used, although it is still in existence.

    3. Why sully a CV with papers from the ‘Journal of Failed Experiments’? Don’t we want our colleagues (and especially our competitors) to believe that we succeed at every undertaking?

      Same reason pharma hates the term "failed drugs".

    4. Thus, although the arguments in favor of small-unit publishing all seem to revolve around benefits to the community, the costs of generating these small units would fall on individual authors. If the community is to reap the benefits, then the costs to the individual authors must be driven to zero – or associated with some reward.

      Will they do it?

    5. Time spent publishing small papers is time not spent developing big ones
    6. In these cases, the problem isn’t that the data are unpublishable in any journal, but that they are unlikely to be published by journals that boost the reputation of the author.

      Journal's interest vs science's interest. A problem that I struggle with as an Editor in Chief.

    7. Thus, journal articles are research reports wrapped in literature reviews

      Another nice quote

    8. Scientific papers are not historical records of the scientific process; rather, they are ahistorical texts designed to maximize their chances of acceptance by the editors and reviewers of high-impact journals.

      Nice quote.

    9. but because journal editors are obsessively vigilant about rejecting papers that fall below a threshold of ‘novelty’, these papers become unpublishable in practical terms

      The Ingelfinger rule.

    1. NF-κB p65 subunit from Abcam (ab7970

      This antibody has RRID:AB_306184.

      This antibody did not appear to be included in the specificity tests on NF-κB antibodies by Herkenham et al. (2011), doi:10.1186/1742-2094-8-141, or by Slotta et al. (2014), doi:10.1369/0022155413515484.

      However, given the widespread problems with NF-κB antibodies, one might be concerned.

    2. p65 from Chemicon (MAB 3026

      This antibody has RRID:AB_2178887.

      Flag: A concern about specificity of this antibody in brain has been noted in Herkenham et al. (2011) doi:10.1186/1742-2094-8-141.

      An issue with this antibody has been raised in Slotta et al. 2014: doi:10.1369/0022155413515484.

    1. To enable what I’m showing here, and if you’re running our Chrome extension, you’ll need to make sure you’re allowing access to local file URLs. Start by opening up the extensions setting

      Does this only work with the Chrome extension? I'm having trouble annotating PDFs using Via in Safari.

  3. Mar 2015
    1. Novartis Institutes for BioMedical Research (NIBR)

      Add these to SciCrunch too

    1. Microscopy product ID: 1132

      This cell was misidentified as a basket cell. I think it is more likely a candelabrum cell. Laine and Axelrad, 1994

    1. monoclonal pSer-536 clone 93H1

      This antibody has RRID:AB_331284.

    2. NLS-RelA (monoclonal MAB 3026 clone 12H11, 1∶250; Millipore, Darmstadt, Germany

      This antibody has RRID:AB_2178887.

      Flag: A concern about specificity of this antibody in brain has been noted in Herkenham et al. (2011) doi:10.1186/1742-2094-8-141.

      An issue with this antibody has been raised in Slotta et al. 2014: doi:10.1369/0022155413515484.