122 Matching Annotations
  1. Aug 2019
    1. When social mechanisms fail, decay, or are destroyed, we absolutely would expect to see certain forms of capitalism arising.

      Makes me think of all the startups that essentially do what the community used to do - loan things, do odd jobs, etc.

  2. Jul 2019
    1. Note that mentions tagged by “Incorrect” and “InsufficientMetaData” are deemed not legitimate and it is desirable that RDW and RRID-by-RDW not identify them.

      but there's no way any analysis restricted to the article text will ID this, because you have to resolve the RRID to figure that out, right?

    2. Papers containing SCR RRID

      Why would papers have a higher percentage of SCR RRIDs? Where are the other RRIDs found?

    3. Summary and Conclusions

      The conclusion is in the paragraphs above, titled "Comparison". Perhaps this para should be titled "Future Directions" or something?

    4. The Use of RRIDs vs Data Citation

      This section seems like it should be in the introduction.

    5. corpi

      correct plural is corpora

    6. where authors did not report an RRID for the resource that they used, constituting 37% of all RRID mentions identified by SciBot

      Ok so Scibot is identifying digital resources from a list & flagging when there's no RRID but there probably should be?

    7. RDW recognized mentions of digital resource names, RRIDs or URLs from a total of 701110 articles

      There are 190000 RRIDs in 13000 articles. RDW found RRIDs (it doesn't say how many) in 701110 of the 3304320 articles screened (2341133 + 738910 + 72493 + 151784). So resources are mentioned in about 21% of articles, based on extraction. But presuming all of the 13000 RRID-containing articles were included in the 3 million, the RRID prevalence is closer to 6% (190000/3304320), while RRID mentions of digital resources number 26748, or 0.8%. So 4/5 of articles don't mention digital resources at all?
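
      A quick back-of-envelope check of those figures in Python (every number here is taken from the note above and the paper's corpus counts, not independently verified):

        total_articles = 2341133 + 738910 + 72493 + 151784  # = 3304320 articles screened
        mention_articles = 701110   # articles where RDW recognized a resource mention
        rrid_mentions = 190000      # RRID mentions, spread across ~13000 articles
        digital_rrids = 26748       # RRID mentions of digital resources

        print(f"{mention_articles / total_articles:.1%}")  # ~21.2%: articles mentioning a resource
        print(f"{rrid_mentions / total_articles:.1%}")     # ~5.7%: the "closer to 6%" figure
        print(f"{digital_rrids / total_articles:.2%}")     # ~0.81%: digital-resource RRIDs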


    1. In October, Chetty’s institute released an interactive map of the United States called the Opportunity Atlas, revealing the terrain of opportunity down to the level of individual neighborhoods.

      I should look at this for the Bay Area

    1. WE’RE MOSTLY JUST DESPERATELY FLAILING AROUND LOOKING FOR SOLUTIONS HERE.

      The "it's all complexity" take would be that we aren't even really looking for solutions, but rather decision-makers are all advocates (because it takes so much effort to have an informed opinion, effort which is uncompensated unless you're a professional advocate) who use evidence of problems to burnish their personal reputations.

    2. This suggests a revealed preference that elderly people are willing to tolerate a certain fall probability in order to save money and convenience.

      How legible is risk prevention? Might this just be complexity again?

    3. might markets just not work?

      Same thing in publishing, too. Lots of people say journal costs are inflated & they can run one cheaper. They're right, but in a market economy prices reflect more than just costs: they reflect economic value, which includes things like brand value and prestige and also, as this & the other posts argue, an inflation due to productivity rising in adjacent market sectors. So the market failures seem to come from a) the difficulty of knowing how much something should cost (having comparables and not having too much complexity to understand) and b) too high a value ascribed to status or prestige (which, if understood as a social-consensus proxy that reduces the complexity of actually understanding the business & what its value to the consumer should be, collapses b into a).

    4. it seems very possible to get the same improving life expectancies as the US without octupling health care spending.

      Support for socialism is rising among young people in the US. Is the support rising because people are looking at the data themselves and coming to the conclusion that socialism is better, or because there's more of this kind of information available for use by advocates to make the case for socialism? If you look at the data yourself, you don't necessarily land on socialism as the solution, as PG's and Tyler's posts make clear.

    1. In so far as the primary feature of being human is human emotions

      So basically Kegan is talking about emotions, not understanding, when he's talking about the process being the fundamental thing.

    1. utilitarianism, deontology and virtue ethics; it is the latter that has been neglected and overlooked and that needs to be revived to create a politics of the common good.

      Definitions: utilitarianism - the right thing is that which leads to the right consequences; deontology - the right thing is that which is arrived at by following the right rules; virtue ethics - the right thing is that which is done by a good person.

    2. Philosophers like Iain McGilchrist, James Williams and Matthew Crawford

      Philosophers who have something to say about attention.

    3. Fourth, tribes often coalesce around shared interests or expertise but become so encultured within their domain of expertise that they struggle to communicate with other tribes – the academic world is awash with this problem. What we seem to need today are expert generalists, which I have characterised as follows:

      So academic tribes would be an example of the federated type, but they find it as hard to collaborate on ideas as nations do on emissions.

    4. The deep roots of tribal affiliation are still place and family and religion and other shared affiliations, interests and values, but what is emerging today is a kind of reflexive tribalism in which the nature and purpose of tribes has become an open question for all of us.

      This and other writings about tribalism seem to be arguing for a global approach, but I wonder if that's just fighting our biology. If we accept tribalism as part of who we are, does this instead argue for a greater focus on the establishment of a commons in the Ostrom sense, where boundaries are clear & control is local? These are by definition small, so they'd have to be federated, and I wonder to what extent federated commons export their problems. They obviously can't handle global issues like emissions leading to climate change, so what do we do about our genetic legacy? If only a superintelligence could grasp everything needed to understand global issues, does that mean AI, with all its attendant risks, is the only way?

  3. Jun 2019
    1. The problem is without community level chapters in personal histories, we’re all driven to represent our values alone.

      I wonder if this is what drove @TheAnnaGat to create @whatstheii?

    1. a new, more constrained game with simpler rules

      This is what @vgr would call an "escaped reality": https://www.ribbonfarm.com/2015/01/16/on-the-design-of-escaped-realities/ Here you're not doing something, but refraining from doing something. When you crash out, you'll crash out to a local reality - your town & your local sphere of influence.

    2. Their goal is to “make you a better, more informed consumer of political news by showing you indicators that the news you are reading may be affected by (1) adherence to narratives and other abstractions, (2) the association/conflation of topics and (3) the presence of opinions.”

      Great goal, but I'm still not clear how to use ET on an ongoing basis.

    3. reconnect offline by transforming our inspiring internet communities into IRL friendships and collaborations

      Relevant to @whatistheii and emergentism and all that.

    4. In fact, this has already begun to happen in academia and in new media.

      This is his proof point for how the younger generation will deal with traditional sources of truth losing their authority.

    5. Going local is a potential antidote to conspiratorial spiraling.

      Going local is probably the one thing that we all can do. Reminds me of "keep your identity/ego small"

    6. how they spiraled into their current belief system

      People like to believe transgressive lies. It feels good to support an overturning of an oppressive system, whether that's doctors or school administrators or the government.

    7. the only laypeople I’ve seen doing experiments are the conspiracy theorists featured in Netflix’s documentary about Flat Earthers.

      There's a whole citizen science movement, and there are biohacker spaces all over the world.

    8. One day, we’ll learn to see through this sort of corporate emotional manipulation.

      Optimistic, but on what basis?

    9. data voids

      @sarahdoingthing gave a great presentation at Refactor Camp on "how to see voids". No link, but maybe video will show up here at some point: https://www.refactorcamp.com/

    10. Inter-mingle #3: Post-truth reading 1/2

    1. the reasoning behind the pork taboo

      I heard from a rabbi, who was teaching us about kosher preparation in culinary school, that the reason was that pork tended to harbor more parasites.

    1. We normally start at the home vertex, and by choosing between the be somebody and do something paths, we make our way out into the world, heading towards either the public or frontier vertices

      Out into the world is into the public, where things become a thing, or to the frontier, where you may find a there. I reckon people born at the frontier nexus would have to head public, despite hating it, because there's no such thing as home for them.

    2. Why I'm reading this: because I'm going to Refactor Camp & I figured I should be up on Ribbonfarm stuff.

    1. There’s nothing there, we say, when we dismiss subcultures of being while located in subcultures of doing. It’s a thing, tastemakers insist, in resisting the dismissals of thereness-seeking doers.

      Central point

    1. install Hypothesis in Brave.

      When I visit that link, the install button says install for Desktop, and it doesn't show up in Brave for Android.

    1. And I feel like now the discussion here in the US is shifting, shifting away from the idea that bigger is always better, shifting away from the idea that the big tech platforms are a resounding success. We can talk about this. So I think that there is a bigger appreciation here now that people are more than just consumers. And rights can be articulated in a different way.

      these seem to be the central points

    2. If we don't step up together to build a governance model based on democratic values, then I think it will only give more space to the authoritarian models and profit driven models that are also fragmenting the open Internet, and really not putting people and the public interest first.

      China ain't gonna work with us, and if we hobble FB and Twitter while Weibo/WeChat keep growing, we're just ceding ground to the authoritarian models.

    3. So really, the sovereignty model is getting stronger and stronger.

      I think she means state-sovereignty here, which probably works against disciplinary community self-regulation.

    4. I think we're really going from a moment of huge promise of the open Internet, the promise that democracy would go viral, that individuals would be immensely empowered, to much more of an understanding of the practice.

      She expressed a general sentiment that platforms will need more regulation to address the places where the reality turned out darker than the vision

    5. CERN, CERN is, in many ways a good place, because we have almost the entire community together, coherently active. Other scientists are perhaps more fragmented, so we can cohere in solutions in ways that perhaps other communities can't

      So the answer to "how to incentivize the adoption of open tools" is... to focus on small groups, self-regulating commons, and not try to do anything in a centralized way. It's self-regulating that's the hard part, because almost all research assessment uses the same structure, which means they're not really self-regulating. They're affected by the broader research assessment norms - https://hyp.is/v1Q2bIY8EemOKE-i0Z8M7g/otter.ai/note/22D2N3FNIVAHRFSC?f=/folder/18838&h=Conferences

    6. And so it's not just technology that changes; the incentive systems built around academia, tenure, promotion, things like that favor certain practices. So all of those have to change as well.

      So it's a social problem, not a technology problem, which means it's not surprising that social science and psychology are leading the way here

    7. what are the institutional features that made it possible to launch such a project?

      It's hard to understand even in the audio, but the question was basically about incentives. How did you manage to build this and make it work? His answer is basically that they had everybody together under one roof initially, which kinda points away from decentralization and towards a knowledge commons in the Ostrom sense - one with barriers and sanctions for participants who break rules.

    8. So this really will be, if we do it right, a decentralized new infrastructure.

      Decentralization was a major theme of the conference.

    9. So things like the Journal of Open Source Software have been created, which is an overlay journal on top of these repositories where you can actually do peer review on this software, to the ReScience journal, which explicitly encourages open source implementations of really useful algorithms and pieces of code and allows you to publish it, again, something that wouldn't be allowed in classic journals. Even more experimental, the Journal of Brief Ideas, where you can share ideas before you've even done the research yourself and somebody else can do it for you if you can express it in less than ??? words.

      All these things are still called journals, because they want researchers to actually use them.

    10. But as I said, CERN is about sharing all our knowledge, sharing all of our tools, so we also wanted the same, the same code in services that we've tried to offer the world as well. So we created something called Zenodo,

      frames Zenodo as the public version of the internal tools - the open data portal, the analysis preservation platform, the reusable analysis platform

    11. She was able to describe in just a paragraph, she was able to get the data, it's just one table.

      now we turn data tables into charts and graphs, and you often can't get the data back out. Atypon showed a neat system, based on some Authorea work, on making interactive figures where the mouse position showed the data points at that position, even in 3D.
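
      Not Atypon's actual system, of course - just a minimal matplotlib sketch of the general idea, with the mouse position revealing the nearest underlying data point (all data and names illustrative):

        import numpy as np
        import matplotlib.pyplot as plt

        # Illustrative data standing in for a published figure's underlying table.
        x = np.linspace(0, 10, 50)
        y = np.sin(x)

        fig, ax = plt.subplots()
        ax.plot(x, y, "o-")
        label = ax.annotate("", xy=(0, 0), xytext=(10, 10), textcoords="offset points")
        label.set_visible(False)

        def on_move(event):
            # Show the data point nearest to the current mouse position.
            if event.inaxes is ax and event.xdata is not None:
                i = int(np.argmin(np.abs(x - event.xdata)))
                label.xy = (x[i], y[i])
                label.set_text(f"x={x[i]:.2f}, y={y[i]:.2f}")
                label.set_visible(True)
                fig.canvas.draw_idle()

        fig.canvas.mpl_connect("motion_notify_event", on_move)
        plt.show()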

    12. So what we learned from the open source movement was that you must make things in a modular manner

      prevents people from building monolithic things that are open, but not reusable in practice

    13. What we realize is to do open science properly, you have to do it all the way along, you have to change all the processes, and capture the data, as it's being created, capture this software as it's being used. So we took the same software stack, reworked it a little bit and created something called the analysis preservation portal,

      basically a sort of ELN; still no capture of data directly from instruments, because in particle physics they have a few big instruments, not the many small instruments we deal with in biomed.

    14. Question any parts of that chain

      The idea of knowledge chains ties in nicely with my idea of provenance chains - capturing data at the source.

    15. it was the combined effort of the year of computer scientists, information scientists and physicists to actually make this work

      The CERN open data portal took huge amounts of effort to make useful.

    16. Sharing platforms are fantastic, for doing just that, for collaboration and sharing. But they have no guarantees built into them that they're going to be there forever

      Sets up the rationale for something like Zenodo

    17. Because to actually understand the research that's going on, you have to access the algorithms, the statistical methods, the data period comparisons that have gone into the analysis, to understand better you must get at the data and the code itself, the reconstructed data that's usable by other scientists. And if you want to really go and change things you must get at the raw data

      The real Article of the Future isn't an article at all.

    18. are forgetting or not being taught anymore the value of the scientific process behind the knowledge generation

      I think it's rather that there's an anti-authoritarian streak in people these days, for good and for bad. They won't take an authority's word for it & insist on making up their own minds, but unfortunately one heuristic some people have landed on is "what the recognized authority figure says is probably biased, so discount that", and that's how they fall into misinformation traps. Respect for authority isn't likely to come back into fashion, so what now?

    19. the goal of it was to, to share to share what we know, to share our knowledge.

      Tim Smith of CERN recounting the formation of the Web and how the mission of sharing research data inspired its design.

  4. May 2019
    1. Faculty salaries have not risen proportionally to these tuition increases

      Collegiate head coaches are among the most highly compensated public employees. They often make much more than the university president.

    2. Would you rather have a Princeton diploma without a Princeton education, or a Princeton education without a Princeton diploma? If you pause to answer, you must think signaling is pretty important.”

      What about a Princeton network without a diploma? Education + network is the main value. A network is exactly what Coursera et al failed to deliver.

    3. Emerging forms of accreditation will reduce the value of college as a signaling tool, and students will be increasingly uneasy about the cost and time required to receive a diploma.

      The fact that it costs a lot in time and money is what makes it such an effective signal! New forms of accreditation do exist, but the strength of the signal is proportional to the time & cost. You don't get 20000 points on Stack Overflow or a serious Github contribution graph overnight.

    4. As colleges lost their monopoly on information, college became less about learning and more about signaling

      He was just arguing above that college served a primarily signaling role, de-risking hiring for companies, so the correct view seems to be that it's always been about signaling.

    5. Information is easier to access today, by an order of magnitude

      Obligatory note that information isn't knowledge. A proper measure would be how much people know relative to how much they need to know, or perhaps the knowledge distribution, which will of course be more skewed in a fragmented society.

    6. Libraries were rare and expensive. They were mostly located in cities.

      He seems to be confusing academic and public libraries here.

    7. tied their curriculums to the needs of cozy corporate partners

      Citation needed! I don't think this happened much except in technical fields like chemistry, software engineering, and physics. Maybe it's more common in business?

    8. Since knowledge could only be accessed at top-tier universities, strong network effects emerged. Two universities, Harvard and Yale, produced 12 of America’s 44 presidents

      Yes, network effects are important for knowledge, but the presidential alumni network is a different kind of network.

    9. Since professors couldn’t record or distribute their lectures, students had to witness them first-hand.

      We could record & distribute things before the Internet. It was more expensive, but there would have been a market for Ivy League lectures. The overall point about scarcity holds, but it wasn't the medium, it was the university, enforcing it.

    10. The long term matters more than it used to.

      Definitely a good thing

    11. The internet is better on every dimension: cost, convenience, depth, speed, personalization

      Interesting that there's no mention of quality or trustworthiness, especially given the brand discussion above.

    12. Reading the newspaper was my favorite ritual. But now, my daily sports entertainment comes from internet bloggers who tweet in their underwear.

      It's not daily, it's minutely.

    13. Television equalized culture.

      Dan Rather on the evening news, the distance between political parties, and social cohesion

    14. When information is scarce and asymmetric, consumers flock to trusted brands. But in many parts of the economy, when consumers have reviews at their fingertips, they no longer defer to brands when they make a purchasing decision.

      If I'm buying a cheap functional item, I definitely look for something with good reviews, but if I'm buying a power tool, for example, I do still tend to look for a trusted brand.

  5. Feb 2019
    1. But there’s no reason that Google and Facebook shouldn’t be accepting deposits, facilitating payments, making loans, managing assets, running quantitative investment funds.

      Except that there's a hesitation among tech firms to enter heavily regulated industries.

    2. I think it could be a big mistake to have the population at large play around with algorithms.

      Interesting that a trader, the person who'd most likely be on the winning side of inexperienced people playing with algorithmic finance, would be hesitant to release it on the world at large.

    3. When, not if

    1. it does not operate by a thoughtful consideration of local/global tradeoffs, but through the imposition of a singular view as “best for all” in a pseudo-scientific sense

      Similar to how some would "fix" economic inefficiencies with a legible imposed system, instead of just letting the market work.

    2. Shannon entropy

      information content
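
      Concretely, the quantity being glossed here is H(X) = -Σ p(x) log2 p(x), in bits. A minimal Python sketch of the standard formula:

        import math

        def shannon_entropy(probs):
            # H = -sum(p * log2(p)) over nonzero probabilities, in bits.
            return -sum(p * math.log2(p) for p in probs if p > 0)

        print(shannon_entropy([0.5, 0.5]))    # a fair coin: 1.0 bit per toss
        print(shannon_entropy([0.99, 0.01]))  # a loaded coin: ~0.08 bits, far less informative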

    3. Kolmogorov-Chaitin complexity

      descriptive complexity

    4. This imposed simplification, in service of legibility to the state’s eye, makes the rich reality brittle

      Does legibility necessarily imply fragility? Might this just be complexity again?

    5. it arises from a flawed pattern of reasoning rather than values

      Both well-meaning populist states and right-wing dictatorships share the same failure mode.

    6. Or at least more legible to the all-seeing statist eye in the sky (many of the pictures in the book are literally aerial views) than to the local, embedded, eye on the ground.

      One of Ostrom's insights about common pool resources is that the community regulates itself better than distant governors do, because it has this local knowledge.

  6. Mar 2018
  7. Jun 2017
    1. articles

      Very dated perspective. You mostly get back data & software, only some of which is described in an accompanying narrative aka article.

    2. He always said we don’t compete on sales, we compete on authors,

      A key insight which led to the author-centric focus of publishers today.

    3. doing things that they need that they either cannot, or do not do on their own

      Researchers write & review the papers, write and review the grants, apply for tenure & sit on tenure assessment committees. The thing they don't do is develop the platforms which facilitate the above. Why shouldn't a company which hires software developers to build the platforms be able to charge a fair price for doing so?

    4. It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free

      Except journalists are reporting on what other people have done. They're not doing the things upon which they're reporting.

    5. then buys most of the published product”.

      Except researchers have fought for years to get people to understand & value the contributions they make in code & data, not just in publications, so the whole "paying twice" thing just doesn't make sense, because the article isn't the work - it's the report of the work.

    6. 36% margin

      According to the 2017 annual report, the profit margin is currently 30%.

    7. Claudio Aspesi, a senior investment analyst at Bernstein Research in London,

      This report is still discussed within Elsevier. It seems to have had a large impact on the company's shift in direction to services.

    1. Furthermore, the JIF – in its normalized variant – seems to differentiate more or less successfully between promising and uninteresting candidates not only in the short term, but also in the long term.

      Except that the effect sizes are too small for them to be credible in the absence of pre-registration of this hypothesis.

    2. The publication data are independent of each other.

      Which they clearly are not - a researcher at an elite institution will have a citation pattern more like another researcher at an elite institution, not like another researcher in the same field at a non-elite institution.

    3. 3,976 researchers who published their first paper in 1998 and at least one paper in 2012. One can expect that these researchers published more or less continuously over 15 years

      This is not something I would assume. This needs to be demonstrated.

    4. Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier).

      Seems like a list which is automatically populated and then edited by a researcher would be better than one manually created. I don't think there's any factual basis for this claim.

  8. Jun 2016
    1. a collection of Wikipedias

      FWIW, PLOS tried this with PLOS Currents. It didn't get much traction, but I think there were some good use cases around rapid communications for disease outbreaks.

    2. dynamic documents

      A group of experts got together last year at Dagstuhl and wrote a white paper about this.

      Basically the idea is that the data, the code, the protocol/analysis/method, and the narrative should all exist as equal objects on the appropriate platform: code in a code repository like Github, data in a data repo that understands data formats, like Mendeley Data (my company) and Figshare, protocols somewhere like protocols.io, and the narrative which ties it all together still at the publisher. Discussion and review can take the form of comments, or even better, annotations just like I'm doing now.
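
      A minimal sketch of how those four equal objects might be tied together in a machine-readable way (all names and URLs hypothetical, Python used purely for illustration):

        # Hypothetical manifest treating the research objects as equal, linked peers.
        research_compendium = {
            "narrative": "https://doi.org/10.xxxx/example-article",  # at the publisher
            "data": "https://doi.org/10.xxxx/example-dataset",       # data repo that understands formats
            "code": "https://github.com/example-lab/analysis",       # code repository
            "protocol": "https://www.protocols.io/view/example",     # protocol record
        }

        # Review and discussion can then attach to any object, e.g. as annotations.
        for role, url in research_compendium.items():
            print(f"{role}: {url}")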

    3. static historical museum snapshots

      Part of this is because the people, besides publishers, most involved in discussions of publishing formats are librarians, who have preservation on their minds. If your job is to curate and preserve the world's knowledge, you have to think very carefully about what needs to be kept. Preservation is a surprisingly tricky subject when you get into the details of what constitutes a new version, at what level you do preservation - bit level, file level, text level, etc.

    4. the role of journal editor as human traffic cop would largely fade away

      Yes, copyediting and managing review and such are valuable, and to some extent either already outsourced or replaceable by technology. However, getting your paper in front of the right people who need to read it still both requires a talented human in the loop and command of a large audience, which no one but the publishers can yet match.

    5. sites will host a PDF for free

      Of course, as the author returns to below, publishing is much more than just hosting a PDF online somewhere. Knowing that the right people will read what you publish is still worth quite a bit, and publishers command the largest audience. That's hard to replicate!

    6. many publishers still cannot include figures beyond 1MB

      Fair point, but this is changing. Have you checked out the submission flow at Heliyon?

    1. what the market will bear

      So the forces which keep this from being actually a market are in part due to the ability of publishers to charge monopoly rent for their unique content. That's an intrinsic part of the content business, though, and not unique to academic publishing. Take a look at the monopolistic practices engaged in by the must-have producers in the wine industry, for example. Prices are absolutely not just a multiple of costs there!

      However, we're rapidly approaching a place where just plain old access and branding isn't good enough of a differentiator anymore. Even the big guys are re-branding as information service providers and services are by nature more competitive than unique content. This is where market dynamics will come into play. OLH providing a publishing service is a welcome part of this, but if OLH chains itself to consortia, they're going to introduce longer development times, be forced to have smaller teams and pay lower salaries, and will be hard pressed to compete against the companies that engage their users in a rapid iterative product-market fit cycle.

    2. a good solution

      Doesn't this just change who has the monopoly, rather than actively stimulate a market? What company would want to enter a market where they couldn't set their prices according to the value they think they can provide, but rather had to negotiate with various consortia for what they'd be allowed to charge?

    3. we have a price point

      Specifically, you have a price/value ratio, not just a price.

    4. proportionate to the cost of the activities

      Maybe I misunderstand, but I think there's something fundamentally wrong with the economic situation proposed here. I think the price should be proportional to the value delivered to the end consumer.

      There are two fundamental ways the price can be set:

      • The publishers tell the consortium that their costs are X and the consortium pays that.
      • The publishers tell their many customers what they would like to be paid for their product and some customers pay that and some don't.

      In the "single payer"-ish scenario, different publishers will have different costs. The consortium will negotiate with each publisher and may decide that some higher costs are OK for one publisher but not another. In order to raise salaries, do capital investment in new technology, or just about any other significant increase in cost, the publisher has to get approval from the consortium for this. There will undoubtedly be some cases where a publisher wants to do something for one community but the consortium doesn't want to pay for it because it doesn't help enough people. Researchers are forced to lobby the consortium to allow publishers to do what they want. Since the only way more money gets into a publisher's hands is via the consortium, they're in a position to make deals with publishers along the lines of "If you want to increase your salaries this year, we'll allow your costs to increase, but only if you do X." New publishers that want to do things a different way are at a disadvantage because everyone who wants to get paid has to do what the consortia wants. Not exactly a recipe for innovation!

      In the other scenario, publishers are free to make their own decisions about how to run their businesses and which new products to launch, etc. They listen to what the end users wants and profit to the degree that their product is something the end users value. Existing large organizations do have the power to buy out or suppress new companies, so the market isn't perfectly functional here, but surely it's better than having no market at all?

      A profit margin for a commercial company is essentially the same thing as an operational surplus for a non-profit, right?

    5. sustainability surplus

      So they are going to periodically ask the consortia members to just give them a little more for nothing in the name of operational safety? Seems politically fraught!

    6. but without article processing charges.

      I think you may be trying to make too fine a point here. Pretty much everyone understands Gold OA, insofar as they understand a difference at all, to be the kind of OA where you pay an APC and get an immediately shareable and open article.

    7. not-for-profit basis

      You see no place for commercial interests in publishing, at all, or just in publishing of content in your field?

    8. this diverse set of goals

      This doesn't sound like a problem to me. Individual research communities should be able to adopt an OA style that works for them. I've never thought it realistic that the sciences and humanities will pick the same kind of OA, it's never been clear to me why it even needs to be the same kind, but [deity] do we waste a lot of breath and text talking about this.

  9. May 2016
    1. not a simple reflection of the underlying relationship with research quality

      because quality has never been measured at any point!

    2. citation-specific influences that are independent of quality,

      Mendeley readership is valuable in its own right, not only in how closely it correlates to citations.

    3. partial dependence

      Knowing a correlation exists is useful for knowing the relationship between the two measures. Neither reflect something so abstract and variable as "quality".

    4. his issue could be circumvented by replacing “quality” in the above discussion and methods by a term such as “citability”.

      Yes, please!

    5. a low correlation suggests that the new indicator predominantly reflects something other than scholarly quality

      or that the previous metric wasn't capturing that dimension of quality

    1. “The worst thing,” he says, “is that your science gets published just to be proven faulty or wrong soon after.”

      Everyone will be proven wrong - that's an intrinsic part of science.

      The worst thing would be for fraudulent results to be published because you didn't do a good job in your peer review. Less worse, but still bad, would be for irreplicable work to be published.

    2. novel and 'big'

      whether novelty and "bigness" are important for the journal. Many do not have this as a criterion, because the need for novelty is a primary driver of the file-drawer effect, where the certainty of knowledge is overestimated because negative or boring results aren't published.

    3. what extra tests they think are needed and why

      again, whether extra tests are needed. It should not be the default to ask for more work to be done!

    4. questionable peer-review

      If an editor is contacting you as an expert to get a review of a manuscript, they're already mostly past the point of being a dodgy outfit. You should keep a copy of your review in case the paper comes out without addressing your review, though.

    5. questions

      I would suggest adding the following questions: "Is the data available in a suitable repository?" "Is the use of statistical methods appropriate?" "Did the authors pre-register their hypothesis, experimental plan, and analysis to prevent multiple testing bias or selective reporting of data?"

    6. Describe any extra experimentation or data analysis needed to warrant publication.

      or alternatively, suggest claims be scaled back. Too frequently, I see "more work needs to be done" as the default position.

    7. Outline the novelty of the science and judge the significance

      or maybe just assess whether the data support the conclusions and dispense with the whole "novelty and significance" bit?

    8. they will be expected to suggest that the authors do more analysis or more experiments

      Calling for more experiments is probably the greatest barrier to speed of publication. Sometimes it makes sense, but sometimes it also makes sense to scale back any claims about generalizability, for example, and publish as is.

    9. original enough to deserve publication

      I wouldn't call this a "must", given that the most successful category of journal has abandoned importance as a barrier. Requiring originality is what held back replication studies for so long, as we now see to our detriment.

    10. Graduate students generally are not recognized for their ability to conduct independent peer review unless

      It's true they're not recognized, but in my experience, a large share of the reviews credited to PIs are really done by their grad students.

    1. metrics

      In the #altmetrics community, we talk a lot about the difference between measures and indicators. Measures imply units and scales, whereas indicators are just possibly relevant bits of data. It's good that policymakers are looking to data as a decision support resource, though we do have to be careful that the data are used with the proper care and attention. So, too early to say misguided, I think.

    2. highest-level lobbying

      A meeting to discuss a tender is a normal business practice, so this would appear to be part of the normal sales process, not "high-level lobbying".

    3. Elsevier continues its march into data analytics at a pace that should terrify anyone on the ground in HE

      Policy makers and administrations have used Scival for years as a decision support resource. As discussed on Twitter, there is an open standard for these metrics: http://www.snowballmetrics.com/

      The data are useful to help support decisions about HE policy, though they are more useful in STEM than in the humanities, partly due to the lack of identifiers and comprehensive indexing of outputs in the humanities.

      Hopefully that makes it a little less terrifying.

  10. Oct 2014
    1. Sharing

      This is an annotation of "Sharing" that links to the annotation of the annotation.