68 Matching Annotations
  1. May 2019
    1. Faculty salaries have not risen proportionally to these tuition increases

      Collegiate head coaches are among the most highly compensated public employees. They often make much more than the university president.

    2. Would you rather have a Princeton diploma without a Princeton education, or a Princeton education without a Princeton diploma? If you pause to answer, you must think signaling is pretty important.

      What about a Princeton network without a diploma? Education + network is the main value. A network is exactly what Coursera et al. failed to deliver.

    3. Emerging forms of accreditation will reduce the value of college as a signaling tool, and students will be increasingly uneasy about the cost and time required to receive a diploma.

      The fact that it costs a lot in time and money is what makes it such an effective signal! New forms of accreditation do exist, but the strength of the signal is proportional to the time & cost. You don't get 20,000 points on Stack Overflow or a serious GitHub contribution graph overnight.

    4. As colleges lost their monopoly on information, college became less about learning and more about signaling

      He was just arguing above that college served a primarily signaling role, de-risking hiring for companies, so the correct view seems to be that they've always been about signaling.

    5. Information is easier to access today, by an order of magnitude

      Obligatory note that information isn't knowledge. A proper measure would be how much people know relative to how much they need to know, or perhaps the knowledge distribution, which will of course be more skewed in a fragmented society.

    6. Libraries were rare and expensive. They were mostly located in cities.

      He seems to be confusing academic and public libraries here.

    7. tied their curriculums to the needs of cozy corporate partners

      Citation needed! I don't think this happened much except in technical fields like chemistry, physics, and software engineering. Maybe it's more common in business?

    8. Since knowledge could only be accessed at top-tier universities, strong network effects emerged. Two universities, Harvard and Yale, produced 12 of America’s 44 presidents

      Yes, network effects are important for knowledge, but presidential alumni are a different kind of network.

    9. Since professors couldn’t record or distribute their lectures, students had to witness them first-hand.

      We could record & distribute things before the Internet. It was more expensive, but there would have been a market for Ivy League lectures. The overall point about scarcity holds, but it wasn't the medium enforcing it, it was the university.

    10. The long term matters more than it used to.

      Definitely a good thing

    11. The internet is better on every dimension: cost, convenience, depth, speed, personalization

      Interesting that there's no mention of quality or trustworthiness, especially given the brand discussion above.

    12. Reading the newspaper was my favorite ritual. But now, my daily sports entertainment comes from internet bloggers who tweet in their underwear.

      It's not daily, it's minutely.

    13. Television equalized culture.

      Dan Rather on the evening news, the distance between political parties, and social cohesion

    14. When information is scarce and asymmetric, consumers flock to trusted brands. But in many parts of the economy, when consumers have reviews at their fingertips, they no longer defer to brands when they make a purchasing decision.

      If I'm buying a cheap functional item, I definitely look for something with good reviews, but if I'm buying a power tool, for example, I do still tend to look for a trusted brand.

  2. Feb 2019
    1. But there’s no reason that Google and Facebook shouldn’t be accepting deposits, facilitating payments, making loans, managing assets, running quantitative investment funds.

      Except that there's a hesitation among tech firms to enter heavily regulated industries.

    2. I think it could be a big mistake to have the population at large play around with algorithms.

      Interesting that a trader, the person who'd most likely be on the winning side of inexperienced people playing with algorithmic finance, would be hesitant to unleash it on the world at large.

    3. When, not if

    1. it does not operate by a thoughtful consideration of local/global tradeoffs, but through the imposition of a singular view as “best for all” in a pseudo-scientific sense

      Similar to how some would "fix" economic inefficiencies with a legible imposed system, instead of just letting the market work.

    2. Shannon entropy

      information content
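
      For reference, the standard definition, where $X$ is a discrete random variable with outcome probabilities $p(x_i)$:

      $$H(X) = -\sum_i p(x_i)\,\log_2 p(x_i)$$

      i.e., the expected number of bits needed to encode one draw from $X$.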

    3. Kolmogorov-Chaitin complexity

      descriptive complexity
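
      Likewise, the standard definition: for a fixed universal machine $U$,

      $$K(x) = \min\{\, |p| : U(p) = x \,\}$$

      the length of the shortest program that outputs $x$. Unlike Shannon entropy, it is defined for an individual object rather than a distribution, and it is uncomputable in general.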

    4. This  imposed simplification, in service of legibility to the state’s eye, makes the rich reality brittle

      Does legibility necessarily imply fragility?

    5. it arises from a flawed pattern of reasoning rather than values

      Both well-meaning populist states and right-wing dictatorships share the same failure mode.

    6. Or at least more legible to the all-seeing statist eye in the sky (many of the pictures in the book are literally aerial views) than to the local, embedded, eye on the ground.

      One of Ostrom's insights about common pool resources is that communities regulate themselves better than distant governors do, because they have this local knowledge.

  3. Mar 2018
  4. Jun 2017
    1. articles

      Very dated perspective. You mostly get back data & software, only some of which is described in an accompanying narrative, a.k.a. an article.

    2. He always said we don’t compete on sales, we compete on authors,

      A key insight which led to the author-centric focus of publishers today.

    3. doing things that they need that they either cannot, or do not do on their own

      Researchers write and review the papers, write and review the grants, apply for tenure and sit on tenure assessment committees. The thing they don't do is develop the platforms which facilitate all of the above. Why shouldn't a company which hires software developers to build those platforms be able to charge a fair price for doing so?

    4. It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free

      Except journalists are reporting on what other people have done. They're not doing the things upon which they're reporting.

    5. then buys most of the published product”.

      Except researchers have fought for years to get people to understand & value the contributions they make in code & data, not just in publications, so the whole "paying twice" thing just doesn't make sense, because the article isn't the work - it's the report of the work.

    6. 36% margin

      According to the 2017 annual report, the margin is currently 30%.

    7. Claudio Aspesi, a senior investment analyst at Bernstein Research in London,

      This report is still discussed within Elsevier. It seems to have had a large impact on the company's shift in direction to services.

    1. Furthermore, the JIF – in its normalized variant – seems to differentiate more or less successfully between promising and uninteresting candidates not only in the short term, but also in the long term.

      Except that the effect sizes are too small for them to be credible in the absence of pre-registration of this hypothesis.

    2. The publication data are independent of each other.

      Which they clearly are not - a researcher at an elite institution will have a citation pattern more like that of another researcher at an elite institution than like that of a researcher in the same field at a non-elite institution.
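
      To make the objection concrete, here's a toy simulation (my own sketch with invented numbers, not anything from the paper) of how institutional clustering shrinks the effective sample size:

      ```python
      import random
      import statistics

      # Toy model: each institution contributes a shared effect to its
      # researchers' citation counts, so observations cluster by
      # institution instead of being independent.
      random.seed(42)

      n_inst, per_inst = 20, 10
      groups = []
      for _ in range(n_inst):
          inst_effect = random.gauss(0, 5)  # shared institutional effect
          groups.append([20 + inst_effect + random.gauss(0, 2)
                         for _ in range(per_inst)])

      within = statistics.mean(statistics.pvariance(g) for g in groups)
      between = statistics.pvariance([statistics.mean(g) for g in groups])
      icc = between / (between + within)    # intraclass correlation
      deff = 1 + (per_inst - 1) * icc       # design effect
      print(f"ICC ~ {icc:.2f}; effective n ~ {n_inst * per_inst / deff:.0f} "
            f"of {n_inst * per_inst}")
      ```

      With numbers like these, 200 "independent" researchers carry the statistical weight of only a few dozen, which is exactly why the independence assumption matters.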

    3. 3,976 researchers who published their first paper in 1998 and at least one paper in 2012. One can expect that these researchers published more or less continuously over 15 years

      This is not something I would assume. This needs to be demonstrated.

    4. Although it cannot be taken for granted that the publication lists on RID are error-free, these lists will probably be more reliable than the automatically generated lists (by Elsevier).

      Seems like a list which is automatically populated and then edited by a researcher would be better than one created manually. I don't think there's any factual basis for this claim.

  5. Jun 2016
    1. a collection of Wikipedias

      FWIW, PLOS tried this with PLOS Currents. It didn't get much traction, but I think there were some good use cases around rapid communications for disease outbreaks.

    2. dynamic documents

      A group of experts got together last year at Dagstuhl and wrote a white paper about this.

      Basically, the idea is that the data, the code, the protocol/analysis/method, and the narrative should all exist as equal objects on the appropriate platform: code in a code repository like GitHub, data in a data repo that understands data formats, like Mendeley Data (my company) or Figshare, protocols somewhere like protocols.io, and the narrative which ties it all together still at the publisher. Discussion and review can take the form of comments, or even better, annotations just like the ones I'm writing now.
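
      A minimal sketch of what the glue could look like, assuming a hypothetical manifest format (all URLs and names below are invented for illustration):

      ```python
      # Hypothetical "research compendium" manifest: each research object
      # lives on the platform best suited to it, and the narrative just
      # links them together by role.
      compendium = {
          "narrative": "https://doi.org/10.xxxx/example-article",
          "code": "https://github.com/example-lab/analysis",
          "data": "https://data.mendeley.com/datasets/example/1",
          "protocol": "https://www.protocols.io/view/example-protocol",
      }

      def resolve(role: str) -> str:
          """Look up a research object by its role, not its file format."""
          return compendium[role]

      print(resolve("data"))  # -> the dataset's landing page
      ```

      The point of the sketch is that review and discussion can then attach to any of these objects individually, not just to the narrative.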

    3. static historical museum snapshots

      Part of this is because the people, besides publishers, most involved in discussions of publishing formats are librarians, who have preservation on their mind. If your job is to curate and preserve the world's knowledge, you have to think very carefully about what needs to be kept. Preservation is a surprisingly tricky subject when you get into the details of what constitutes a new version and at what level you do preservation - bit level, file level, text level, etc.

    4. the role of journal editor as human traffic cop would largely fade away

      Yes, copyediting and managing review and such are valuable, and to some extent either already outsourced or replaceable by technology. However, getting your paper in front of the right people who need to read it still requires both a talented human in the loop and command of a large audience, which no one but the publishers can yet match.

    5. sites will host a PDF for free

      Of course, as the author returns to below, publishing is much more than just hosting a PDF online somewhere. Knowing that the right people will read what you publish is still worth quite a bit, and publishers command the largest audience. That's hard to replicate!

    6. many publishers still cannot include figures beyond 1MB

      Fair point, but this is changing. Have you checked out the submission flow at Heliyon?

    1. what the market will bear

      So the forces which keep this from being actually a market are in part due to the ability of publishers to charge monopoly rent for their unique content. That's an intrinsic part of the content business, though, and not unique to academic publishing. Take a look at the monopolistic practices engaged in by the must-have producers in the wine industry, for example. Prices are absolutely not just a multiple of costs there!

      However, we're rapidly approaching a place where plain old access and branding aren't a good enough differentiator anymore. Even the big guys are re-branding as information service providers, and services are by nature more competitive than unique content. This is where market dynamics will come into play. OLH providing a publishing service is a welcome part of this, but if OLH chains itself to consortia, it will have longer development times, be forced to have smaller teams and pay lower salaries, and be hard pressed to compete against the companies that engage their users in a rapid, iterative product-market fit cycle.

    2. a good solution

      Doesn't this just change who has the monopoly, rather than actively stimulate a market? What company would want to enter a market where they couldn't set their prices according to the value they think they can provide, but rather had to negotiate with various consortia for what they'd be allowed to charge?

    3. we have a price point

      Specifically, you have a price/value ratio, not just a price.

    4. proportionate to the cost of the activities

      Maybe I misunderstand, but I think there's something fundamentally wrong with the economic situation proposed here. I think the price should be proportional to the value delivered to the end consumer.

      There are two fundamental ways the price can be set:

      • The publishers tell the consortium that their costs are X and the consortium pays that.
      • The publishers tell their many customers what they would like to be paid for their product and some customers pay that and some don't.

      In the "single payer"-ish scenario, different publishers will have different costs. The consortium will negotiate with each publisher and may decide that some higher costs are OK for one publisher but not another. In order to raise salaries, make capital investments in new technology, or take on just about any other significant increase in cost, the publisher has to get approval from the consortium. There will undoubtedly be some cases where a publisher wants to do something for one community but the consortium doesn't want to pay for it because it doesn't help enough people, so researchers are forced to lobby the consortium to allow publishers to do what they want.

      Since the only way more money gets into a publisher's hands is via the consortium, the consortium is in a position to make deals with publishers along the lines of "If you want to increase your salaries this year, we'll allow your costs to increase, but only if you do X." New publishers that want to do things a different way are at a disadvantage, because everyone who wants to get paid has to do what the consortia want. Not exactly a recipe for innovation!

      In the other scenario, publishers are free to make their own decisions about how to run their businesses, which new products to launch, etc. They listen to what the end users want and profit to the degree that their product is something the end users value. Existing large organizations do have the power to buy out or suppress new companies, so the market isn't perfectly functional here, but surely it's better than having no market at all?

      A profit margin for a commercial company is essentially the same thing as an operational surplus for a non-profit, right?

    5. sustainability surplus

      So they are going to periodically ask the consortia members to just give them a little more for nothing in the name of operational safety? Seems politically fraught!

    6. but without article processing charges.

      I think you may be trying to make too fine a point here. Pretty much everyone understands Gold OA, insofar as they understand a difference at all, to be the kind of OA where you pay an APC and get an immediately shareable and open article.

    7. not-for-profit basis

      You see no place for commercial interests in publishing, at all, or just in publishing of content in your field?

    8. this diverse set of goals

      This doesn't sound like a problem to me. Individual research communities should be able to adopt an OA style that works for them. I've never thought it realistic that the sciences and humanities would pick the same kind of OA, and it's never been clear to me why it even needs to be the same kind, but [deity] do we waste a lot of breath and text talking about this.

  6. May 2016
    1. not a simple reflection of the underlying relationship with research quality

      because quality has never been measured at any point!

    2. citation-specific influences that are independent of quality,

      Mendeley readership is valuable in its own right, not only in how closely it correlates to citations.

    3. partial dependence

      Knowing a correlation exists is useful for knowing the relationship between the two measures. Neither reflects something so abstract and variable as "quality".

    4. This issue could be circumvented by replacing “quality” in the above discussion and methods by a term such as “citability”.

      Yes, please!

    5. a low correlation suggests that the new indicator predominantly reflects something other than scholarly quality

      or that the previous metric wasn't capturing that dimension of quality

    1. “The worst thing,” he says, “is that your science gets published just to be proven faulty or wrong soon after.”

      Everyone will be proven wrong - that's an intrinsic part of science.

      The worst thing would be for fraudulent results to be published because you didn't do a good job in your peer review. Less worse, but still bad, would be for irreplicable work to be published.

    2. novel and 'big'

      whether novelty and "bigness" are important for the journal. Many do not have this as a criterion, because the need for novelty is a primary driver of the file-drawer effect, where the certainty of knowledge is overestimated because negative or boring results aren't published.

    3. what extra tests they think are needed and why

      again, whether extra tests are needed. It should not be the default to ask for more work to be done!

    4. questionable peer-review

      If an editor is contacting you as an expert to get a review of a manuscript, they're already mostly past the point of being a dodgy outfit. You should keep a copy of your review in case the paper comes out without addressing it, though.

    5. questions

      I would suggest adding the following questions: "Is the data available in a suitable repository?" "Is the use of statistical methods appropriate?" "Did the authors pre-register their hypothesis, experimental plan, and analysis to prevent multiple testing bias or selective reporting of data?"

    6. Describe any extra experimentation or data analysis needed to warrant publication.

      or alternatively, suggest claims be scaled back. Too frequently, I see "more work needs to be done" as the default position.

    7. Outline the novelty of the science and judge the significance

      or maybe just assess whether the data support the conclusions and dispense with the whole "novelty and significance" bit?

    8. they will be expected to suggest that the authors do more analysis or more experiments

      Calling for more experiments is probably the greatest barrier to speed of publication. Sometimes it makes sense, but sometimes it also makes sense to scale back any claims about generalizability, for example, and publish as is.

    9. original enough to deserve publication

      I wouldn't call this a "must", given that the most successful category of journal has abandoned importance as a barrier. Requiring originality is what held back replication studies for so long, as we now see to our detriment.

    10. Graduate students generally are not recognized for their ability to conduct independent peer review unless

      It's true they're not recognized, but in my experience, a large share of the reviews nominally done by PIs are really done by their grad students.

    1. metrics

      In the #altmetrics community, we talk a lot about the difference between measures and indicators. Measures imply units and scales, whereas indicators are just possibly relevant bits of data. It's good that policymakers are looking to data as a decision support resource, though we do have to be careful that the data are used with the proper care and attention. So it's too early to say this is misguided, I think.

    2. highest-level lobbying

      A meeting to discuss a tender is a normal business practice, so this would appear to be part of the normal sales process, not "high-level lobbying".

    3. Elsevier continues its march into data analytics at a pace that should terrify anyone on the ground in HE

      Policy makers and administrators have used SciVal for years as a decision support resource. As discussed on Twitter, there is an open standard for these metrics: http://www.snowballmetrics.com/

      The data are useful to help support decisions about HE policy, though they are more useful in STEM than in the humanities, partly due to the lack of identifiers and comprehensive indexing of outputs in the humanities.

      Hopefully that makes it a little less terrifying.

  7. Oct 2014
    1. Sharing

      This is an annotation of "Sharing" that links to the annotation of the annotation.