429 Matching Annotations
  1. Dec 2019
    1. Supplementary data

      Of special interest is that a reviewer openly discussed, in a blog post, his general thoughts about the state of the art in the field based on what he had been looking at in the paper. The post came out just after he completed his first-round review, and before an editorial decision was made.

      http://ivory.idyll.org/blog/thoughts-on-assemblathon-2.html

      This spawned additional blog posts that broadened the discussion within the community, again looking toward the future. See: https://www.homolog.us/blogs/genome/2013/02/23/titus-browns-thoughts-on-the-assemblathon-2-paper/

      And

      https://flxlexblog.wordpress.com/2013/02/26/on-assembly-uncertainty-inspired-by-the-assemblathon2-debate/

      Further, the authors, then in the process of revising their manuscript, joined in on Twitter, reaching out to the community at large for suggestions on revisions and additional thoughts. Their paper had been posted on arXiv, allowing for this type of commenting and author/reader interaction. See: https://arxiv.org/abs/1301.5406

      The Assemblathon.org site collected and presented all the information on the discussion surrounding this article. https://assemblathon.org/page/2

      A blog post by the editors followed all this, describing this ultra-open peer review and highlighting how these discussions during the peer review process ended up being a very forward-looking conversation about the state of the field, based on what the reviewers were seeing in this paper, and about the directions the community should now focus on. This broader open discussion, and its very positive nature, could only happen in an open, transparent review process. See: https://blogs.biomedcentral.com/bmcblog/2013/07/23/ultra-open-peer-review/

  2. Nov 2019
    1. Considerable obstacles remain, however, before the genetic therapy can be tested on human heart attack patients. Most of the treated pigs died after the treatment because the microRNA-199 continued to be expressed in an uncontrolled way.

      My imagination is running wild, but not in a good way. 😞

    1. Show HN: Deskulu – Opensource knowledgebase and ticketing system

      Built on Drupal

      Targeted as a helpdesk and ticketing system, although it calls itself a knowledge base

    1. Raneto

      A knowledge base

      Stack: node

      Search: form-based, string matching. No autocomplete, no NLP (it does not accept natural-language questions).

      Adding: create a new file.

  3. www.mkdocs.org
    1. MkDocs is a fast, simple and downright gorgeous static site generator that's geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.

      Static site - no server needed

      Search: autocomplete via in-browser lunr

      Conclusion: I'd much rather use more established static site generators like hugo or jekyll.

    1. Betsy Ross Vegan "beef", Vegan Cheese, Onions. Add Shrooms and bells for One Dollar

      Mine is always with extra shrooms. :)

    1. Tea cites Chavisa Woods’s recent memoir of sexism 100 Times, Andrea Lawlor’s Paul Takes the Form of a Mortal Girl and Brontez Purnell’s Since I Laid My Burden Down as examples of books that have fearlessly and artfully tackled themes of power and gender relations, misogyny and sexual violence. “Right now, I think the [publishing] industry is responding to what is happening and saying: ‘Yes we really need these voices, we need these ideas out in the world.’

      So true!

      My review of Chavisa Woods's book is here.

  4. Oct 2019
    1. A Million Brains in the Cloud

      Arno Klein and Satrajit S. Ghosh published this research idea in 2016 and opened it up for review. In fact, you could review their abstract directly in RIO, but for the MOOC activity "open peer review" we want you to read and annotate their proposal using this Hypothes.is layer. You can add annotations by simply highlighting a section that you want to comment on, or add a page note and say in a few sentences what you think of their ideas. You can also reply to comments that your peers have already made. Please sign up for Hypothes.is and join the conversation!

    1. The Politics of Sustainability and Development

      This reading is to help you better understand the role and importance of the literature review. A literature review connects us to a bigger community of scientists who study the same research topic, and helps us build up, illustrate, and develop our theory (what is happening between the IV and the DV?) and research design (how one plans to answer the RQ).

    1. According to the McDonald's website, here are the ingredients in the French fries: Ingredients: Potatoes, Vegetable Oil (Canola Oil, Corn Oil, Soybean Oil, Hydrogenated Soybean Oil, Natural Beef Flavor [Wheat and Milk Derivatives]*), Dextrose, Sodium Acid Pyrophosphate (Maintain Color), Salt. *Natural beef flavor contains hydrolyzed wheat and hydrolyzed milk as starting ingredients. Here is the ingredient list for McDonald's hash browns, taken from their website: Ingredients: Potatoes, Vegetable Oil (Canola Oil, Soybean Oil, Hydrogenated Soybean Oil, Natural Beef Flavor [Wheat and Milk Derivatives]*), Salt, Corn Flour, Dehydrated Potato, Dextrose, Sodium Acid Pyrophosphate (Maintain Color), Extractives of Black Pepper. *Natural beef flavor contains hydrolyzed wheat and hydrolyzed milk as starting ingredients

      Elsewhere there is the claim that McDonald's fries are cooked in some amount of beef fat during processing. This warrants further vetting.

  5. Sep 2019
    1. Transparent Review in Preprints will allow journals and peer review services to show peer reviews next to the version of the manuscript that was submitted and reviewed.

      A subtle but important point here is that when the manuscript is a preprint then there are two public-facing documents that are being tied together-- the "published" article and the preprint. The review-as-annotation becomes the cross-member in that document association.

    1. “To me, this is symptomatic of a much larger problem of transparency within the company. Nobody is forthcoming with information that dramatically affects editorial,” Binkowski said. “One of those things was me not knowing if I was in trouble.”

      This firing of Snopes' managing editor reiterates previous concerns about an unhealthy working environment. The article is old enough that I'd like to see follow-ups.

    1. When I asked how many articles she’d written for the site, she came back with a “verified count” of 1,905. She told me how she came to that number: “By examining every Snopes.com HTML file on my computer, rereading every email David and I exchanged from 1997 until now, and in cases where doubt still existed, examining my research files. The task took a week, but I am satisfied I now have a fair list and that all lurking doubles (a result of David’s penchant for renaming files) have been excised.”

      Impressive. Her alleged painstaking data-hoarding makes me like her. I'm not sure what to think of David's ambiguity yet.

    1. I am writing this review for the Drummond and Sauer comment on Mathur and VanderWeele (2019). To note, I am familiar with the original meta-analyses considered (one of which I wrote), the Mathur and VanderWeele (henceforth MV2019) article, and I’ve read both Drummond and Sauer’s comment on MV2019 and Mathur’s review of Drummond and Sauer’s comment on MV2019 (hopefully that wasn’t confusing). On balance, I think Drummond and Sauer’s (henceforth DSComment) comment under review here is a very important contribution to this debate. I tended to find DSComment to be convincing and was comparatively less convinced by Mathur’s review or, indeed, MV2019. I hope my thoughts below are constructive.

      It’s worth noting that MV2019 suffered from several primary weaknesses. Namely:

      1. On one hand, it didn’t really tell us anything we didn’t already know, namely that near-zero effect sizes are common for meta-analyses in violent video game research.
      2. MV2019, aside from one brief statement as DSComment notes, neglected the well-known methodological issues that tend to spuriously increase effect sizes (unstandardized aggression measures, self-ratings of violent game content, identified QRPs in some studies such as the Singapore dataset, etc.). This resulted in a misuse of meta-analytic procedures.
      3. MV2019 naïvely interprets (as does Mathur’s review of DSComment) near-zero effect sizes as meaningful, despite numerous reasons not to do so given concerns of false positives.
      4. MV2019, for an ostensible compilation of meta-analyses, curiously neglects other meta-analyses, such as those by John Sherry or Furuyama-Kanamori & Doi (2016).

      At this juncture, publication bias, particularly for experimental studies, has been demonstrated pretty clearly (e.g. Hilgard et al., 2017). I have two comments here. MV2019 offered a novel and not well-tested alternative approach to bias (highlighted again by Mathur's review); however, I did not find the arguments convincing, as the approach appears extrapolative and produces results that simply aren't true. For instance, the argument that 100% of effect sizes in Anderson 2010 are above 0 is quickly falsified merely by looking at the reported effect sizes in the included studies, at least some of which are below .00. This would appear to clearly indicate some error in MV2019's procedure.

      Further, we don't need statistics to speculate about publication bias in Anderson et al. (2010), as there are actual specific examples of published null studies missed by Anderson et al. (see Ferguson & Kilburn, 2010). The publication of null studies in the years immediately following (e.g. von Salisch et al., 2011) also indicates that Anderson's search for unpublished studies was clearly biased (indeed, I had unpublished data at that time but was not asked by Anderson and colleagues for it). So there's no need at all for speculation, given that we have actual examples of missed studies, and a fair number of them.

      It might also help to highlight that traditional publication bias techniques are probably only effective with small-sample experimental studies. For large-sample correlational/longitudinal studies, effect sizes tend to be a bit more homogeneous, hovering close to zero. In such studies, the accumulation of p-values just below .05 that is typical of underpowered small studies is unlikely. Relatively simple QRPs can make p-values jump rapidly from non-significance to something well below .05. Thus, traditional publication bias procedures may return null results for this pool of studies despite QRPs, and thus publication bias, having taken place.

      It might also help to note that meta-analyses with weak effects are very fragile to unreported null studies, which probably exist in greater numbers (particularly for large-n studies) than would be indicated by publication bias techniques.

      I agree with Mathur's comment about experiments not always offering the best evidence, given the lack of generalizability to real-world aggression (indeed, that's been a long-standing concern). However, it might help DSComment to note that, by this point, the pool of evidence least likely to find effects is probably longitudinal studies. I've got two preregistered longitudinal analyses of existing datasets myself (here I want to make clear that citing my work is by no means necessary for my positive evaluation of any revisions of DSComment), and there are other fine studies (such as Lobel et al., 2017; Breuer et al., 2015; Kuhn et al., 2018; von Salisch et al., 2011). The authors may also want to note Przybylski and Weinstein (2019), which offers an excellent example of a preregistered correlational study.

      Indeed, in a larger sense, as far as evidence goes, DSComment could highlight recent preregistered evidence from multiple sources (McCarthy et al., 2016; Hilgard et al., 2019; Przybylski & Weinstein, 2019; Ferguson & Wang, 2019; etc.). This would seem to be the most crucial evidence and, aside from one excellent correlational study (Ivory et al.), all of the preregistered results have been null. Even if we think the tiny effect sizes in existing metas provide evidence in support of hypotheses (and we shouldn't), these preregistered studies suggest we shouldn't trust even those tiny effects to be "true."

      The weakest aspect of MV2019 was the decision to interpret near-zero effects as meaningful. Mathur argues that tiny effects can be important once spread over a population. However, this is merely speculation, and there's no data to support it. It's kind of a truthy thing scholars tend to say defensively when confronted by the possibility that effect sizes don't support their hypotheses. By making this argument, Mathur invites an examination of population data, where convincing evidence (Markey, Markey & French, 2015; Cunningham et al., 2016; Beerthuizen, Weijters & van der Laan, 2017) shows that violent game consumption is associated with reduced violence in society. Granted, some may express caution about looking at societal-level data, but here is where scholars can't have it both ways: one can't make claims about societal-level effects and then not want to look at the societal data. Such arguments make unfalsifiable claims and are unscientific in nature.

      The other issue is that this line of argument makes effect sizes irrelevant. If we’re going to interpret effect sizes no matter how near to zero as hypothesis supportive, so long as they are “statistically significant” (which, given the power of meta-analyses, they almost always are), then we needn’t bother reporting effect sizes at all. We’re still basically slaves to NHST, just using effect sizes as a kind of fig leaf for the naked bias of how we interpret weak results.

      Also, that’s just not how effect sizes work. They can’t be sprinkled like pixie dust over a population to make them meaningful.

      As DSComment points out, effect sizes that are this small have high potential for Type 1 error. Funder and Ozer (2019) recently contributed to this discussion in a way I think was less than helpful (to be very clear, I respect Funder and Ozer greatly, but disagree with many of their comments on this specific issue). Yet, as they note, interpretation of tiny effects depends on such effects being "reliable", a condition clearly not in evidence for violent game research given the now extensive documentation of systematic methodological flaws in that literature.

      In her comment Dr. Mathur dismisses the comparison with ESP research, but I disagree with (or dismiss?) this dismissal. The fact that effect sizes in meta-analyses for violent game research are identical to those for “magic” is exactly why we should be wary of interpreting such effect sizes as hypothesis supportive. Saying violent game effects are more plausible is irrelevant (and presumably the ESP people would disagree). However, the authors of DSComment might strengthen their argument by noting that some articles have begun examining nonsense outcomes within datasets. For example, in Ferguson and Wang (2019) we show that the (weak and in that case non-significant) effects for violent game playing are no different in predicting aggression than nonsense variables (indeed, the strongest effect was for the age at which one had moved to a new city). Orben and Przybylski (2019) do something similar and very effective with screen time. Point being, we have an expanding literature to suggest that the interpretation of such weak effects is likely to lead us to numerous false positive errors.

      The authors of DSComment might also note that MV2019 commits a fundamental error of meta-analysis, namely assuming that the "average effect size wins!" When effect sizes are heterogeneous (as Mathur appears to acknowledge, unless I misunderstood), the pooled average effect size is not a meaningful estimator of the population effect size. That's particularly true given GIGO (garbage in, garbage out). Where QRPs have been clearly demonstrated for some studies in this realm (see Przybylski & Weinstein, 2019 for some specific examples of documentation involving the Singapore dataset), the pooled average effect size, however it is calculated, is almost certainly a spuriously high estimate of true effects.

      DSComment could note that other issues such as citation bias are known to be associated with spuriously high effect sizes (Ferguson, 2015), another indication that researcher behaviors are likely pulling effect sizes above the actual population effect size.

      Overall, I don't think MV2019 were very familiar with this field, and they appear unaware of the serious methodological errors endemic in much of the literature which pull effect sizes spuriously high. In the end, they really didn't say anything we didn't already know (the effect sizes across metas tend to be near zero), and their interpretation of these near-zero effect sizes was incorrect.

      With that in mind, I do think DSComment is an important part of this debate and is well worth publishing. I hope my comments here are constructive.

      Signed, Chris Ferguson

    2. [This was a peer review for the journal "Meta-Psychology", and I am posting it via hypothes.is at the journal's suggestion.]

      I thank the authors for their response to our article. For full disclosure, I previously reviewed an earlier version of this manuscript. The present version of the manuscript shows improvement, but does not yet address several of my substantial concerns, each of which I believe should be thoroughly addressed if a revision is invited. My concerns are as follows:

      1.) The publication bias corrections still rely on incorrect statistical reasoning, and using more appropriate methods yields quite different conclusions.

      Regarding publication bias, the first analysis of the number of expected versus observed p-values between 0.01 and 0.05 that is presented on page 3 (i.e., "Thirty nine…should be approximately 4%") cannot be interpreted as a test of publication bias, as described in my previous review. The p-values would only be uniformly distributed if the null were true for every study in the meta-analysis. If the null does not hold for every study in the meta-analysis, then we would of course expect more than 4% of the p-values to fall in [0.01, 0.05], even in the absence of any publication bias. I appreciate that the authors have attempted to address this by additionally assessing the excess of marginal p-values under two non-null distributions. However, these analyses are still not statistically valid in this context; they assume that every study in the meta-analysis has exactly the same effect size (i.e., that there is no heterogeneity), which is clearly not the case in the present meta-analyses. Effect heterogeneity can substantially affect the distribution and skewness of p-values in a meta-analysis (see Johnson & Yuan, 2007). To clarify the second footnote on page 3, I did not suggest this particular analysis in my previous review, but rather described why the analysis assuming uniformly distributed p-values does not serve as a test of publication bias.
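      To make the point concrete, here is a minimal simulation sketch (illustrative only; the sample sizes and effect distribution are arbitrary and are not taken from any of the meta-analyses under discussion). Under a global null, p-values are uniform, so roughly 4% fall in [0.01, 0.05]; with heterogeneous non-null true effects, the share is well above 4% even when every study is published.

      ```python
      # Illustrative sketch (arbitrary parameters): share of two-sided p-values in
      # [0.01, 0.05] with and without true effects, and with no publication bias.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_studies, n_per_group = 20_000, 50

      def share_marginal(mean_d, sd_d):
          d = rng.normal(mean_d, sd_d, n_studies)                     # heterogeneous true effects
          x = rng.normal(0.0, 1.0, (n_studies, n_per_group))          # control groups
          y = rng.normal(d[:, None], 1.0, (n_studies, n_per_group))   # treatment groups
          p = stats.ttest_ind(y, x, axis=1).pvalue
          return np.mean((p >= 0.01) & (p <= 0.05))

      print(share_marginal(0.0, 0.0))  # global null: ~0.04 (uniform p-values)
      print(share_marginal(0.2, 0.1))  # heterogeneous true effects: well above 0.04
      ```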

      I would instead suggest conducting publication bias corrections using methods that accommodate heterogeneity and allow for a realistic distribution of effects across studies. We did so in the Supplement of our PPS piece (https://journals.sagepub.com/doi/suppl/10.1177/1745691619850104) using a maximum-likelihood selection model that accommodates normally-distributed, heterogeneous true effects and essentially models a discontinuous “jump” in the probability of publication at the alpha threshold of 0.05. These analyses did somewhat attenuate the meta-analyses’ pooled point estimates, but suggested similar conclusions to those presented in our main text. For example, the Anderson (2010) meta-analysis had a corrected point estimate among all studies of 0.14 [95% CI: 0.11, 0.16]. The discrepancy between our findings and Drummond & Sauer’s arises partly because the latter analysis focuses only on pooled point estimates arising from bias correction, not on the heterogeneous effect distribution, which is the very approach that we described as having led to the apparent “conflict” between the meta-analyses in the first place. Indeed, as we described in the Supplement, publication bias correction for the Anderson meta-analyses still yields an estimated 100%, 76%, and 10% of effect sizes above 0, 0.10, and 0.20 respectively. Again, this is because there is substantial heterogeneity. If a revision is invited, I would (still) want the present authors to carefully consider the issue of heterogeneity and its impact on scientific conclusions.
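      To illustrate the kind of calculation behind statements such as "100%, 76%, and 10% of effect sizes above 0, 0.10, and 0.20", here is a sketch under a normal model of heterogeneous true effects. The heterogeneity SD (tau) below is an illustrative value, not the estimate reported in the Supplement, so the printed proportions will not match the reported ones; the point is only that a small pooled mean combined with real heterogeneity still implies a sizeable share of true effects above small thresholds.

      ```python
      # Sketch: proportion of true effects above a threshold under a normal model of
      # heterogeneous effects. tau is illustrative, not the Supplement's estimate.
      from scipy.stats import norm

      pooled_estimate = 0.14   # corrected pooled point estimate quoted above
      tau = 0.08               # illustrative between-study SD of true effects

      for q in (0.0, 0.10, 0.20):
          prop = 1 - norm.cdf(q, loc=pooled_estimate, scale=tau)
          print(f"estimated proportion of true effects > {q:.2f}: {prop:.0%}")
      ```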

      2.) Experimental studies do not always yield higher-quality evidence than observational studies.

      Additionally, the authors focus only on the subset of experimental studies in Hilgard's analysis. Although I agree that "experimental studies are the best way to completely eliminate uncontrolled confounds", it is not at all clear that experimental lab studies provide the overall strongest evidence regarding violent video games and aggression. Typical randomized studies in the video game literature consist, for example, of exposing subjects to violent video games for 30 minutes, then immediately having them complete a lab outcome measure operationalizing aggression as the amount of hot sauce a subject chooses to place on another subject's food. It is unclear to what extent one-time exposures to video games and lab measures of "aggression" have predictive validity for real-world effects of naturalistic exposure to video games. In contrast, a well-conducted case-control study with appropriate confounding control, assessing violent video game exposure in subjects with demonstrated violent behavior versus those without, might in fact provide stronger evidence for societally relevant causal effects (e.g., Rothman et al., 2008).

      3.) Effect sizes are inherently contextual.

      Regarding the interpretation of small effect sizes, we did indeed state several times in our paper that the effect sizes are "almost always quite small". However, to universally dismiss effect sizes of less than d = 0.10 as less than "the smallest effect size of practical importance" is too hasty. Exposures, such as violent video games, that have very broad outreach can have substantial effects at the population level when aggregated across many individuals (VanderWeele et al., 2019). The authors are correct that small effect sizes are in general less robust to potential methodological biases than larger effect sizes, but to reiterate the actual claim we made in our manuscript: "Our claim is not that our re-analyses resolve these methodological problems but rather that widespread perceptions of conflict among the results of these meta-analyses—even when taken at face value without reconciling their substantial methodological differences—may in part be an artifact of statistical reporting practices in meta-analyses." Additionally, the comparison to effect sizes for psychic phenomena does not strike me as particularly damning for the violent video game literature. The prior plausibility that psychic phenomena exist is extremely low, as the authors themselves describe, and it is surely much lower than the prior plausibility that video games might increase aggressive behavior. Extraordinary claims require extraordinary evidence, so any given effect size for psychic phenomena is much less credible than for video games.

      Signed, Maya B. Mathur, Department of Epidemiology, Harvard University

      References

      Johnson, Valen, and Ying Yuan. "Comments on 'An exploratory test for an excess of significant findings' by J.P.A. Ioannidis and T.A. Trikalinos." Clinical Trials 4.3 (2007): 254.

      Rothman, K. J., Greenland, S., & Lash, T. L. (2008). Modern epidemiology (Vol. 3). Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins.

      VanderWeele, T. J., Mathur, M. B., & Chen, Y. (2019). Media portrayals and public health implications for suicide and other behaviors. JAMA Psychiatry.

    1. Introduction

      The introduction is a somewhat longer summary of the entire paper. This is where researchers describe and justify their research questions and briefly discuss what is to come. Typically, an introduction is about 500-1000 words.

      Please identify and highlight the research question(s).

  6. Aug 2019
  7. Jul 2019
      My post-publication peer review and re-analysis identify problems with the included temperature range and the linearity of effects.

  8. Jun 2019
    1. It didn’t matter if people called him selfish; Leonard looked out for himself, because ultimately, the fans never do.

      Connection: we shouldn't seek external approval, because that approval is unreliable.

      A corollary is that by not seeking it, we end up getting more external approval. We are more grounded.

    1. An Introduction to Variational Autoencoders

      [Overview] The variational autoencoder (VAE) is an important class of generative model. Like the generative adversarial network (GAN), a VAE can be used to generate realistic images and text, but the idea behind VAEs is very different from that of GANs. This post introduces a 93-page introduction to VAEs on arXiv, which contains extensive derivations and illustrations.

      In recent years, generative adversarial networks (GANs) have attracted a great deal of attention from researchers and engineers. Beyond GANs, however, the variational autoencoder (VAE) has also been one of the more prominent generative models of the past few years. Unlike the adversarial generator-versus-discriminator idea behind GANs, the core components of a VAE are an autoencoder and a KL-divergence constraint.

  9. Apr 2019
    1. Recent Advances in Open Set Recognition: A Survey

      A survey of anomaly detection.

  10. Mar 2019
    1. Evaluation of technology enhanced learning programs for health care professionals: systematic review

      This article is included because it is a systematic review, presented in academic language. The intention is to evaluate the quality of the articles themselves, not to guide e-learning development. Criteria for evaluating articles were established in advance. The utility of the article for my purposes may be a new search term: continuous professional development. Rating: 2/5

    1. New Media Consortium Horizon Report

      This page provides a link to the annual Horizon Report. The report becomes available late in the year. The report identifies emerging technologies that are likely to be influential and describes the timeline and prospective impact for each. Unlike the link to top learning tools that anyone can use, the technologies listed here may be beyond the ability of the average trainer to implement. While it is informative and perhaps a good idea to stay abreast of these listings, it is not necessarily something that the average instructional designer can apply. Rating: 3/5

    1. A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification

    1. Abstract

      In his commentary, Alex Holcombe makes the argument that only ‘one or two exemplars of a color category’ are typically examined in color studies, and this is problematic because a color such as ‘red’ is a category, not a single hue.

      Although in some fields it is very important to examine a range of stimuli, and in general examining the generalizability of findings has an important place in research lines, I do not think that currently this issue is a pressing concern in color psychology. Small variations in hue and brightness naturally occur in online studies, and these are assumed not to matter for the underlying mechanism. Schietecat, Lakens, IJsselsteijn, and De Kort (2018) write: “In addition, we conducted Experiments 1 and 3 in a laboratory environment, but Experiments 2, 4, and 5 were conducted in participants’ homes with an internet-based method. Therefore, we could not be completely sure that the presentation of the stimuli on their personal computers was identical for every participant in those experiments. However, we expected that the impact of these variations on our results is not substantial. The labels of the IAT (i.e., red vs blue) increased the salience of the relevant hue dimension, and we do not expect our results to hold for very specific hues, but for colors that are broadly categorized as red, blue, and green. The similar associative patterns across Experiments 2 and 3 seem to support this expectation.”

      We wrote this because there is nothing specific about the hue that is expected to drive the effects in association-based accounts of psychological effects of colors. The color 'red' is associated with specific concepts (and the work by Schietecat et al. supports the idea that red can activate associations related to either activity or evaluation, such as aggression or enthusiasm, depending on the context). This means that the crucial role of the stimulus is to activate the association with 'red', not the perceptual stimulation of the eye in any specific way. The critical manipulation check would thus be whether people categorize a stimulus as 'red'. As long as this is satisfied, we can assume the concept 'red' is activated, which can then activate related associations, depending on the context.

      Obviously, the author is correct that there are benefits in testing multiple variations of the color 'red' to demonstrate the generalizability of observed effects. However, the author is writing too much as a perception researcher, I fear. If there is a strong theoretical reason to assume that slightly different hues and chromas will not matter (because as long as a color is recognized as 'red' it will activate specific associations), the research priority of varying colors is much lower than in other fields (e.g., research on human faces) where it is more plausible that the specifics of the stimuli matter. A similar argument holds for the question of whether "any link is specifically to red, rather than extending to green, yellow, purple, and brown". This is too a-theoretical, and even though not all color research has been replicable, and many studies suffered from problems identified during the replication crisis, the theoretical models are still plausible and make specific predictions about certain hues. We know quite a lot about color associations for prototypical colors in terms of their associations with valence and activity (e.g., Russell & Mehrabian, 1977), and this can be used to make more specific predictions than to a-theoretically test the entire color spectrum.

      Indeed, across the literature many slightly different variations of red are used, and some studies (Schietecat et al., 2018) have been performed online, where different computer screens will naturally lead to some variation in the exact colors presented. This doesn't mean that more dedicated exploration of the boundaries of these effects can't be worthwhile in the future. But currently, the literature is more focused on examining whether these effects are reliable to begin with, and on answering basic questions about their context dependency, than on testing the range of hues for which effects can be observed. So, although in principle it is often true that the generalizability of effects is understudied and deserves more attention, it is not color psychology's most pressing concern, because we have theoretical predictions about specific colors, and because theoretically, as long as a color activates the concept (e.g., 'red'), the associated concepts that influence subsequent psychological responses are assumed to be activated, irrespective of minor differences in, for example, hue or brightness.

      Daniel Lakens

      References

      Russell, J. A., & Mehrabian, A. (1977). Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11(3), 273–294. https://doi.org/10.1016/0092-6566(77)90037-X

      Schietecat, A. C., Lakens, D., IJsselsteijn, W. A., & de Kort, Y. A. W. (2018). Predicting Context-dependent Cross-modal Associations with Dimension-specific Polarity Attributions. Part 2: Red and Valence. Collabra: Psychology, 4(1). https://doi.org/10.1525/COLLABRA.126

    1. Neural Approaches to Conversational AI

      Question Answering, Task-Oriented Dialogues and Social Chatbots

      The present paper surveys neural approaches to conversational AI that have been developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) chatbots. For each category, we present a review of state-of-the-art neural approaches, draw the connection between them and traditional approaches, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies.

  11. Feb 2019
    1. Dialog System & Technology Challenge 6 Overview of Track 1 - End-to-End Goal-Oriented Dialog learning

      End-to-end dialog learning is an important research subject in the domain of conversational systems. The primary task consists in learning a dialog policy from transactional dialogs of a given domain. In this context, usable datasets are needed to evaluate learning approaches, yet remain scarce. For this challenge, a transaction dialog dataset has been produced using a dialog simulation framework developed and released by Facebook AI Research. Overall, nine teams participated in the challenge. In this report, we describe the task and the dataset. Then, we specify the evaluation metrics for the challenge. Finally, the results of the submitted runs of the participants are detailed.

    1. we define conversation-turns per session (CPS) as the success metric for social chatbots. The larger the CPS is, the better engaged the social chatbot is.
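      A minimal illustration of this metric (my own sketch; the session representation below is hypothetical, not taken from the paper): CPS is simply the average number of conversation turns per session, so deeper or longer conversations raise it.

      ```python
      # Hypothetical illustration of CPS (conversation-turns per session).
      def conversation_turns_per_session(sessions: list[list[str]]) -> float:
          """Each session is a list of turns; CPS is the mean number of turns per session."""
          return sum(len(turns) for turns in sessions) / len(sessions)

      # Example: sessions with 12, 3, and 9 turns -> CPS = 8.0
      print(conversation_turns_per_session([["hi"] * 12, ["hi"] * 3, ["hi"] * 9]))
      ```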

    1. Deep Learning for Image Super-resolution: A Survey

      [Overview] Image super-resolution (SR) is a very important application area in image processing; its main goal is to increase the resolution of original images and videos. In recent years, a great deal of SR research has adopted deep learning architectures, and this survey aims to help readers quickly catch up with the latest developments in the field by comprehensively reviewing deep-learning-based SR methods.

      Introduction:

      Image super-resolution is an important research direction in image processing whose main goal is to recover a high-resolution image from a low-resolution one. Such methods have very broad applications, for example in medical imaging and security. The problem is generally very challenging, because there are always multiple high-resolution images that correspond to the same low-resolution image. Earlier work proposed a number of traditional SR methods, including prediction-based, edge-based, and sparse-representation approaches.

      With the rapid development of deep learning, deep-learning-based SR methods have been developed and have achieved state-of-the-art performance on multiple benchmark tasks. A wide range of deep learning methods have been applied to SR, from early convolutional networks (SRCNN) to recently proposed GAN-based approaches. Broadly, deep-learning SR algorithms differ in several respects: network architecture, loss function, and learning principles and strategies.

      This article mainly lays out the advantages of using deep learning for super-resolution. Although other SR surveys exist, this one differs in focusing on deep-learning-based SR algorithms rather than, as earlier work did, on traditional SR methods. The survey reviews recent progress in SR from a unified deep learning perspective.

      The main contributions are threefold:

      1. A comprehensive review of deep-learning-based image super-resolution, covering problem settings, datasets, performance metrics, a collection of deep-learning SR methods, domain-specific SR applications, and so on.

      2. A systematic and structured view of recent deep-learning SR algorithms, summarizing the advantages and weaknesses of efficient SR solutions.

      3. A discussion of the challenges and open problems in the field, along with a summary of recent trends and future directions.

    2. Revisiting Self-Supervised Visual Representation Learning

      Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among recently proposed approaches to unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural network (CNN), have not received equal attention. We therefore revisit many previously proposed self-supervised models in a thorough, large-scale study and uncover multiple crucial issues. We challenge a number of common practices in self-supervised visual representation learning, and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we substantially boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.

    3. An Introduction to Image Synthesis with Generative Adversarial Nets

      More than four years have passed since GANs were introduced in 2014, and a large body of GAN-related work has been published in major journals and conferences: mathematical work improving and analyzing GANs, work on improving the quality of GAN outputs, applications of GANs to image generation (conditional image synthesis, text-to-image, image-to-image, video), and applications of GANs in NLP and other fields. Image generation is the most heavily studied of these, and research in this area has already demonstrated the great potential of GANs for image synthesis. This paper surveys GAN applications in image generation.

    1. Julie Beck argues that unless we do something with what we have read within 24 hours, we often forget it.

      For a while I've been doing PESOS from reading.am to my website privately. Then a day or so later I come back to the piece to think about it again and post any additional thoughts, add tags, etc. I often find that things I missed the first time around manage to resurface. Unless I've got a good reason not to, I usually then publish it.

    1. incorporating community feedback and expert judgment

      BioMed Central and ResearchSquare have partnered on a project called InReview, which enables community feedback in parallel with traditional peer review. More on that project here.

    1. Interactions of tomato and Botrytis genetic diversity: Parsing the contributions of host differentiation, domestication and pathogen variation

      This article has a Peer Review Report

  12. Jan 2019
    1. A Survey of the Recent Architectures of Deep Convolutional Neural Networks

      Deep convolutional neural networks (CNNs) are a special type of neural network that has achieved state-of-the-art results on various competition benchmarks. The high performance of deep CNN architectures on challenging benchmark tasks shows that innovative architectural ideas, together with parameter optimization, can improve CNN performance across a range of vision-related tasks. This survey groups recent CNN architectural innovations into seven categories, based respectively on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention.

    2. Optimization Models for Machine Learning: A Survey

      For my purposes, I suspect the only genuinely valuable part of this paper is the compilation of dataset tables in the appendix...

    1. Web annotation, for example, is catching on as a new mode of collaboration, peer review, and other research functions.

      And the combination of community feedback on preprints with traditional and post-publication peer review through collaborative annotation is catching on with a variety of publishers. See InReview by BMC and ResearchSquare. Also COS preprint servers such as SocArXiv and Psyarxiv.

    1. Dian found many weaknesses when reviewing the incoming research proposals. Many of the ideas offered were not particularly creative or current. Some were merely duplications or recycled versions of earlier research.

      Are these review results open to the public and also shared with the researchers? Apologies if I am mistaken, but I submit a proposal to Kemristekdikti every year and have never received the full review results.

    1. The Receptor-like Pseudokinase GHR1 Is Required for Stomatal Closure

      Please find a Peer Review Report here.

      The report shows the major requests for revision and author responses. Minor comments for revision and miscellaneous correspondence are not included. The original format may not be reflected in this compilation, but the reviewer comments and author responses are not edited, except to correct minor typographical or spelling errors that could be a source of ambiguity.

  13. Dec 2018
    1. How convolutional neural network see the world - A survey of convolutional neural network visualization methods

      Definitely bookmarking this for a careful read... a Paper Summary to prepare!

      This is a review of CNN visualization methods!

      An excellent survey devoted specifically to CNN visualization; well worth reading closely.

  14. Nov 2018
    1. Adversarial Attacks and Defences: A Survey

      A survey paper on adversarial defences, written by authors from India.

    2. The GAN Landscape: Losses, Architectures, Regularization, and Normalization

      I see Goodfellow himself retweeted this paper.

      A very important review; I'll prepare a Paper Summary.

    3. Generalization Error in Deep Learning

      A review paper on the generalization ability of deep learning models. Decent, but quite theoretical and rather broad-brush...

    4. Deep learning for time series classification: a review

      This paper seems highly relevant to my own research topic!

      I should write up a proper Paper Summary for it.

    5. Deep Learning in Neural Networks: An Overview
      • This is a very impressive review.
      • It has already been cited more than 3,000 times.
      • It is 88 pages in total, but the main text is only 35 pages (88 pages, 888 references).
      • It details the history and development of deep learning in neural networks; well worth consulting!
    6. Learning From Positive and Unlabeled Data: A Survey

      A survey on learning from positive and unlabeled data (PU learning). I should read it carefully; in real-world settings this kind of problem is common and important...

    7. Deep Learning with the Random Neural Network and its Applications

      This paper surveys recent applications of the Random Neural Network (RNN) model. Judging from the summary and future-work sections at the end, the RNN already performs well in many areas, but many more questions remain to be explored and studied.

    8. The Frontiers of Fairness in Machine Learning

      This review points out, to some extent, the trends and limitations of current cutting-edge ML research; more importantly, it offers several directions that future work should focus on developing and exploring.

    9. Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

      As the title says.

      Another review...

    10. Generative adversarial networks and adversarial methods in biomedical image analysis

      This is a paper with a strong review flavor: it introduces GANs and adversarial methods (in the biomedical domain) and also discusses the strengths and weaknesses of applying these techniques.

      For me, it is a good paper for quickly getting up to speed on GAN applications.

    11. Model Selection Techniques -- An Overview

      A survey article on model selection, covering the processing of many kinds of data (signal processing, image processing, and so on), published in a signal-processing journal.

      The paper's high-level framing of model selection, and its mathematical formulations, are well worth a careful read.

    12. A Survey on Deep Transfer Learning

      This not only surveys the current state of transfer learning but also categorizes it. It also introduces the concept of "deep transfer", emphasizing the nonlinear relationship between the two learning tasks involved in the transfer. That is quite natural: transfer between linearly "similar" learning tasks is of limited interest and has little research value anyway...

    13. Analyzing biological and artificial neural networks: challenges with opportunities for synergy?

      This DeepMind paper is a bit thin; it really only qualifies as a review. It touches on many data-analysis methods and DNN concepts and, without straying off topic, discusses the love-hate entanglement between biological and artificial neural networks.

      The word "synergy" is particularly interesting here; it denotes a mutually reinforcing, amplifying effect.

    1. The better way I found to do review is to replace ALL review lecturing with problems that the students  solve in class that cover the material I want to review. 

      I can relate this to my earlier experiences. Active involvement and student engagement are guaranteed if students are tasked with solving actual physics problems related to the material taught earlier.

  15. Oct 2018
    1. A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models

      This survey takes an interesting angle and is both readable and illuminating. It divides models into three major families and summarizes the relationships among them in a figure.

      Of course, not every model is covered; Boltzmann machines, for example, are left out.

    2. A Survey on Deep Learning: Algorithms, Techniques, and Applications

      [Review] [Paper abstract]

      Machine learning is witnessing its golden era as deep learning gradually becomes the leading technique in the field. Deep learning uses multiple layers to represent abstractions of data and build computational models. Key enabling deep learning algorithms, such as generative adversarial networks, convolutional neural networks, and model transfer, have thoroughly changed our approach to information processing. However, behind this extremely fast-paced field lies a gap in understanding, because the field has never before been presented from multiple perspectives. The lack of core understanding renders these powerful methods black-box machines and fundamentally inhibits the development of deep learning. Moreover, deep learning has repeatedly been treated as a silver bullet for every stumbling block in machine learning, which is far from the truth. This paper comprehensively reviews historical and state-of-the-art approaches in vision, audio, and text processing, social network analysis, and natural language processing, and then analyzes in depth the breakthrough advances in deep learning applications. It also reviews the problems deep learning faces, such as unsupervised learning, black-box models, and online learning, and illustrates how these challenges can be transformed into productive future research avenues.

    1. Writing the Review

      1) read the paper thoroughly 2) summarize your main points 3) relevant past work 4) significance of contribution and benefit 5) coverage of all the criteria 6) review "as is" 7) polite, temperate language

    1. a good meta-review also discusses what comments you weighted more heavily from the reviewers, and why, in reaching your evaluation of the paper.

      good meta-review -> state which reviewer comments were weighted more heavily, and why

    2. Writing a good meta-review is a lot like writing a good review, only it takes into account the points raised by all of the reviewers, rather than just reflecting your own opinion.

      good meta-review -> consider all of the reviewers' opinions

    3. make great suggestions for how the authors could improve the articulation or organization of their work

      good review: 1) make great suggestions for how the authors could improve the articulation or organization of their work

    4. The Good Review will raise smart and tough questions which the authors can then address in their revisions, or it might raise fresh considerations or new aspects of a design space that the authors hadn't fully fleshed out.

      good review: 1) raise smart and tough questions which the authors can address in their revisions 2) raise fresh considerations or new aspects of a design space that the authors hadn't fully fleshed out

    5. how the author’s arguments, results, and demonstrations fit into closely related work as well as the field as a whole.

      arguments + results + demonstrations + related work + the field as a whole: all of these should tie together!

    6. raise whole new perspectives and angles of contribution that might be suggested by the work, or propose connections to areas of the literature that the author might not have thought of or even been aware of.

      good review: 1) raise whole new perspectives and angles of contributions 2) propose connections to literature that the author might not have been aware of

    7. The Good Review reflects on the contributions or possible contributions of the work, and discusses the weaknesses and limitations in a positive manner, but most particularly clearly calls out the strengths and utility of the work as well.

      Good Review 1) reflect on the contributions of the work 2) discuss the weaknesses and limitations in a positive manner 3) clearly call out the strengths and utility of the work

    1. The Eden Club - Exclusive Golf Club Membership | International Private Members Club

      The Eden Club is an international private members' club providing three very special dimensions: the most luxurious private members club in St Andrews, Scotland – the home of golf; an outstanding schedule of annual events and a unique Secretariat service.

    1. Recent reviews on DFT may be found in Jones and Gunnarsson (1989) and Dreizler and Gross (1990)

      This is before the PAW method came along (Blochl '94), so probably nothing method-specific.