890 Matching Annotations
  1. Mar 2016
    1. Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13(2), 90–100.
    2. Shelton, R. D., Foland, P., & Gorelskyy, R. (2009). Do new SCI journals have a different national bias? Scientometrics, 79(2), 351–363.
    3. Silvertown, J., & McConway, K. J. (1997). Does "publication bias" lead to biased science? Oikos, 79(1), 167–168.
    4. Yousefi-Nooraie, R., Shakiba, B., & Mortaz-Hejri, S. (2006). Country development and manuscript selection bias: A review of published studies. BMC Medical Research Methodology, 6, 37.

      On developing countries and science

    5. Evanschitzky, H., Baumgarth, C., Hubbard, R., & Armstrong, J. S. (2007). Replication research's disturbing trend. Journal of Business Research, 60(4), 411–415.

      replication research

    6. Jeng, M. (2006). A selected history of expectation bias in physics. American Journal of Physics, 74(7), 578–583.

      History of expectation bias in physics

    7. Ioannidis, J. P. A. (2008a). Perfect study, poor evidence: Interpretation of biases preceding study design. Seminars in Hematology, 45(3), 160–166.

      effect of positive bias

    8. Feigenbaum, S., & Levy, D. M. (1996). Research bias: Some preliminary findings. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization, 9(2 & 3), 135–142.

      Positive bias

    9. Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ryder, J., Sutton, A. J., et al. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), 1–193.

      positive bias

    10. De Rond, M., & Miller, A. N. (2005). Publish or perish—Bane or boon of academic life? Journal of Management Inquiry, 14(4), 321–329.

      On how increased pressure to publish diminishes creativity.

    11. Several possible problems have been hypothesised, including: undue proliferation of publications and atomization of results (Gad-el-Hak 2004; Statzner and Resh 2010); impoverishment of research creativity, favouring "normal" science and predictable outcomes at the expense of pioneering, high-risk studies (De Rond and Miller 2005); growing journal rejection rates and bias against negative and non-significant results (because they attract fewer readers and citations) (Statzner and Resh 2010; Lortie 1999); sensationalism, inflation and over-interpretation of results (Lortie 1999; Atkin 2002; Ioannidis 2008b); increased prevalence of research bias and misconduct (Qiu 2010). Indirect empirical evidence supports at least some of these concerns. The per-capita paper output of scientists has increased, whilst their career duration has decreased over the last 35 years in the physical sciences (Fronczak et al. 2007). Rejection rates of papers have increased in the high-tier journals (Larsen and von Ins 2010; Lawrence 2003). Negative sentences such as "non-significant difference" have decreased in frequency in papers' abstracts, while catchy expressions such as "paradigm shift" have increased in the titles (Pautasso 2010; Atkin 2002). No study, however, has yet verified directly whether the scientific literature is enduring actual changes in content.

      Good discussion (and bibliography) of problems involved in hypercompetition

    12. Formann, A. K. (2008). Estimating the proportion of studies missing for meta-analysis due to publication bias. Contemporary Clinical Trials, 29(5), 732–739.

      estimate of positive bias in clinical trials.

    13. Fronczak, P., Fronczak, A., & Holyst, J. A. (2007). Analysis of scientific productivity using maximum entropy principle and fluctuation-dissipation theorem. Physical Review E, 75(2), 026103. doi:10.1103/PhysRevE.75.026103.

      On rising scientific productivity over shorter careers.

    14. Atkin, P. A. (2002). A paradigm shift in the medical literature. British Medical Journal, 325(7378), 1450–1451.

      On the rise of sexy terms like "paradigm shift" in abstracts.

    15. Bonitz, M., & Scharnhorst, A. (2001). Competition in science and the Matthew core journals. Scientometrics, 51(1), 37–54.

      Matthew effect

    1. The results presented here suggest that competition among researchers has pronounced effects on the way science is done. It affects the progress of science through secrecy and sabotage and interferes with peer review and other universalistic merit-review systems. It twists relationships within a field and can increase the likelihood of a scientist engaging in misconduct. None of the focus-group participants made reference to positive effects of competition on their work, despite the fact that the focus-group questions dealt in a general way with scientists' work and the norms of conduct that govern that work. If the protocol questions had asked explicitly about competition, doubtless there would have been some discussion about the positive aspects of science. In the context of the general questions, though, the scientists referred to competition as a constant and negative force that interferes with the way science is done. It is disconcerting to ponder the consequences of competition, such as mistrust and defensive posturing, for a community that has long been committed—in principle—to shared ideas and collegiality.

      Conclusions on the negative results of competition

    2. Competition is also manifested in scientists' pressured haste, leading to carelessness, which can verge on questionable behavior. One discussant talked about scientists "cutting a little corner" in order to get a paper out before others or to get a larger grant, and another said that she once published a result that she got three times in one week but could not replicate the following week.

      How competitiveness also results in error.

    3. But there's, I think there is a question of how you interpret the data, even ... if the experiments are very well designed. And, in terms of advice—not that I'm going to say that it's shocking—but one of my mentors, whom I very much respect as a scientist—I think he's extraordinarily good—advised me to always put the most positive spin you can on your data. And if you try to present, like, present your data objectively, like in a job seminar, you're guaranteed to not get the job.

      Importance of "spinning" data

    4. You are. And you know what the problems are in doing the experiments. And if you, in your mind, think that there should be one more control—because you know this stuff better than anybody else because you're doing it, you know—you decided not to do that, not to bring up what the potential difficulties are, you have a better chance of getting that paper published. But it's—I don't think it's the right thing to do.

      deliberate positive bias

    5. dishonesty occurs more with the postdoc. Because they want to get the data—whereas if they don't publish, they don't move on. And they, I think, are more likely to sort of fudge a little bit here and there if they need to get the data done. Unless, like you say, you watch them.

      senior faculty over-estimate the likelihood of juniors committing misconduct.

    6. One mid-career scientist told a story of how he and others in his lab counteracted an abuse of power by his mentor, a senior scientist, while he was in training. His mentor received a manuscript to review that was authored by a "quasi-competitor." It presented results of experiments similar to those that were going on in the mentor's lab. The scientist continued, "That paper ... basically would have beat us to the punch. They would have published these results before us, and they would have gotten credit, and not us. And my mentor, God bless him, sat on the paper." The mentor not only delayed writing the review but asked someone working in the lab to write it (a move of questionable ethicality in itself). That lab person and our respondent decided, in response, to stall their own work, so that their lab would not have an unfair advantage over the group who submitted the paper for review. In the end, the original group got credit for the findings, while the respondent's lab was also able to publish their slightly different findings. He ended his story with, "Sometimes you're in an awkward position, and you try to do the best thing you can under the circumstances, within your own internal ethical clock or whatever. And sometimes it's ugly and it's imperfect, but it's the only thing you can do. If we had gone to the mentor and voiced this objection, our careers would have been over. If we had approached the journal—God forbid, forget it." The speaker qualified this story by saying that it made him sound much more ethical than he actually is.

      peer review deliberately delayed in order to slow competitor

    7. The focus-group discussions showed, however, that scientists see peer review as affording a unique, even protected opportunity for competitors to take advantage of them. In this sense, competition infects the peer review process, not only through scientists' competition with other applicants, but also through scientists' distrust of the reviewers themselves, as competitors. The following exchange among mid-career discussants shows their sense of vulnerability.

      Evidence that peer-reviewers are competitors.

    8. By contrast, others use it in the publication process solely to maintain their competitive bid for priority in a line of research inquiry, as a way to sabotage others' progress. A scientist in an early-career group acknowledged the need to make results reproducible by telling people "the whole recipe of the whole method" if asked directly, but then she talked about the "little trick" of not including all the details in a publication or presentation. Like the scientists in a different group quoted above, she mentioned others' practice of taking photographs of poster presentations in order then to publish the results first. She said that people, in defense, "omit tiny little details": "But sometimes in the publication, people, just to protect themselves, will not give all the details. It's always right, but maybe it's not totally complete—to protect themselves. Because your ideas get stolen constantly, and it's so competitive if you're a small lab."

      sabotaging reproducibility

    9. A more deliberate form of not sharing is the omission of critical details in presentations, papers and grant proposals so that others will have difficulty replicating and extending one's own research.

      Sabotaging reproducibility.

    10. I know a large number of people in that category, in my own experience, who ... opted out because they didn't want to play. They didn't want to play the kind of games that have to be played to be successful, and in bringing in money and getting the papers out. There's so much more than just doing good science that comes into it. There's so much communication and there's salesmanship that has to go on.

      On the negative impact competition has on career choice

    11. I really hate to admit this, but you do the same thing with your competitors as you do with grant agencies. You sucker-punch them. You might have—when I submit a paper, I already have the next two or three papers' worth of data. I mean, I know what I'm going to be publishing a year from now, mostly. But the paper that comes out of my lab is Part A. Parts B and C are mostly on my desk. And I've put things in part A to basically entice my competitors into making an ass out of themselves, or to second guess, or say, "Oh that must be wrong because of that, or something."

      Gaming referees.

    12. You submit the first grant, you propose the novel thing. You know damn well any study section that's even mildly conservative is going give you, "Well, it sounds promising." They might give you a good score, you hope for a good score, but it's not going to get funded, because it's too novel, it's too risky, it's too blah blah. But you already have the damn data. You know on the second resubmit, you're going to say, "Good point! We took that to heart. Oh, what a wonderful suggestion! We will worry about this too. Guess what? Here's the data!" Shove it down their throat. And then it's funded. Because, wow, you flagged them, you sucker-punched them. They said, "This is really novel, blah, blah. Boy if you could only do that, that would be a great grant." Well, you already did do it, and that's the point. And you basically sucker-punch the study section into giving you the money by default. They have to at that point. They don't have a choice.

      On the need to have results before funding is given.

    13. Competition for funding, publications, scientific priority and overall career success leads scientists to describe their work in terms of strategies one might use in a game. Focus-group participants revealed that, like it or not, working within the scientific community requires artful maneuvering and strategizing.

      relationship of competition to explicit game-playing.

    14. To publish. And sometimes publish in the right journals.... In my discipline ... there's just a few journals, and if you're not in that journal, then your publication doesn't really count.

      Importance of "top" journals

    15. In addition to that, the other thing that they focus on is science as celebrity.... So the standards are, "How much did it cost, and is it in the news?" And if it didn't cost much and if it is not in the news, but it got a lot of behind-the-scenes talk within your discipline, they don't know that, nor do they care.

      Importance of news-worthiness.

    16. You've got to have a billion publications in my field. That is the bottom line. That's the only thing that counts. You can fail to do everything else as long as you have lots and lots of papers.

      Importance of publications in science--overrules everything else.

    17. In short, there are many people (the oversupply factor) competing for prestigious, desirable and scarce rewards and resources (the funding factor), in a struggle that bestows those rewards disproportionately on those of marginally greater achievement (the tournament factor). This situation is supported to the detriment of that "legion of the discontented" and to the benefit of senior investigators, because it "generates good research by employing idealistic young graduate students and postdoctoral fellows at low cost" [26]. In other words, the benefits accrue to funding and employing institutions. This paper explores some of the costs that accompany these benefits.

      Economic structure of competition at universities.

    18. Richard B. Freeman and colleagues [28] have characterized the problem as follows: "Research in the biosciences fits a tournament economic structure. A tournament offers participants the chance of winning a big prize—an independent research career, tenure, a named chair, scientific renown, awards—through competition.... It fosters intense competition by amplifying small differences in productivity into large differences in recognition and reward. Well-structured tournaments stimulate competition. Because the differences in rewards exceed the differences in output, there is a disproportionate incentive to 'win'" (p. 2293). Research environments in which only small numbers of scientists have the opportunity to gain significant attention increase the competitive stakes: playing the game may be a gamble, but the payoff for winning is significant [28, 36].

      The tournament structure of biosciences.

    19. Another perspective sees competition as a function not just of funding, but of the balance between supply and demand of resources, particularly human resources. In the current competitive system, young scientists are pitted against one another for attractive career opportunities that are becoming increasingly scarce [28]. Researchers, feeling the pressure to be first to present findings in their fields, employ armies of graduate students and postdoctoral fellows and strive to make their laboratory groups the smartest and the fastest. The result is a "postdoc bottleneck" [29] where the supply for highly educated and trained researchers far exceeds the demand [30–33]. In concrete terms, Donald Kennedy and colleagues [34] have described the structural problem as a source of excess supply of human capital: "We've arranged to produce more knowledge workers than we can employ, creating a labor-excess economy that keeps labor costs down and productivity high" (p. 1105). The system produces, they claim, a "legion of the discontented" [34]. They argue that institutional and policy decisions about training scientists should be coupled to placement histories of recent graduates, numbers of intellectual offspring of faculty, and job markets for scientists. Roger L. Geiger [35] has suggested that the imbalance between supply and demand is due in part to deficiencies in graduate

      Role of the lack of positions. Interestingly, Fang et al. have shown that this is not reflected in misconduct statistics: the vast majority of scientific fraud is committed by senior (male) scientists, not job-hungry postdocs or grad students.

    20. Analysts differ as to the reasons why competition has intensified. Some see the situation in terms of money. Tempering the effects of competition is not a prime impetus behind calls by the National Science Board [26] and by a recent coalition of 140 college presidents and other leaders [27] for more federal funding for scientific research; however, some scientists see such advocacy movements in terms of easing certain aspects of competition that are worsened by tight dollars. More money, more positions, and overall expansion of the research enterprise would improve the situation.

      role of funding

    21. There are indications, however, that the nature of competition has changed in recent years. Goodstein [25] argues that this shift is linked to negative outcomes: Throughout most of its history, science was constrained only by the limits of its participants' imagination and creativity. In the past few decades, however, that state of affairs has changed dramatically. Science is now held back mainly by the number of research posts and the amount of research funds available. What had been a purely intellectual competition has become an intense struggle for scarce resources. In the long run, this change, which is permanent and irreversible, will probably have an undesirable effect on ethical behavior among scientists. Instances of scientific fraud will almost surely become more common, as will other forms of scientific misconduct (p. 31).

      relationship of negative aspects of competition to a change in funding model that promotes scarcity. See Goodstein, D. (2002). Scientific misconduct. Academe, 88, 28–31.

    22. It is negatively correlated with subscription to normative systems (either traditional or alternative) and sense of community.

      Scientific competition is negatively correlated with ethical systems and sense of community.

    23. Melissa S. Anderson [20] furthermore found that a competitive departmental environment in science is positively correlated with exposure to misconduct, fears of retaliation for whistle-blowing, and conflict.

      More evidence of correlation between competition and "exposure to misconduct".

    24. Empirical findings show a strong, positive relationship between the level of perceived competition in an academic department and the likelihood that departmental members will observe misconduct among their colleagues [19].

      The higher the level of perceived competition in an academic department, the greater the likelihood that people will observe misconduct among peers.

    25. David Blumenthal and colleagues [17] found that university geneticists and other life scientists who perceive higher levels of competition in their fields are more likely to withhold data or results. Such withholding took the form of omitting information from a manuscript or delaying publication to protect one's scientific lead, maintaining trade secrets, or delaying publication to protect commercial value or meet a sponsor's requirements. John P. Walsh and Wei Hong [18] have reported similar findings.

      Evidence that competition causes scientists to withhold results and/or data

    26. Competition in science has its bright side, which past analysts and commentators tended to emphasize and current writers often affirm. It has been credited with ensuring that ideas, work, proposals and qualifications of all interested parties are evaluated prior to the distribution of rewards, particularly funding and positions. From this perspective, competition promotes open examination and fair judgment. The norm of universalism [11] is supported when all qualified people have the opportunity to propose and defend their ideas and work in open competition [12]. After all, absent competition, cronyism is likely to flourish.

      Positive value of competition in science.

    27. Increases in levels of competition in science are symptomatic of a more general hypercompetitive shift in organizations.

      The general rise of hypercompetitiveness. (source: https://hypothes.is/a/AVO5uuxxH9ZO4OKSlamG)

    28. Because science is a cumulative, interconnected, and competitive enterprise, with tensions among the various societies in which research is conducted, now more than ever researchers must balance collaboration and collegiality with competition and secrecy.

      Institute of Medicine's call to balance collaboration and collegiality against competition and secrecy.

    29. Bok, D. (2003). Universities in the marketplace: The commercialization of higher education. Princeton: Princeton University Press.

      Sources of competition among universities

    30. Their discussions suggest clearly that the downside of competition has been underestimated and that it may have more prominent effects now than in past years. As reputation, respect and prestige are increasingly connected to resources and to success in the competitions that distribute those resources, scientists find more of their work and careers caught up in competitive arenas. The six categories of competition's effects that emerged in our analyses suggest reason for concern about the systemic incentives of the U.S. scientific enterprise and their implications for scientific integrity.

      Implications of competition for scientific integrity.

    31. When the actor Michael J. Fox was in the initial stages of creating his foundation for research on Parkinson's Disease, he came to recognize the negative impact that competition among scientific groups has on the overall progress of research on the disease. The director of one group actually said to him, "Well, if you don't help us, then, at least, don't help them" [1, p. 236]. Such was his introduction to the competitive world of U.S. science.

      Anecdote about how Michael J. Fox discovered scientific competition when he set up his foundation for Parkinson's disease.

    1. The winner-take-all aspect of the priority rule has its drawbacks, however. It can encourage secrecy, sloppy practices, dishonesty and an excessive emphasis on surrogate measures of scientific quality, such as publication in high-impact journals. The editors of the journal Nature have recently exhorted scientists to take greater care in their work, citing poor reproducibility of published findings, errors in figures, improper controls, incomplete descriptions of methods and unsuitable statistical analyses as evidence of increasing sloppiness. (Scientific American is part of Nature Publishing Group.) As competition over reduced funding has increased markedly, these disadvantages of the priority rule may have begun to outweigh its benefits. Success rates for scientists applying for National Institutes of Health funding have recently reached an all-time low. As a result, we have seen a steep rise in unhealthy competition among scientists, accompanied by a dramatic proliferation in the number of scientific publications retracted because of fraud or error. Recent scandals in science are reminiscent of the doping problems in sports, in which disproportionately rich rewards going to winners has fostered cheating.

      How the priority rule is killing science.

    1. The role of external influences on the scientific enterprise must not be ignored. With funding success rates at historically low levels, scientists are under enormous pressure to produce high-impact publications and obtain research grants. The importance of these influences is reflected in the burgeoning literature on research misconduct, including surveys that suggest that approximately 2% of scientists admit to having fabricated, falsified, or inappropriately modified results at least once (24). A substantial proportion of instances of faculty misconduct involve misrepresentation of data in publications (61%) and grant applications (72%); only 3% of faculty misconduct involved neither publications nor grant applications.

      Importance of low funding rates as an incitement to fraud

    2. The predominant economic system in science is “winner-take-all” (17, 18). Such a reward system has the benefit of promoting competition and the open communication of new discoveries but has many perverse effects on the scientific enterprise (19). The scientific misconduct among both male and female scientists observed in this study may well reflect a darker side of competition in science. That said, the preponderance of males committing research misconduct raises a number of interesting questions. The overrepresentation of males among scientists committing misconduct is evident, even against the backdrop of male overrepresentation among scientists, a disparity more pronounced at the highest academic ranks, a parallel with the so-called “leaky pipeline.” There are multiple factors contributing to the latter, and considerable attention has been paid to factors such as the unique challenges facing young female scientists balancing personal and career interests (20), as well as bias in hiring decisions by senior scientists, who are mostly male (21). It is quite possible that, in at least some cases, misconduct at high levels may contribute to attrition of women from the senior ranks of academic researchers.

      Reason for fraud: winner take all

    1. Editors, Publishers, Impact Factors, and Reprint Income

      On the incentives for journal editors to publish papers they think might improve IF... and how citations are gamed.

    1. This is important. All abstractions can be grounded in some concrete event or situation. However, the practice of starting with theory often ignores this, and students struggle to grasp abstractions that are suspended in space. Given that all abstractions have origins, it should be easy to situate instruction in something concrete.

  2. Jan 2016
    1. Is he saying something about inductive vs. deductive methods? Where typically historians have had a model or a hypothesis, but now they are allowing the data to tell the story?

  3. Dec 2015
    1. Similarly, in science there exists substantial expertise making brilliant connections between concepts, but it is being conveyed in silos of English prose known as journal articles. Every scientific journal article has a methods section, but it is almost impossible to read a methods section and subsequently repeat the experiment—the English language is inadequate to precisely and concisely convey what is being done.

      This issue of reproducible science is starting to be tackled, but I do believe formal methods and abstractions would go a long way toward making sure we adhere to these ideas. It is a bit like writing a program with global state vs. a functionally defined program, but even worse, since you may forget to write down one little thing you did to the global state.
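
      To make that analogy concrete, here is a minimal, hypothetical Python sketch (the function names and the buffer_ph value are invented for illustration, not taken from the annotated article): a procedure that silently reads global state is hard to repeat from its description alone, while one whose inputs are all explicit can be reproduced from the call itself.

          # Hypothetical illustration of the global-state vs. functional analogy.
          buffer_ph = 7.4  # global state: easy to tweak mid-experiment and forget to report

          def prepare_sample_global(tissue_mg):
              # Depends on buffer_ph defined elsewhere; someone repeating the
              # "method" cannot tell from this call which value was actually used.
              return tissue_mg / buffer_ph

          def prepare_sample_explicit(tissue_mg, buffer_ph):
              # Every input is stated, so the call itself documents the method.
              return tissue_mg / buffer_ph

          print(prepare_sample_global(10))         # result hinges on hidden state
          print(prepare_sample_explicit(10, 7.4))  # reproducible from the arguments alone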

  4. Sep 2015
    1. Nais

      Nais

    2. Hirundinidae),

      Hirundinidae

    3. (Aves:

      Aves

    4. Tachycineta

      Tachycineta

    5. Apodidae)

      Apodidae

    6. (Aves:

      Aves

    7. Collocalia linchi

      Collocalia linchi

    8. Hirundinidae)

      Hirundinidae

    9. (Aves:

      Aves

    10. Aves)

      Aves

    11. (Apodes,

      Apodes

    12. Kina

      Kina

    13. Enicurus leschenaulti

      Enicurus leschenaulti

    14. Collocalia

      Collocalia

    15. (Brachypteraciidae)

      Brachypteraciidae

    16. Aerodramus

      Aerodramus

    17. Anatini).

      Anatini

    18. (Aves)

      Aves

    19. Ramphocelus

      Ramphocelus

    20. Australasia

      Australasia

    1. C. linchi

      Collocalia linchi

    2. C. linchi

      Collocalia linchi

    3. C. linchi

      Collocalia linchi

    4. C. esculenta

      Collocalia esculenta

    5. C. esculenta

      Collocalia esculenta

    6. C. linchi

      Collocalia linchi

    7. C. esculenta

      Collocalia esculenta

    8. C. linchi

      Collocalia linchi

    9. C. linchi

      Collocalia linchi

    10. C. esculenta

      Collocalia esculenta

    11. C. esculenta

      Collocalia esculenta

    12. C. linchi

      Collocalia linchi

    13. Collocalia linchi^

      Collocalia linchi

    14. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    15. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    16. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    17. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    18. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    19. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    20. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    21. Collocalia esculenta cyanoptila

      Collocalia esculenta cyanoptila

    22. Collocalia esculenta nitens^

      Collocalia esculenta nitens

    23. Collocalia esculenta becki^

      Collocalia esculenta becki

    24. Collocalia esculenta marginata

      Collocalia esculenta marginata

    25. Collocalia esculenta marginata

      Collocalia esculenta marginata

    26. Collocalia esculenta bagobo

      Collocalia esculenta bagobo

    27. Collocalia esculenta bagobo

      Collocalia esculenta bagobo

    28. Collocalia troglodytes^

      Collocalia troglodytes

    29. Aerodramus maximus

      Aerodramus maximus

    30. Aerodramus

      Aerodramus

    1. C. esculenta

      C. esculenta

    2. C. esculenta

      C. esculenta

    3. C. linchi

      C. linchi

    4. C. esculenta cyanoptila

      C. esculenta cyanoptila

    5. C. escidenta cyanoptila

      C. esculenta cyanoptila

    6. C. linchi

      C. linchi

    7. C. esculenta

      C. esculenta

    8. C. linchi

      C. linchi

    9. C. linchi

      C. linchi

    10. C. linchi

      C. linchi

    11. C. linchi

      C. linchi

    12. C. esculenta

      C. esculenta

    13. C. linchi

      C. linchi

    14. C. esculenta

      C. esculenta

    15. C. linchi

      C. linchi

    16. C. esculenta

      C. esculenta

    17. C. linchi

      C. linchi

    18. C. linchi

      C. linchi

    19. C. esculenta

      C. esculenta

    20. Nusa

      Nusa

    21. Madura

      Madura

    22. C. linchi

      C. linchi

    1. Borneo, dodgei

      Borneo dodgei

    2. C. esculenta

      Collocalia esculenta

    3. C. esculenta

      Collocalia esculenta

    4. C. dodgei

      Collocalia dodgei

    5. C. esculenta

      Collocalia esculenta

    6. C. esculenta

      Collocalia esculenta

    7. C. esculenta

      Collocalia esculenta

    8. C. esculenta

      Collocalia esculenta

    9. Collocalia dodgei

      Collocalia dodgei

    10. Collocalia troglodytes

      Collocalia troglodytes

    11. C. linchi

      Collocalia linchi

    12. C. esculenta

      Collocalia esculenta

    13. Aerodramus

      Aerodramus

    14. Collocalia

      Collocalia

    15. Aerodramus

      Aerodramus

    16. Collocalini

      Collocalini

    17. (Apodidae:

      Apodidae

    18. Nais,

      Nais

    19. Kinabalu 'linchi' swiftlet

      Kinabalu linchi swiftlet