  1. Apr 2017
  2. theolib.atla.com
    1. the visual culture both students and teachers inhabit

      Context of "visual culture" plz.

    2. conceptual and application-oriented instruction are provided through brief online videos prior to the class session

      Immediate Q: how many students actually watch them?

    3. Flip Over Research Instruction: Delivery, Assessment, and Feedback Strategies for “Flipped” Library Instruction

      This is our April 2017 #lisjc article!

  3. Mar 2017
    1. 55.00 (31.09), 65.00 (17.12), 78.57 (20.73)

      If I multiply these out, the average of all 34 participants is 69. That's 69/100 CORRECT, not 69 incorrect. So if this study rests on a math error (correct answers reported as incorrect), then scores actually got better with confidence. Otherwise, the actual average of the results was 30/100. A genuinely dismal score, not a D. I really need to see the data, dammit.
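
      To see whether an overall mean of 69 is even reachable from those three group means, here is a minimal sanity check in Python. The group sizes are my own assumption (the excerpt never reports them); only the three means and N = 34 come from the paper.

        # Minimal sanity check, with HYPOTHETICAL group sizes: only the
        # three group means and N = 34 come from the paper.
        means = [55.00, 65.00, 78.57]  # low / medium / high confidence
        sizes = [10, 12, 12]           # assumed split summing to 34
        overall = sum(m * n for m, n in zip(means, sizes)) / sum(sizes)
        print(round(overall, 2))       # ~66.85 for this split

      Whatever the real split, the weighted mean is bounded by the smallest and largest group means (55.00 to 78.57), so 69 is reachable; the unresolved question is whether these means count correct or incorrect answers.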

    2. minimal relationship

      You didn't want a "minimal" relationship, you wanted a strong inverse relationship.

    3. There were more total incorrect answers in all three confidence levels on the general test than on the library test

      This doesn't really seem notable because there's almost no way to evaluate the comparative "levels" of the two tests relative to each other.

    4. .

      YOU DIDN'T PRESENT A CROSS-SECTION OF FINDINGS BY ACADEMIC YEAR WHY DIDN'T YOU DO THAT

    5. “in-course evidence of skill deficiencies in these areas, such as weak citation practices, sloppy search techniques, and problems constructing research questions and theses”

      As noted where? By instructors marking assignments? Or in your own third-party analysis?

    6. least competent students were most vulnerable to failing to properly calibrate self-views and research skills

      I just want someone, somewhere, to say "This is a fundamental fact about information literacy." Just once.

    7. 198

      Now what are these numbers?! Total wrong answers, across 34 tests? WHY?! 198 / 34 = 5.8, and students did NOT get an average of 5.8 questions wrong if the mean was 69. I am so confused.

    8. 69 (D)

      OH NO so mean scores WERE 60 & 69, which means that it's totally impossible for the means by group to be 45, 35, and 20. So I want a look at the data.

      (Also, 69 is a C+ in Canada. So this gave me heart palpitations for a second. This is a Canadian journal, dammit!)

    9. there were fewer incorrect responses in the low confident category (M = 55.00) than in the high confident category (M = 78.57)

      I'm really confused by this. How can mean scores for the GK test be ~70 if the three group means were 45, 35, and 20? Unless you mean that the mean INCORRECT scores for the tests were 60 and 69.71. Which is not what the Results section starts with.

    10. each participant was categorized as low, medium, and high confidence.

      Curious as to why you would do this on a participant scale instead of on individual questions.

    11. Fifteen participants reported that they had visited the library at least once with a class for formal library instruction.

      Confused as to why "visiting the library" is part of this - don't formal library instruction sessions also happen outside the library? That should be made clear.

    12. highest levels of confidence would be found in participants with the most

      AUGH no. There has to be a third influence here: not all confident students are right, not all confident students are wrong; sometimes confidence is overconfidence, and THAT is predicted by ___.

    13. First year students

      I feel like you would need a control group at a higher level, who had been exposed to academic research methods for longer, or something.

    14. Two causes of overconfidence within library research were proposed: a lack of personal interest in research methods, and anxiety surrounding the research process

      Anxiety + disinterest? Sounds like two symptoms of an underlying cause: some kind of misapprehension of the scholarly endeavour as a whole. Feels like the study can't stop here ....

    15. the important topic of overconfidence in academic library patrons

      What about non-academic patrons?

    16. Given the high percentage of undergraduate students that visit the library for research instruction, there should be more examinations of the effects of overconfidence within the library environment

      What high percentage? This needs a citation.

    17. Rather than confront and resolve this lack, a person runs the risk of displacing their low grades upon external factors, such as difficult examinations or academically rigorous professors

      This is definitely paralleled in similar studies about internal and external locus of control, etc., in things like happiness and satisfaction studies rather than academic success.

    18. studies investigating overconfidence demonstrate that low performing students are more disposed to overestimate their academic achievement than high performing students.

      I feel like this is backwards. I need to think of this as: overconfident students are more likely to be low performers; students who had a better grasp of their own skill level were higher performers.

    19. both groups benefited from confidence judgment training, and the need to justify answer selection on the testing instrument decreased overconfidence for low achievers.

      Decreased overconfidence but didn't improve achievement?

    20. within an academic library setting (G

      I don't understand what this "setting" specifically is. If you're measuring overconfidence in academic subjects, aren't you measuring it with research and use of library materials de facto? Whether it's measured by/on a test or earlier in the research process?

    21. very little research on overconfidence has been conducted in library and information science

      I would argue a significant portion of information-seeking research is about overconfidence in some way or another.

    22. not overestimate that students are technologically competent

      This has been a huge eye-opener since I started working in a college library.

    23. Students surveyed in 2009 defined themselves as intellectually above average a startling 54% more than students measured in 1966

      This isn't very surprising, especially now that the current generation is considered tech-savvy and thinks it understands how research works since it has access to Google.

  4. Feb 2017
    1. (active manipulation). Hand/object manipulation may involve lifting, pulling, closing, rotating, or turning. Within the digital space, this is identified as scrolling or zooming.

      Glad to see this makes some inroads into "manipulative aesthetics" but I'm not sure we can account for the wide diversity of manipulative access in archives/museums/etc.

    2. Physical objects offered a higher level of emotional intensity and engagement for the user based upon the level of interest and complexity of the object. Users were highly engaged with the digital documentation provided with the digital objects.

      That sounds like "Emotional : Physical :: Intellectual : Digital". I don't know how I feel about that dichotomy.

    3. the size

      The recurring emphasis on size seems significant to me: for digital objects, size seems temporary since you can (usually) easily zoom in and out on something. (I've heard similar observations used as objections to digital facsimiles.) Why does size matter to us, especially in matters of affect?

    4. Users’ experiences with the physical objects encouraged inquisitive thinking and self-reflection but users were not as highly engaged with the documentation.

      Could the discrepancy stem from a tendency by the digital user to see the artifact itself as textual and that it therefore merges intellectually with the documentation that surrounds it?

    5. The results suggest that when users start with physical collections first, they spent more time on them than the digital collections, showing a mean of 964.3 for physical and 451.2 for digital. However, when users started with the digital collections, they tended to spend more time with them than the physical collections, showing a mean of 1150.0 for digital and 621.7 for physical.

      I'm not really surprised: doesn't that indicate simply that people spend more time on artifacts the first time they encounter them (whether physically or digitally)?

    6. A Comparative Study of User Experience between Physical Objects and Their Digital Surrogates

      Hey, we're collaboratively annotating this article as part of #lisjc!

  5. Jan 2017
    1. all people, no matter what their position, would have to articulate and defend the values and assumptions on which their claims are made.

      Or, you can take the shortcut: produce 'alternative facts.'

    2. Neutrality is indeed a myth, and this is something that LIS professionals should acknowledge as well as embrace. This discovery means that it is both appropriate and necessary for LIS professionals to have a political position. Whether it be in collections and services or scholarly communication, it is crucial to have a position and to be able to articulate it, both for the sake of transparency and for the sake of social responsibility. Progress is possible only when one is willing to take a position that goes against the status quo and when those judging that position are similarly willing and able to stand for its merit

      Was definitely hoping for some more work in imagining how these things play out in practice. This conclusion seems a bit redundant at this point.

    3. Armstrong advocates for changing publishers’ decisions from whether to publish papers to how to publish them, to encourage the publication of innovative findings (1

      Well here's what I'm talking about.

    4. duplicity at worst;

      +1

    5. The literature reviewed, and the arguments contained here, suggest that there is an opposition between neutrality and social justice.

      Purely in the realm of collection development and procurement, these arguments fall far short of what I'd have hoped to read: about the act of presentation of materials to users, the context that a library creates (authority? popularity? wholesomeness?) and whether libraries can temper those associations through other means. Maybe a selection process can never be neutral, but more important is how users perceive those selections - how much they understand about how their library sausage gets made.

    6. Blanke’s argument is that librarians must have clearly articulated political and philosophical ideals or positions or else they will end up supporting power and privilege without purpose or direction.

      +1

    7. Berninghausen (1972) argued in favour of the principle of neutrality in librarianship and opposed the alternative, partisanship. He argued that interest in social and political issues would weaken the American Library Association and lead to librarians making decisions about “approved” library materials based on their own opinions concerning social and political issues

      Okay, here's the partisanship stuff. Does Berninghausen give examples of how this looks in practice, or get to the point of discussing how decisions get made based on opinions anyway?

    8. Libraries should provide materials and information presenting all points of view on current and historical issues.

      Again, I'd like to discuss how we do this during such a controversial time, while making all of our patrons feel safe in our libraries.

    9. Books and other library resources should be provided for the interest, information, and enlightenment of all people of the community the library serves. Materials should not be excluded because of the origin, background, or views of those contributing to their creation.

      I'm having a bit of trouble coming to terms with this. Due to the recent election results, how do we provide library resources for "all people" without making our patrons feel unsafe or unaccepted? I feel like this is a very important thing to discuss now that we are entering such a controversial time.

    10. This suggests that neutrality is never completely possible. Then I introduce the LIS debate about neutrality, which presents an opposition between neutrality and social justice in library services.

      It concerns me that "social justice" is opposite "neutrality" and not "social conservatism" - or that "neutrality" isn't opposite "partisanship" or "advocacy" or some other word that could indicate either side.

    11. Foucault, the “Facts,” and the Fiction of Neutrality: Neutrality in Librarianship and Peer Review

      Hey, we're annotating this article collaboratively as part of the #lisjc project!

  6. Nov 2016
    1. Future Research

      I would have liked to know the geographical breakdown of the respondents to see if there were trends or themes within the different parts of Canada.

    2. YouTube video

      I wonder if this is because they consider YouTube videos to be in the public domain.

    3. time-consuming process of getting copyright permission or clearance if necessary

      It would be neat if universities set up some sort of online "Ask a Copyright Officer" virtual reference chat to make this less time consuming.

    4. 40% didn’t know whether copyright training was offered

      Yes. I've heard time and time again (from current academic librarians) that librarians need to get better at marketing themselves and what we do.

    5. which is required at some institutions

      This basically forces them to have some sort of relationship with the librarians. However, this is also a way for administration to avoid making it mandatory that their teaching staff is trained in copyright and fair dealing. I don't know if I like this policy.

    6. while 16% would ask the copyright owner for permission, and 14% would ask someone else such as a librarian

      I wonder if this result is biased by the fact that respondents knew they were being surveyed about copyright?

    7. Of those who did, 55% asked a librarian, while 40% asked a colleague

      Yikes. I wonder why they asked colleagues instead of librarians?

    8. Respondents also said that faculty liaison and outreach were the most important methods of raising awareness

      Would be interested in hearing how librarians in our LISJC community reach out to faculty and staff regarding copyright issues.

    9. Elementary and secondary school teachers were found to lack proficiency in understanding of copyright law, although those with at least five years of experience using multimedia in the classroom knew more than those with less experience (Sh

      Perhaps this has to do with a lack of technology? This was found in a study published in 1999.

    10. As of the time of writing, nine Universities Canada (UC) member institutions have publicly stated that they will not renew their Access Copyright licences, bringing the ratio to more than half of member institutions outside Quebec, as compared to 37% in 2014 (

      I'd be interested in hearing why these universities decided to separate from Access Copyright. I'm assuming it mostly has to do with money, but maybe also because they didn't see value in AC?

    11. the Court ruled that 30-second previews of songs are used by consumers for research purposes, and can be considered fair dealing (para. 30).

      This seems strange to me. You can copy up to 10% of a copyright protected work. Why 30 seconds of a song? What if a song is 15 minutes or 1 minute?
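
      For scale, here is my own arithmetic (not from the ruling) on what fraction of a track a flat 30-second preview represents at different lengths:

        # What share of a track is a fixed 30-second preview?
        # (My own illustration; the 10% figure above is the fair
        # dealing copying guideline, not from the ruling.)
        for minutes in (1, 5, 15):
            share = 30 / (minutes * 60) * 100
            print(f"{minutes} min track: {share:.1f}%")
        # 1 min track: 50.0%
        # 5 min track: 10.0%
        # 15 min track: 3.3%

      A fixed duration and a fixed percentage can differ wildly, which is exactly why the 30-second figure seems arbitrary next to a 10% rule.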

    12. Among the concerns of university administrations is the risk of litigation due to unintentional copyright infringement by faculty who may be unacquainted with the finer points of copyright law

      Is this not reason enough to make it mandatory that their teaching staff are appropriately trained on copyright and fair dealing? I think this would also foster a closer relationship between the teaching staff and librarians.

    13. They found that respondents reported a limited knowledge of copyright and admitted gaps in their understanding, but that they did not want a required copyright course due to time constraints (Smi

      This is an important discussion point: do people really think that copyright "doesn't affect them" such that they shouldn't prepare themselves? Is it always a thing you have to learn by screwing up (or by being screwed over)? Don't you waste more time that way? Wouldn't you think academics would know better?

    14. Even media directors, who one would imagine would have a more developed knowledge of copyright law than their colleagues in other subjects, were found to demonstrate limited competency in their understanding (C

      A bit concerned about the use of 20-year-old studies here.

  7. Oct 2016
    1. Awareness and Perception of Copyright Among Teaching Faculty at Canadian Universities

      Hey, we're annotating this article as part of #lisjc for November 2016!

  8. Aug 2016
    1. On Dark Continents and Digital Divides: Information Inequality and the Reproduction of Racial Otherness in Library and Information Studies

      Hey, we'll be annotating this article for the September 2016 iteration of LIS Journal Club!

  9. Jul 2016
  10. www.comminfolit.org
    1. “...all models are wrong; the practical question is how wrong do they have to be to not be useful”

      I wish you had opened with this!

    2. person

      It rankles me every time I see "people" and not "students," because I think TCs work a lot better for the traditional (young, freshly-high-school-graduated) university student than they do for learners at other stages in their lives or outside formal spheres. I've seen little discussion of how TCs work in independent or self-guided learning, so I'm not sure that I can get behind this language.

    3. Threshold concepts may be understood as a shortcut through the theories for disciplinary faculty who do not hold advanced degrees in education (Meyer & Land, 2007)

      I would be hard-pressed to market this as a good thing.

    4. Threshold concepts will exist for specific areas of information science, such as metadata and discovery, and be articulated by librarians not traditionally associated with library instruction

      Intrigued by this, but unfortunately have not had time ahead of the lisjc chat / annotation to work up what these could be from a discovery perspective. As noted above, of those presented here Authority, Research Process, & especially Scholarly Discourse come through clearest in our discovery user experience work.

    5. Novices may find it uncomfortable to consider knowledge as negotiated rather than fixed; they may struggle to connect their work to the broader conversations in the discipline.

      It seems to me that "struggl[ing] to connect [one's] work to the broader conversations in the discipline" may well be something that affects LIS practitioners just as much as other disciplines.

      I would question the extent to which we ourselves buy into the idea that we are a part of an ongoing scholarly conversation ourselves - within our own domain of LIS. Do we engage with scholarly work or other 'existing conversations' to the extent that we are expecting of our users in this concept?

    6. scholarly conversation and knowledge creation take place in the context of a community that includes novices, apprentices, and experts

      This insight seems absolutely correct from ongoing user experience work at my workplace (a university). We see in more expert users a very well-developed idea that learning and knowledge creation takes place in a scholarly community. This community is understood as supportive, as it can provide recommendations, help and guidance, and has deep expertise in particular specialist areas that are tied to research in known and predictable ways.

    7. Librarians are trained in, care about, and often create database structures and search interfaces for information retrieval

      I find this threshold concept the least "best fitting" of those identified. I suspect it may speak to a limitation of the Delphi method in this case - perhaps the participants selected, or a 'bandwagon' effect of some sort?

      What I'm seeing here (particularly in my highlights above) are 2016 perspectives that do not chime with the current novice or expert understandings of library search and discovery systems that I see as a practitioner - let alone with a more general, complicated idea of being an information-literate person.

    8. Irreversible: once grasped, cannot be un-grasped

      ... which is something I've always found a bit questionable in this premise.

    9. Information is packaged in different formats because of how it was created and shared. Focusing on process de-emphasizes the increasingly irrelevant dichotomy between print and online sources by examining content creation in addition to how that content is delivered or experienced. While the relevance of the physical characteristics of various formats has waned with the increasing availability of digital information, understanding format in the context of the information cycle is still an essential part of evaluating information.

      I find this whole concept / Frame troubling as it simply does not fit very well with developed ('expert') understanding of format (or medium) as my own library users understand and interpret it. I found Lane Wilkinson's discussion of the equivalent draft Framework frame on format very useful and, I think, applicable here:

      this frame isn’t about format at all. From the standpoint of information use, the differences between a .jpg image and a .png image, or a print book and an ebook, are an engineering issue, not an information literacy issue. I think the real focus of this frame is media, not format, and a more intuitive way to state the concept might simply be "Medium Matters."

      from: https://senseandreference.wordpress.com/2014/07/25/is-format-a-process/

      I'm trying to read the article as-is rather than jumping into the finished Frames too much, but this is one case where I think the additional shaping and development of the Framework results in a better concept / Frame than the Delphi study did.

    10. This lack of technical expertise may have influenced the type of threshold concepts that emerged from the study.

      I would agree with this, based on how useful (or not) I have found the concepts / Frames from the study in my work which is systems-focused.

      I have been using these in the context of user experience research of a discovery system. This sits in a broader information seeking landscape which is complex, nuanced, and of which library discovery and "the library" generally is only one piece.

      Given the technical knowledge and expertise of my team, I've found that some concepts / Frames are simply much more useful in understanding infolit aspects of this process. These were: Authority, Research Process, & especially Scholarly Discourse.

    11. The authors also had to make an extra effort to include practicing librarians, as publishing metrics alone could have resulted in a panel composed solely of LIS academics

      I find it particularly interesting the authors note the need to expend extra effort on including practitioners, having discussed 'routine criteria for expertise' above.

      I think in this case there is very definitely a need for an integrative approach: inductively developing theory from practice as well as applying theory to infolit classroom practices.

    12. Though panelists were selected based on routine criteria for expertise (publishing, presenting, and participation in professional organizations)

      "Authority is constructed and contextual". ;-) I do not agree that these criteria necessarily allow reliable judgement about infolit expertise so much as reflecting those who are seen as authorities or 'prominent voices'.

    13. Kiley and Wisker’s work on threshold concepts for doctoral researchers (2009) raises the question of whether information literacy may have threshold concepts that are bounded by a discipline

      I found this a key problem to grapple with further on in the paper where the concepts themselves are listed. Are they bounded by information science as a discipline? Three explicitly are not.

    14. information literacy should not be taught as a linear series of competencies, often limited to search strategy

      Just to digress: I agree with this argument myself and feel I have a strong bias toward this view of things. I highlight it here because I wonder if it is a somewhat taken-for-granted principle that is not always followed through in infolit practice? To me that would militate against the usefulness of a threshold concepts Framework: you'd want both a "concepts" and a "competencies" document.

    15. The profession as a whole may now be on the steep side of the learning curve when it comes to understanding threshold concepts; as Oakleaf (2014) points out, “For many librarians, threshold concepts are unfamiliar constructs, represent a different way of thinking about instruction and assessment, and require a concerted effort to integrate into practice.” It is not surprising that librarians might initially struggle to integrate and apply this new approach: “The idea of a threshold concept is in itself a threshold concept” (Atherton, Hadfield, & Meyers, 2008, p. 4)

      Not so sure about this argument; it seems a little overstated. To me the concept of threshold concepts appears relatively acceptable to libraryland, having first seen it when Land gave a keynote about threshold concepts at the LILAC infolit conference (a very mainstream conference). It would not surprise me to see threshold concepts included in future infolit guidelines / frameworks from, for example, SCONUL or CILIP with relatively little fuss.

    16. The Delphi method is a good fit to validate the threshold concept approach for information literacy instruction and define the threshold concepts for information literacy because threshold concepts are identified by subject experts

      My overall impression of the method is that it seems a reasonable way of getting some threshold concepts from subject matter experts. I would question if these concepts are necessarily "the" threshold concepts for infolit given the differences between the Delphi study and the Framework that draws on it.

    17. prominent voices in our field

      I already find this premise questionable.

    18. IDENTIFYING THRESHOLD CONCEPTS FOR INFORMATION LITERACY

      Hey, we're annotating this article collaboratively as part of the #lisjc group!

  11. Jun 2016
    1. We hope that the application of the J.O.I Factor in this article serves merely as a proof of concept

      Has anyone seen further uses of this?

    2. Also, the spectrum lumps open access publishing options, another of our data points, in with Reader Rights as “immediate access to some, but not all, articles (including the ‘hybrid’ model” — “hybrid” meaning the business model where articles can be made open access on a one-by-one basis for a fee. We decided to add a “-” for journals that offer open access publishing for a fee, illustrating the negative connotation that might have for authors.

      Interesting that the spectrum doesn't differentiate very well here.

    3. Toronto University Press

      But is it really that surprising?

    4. Suffice it to say that if the field of Library and Information Studies considers a green open access policy a good deal, there is much work to be done.

      This.

    5. Conclusions

      My final thoughts: it is rather difficult to call upon professionals to essentially sacrifice their professional wellbeing for open access publishing. I believe we need to focus our activism on changing the foundation of what is valued in the field. It needs to be recognized that publishing in open access journals is fundamentally the only appropriate option for librarians, and that it should merit tenure, not the other way around. Some sort of incentive for publishing open access would be a game changer for rallying people to this very important cause.

    6. Open Access Publishing Policies

      Gosh, I sure wish I had learned about this in Library School..........

    7. Journal Selection

      This journal selection process is not exactly reproducible, which is a problem: there is no way to independently verify that proper steps were taken to eliminate selection bias. The criteria used to select these journals should be clearly defined somewhere.

    8. All details were inputted to the spreadsheet and coded for consistency.

      Were any journals excluded for failing to provide these details?

    9. Our journal list includes an extraordinarily broad range of journals including research focused journals and those in subfields of librarianship like archives and technical services. This decision was made so as to gather data from the broadest possible representation of LIS scholarship

      I could really use a few contextual pieces of information: for one, what's the estimate of total LIS journals that you could potentially have drawn from? Two, might you have controlled in some way for the types and subfields represented here?

    10. The journals that we began with came from an internal list compiled as part of a professional development initiative at Florida State University Libraries. A student worker in the Assessment department compiled the original list of 74 journals, and then the co-authors of this piece expanded that list to 111 after consulting the LIS Publications Wiki.

      So, not really a "method" then.

    11. Top LIS journals can be identified and ranked into tiers by compiling journals that are peer-reviewed and highly rated by the experts, have low acceptance rates and high circulation rates, are journals that local faculty publish in, and have strong citation ratings as indicated by an ISI impact factor and a high h-index using Google Scholar data.

      I'm so interested in knowing precisely what constitutes a (relatively) high circulation, (relatively) low acceptance, (relatively) strong citation rate, etc. in LIS work. Do we have a journal that sets a standard?
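
      To make the question concrete, here is a hypothetical sketch of the kind of composite tier score those criteria imply. Every weight, field name, and number below is my own invention, not from the article; the point is that without field-wide norms, the cutoffs for "high" and "low" are arbitrary.

        # Hypothetical composite score built from the quoted criteria;
        # all weights, field names, and example numbers are invented.
        def tier_score(j):
            return (2.0 * j["impact_factor"]      # ISI impact factor
                    + j["h_index"] / 10           # Google Scholar h-index
                    + j["circulation"] / 1000     # circulation rate
                    - j["acceptance_rate"]        # lower = more selective
                    + (1.0 if j["peer_reviewed"] else 0.0))

        journals = [
            {"name": "A", "impact_factor": 1.2, "h_index": 40,
             "circulation": 3000, "acceptance_rate": 0.25, "peer_reviewed": True},
            {"name": "B", "impact_factor": 0.4, "h_index": 15,
             "circulation": 800, "acceptance_rate": 0.70, "peer_reviewed": True},
        ]
        for j in sorted(journals, key=tier_score, reverse=True):
            print(j["name"], round(tier_score(j), 2))  # A 10.15, B 3.4

      Any such score only separates journals relative to each other; it still doesn't tell you what counts as "high circulation" or "low acceptance" in absolute terms, which is my question.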

    12. academic librarians often consider open access journals as a means of sharing their research but hold the same reservations about them as many other disciplines, i.e. concerns about peer review and valuation by administration in terms of promotion and tenure

      An issue I rarely see tackled head-on.

  12. May 2016
    1. Librarian, Heal Thyself: A Scholarly Communication Analysis of LIS Journals

      Hi! We're annotating this article as a group as part of #lisjc!

    1. Themes

      I have several questions about this method: 1) Is it, in fact, valuable to compare multiple participants' views of their organization's values when they are all from one organization? 2) I feel like the variance in participants' values would affect how they saw their organizations, i.e., things Ps feel strongly about would be more noticeable to them if they thought O lacked those interests; high inter-participant correlations in ratings of O-values would be interesting. 3) There would be much less chance of any correlation between Ps of differing Os as to organizational values, except on the general thesis that organizations are conservative and/or ROI-focused, which we could debate all day, I guess... 4) I feel like #1 would only be valuable as compared to #3, where 3 is your control (how all Ps feel about their Os) and 1 is your experimental condition (how these Ps feel about this one O), controlling for 2 (how some Ps feel strongly about certain values and misperceive their Os in corresponding ways).

      ... does that make sense? Maybe I'm misreading but I don't feel like the PQMethod software is described as offering these analyses.
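
      For point 2, this is roughly the check I have in mind, sketched with made-up data; I don't know whether PQMethod exposes exactly this, so numpy stands in purely for illustration.

        import numpy as np

        # rows = participants from one organization, columns = value
        # statements, entries = the Q-sort rank each P gave that
        # statement (all numbers invented)
        ratings = np.array([
            [ 2,  1, -1,  0, -2],
            [ 1,  2,  0, -1, -2],
            [-2, -1,  0,  1,  2],
        ])
        # corrcoef treats each row as a variable, so this yields a
        # P-by-P correlation matrix; high off-diagonal r would suggest
        # a shared view of the org rather than projection of personal values
        print(np.corrcoef(ratings).round(2))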

    2. it would be useful to modify the concourse to remove statements that did not differentiate between perspectives and to add statements that provide more granularity in terms of how values are expressed through actions

      !!!!! I'm SO interested in this. How could you test people on their interest in actions, laid out specifically, and the relation of those actions to their stated values?

    3. practices.

      It would be really interesting to speculate more on how we might typify cataloging quality - I feel like I could list a half-dozen as a non-cataloguer, but I would be interested to see how others see those things. (Completeness in # of fields + relevance of fields; vocabulary control as % of applicable fields; format + readability of long-form fields; typos + factual errors; relevance of subject headings based on available SHs and # of SHs used; ........)
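
      As a toy example of making the first of these concrete, here is a sketch of a completeness score; the expected-field list and record shape are entirely hypothetical, not anything from the article.

        # Toy completeness metric: fraction of expected fields present
        # and non-empty (the field list is hypothetical).
        EXPECTED_FIELDS = {"title", "creator", "date", "subjects", "description"}

        def completeness(record: dict) -> float:
            filled = {f for f in EXPECTED_FIELDS if record.get(f)}
            return len(filled) / len(EXPECTED_FIELDS)

        print(completeness({"title": "X", "creator": "Y", "subjects": ["Z"]}))
        # 0.6: three of the five expected fields are filled

      The other candidates (vocabulary control, readability, subject heading relevance) would each need their own operationalization, which is exactly the speculation I'd like to see.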

    4. It was also part of the Innovation Focus theme in a way that indicates that participants had strong perceptions of the institutional importance of usability, but placed the usability card at different ends of the distribution

      I find this very interesting. Something to follow up on with a larger study perhaps.

    5. eight participants in total

      My natural tendency is to question the validity of the sample size, right off the bat. Is one participant per library going to be able to reflect meaningfully on the values of said library?

    6. For this study, the above literature review was conducted to select the concourse

      See, this is where I wonder whether a different format would be better. For a research project where the lit review is part of the methodology, wouldn't you put methodology first and lit review second? Is it just me?

    7. discovery of an electronic resource leans almost exclusively on metadata

      ++++++ A thousand times yes

    8. Understanding the values individuals bring to making cataloging assessment decisions may help us as a profession to have more meaningful discussions about when to use which assessment tools

      I appreciate the introduction to this methodology and can see where it could be very useful for strategic planning or other scenarios where alignment between personal and institutional values is desirable.

    9. Results

      I think I am still wrapping my head around Q-Sort. From scanning the appendices, I think that each of the statements identified as "core" to the various factors (i.e. online catalog and/or discovery tool usability) has a one-to-one correspondence to a Q-statement that participants were asked to rank. Is that true? Or are there multiple Q-statements that might be reflected in one short-hand descriptor? If so, are these grouped together beforehand or after the software analysis?

    10. our academic libraries

      The information about the libraries and participants below is useful. Am also curious to know how participants were recruited.

    11. The participants are selected because they provide a broad representation of viewpoints within the overall subject population

      This sounds a bit backwards; how does one "select" participants? How do you pre-select participants with a "broad representation of viewpoints"? Sounds laborious.

    12. Appendix 1 and Appendix 2.

      Yay for appendices w/survey instruments!

    13. Methodology

      This section clarifies a lot of my questions about the previous Q-Sort section: might be useful to present this information in one place.

    14. Q Methodology

      This section was enough to encourage me to learn more about Q-Sort, but some of it flew over my head. (Especially the part about factor analysis.) I wonder if narrating a few concrete examples from your study alongside the description of the method would have made it a little easier to process?

    15. For this study, envision a triangle: participants ranked each statement from least to most like their opinion, with the tallest part of the triangle containing the neutral responses, and the shorter edges containing their most positive and negative responses.

      I would have found a diagram very helpful here.
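
      In lieu of a diagram, here is a rough rendering of that triangle; the column heights are my guess at a small forced Q-sort grid, not the study's actual distribution.

        # rank -> number of statements allowed at that rank (guessed shape)
        slots = {-2: 1, -1: 2, 0: 3, 1: 2, 2: 1}
        for height in range(max(slots.values()), 0, -1):
            print(" ".join("[]" if n >= height else "  " for n in slots.values()))
        print(" ".join(f"{r:+d}" for r in slots))  # -2 -1 +0 +1 +2

      The tallest column sits at neutral (0) and the short edges hold the strongest positive and negative responses, per the description above.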

    16. Applying Simon’s organizational management ideas to the realm of cataloging assessment, decisions about what and how to assess can be evaluated by determining whether the desired objectives are achieved

      Minor question about wording: because of the placement of this sentence, I am not sure whether what follows (i.e., the statement about two different kinds of organizational decision making) comes from the Simon article or is the author's assertion. (I'm assuming it's from the Simon article...)

    17. when asking about the actual outcomes of assessment activities, reallocating staff was the most common result, followed by streamlining processes and making collection development decisions.

      Does that mean that, even when the desired outcome of assessment was "improve services," for example, the results of activities were more often expressed in terms of listed changes such as staff reallocation, rather than in "Yes, we improved services! By [this much]!" ??? That's weird.

    18. Q methodology.

      I feel like this intro kind of washed over me - without a practical example or two, I wasn't really sure what was being discussed. I would have liked to have a short "intro" to the methodology here, and then have that expanded later in the methodology section (since that's where you describe your approach, not the theory you're working from).

  13. Apr 2016
    1. Examination

      Hey team, we're tackling this article as part of the #lisjc research methods reading club!

    1. Examination of Cataloging Assessment Values Using the Q Sort Method

      This article is being annotated and critiqued as part of the #LISjc project!

  14. Mar 2016
    1. Source of web chats:

      This was very helpful re: measuring the relative effectiveness of the various chat reference interventions; would love a similar measure for in-person intervention, although I recognize that's much more difficult (impossible?) to capture.

      Does anyone have an idea how that could be done?

    2. Figure 3

      Would be helpful if discussion explicitly referenced the Figures for additional context.

    3. If we succeed we will improve service but reduce our chat counts.

      This is a good indicator that simply "increase reference interactions" is not a meaningful goal in its own right.

    4. the addition of chat reference reversed that by a very modest .96%.

      Addition of chat reference isn't actually one of the interventions that were part of this study, so it seems inappropriate to refer to it as an outcome. This data is better suited for the previous section, which described the introduction of chat reference to the library.

    5. questions per FTE student

      The fact that they normalized reference transaction data per FTE to account for the changing enrollment at the college (described in the paragraph above) over time is a strength.
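
      The normalization as I read it, with invented numbers (the article's actual counts aren't reproduced here):

        # questions per FTE student = total reference questions / FTE enrollment
        transactions, fte = 4200, 15000   # invented figures
        print(transactions / fte)         # 0.28 questions per FTE student

      Dividing by FTE each year keeps the trend comparable even as enrollment shifts.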

    6. These studies, plus librarians’ own observations, showed that students frequently needed the help of a skilled librarian even when they did not ask.

      I would have liked to see a deeper exploration of the literature supporting this statement, as motivation for this project beyond the observation that reference transactions are declining.

    7. Our students are typical graduates of New York City public schools

      "typical graduates of NYC public schools" is under-defined; could lead reader to make problematic assumptions about the library's users.

    8. Discrete data points don't prove trends.

    9. Signage

      I wonder if they consulted existing signage and wayfinding research to improve this in line with best practices. (That's a whole other article, isn't it?)

    10. Improving Reference Service with Evidence

      Hey folks! This article is being annotated as part of #LISjc, a methodological-critique reading group for librar*s. Please tag your public annotations with #lisjc!