172 Matching Annotations
  4. Feb 2020
    1. The criteria you put in your assessment will guide students toward the content and skills you want them to learn. You might even want to get their input before you finalize the project’s assessment. Be sure that your assessment gives students lots of leeway in how they investigate and share their projects. Every project should turn out differently. As Chris Lehmann says, “If you assign a project and you get back 30 of the exact same thing, that’s not a project, that’s a recipe.”

      assessments and project based learning

  5. Jan 2020
    1. Current Assessment of SDL

      Assessment stage of paper

  6. Nov 2019
    1. Validate Candidate’s Skills with Coding Assessment Test

      Test coding skills of candidates using Wild Noodle's online coding assessment tests. Their automated online coding challenges assess developers' programming skills, and they also have an extensive pool of role-based programming and objective questions. Contact now!

    1. Author Mary Burns discusses the key elements of computer adaptive testing (CAT). CAT is defined as assessments that use algorithms to progressively adjust test difficulty based upon learners' correct or incorrect responses. The benefits of CAT include more immediate data and often greater reliability. Types of test items are also covered to illustrate how the test can meet various levels of cognition and measure expertise. An issue related to CAT is the intensive time needed to develop multiple test items at multiple levels of cognition. Rating: 8/10
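
      A minimal sketch of the adjustment loop described above: difficulty moves up after a correct response and down after an incorrect one. The function name, level bounds, and fixed-step rule are all illustrative assumptions; real CAT engines select items using item response theory rather than a fixed step.

```python
def next_item_level(current_level, answered_correctly, min_level=1, max_level=10):
    """Toy CAT step: harder item after a correct answer, easier after a miss."""
    step = 1 if answered_correctly else -1
    # Clamp to the range of difficulties available in the item bank.
    return max(min_level, min(max_level, current_level + step))
```

      For example, a learner at level 5 who answers correctly would next see a level-6 item, while one who misses would drop to level 4.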

    1. The Office of Educational Technology website featured in Section 4: Measuring of Learning discusses pedagogical implications of assessment shifts and how technology is part of enhancing assessment. The site emphasizes using assessment as a real-time measurement of learning for real-life skills. The infographic that displays traditional models of assessment next to next-generation assessments is a quick reference for shifting ideology and practices. Ultimately, increased personalized learning serves the learner more efficiently and promotes increased achievement and mastery. Rating: 8/10

  7. Oct 2019
    1. Research advances in learning and teaching over the past few decades provide a way to meet these challenges. These advances have established expertise in university teaching: a set of skills and knowledge that consistently achieve better learning outcomes than the traditional and still predominant teaching methods practiced by most faculty. Widespread recognition and adoption of these expert practices will profoundly change the nature of university teaching and have a large beneficial impact on higher education.

      Carl Wieman paper on evidence based learning implementation in the disciplines

  8. Sep 2019
    1. Explaining requires you to organize and elaborate on the ideas that you are trying to convey to your audience. Depending on your audience, you will have to provide more details and, thereby, engage in deeper processing of the information. On the other hand, if you are asked to simply retrieve ideas from a text, you may be less likely to engage in elaborate structuring or re-organization of the material – at least not to the same extent as preparing an explanation to someone else.

      benefits of activities requiring students to explain a concept to peers vs. memory recall activities like quizzes

    1. The "doer effect" is an association between the number of online interactive practice activities students' do and their learning outcomes that is not only statistically reliable but has much higher positive effects than other learning resources, such as watching videos or reading text.

      "doer effect" - interactive practice activities have greater learning benefits that watching videos or reading

    1. Although unguided or minimally guided instructional approaches are very popular and intuitively appealing, the point is made that these approaches ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process.

      This paper provides a counterargument to minimally guided instruction approaches.

  9. Aug 2019
    1. a syllabus can’t mandate a particular emotional experience

      And yet, machines are being invented and put into use that attempt to measure student emotion and attention to inform assessment...

    2. Could you imagine grading students on anger as an “outcome”?

      I'm imagining a course where the only way to earn an "A" would be to become totally outraged by its end.

    3. “It is likely that the authoritarian syllabus is just the visible symptom of a deeper underlying problem, the breakdown of trust in the student-teacher relationship.”

      Yes: aligned with the mentality that students are cheating on exams, plagiarizing works, and inventing excuses for late work. In all these cases, there are things teachers can do to restructure the educational experience and stop casting blame for the inadequacies of machine graded assessments, disposable assignments, and even date-based grading.

    1. Doing this too often, however, steals valuable time away from the teacher that may reduce the quality of instruction for all the other students.

      The teacher's mental and physical health is important, yes. But arguing that allowing retakes is a detriment to your own health, even though it is a benefit to the student, is a hard sell.

      Case in point: my wife's family weathered two deaths in the same week. I left school on bereavement for one and had to extend my absence in the wake of the second death. We were in the middle of budgeting and my requests were not finalized.

      My principal could certainly have disallowed an extension because I wasn't "proactive" and didn't have it done before the due date. Instead, I was given grace and I was able to submit a better report and request because of it.

      Grace goes a long way.

    2. However, every minute writing and grading retakes or grading long-overdue work is a minute that I’m not planning effective and creative instruction, grading current work so students receive timely feedback, or communicating with parents.

      This may mean you're grading too much.

      Assessment should be focused and pointed. Narrative feedback is helpful. Allowing retakes gives you an opportunity to focus only on what needs improvement. It is not a wholesale redo of the assignment. A retake should have the student focus on the specific gap in understanding which prevents them from achieving proficiency.

    3. Under retake policies, parents at my school have expressed concerns about how overwhelmed their children become due to being caught in a vicious cycle of retakes.

      This is not caused by a retake policy itself. It is caused by either A) not having a robust formative assessment strategy to catch struggling students or, B) not implementing reasonable checkpoints which help students learn to self-regulate.

    4. Retakes and soft deadlines allow students to procrastinate

      It is a major assumption that hard deadlines and tests prevent students from procrastinating. What disallowing retakes ends up doing is locking students into a cycle where they are actively discouraged from learning rather than taking the time to learn something.

    5. They spend hours a day on video games and social media

      Or:

      • working
      • taking care of siblings
      • taking care of other relatives
      • trying to find something to eat
      • ...
    6. In math classes, where concepts constantly build on one another, traditional policies hold students to schedules that keep them learning with the class.

      Assuming all students learn content at the same rate is dangerous. There may be fundamental math skills that take one student longer to learn than another. That may mean multiple attempts at demonstrating those skills.

      If I were to disallow retakes, even the intrinsically motivated student who struggles with fundamentals loses out on mastering the concept. I lose out on knowing that student is struggling. Retakes allow me to more fully assess a student's progress toward mastery, incrementally working on correcting errors and gaps in understanding.

      By promoting pacing over learning, we are doing our students a disservice.

    7. One of his research studies showed that college students who were held to firm deadlines performed better, in general, than students who chose their own deadlines or turned in all work at the end of the semester.

      This argument is errantly conflating two separate ideas: retakes and deadlines.

      The act of allowing a retake does not preclude the use of deadlines. Setting deadlines for initial work is important because that way, I can check student work before the major assessment. There are also deadlines for completing retakes…the end of the semester being the hard stop.

      I'm also building in structure for retakes. The fact that I allow a retake does not mean it happens when and where a student wants. They work within my defined schedule, which includes deadlines.

      Arguing against retakes because deadlines disappear assumes that they are contingent upon one another when in reality, they work together to help students develop agency and time management skills.

      This makes sense at a high level, but in reality, none of us - in school or out of school - lives in a deadline-free world. I have deadlines to meet at work, and if my product is not quality at the deadline, I have to do it again.

      The difference is that we cannot fire students from school.

    8. In my experience, however, the more lenient we are in these matters, the less students learn. The traditional policies—giving each assessment only once, penalizing late work, and giving zeros in some situations—help most students maximize their learning and improve their time management skills, preparing them for success in college and career.

      This statement comes with zero qualification for "in my experience." Is there research or empirical evidence that supports this statement? Are there other interventions or policies that could be used in place of allowing retakes?

      Setting up the entire post on the premise of "in my experience" makes it a hard sell to start.

  10. Jul 2019
    1. A variety of educational taxonomies have been adopted by districts and states nationwide. Examples of widely used taxonomies include but are not limited to Bloom’s Taxonomy of Educational Objectives [23]; Bloom’s revised Taxonomy for Learning, Teaching, and Assessing [24]; Marzano and Kendall’s New Taxonomy of Educational Objectives [25]; and Webb’s Depth of Knowledge Levels [26]. Using educational taxonomies to facilitate the development and guide the organization of learning objectives can improve content appropriateness, assessment effectiveness, and efficiency in learning and teaching.

      Bloom's Taxonomy

    2. How you track student progress can make a difference in their learning and your teaching.

      I will have to develop my own assessment strategies - formative and summative.

    1. Performance assessment does not have to be a time-consuming ordeal; it is a great way to assess our students' skills. It is essential to create a rubric that is simple, quick, and objective. This article discusses the process of creating a rubric as well as showing a rubric used by the author in her general music classroom for several years. Differences between assessment and evaluation are also mentioned.

      How to create a rubric for performance assessment?

    1. It is interesting to notice that this article from a decade ago doesn't even mention any online assessment. So much has changed since then! I'm glad to see that from measuring attendance and attitude we are moving toward a more professionally acceptable system where we can teach, assign and assess measurable knowledge in music ed, more specifically in choral programs.

    2. 11% for music knowledge

      Only 11% for knowledge! That is surprising, and it could be more if we measured not "talent" but the knowledge that is teachable and factual. Again, this is old data (1991), so today the numbers might look different.

    3. attendance and attitude were the most common grading criteria employed by instrumental and choral music teachers.

      Yes. I noticed that in schools.

    4. Some music teachers believe the creative or interpretive nature of music precludes assessment but then readily employ subjective methods of assessment, many of which "are determined haphazardly, ritualistically, and/or with disregard for available objective information" (Boyle & Radocy, 1987, p. 2).

      This is old data (1987) but still true on some levels. By now, what I see in practice is that music educators have figured out what it is that's measurable and what is not, and in the school where I was student teaching, the choral program is treated as an academic subject and is graded.

    1. In prior work, we found that different student choices of learning methods (e.g., doing interactive activities, reading online text, or watching online lecture videos) are associated with learning outcomes [7]. More usage in general is associated with higher outcomes, but especially for doing activities, which has an estimated 6x greater impact on total quiz and final exam performance than reading or video watching.
  11. Apr 2019
    1. Trauma-Informed Screening and Assessment Tools

      Difference between trauma screening and trauma assessment tools: Screening tools are brief, used universally, and designed to detect exposure to traumatic events and symptoms. They help determine whether the child needs a professional, clinical, trauma-focused assessment. Functional assessments are more comprehensive and capture a range of specific information about the child’s symptoms, functioning, and support systems. A trauma assessment can determine strengths as well as clinical symptoms of traumatic stress. It assesses the severity of symptoms, and can determine the impact of trauma (how thoughts, emotions, and behaviors have been changed by trauma) on the child’s functioning in the various well-being domains.

  12. Mar 2019
    1. This article notes that research on technology-rich classrooms is lacking. The authors created a scale for evaluating classroom environments, tested it, and determined it was a good starting framework for how to improve classroom environments. This scale could be useful later in class when evaluating technologies. Rating 9/10 for helpful assessment techniques

    1. Vision: Preparing Learning Communities to succeed in College and Careers in a global society through technology. Vision and Goals

      This proposal is a draft technology plan for Arizona adult education. It outlines the plan's goals and how Arizona can address them moving forward, describes future trends in technology, and acknowledges challenges that might come up later down the line. It also reviews teaching standards and instruction, as well as operations for the future. Rating 6/10 for being a draft, but with good ideas!

    1. This page is a free resource to download a book about how people learn. The selected chapter provides recommendations for assessments and feedback in learning environments in general, which also applies to adult learning. In addition to these examples, the chapter provides a section on theory and framework to better understand the overall topics. Rating: 10/10 Great free, open-source resource with reputable information about learning.

    1. personalized learning: how does it differ from traditional learning Some of the text here is gray and small, so it is not easy to read. Nonetheless, it is an infographic about personalized learning from which a fair amount of information can be learned in a short time. rating 4/5

    1. classroom assessment techniques These are quick ways to complete formative assessment during a class session. The results can help the instructor determine what he or she should address, and can unearth learner misconceptions. These were designed for college classrooms but can be used in other adult learning contexts. rating 4/5

    1. teachthought This particular page is entitled '20 simple assessment strategies you can use every day', but the reason it is listed here is that the page itself serves as a jumping-off point for reviewing or learning about many educational theories and principles. The site may have been designed for K-12 teachers - I am not sure - but it is quite usable for those who teach adults. This is a place to come if you are interested in browsing, not if you have a specific thing you need at the moment. Rating 3/5

    1. This 69-page PDF, called "Is this a trick question?", offers good advice on writing a variety of types of test questions. Despite its length, it is easy to browse if you are interested in writing a specific type of question. As the length may suggest, this resource is more comprehensive than others. Rating 5/5

    1. NOT READY TO LET GO: A STUDY OF RESISTANCE TO GRADING CONTRACTS

      This article was included in the curriculum for the Open Pedagogy track led by Dave Cormier in the 2019 Digital Pedagogy Lab Toronto.

      In a 19 March 2019 Virtually Connecting session, Dave explained that he uses the contract in this article as a negative example — not to be adopted uncritically, but as a starting place to think about how one might generate a better assessment model.

  13. Nov 2018
    1. Students are entitled to a more educative and user-friendly assessment system. They deserve far more feedback -- and opportunities to use it -- as part of the local assessment process.

      Evaluate and reflect on your assessment systems. Do you have a system in the sense that it is longitudinal and recursive? How do you need to adjust your practices to ensure students get this feedback on their learning?

    2. Our job is to teach to the standards, not the test.

      What would it take to go an entire year thinking this way? What habits in planning do you need to address? How would your assignments change?

    3. Inside the Black Box: Raising Standards Through Classroom Assessment
    4. The point of assessment in education is to advance learning, not to merely audit absorption of facts.

      How do we need to change language among teachers and students to change perception? What kinds of practical habits can we adopt?

    1. Engaging Adult Learners with Technology

      The PDF provides information from The Twin Cities Campus Library with instruction on incorporating technology into teaching adult students.

      It includes a review of instructional technology, assessment for learning, a framework for teaching adult learners, and a workshop. This 14-page PDF provides the essentials necessary for understanding the basic learning needs of adult learners.

      RATING: 3/5 (based on a 1-to-5 scale, 1 = lowest, 5 = highest, in terms of content, veracity, ease of use, etc.)

  14. Jul 2018
    1. an institutional rather than a user focus

      This is key: Desires to use portfolios in institutional/program assessment practices are part of what has made them cumbersome. Portfolio use in programs that emphasized their value for students and learning has always provided the best examples in my opinion (e.g., Portland State, LaGuardia CC, Clemson), even if they also use them in institutional/program assessment.

    2. for many students owning their own domain and blog remains a better route to establishing a lifelong digital identity

      DoOO is definitely a great goal, especially if it is viewed in part as a portfolio activity, so people use their domains to build up a lifelong portfolio. What seems key is having the right supports in place to help people and institutions reach such goals not only technically, but most importantly, as a set of practices integrated into their personal and institutional workflows.

    1. Can explain concepts, principles, and processes by putting them in their own words, teaching them to others, justifying their answers, and showing their reasoning.
      • Can interpret by making sense of data, text, and experience through images, analogies, stories, and models.
      • Can apply by effectively using and adapting what they know in new and complex contexts.
      • Demonstrate perspective by seeing the big picture and recognizing different points of view.
      • Display empathy by perceiving sensitively and walking in someone else’s shoes.
      • Have self-knowledge by showing meta-cognitive awareness, using productive habits of mind, and reflecting on the meaning of the learning and experience.

      Awesome examples! Kind of reminds me of Bloom's taxonomy concept.

    1. "The idea is bananas, as far as I'm concerned," says Kelly Henderson, an English teacher at Newton South High School just outside Boston. "An art form, a form of expression being evaluated by an algorithm is patently ridiculous."
  16. Jan 2018
    1. There are no audits matching your search

      There are no audits matching your search for Dispensary There are no audits matching your search for Cannabis There are no audits matching your search for Marijuana There are no audits matching your search for nutraceutical

  17. Nov 2017
    1. The aim is to demonstrate the distance travelled on their journey in the form of tangible, trackable learning outcomes and applications.
    1. At the very least, the tool should allow for robust formative assessment, and should be capable of giving timely, helpful feedback to learners.

      The “at the very least” part makes it sound as though this were the easy part.

  18. Oct 2017
    1. The distinction between assessment and surveillance seems really blurry to me.

      on the fine line between assessment and surveillance

    1. key skills they then can apply to other situations beyond this specific course or assessment

      Collaborative annotation as a way to assess skills rather than content mastery. Or in addition to.

    1. By giving student data to the students themselves, and encouraging active reflection on the relationship between behavior and outcomes, colleges and universities can encourage students to take active responsibility for their education in a way that not only affects their chances of academic success, but also cultivates the kind of mindset that will increase their chances of success in life and career after graduation.
  19. Sep 2017
    1. University-wide 33–39% of faculty said that fewer than half of their undergraduates meet their expectations

      This could mean that students are lacking in info lit skills, or that a minority of faculty have unrealistic expectations

  20. Jun 2017
    1. This is a draft - so please annotate and make suggestions or express concerns as you see fit.

      Please be courteous and respectful.

  21. May 2017
    1. TPS Reflective Exercises

      TPS as metacognition - worth trying out. Would have to budget time for it. Could we combine it with something to capture data? connect to qualtrics or google forms

    1. Summary: I really appreciate this post for many reasons. The title, for one, is great and offers a twist on many of the other Fake News spotting articles I have seen; it is more empowering, and I want to empower my students when I teach them about Fake News. The summary for this is the same as the other two, educational about Fake News by a reliable source, so I will leave it at that. Assessment: I like how this article has ten questions with mini-questions underneath. It highlights important words in red, and by having a red flag to symbolize Fake News, it can help the reader put the two together. This source is almost like a mix of the two other ones I have; it is a good mediator. Reflection: This source again is very useful for me. It gave me more ideas about how I would want to teach my students about Fake News by having the little red flags and tips at the bottom of the page. It goes into more detail on the surface of the article, and I like that. It has shown me even more what to look out for when trying to spot Fake News. All three of my sources together can make me a powerhouse Fake News detector! Which is great, because that is what I want my students to be too.

    2. Summary: This text was originally a picture provided by Facebook to help its users spot Fake News. I really like it because it appeals to a wide range of people. It helps young teenagers understand what Fake News can look like and gives adults a good, basic overview of it. I believe this past election season prompted Facebook to educate its users about Fake News since now, more than ever, people use Facebook to learn about the news and, consequently, express their ideas. Assessment: This is most definitely a useful source. I appreciate how it shows me pictures with each tip it gives. Its language is also very clear and understandable. Everything here is fairly black and white except for the last two tips, which can be harder for people to figure out but are still just as important. This information is reliable; it came from Educators Technology.com and was put on Facebook, so it had to go through all their people as well. This source is a good template for how I would teach Fake News to my students with Disabilities. Reflection: This source was the first one that really showed me the indicators of fake news. It is mostly about what one can see to identify Fake News, but it is super helpful. These obvious characteristics are what I can first teach my students with disabilities. The last two tips will be harder, but are necessary. Students need to know about Satire and how some people just write lies for a living, for the clicks.

    1. Summary: I really like this source because it provides a more in-depth analysis of Fake News stories than my first article does. This source, just like the other ones in my annotated bibliography, is educational. (I think going over this again is not imperative.) Assessment: Everything I highlighted in yellow is something I believe might be trickier to teach or talk about with students with Disabilities. This does not mean those points are bad (they are actually great ideas to take in); I just have to think about how one can teach that information. What I highlighted in blue are tips from the author that I really appreciated and believe a lot of people do not think about. I think people who are already somewhat educated about the fact that Fake News is out there would like this source. I see people who actively share Fake News every day, and there is no way this source would get them to see that the news they know is Fake; they would get really angry. That is why educating my students about Fake News is so important! I think this source seems less biased because in "Does the story attack a generic enemy?" it includes both the Liberal and Conservative sides. Being liberal myself, I have been aware of mostly only Conservative Fake News that attacks liberals. Reflection: This source is a great addition for me because it gives me a more detailed lens through which to examine Fake News. It talks about points that rely on one's emotion as well as the actual writing. It gets to points that are really important and go beyond the surface of a Fake News article.

  22. Apr 2017
    1. p. 12 Heintz 1987 is not in bibliography. A search for the quote suggests it is the same as this: Heintz, Lisa. 1992. “Consequences of New Electronic Communications Technologies for Knowledge Transfer in Science: Policy Implications.” In Washington, DC Congress of the United States. Office of Technology Assessment (OTA) Contractor Report.

      I can't find a full text though. Presumably because it is a contractor report, it isn't in either of the OTA archives:

      http://www.princeton.edu/~ota/ http://ota.fas.org/

  23. Feb 2017
    1. this kind of assessment

      Which assessment? Analytics aren't measures. We need to be more forthcoming with faculty about their role in measuring student learning. Such as, http://www.sheeo.org/msc

    1. and reflect on its purposes — individually and as a class. 

      Metacognitive work on what the assessment is and how it works. Nice.

    2. Drier’s final grades are based on students’ written self-assessments, which, in turn, are based on their review of items in their portfolios. 

      Really appreciate modalities like this one where students are asked to show us what they've learned and to interact with the instructor and other students.

    3. Extrinsic motivation, which includes a desire to get better grades, is not only different from, but often undermines, intrinsic motivation, a desire to learn for its own sake (Kohn 1999a). 

      Focusing on grades as a / the measure of achievement also seems to undermine the kind of curiosity that is essential to authentic learning.

  24. Jan 2017
    1. No newspaper, no politician, no parent or school administrator should ever assume that a test score is a valid and meaningful indicator without looking carefully at the questions on that test to ascertain that they’re designed to measure something of importance and that they do so effectively.
  25. Sep 2016
    1. There is certainly value in assessing the quality of learning and teaching, but that doesn’t mean it’s always necessary, or even possible, to measure those things — that is, to turn them into numbers. Indeed, “measurable outcomes may be the least significant results of learning”

      Just because you need to measure learning doesn't mean you can.

  26. Jul 2016
    1. How has learning already been changed by the tracking we already do?

      Alfie Kohn probably has a lot to say about this. Already.

    2. ensure that students feel more thoroughly policed

      That ship has sailed.

    1. students were being made to take them several times a year, including “benchmark” tests to prepare them for the other tests.

      Testing has gone sentient. Resistance is futile. At least in the US.

  27. Jun 2016
    1. A further barrier to the use of formative feedback may be that some students increasingly fail to understand the taken-for-granted academic discourses which underpin assessment criteria and the language of feedback (Hounsell, 1987). According to Entwistle (1984, p. 1), ‘effective communication depends on shared assumptions, definitions, and understanding’. But a study at Lancaster University found that 50% of the third-year students in one academic department were unclear what the assessment criteria were (Baldwin, 1993, cited in Brown & Knight, 1994). As one of our students noted: ‘I haven’t got a clue what I’m assessed on’

      The extent to which students do not understand what they are being assessed on, even in higher years.

    1. Assessment and Classroom Learning

      Black, Paul, and Dylan Wiliam. 1998. “Assessment and Classroom Learning.” Assessment in Education: Principles, Policy & Practice 5 (1): 7–74.

      This is the original work in the area.

      Largely a literature review from 1988 through 1998.

  28. Apr 2016
    1. The process of peer review ensures the inviolability of these codes and, in this way, discourages innovative work. What does not conform to the code is deemed unacceptable.

      Jeff Rice: "Assessment love the good guy."

  29. Mar 2016
    1. I told them you could work 60 hours a week, never take a holiday or weekend off, have internationally regarded publications – lots of them, write textbooks, be a great teacher, and managers will still ask for more. And more. I told them you are measured only by what you have not managed to achieve, not what you have achieved, never mind how valuable or prestigious.

      Unfortunately, this is how academics assess their students, too.

  30. Feb 2016
    1. Zoomerang

      zoomerang survey software option

    2. While the display is appealing and easy to read, it is not customizable

      Polldaddy: survey software selection. List of cons.

    3. Polldaddy for iOS was a great option for this type of assessment. The layout and design are optimized for the iPad’s screen, and survey results are loaded offline. Because an Internet connection is not required to administer a survey, the researcher has more flexibility in location of survey administration.

      Polldaddy did not require wireless internet access, making it a more flexible survey-software option

    4. Polldaddy software chosen for second iteration of survey offered at GSU for assessment.

    5. Google Forms

      Chosen survey-taking software for iPad surveys given to users at GSU.

    6. A two-question survey was designed to pilot the iPad as a survey delivery device in GSU Library. The first survey question was “Why did you come to the library today? Please choose the primary reason.” Ten response options were listed in alphabetical order, and survey takers were allowed to select one option. The tenth response option was “other,” with a text field in which survey takers could enter their own explanations. This question was included because the library is ext