48 Matching Annotations
  1. Mar 2025
    1. Another advantage of performance assessments is that multiple, specific criteria for judging success are identified. You should share these criteria with students before the assessment so that the students can use them as they learn.

      Another benefit mentioned in the same breath as this passage is the opportunity to differentiate the end product, allowing students to use the method that best leads to their individual success.

    2. Little or No Organization 1 2 3 4 5 6 7 Clear and Complete Organization

      In the article Tame the Beast: Tips for Designing and Using Rubrics, the author advises against using too many columns. While scales differ slightly, I think a 1-7 range convolutes the assessment and leaves too much room for personal bias in the grading of students. A 1-3 or even 1-5 scale gives a more accurate representation of whether the student succeeded or not.

    3. Reference material. What resources (dictionaries, textbooks, class notes, CD-ROMs) will learners be able to consult while they are completing the assessment task?

      A major problem I have seen in secondary education is that when some teachers assign performance-based assessments, they allow students to use any resources they can find online. In some cases this isn't necessarily bad, as students learn how to conduct research independently. However, not providing students with resources can reduce the quality of the end product.

    4. Authenticity is judged in the nature of the task completed and in the context of the task (e.g., in the options available, constraints, and access to resources). Authentic classroom assessment is excellent for motivating students—it gets them engaged and requires application thinking skills.

      Psychologist William Glasser refers to this concept as "competent work" in his Choice Theory in Schools idea, stating that students are more likely to choose to learn if they deem the work worthy of doing.

    5. The growth portfolio reveals change in student proficiency over time. Selections of student work are collected at different times to show how skills have improved.

      I'd like to note that a growth portfolio is probably the easiest to pull off. Just having students place all of their work in a binder accomplishes this goal.

    6. An obvious advantage of having an electronic portfolio is that it encourages and makes possible the use of multimedia elements. This feature is very motivating for students. It promotes the use of unique materials that reflect students’ individual voices. There is more student ownership, with opportunities to build self-efficacy and pride.

      One highlight of portfolios is that they help build metacognitive strategies that are useful for students across content areas. While these skills need to be explicitly taught for students to succeed, digital portfolios allow modern students to draw on skills they already possess.

    7. Because many students will not be familiar with portfolios, you will need to explain carefully what is involved and what they will be doing.

      To me, this is the biggest drawback of a portfolio. The amount of time that has to be devoted to just instructing students on how to organize the portfolio seems daunting.

    8. Portfolio assessment works only when students understand where they are, where they need to go, and are provided with instruction to support the journey.

      This reminds me of the article we read this week for our digital power up. Teach students where they are going, assess where they are, and together teacher and student can work to close the gap between the two.

    9. And someday I imagine you’ll be able to design your constructed-response items and your own protocols for grading, all completed electronically.

      With the prevalence of AI in today's classrooms, this is a good prediction. It would be very helpful for teachers to be able to grade lengthy constructed-response assessments in the blink of an eye with the help of these tools.

    10. Giving students a choice of questions, however, means that each student may be taking a unique test. Differences in the difficulty of each question are probably unknown. This makes scoring more problematic and your inferences of student knowledge and understanding less valid. It is true that you can’t measure every important target, and giving students a choice does provide them an opportunity to do their best work.

      I adamantly disagree with the assertion that giving students an option on essay questions is bad practice. If I am assigning more than one essay question on an assessment, I am either differentiating or expanding a concept beyond the target of the lesson. Therefore the question shouldn't hit an exact target such as DOK 2 (or 3 in some cases) but should build on concepts through DOK 3 or 4.

    11. A major advantage of using essay questions is that deep understanding, complex thinking, and reasoning skills can be assessed. Essays motivate better study habits and provide students with flexibility in how they wish to respond.

      I hope that when we start to address rubrics, the text explains how to more easily assess deeper reasoning, as it seems very difficult for students to know EXACTLY what is expected here.

    12. With completion items (some actually call them objective because there is typically only one correct answer), students are presented with an incomplete sentence with one or two blanks and write words as their answer(s) in the blanks.

      In other education courses we have addressed EL strategies, more specifically sentence stems. I find it interesting that the text does not use the same terminology, as completion items seem to be one and the same.

    13. Digital media is utilized in the form of graphics, audio and video clips, and presentations.

      Applications such as EdPuzzle and Nearpod have really expanded what can be done with interpretive assessments while utilizing digital media.

    14. (Critical thinking) Peter is deciding which car to buy. He is impressed with the sales representative for the Ford, and he likes the color of the Buick. The Ford is smaller and gets more miles to the gallon. The Buick takes larger tires and has a smaller trunk. More people can ride in the Ford. Which car should Peter purchase if he wants to do everything he can to ensure that his favorite lake does not become polluted?

      I feel that this should be in the "what not to do" section. I understand that the author's point was that the question has nothing to do with pollution, but as a teacher, if I wrote this question it would have been wasted time, since five of the sentences are unnecessary.

    15. This is because a sentence out of context loses meaning, and rote memorization is encouraged. Move beyond recall knowledge to comprehension by changing the wording.

      I understand that we are trying to reach past rote memorization. However, I think that in the right context it makes sense to require the same word-for-word correct answer, for example when we are trying to teach specific, content-heavy vocabulary.

    16. When you write the items, begin with the stem, then the correct response, and finally the distractors. Once you have developed good items (you know about how good an item is only after students answer it and you can analyze their responses)

      I highlighted this section because it illustrates the importance of formative assessments in proactively shaping both future instruction and more effective assessments.

  2. Feb 2025
    1. You need to essentially suspend your role as classroom teacher for a while and assume the role of test administrator.

      This is easier said than done. It is difficult to shut off the nurturing nature of the classroom, and it is also very difficult not to answer a student's question about what a test item means.

    2. Try to Create a Discussion with Parents, Rather Than Making a Presentation to Them. Ask questions to involve parents in the conference and to enhance your ability to determine whether they in fact understand the meaning of the scores.

      This should really be listed as #1 or #2, as collaboration with parents is crucial to a student's success. Ensuring that parents know what is going on also helps the teacher know what they need from the students.

    3. One approach to sound interpretation is to set in your mind a group of “minimally competent” students in reference to the target, then see how many items these students answer correctly. If the mean number of correct answers is, say, 7 of 10, then your “standard” becomes 70% of the items.

      It is interesting to focus on "minimally competent" students (the phrase sounds insulting), though setting a baseline helps gauge actual improvement rather than simply who passed or failed.
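      To make the arithmetic concrete, here is a minimal sketch of the standard-setting idea quoted above: average the number of items the "minimally competent" group answers correctly and turn that mean into a cut score. This is not from the text, and the student scores are hypothetical.

      ```python
      # Hedged sketch of the standard-setting calculation described in the passage.
      # The scores below are made up for illustration.

      def cut_score(correct_counts, total_items):
          """Return the passing standard as a percentage of total items."""
          mean_correct = sum(correct_counts) / len(correct_counts)
          return 100 * mean_correct / total_items

      # Five hypothetical "minimally competent" students answer 7, 6, 8, 7, 7 of 10 items correctly.
      print(cut_score([7, 6, 8, 7, 7], 10))  # -> 70.0, so the "standard" becomes 70%
      ```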

    4. Most readiness tests are used in early elementary grades and for reading.

      At the secondary level, readiness tests could be particularly effective for tasks that require knowledge across content areas, for example algebra in physics or writing skills in history.

    5. In a test containing selected-response items students may ask about whether there is a penalty for guessing. In classroom tests it is very rare to find a correction for guessing. The best practice is to be very clear to students that they should try to answer each item (e.g., “Your score is the total number of correct answers, so answer every item”).

      This relates to the idea of grading and reviewing assessments with students as soon as possible. If you wait too long after the assessment, the answers that were guessed will remain just that, guesses. However, if you grade the assessment with students immediately, it becomes a learning opportunity as students see whether their answers were correct.

    6. Provide students with the test blueprint or outline of the assessment.

      I think this is the best strategy to mitigate test anxiety. If the teacher provides clear objectives and isn't vague about expectations, students should succeed, provided that they try.

    7. For example, in a test of simple knowledge in a content area, secondary students can generally answer as many as 2 to 4 objective items per minute. For more difficult items, one per minute is a general rule of thumb. In math, students may need as long as 2 or 3 minutes for each constructed-response item

      These are great rules of thumb for me to keep in mind going forward. Time management is not always my strong suit.
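      As a rough back-of-the-envelope check, the quoted rules of thumb can be turned into a quick time estimate. This sketch is not from the text, and the item counts are hypothetical; it simply uses the conservative end of each range.

      ```python
      # Hedged sketch of a test-length estimate using the quoted rules of thumb:
      # ~2-4 simple objective items per minute, ~1 minute per harder item,
      # and 2-3 minutes per constructed-response item (conservative ends used here).

      def estimated_minutes(simple_items, hard_items, constructed_items):
          """Conservative estimate of total testing time in minutes."""
          return simple_items / 2 + hard_items * 1 + constructed_items * 3

      # Hypothetical test: 20 simple items, 10 harder items, 4 constructed-response items.
      print(estimated_minutes(20, 10, 4))  # -> 32.0 minutes
      ```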

    8. Missing one or two hard questions isn’t as revealing as missing a couple of easy ones.

      I see this a lot as a student. I can prepare and prepare for an exam, get all of the "easy" questions correct, but come up short on the final two hard questions. Professors sometimes like to weight those more heavily and the easier questions less.

    9. On the other hand, testing companies have developed new types of items and reporting systems that more effectively than in the past address formative assessment (Bennett, 2015).

      This is an interesting concept to me. I think that if we as teachers have an easier way to track data, less time can be spent on coming up with assessments and more time on adjusting instruction. I'd love to learn more about these "new types of items and reporting systems".

    10. The key to using technology for formative assessment is to know what you want to formatively assess and why you want to use technology as a platform.

      In Pedagogy/Microteaching, I felt the biggest shortcoming was that we only put ourselves in the shoes of the teacher for 10 minutes. Sure, we were teaching to a standard, but most of us had no idea what the objective of the lesson was, or the intended outcome we wanted to assess.

    11. Helpful feedback is best provided when quizzes are graded in class, right after they are completed. With this procedure students learn right away what they know and what they missed. With just a few questions on the quiz, it’s easy for them to relate their performance to how they learned the material, and to see what they do not know or misunderstand.

      I annotated this section because it is something I had seen my CT use in my JI. Specifically, she used it for a pretest that was for the district-wide assessment. Giving that feedback immediately was crucial for them to correct any misconceptions and be able to succeed the next class period.

    12. Testing students what they know about something they will learn may be intimidating and create anxiety about the class (on the other hand, a pretest can communicate to students that the teacher is serious about learning).

      As a college student, I enjoy a pretest as it allows me to understand the full objectives of the course. However, when we are discussing a pretest for a high school physical science class, I do not see it going very well; I see it causing anxiety from the start.

    13. At the elementary level, where teachers are primarily responsible for one class, it is much easier to give immediate feedback, to scaffold, to check student understanding of feedback, and to use elaborative feedback compared to the secondary level.

      Working in a Secondary SPED department, I have seen this firsthand. Many of the students have developed learned helplessness from years of low expectations or have given up because of high expectations.

    14. On-the-fly formative assessment can occur at any time during the school day as a result of teacher–student and student–student interaction. It can involve individual students, small groups, or the entire class. Evidence is gathered constantly by the teacher during instruction and as students learn. The evidence can be verbal or nonverbal, and it has a spontaneous, “at-the-moment,” or “real-time” character.

      In Pedagogy, we discussed the "Cadillac Model" of assessment, which is backed by quantitative or qualitative data. Yet here we are talking about how useful the "Ford Pinto Model" can be; it still gets you from A to B and has a place in the classroom.

    15. It is clear that embedded formative assessment has the greatest documented positive benefit for increasing student achievement (Wiliam & Leahy, 2015), and large-scale assessments have the least impact (often none). There are many claims from testing companies that large-scale assessments can be used formatively, and sometimes that is true, but be wary. What matters most is what you can control in your own classroom.

      A really interesting notion. As negative as people are toward this profession, it is refreshing to be coming into teaching at a time when the emphasis is placed on teacher assessment rather than on standardized testing, as it was at the height of NCLB.

    16. As students relate new ideas and knowledge to existing understandings, formative assessment helps them see the connections and clarify meaning in small, successive steps.

      I highlighted this portion because I feel it emphasizes the importance of culturally responsive teaching and real-world connections. Returning to Vygotsky's Zone of Proximal Development, if a concept is too far outside a student's ZPD we are only setting them up for failure. Formative assessment allows us to define this zone.

  3. Jan 2025
    1. In a standards-based grading system, the student’s last, best performance most accurately reflects learning and should be heavily weighted (Nagel, 2015). This is essentially a matter of maintaining appropriate validity so that your inferences about academic performance are reasonable. If cooperativeness and participation are important targets, consider separate grades for each.

      The most difficult part of weighting the final summative is that if it is weighted too heavily and the student fails it, none of their other work may be able to save their grade. However, if all assessments are weighted equally, the final summative has less merit.
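      A small, hypothetical illustration of that tradeoff (the scores and weights are made up, not drawn from the text): the same scores produce very different final grades depending on how heavily the summative counts.

      ```python
      # Hedged sketch of the weighting tradeoff: a weighted average of coursework
      # and a final summative. All numbers are hypothetical.

      def final_grade(coursework_avg, summative_score, summative_weight):
          """Weighted average of coursework and the final summative (weight between 0 and 1)."""
          return coursework_avg * (1 - summative_weight) + summative_score * summative_weight

      coursework, summative = 90, 55  # strong coursework, failed final summative
      print(final_grade(coursework, summative, 0.50))  # -> 72.5: the failed summative sinks the grade
      print(final_grade(coursework, summative, 0.20))  # -> 83.0: the summative carries less weight
      ```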

    2. Rather than grading only on achievement, which favors students who bring higher aptitude for learning, grades reflect how well each student has achieved in relation to his or her potential. High-aptitude students will be challenged, and low-aptitude students will have realistic opportunities for good grades.

      In Classroom Management, we studied William Glasser and the use of his Choice Theory in the classroom. Part of his theory is that students are only motivated when they choose to be. This is why he recommends what he calls "competent work": work that is based on real-life experiences and on projects that show understanding beyond rote memorization of facts.

    3. As we have pointed out, items can differ tremendously in level of difficulty, so when students obtain a high percentage of correct answers, mastery may or may not be demonstrated, depending on the difficulty level of the assessment (Marzano & Heflebower, 2011). Thus, it is probably incorrect to conclude that when a student obtains a 100, he or she knows 100% of the learning targets, or that a score of 50 corresponds to mastery of half of the targets.

      As I mentioned in the discussion board for this week, I enjoyed the article The Case Against Zeroes in Grading. I had honestly never thought about the unfairness of the A-F system.

    4. For example, instead of saying “you did your best” on a sincere effort but subpar performance, say “I see you worked hard on this” and follow up with a question or strategy that will focus their effort on making improvements.

      I am an Assistant Archery Coach, and one aspect of our training through the National Archery in Schools Program is that we compliment what the archer is doing correctly before we ever point out what they are doing wrong.

    5. Typically, multiple-choice tests obtain better estimates of reliability/precision than do constructed-response, performance, or portfolio assessments.

      I thought this notion was interesting because traditionally we associate the latter assessments with higher DOK levels and deeper understanding. However, from the perspective of gathering data and producing positive consequences for our future instruction, selected-response items may be more beneficial to students' overall understanding.

    6. Giving students multiple assessments, rather than a single assessment, lessens fear and anxiety. When students are less apprehensive, risk taking, exploration, creativity, and questioning are enhanced.

      As teachers, we must constantly put ourselves in the shoes of the student. Before making instructional decisions, we should always ask ourselves, "How would I feel doing this assessment?", while also taking into consideration that we will not always share the same perspective as everyone else.

    7. Because most classroom assessments are inexpensive, especially with access to online examples and test banks, cost is relatively unimportant (except perhaps for the district as a whole).

      As a science major, I know that lab activities are critical. Although online science resources have come a long way, I would think that science will still use more resources than some other content areas.

    8. Are the targets consistent with your overall goals and the goals of the school and district?

      Going through a teaching program, we are taught to think about our procedures, rules, lessons, learning objectives, and entire teaching philosophies. However, our first line of support is in fact the goals, procedures, and philosophies of our respective state, district, and building. Our perspectives should add to these supports, not ignore or contradict them.

    9. You will need to develop targets that are not too easy or too hard. It is also important to assess the readiness of your students to establish these challenging targets and standards.

      One of my favorite theories in education is Vygotsky's Zone of Proximal Development. The approach is essentially to meet students where they are academically or socially and to scaffold from there. If you teach outside of a student's ZPD, they are unable to succeed.

    10. Constructivists believe that new knowledge is acquired through a process of seeing how something relates, makes sense, and can be used in reasoning.

      I highlighted this section because I think that it points out a critical issue in assessments today. Students are taught to retain information for short periods of time to be able to pass an upcoming summative assessment, rather than being taught how to apply those concepts to other areas of content, assessments, and situations that may present themselves in life.

    11. Both Bloom’s revised taxonomy and Marzano & Kendall’s New Taxonomy have had mixed success. While the notion that nouns and verbs can be isolated into knowledge and thinking skills is very helpful for deconstructing standards, most approaches to standards use a more simplified approach (e.g., just knowledge, understanding, reasoning, skills).

      Sometimes I feel that as educators we overcomplicate things. However, looking back at the two charts on this page comparing Bloom's and Marzano & Kendall's taxonomies, the more complicated of the two, Bloom's, makes more sense to me.

    12. There is an emphasis in the science standards on thinking skills, not just knowledge, and meaningful connections across four science domains (physical science, earth and space science, life science, and engineering design).

      I am grateful to the author for including NGSS in the text, as I am very familiar with the resource already. I think it can be a very valuable resource to all Science educators. One thing that I enjoy about NGSS is the inclusion of cross-cutting concepts on all of their standards, highlighting how all of the realms of science relate and the skills that students are learning throughout.

    13. Teachers should be able to construct scoring schemes that quantify student performance on classroom assessments into useful information for decisions about students, classrooms, schools, and districts. These decisions should lead to improved student learning, growth, or development.

      Assessment should be used as a tool to improve not only student understanding but also our understanding of the students. When we have quantitative data, we can make informed decisions going forward that result in quantitative gains.

    14. We found that two major sources of influence affect assessment and grading practices decision making. One source lies within the teacher and consists of beliefs and values about teaching, and learning more generally, that provide a basis for explaining how and why specific assessment and grading practices are used. A second source lies external to the teacher, consisting of pressures that need to be considered, such as high-stakes testing.

      I chose to annotate this section because, looking in from the outside, people often do not understand how much an individual teacher's philosophy of learning matters when we talk about a student's chance of success in a classroom.

    15. The process of differentiation can be very formal and quantitative, such as using a thermometer to measure temperature, or can consist of less formal processes, such as observation (“It’s very hot today!”)

      In my studies I have only ever seen or understood differentiation as using different mediums for students to show the same understanding as others. It's refreshing to see an example that shows that we can differentiate that understanding as well.

    16. Goal Setting Students perform best when they know the goal, see examples or exemplars, and know how their performance compares with established standards of mastery.

      My own personal philosophy of the effective teacher is one who ensures students know what is needed behaviorally and academically to succeed in their classroom. The Cognitive Theory of Goal Setting is naturally the theory I gravitate toward because of my own beliefs.