39 Matching Annotations
  1. Apr 2018
    1. one of my jobs was to run the mimeograph machine for their newsletter.

      I had to use the mimeograph to make handouts for the Ohio State Computer Center as my student job. Taping up the stencils was fun...

    2. Most scholars of hypertext of the time pointed to Vannevar Bush's 1945 article "As We May Think" as an important precursor to the Web and as providing important guidance for necessary development. Bush's model of hypertext was much richer than that of the early Web. Among other things, he envisioned people who would put together articles (or "trails") by finding a sequence of useful pages in different sources, annotating those pages, inserting a few pages of their own, and linking it all together. While the Web had "live links", those links were limited to the original authors of the text, so the Web provided essentially none of the features necessary for Bush's more collaborative model.

      Great summary.

  2. Jan 2018
    1. taskcade

      You've given a term to a very common occurrence in my daily life. Somehow it is reassuring to both have a word to describe this phenomenon as well as know that I'm not alone!

    2. I see learning outcomes as merely a tool which is part of the instructional alignment process. Instructional alignment means that the outcomes, instruction and assessment must all match for optimum learning experiences.

      As a tool, learning outcomes can be used for accountability, improvement, or both. (The accountability approach tends to rub people the wrong way, but is sometimes a necessity or at least an inevitability.) As with most tools, much of the end result depends on the skill and intention of the person using them. If measurable learning outcomes are a shovel, I can tend a garden to grow some awesome things or I can use it as a weapon to threaten or harm predators on my farm.

    3. I'm testing the question of whether they can program, but I'm testing it at a lower level.

      If you feel that the aggregate of all the "lower level" tested content should demonstrate achievement of the higher level skill, then I don't see a problem. If the aggregate is insufficient, then measures of student performance on more complex activities might be necessary.

      As educators we decide which are the most important sub-skills students must master and be able to combine appropriately. Measuring both the complex task and key specific tasks is important. Such measures tell me if I can complete the complex task, and if not, where the failure occurred.

      Example of measuring high and low: A high-level medical outcome might be “successfully conduct brain surgery”. Sub-components of this include, but are not limited to, anesthesia and surgical technique. Within each of those we have skills related to drug dosages, patient monitoring, anatomy, controlling bleeding, suturing, etc. Within each of those we get into even more detailed points, ad infinitum.

      We might have a capstone assessment in which a student must successfully conduct brain surgery. If the student was successful, we’re probably safe in assuming that the student had also mastered all relevant sub-tasks. However, if the student failed, it would be good to know why, and we’d want information about sub-task performance. Did I fail because I didn’t have good hand-eye control of the knife, because I didn’t know the anatomy and cut the wrong part, because I administered too much of the drug, or because I failed to notice the patient had stopped breathing? Thus, I need to gather data on both “high” and “low” level learning outcomes.

      From a learning outcome standpoint, I’d probably have “successfully conduct brain surgery” as my high-level course learning outcome. The more specific tasks would then become weekly learning outcomes:

      • Week 1: “choose appropriate drugs for anesthesia”
      • Week 2: “monitor an anesthetized patient”
      • Week 3: “locate anatomical structures in the brain”
      • Week 4: “stop bleeding in surgical wounds”

      …and so forth.

    4. My experience is that measurements are difficult to develop, difficult to administer, and often are less reliable than we would expect.

      Why do you think this happens? What examples do we have besides the writing study?

      To me, measuring outcomes is like designing a research study in which the learning outcome is the research question and the measure is a potential method. Some measurement methods are more difficult to develop, more challenging to administer, and more valid/reliable than others, but hopefully the investigators were weighing the pros and cons when selecting the most appropriate measurement approaches for the study. (To increase chances of success, I’d want to make sure that those who are designing institution-wide inquiries have experience designing publication-quality educational research.)

    5. things that we can measure

      Is something unmeasurable if it is not easy or not quantitative in nature? Not necessarily. Well-designed qualitative measures are perfectly valid (and often time-consuming) ways to inform the questions we have about student learning. Ease and expense certainly factor into the cost-benefit ratio for using particular measures. We may choose not to measure due to difficulty or expense, but that is not the same as being impossible to measure.

    6. [17] Don't you feel sorry for them?

      Perhaps I'm crazy, but I welcome questioning on these topics as it forces me to clarify my own understanding and enhances my ability to help others.

    7. In my experience, each student enters this course with their own goals and expectations and leaves the course with their own, individual learning outcomes. However, there are some common goals I have for this course. They include the following.

      This seems reasonable, but I'd need to review the syllabus as a whole to truly evaluate it.

    8. With small classes, you see students learn and you can often tailor learning outcomes to individuals. If your experience is primarily in larger classes, it seems unlikely that you can see learning in individual students and you are more likely to focus on broad instruments rather than on the individual. Or maybe I'm just biased.

      It is very possible that this could be a contributing factor, but class size is not the whole story. This might be an area for a longer conversation as I’ve spent much of my academic career contemplating my experiences across different kinds of institutions and trying to understand what impacts the different institutional contexts have on student learning.

    9. if we look primarily at the common learning outcomes, we miss the individual outcomes

      Are you saying that students wouldn’t have individual learning outcomes or that we just wouldn’t know what those were because we didn’t collect the information?

      Why would looking at the common learning outcomes cause us to miss the individual outcomes? Also, what, if anything, does knowing the individual outcomes change for an instructor? How would knowing the students' individual outcomes serve decision-making regarding course or curriculum design, etc.? If it is true that looking at the common learning outcomes causes us to miss the individual outcomes, how will we know this is happening? What impact would it have?

    10. I don't expect that every student will take the same thing away from my course, and I don't think they have to. There are things that every student will do. But there are also many things that are individual learning outcomes, often more minor outcomes.

      I see the purpose of official measurable learning outcomes as announcing to everyone the minimum threshold skills, knowledge, and attitudes that every student who passes the course should be able to demonstrate. This helps enrolled students, and instructors teaching the next course in the series, know what to expect from those completing the course. (It does not prevent students and instructors from exceeding those minimums.)

      It is great if students are learning additional things in the course. In fact, I’ve taught classes where I had students develop a “contract” with me to identify what they wanted to learn and what we would do together to ensure that they accomplished their own learning outcomes in addition to the course learning outcomes. In these cases, the measures for the learning outcomes may be unique for each student.

    11. A key goal for many of my courses is that students practice working with other people.

      (By the way, I’ve thought a lot about group work in academia and how that relates to group work in the “real world”. My experiences are part of why I decided to make studying group work a focus of my dissertation. My group experiences throughout college left me with a deep dislike for group work and didn’t improve my collaborative skills. I’m pretty sure most of my group experiences were not intentionally designed to teach collaboration and were imposed for practical reasons (e.g., not enough supplies for individual students). People learn through intentional practice with feedback. The courses included no training or guidance on how to be a good team member and no opportunity to evaluate my partners on their teamwork, so problem behaviors never really changed. My personal learning outcome was that group work was something to be avoided. It was only after taking some graduate-level courses taught by instructional designers that I had more useful experiences and we learned some techniques to improve our skills as team members. I attribute this difference to the way the instructor designed those courses. …So I have a lot of opinions on group work that I’ll spare you now, but we can chat about that topic at some future time if you want.)

    12. Most leave with better collaboration skills. And I could probably measure these, at least indirectly, such as through partner surveys. But I'm not sure that it's worth it. I see the students improve. My class mentors see the students improve. Why add another layer of complexity to the course?

      When designing their program curriculum, faculty determine what learning they want to happen in the various courses within the curriculum for that program. If your course doesn’t have any specific learning goals related to collaboration, you are not required to assess it. If you’ve designed a course that also helps students gain collaborative skills which are not an explicit learning outcome, that’s a bonus. You can certainly continue to use informal observations to monitor whether the instructional design is meeting your expectations. However, if collaboration is something that you want them to learn specifically, then I’d recommend intentionally designing the educational experience to teach collaboration skills and then assess those skills to determine if the instructional design was effective. Without intentionality in the instructional design and assessment, how can one be certain the students learned the skills?

    13. But that also hides a number of smaller (and more measurable) outcomes. For example, hidden within "Can write a non-trivial Scheme program" is a few dozen smaller things that I expect students to do with Scheme: Write a procedure; include parameters in the procedure; call a procedure; sequence operations by nesting; sequence side-effecting operations in the body of a procedure or let; define local variables with let or let*; know the difference between let and let*; understand lists and the core operations on lists, such as cons, car, cdr, and null?; understand the different kinds of numbers (integers, reals, and complex numbers, each of which can be exact or inexact) and the operations on them; use conditionals and predicates; and so on and so forth. And that ignores the higher-level things, such as design an algorithm, design tests for the algorithm, consider edge cases, decompose a problem, and more.

      Yes, pretty much every piece of knowledge and every skill I can think of can be broken down into smaller and smaller components. There is no single correct organization for learning outcomes as long as the system is something the human brain can handle and the amount of content is reasonable for the course timeframe. (e.g., seven plus or minus two items is generally easy to remember.)

      Many disciplines have outcome hierarchies organized from broad (high-level) to specific as we move from institution to department to course to lesson. However, some disciplines have a more flat system and just list hundreds of competencies that students must master. In a hierarchical system, measuring the key sub-parts gives you some information about the whole, but it may still be necessary to measure the whole when sub-parts interact with each other.
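      Since the annotated passage is built around Scheme sub-skills, one of them can be made concrete in a few lines. This is my own sketch, not from the course materials; the procedure name and values are invented for illustration of the `let` vs `let*` distinction the passage mentions:

      ```scheme
      ;; `let*` evaluates its bindings sequentially, so each binding may
      ;; refer to the bindings before it.
      (define (annulus-area outer inner)
        (let* ((pi 3.141592653589793)
               (outer-area (* pi outer outer))   ; uses pi, bound just above
               (inner-area (* pi inner inner)))
          (- outer-area inner-area)))

      ;; Plain `let` evaluates all initializers before any binding is visible,
      ;; so the same code written with `let` would fail: the initializer for
      ;; `outer-area` could not see `pi`.
      ```

      For example, `(annulus-area 2 1)` computes pi × (4 − 1), roughly 9.42. A course outcome like "can write a non-trivial Scheme program" quietly assumes mastery of distinctions at this level of granularity.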

    14. Is this just an issue with me struggling to get the right "size" of the learning outcome?

      Educators face the challenge of choosing the level of specificity that is most meaningful for their context (discipline, level in the curriculum, etc.). For their course-level outcome, many instructors may choose to write just their most high-level skills, then they may write the specific sub-skills as lesson-level outcomes for each class meeting.

    15. It's also clear that the "measurable outcomes" folks wouldn't be happy with me saying "I have eighty-five outcomes for this course."

      It sounds like you’re doing OK. You’re assessing the complex task as a whole, but you’re also assessing key sub-components. If you have evidence that the number of outcomes you use works well for your students, just continue with what you’re doing. Don’t worry if you have more or fewer outcomes than the recommendation.

      However, I’d encourage you to ensure that the 85 outcomes are clearly organized so that students, and others, can understand how they relate to each other. Also, for every learning outcome you write down, I’d ask how you’re assessing it. If you want to include something that you'd like to achieve in the course but don't want to measure, consider labeling the unmeasured ones "goals" and the measured ones "outcomes" or "objectives" to signal the distinction between them. This approach would be consistent with the recommendations from Palmer, Bach & Streifer (2014).

    16. [14]

      I assume this was supposed to be [13].

    17. I will admit that this concern is less significant than the concern that focusing on measurable outcomes means we de-emphasize less measurable, but potentially more important outcomes.

      Can you provide an example of situations in which "less measurable" important outcomes were overshadowed by less significant more measurable outcomes?

    18. I should note that I am happy with the current measurement approach, which involves gathering both papers and assignments from early in students' careers and late in their careers and having a group of faculty from across the College read and compare those papers. But that is an incredibly time-consuming measurement.

      I agree. I think there is great potential in the idea of capturing and evaluating student work early and late in their college careers. I think a redesign could either reduce the time required or at least increase the effectiveness enough to ensure the time spent is worthwhile.

      I think that college-wide studies can be useful to look at certain kinds of questions, such as those about change across time or commonalities across departments. The most value will come when these inquiries are well-designed, publishable-quality research studies. Also, such studies cannot stand alone as the only measure we use of student learning. They should be supplemented with other data about student learning taken from individual courses and other sources. In aggregate, the data from the college-wide study and other smaller measures should give us a reasonable picture of how well we’re accomplishing our institutional outcomes and mission.

      College-wide studies can be very tricky, especially in an environment where not all participants are committed to uniform procedures and instruments. Done well, they can provide some very compelling data. Done poorly, they can result in disillusionment and frustration.

    19. In measuring writing, Grinnell first tried evaluating student papers from their first course and from a senior-year course. But we didn't always gather the assignments. In a few cases, we observed that students went from writing tight five-paragraph essays in their first course to disjoint, twenty-page essays in their last course. But when we talked to faculty about some of those longer essays, we discovered that the goal was often not to write a coherent essay, but to record a series of observations or ideas. A subsequent attempt asked faculty to evaluate a variety of characteristics of student papers on a five-point scale; things like "includes a compelling thesis" and "transitions well between paragraphs" [12]. Given that my experience is that incoming Grinnell students are generally competent writers but have significant room for improvement, I was surprised to see that the initial averages were high. So I asked a senior colleague how they rated their students. They said something like "Most of my Tutorial students are good writers, so they generally earn a four for the different categories." I responded something like, "4 represents 'achieves the level we expect of Grinnell students'. Does that mean that your Tutorial students are achieving the level you expect of students in your senior seminar?" And they responded "No; of course not. But they achieve the level I expect of students in Tutorial." It's hard to measure growth if you don't have a consistent scale.

      It sounds like there were flaws in the design or implementation of the study which may have resulted in interrater reliability issues and data that was less useful than hoped. That is a problem, but it doesn't necessarily mean that measurement, including college-wide measurement, should be avoided.

      The idea generally sounded good, and I think those involved were doing the best they could within the context at the time. I think we could have better results if we redesign the approach based on what we learned from this pilot.

    20. If we focus in the syllabus on the things we can measure, we may lose the more important things that we cannot so easily measure.

      Why would we lose the important things?

    21. "Write a program in Scheme" is one component of "Think like a computer scientist" [11]; "write a non-trivial thesis" is one component of "write persuasively and even eloquently".

      Presumably, if we could identify all the components of “Think like a computer scientist,” we could measure the skills separately and aggregate them to measure “think like a computer scientist”. However, one could argue that “think like a computer scientist” is more than the sum of its parts. Perhaps some of the skills interact with each other differently when used together than when separate. Thus, we might also ensure we're measuring more complex combinations of skills.

      How does one differentiate a person who is thinking like a computer scientist from someone who does not think that way?

    22. most faculty can say that they see changes in their students, even if they don't explicitly measure them

      Sounds like you are already subconsciously or informally measuring student learning. Instructor observations of student changes can be a valid form of qualitative assessment. The informal approach may be sufficient for an instructor to make decisions about his or her own course. If this course learning outcome were linked to the department outcomes or college-wide outcomes, evaluators would expect more formalized measures as evidence that the instructional approach successfully produced learning. (Formalized measures could include having instructors write monthly observations of student performance or capture student characteristics on a rubric. Then the instructor can use qualitative analysis to compare the characteristics over time.)

    23. The College aims to graduate individuals who can think clearly, who can speak and write persuasively and even eloquently, who can evaluate critically both their own and others' ideas, who can acquire new knowledge, and who are prepared in life and work to use their knowledge and their abilities to serve the common good.

      I'd consider this to be the very purpose for our existence. Thus, I agree these accomplishments are quite valuable. If our mission is valuable to us, do we not need to know if we’re actually accomplishing our mission? How do we do that, other than to measure in some way?

      Are these achievements measurable? Sure, as long as we agree on what success looks like and how we (operationally) define these accomplishments. Only then can we develop appropriate tools to measure them. Are these achievements easily measurable? Probably not, but that shouldn’t stop us since we already established they’re highly valuable. These are complex accomplishments, which probably will require multiple measures to provide sufficiently comprehensive data to help us determine how well we’re accomplishing our goals. It may not be feasible to measure some achievements directly, but we can make a reasonable approximation by triangulating a series of qualitative and quantitative indirect assessments.

    24. But our inability to measure these things

      Is something unmeasurable if past attempts at measuring failed? How are we determining success or failure? Measurement approaches can fail to accurately measure an achievement without this failure meaning that the achievement is unmeasurable. There are many factors that can contribute to failed measures, including lack of agreement about the end goal and problems with the processes of selecting, designing, implementing and evaluating measures. Before declaring something unmeasurable, I’d recommend ruling out all the other reasons the measurement attempt failed.

    25. As far as I can tell, we've spent a decade just trying to figure out how to measure "write persuasively and even eloquently"

      Is something unmeasurable if it takes a long time to measure or to figure out how to measure it? If that’s the case, is unmeasurability a permanent or temporary condition? Consider how many centuries it has taken for us to develop our current understanding of the universe and our world. As new technologies and processes arose, people measured things that they couldn’t measure directly before. Yet, long before these technologies came to be, people could still make very accurate predictions through indirect means.

    26. But our inability to measure these things does not mean that they aren't valuable. In fact, I'd say that these broad goals are more valuable than many of the things that we can measure.

      If measurability and value are linked, I see the relationship reversed. If it is valuable, we're obligated to measure it. On the other hand, there are many things that we can measure that aren't worth our time or effort.

      I'd go so far as to claim that we indicate what we consider important by what we choose to measure, including the kinds of exams and assignments we design. Consider the following from Phye (1997, p. 455): "In some classes, higher order thinking skills were assessed because professors developed classroom exams that required demonstrating such skills. In other classes, although the same expectations were voiced by professors, the assessment activities required only the recognition of facts, definitions from lectures, and material straight out of the text. ...many students use the first exam as feedback for future learning in the course. If the first assessment activity requires the use of higher-order thinking skills, the students will typically structure subsequent study for exams with this in mind. Consequently, the learning experience is geared toward the development of both declarative and procedural knowledge with an emphasis on being able to use such knowledge to solve academic problems. If the first exam requires only declarative knowledge..., students will ...study...only declarative knowledge. In this latter case, a professor may be teaching...from a critical thinking and problem-solving perspective and assuming that students are engaged in learning those critical thinking and problem-solving skills. In fact, those students...will see critical thinking and reasoning being demonstrated by the professor but will plan...their study strategies at the level of declarative knowledge. After all, that is what is going to be on the next exam. Consequently, these students will know about critical thinking and reasoning...but will probably not know how to use these higher-order thinking skill with any proficiency."

    27. many outcomes are not measurable or at least not naturally measurable

      What are the criteria that determine whether an outcome is measurable or not?

    28. While I haven't drunk the kool aid [5] of measurable course learning outcomes, I do think it's worthwhile to try to follow recommendations [6]. So I've been thinking about learning outcomes.

      I appreciate instructors who are willing to closely examine their own educational contexts, try recommendations, carefully evaluate the results of the experimentation, and use evidence to support what to keep and what to discard in the future. To me, a scientific mindset towards education is the essence of assessment.

      When considering educational issues, it is important to remain mindful when collecting, interpreting and using evidence to ensure we’re making good decisions. I’ve seen huge changes to instructional approaches result in minimal change in student learning while seemingly minor alterations have massive impact. Compared to some "hard" sciences, education is “messy” because it is much harder to identify and control the diverse variables, including all the traits that instructors, students, institution, culture, physical environment, discipline, etc. bring into every educational interaction. Education, as a discipline, uses various methods to have reasonable confidence in what we can assert about teaching and learning; yet, there will always be an unpredictable component. Experts in the field may provide guidelines and best practices, but these usually must be adapted to the unique considerations of each classroom.

      In summary, the guidelines are there for a reason and should be backed by empirical evidence. But, when applying guidelines, it is not unreasonable for an instructor to critically evaluate the guidelines and experiment with adapting them to his or her unique situation and goals.

    29. But the requirement bothers me.

      It is really more of a guideline than a requirement.

    30. Include five to eight measurable learning outcomes in your syllabus.

      It comes from Palmer, M. S., Bach, D. J., & Streifer, A. C. (2014). Measuring the promise: A learning‐focused syllabus rubric. To Improve the Academy: A Journal of Educational Development, 33(1), 14–36.

    31. This musing is my attempt to think more carefully through why they bother me.

      Thought I'd add a few additional questions to consider about learning outcomes:

      • Who are learning outcomes for…the instructors or the students or accreditors or colleagues or…?
      • How does writing learning outcomes in a syllabus change the student experience?
      • Does writing measurable learning outcome statements alter the way that instructors design their course? Why?

    32. "Think like a computer scientist"

      Yes, this is the "global outcome" for any domain... how to think like a [fill in the blank]. Still, we should be able to articulate some criteria for differentiating someone who "thinks like a computer scientist" from someone who doesn't.

    33. Our students read better, think better, and argue better after most classes

      How do you know this? That's the question.

    34. inability to measure these things

      Perhaps we are taking a reductive view of "measuring" ... taking it to be limited to precise numerical analysis of learning ... and, granted, that is a typical meaning of the term. It's important, e.g., to use a measuring cup in a recipe when you have to add the exact amount of an ingredient.

      But perhaps "measuring" should be more broadly construed as "providing evidence for."

    35. measurable learning outcomes bother me.

      This calls to mind a similar but even more pointed expression of bother with learning outcomes, in this blog post by Gardner Campbell (and see his exchanges with Robert Talbert in the comment section).

    36. I end each of my classes with a review of what students might have learned

      What does this look like? By what process do you discover "what students might have learned"?

    37. I do think it's worthwhile to try to follow recommendations