221 Matching Annotations
  1. Jan 2022
    1. But folks are enamored of measurement

      Intentionality and accountability play an important role in instructional design. When designing educational experiences, measurement is how I know that my efforts are having the intended effect (i.e., learners are learning what I intended them to learn). How else, other than through measurement that provides evidence, does one know? Isn't data-based decision making the scientific approach? The alternative to measurement seems to be basing decisions on faith.

    2. Learning outcomes are also distinct from learning goals, although I will admit that I lack the depth of understanding to describe the difference. At least in some situations, it seems that Grinnell uses learning outcomes for the course and learning goals for larger things, such as the major or an overall Grinnell education. At least one of my colleagues leading the charge uses the two terms interchangeably,

      The terminology is not firmly settled in the field. I try to be as precise as possible to facilitate communication. The exact words are less important than making sure we actually mean the same thing with the words we use. I am still working on refining my terminology to ensure my words capture what I mean and that others interpret my words as I intended. For me, "student learning outcomes" are the things that we want students to be able to do as a result of the learning experiences we have designed. "Goals" are pretty much anything else that we hope might happen but may or may not have control over.

    3. Isn’t that an awesome list?

      Does every student in the class have opportunities to demonstrate each of these outcomes in the course?

    4. Anyway, this moral modeling should not be an explicit outcome of CSC-151, even if it happens

      Based on what you said in this bullet point, the moral modeling relates to the faculty member's behavior which might inspire students to have certain behaviors or attitudes. The ability to make change is a form of power. Within a college, faculty have significant ability to make (or stall) change within the institution. It is certainly laudable for those with power to reflect on how their behaviors impact others, influence societal norms, and otherwise shape the communities they touch.

      To make this a "student learning outcome" for the course, each student would need to have the opportunity to demonstrate it. Thus, the instructor would need to be prepared to both teach and assess the moral behaviors expected of the students. This means that the instructor must a) determine what student behaviors are more or less moral, b) design the class so that all students will have the opportunity to learn, and c) assess how well students demonstrate those behaviors. I'd imagine that few secular institutions are willing to have learning outcomes related to morality because that can get messy very fast; it could come close to, or even cross, the line of telling people what they should believe. If the desire is to have certain socially accepted behaviors, I think it is safer to focus on ethics for learning outcomes. For example, academic honesty, which is a perfectly legitimate focus for a learning outcome, probably lives in the domain of ethics.

      What is the difference between Ethics, Morality and the Law?

    5. I like my seven high-level topics.

      It looks like you've done some nice work here. I hope that you find this helps your department to come to consensus on curricular matters.

    6. Yeah, I think that’s a good set of high-level outcomes, along with some associated lower-level outcomes. What’s left?

      I skimmed them (didn't read each closely) and they mostly look good on the surface. To know for certain whether they are effective student learning outcomes, it would be helpful to know a) how you intend to teach each and b) how you will assess each.

    7. Sample topic-specific learning outcomes. These are optional.

      What does it mean that it is optional? Would not all students necessarily be expected to demonstrate these?

    8. I think I got that right.

      Close enough.

  2. Feb 2021
  3. grinco.sharepoint.com grinco.sharepoint.com
    1. How would having a learning experience at the center of all we do change your experience of community at Grinnell? What would it mean to acknowledge social presence in your work?

      This is a test

  4. Oct 2020
    1. students may be more likely to participate if they understand the impact(s) and/or benefits of their involvement. No one wants to waste their time or, worse, share their thoughts and see no action taken in response; thus adding to feelings of being unheard or unseen.

      Yep! This also applies for staff and other employees at a college whose voices are not regularly a part of decision-making.

    2. We need to ask ourselves, is it that we want students to demonstrate their knowledge and skills or attainment of learning outcomes in a particular way, or that they demonstrate their learning?

      Yes

    3. Instead, being student-focused calls for student involvement throughout the entire assessment process including the development of learning outcome statements, assessment tool selection/development process, data collection and interpretation, and use of results.

      Perhaps students should be on assessment committees?

    4. Toomey Zimmerman and Bell (2012) argue that the difference in performance indicates that learners competent in informal and everyday settings may falter in more formalized learning settings, requiring alternative means to demonstrate their knowledge outside of the traditional classroom

      Another benefit of authentic assessments (i.e. assessments that replicate real-world conditions) is that they may feel more relevant, thus improving student motivation, and also may help transfer learning to real-world situations.

  5. Aug 2020
  6. Mar 2020
    1. And that’s what I have for now

      General Advice:

      • An emergency situation is probably not the time to be trying out a bunch of new technologies for the first time. Whenever possible, teachers might want to opt for technologies familiar to both students and instructor to decrease cognitive overhead and frustrations involved with learning new tools. It may be necessary to adapt the instructional approach to the available technology in an emergency.
      • Consider the desired learning outcomes and select technologies and distance learning approaches that fit. The same goal may be achievable by a different approach than the one used in a face-to-face classroom. For example, various aspects of teamwork may be achieved with message boards, shared documents, email, chats, screen-sharing, video communication, phone calls, etc.
      • Choosing technologies supported by the institution improves the odds that there are knowledgeable folks available to assist if something goes wrong.
    2. Captioning

      In a pinch, I've found it helpful to upload private videos to YouTube and use the auto-caption feature to generate an initial pass at the captions, then I can go in and clean it up.

      For prepared lectures, consider coming up with a script before recording the lecture video and using that script to generate a nearly complete transcript.

    3. I don’t know who is hosting it

      It looks like there is a group called Facilitators for Pandemic Response, which includes someone named Nancy White as a leader. I'm not quite sure of this person's affiliations, nor those of the membership.

    4. a community-developed document

      How interesting! How did you find this?

      Looks like it was generated by the Facilitators for Pandemic Response group at https://groups.io/g/f4c-response. Perhaps it is worthwhile to join the group?

    5. as some guidance for faculty.

      Nice resource.

    6. send a note out to my broader community

      I could reach out to colleagues at the Iowa Distance Learning Association to see how to harness our collective knowledge and skills.

    7. What resources would you suggest to help them get started

      Grinnell College employees can access LinkedIn Learning, which has some content about teaching online (though I haven't had time to fully review each course for quality).

    8. Grinnell also has a variety of staff who teach online and who would probably have good suggestions and could serve as resources to others on campus [3].

      Thanks for thinking of the staff with online teaching experience.

      Do we know all the folks (faculty or staff) on campus who have experience with online teaching? Might we want to set up a way for these individuals to self-identify and volunteer in case we do end up in an emergency situation? If we need all-hands-on-deck, perhaps we could call on these individuals to assist teachers with less online instruction experience.

    9. I’ve signed up already.

      Me too.

  7. Feb 2020
    1. The workshop will produce a proposal for a pilot project that asks students to articulate their understanding of the liberal arts and their rationale for their four-year plans.

      What do you feel are the essential features that will ensure a successful pilot project proposal?

  8. Nov 2019
    1. Are the things that faculty measured most important to you, or are things like study habits, a passion for a subject, or a new way of looking at the world more important

      It is not an either-or; rather, it is a both-and. Yes, the things that faculty measured were very important to me as they were essential to me being a competent professional. However, I did also learn many things that were not explicit learning outcomes. That's OK. As a student, I can have personal learning outcomes beyond those articulated by the instructor. Additionally, an instructor can hope that lots of learning beyond the learning outcomes will happen. Not everything learned (or taught) in class needs to be named in a learning outcome.

      I've had a few awful college courses where I learned almost nothing of the subject matter, so the only learning I took away was lessons about what not to do when teaching and how to teach myself the subject matter. The latter learning, while valuable considering my professional path, was not sufficient for me to feel that my time and tuition were well used for those courses.

    2. If these guidelines were last updated before Dean Harris was hired, let alone before she started, how are they the Dean’s guidelines for best practices?

      Perhaps because they were the former Dean's guides?

    3. Nonetheless, when you start with measurable outcomes as a primary goal of the syllabus, I want to make sure that you’re not going to force me to follow an educational philosophy that is much different than my own.

      So what would you suggest? How would you have presented this information in the guidelines?

    4. And isn’t figuring out the connection of other work in a class to learning outcomes a skill students should develop? I know that I regularly ask my students to take a moment to reflect on why I’ve asked them to do something. Won’t they learn more from that reflection than from me telling them?

      As an instructional designer, I strongly believe that an instructor should align the instructional approach and assessments with the desired learning (i.e. learning outcome). This doesn't necessarily need to be stated out loud for the student, but instructors should be able to articulate these things for themselves in order to make intentional decisions in the instructional design.

      A past colleague of mine would teach students study skills by having them read the learning outcomes on the syllabus and use those to help structure their preparation for course assignments. Thus, a well-designed and well-aligned syllabus should help students to be successful in the course. Some students may not be ready to guess what the instructor wants, especially if the instructor has not designed the course to be well aligned.

    5. What is the connection of an exam or quiz to a learning outcome? Aren’t such devices more intended as a way to assess outcomes?

      I don't understand.

    6. as good a practice for students and teachers interested in less tangible, but more important, outcomes

      If we think certain learning is important, then we should intentionally teach it. To know how well our teaching helped students learn, we need to gather evidence of student learning. To me this is essential to effective teaching and learning.

    7. Each faculty member should prepare a syllabus for each course the faculty member is teaching, in compliance with federal regulations with attention to the Dean’s guidelines for best practices. Copies should go to the Dean’s office and to the students enrolled in the course.

      This sounds pretty good to me.

    8. But I really hate the term best practices.

      If I thought that "best practices" took into account only one factor in a nuanced situation, I would hate them too. However, I mostly see best practices as "evidence-based practices," so I don't have a problem with them unless they're presented as inviolate "rules". For example, I would consider the idea "Try to have only one idea per slide in a PowerPoint" to be a "best practice," but way too often there are stupid rules such as "Don't write more than 6 words per slide". A "best practice" should be a principle that can be applied universally or is sufficiently situated that people can apply it in the relevant context.

    9. A best practice, but primarily in terms of controlling costs. In terms of supporting and retaining staff, not so much. Firing the staff member at the bookstore with the deepest knowledge of books, and doing so without either (a) consulting with the Instructional Support Committee, who is supposed to have purview over the bookstore, or (b) reflecting on the broader purpose of the bookstore on campus?

      What language should we be using instead?

      I am getting the sense that you don't like "best practices" in the same way that some don't like the term "efficiency" when talking about teaching and learning. To me, both terms are fine, but they get misused.

      I believe that teaching and learning should be efficient. The instructional designer can take steps to help students spend their precious study time focusing on the most important things. If students have to waste time figuring out when assignments are due, or interpreting instructions because the assignment prompt and expectations weren't clear, then they're not really using their time efficiently.

      This is different from saying that students are just given the answer to a tricky problem. Trial and error and struggling with a wicked problem are important to learning and may take far more time than if the instructor were to just give students the answer. However, the latter is a false efficiency because it achieves a quick outcome by circumventing the actual learning. In the long run it is not efficient, because the student will need to spend additional time learning at some later opportunity. That's partly why I will always pair "efficiency" with "effectiveness," or just use "effective," to try to communicate a more nuanced view.

    10. But the rest of me notes that I post my syllabi to the public Web, which means that they are available for whatever uses the institution would like to make of them

      Some institutions post their syllabi online. I believe ISU used to post all the syllabi from the Education college, and possibly from other colleges as well.

      How wonderful for students to be able to see the syllabus when they're choosing classes!

    11. At some points in the history of this country, they might have used them to ensure that faculty were not taking an inappropriate stance on material, such as supporting a socialist agenda

      In my experience, syllabi tend to tell me very little about what students will actually learn in a course and almost nothing about the teaching that will happen. I once had to do an analysis of syllabi trying to extract information about what students were learning where in the curriculum. I came away from that experience feeling very jaded about the usefulness of syllabi for knowing anything about a course. I seriously doubt that one could look at a syllabus and have any real sense of whether the instructor is indoctrinating the students into an unapproved worldview. It would be very easy to create a completely bland syllabus and have very provocative discussions in class.

      I've seen too often how the syllabus does not reflect the reality of the classroom. I almost never ask instructors for their syllabi due to this experience. I would much rather see a lesson plan, but most professors don't create lesson plans.

      I think syllabi geared for students could be useful study aids, but they would need to have certain features which are aligned with what we know about how people learn. We could take an intentional and scientific approach to syllabus design by actually testing out how various designs impact students' experience.

    12. Over two-thirds of the syllabus consisted of the assignments for the semester because I thought it would be useful for the students to see them as we started the course.

      When I was a student in a course called "College Teaching," the instructor had a really long syllabus for this same reason. The syllabus became the handbook for the class; the assignments were all within the syllabus from day 1. I thought it was a good idea and have tried to use this approach myself.

    13. are measurable, or at least not quantifiable

      Not sure what this means. I use "measurable" as one way of indicating "able to provide strong evidence for" and the evidence can come in many forms of quantitative and qualitative data.

      Also, an assessment has to be good enough to provide useful evidence about the learning so the instructor can make informed choices about teaching. This depends, in part, on the audience for the assessment. Assessments may range from informal and formative (such as observing students' body language and puzzled faces when discussing a topic) all the way to highly structured, summative assessments (such as final exams). The stakes of the situation and the stakeholders involved will impact the decision when choosing a quality assessment.

    14. Can we really measure how well students have learned how to learn?

      Yes, by operationalizing what is meant by "learning how to learn". Yes, it could require a complicated instrument. However, if this is important enough to make it a learning outcome, then it is important enough to determine whether or not the learning is happening.

      Not all learning needs to be measured. There are many things that students will learn. The things that should be listed as "learning outcomes" are those things we are intentionally teaching and plan to assess in order to find out if the students learned.

    15. best practices

      I tend to use "best practices" to be synonymous with "evidence-based practices" which means that the practices described are those that have the strongest evidence for effectiveness. This evidence can vary in strength from double-blind, controlled experiments all the way to anecdotal evidence, depending on what is available. Obviously the best practices viewed this way will morph over time as we become more refined in our knowledge and as we begin to have experimental data for the things that were previously anecdotal. It would probably be best if "best practices" documents had a works cited area so that the suggestions made have a clear connection to how those suggestions were constructed.

  9. Oct 2019
    1. focus on the presentation styles of the different speakers, reflecting on strategies, strengths, and weaknesses. In Grinnell’s post-graduation surveys, alumni regularly report that they would have appreciated deeper grounding in presentation skills. A Convocation-based seminar might help address that issue

      Interesting. Analyzing different presentation styles could be useful. To address a need for "deeper grounding in 'presentation skills,'" it might also be important to give students opportunities to practice presentation and receive feedback. Crazy idea...maybe they'd even try to give the same talk, but improve upon it with modified presentation styles?

    2. Second, we propose to develop a one- or two-credit Co-Convocation seminar that allows students to build upon the weekly Convocation series. In this seminar, students will read and discuss papers related to upcoming Convocation topics and then debrief after Convocation on both the papers and the talk itself.

      Since Convocation is an intellectual event aimed at a broad audience (i.e. all Grinnell College students and employees and community members), maybe consider having the seminar/debrief open to faculty and staff too... Perhaps that could generate some interesting discussions?

    3. Third, the number of events on campus has increased significantly, limiting the time many members of the campus community feel they have available to attend Convocation.

      Too many good things happening.

    4. With the institution of the No Requirements Curriculum a decade earlier, Grinnell students had lost the experience of having some common intellectual heritage, a set of readings or pieces of knowledge shared among all or most students.

      While I generally agree with the sentiment and the proposal generally sounds like a good thing, I do think it would be stronger by addressing the following:

      1. Why is it important to have an "experience of having some common intellectual heritage, a set of readings or pieces of knowledge shared among all or most students"? What does this achieve?
      2. How do we know that there isn't already a "common intellectual heritage"? Could there be things happening in co-curricular arenas or in dorm settings which could be filling this niche? How does Tutorial factor in? What about FYE? What about the common read?
      3. What have we done in the past to generate a "common intellectual heritage" in addition to Scholar's Convocation? What factors made those effective (or not effective)? What can we learn from those past efforts to select the best solution to achieve the goal? (This assumes that a Scholar's Convocation is one of many ways to achieve a goal of "common intellectual heritage" for the purpose articulated in answering the first question.)
    5. Second, a scheduling experiment temporarily moved Convocation from 11am on Thursdays to lunchtime; that change led to a drop-off in attendance that has continued through the reinstatement of Thursday convocation.

      Are there any data documenting the attendance drop after the time change?

    6. Many factors contribute to the change

      The many factors may make it hard to tease out what features are essential for the proposed approach to achieve the desired goal.

    7. The infrequent offerings make it less likely that students remember to attend Convocation; when Convocation is weekly, they know (or should know) that there’s an event to attend.

      This statement would be strengthened with evidence demonstrating that holding an event regularly has attracted, and will attract, greater student attendance. (It might be necessary to refute those who remember poor student attendance despite weekly Convocation events...)

    8. expose students to a broad range of topics and, because the experience was common, give them an opportunity to employ that broader knowledge in their other academic endeavors

      What is it about a common experience that is critical in allowing students to employ their knowledge in other academic endeavors?

      How common does the knowledge need to be? Does it need to be beliefs, values, content knowledge held by all Grinnell College students, or is it sufficient to have common references with those in the same department?

  10. Sep 2019
    1. the external evaluators want to see evidence that our students are getting an appropriate liberal arts education

      If there is an expectation or requirement from the external stakeholders, we need to fully understand what that is. We will still need to meet any requirements imposed on us. If they explicitly demand certain courses or divisions, we might have difficulty using this project as satisfactory evidence for those external stakeholders unless we enforce a definition of liberal arts that clearly indicates how certain classes are essential. However, if they are OK with us providing strong evidence that we achieved liberal arts education as we defined it and are also OK with the definition being unique to each student, then the approach described here could work to achieve our goals while also satisfying external stakeholders.

      Basically, we need to know what the external stakeholders would accept as sufficient evidence and then make sure we take this into account.

    2. we’ve tried to prove that by giving statistics on the percentage of students who take at least three courses in each division, or meeting the Phi Beta Kappa requirements, or whatever

      The unstated assumption behind providing this sort of data as evidence of a liberal arts education is that particular classes in various areas of campus provide certain unique or critical factors that students should internalize (and perhaps integrate into their overall understanding). However, what are those unique and critical factors those courses bring? And if students take those courses, are they really taking away those things? And does every course within a division really provide the critical and unique learning that we're wanting them to have?

      That sort of "bean counting" only really provides meaningful information if you can also point to what each of those beans is really representing in the form of particular knowledge, skills, and attitudes students have as a result of that bean.

      To identify these things, perhaps it would help to ask ourselves: "Could I take all my courses in one division and still achieve a liberal education? Why or why not?" If we equate achieving a liberal education with achieving the College-Wide Learning Outcomes in a meaningful way, then I could see it very possible for students to achieve these while only taking coursework in one division.

      On the other hand, the "Elements of a Liberal Education" are written more as subject matter knowledge. So if we use these to indicate the learning essential for a liberal education, it becomes more difficult to complete coursework in one division and still achieve a liberal education by this definition; though, it could be possible depending on the course design. At minimum, I'd need to take courses in any discipline that use non-native languages, that involve writing/communication, that involve scientific reasoning, that involve mathematical reasoning, that examine human society and behavior, and that involve creative expression.

      If we are looking towards the college mission as the evidence of having achieved a liberal education, then I probably could still take courses in one division only, as long as I can demonstrate that I can think clearly (whatever this means to us), speak persuasively, write persuasively, critically evaluate my own ideas, critically evaluate others' ideas, acquire new knowledge (whatever this means to us), and use my knowledge and abilities to serve the common good.

      If these are our working definitions for a liberal education and a student can achieve the learning all in one division, then we might need to ask ourselves: "Why is it important students take X courses in different divisions?" Perhaps the answer relates more to practical issues of college function (such as student enrollment) or other factors than it does to making sure students are learning the essential knowledge, skills, and attitudes?

      Ultimately, the various "definitions" of a liberal education that we have (CWLO, elements, mission) all point to different, but overlapping, knowledge, skills, and attitudes that we expect as evidence that a liberal education happened. For this reason, I see your approach to be the most straightforward: students define for themselves what a liberal arts education means, they select learning opportunities (curricular and co-curricular) to achieve a liberal education as they've defined it, and they demonstrate how well they achieved a liberal education according to their definition.

    3. In the ideal, this process requires students to think carefully about what aspects are essential to a liberal arts education and gives them more incentive to take the particular courses they have selected.

      Would most agree that these are the goals for the individually advised curriculum? Is this what your project is hoping to help accomplish or support? If so, these would be goals worth assessing. To assess these, I'd advise operationalizing these two components and then choosing instruments that will provide the most useful data.

      What I think I'm hearing is that you might have one project outcome: Students will be able to articulate the essential aspects of a liberal arts education and how these aspects relate to their personal learning goals. And you might have one hypothesis for an experiment: Students who identify their own coursework and clearly articulate how these selections fit their liberal arts education goals are more likely to be motivated to complete the coursework identified.

    4. It also stings less when a last-minute proposal gets rejected.

      Love it! Actually, I think this is pretty good for a last-minute submission.

    5. I would encourage the committee to set word limits, rather than page limits.

      I had similar thoughts. I would have found word limits far more useful as an author. Also, it would probably provide more consistency in length for the grant readers too.

    6. As most faculty know, the amount of text that a student can fit into two pages can vary significantly, depending on the typeface, font size, margins, inter-paragraph spacing, and more.

      yep

    7. The primary risk/challenge is the workload for both students and those who must support the process, such as by helping students craft their essays and prepare for their curriculum defenses (advisors), reviewing those essays and conducting the defense (panelists), and conducting the retrospective (panelists). Will we be able to obtain enough buy-in from faculty and staff to conduct this experiment? I believe so. I also expect that some financial incentives will help, at least during this pilot phase.

      How much work and time went into the "bean counting" approach? Would that approach still need to happen? If not, there could be some effort and time shifting from one approach to the other so it wouldn't entirely be an additional task.

    8. Objective 1: Students will develop and demonstrate an appropriate understanding of the liberal arts and choose curricula that support that understanding. Objective 2: Students will develop and demonstrate an appropriate understanding of diversity and choose curricula that support that understanding Objective 3: Grinnell will understand the costs (workload, financial) and benefits of employing this approach to assessing curricula. Measurement: The declaration essay, curriculum defense, and curriculum retrospective provide the primary forms of measurement for objectives 1 and 2. A post-project retrospective workshop will allow us to review these measures. Short surveys about time costs, along with the results of the workshop, will allow us to measure issues relating to objective 3.

      We can chat further about the specifics here if you'd like.

    9. co-curricular

      I love that you're including co-curricular since students perceive that they're mostly achieving CWLO #2 in co-curricular (or extra-curricular) areas.

    10. their ability to evaluate critically both their own and others’ ideas (in this case, about liberal arts education) and their ability to speak and write persuasively and even eloquently.

      To help with extracting evidence for assessment reporting purposes, it might be necessary to identify all the learning outcomes that you'd expect from all students and to construct some assessment instrument (such as a rubric) to make sense of the data.

      Based on this, it seems that you've got at least two learning outcomes, and students will need to indicate how they've achieved them. I'd recommend stating these as clear learning outcomes for students to account for in their declaration and defense and then to demonstrate in their retrospective.

    11. clearly state their perspective of the meanings and goals of a liberal arts education,

      Is there a chance that they might have an interpretation of liberal arts education that could be unacceptable? If so, how would we deal with that? Would they be allowed to pursue their own goal or would they need to bring it within the bounds of what we consider acceptable?

    12. What is the amount of funding that you are seeking? (List dollar amount for each year. Pilot project annual limit is 50,000. Planning project limit is 10,000.)

      If you don't get the grant this time and need to try again... Perhaps consider going for a planning grant first and then getting the pilot. That way you can maximize your potential funds. During the planning year, you could clearly articulate the desired learning outcomes and refine the assessment instruments that you would then implement during the pilot phase.

  11. Oct 2018
    1. hurdle rates are a bad metric

      Why are we using these "hurdle rates"? What are they intended to tell us? What is the goal state?

      According to Investopedia, a hurdle rate is "the minimum rate of return on a project or investment required by a manager or investor." A hurdle rate, in this sense, seems to imply a way to identify efforts that are not generating sufficient benefit and need to be re-considered. Are we using the terminology similarly?

    2. six core elements of a liberal education

      A challenge with these is that they tend to be interpreted as specific divisions and classes. Why are we concerned about taking classes across divisions? What if a student could learn language/culture/art and the scientific method and study of society and humanities all within one division because the courses were integrating those topics?

      I tend to feel like we are conflating related, but separate issues. On one hand we have the knowledge/skills/attitudes we want students to learn. On the other hand, we have to figure out how to teach them these...which then involves the way that institutions decide to structure their courses/departments...which then gets into staffing issues...

      Theoretically, the students could learn all these things in a single department if the classes were designed to integrate all these topics. There are reasons we don't do this which could be less related to student learning than they are to personnel management and domains of knowledge.

    3. require each student to write a retrospective essay evaluating their liberal arts education

      I would caution that we don't default to determining students' ability (or the extent of their liberal education) by asking their opinion of their education. (People are not necessarily good at estimating their own skills and knowledge, which is why it is good to have an external observer.) You had some earlier musings that sounded a lot like a process that a PhD candidate would experience, and I'd support something like that...

      The student would articulate learning goals (that also fit with the College-Wide Learning Outcomes and their major department learning outcomes). This student would need to defend these to a committee of faculty. Once the faculty agree, the student engages in the agreed-upon plan. At designated points in the students' education, the student would defend their path to demonstrate how it fits with the original plan and how they've accomplished the learning outcomes. This might include portfolios, presentations, an oral defense, etc. They can pull in evidence from anything in their life, including any skills and knowledge they acquired inside or outside of the classroom.

    4. hire a cadre of faculty to read those essays

      That would be a fairly faculty-time-intensive activity. Not necessarily a bad thing, but it takes buy-in, which probably requires that the benefit for learning clearly offset the costs.

    5. For example, to provide to accreditors.

      Or, the examples and categories could be used on a formative basis to provide students with guidance along the way, so they can adjust their behaviors accordingly.

    6. build inter-rater reliability

      Yes, the inter-rater reliability is very important.
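      To make the point concrete: raw percent agreement can overstate reliability, which is why chance-corrected statistics exist. Here is a minimal sketch (with invented ratings, not data from any actual essay-reading process) of Cohen's kappa, a common inter-rater reliability measure for two raters:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(r1)
    # Fraction of items where the raters gave the same rating.
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: two readers scoring six essays against a rubric.
rater1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
kappa = cohens_kappa(rater1, rater2)  # 5/6 raw agreement, kappa = 2/3
```

      Note how the two readers agree on five of six essays (about 83%), yet kappa is only about 0.67 once chance agreement is removed; calibration sessions among readers are what push that number up.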

    7. may have demonstrated mastery

      This is why I'm a fan of learning outcomes. We can say we want our students to know X by the time they're done...and we measure X.

      If we don't care where they learned X, we can just measure X at various endpoints. (Students may have learned on their own, through co-curricular opportunities, through academic courses, or other places)

      If we do care about how much we contributed to the students' learning of X, we might need to add additional measures of their incoming knowledge/skills/attitudes.

      If we want to know which things we did were most useful in helping students learn X, we might add additional measures in the middle to gauge intermediate progress.

    8. Course tags are supposed to provide one option

      There definitely is potential in the course tags. In fact, I'd recommend renaming them Curriculum Tags, so they can be applied across the whole college, including co-curricular learning opportunities. The tags will be most useful if/when we can connect them to College-Wide Learning Outcomes and department outcomes. Currently there are gaps and overlaps with that mapping. Also, the way that the course tags are currently assigned is problematic due to inconsistent interpretation and numbers of tags per course.

      I could imagine well-designed tags could be helpful to incorporate into an advising tool. Theoretically, students (and their advisers) would be able to see any gaps in the students' education by looking at the kinds of tags that they're missing and which tags are particularly abundant. Then one could find classes that fit.
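      The "gaps and overlaps" in the tag-to-outcome mapping could even be audited mechanically. A minimal sketch, where the outcome names and tags are invented for illustration (not Grinnell's actual lists):

```python
# Hypothetical data: a set of learning outcomes and a mapping from
# curriculum tags to the outcomes each tag is meant to address.
outcomes = {"communication", "quantitative reasoning", "diversity"}

tag_to_outcomes = {
    "writing-intensive":   {"communication"},
    "data-analysis":       {"quantitative reasoning"},
    "global-perspectives": {"communication"},
}

# Outcomes covered by at least one tag.
covered = set().union(*tag_to_outcomes.values())
# Gaps: outcomes that no tag maps to.
gaps = outcomes - covered
# Overlaps: outcomes that more than one tag maps to.
overlaps = {o for o in outcomes
            if sum(o in v for v in tag_to_outcomes.values()) > 1}
# gaps == {"diversity"}; overlaps == {"communication"}
```

      An advising tool could run the same set arithmetic per student: subtract the tags on a student's completed courses from the full tag set to surface what is still missing.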

    9. What are the alternatives?

      I'd suggest learning outcomes.

  12. Jul 2018
    1. reduced to a rubric

      It might be interesting to know that some reports that were particularly frustrating to read were saved from very low evaluations because a rubric was in place to focus attention on the most salient points.

      Rubrics are a tool. Like all tools, their usefulness depends on their design and the context in which they're used.

      Additionally, I think this article points to the latter of two somewhat conflicting purposes of assessment: 1) Assessment can be used for growth, to provide support and training in areas of identified weakness. 2) Assessment can also be used for ranking, judging, and sorting, in which case resources might be withheld from "poor" performers. I think a lot of the bad feeling about assessment arises from people sensing that assessment is the latter: we air our weaknesses so that someone can come from "outside" and use those weaknesses against us. I tend to prefer the growth mindset, in which airing weaknesses serves only to help us strengthen those areas and be more successful.

    2. And do assessment budgets correlate with how much students learn? With how much they earn? With how well they’re prepared for productive work, the life of the mind and active citizenship? Do graduate and professional schools find that students are better prepared if their undergraduate institutions devoted substantial resources to formal assessment processes?

      Interesting questions. These might be worth pursuing.

    3. Do frequent assessment activities via administratively sanctioned rubrics lead to better learning than faculty members who experiment on a regular basis, bringing a variety of approaches in accordance with their variety of specialties and perspectives

      This is not really an either-or situation. These are not the only two options available for assessment.

      When instructors approach their courses and curriculum with an evidence-based perspective, both individuals and departments have found ways to improve their teaching and to make improvements in the curriculum. In fact, a focus on learning outcomes and assessments can benefit faculty who want to approach a class differently than is traditional in their department. That instructor can say, "These assessments show that the students are achieving learning outcomes X, Y, and Z even though my approach is A, B, C." Shouldn't those who favor academic freedom celebrate this? (We're all going on a trip to Miami, Florida. Each person is free to get there as they choose, whether by boat, train, plane, auto, bike, walking, or Star Trek transporter beam.)

      If the "administratively sanctioned rubrics" are not working well, then it might be a good idea to reevaluate what goal the rubric was intended to accomplish and whether it is accomplishing that goal. Perhaps it needs revision to align it with the goal.

    4. Do you know where those files might be?

      Very good question. Institutions may need to do a better job of preventing data from being caught in silos and of following up on any reporting that is done. If no one is going to look at or do something with data (including reports), then we need to seriously ask ourselves why we're collecting it.

    5. Why don’t you write a report on that and get back to me

      Done. It is here in hypothesis. You can "grade" me by replying to my posts.

    6. through umpteen layers of curriculum committees

      Faculty govern academia. If you don't like the way it functions then work with your colleagues to change the processes. There are many things at academic institutions that would benefit from some streamlining and more efficient processes.

    7. Will they still get tenure

      Faculty are the ones who govern academia. Work with your colleagues to make it happen so that publications in educational research by people in fields outside of Education can also count toward tenure.

    8. reconfiguring lecture halls

      Perhaps, but are there also ways that one can creatively overcome some of these challenges? Do we wait for just the right environment to be able to have "innovative" teaching? I've seen group learning activities in large lecture halls with fixed stadium seating. Talk to your technology support teams on campus; they might have digital tools that open up additional options. Take the class outside. Go to the library. Even if a new building were to break ground today, could you not start innovating in the classroom before the building is ready?

    9. for offering smaller classes

      Perhaps, if you prove that smaller classes alone will significantly change the learning outcomes.

    10. Would you want us to grade accordingly

      I'd want you to find out why they aren't learning what you thought they should be learning. Perhaps decide if the original learning outcomes were unreasonable. Look at the factors that were preventing students from achieving the learning outcomes. Get to the root of the problem and identify ways to fix it. Do students need additional supports--what kinds? Do we need to add in more intermediary courses between the foundations and advanced courses?

    11. Speaking of modest compensation, how much of a stipend will you pay the poor sucker who agrees to produce the reports that you claim to want?

      Assessment is part of the job. It is embedded in the work and cannot be divorced from it any more than you can breathe by only inhaling without exhaling. To suggest that one deserves extra compensation for assessment implies that it is something separate and extra. Assessment has always been part of good pedagogy.

      Now, what I think you're probably reacting to is having to report beyond your own department, or having to report to anyone at all. If so, that sounds a lot like saying that you shouldn't be subject to any oversight of your role as an educator.

      I'd encourage you to see it as if you're writing a grant application and providing evidence for why what you're doing is working and should continue and that X, Y, Z supports would help you to do it even better. Or even see it as a professional development opportunity to self-reflect on your own process with the opportunity to get some feedback from experts in teaching and learning.

    12. do you have suggestions for turning them into reports that would satisfy you

      I'm positive there are ways. We would probably need to discuss what that would look like because all reporting takes time. The question in my mind is how can we make this reporting valuable to you so that the time you spend feels worthwhile to you.

      I believe that transparency around assessments can be a really useful tool to stimulate conversations among instructors. Done well, it can be the foundation for faculty learning communities because faculty can learn tips and techniques from each other.

    13. Do you really think we don’t make changes in response to what we observe in the classroom and to how our students perform? Do you think we don’t talk to colleagues about what we’ve observed or share ideas for improvement?

      I'd certainly hope that you do. The assessment report is the opportunity for you to share how you've made changes in response to these observations. Tell us what you observed (your assessment/measure) and tell us what decisions you made based on that data.

      This data also helps us to help other instructors. Maybe you're doing something really cool in your class that we can share with other instructors when we coach them. We don't know unless you tell us.

    14. what concrete steps would you actually be prepared to take

      I'd hope that the institution would give me the access and authority to work with that professor over time on a one-on-one basis. Together, we could identify why we think that students are not learning. Then we could help the instructor make changes to improve the student outcomes. This would be a coaching process which involves problem-solving and professional development.

      I have not yet met a professor who doesn't actually want to succeed. Who wants to be bad at what they do? Most professors are grateful for opportunities to make their work better and more satisfying.

    15. But are all the rest of us similarly suspect?

      If our goal is student learning, we only care if students are able to demonstrate the knowledge and skills we desire by the time the learning experience is complete. How instructors help students get from point A to the desired state of knowledge and skill is completely up to them. The instructor's actions would hopefully be impacting learning, but there are no particular actions that guarantee learning. Assessment is how we know if learning occurred; we cannot guarantee this only by watching the instructor's teaching strategies.

      I haven't met an instructor yet who was willing to just tell students "There are no tests, quizzes, or other assignments in this class. Your grade will be whatever you tell me you think your grade should be." Most instructors expect students to do something to demonstrate that they understand the course content, whether this be in their discussions, tests, quizzes, written assignments, etc.

      When you give a test or assignment in your class, do you only require those whom you suspect don't know the information to complete the assessment? Or do you give the assessment to all students in the class to get a sense of the depth and quality of their understanding? In-class tests provide instructors with lots of information about what students are and are not understanding so that we can make adjustments to the class or work with individuals to further enhance learning.

      The same applies here. How is an assessment person supposed to know what is going well and what is not without some data?

    16. interested in assessing the students’ learning of my subject, or are you really just checking my competency in the use of those verbs?

      I will ignore the sarcasm and other negative emotion here and try to address the legitimate question underneath, which I interpret as "Why is the wording of learning outcomes so important?"

      The secret is that there is no perfect wording for learning outcomes. But learning outcomes are a tool, like an outline when writing. They focus the instructional design process on the actual behaviors students will exhibit to demonstrate they've learned, rather than on what the teacher will do. The outcomes communicate to oneself and to colleagues what we intend to accomplish regarding student learning. The more specific and intentional we can be, the easier it becomes to avoid miscommunication and to design instructional experiences and assessments that will help us accomplish our goals.

      A specific learning outcome is the difference between asking your travel agent to make transportation arrangements and then saying: "I want to go on a trip" versus "I need to be in Seattle by 9:00 am on August 1". The first option requires the travel agent to ask you many more questions to clarify what you had in mind by a trip.

    17. Do you think that this question can be substantively answered via your report-generating processes?

      As an assessment person, my desired learning outcome for instructors is that they can "align their instructional approaches and assessments with their desired student learning outcomes". This skill is a basic instructional design skill to ensure that we're teaching what we think we're teaching and testing what we think we're testing. There are a number of ways that instructors can demonstrate this. Looking at how instructors articulate their alignment in a report (possibly using a rubric) is an efficient way for a single assessment person to identify the areas of the college that are more and less in need of a deeper dive.

      Philosophically, I see assessment as a formative process rather than a summative one. I'm here to help, not to punish. The assessment reports give instructors an opportunity to show me what they understand about teaching and learning principles and share how they're systematically tackling their departmental problems. This gives me a starting point for a conversation with a department about areas of strength and weakness and to then assist them where needed. Sometimes the assistance is training and other times it may be to help them make a case for more resources. (Many professors in higher education get no training in teaching and learning as part of their graduate education. How can we expect people to be good at something without having had opportunities for training and practice with feedback?)

    18. Also, when you’re evaluating rubrics and learning goals for programs outside your field, and it’s 2 a.m. and your report is due soon, are you really paying attention to anything besides the presence or absence of “action verbs”? Be honest.

      Actually, I try very hard. It makes it easier for me when the reports are concise, logical, and well-written. I am looking for clear rationales why a certain assessment is appropriate to measure the learning outcome the instructor says is important. If you can tell me why X assessment measures the learning outcome, then I'll try to read into that to identify your intended learning outcome or operationalization of that outcome. (Though, I might have a conversation with you about possibly shifting the outcome language to better reflect what it seemed you really wanted the student to be able to do.)

      The less clear the writing and the more difficult and time-consuming it becomes for me to extract the information I need from the assessment report, the darker my mood will become and the more nit-picky I'm likely to be when reviewing the verbs of learning outcomes. After all, the learning outcomes might end up looking like the low-hanging fruit for trying to help the instructor think about assessment.

  13. Jun 2018
    1. I wrote about my experience telling students not to use the term "Jeez"

      I always thought that the purpose of profanity is to shock and express the intensity of the situation. Theoretically, the greatest shock would come from taking a deity's name in vain. For this reason, I tend to feel that people should avoid using swear words too casually. They lose their power that way.

  14. May 2018
    1. The advent of Grinnell's new "institutional identity" and noun-phrase-centric marketing campaign have put me in a funk.

      Why does the marketing stuff bother you so much?

    1. But I think most of my readers want to jump back and forth between the reference and the referent.

      This is probably a good accessibility feature as well, since it is reasonably easy to navigate back and forth with the keyboard without losing one's place in the flow of the text. I didn't try with a screen reader to see how that might play out.

    1. My hair

      Hair is a challenging topic. There are so many social meanings that people attach to hair. On top of that, curly hair can be quite challenging to manage, especially when most local hairdressers are used to straight hair. It may not be socially acceptable, but I've occasionally asked random people with curly hair about their hair care techniques and recommendations for hairdressers...all in the hopes of learning some good tips for my own hair. Thus, I welcome hair discussions.

    2. When I rolled up the windows in my car today, I managed to get my hair caught in the window without realizing that I had.

      I've done that. Long hair also gets caught in doors, under backpack shoulder straps, in one's mouth, and a myriad of other places. That's probably why I rarely actually wear it free-flowing. Usually it is pulled back and contained in some way. Otherwise it is just a pain, but it is better than the alternative when short... maybe I'm still traumatized from childhood memories of old ladies telling me that my short, fluffy, ball-shaped, curly hair reminded them of their Toy Poodle dogs.

    3. a likely cause was my tendency to let my hair sit inside my collar when it's wet

      She could be right. As a fiber crafter, my experience indicates that the factors that increase the likelihood of felting wool or hair include friction, heat, and moisture, all of which are present inside a collar. (This is why an adult wool sweater comes out the size of a child's clothes after a trip through a hot, soapy washing machine set to a high-agitation setting.)

    4. I brush regularly and use conditioner

      Figuring out how to care for curly hair has been a life-long experiment, and I feel like straight-hair folks get this sort of thing figured out far earlier in life. Based on hairdresser recommendations and personal experiences, I've found my best results come from no more than once-a-week shampoo and condition, combing with a pick only when wet, and using a bit of oil and curl styling cream when wearing it down.

    1. do many of those things better than our peers

      How does one know what Grinnell does better than peers?

    2. near peers

      What's this?

    1. It would be nice if they entered the data more consistently. For example, I see Clark Lindgren listed as "Lindgren, Clark A.", "Clark Lindgren", and "C. Lindgren".

      Perhaps this form should be turned into an online survey that uses a dropdown select box to choose the faculty name?
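      Short of redesigning the form, existing free-text entries could be partially reconciled in software. A minimal sketch (the function name is invented, and it assumes names arrive as either "Last, First M." or "First M. Last"; a dropdown remains the better fix, since heuristics like this break on multi-word surnames):

```python
def name_key(raw):
    """Reduce a name string to a (surname, first-initial) key so that
    variant entries of the same person collapse together."""
    raw = raw.strip()
    if "," in raw:                       # "Last, First M." form
        last, _, rest = raw.partition(",")
        first = rest.strip().split()[0] if rest.strip() else ""
    else:                                # "First M. Last" form
        parts = raw.split()
        last, first = parts[-1], parts[0]
    return (last.strip().lower(), first[0].lower() if first else "")

# The three variants from the form all collapse to ('lindgren', 'c').
variants = ["Lindgren, Clark A.", "Clark Lindgren", "C. Lindgren"]
keys = {name_key(v) for v in variants}
```

      A key like this is only good enough to flag probable duplicates for a human to confirm, not to merge records automatically.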

    2. Why am I complaining about these issues?

      As an instructional designer, poorly designed forms are one of my largest frustrations. Forms and surveys should be self-explanatory and collect accurate information. They cannot collect accurate information if different people interpret them differently.

    1. As I said, I'd like to see a broader conversation in which we unpack that element a bit.

      Theoretically, operationalizing these would help to identify when students have successfully accomplished all or part of one of these elements of a Liberal Arts education.

    2. While I know that committees are not an ideal vehicle, multiple voices and perspectives do tend to produce better understanding. And I'd generally take the product of a committee over whatever a random administrator comes up with.

      In my experience, the best decisions come from collaboration between strong leadership and effective committees. Great leaders make good decisions as a result of listening to stakeholders and careful consideration of the issues at hand. High-functioning committees provide leadership with well-researched, evidence-based proposals on an issue.

      Unfortunately, we're human with human failings. Committees do not always produce better results than a single thoughtful individual.

    1. It appears that I can do really embarrassing things, too.

      Nah, I think the embarrassment belongs to the person who sees alternative meanings for toy rocket ships. :)

    1. [12] I hate the commas in CTLA [14]. They require that I either have to put it at the end of a list or separate the items of the list with semicolons.

      Very amusing observation. The names for teaching and learning centers have been a hotly debated topic at various institutions. I have yet to see where a name change to one of those centers has in any way influenced their function. Until now, I had never heard a naming argument based on a concern about how the commas impact the way the center’s name appears in a written list.

    2. I also find myself wondering how I should refer to myself in a biography.

      Names are a challenge! When replying to emails, I tend to address people in the way that they signed their emails to me, but that "rule" doesn't always work.

  15. Apr 2018
    1. The College is in the midst of yet another remaking of our marketing image [1].

      What is the goal for this marketing push? Will changing the colors and fonts help to attract new students?

      How will they assess effectiveness?

    2. Like so much on this campus, the rollout was a rush job

      Yet the deliberations to get to a decision take years...

    3. [6] I think they've redesigned the Web site three or four times since they made the decision to treat the primary Web site as a marketing site. You think that they could have spent some of that time and effort on figuring out what to do with all the materials and users that they tossed off the primary Web site (other than saying "It belongs behind a password wall on GrinCo", which they promised us wouldn't be all that they would do).

      Reminds me of xkcd cartoon #773

    1. I also drive too much; I wonder how I'd do as navigator.

      Could you explain this process? What responsibilities do the two roles have?

    2. I should find more ways to pair program with other folks who are willing to challenge my coding decisions.

      I'd like to learn to code something more than HTML and CSS. I wonder if this experience is like co-writing, which I've enjoyed. It is such a thrill to edit writing with each person's ideas inspiring the other.

      As an instructional designer, nothing beats working side-by-side with a subject-matter expert (SME) to create an instructional design. We're creating something meaningful together and each brings something to the table. The SME brings the content knowledge and the ID brings expertise in teaching and learning while also bringing a novice's perspective in the subject domain. I've been fortunate to learn about wide-ranging topics from installing power lines to constructing levees for flood management to making blood smears.

    1. I'm pretty sure I made the right choice to see the panel on disability.

      What were your take-aways from the experience?

    1. Questions

      What solutions did the students come up with?

    2. I would, of course, appreciate any feedback you would like to provide.

      Nice case study. How did it go? What would you change for next time?

    1. Rather, it's the second anniversary of my first musing.

      happy anniversary

    2. I may have inadvertently engaged in revisionist history. An early musing suggests a slightly different origin

      ...or it is a sign of the learning that has occurred.

    1. One of the students brought up the site "I Write Like",

      Interesting tool. Apparently my writing is inconsistent. First I write like Bram Stoker, then Vladimir Nabokov, then David Foster Wallace, and finally Arthur Clarke.

    1. There's something wrong with the world when College employees get a 5% discount at HyVee whenever they shop, but HyVee employees only get a discount at selected times.

      Interesting point

    1. Discover that when I open it up, the screen is working again. Curse. Curse again.

      Why is there cursing if the screen is working?

    2. Did I mention that I hate computers?

      An occupational hazard or a pre-existing condition?

    1. asks us to specify the "normal" teaching schedule of each faculty member

      Why this question? How is this data intended to help decision-making?

      What would be a better way to get at the necessary information?

    2. Of course, each year's new Council seems to want to change the form.

      Does the process involve user testing to ensure usability and effectiveness?

    3. That's probably a good idea; when I was on Council, there was a wide range of quality and approaches to the proposals. Standardization can help.

      Yep!

    1. There seem to be no public descriptions of the research opportunities available in each department.

      So, what became of this? I can see a number of benefits coming from compiling such information including a) prospective students know what to expect, b) current students can find opportunities, c) faculty can share ideas with each other, and d) departmental and institutional assessment

    1. Who is well served by this kind of advice?

      Probably those who find that procrastination or writer's block is severely impeding their output.

    2. Similarly, if I'm on paragraph seven and realize that what I've just written has an impact on what I'd written in paragraph three, I should go back and edit paragraph three now, rather than later, when I'll have forgotten.

      Yep. Me too. I have a completely different writing style when writing longhand on paper than when writing in word processing software. I'm all over the place in the software. As ideas come up, I may just add them at the bottom of the document so I don't lose them or forget about them. Later, I will move those sentences to where they make sense. I'll jump here and there constantly while writing.

    3. But I know from reading numerous writing guides and from talking to the awesome folks in our writing lab that different writers find different approaches better.

      I agree with what you're saying here. However, I also know from personal experience that it is not a bad idea to force students to use a certain method for a while. It can help them develop a wider toolkit and perhaps find that some of these other approaches are useful when given a chance.

      I rarely outline now, but in grade school I had an English teacher who made us outline our essays on paper by hand (digital options were rare at that time). It was really tough for me and I would have never done it unless I was forced. Yet, by the time that class was done, I found that the outlining really had made a difference in how I thought about organizing my content. Today, I think I've internalized some of the lessons and can "outline" in my head a bit, but I realize I also do utilize a form of digital outlining in my process. It is not uncommon for me to start by adding my document headings (using the appropriate Heading styles) and then add a TOC which compiles those headings into a high-level document structure outline.

    4. I found it a frustrating piece that, in my mind, reflected a narrow understanding of writing and of available time.

      After I made a round of comments based only on reading the Musing, I come back having now read "The Tough-Love Approach to Writing". My thoughts...

      It didn't really frustrate me. First, I recognized it as someone's opinion and personal experience and not a statement of universal Truth. Writing method isn't really a hot-button topic for me, so my emotional reaction is more of curiosity than outrage (the emotional states most likely to get me to post comments). Secondly, the diagnostician in me saw this as a treatment plan for a specific diagnosis, namely writer's block/procrastination/agraphia. Her prescription seems reasonable for people suffering with that specific condition, but her medicine is unlikely to be a universal cure for all writing conditions. I can pack away her approach as one more tool in my medicine bag while also knowing I'm unlikely to use this tool often. My own writing illnesses are a different sort and probably require some other medicine. Always try picking the right medicine for the disease, which first requires accurately diagnosing the disease.

    5. Editing is not, in fact, writing

      Hmm. Then a lot of the "writing" I've done is not really writing. What does one call a collaborative effort of putting words on a page when one person starts a draft and another person goes through and builds on/modifies/rearranges the text to clarify, improve flow, etc., perhaps sending it back and forth until both people agree? I'd thought of it as co-authorship with a primary and secondary author, but maybe the second is just editor. Maybe it depends on the quantity of new content added by the secondary person?

    6. Perspectives on writing

      I enjoyed thinking about the writing process. I have lots of experience writing things (mostly reports and technical writing) but feel like I have very little actual formal instruction in writing approaches or even best-practices. You've inspired me to go do some reading on the subject.

      I've got "write a novel" on my bucket list...

    7. I take issue with the insistence that "all you can do during your writing time is write".

      I can relate. I have to write when the ideas come, not during a certain time. Since my brain is ready to write at unpredictable times, I have to employ a variety of tricks to get stuff done. Having various projects going at once helps since I can switch activities when my brain refuses to write. I can instead do transcriptions or data analysis or find literature or go follow up with that person I needed to talk to...or read some Musings...

    8. And why does it frustrate me so much?

      These rules would probably frustrate anyone who values flexibility and individuality.

    9. I tend to go off exploring rabbit holes on the Web in the midst of some writing

      Also a familiar problem. The worst is when I forget why I went searching the web in the first place. Yet, web searches tend to be an important part of many writing projects. How in the world did I ever manage to write during pre-Internet days?

    10. Perhaps editing is writing

      That's a relief... maybe I'm a writer after all. :)

      I've collaborated with several colleagues who liked my style of heavy editing to help them say what they were really trying to say. (The original author, of course, can freely keep or reject any of these edits. I think I'm better at editing others' work than writing my own from scratch.)

      My style is terrible for teaching people how to write...

    11. But I also know from experience that if I don't immediately rearrange text when I see a better approach, it will be harder to come back later and figure out what had seemed to be a better way to organize or express my thoughts.

      Me too

    1. There are, of course, other reasons to let students know their textbooks early.

      I believe there is also an accessibility reason. Students who need accommodations relating to large text or other modifications of their reading material need to know early so that the support team can help the student have appropriately formatted materials ready by the time class starts.

    2. It's hard to select books for a course before it is completely designed

      Interesting. I've known some instructors who design around a text, so in that case picking the text first is critical. For me, it tends to be somewhere in the middle...I partly design based on the text and partly choose the text based on the design I intend.

    1. Giving up processed sugar

      Probably a good idea. There's a lot of press lately about the health problems from sugar.

      I need to do a better job of staying away from the sugars.

    2. Does real maple syrup count as processed sugar

      Well, it counts as "added sugar" and it also goes through a process to concentrate it into syrup, but I'm not sure it quite counts as "processed" in the way we commonly think of processed food.

      I saw a maple syrup demonstration at a local University and found it fascinating.

    1. "I have a lot of responsibilities. I very much appreciate your thinking of me, but I need a few days to consider the impact."

      I am allergic to saying the word "no" (grin). Thus, I appreciate this post in general and this wording in particular.

    2. I always see opportunities in responsibilities

      Yes, this is also the reason I find "no" so hard.

    1. But isn't glass one of those materials that's really easy to recycle?

      Not necessarily. While not a true gaffer, I had done glassblowing for several years at a nearby University. I learned a bit about glass during that time. Also, the town in which that University resides has a municipal power plant that burns all the trash, so there is no recycling...it all gets burned for energy...except for the glass, which is not allowed.

      Here are my guesses about some reasons why glass is hard to recycle. First, when burning garbage for energy, the glass causes a mess inside the power plant furnaces. That city takes the glass, grinds it up, and uses the ground glass for other applications (if memory serves, that includes decoration and incorporation into concrete). Second, why not melt it down and reform glass? Well, I'm guessing for the same reason we had to be careful about mixing different kinds of glass when glassblowing. There are many glass formulations with different coefficients of expansion. If you melt together two pieces of glass with different coefficients, the piece may shatter. For example, I made a glass bird using Spectrum 96 for the base clear glass and added a few pieces of stained glass (unknown COE) on the surface for color. The whole thing hasn't exploded (yet), but there are definitely large cracks where the color meets the clear.

      I'm now realizing that I need to do some research on what is done with glass in industrial recycling. Is it actually turned back into glass or into other products?

    2. The goal should not be to recycle more of the waste you generate; the goal should be to generate less waste.

      I was really hoping you'd get to this point. The catch-phrase now seems to be "Reduce, Reuse, Recycle", which I think is fairly reasonable if people act on all 3 parts, but Recycling seems to get all the press. I think it is because that is the easiest one. Recycling is necessary but insufficient for sustainability. How many people stop at recycling because they feel they've done their part in saving the planet? It is much, much harder to reduce consumption and to reuse.

      Would it be better if we went back to the days when we received our milk in glass containers that would be washed and reused indefinitely? (Though, I know at least one dairy that still sells milk in glass and the customer takes the glass back to the store so the container can be reused.) There are probably economic, social and other reasons for these shifts in consumer products and our relationship with our trash...I should do some research. Maybe it is about time that I read my friend's dissertation.

    1. Mixed messages

      Thanks for writing this musing. It was thought-provoking for me.

    2. If I notice these issues as a cisgender, hetero, high-SES, relatively clueless, adult, white-ish male, why aren't the people who are creating these supposedly empowering toys paying attention?

      Probably because the people who would notice weren't invited to the table when decisions were made. The decision-makers are probably not experiencing enough negative consequences of their behavior to encourage them to change.

      I feel like it is more likely to be a self-correcting system when people experience the natural consequences of their decisions. Unfortunately, life often brings situations in which decision-makers make choices that negatively impact other people, but the decision-makers themselves are either rewarded or suffer no immediate negative consequences. (I've thought about this for years and probably could write a whole essay about this idea, but I'll keep my thoughts to myself unless specifically asked.)

    1. What We Really Value: Beyond Rubrics in Teaching and Assessing Writing by Bob Broad

      I haven't read this, but would be interested in your assessment of it. Perhaps your review would be a good musing?

    1. Now if I can only figure out how to tell when one of my pages has been annotated [11].

      Until they get that working, I'll offer a couple of less-than-ideal temporary solutions, in case they're useful:

      1. I notice Hypothes.is does provide notifications of replies to annotations, so maybe we can leverage that feature? Perhaps when writing the musing, add an annotation to the title. Then when those of us using Hypothes.is add notes to your page, we can reply to that annotation. That way you'd get notice. I'll include a sample of the idea above on the title.
      2. I also notice there is an email feature on Hypothes.is. Perhaps I can email to the author when leaving notes?
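      Until Hypothes.is ships native page-owner notifications, a third option might be to poll their public API yourself. A minimal sketch (assuming the public search endpoint at api.hypothes.is; private annotations would require an API token):

```python
# Sketch: count public Hypothes.is annotations on a page via the search API.
import json
import urllib.parse
import urllib.request

API = "https://api.hypothes.is/api/search"

def search_url(page_url):
    """Build the search-API URL for annotations anchored to page_url."""
    return API + "?" + urllib.parse.urlencode({"uri": page_url})

def annotation_count(page_url):
    """Return the number of public annotations anchored to page_url."""
    with urllib.request.urlopen(search_url(page_url)) as response:
        return json.load(response)["total"]
```

      Running this periodically (say, from cron) and comparing the count against a stored value would approximate a "your page was annotated" alert.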
    2. Rebelsky, S. A. (2018, April 9). Breaking things. Retrieved April 11, 2018, from https://www.cs.grinnell.edu/~rebelsky/musings/breaking-things-2018-04-09

    1. Still, it saddens me that folks in higher education would not understand the value or power of courses in disciplines different than their own.

      Unfortunately, it sometimes seems like human nature for people to oversimplify, underestimate, and discount knowledge/skills outside their own areas of expertise, especially when they don't recognize the limits of their own expertise. (Just because I drive a car every day does not make me qualified to drive in NASCAR. In my ignorance, I may question whether NASCAR drivers are considered athletes or the value of driving fast around a circle.)

    2. But I'd choose Grinnell's approach any day.

      Each educational approach has its trade-offs and there are few absolutes. The best design is highly contextual, depending on who is asking the question. It sounds like you have found a good match.

      My guiding principles are alignment and intentionality. Applying the alignment principle, the optimal choice is the college whose pros and cons align with the individual's context and personal characteristics (e.g., needs, values, beliefs, goals, finances, interests, support systems, etc.)

    3. However, it's unlikely to be of the same level of experience that you get in a studio course.

      Yes, learning is not limited to the classroom. However, there are likely to be differences between the learning that occurs with intentionally designed instructional experiences versus learners having an experience which was not designed to accomplish learning outcomes. The latter leaves learning to chance.

      Learning is always happening, the key to consistently achieving desired learning is to use intentional instructional designs which align with the goals.

    4. I would never presume that the opportunity to make art was the same as the opportunity to take a real studio art class.

      I'm a bit confused. So, did they not have studio art offerings at all or was it limited to art majors?

    1. I wonder whether I could convince my department to participate in an experiment in which we require every student who majors in CS to follow that process. We'd also need to find a few faculty members from other departments who would serve as a review panel. I wonder if any other departments would partner with us on the experiment? That might be a good innovation fund proposal

      Sounds like a worthwhile project. I'd, of course, recommend including an assessment professional as an integral part of the team. :)

    2. should Grinnell students pass a "curriculum defense"

      Sure, that could be done. It is not uncommon for institutions to have students complete a portfolio which demonstrates their learning, particularly in graduate programs. I wonder how many undergraduate programs take such an approach? Such a portfolio might include a written and oral component in which the learners argue how they met the criteria.

      This could be both a great synthesis activity for the learner as well as a great piece of evidence for assessment purposes.

    3. I realize that our primary goal is to provide "data" [6] to our assessors

      Was that the primary goal of those who created them? Done well, a tagging system could be very helpful for a number of reasons. For example, students and their advisors within an open curriculum institution could use the tags to select courses which would help them gain the skills and knowledge necessary for a broad liberal education.

      I'd also hope that the tagging would provide useful information for assessors; however, satisfying external evaluators should be a byproduct of producing a system that is useful internally. (We should avoid doing things that are solely for the purpose of satisfying external accreditors if those activities are not resulting in any beneficial outcomes for our institutions and students. We can find ways to satisfy accreditors which are also useful to us.)

    4. I'd guess that the bean counters wouldn't like it.

      Why do you say that?

    5. I also think it would be an interesting exercise to hand a stack of those essays to our accreditors.

      If we had essays from students in which they've clearly articulated how they have achieved the college-wide learning outcomes and have exercised intellectual inquiry and have acquired, applied, and integrated broad learning skills (HLC Criterion 3.B), that would be very useful evidence for accreditors.

    6. Faculty will talk more frequently to each other about what they see as essential

      If there are essential components, does this mean that there are certain things that a student must do to have a "good" liberal arts curriculum?

    7. We could also have them defend their curriculum in front of a panel of faculty members.

      Interesting. This approach probably would make a lot of sense within the Grinnell College culture and curriculum framework.

      What do you see as the dark-side of this approach? What about bias? What about fairness? If the same student were to present the same argument to different combinations of faculty or even the same faculty at different times, would that student always get the same answer from the panel?

    8. describe a liberal arts education and to make a compelling case that what they have chosen meets the goals of such an education

      What are they to use as the guidelines to know whether or not they're accomplishing a liberal arts education? ...the elements of a liberal education...the college-wide learning outcomes...the mission...the values...something else?

      If I could argue that I had a liberal arts education having not taken a single course outside my discipline, would that mean I had developed a good curriculum?

    9. Grounded in the literature, of course.

      So, one feature of an effective argument is that it is grounded in the literature?

      Thus, it appears we have another learning outcome relating to using scholarly literature effectively.

    10. goals of a liberal arts education

      What are these goals?

    11. If the curriculum they designed is successful, they can make a good argument. If they make a successful argument, they (or, more precisely, their arguments) convince us of the quality of the curriculum.

      Interesting. I'll have to think about this one some more.

      The hypothetical student has certainly accomplished a learning outcome of writing a persuasive argument.

      So there are no situations in which a student writes a persuasive argument in spite of designing a poor curriculum? Are there no situations in which a student has designed a good curriculum but may not have been able to write a sufficiently persuasive argument?

      What are the features that would make one student's argument more persuasive than another's?

    12. Core to our identity as academics is a willingness to embrace messiness and complexity

      Also core to our identity as academics is our attempt to make meaning out of chaos, messiness and complexity. We are creators and discoverers of knowledge; we try to make sense out of the world that surrounds us. How do you recommend academics make meaning while accounting for complexity?

    13. tags seem overly reductive

      What alternative approach would you propose?

    14. So I'm left to ponder what we really gain from the kinds of analysis that we're supposed to discuss on Monday.

      Did you get an answer to your question on Monday?

    15. don't find that the tags accurately represent what we do and the tags can be interpreted in multiple ways

      True and True, when considering their current state. However, is the problem intrinsic to the idea of tags, or is it more related to the way they've been developed and implemented thus far?

    16. As I've said before, I find it strange to group all Humanities majors together. The experiences of the foreign language students are different than the experiences of the arts students (Studio Art, Music, and Theatre/Dance) are different than the experiences of the other Humanities students (Philosophy, Religious Studies, Classics, Art History, and English). I also find it strange to group the majors in the Science division that emphasize the scientific method (e.g., Biology, Biological Chemistry, Chemistry, Physics, and Psychology) with the majors that do not (e.g., Computer Science, Mathematics, and Statistics). Still, the conglomerating of Humanities departments troubles me more.

      I find it interesting to see how different institutions categorize their disciplines. What rationale did the people who created these three divisions use to sort the fields in the way they did?

    1. So, yes, dear colleague, I do intend to provoke, whether I know it or not.

      Isn't that what a good teacher does to stimulate critical thinking skills?

      John Dewey in Human Nature and Conduct wrote "Conflict is the gadfly of thought. It stirs us to observation and memory. It instigates to invention. It shocks us out of sheep-like passivity..."

    2. Isn't the list supposed to get smaller, rather than larger?

      Nope. The reward for solving a problem is to have new problems to solve. Kinda like cutting the head off of the Hydra.

    3. Did I mention that I like gathering meaningless data, such as making lists of numbers and deltas?

      I find these analyses amusing and I had just been wondering what meaning or benefit you get out of compiling these data. You voluntarily took time and effort to pull it together, so I thought it must serve some purpose for you and I was going to ask what you concluded.

    4. But I am also critical.

      Thoughtful criticism of the things we love is something I've come to regard as a sign of an academic, expert, or enthusiast's attitude. We want to better understand the issues and to make the things we love even better.

      I will have to give some thought on how to distinguish thoughtful criticism from a growth mindset versus uninformed or malignant criticism.

    5. I write to learn. I write to think.

      Perhaps I can relate. I often need to externalize my thoughts to really take a look at them and tweak them and then re-internalize them in their new form. Not uncommonly, something "just doesn't feel right" and finding the right words to describe the issue can turn the nebulous feelings into a more workable idea. It does help to have an audience because others' reactions refine the ideas into something far more insightful than what I would have developed on my own.

      A downside to having pre-thoughts printed in public is potential for misunderstanding the contents as a final rather than developing understanding of a topic.

    6. Does writing moderately coherent text for an hour or so each night make me better at writing moderately coherent text quickly?

      Might not the kind of writing make a difference in skill transfer? I've heard of writers who have mastered novels that struggle with short stories. It is possible that gains in speed or coherence with self-reflections may not transfer to other kinds of writing, such as memos or schedules or reports or journal articles or storytelling or emails. Perhaps I should look into what the scholarly literature has to say about transfer of writing skills across categories of written work.

    7. One question I ended up asking myself was "Am I writing faster?"

      Was that one of your goals when starting musing? I'd need to go back and read the early ones.

    8. [3] It does, however, already exhibit my fondness for endnotes.

      An endnote about endnotes. Love it.

    9. Little Red Schoolhouse

      What is this? Some sort of writing training?

    1. You may annotate this page with Hypothes.is or other annotation toolkit

      Do you receive notifications when someone uses Hypothes.is to annotate? Do you want alerts?

    2. at least if you pay enough attention

      If we were paying attention, what additional gaps would we be seeing?

    3. building tools for annotation

      So, if your students are no longer building the annotation tool, how might an existing annotation tool (like hypothes.is) be useful for teaching/learning in computer science?

    4. like Print What You Like

      Didn't know about this tool. Thanks for sharing. I may start using it!

    5. Very interesting perspective on building web applications by a computer science instructor

    1. taskcade

      Love the term! I didn't even realize I needed this word in my life. It is very reassuring to be able to succinctly describe this very common occurrence in my daily life. At least I know I'm not alone in experiencing this phenomenon!

  16. Jan 2018
    1. requires a specific type of expertise and far more time and effort than are available to any assessment program.

      What is your evidence?

      The "specific type of expertise" are merely social science research skills and basic statistical analysis which faculty should be able to know already or to learn from their campus assessment professionals or enlist the assistance of assessment/IR professionals when designing assessment studies. If the measurements that you were doing were valuable enough to spend time in the first place, isn't it worth doing it right to ensure quality data? If it wasn't valuable, why did you waste your time measuring it?

    2. n Eubanks’s words: "The whole assessment process would fall apart if we had to test for reliability and validity and carefully model interactions before making conclusions about cause and effect."

      This cherry-picks Eubanks' words to make it seem like a different argument. Eubanks was making the point that some people might make this argument, but he didn't agree with it, as he follows the statement with: "How would we feel if the airline industry took that approach to building, flying, and maintaining aircraft? Should we also revert to a pre-scientific era of medical research because randomized trials are difficult and expensive? Are student outcomes valued so much less than health and safety that we should abandon all but the pretense of rigor for the majority of our work?"

    3. but measurements developed by assessors, who lack specific disciplinary knowledge, do

      Eubanks did not say this. This is a misrepresentation.

      That said, being a domain expert does not guarantee that person is a good teacher. It does not guarantee that this person knows how to design educational experiences or to design effective assessments. It does not even guarantee that this person is a proficient educational researcher, especially if not trained in educational research. How many PhD graduate students receive training in how to teach effectively and how to design valid assessment instruments and how to design educational research studies?

      Unless the person with the disciplinary knowledge also understands statistical analysis, psychometrics, and good design for instruments such as rubrics, surveys, etc, the person in the discipline may risk creating an invalid and unreliable measurement instrument and poor educational assessment design. The best approach is a collaboration between the person with the domain knowledge and a person who is skilled with creating valid and reliable assessment tools.

    4. assessors have known for sometime now that assessment does not work

      From my read of David's article, I have a different interpretation. It is not that assessment does not work. It is that we have been doing it poorly...basically a garbage-in, garbage-out phenomenon. I saw David's article as a call to up our game for generating quality data, not a declaration that assessment is futile.

    5. I think part of the reason we have poor assessments and poor data is because we do not get to the core of what we really value relating to student learning...so we end up nibbling around the edges and then complain about just having crumbs to analyze.

      We don't ask ourselves the hard questions about what we really want students to be able to do at a deep level and get consensus on this. Not uncommonly we talk in vague terminology so that individuals retain freedom to interpret these learning outcomes however they want...ultimately negating the purpose of agreeing on learning outcomes.

    6. As Upton Sinclair said, "It is difficult to get a man to understand something when his salary depends upon his not understanding it."

      Be careful casting stones when one lives in a glass house. This same statement could equally apply to the instructor who refuses to utilize assessment in a meaningful way because it might mean more work for the same salary or because the assessment might indicate that the students aren't learning...which in turn puts the instructor's salary at risk.

    7. courageous position.

      I agree. Eubanks called for assessment professionals to get back to what is meaningful and move beyond the checklist version of assessment. He called for us to make assessment better not to eliminate it.

    8. that grades

      The problem with grades, especially end-of-course grades, is that they often include things that have little to nothing to do with the learning involved...such as class participation.

      You can certainly use scores on particular exams or projects or writing, etc. to indicate the students' performance.

    9. there is no harm in fudging your analysis of the data

      I find this an alarming statement from an academic. How would you react to someone saying this about research conducted in your field?

      Even if it doesn't suggest questionable ethics, it certainly suggests disdain for the scientific method.

      How would researchers handle inconclusive data from a scientific study?

    1. taskcade

      You've given a term to a very common occurrence in my daily life. Somehow it is reassuring to both have a word to describe this phenomenon as well as know that I'm not alone!

    2. I see learning outcomes as merely a tool which is part of the instructional alignment process. Instructional alignment means that the outcomes, instruction and assessment must all match for optimum learning experiences.

      As a tool, learning outcomes can be used for accountability, improvement, or both. (The accountability approach tends to rub people the wrong way, but is sometimes a necessity or at least an inevitability.) As with most tools, much of the end result depends on the skill and intention of the person using them. If measurable learning outcomes are a shovel, I can use it to tend a garden and grow some awesome things, or I can wield it as a weapon to threaten or harm predators on my farm.

    3. I'm testing the question of whether they can program, but I'm testing it at a lower level.

      If you feel that the aggregate of all the "lower level" tested content should demonstrate achievement of the higher level skill, then I don't see a problem. If the aggregate is insufficient, then measures of student performance on more complex activities might be necessary.

      As educators we decide which are the most important sub-skills students must master and be able to combine appropriately. Measuring both the complex task and key specific tasks are important. Such measures tell me if I can complete the complex task, and if not, where the failure occurred.

      Example of measuring high and low: A high-level medical outcome might be "successfully conduct brain surgery". Sub-components of this include, but are not limited to, anesthesia and surgical technique. Within each of those we have skills related to drug dosages, patient monitoring, anatomy, controlling bleeding, suturing, etc. Within each of those we get into even more detailed points, ad infinitum.

      We might have a capstone assessment in which a student must successfully conduct brain surgery. If the student was successful, we're probably safe in assuming that the student also mastered all relevant sub-tasks. However, if the student failed, it would be good to know why, and we'd want information about sub-task performance. Did I fail because I didn't have good eye-hand control of the knife, because I didn't know the anatomy and cut the wrong part, because I administered too much of the drug, or because I failed to notice the patient stopped breathing? Thus, I need to gather data on both "high" and "low" level learning outcomes.

      From a learning outcome standpoint, I'd probably have "successfully conduct brain surgery" as my high-level course learning outcome. Week 1's learning outcome might be the more specific task of "choose appropriate drugs for anesthesia". Week 2's might be "monitor an anesthetized patient". Week 3's might be "locate anatomical structures in the brain". Week 4's might be "stop bleeding in surgical wounds". ...and so forth.

    4. My experience is that measurements are difficult to develop, difficult to administer, and often are less reliable than we would expect.

      Why do you think this happens? What examples do we have besides the writing study?

      To me, measuring outcomes is like designing a research study in which the learning outcome is the research question and the measure is a potential method. Some measurement methods are more difficult to develop, more challenging to administer, and more (or less) valid and reliable than others, but hopefully the investigators weighed the pros and cons when selecting the most appropriate measurement approaches for the study. (To increase the chances of success, I’d want to make sure that those who are designing institution-wide inquiries have experience designing publication-quality educational research.)

    5. things that we can measure

      Is something unmeasurable if it is not easy or not quantitative in nature? Not necessarily. Well-designed qualitative measures are perfectly valid (and often time-consuming) ways to inform the questions we have about student learning. Ease and expense certainly factor into the cost-benefit ratio for using particular measures. We may choose not to measure due to difficulty or expense, but that is not the same as being impossible to measure.

    6. [17] Don't you feel sorry for them?

      Perhaps I'm crazy, but I welcome questioning on these topics as it forces me to clarify my own understanding and enhances my ability to help others.

    7. In my experience, each student enters this course with their own goals and expectations and leaves the course with their own, individual learning outcomes. However, there are some common goals I have for this course. They include the following.

      This seems reasonable, but I'd need to review the syllabus as a whole to truly evaluate it.

    8. With small classes, you see students learn and you can often tailor learning outcomes to individuals. If your experience is primarily in larger classes, it seems unlikely that you can see learning in individual students and you are more likely to focus on broad instruments rather than on the individual. Or maybe I'm just biased.

      Class size could certainly be a contributing factor, but it is not the whole story. This might be an area for a longer conversation, as I’ve spent much of my academic career contemplating my experiences across different kinds of institutions and trying to understand what impacts the different institutional contexts have on student learning.

    9. if we look primarily at the common learning outcomes, we miss the individual outcomes

      Are you saying that students wouldn’t have individual learning outcomes or that we just wouldn’t know what those were because we didn’t collect the information?

      Why would looking at the common learning outcomes cause us to miss the individual outcomes? Also, what, if anything, does knowing the individual outcomes change for an instructor? How would knowing the students' individual outcomes serve decision-making regarding course or curriculum design, etc.? If it is true that looking at the common learning outcomes causes us to miss the individual outcomes, how will we know this is happening? What impact would it have?

    10. I don't expect that every student will take the same thing away from my course, and I don't think they have to. There are things that every student will do. But there are also many things that are individual learning outcomes, often more minor outcomes.

      I see the purpose of official, measurable learning outcomes as allowing a course to announce to everyone the minimum threshold skills, knowledge, and attitudes that every student who passes the course should be able to demonstrate. This helps enrolled students, and instructors teaching the next course in a sequence, know what to expect from those completing the course. (It does not prevent students and instructors from exceeding those minimums.)

      It is great if students are learning additional things in the course. In fact, I’ve taught classes where I had students develop a “contract” with me to identify what they wanted to learn and what we would do together to ensure that they accomplished their own learning outcomes in addition to the course learning outcomes. In these cases, the measures for the learning outcomes may be unique for each student.