66 Matching Annotations
  1. May 2026
    1. That counterintuitive finding — that asking students to explain their reasoning during worked examples can actually reduce their effectiveness — likely reflects the additional cognitive load that self-explanation imposes on novice learners still building basic schemas.

      I remember all of those "explain your thinking" questions where I had to pay attention to penmanship and grammar, not math.

  2. Mar 2026
    1. In a world where machines will increasingly drive breakthroughs in climate, medicine, energy, and warfare—often while simulating human knowledge and emotion—the faculty member's ultimate function is to teach students and the public how to understand that work and to lead public debates about what to do with what technology makes possible.

      The faculty member as moral(?)/logical/ethical leader in applying insights from tech?

    2. First, faculty must learn to work with AI teaching assistants—shaping how these agents interact with students, directing them toward specific learning goals, and knowing when to override or supplement the machine's recommendations.

      a potential catalyst for assessment-oriented work

    3. Think of AI as a "genius teaching assistant" who assumes much of the work of basic knowledge transfer, unlocking learning when students get stuck and providing real-time assessment.

      A great quote for the utopian view of AI in education (a perspective that's been mostly missing!)

    1. The goal is not to add one more obligation to an already long list, but to help institutional leaders, IT professionals, and accessibility advocates connect accessibility to other institutional priorities.

      This is helpful framing: don't add obligation, but help institutions connect accessibility to existing priorities.

    1. In a way, what we were seeing in that classroom was a small fractal of a larger pattern: how humans and tools interact inside a complex system, and how trust and curiosity can help a group reorganize its thinking together. The local pattern of one group of students making sense of Sheraden was reflecting something bigger about how our institution, and even our culture, is learning to live with AI.

      Brilliant! How social formations in learning contexts are recrystallizing because of the technology

    2. They were not just reading about a persona’s lived reality; they were stepping into that mindset and then testing it against the neighborhood itself, using ChatGPT as a thinking partner. The tool could simulate roles and perspectives, but the students were the ones inhabiting the personas. That is where my foray into using AI in this way really began.

      use case for chatbots (public health): user as persona; AI as thinking partner

    1. Anything involving risky human interaction. Clinical psychology, nursing, drug-abuse counseling. The danger there, of course, is that you don’t just need guardrails, you need castle walls. You have to be HIPAA-compliant, absolutely. But more than that, you have to make sure that what the student says never, ever leaves the classroom and can never be used in court against them. It has to be absolutely safe to totally screw up. And it has to be tested to hell.

      On the ideal use of chatbots as simulations: "castle walls" being needed as a caveat

    1. The problem is, it only gets you about 80% there, and that last 20% is garbage. And you have to be able to spot the 20% that’s garbage. If you can’t, you’re going to be in deep trouble when you get out into the real world and try to get the machine to do your job for you. You have to know how to do it yourself. You have to be able to do the process. You have to be able to do the real thinking yourself. That’s what we’re really all about in school, even if it doesn’t show. The product is merely a proxy for the process, because I can’t crack your head open and watch you think.

      Great framing to get student buy-in

    2. In some respects, GenAI is just the latest shiny thing to come along, and education has seen a lot of those. In some respects, we can treat it like that. On the other hand, AI is the printing press, it’s radio, it’s the internet. And it is going to fundamentally change things in ways we cannot predict. We need to be prepared for that. It is an absolutely disruptive technology, and it is going to change things in dramatic, fundamental ways. We just have to accept that and be prepared to ride it.

      quotable. A good way to portray how genAI is "just another technology" at the same time as it's completely revolutionary.

  3. Feb 2026
    1. But the AI pragmatists presuppose specific metacognitive abilities: Can writers learn to recognize, in the moment, whether a particular difficulty is productive versus merely frustrating?

      Supports the idea that metacognition might be the most powerful skill to build as we learn to navigate AI

    2. The barriers that non-native speakers face are real and well documented; if AI can lower them, that’s a genuine benefit for equity in science.

      Anecdotally, I've found that non-native speakers are much more receptive to the benefits of externalizing with AI

    3. Talking through your ideas with a colleague, defending your approach at a lab meeting, explaining your project to a collaborator from another field

      e.g., alternative forms of assessment

    4. There is also strong evidence from fields such as aviation and medicine that automated assistance will accelerate skill decay, even in experts.

      That's scary! Bookmarking this study for later.

    5. The worry of “cognitive debt,” or skill decay, or however you want to frame the general issue, feels legitimate to me. But so does the counterargument that these worries are overblown

      captures my thoughts exactly!

  4. Nov 2025
    1. The same thing happened in the 2010s with massive open online courses, or MOOCs. Tech evangelists promised that we would not need as many professors, for one expert could teach tens of thousands online! But MOOCs were a mid technology that could barely augment, much less replace, deep expertise. Receiving information is not the same as developing the facility to use it.

      AI as a "mid" technology akin to MOOCs

    2. The problem is that asking the right questions requires the opposite of having zero education. You can’t just learn how to craft a prompt for an A.I. chatbot without first having the experience, exposure and, yes, education to know what the heck you are doing. The reality — and the science — is clear that learning is a messy, nonlinear human development process that resists efficiency. A.I. cannot replace it.

      Why AI can't replace teachers: it's "a human development process that resists efficiency"

    1. In 2016, Jeff Stein, a veteran journalist covering the US intelligence community, got a tip-off: a small insurance company that specialised in selling liability insurance to FBI and CIA agents had been sold to a Chinese entity.

      What a thought!

    1. But we no longer live in an age of information scarcity. The lecture is a solution to a problem we no longer have. The challenge for colleges and universities in the twenty-first century is to deliver artisanal-quality learning at an industrial scale. For decades, this has been an impossible dream. Until now.

      Logic: The lecture format was put in place to "allow one expert to broadcast information," which can now happen in a multitude of ways (cue the "flipped classroom"), so it should be supplanted by at-scale personalized learning. This is the best way to scale the Socratic ideal, short of direct expert-to-learner instruction.

    1. In the liberal arts, we often critique capitalism’s exploitative systems, yet we reproduce the same patterns in our own knowledge economy. We externalize the costs of learning and call it normal.

      Great quote here. What does an anticapitalist course design look like?

    2. And here’s the truly jarring part: many of those same publishers are now selling our work again. This time to AI companies without our consent or compensation. I’ve come to label it as academic fracking: extracting value from our intellectual commons, layer after layer, until nothing of public good remains.

      What data are these publishers collecting and selling?

    3. The “fetish” for particular kinds of writing—certain tones, certain sentence structures—reasserted itself, this time through algorithms and bias detection tools.

      Resurfacing our biases through AI

  5. Sep 2024
  6. Jun 2024
    1. the professor released a custom GenAI prompt designed to interact with learners and tutor them on topics from the quiz.

      Advanced use case: a custom prompt that is focused on the assignment!

    2. This time-saving effort allows the instructor to focus time and effort on interacting with learners.

      Optimistic spin: saves time to focus on interaction with learners. Cynical spin: GenAI can do my job for me and is putting my very job at stake.

    3. Instructor Proxy. In this mode, the GenAI's response is advocating on behalf of the instructor while interacting with learners.

      I'm finding this a little tricky to disentangle from the Instructor Assistant mode, because both are aiming to develop materials for the students. Maybe it's that the Instructor Proxy is focused on output: creating output that goes directly to the learners, rather than being instructor-mediated? The distinction isn't that clean.

    4. GenAI is introducing novel ways for learners to interact not only with their peers and instructors but also with autonomous entities

      This is some great framing, because it opens up a new set of questions:
      * How will interactions between students and instructors change?
      * How might interactions between students and their content change?
      * How might interactions between students and their classmates change? [Is this question even relevant?]

    1. Then we will need to teach students how to work with AI

      This, I think, is the endpoint for academic tech/instructional designers, but how does this happen? Does "train the trainer" work better, or should IDs/AT staff be integrated into the curriculum design process?

    1. It turns out that by adding your assignments to the assignment area, many of these functions of the course management system are automatically activated

      This is the main takeaway from this page, right?

      I think reframing this as... Let GLOW do the work for you! or something like that might work.

    1. Creating a Pages Front Page (Special Instructions)

      This page isn't loading for me. 6/10/24 at 5pm.

      If others have this problem, I suggest making this a link to an external resource. The link works for me.

    1. s clear/consistent navigation, a logical layout, and easily-accessible resources:

      I'd be willing to make an optional deep-dive video into how a structured home page aligns with UDL principles!

  7. May 2024
    1. The end goal is to develop standards for integrating the ongoing ITS research (and other data-backed research streams) into continuous improvement of AI tutors.

      Would these standards be visible to end-users?

    1. But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.

      i.e., incapable of reasoning. Will that change?

    2. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

      Concise explanation of how machine learning differs from human learning

  8. Jul 2023
  9. africana-studies.williams.edu