68 Matching Annotations
  1. Jul 2025
    1. They assign students, for example, to generate an essay using A.I. and then have the students fact-check it. That usually results in a better understanding of the limitations of the tech and how it can go awry.

      This is a great suggestion to keep in one's toolkit.

    2. Her professor, an adjunct, like many other professors at her university, had gigs at two other universities and was teaching huge online classes, meaning she had hundreds of students. How can someone possibly give meaningful feedback with a ratio like that?

      Obviously you cannot.

    3. As profs are pushed to raise enrollment caps, they lose the chance to have the meaningful interactions that prevent prohibited AI use and other academic dishonesty.

      Not only to catch AI use, but also to better understand student deficits and provide their own substantive feedback, of course.

    4. The prof in question is an adjunct, as seems to be the case in some of the other examples. So they are taking on a very large load, probably with inadequate compensation, while also balancing other adjunct gigs.

      One of the fundamental problems here is severely overloading and underpaying adjunct faculty (or any faculty, for that matter).

    5. To your article, I am surprised that nobody pointed out how BAD AI is at grading. I have tinkered with Chatgpt to see how it would evaluate students vs my evaluation (initially thinking I might benefit from new insights and parameters). However, the program is only capable in certain ways. It can tell you how well a student applied formatting, used grammar, and structured their essay (the latter only to a certain degree though), but it is excruciatingly bad at recognizing the difference between bland, surface-level analysis and thoughtful, nuanced, and innovative content.

      This quote is from the comments. I think this is an excellent point about where LLMs (at least currently) fall down, and where a professor's insights and feedback are the "special sauce". And importantly, for professors: would you be comfortable not knowing whether an LLM had graded a student's work on substance or only on surface features?

    6. Dr. Shovlin compared it to a longstanding practice in academia of using content, such as lesson plans and case studies, from third-party publishers

      I have had highly varied experiences with this, as a few professors I've had were clearly reading off slides or lecture materials they didn't prepare themselves and didn't bother to review ahead of time.

      It's absolutely true that supports like textbooks, slides, etc., are common, justifiable, and useful in a classroom. But again, you can purchase the textbook yourself; what has the professor added to make the course worth the cost of taking it?

    7. The professors said they had used ChatGPT to create computer science programming assignments and quizzes on required reading

      Even pre-LLMs, I remember taking an open-book math test whose questions, when I examined them after the exam, were very clearly cribbed from practice tests published by a number of other universities (changing a few constants in an otherwise identical set of equations to be solved, for instance).

      This approach made me lose a lot of faith in the quality of my instructors who couldn't be bothered to write their own novel math exam.

    8. The value that we add as instructors is the feedback that we’re able to give students

      Yes, though I would also add "and have our teaching be responsive to the trends we see in student data, quantitative and qualitative".

    9. Working at the school was a “third job” for many of her instructors, who might have hundreds of students,

      Again, these are the actual core problems here. Using AI to address them puts the prof in danger of irrelevance: anyone can write an (attempted) anthro paper and ask an LLM for grades, feedback, etc.

    10. helped them with overwhelming workloads

      I think this is actually central to the problem. LLMs are being used to solve a problem rooted not in a need to access information, but rather to solve:

      - existing problems in higher ed, like attitudes towards who is responsible for student learning, and
      - newer problems in higher ed, like fundamental changes to funding, which institutions are trying to solve by overloading class sizes (a solution that might not have been considered if there weren't already outdated opinions w/r/t the first point).

    11. They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free

      I think students' views that this is hypocritical may be reactionary, but it's up to professors to delineate how their ChatGPT use is fundamentally different from students'.

      IMO, the use of LLMs to create novel works (course syllabus, answers to quiz questions, etc) robs the writer of a very crucial aspect of writing, the construction of new knowledge and analysis that's produced as part of the writing process.

      Is it necessary to write and re-write a syllabus each year? Probably not.

      Is it necessary when writing quiz answers, essays, and other aspects of student work? Certainly. Importantly, students (and maybe some faculty) think that assessments need to reflect what is already learned and memorized, when in reality assessments provide more opportunities for learning through the process of assimilation and analysis of ideas.

      Now, is this necessary for professors who are writing feedback and responses to student work? Yes. Using LLMs for feedback gets back to the central idea of this article: if the feedback comes from an LLM, what are students paying for? And importantly, how are professors factoring student work into their next lecture, and into their instruction overall, if they're not reading it?

      Alongside this, is this necessary overall for professors? Probably yes. Lacking an understanding of what obstacles students are facing in their learning can only lead to an unresponsive class experience.

      What is the special sauce students are getting out of a class?

    12. expand on all areas. Be more detailed and specific.

      If the professor isn't applying their personal experience and knowledge to their notes, what is the benefit of the student taking the class with that particular professor?

  2. Apr 2025
  3. Mar 2025
  4. Jan 2025
  5. Oct 2024
  6. Jun 2024
  7. Apr 2023
  8. Mar 2023
  9. Feb 2023
  10. Jan 2023
  11. Nov 2022
  12. Sep 2022
  13. Aug 2022
  14. Jan 2022
    1. And here in the wild I have you: two halflings, and a host of men at my call, and the Ring of Rings. A pretty stroke of fortune! A chance for Faramir, Captain of Gondor, to show his quality!'.... He stood up, very tall and stern, his grey eyes glinting.
  15. Oct 2021
  16. Sep 2021
  17. Aug 2021
  18. Apr 2021
  19. Dec 2020
    1. The only type of grade supported by this service is a decimal numeric grade in the range from 0.0 - 1.0.  Additional types of outcomes and the ability for the TP to perform more detailed outcomes operations may be added at a later date.

      Reason for the grades being normalized to the 0.0–1.0 decimal range.
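      As a rough sketch (the function name and rounding precision are my own, not from the spec), normalizing a conventional point score to the decimal range this outcomes service accepts might look like:

      ```python
      def to_lti_score(points_earned: float, points_possible: float) -> float:
          """Normalize a raw point score to the 0.0-1.0 decimal grade
          expected by the LTI Basic Outcomes service."""
          if points_possible <= 0:
              raise ValueError("points_possible must be positive")
          score = points_earned / points_possible
          if not 0.0 <= score <= 1.0:
              raise ValueError("score outside the supported 0.0-1.0 range")
          return round(score, 4)

      print(to_lti_score(8.5, 10))  # 0.85
      ```

      Any richer grade scale (letter grades, 1–10, percentages) has to be squeezed into this single decimal before it is sent back to the tool consumer.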

    1. Nothing has any purpose. Life is meaningless. Any purposes you imagine you have are illusions, errors, or lies. This is the stance of nihilism. It appears quite logical. It might seem to follow naturally from some scientific facts: everything is made of subatomic particles; they certainly don’t have purposes; and you can’t get purpose by glomming together a bunch of purposeless bits. It is easy to fall into nihilism in moments of despair; but, fortunately, it is difficult to maintain, and hardly anyone holds it for long. Nevertheless, the seemingly compelling logic of nihilism needs an answer. It turns out that it is quite wrong, as a matter again of science and logic. But because that is not obvious, three other stances try (and fail) to find a middle way between eternalism and nihilism.

      Hypothesis is reporting that the HTML of this selection contains a number of newlines that don't appear in the HTML on the page.

  20. Sep 2020
  21. Aug 2020
    1. Add an image description in between the square brackets. This will help blind and low-vision users, and is a best practice for using images on the web. For information on how to write good image descriptions, see this blog post from the Stanford Web Services Blog.

      This is better known as the alt attribute!
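      For illustration (the filename and description here are made up), the bracketed Markdown description ends up as the `alt` attribute in the rendered HTML:

      ```html
      <!-- Markdown source: ![A golden retriever leaping to catch a frisbee](dog.jpg) -->
      <!-- Rendered HTML: the bracketed text becomes the alt attribute -->
      <img src="dog.jpg" alt="A golden retriever leaping to catch a frisbee">
      ```

      Screen readers announce the `alt` text in place of the image, which is why a description of the image's content (not just "image" or the filename) matters.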

  22. Jun 2020
  23. Jan 2020
  24. Sep 2019
  25. www.latex-tutorial.com
  26. Aug 2019
  27. Jul 2019
    1. Your teacher may require you to use tags for a variety of reasons.

      Tags would also be a great way to practice metacognition! One technique would be to have a list of categories you use when annotating (such as "Evidence", "Central Idea", and "Vocab") and apply one of these as a tag to each annotation.

      The goal is to go beyond analyzing the text and identify why you're analyzing that part of the text. You can read more about metacognition here.