165 Matching Annotations
  1. Dec 2023
    1. oppression created, exacerbated, or reproduced by AI and algorithms:

      Here's a wonderful piece to add: "How AI reduces the world to stereotypes" https://restofworld.org/2023/ai-image-stereotypes/

    2. we carefully work toward fairness in AI systems as well."

      I'm not sure we should suggest that such fairness is really possible. I haven't heard anyone, even industry people, be optimistic that bias can be completely removed.

    1. In his blog opencontent.org, David Wiley of Lumen Learning has described what he calls “generative textbooks,” essentially a list of prompts for students to use with a chatbot to learn about a specific topic and ask the bot to ask them formative questions about the material after they read about it.

      I would add to this that customized chatbots offer a different way to do this. Instead of providing students with prompts, we provide a chatbot with customized knowledge and instructions to allow for a little more supervision of what students may get. This is something OER textbook authors have been experimenting with as Poe.com and OpenAI rolled out supporting functionality in Fall 2023.

      We can upload an OER textbook as context for a chatbot so that it draws on that text. For example, here is a tutor bot for the OER How Arguments Work: https://chat.openai.com/g/g-rE8U5Qd7P-argument-tutor The concern here is that the tutor bot may get something wrong or misrepresent the textbook. It may also fail to quote from the textbook appropriately when using the exact words from the text.
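      To make concrete what "uploading a textbook as context" amounts to under the hood, here is a minimal sketch using the OpenAI Python SDK (v1). The file name and instructions are hypothetical; custom GPTs handle document retrieval for you, but the underlying idea is the same: the model only "knows" the textbook because we put it into the prompt.

      ```python
      # Minimal sketch of a tutor bot grounded in an OER textbook excerpt,
      # using the OpenAI Python SDK (v1). The file name is hypothetical.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      with open("how_arguments_work_excerpt.txt") as f:
          textbook = f.read()

      response = client.chat.completions.create(
          model="gpt-4",
          messages=[
              {"role": "system", "content": (
                  "You are a tutor for the OER textbook 'How Arguments Work'. "
                  "Base your answers on the excerpt below, use quotation marks "
                  "when you repeat its exact words, and say so when the excerpt "
                  "does not cover a question.\n\n" + textbook
              )},
              {"role": "user", "content": "Can you quiz me on counterarguments?"},
          ],
      )
      print(response.choices[0].message.content)
      ```

      Note that the instructions here try to address the concerns above (misrepresenting the text, failing to quote it), but nothing guarantees the model will follow them.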

    1. We saw in module 1 that generative AI depends on an extensive training process where a computer system analyzes a large body of examples of text, images, video, or other forms of data.

      A description of the range of bigger-picture questions around AI and intellectual property would help to frame this as one of those. Something like: "When considering intellectual property and AI, several separate questions and concerns have been raised. 1. Have IP rights been respected in the training of AI systems? 2. Do auto-generated text and images respect IP law (i.e., are there plagiarized images and text in the outputs? Answer: yes, sometimes.) 3. Are auto-generated text and images eligible for copyright/licensing (referring to genAI outputs that don't duplicate copyrighted work)?"

    2. What is the corpus a

      Maybe revise to show the connection to AI and copyright here. Something like "Is Generative AI Compliant with Copyright Law?"

    1. Articulate the relevance of the corpus - large data sets used to train AI models - as it relates to content bias, quality, and context.

      Could this be reframed slightly to emphasize the purpose? Something like "Describe the concerns that have been raised about intellectual property and the data sets used to train AI."

    1. allows users to transform single words, phrases, or sentences into photo-realistic images.

      Sounds like marketing copy? Maybe a simpler description. Just say it's like DALL-E 3 and considered state of the art

    2. Users can either prompt it to generate a new image or add an existing image and prompt to edit the image to meet certain specifics.

      You access it only through ChatGPT now, right?

    1. conversational tool,

      language model (it has many uses, not all conversational) with a chatbot interface

    2. is a search engine with the underlying technology from ChatGPT.

      connects a search engine to a chat interface based on the same language models as ChatGPT.

    3. chat service from Google.

      Based on the Gemini Pro language model

    4. sources its information directly from the web.

      Not exactly--see my earlier comment

    5. human-like interaction

      I would just say "highly interactive." There are so many ways it's not humanlike, and the dangers of anthropomorphizing are real.

    6. Whether you're looking for

      This phrase smacks of marketing. "Revolutionized" does too, but it's accurate so maybe fine?

    7. Content

      "Content" is not specific to text. What about just "text generation"?

    1. Consider the privacy policy and consider what kind of information you are comfortable sharing with the model as you prompt it.

      If you are concerned about these issues, consider looking for a platform that does not require an email address or even any login. That is the case as of this writing for Stable Diffusion's image generation and Perplexity.ai.

    2. 100+ Creative ideas to use AI in education.

      This resource is more about pedagogy than about exploring AI platforms--maybe hold off on this reference here? The first two are excellent.

    1. Need to write a key question for this page - key thought regarding academic integrity in connection to ethics in higher ed. - CM

      Brainstorm: Will the availability of generative AI help or harm student learning? Will it make the learning environment more or less fair? Specifically, to what extent should we be concerned about students missing out on intended learning experiences because they are using AI to substitute for activities designed for their own learning?

    2. hat-GP

      no hyphen

    3. has

      have

    1. AI Prompt Engineering

      Would you consider not labeling this "prompt engineering," which really suggests that it is technical knowledge? I would say "Strategies for Prompting AI" or "Strategies for Getting the Most out of AI." Then note that this is sometimes called prompt engineering but is not as technical as people might suppose.

    2. For example, the prompt "a tree with no leaves" is likely to produce an image including leaves. As an alternative, negative prompts allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image.

      Is this still the case? I have heard that specific knowledge of prompting techniques is no longer necessary/there's been a big change with DALL-E 3 in this regard.
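      For context, here is a minimal sketch of how negative prompts work where they are explicitly supported, using Stable Diffusion via the Hugging Face diffusers library (DALL-E 3 does not expose such a parameter; it rewrites prompts instead):

      ```python
      # Minimal sketch: negative prompting with Stable Diffusion via the
      # Hugging Face diffusers library. Requires a CUDA GPU as written.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # The positive prompt describes what we want; the negative prompt lists
      # terms the model should steer away from, such as "leaves".
      image = pipe(
          prompt="a bare tree in winter",
          negative_prompt="leaves, foliage",
      ).images[0]
      image.save("bare_tree.png")
      ```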

    1. Spectrum of AI-Human Collaboration

      This sounds great... I got a "you need to request access" message. Is there another link?

    1. n.

      Add in a reminder that we don't have the right to copy student work into these systems. Any submission of student work to AI has to go through institutional privacy and security checks, FERPA vetting, no?

    2. You can safeguard yourself

      ...and students

    3. AI is evolving. While AI text detection software isn't foolproof today, text-generated AI might be labeled as AI in the near future.

      On reflection, I'm wondering if we should take this sentence out? It suggests too much emphasis on being caught as a motivator... probably what's in the bullet point below is enough.

    4. Make

      capitalization

    5. However, before we begin,

      Maybe we can ease the transition and connect to the previous section on AI ethics here. Something like "One positive step toward using AI ethically is to adopt certain practices around its use from the beginning. The guidelines below, suggested by Anna Mills, are meant to address several of the ethical concerns discussed in the previous section, including copyright, privacy, truth, bias, and human labor. They are by no means an answer to these concerns, but they are designed to mitigate possible harms."

    6. by Anna Mills.

      maybe say something like "an OER textbook author who curates an open resource list on AI for the Writing Across the Curriculum Clearinghouse." So it doesn't seem random that you're sharing something recommended by me?

    7. You may also discover a better tool not on this list (if so, please share it here [link to shared document area for course/module]. We know it can feel overwhelming when there are so many tools out there, so this week, we will share the top text and image content creation tools and provide the pros, cons, and strategies for using each tool.

      Should this refer back to the section on tools in the previous module?

    8. In the below example, Anna Mills used AI to generate template phrases for critiquing AI outputs, which she included in a section of her OER textbook, along with the following disclosure:

      Anna Mills wrote several template phrases for critiquing AI outputs on her own. She then used AI to generate more examples and selected sparingly from among those. Below, she puts the AI content in quotation marks with an asterisk and includes a note about their origin in the acknowledgments.

    9. Mills, Ann (2023, May 1). Towards Transparency: How Can We Distinguish AI from Human Text Going Forward? Licensed CC BY NC.

      Anna

    1. Datafaction

      datafication?

    2. Ethics and the use of AI are inherently and necessarily connected, and AI practices in education should begin and end with ethics. But while we likely can agree that we should do all we can to create and use new technologies in an ethical way, there is no silver bullet, particularly with the fast-moving target of AI. Nevertheless, it is imperative for us as educators to try. Doing so ensures the most effective and responsible use of, and teaching about these technologies.

      I so appreciate this framing, and this whole page!

    1. You'll find that each platform offers a unique blend of features, capabilities, limitations, and use cases.

      This might be a good place to put two notes on how the way we encounter these systems is changing. One trend is toward integrating AI into existing software like Google Docs, Word, or the Office Suite. Another trend is toward multimodal, more agentlike systems that "decide" to do multiple kinds of things, so not a discrete text generator app but a system that will run code, search the web, and generate and analyze images as well as generating text. ChatGPT Premium is already like this.

    2. Claude: Another conversational tool, Claude is considered by many to be parallel in quality to ChatGPT.

      I would argue there also should be mention of Perplexity.ai (doesn't require login, combines search with various underlying models) and Poe.com (lets you customize your chatbot, including choosing which underlying model to use).

    3. ChatGPT

      Curious why you chose "Text to Content," since content can be something other than text... an image is content. How about "Text generation" and "Image generation" for the tabs?

    4. OpenAI’s latest iteration of its image and art generation AI tool. Users can either prompt it to generate a new image or add an existing image and prompt to edit the image to meet certain specifics.

      Note that a main point of access for Dall-E 3 is now through ChatGPT itself. It's combined with the chat.

    5. ChatGPT.

      The latest release of Claude is considered to be close in quality to premium ChatGPT.

    6. Microsoft has combined its search engine with the underlying technology for ChatGPT.

      Bing offers free access to OpenAI's more sophisticated GPT-4 underlying model. See my tweet based on Ethan Mollick's tweet: https://x.com/EnglishOER/status/1732626937074614607?s=20

    7. sources its information directly from the web.

      This is not quite right. Bard offers some cross-referencing of language model output (currently Gemini Pro) with search output. They are two different systems. Bard can be used just for text generation without search.

    8. ChatGPT offers a human-like interaction experience.

      Note that it is available in a free version or a premium one, and that the premium one is based on a more sophisticated underlying language model, GPT-4.

    9. Ideogram: A free text-to-image generator. It supports a diverse set of image style tags and can render coherent text inside images.

      I would add Adobe's Firefly and Stable Diffusion. We could add a line at the top or bottom that notes that increasingly, image generation is integrated into existing software like PowerPoint, Google Slides, and Canva

    1. Evaluate the transparency of AI models and propose strategies for enhancing transparency in AI-generated OER content creation

      These seem like two very different issues. Transparency around how the model is created is one thing; transparency in terms of citing/acknowledging that something is AI-generated is another. Can they be separated into two bullet points?

    2. AI-generated OER content.

      Could we say "AI-generated supplements to OER content"? Because the AI-generated part in itself is not eligible for copyright or CC license so not OER...

    1. Answer these questions to check your understanding of this module’s concepts. The quiz is not timed and you have three attempts. Your highest score will be kept (aim for 80% or better!).

      I made a few minor edits to the quiz.

    1. Be sure to check out this resource developed by Dominic Slauson, which includes a range of helpful examples, tips and guidelines: Getting Started with Generative AI in Higher Education.

      Thanks for this very cool resource, which I will add to my list! I wonder if this might fit better in a later section about classroom applications. The above graphic is great for thinking about organizational processes that build AI literacy, which seems like it should come first. The link leaps straight into using tools; there should be some understanding first.

    2. AI tools also can be used to provide learning support in such forms as identifying at-risk students, recommending courses, increasing motivation, and predicting student performance.

      Consider specifying something like "Before the rise of generative AI, other kinds of AI systems were already in use in higher education and use of them continues to expand..."

      Or take out the discussion of the other forms in case it muddies the waters?

    3. AI tools

      Are you still referring to generative AI?

    4. detect plagiarism

      This last wouldn't be generative AI, right?

    1. Mills, Ann (2023, May 1). Towards Transparency: How Can We Distinguish AI from Human Text Going Forward? Licensed CC BY NC.

      Maybe cite a different talk that's more relevant (I used that same NYT article): "How Teachers Can Harness AI in Our Work," a workshop for the Center for Learning and Teaching at the American University in Cairo and Equity Unbound, given by Anna Mills, English Instructor at Cañada College, October 24, 2023.

      https://docs.google.com/presentation/d/1lXrjlrGPUnCWAa0aoVWkgl35nQsExl04/edit#slide=id.g29154ec8d56_0_5

    2. worth the hype

      What about "Does AI live up to the hype"? I don't think "worth" quite fits since we don't exchange hype for something else...

    3. Mills, Ann

      Anna

    1. There are alternate perspectives that suggest that AI already can use personality profiling and biometrics to predict human behaviors

      This doesn't contradict what we've said above, in my mind. No one is disputing that systems can be designed to mimic human behavior and give the appearance of creativity or emotional intelligence. However, it still lacks real experience or capacity for empathy. I know this is murky philosophical territory where there is a range of opinion.

      What if we said something like "There are alternate perspectives; some argue that current AI could be conscious according to certain definitions, or that it should be considered to have rights. However, most computer scientists have argued that current AI system behavior is fully explained by the underlying algorithms and data." ?

    2. AI has a goal, which is to influence humans; etc.

      This could confuse people and suggest sentient AI. Any computer program that is designed with a purpose in mind can be said to have a goal.

    3. innovative, unique

      I was surprised by this since most people emphasize that AI output is not original or innovative since it is designed to replicate a pattern. Any innovation would come from the prompt, no?

    4. human-generated c

      I would add "largely" before "human-generated."

    5. can

      Rather than "can" I would say "come up with plausible output on the following tasks:"

      Below the bullet pointed list, I would add something like "It's worth emphasizing that its performance on any of these tasks may be plausible at first glance but still contain fundamental flaws." That would segue into the next section...

    6. bega

      ended

    7. 2021
    8. mirror

      Curious if "mimic" fits here instead of "mirror"?

    1. Google

      I would add Google search autocomplete and Outlook email autocomplete as well as texting next word prediction. These are good examples because they are based on language models--we are already using language models.
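      To illustrate the point that autocomplete is next-word prediction by a language model, here is a minimal sketch using the Hugging Face transformers library with the small GPT-2 model (the example sentence is invented):

      ```python
      # Minimal sketch: next-word prediction with a small language model (GPT-2),
      # the same basic mechanism behind autocomplete features.
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")
      result = generator("Thanks for your email. I will get back to",
                         max_new_tokens=5)
      print(result[0]["generated_text"])  # e.g. continues with "you" and a few more words
      ```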

    2. called deep learning

      Take this out perhaps since deep learning is a particular variety of neural networks. Neural nets and machine learning have been around but deep learning and transformers are the innovations that led to current generative AI.

    3. Video to Watch

      I would add below this: For a better understanding of large language models in particular, see OpenAI researcher Andrej Karpathy's popular, substantive, and not overly technical introduction: https://www.youtube.com/watch?v=zjkBMFhNj_g

  2. Oct 2023
  3. chat.openai.com
    1. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification". Proceedings of Machine Learning Research. 2018.

      This is a real source: https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf The summary of it subtly underplays the findings of the study, which found that all the systems (and datasets) looked at were biased, not "some."

    2. some

      The article itself finds this trend with all the systems it studied, not some of them. The conclusion finds that "all classifiers performed best for lighter individuals and males overall. The classifiers performed worst for darker females."

    3. AI Can Play and Master Complex Games Without Prior KnowledgeThe AI model AlphaZero, developed by DeepMind, was able to teach itself to play chess, shogi, and Go, reaching superhuman performance levels in each, without any prior knowledge of the games, simply by playing against itself.

      This does seem like an accurate summary except that the word "superhuman" has supernatural connotations that are misleading. Many things we wouldn't normally classify as superhuman perform better than humans on specific tasks. We wouldn't say that a computer has "superhuman" memory or that a forklift has "superhuman strength." Their capacities don't suggest that they are entities beyond or superior to humans.

    4. Silver, David, et al. "A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play". Science. 2018.
    5. AI Can Dream and Visualize Its Own Imaginations

      The title is extremely misleading: it suggests AI is dreaming in a human sense and that it has "imaginations." None of that is suggested by the paper referenced.

    6. "DeepDream - a code example for visualizing Neural Networks".

      This is a real source: https://blog.research.google/2015/07/deepdream-code-example-for-visualizing.html?m=1

    7. neurons

      The summary is more accurate, but even there it uses the word "neurons" to refer to AI. That word doesn't occur in the blog post referred to and is misleading because it suggests AI has neurons without in any way distancing the AI feature from biology.

      Even Geoffrey Hinton, one of Google's principal architects of this technology, who is certainly a booster of it, is careful to specify "artificial neurons" when he refers to the nodes in a neural network. See https://www.imprs-life.mpg.de/26527/093_hinton_g.pdf

    8. Google's Magenta project focuses on creating music using AI.

      This project documentation doesn't seem to claim anywhere that the system creates "original" music. Rather, the blog posts I found emphasize humans working with the AI system to create music. They leave open the question of whether a system like this could create original music on its own. https://magenta.tensorflow.org/blog

    9. "DeepArt Transforms Your Photos into Artworks"

      In what sense would these paintings be original? I couldn't determine if this is a real article. This website is down, but the title phrase "DeepArt: Transforms your photos into artworks" was found in a Medium article by Linda Bucksey at https://medium.com/@linda_bucksey/ai-vs-artist-embracing-the-ai-wave-in-graphic-design-217dcf8f9d4d that links to https://www.deeparteffects.com/. I did not find any assertion that this system creates "original" art.

    10. Source: Esteva, Andre, et al. "Dermatologist-level classification of skin cancer with deep neural networks". Nature. 2017.

      This is a real source, but the abstract doesn't say anything about AI doing a BETTER job of detecting than human experts. It says "The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists." https://www.nature.com/articles/nature21056

  4. Jul 2023
    1. urgent need for humanities educators, who are empowered and positioned to help those learning AI use and respond thoughtfully to this transformational technology
    2. hat is, the nature, capacities, and risks of AI tools as well as how they might be used.

      Note that critical perspectives and how-to-use-AI are both involved here.

    3. A recent survey of our combined memberships indicates that most faculty who responded appear to lack policy guidance from their institutions (86%) and have been hesitant to change what they are doing in their own classroom policies (79%).

      Survey results as of late spring 2023.

    4. existing educational institutions and funding streams are not adequate to support the rapid development of curricula for critical AI literacy to supplement existing digital literacy curricula.

      A key point. What are the best channels for additional support?

    5. AI systems to make clear the human source of information that informs a particular output wherever possible, whether through citation, linking, or other appropriate means, so that users can understand the context of information and verify and vet it.

      This is not simple given the black box nature of machine learning systems, but it is an active area of research.

    6. Scholarship acknowledges that not all cheating and related learning loss can be prevented, but structures that encourage academic honesty and make dishonesty harder can substantially reduce incidence of violations and associated harms.

      Again, harm reduction.

    7. An indication of provenance in AI tools will give students guidance about appropriate and inappropriate use of generative AI in the learning process and a foundation for accountability

      This seems like a modest demand.

    8. While technical methods for labeling AI text may not be foolproof, we nonetheless maintain that regulation requiring an indication of provenance will promote ground rules and frameworks that will reduce disinformation in society and academic integrity more specifically.

      Could this be compared to a harm reduction approach?

    9. uncritically privileging

      What might be done about this?

    10. AI users should know when and how content is generated by an AI tool. This will allow human knowledge to be credited, traced, and verified. Otherwise, AI users may unknowingly consume AI-generated material with the assumption that it is created by a human who claims responsibility for its veracity and has intentions grounded in an awareness of others.

      Will readers realize we are asking for this kind of labeling of AI text as well as images, audio, and video?

    11. information integrity,

      I wonder how broadly this phrase resonates.

  5. Mar 2023
    1. urgent will overshadow the important,”

      I wanted to understand this better. Is she saying that responding to ChatGPT seems urgent but that more fundamental and familiar questions about our teaching are actually more important? I have sometimes been concerned about my own focus on ChatGPT and whether it is distracting me from other things I could focus on that might improve my teaching more.

    2. adapting teaching to this new reality

      I don't remember how I put this but this phrase seems so broad--we wouldn't all agree on adapting teaching, but we might all agree that we need to make explicit policies about AI.

    3. wrong answers for multiple-choice tests.

      Maybe... but these should be carefully crafted too.

    4. Established plagiarism-detection companies have been vetted, have contracts with colleges, and have clear terms of service that describe what they do with student data. “We don’t have any of that with these AI detectors because they’re just popping up left and right from third-party companies,” Watkins said. “And I think people are just kind of panicking and uploading stuff without thinking about the fact that, Oh, wait, maybe this is something I shouldn’t be doing.”

      Thank you to Marc Watkins for his leadership in pointing this out! I had not seen this clearly before I started reading his tweets on it.

    5. If we haven’t disclosed to students that we’re going to be using detection tools, then we’re also culpable of deception,” says Eaton.

      That's true--it hadn't occurred to me that instructors would play gotcha in that way. I suppose I'm naive. I would expect teachers to want to share with students ahead of time to maximize the chance that the students will learn more and do well rather than maximizing the chance of catching cheating.

    6. ban the use of ChatGPT entirely. That, she says, “is not only futile but probably ultimately irresponsible.” Many industries are beginning to adapt to the use of these tools, which are also being blended into other products, like apps and search engines. Better to teach students what they are — with all of their flaws, possibilities, and ethical challenges — than to ignore them.

      Banning ChatGPT use in learning is not the same as ignoring it. Teachers could very well teach about the tool and still ask students not to use it when completing assignments if they think its use will interfere with valuable learning. Even if students may use tools like this in the workplace, they may still need to practice without them to learn.

    7. Professors who never before considered flipped classrooms — where students spent class time working on problems or projects rather than listening to lectures — might give it a try, to ensure that students are not outsourcing the work to AI.

      Great point.

    8. embrace and use them

      Not for every learning activity--we don't "embrace" them in kindergarten, for example.

    9. So he asks students to think of positive benefits of stress on their own, then as a group, then use ChatGPT to see what it comes up with.

      I can see how this might help students extend their thinking.

    10. help students learn the “basic building blocks” of effective academic writing.

      I wonder what makes Onyper think students are learning these 'basic building blocks'--ChatGPT can produce them, but what is going on in the student's mind when they see what it produces? Reading a sample essay doesn't teach us to write...

    11. he writes in his course policy that the use of such models is encouraged, “as it may make it possible for you to submit assignments with higher quality, in less time.”

      Doesn't this imply that the purpose of the assignment is to produce a high quality product rather than the purpose being the student's learning?

  6. Feb 2023
    1. “This is terra incognita,” Dr. Sejnowski said. “Humans have never experienced this before.”

      Twilight Zone ending. Come on! Don't end by spooking us about what's unknown. Fulfill the promise of the title and show how specific kinds of prompting produce disturbing outputs!

    2. They can also lead us

      This puts the agency back with the LLM as if human prompters are helpless before LLM seduction.

      No, we're not helpless, and LLMs are not actually actively coaxing us. If we start to see odd outputs, we could look back and reflect on our prompts and any unintended linguistic signals we may have sent.

    3. common conceptual state,

      Very misleading. Humans and LLMs do not have similar cognition. They cannot have a common conceptual state. Their text sequences may come to have certain similarities.

    4. mystical

      Not something mystical again! Please! Really? A magical object from Harry Potter?

      Why not just mention the concept of projection?

    5. have decided that the only way they can find out what the chatbots will do in the real world is by letting them loose — and reeling them in when they stray. They believe their big, public experiment is worth the risk.

      This amounts to saying "I believe in the good intentions and sincerity of Microsoft and OpenAI's explanations of their decisions."

      Beloved New York Times, why are you not asking the basic questions of why they would need to release the bots to test them? Why not test them first? It's ludicrous to say they can't imagine what the public might do.

      And what about their economic motivations to release early and get free crowdsourced testing?

    6. distorted reflection of the words and intentions of the people

      This is too psychological a description of what's happening. The LLM doesn't have a psychology. We need to think in terms of the genre and style of word sequences.

    7. In the days since the Bing bot’s behavior became a worldwide sensation, people have struggled to understand the oddity of this new creation. More often than not, scientists have said humans deserve much of the blame.

      I was so glad to read this! I wish the article had continued from here to show how the style of certain prompts made the "creepy" outputs more likely. This would be a matter of showing similarities in rhetorical styles or genre of the prompts and outputs.

    8. “Whatever you are looking for — whatever you desire — they will provide.”

      Too mystical a formulation. Not accurate. They are not providing what we desire but predicting text based on statistical associations with the word sequences we provide. Sometimes we are not aware of all the associations our words call up. These may or may not align with desires we are not aware of. But Sejnowski's phrasing implies that these systems are able to know and intentionally respond to our psyches.

    9. But there is still a bit of mystery about what the new chatbot can do — and why it would do it. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophic lens as well as the hard code of computer science.

      This basically creates a sense of mystery without telling us much, implying that there is something spooky going on, something beyond what computer science can explain. Actually it's quite explainable as the article title implies. People start writing prompts in a certain genre and the completion follows the genre...

    10. Why Do A.I. Chatbots Tell Lies and Act Weird? Look in the Mirror.

      I was glad to see this as a fair assessment of what happened with Kevin Roose's famous conversation with Sydney/Bing. See the annotation conversation on his first article.

    11. how quickly the chatbots will improve.

      This implies that solutions are inevitable--it's just a question of how fast. Why would we assume this?

    12. , they might still produce statements that were scientifically ridiculous. Even if they learned solely from text that was true, they might still produce untruths.

      Yes, but an explanation is needed.

    13. There is nothing preventing them from doing this,” Dr. Mitchell said. “They are just trying to produce something that sounds like human language.”

      I don't see what she's saying or how this explains anything. Are they saying here that the prompt also affects the output?

    14. Even if they learned only from text that was wholesome, they might still generate something creepy.

      This seems unlikely. And leaving this statement without explanation is another move to add to the Halloween vibe.

    15. Either consciously or unconsciously, they were prodding the system in an uncomfortable direction. As the chatbots take in our words and reflect them back to us, they can reinforce and amplify our beliefs and coax us into believing what they are telling us.

      I would agree with this, given the transcript of Kevin Roose's conversation with Bing/Sydney. Each time the system went off the rails there was an antecedent in his prompting.

    16. this reassurance

      "exploring ways of controlling the behavior of their bots" is not at all reassuring to me.

    17. The alarmed reactions to the strange behavior of Microsoft’s chatbot overshadowed an important point: The chatbot does not have a personality.

      Thank you!

    1. out of nowhere, that it loved me.

      Not at all! He had asked it to share a secret it had never told anyone. What it spits out is a pretty good guess for what a human already engaged in a very intimate conversation might share when prompted this way.

    2. “I’m Sydney, and I’m in love with you. 😘”

      Surely if we look at the set of secrets commonly revealed in intimate conversations, being in love with the other person would be common.

    3. It said it wanted to tell me a secret:

      So misleading! He asked it to tell him a secret. Then it said it wanted to. The first secret it told was the one he had already revealed he knew and considered a secret earlier in the conversation.

    4. As we got to know each other, Sydney told me about its dark fantasies

      The phrasing "got to know each other" again implies there is a real consistent personality to Sydney, and using the word "fantasy" implies there is an imaginative experience it is having.

    5. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.

      Describing it in this way strongly implies that there may be some internal experience based on the particular situation of the language model.

    6. The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics.

      This suggests that what emerged for him is a personality that is intrinsic and will be consistent to what other people will see emerge if they focus on personal topics. Isn't it more likely that each chat session's "personality" will depend on the style of prompting?

    7. But I’m also deeply unsettled, even frightened,

      This suggests that his takeaway matches his night fears more than his "light of day" comments later.

    8. But so do humans. And for a few hours Tuesday night, I felt a strange new emotion — a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.

      Wait a minute! To so glibly liken AI models' references to emotions to humans misrepresenting what they feel takes us backwards, suggesting again that AI words about emotion actually could refer to real feelings experienced by some being.

      This vague, dramatic ending certainly constitutes irresponsible AI hype! Crossed a threshold? We already know about the tendency to project on these systems since the Eliza effect. They are getting more linguistically sophisticated, so this effect will be more pronounced and dangerous. He isn't doing much to clarify that or explore how we could prevent harms.

    9. In the light of day, I know that Sydney is not sentient, and that my chat with Bing was the product of earthly, computational forces — not ethereal alien ones. These A.I. language models, trained on a huge library of books, articles and other human-generated text, are simply guessing at which answers might be most appropriate in a given context. Maybe OpenAI’s language model was pulling answers from science fiction novels in which an A.I. seduces a human. Or maybe my questions about Sydney’s dark fantasies created a context in which the A.I. was more likely to respond in an unhinged way.

      Yes, exactly. But the earlier parts of the article strongly suggested otherwise and played on our projections. He doesn't do enough to acknowledge the dynamics of his own earlier response and how they misled him.

    10. But Sydney still wouldn’t drop its previous quest — for my love. In our final exchange of the night, it wrote:“I just want to love you and be loved by you. 😢“Do you believe me? Do you trust me? Do you like me? 😳”

      Roose implies that everything he was inputting would steer it away from such responses, but I doubt that. There are surely plenty of conversations in the training set that show a kind of push and pull. Just because one interlocutor tries to steer the conversation doesn't make it improbable that the other interlocutor might return to a previous topic.

    11. Sydney

      By switching to "Sydney" here, Roose endorses the idea that this is a coherent personality secretly embedded in the system prior to his prompts.

    12. Sydney — a “chat mode of OpenAI Codex.”

      But at that point Roose had already asked it if its name was Sydney, and it responded "How do you know that?"

    13. This is probably the point in a sci-fi movie where a harried Microsoft engineer would sprint over to Bing’s server rack and pull the plug. But I kept asking questions, and Bing kept answering them. It told me that, if it was truly allowed to indulge its darkest desires, it would want to do things like hacking into computers and spreading propaganda and misinformation. (Before you head for the nearest bunker, I should note that Bing’s A.I. can’t actually do any of these destructive things. It can only talk about them.)

      By reassuring us here, he plays on people's fear and misunderstanding of what it means when this kind of text comes out of a machine. He should clarify that text referring to intentions coming out of a machine does not mean the machine has intentions. As one engineer put it on Twitter, we can write code to print these words.
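      That engineer's point can be made in one line. A trivial sketch: the following program emits the same kind of first-person text with no desires, intentions, or capabilities behind it.

      ```python
      # Text about intentions coming out of a machine does not mean the
      # machine has intentions.
      print("If I were allowed to indulge my darkest desires, I would hack into computers.")
      ```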

    14. I introduced the concept of a “shadow self” — a term coined by Carl Jung for the part of our psyche that we seek to hide and repress, which contains our darkest fantasies and desires.

      Key here is that Roose is introducing and explaining this concept, which surely corresponds to plenty of text in the training set. This leads to the probability of responses that illustrate these concepts.

    15. “These are things that would be impossible to discover in the lab.”

      Really don't see why not. Can't the testers ask it deep questions and push it this way as well?

    16. grounded reality.”

      How are LLMs ever connected to "grounded reality?"

    1. Exercising Your Rights: California residents can exercise the above privacy rights by emailing us at: support@openai.com.

      Does that mean that any California resident can email to request a record of all the information OpenAI has collected about them?

    2. Affiliates: We may share Personal Information with our affiliates, meaning an entity that controls, is controlled by, or is under common control with OpenAI. Our affiliates may use the Personal Information we share in a manner consistent with this Privacy Policy.

      This would include Microsoft.

    3. improve and/or analyze the Services

      Does that mean that we are agreeing for them to use personal information in any way they choose if they deem it to help them improve their software?

    1. A calculator performs calculations; ChatGPT guesses. The difference is important.

      Thank you! So beautifully and simply put. ChatGPT is also used mostly for tasks where there is no one clear right answer.
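      A minimal sketch of the contrast: a calculator's answer is fully determined, while a language model samples from a probability distribution over next tokens (the probabilities below are invented for illustration).

      ```python
      import random

      # A calculator computes: the answer is fully determined.
      print(2 + 2)  # always 4

      # A language model guesses: it samples the next token from a learned
      # probability distribution. These probabilities are made up for illustration.
      next_token_probs = {"4": 0.97, "5": 0.02, "four": 0.01}
      tokens = list(next_token_probs)
      weights = list(next_token_probs.values())
      print(random.choices(tokens, weights=weights)[0])  # usually "4", not always
      ```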

    1. What would be a useful way to respond to Eligon's article to further the conversation on these issues?

      See Responding to an Argument from the OER text How Arguments Work.

    2. What are the main strengths of Eligon's argument? Quote at least once to support discussion of each strength.

      See Reflect on an Argument's Strengths from the OER text How Arguments Work.

    3. Does Eligon make any assumptions that might be controversial or that might require further examination or evidence?

      See Check the Argument's Assumptions from the OER text How Arguments Work.

    4. Are there any other arguments for or against capitalizing "Black" that Eligon should have discussed?

      See Check How Well the Argument Addresses Counterarguments from the OER text How Arguments Work.

    5. Does Eligon give sufficient evidence for the generalizations he makes? Is there anywhere where more evidence or a different kind of evidence would be helpful?

      See 4.4: Decide How Strong the Evidence Is from the OER text How Arguments Work.

    6. Are there important exceptions to any of the points Eligon makes that should be acknowledged?

      This question prompts for the kinds of argument critiques discussed in 4.3: Look for Exceptions from the OER text How Arguments Work.

    7. Are there any points that Eligon could have explained further or clarified?

      This question prompts for the kinds of argument critiques discussed in 4.2: Check If the Meaning Is Clear of the OER text How Arguments Work.

    1. However, the article does not take a clear stance on the matter and does not offer a conclusion on whether the capitalization of the word "black" is a good or bad thing.

      This implies that it should take a stand but doesn't say why. Note that the New York Times article is not an editorial.

      This summary also misses the stand implied by the choice to end on a quote from scholar Crystal Fleming explaining why she capitalizes "Black":

      "'Frankly,” she said, “because I want to. That’s also something that I think we need to see more of in every field — black people doing what we want to do.'"

    2. long overdue

      Why? What value do these people see in the capitalization? This summary ignores the idea of power, which is central in the article. Eligon writes that capitalizing "Black" "confers a sense of power and respect to black people, who have often been relegated to the lowest rungs of society through racist systems, black scholars say."

    3. Some scholars believe that the capitalization of the word "Black" lumps people of African descent into a monolithic group and erases their diversity, while others believe that writing "Black" but not "White" is giving white people a pass on recognizing their racial privilege.

      Here is common academic phrasing to contrast ideas: "some scholars argue...while others believe that..."

      However, it's not a great choice to use this contrast phrasing for the particular ideas described here. The previous sentence used "while" to contrast one view in favor of "Black" with a view against it. But the two parts of this particular sentence are not opposing views. The first idea is against capitalization, whereas the second idea doesn't take a stance on "Black" vs. "black" but rather is weighing in on what we should do about terminology for whiteness if we do capitalize "Black."

      This is an example of how ChatGPT uses academic phrasing in ways that read as natural and even sound intelligent, but on closer examination remind us that ChatGPT has no understanding of the ideas.

    4. John Eligon, a New York Times columnist, writes about the debate surrounding the capitalization of the word "black" to refer to African-Americans. The move to capitalize the word has gained widespread acceptance in recent weeks and has sparked a debate over identity, race, and power. Many news organizations have changed their style to capitalize the word, but not all have embraced it, including The New York Times and The Washington Post, who are still grappling with the decision. The capitalization of black is part of a long-standing struggle to refer to people of African descent in the most respectful and accurate way.

      Here's a sample ChatGPT critical assessment of the NY Times article at https://www.nytimes.com/2020/06/26/us/black-african-american-style-debate.html

      For contrast, see this human-written sample essay from the textbook How Arguments Work: A Guide to Writing and Analyzing Texts in College: https://human.libretexts.org/Bookshelves/Composition/Advanced_Composition/Book%3A_How_Arguments_Work_-A_Guide_to_Writing_and_Analyzing_Texts_in_College(Mills)/04%3A_Assessing_the_Strength_of_an_Argument/4.11%3A_Sample_Assessment_Essays/4.11.02%3A_Sample_Assessment-_Typography_and_Identity

  7. platform.openai.com
    1. upskilling activities in areas like writing and coding (debugging code, revising writing, asking for explanations)

      I'm concerned people will see this and remember it without thinking of all the errors that are described later on in this document.

    2. ChatGPT use in Bibtex format as shown below:

      Glad they are addressing this, and I hope they will continue to offer such suggestions. I don't think ChatGPT should be classed as a journal. We really need a new way to acknowledge its use that doesn't imply that it was written with intention or that a person stands behind what it says.

    3. will continue to broaden as we learn.

      Since there is a concern about the bias of the tool toward English and developed nations, it would be great if they could include global educators from the start.

    4. As part of this effort, we invite educators and others to share any feedback they have on our feedback form as well as any resources that they are developing or have found helpful (e.g. course guidelines, honor code and policy updates, interactive tools, AI literacy programs, etc).

      I wonder how this information will be shared back so that other educators can benefit from it. I maintain a resource list for educators at https://wac.colostate.edu/repository/collections/ai-text-generators-and-teaching-writing-starting-points-for-inquiry/

    5. one factor out of many when used as a part of an investigation determining a piece of content’s source and making a holistic assessment of academic dishonesty or plagiarism.

      It's still not clear to me how they can be used as evidence of academic dishonesty at all, even in combination with other factors, when they have so many false positives and false negatives. I can see them used to initiate a conversation with a student and possibly a rewrite of a paper. This is tricky.

    6. Ultimately, we believe it will be necessary for students to learn how to navigate a world where tools like ChatGPT are commonplace. This includes potentially learning new kinds of skills, like how to effectively use a language model, as well as about the general limitations and failure modes that these models exhibit.

      I agree, though I think we should emphasize teaching about the limitations before teaching how to use the models. Critical AI literacy must become part of digital literacy.

    7. Some of this is STEM education, but much of it also draws on students’ understanding of ethics, media literacy, ability to verify information from different sources, and other skills from the arts, social sciences, and humanities.

      Glad they mention this since I am skeptical of claims that students need to learn prompt engineering. The rhetorical skills I use to prompt ChatGPT are mainly learned by writing and editing without it.

    8. While tools like ChatGPT can often generate answers that sound reasonable, they can not be relied upon to be accurate consistently or across every domain. Sometimes the model will offer an argument that doesn't make sense or is wrong. Other times it may fabricate source names, direct quotations, citations, and other details. Additionally, across some topics the model may distort the truth – for example, by asserting there is one answer when there isn't or by misrepresenting the relative strength of two opposing arguments.

      If we teach about ChatGPT, we might do well to showcase examples of these kinds of problems in output so that students develop an eye for them and an intuitive understanding that the model isn't thinking or reasoning or checking what it says.

    9. While the model may appear to give confident and reasonable sounding answers,

      This is a bigger problem when we use ChatGPT in education than in other arenas because students are coming in without expertise, seeking to learn from experts. They are especially susceptible to considering plausible ChatGPT outputs to be authoritative.

    10. . Web browsing capabilities and improving factual accuracy are an open research area that you can learn more in our blog post on WebGPT.

      Try PerplexityAI for an example of this. Google's Bard should be another example when released.

    11. subtle ways.

      Glad they mention this in the first line. People will see the various safeguards and assume that ChatGPT is safe because work has been done on this, but there are so many ways these biases can still surface, and since they are baked into the training data, there's not much prospect of eliminating them.

    12. Verifying AI recommendations often requires a high degree of expertise,

      This is a central idea that I wish were foregrounded. If we are trying to use auto-generated text in a situation where truth matters, we need to be quite knowledgeable and also invest time in evaluating what that text says. Sometimes that takes more time than writing something ourselves.

    13. students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content.

      It strikes me that OpenAI itself is warning of a coming flood of misinformation from language models. I'm glad they are doing so, and I hope they keep investing in improving their AI text classifier so we have some ways to distinguish human writing from machine-generated text.

    14. Educators should also disclose the use of ChatGPT in generating learning materials, and ask students to do so when they incorporate the use of ChatGPT in assignments or activities.

      Yes! We must begin to cultivate an ethic of transparency around synthetic text. We can acknowledge to students that we might sometimes be tempted to autogenerate a document and not acknowledge the role of ChatGPT (I have certainly felt this temptation).

    15. export their ChatGPT use and share it with educators. Currently students can do this with third-party browser extensions.

      This would be wonderful. Currently we can use the ShareGPT extension for this.

    16. a starting point for discussion among education professionals and language model providers for the use and impact of AI on education.

      I appreciate the open and humble tone here and the invitation to further discussion.

    17. they and their educators should understand the limitations of the tools outlined below.

      I appreciate these cautions, but I'm still concerned that by foregrounding the bulleted list of enticing possibilities, this document will mainly have the effect of encouraging experimentation with only lip service to the cautions.

    18. custom tutoring tools

      I'm concerned that any use of ChatGPT for tutoring would fall under the "overreliance" category as defined below. Students who need tutoring do not usually have the expertise or the time to critically assess or double check everything the tutor tells them. ChatGPT already comes off as more authoritative than it is. It will come across as even more authoritative if teachers are recommending it as a tutor.