5 Matching Annotations
  1. Apr 2023
    1. Historians don’t have to compress their material as severely. Since history is notoriously a story of conflict, and our sources were interested participants, few people expect historians to represent all aspects of the past with one correctly balanced model. On the contrary, historical inquiry is usually about comparing perspectives. Machine learning is not the only way to do this, but it can help. For instance, researchers can measure differences of perspective by training multiple models on different publication venues or slices of the timeline.[6]

      For question three, I believe this highlights the author's transition well because he discusses the participants' reaction to the balanced model, which is also discussed at the beginning. He also brings in the perspectives of different publications to back up different sources of knowledge throughout the article.

    2. A willingness to find meaning in collective patterns may be especially necessary for disciplines that study the past. But this flexibility is not limited to scholars. The writers and artists who borrow language models for creative work likewise appreciate that their instructions to the model acquire meaning from a training corpus. The phrase “Unreal Engine,” for instance, encourages CLIP to select pictures with a consistent, cartoonified style. But this has nothing to do with the dictionary definition of “unreal.” It’s just a helpful side-effect of the fact that many pictures are captioned with the name of the game engine that produced them.

      I think this answers question 2, "how does the writer establish their credibility," because at the beginning the author explains that writers and artists borrow knowledge, and at the end of the paragraph he links to another source for credibility!

    3. The argument that Bender et al. advance has two parts: first, that large language models pose social risks, and second, that they will turn out to be “misdirected research effort” anyway, since they pretend to perform “natural language understanding” but “do not have access to meaning” (615).

      This passage lays out the language that is going to be used and how we as readers can understand and infer what is going to come next!

  2. Feb 2023
    1. Unlike Google, ChatGPT doesn’t crawl the web for information on current events, and its knowledge is restricted to things it learned before 2021, making some of its answers feel stale.

      My question is "Why is ChatGPT limited to knowledge it learned before 2021?" This connects to the passage because the statement I highlighted feels somewhat taken out of context, and I feel this is a question the author should have answered, considering how central the limitation is to the article's point.

    2. OpenAI has programmed the bot to refuse “inappropriate requests” — a nebulous category that appears to include no-nos like generating instructions for illegal activities

      This sentence really stuck out to me because I had played around with ChatGPT before this class, when my brother showed it to me. It helped me construct some crucial points for a speech in my persuasion class about why it's important to weigh your options and really know whether receiving a vaccine, specifically the COVID-19 vaccine, is the right thing to do. However, when I asked it, "Can you tell me reasons not to get vaccinated?" it would just say something like "I am unable to fulfill that request for reasons related to the COVID-19 pandemic." Since arguing against the vaccine is, in a lot of people's eyes, a no-no, I understood why it could not answer my question.