- Sep 2024
-
inst-fs-iad-prod.inscloudgate.net
-
Gebru’s paper reads: “White supremacist and misogynistic, ageist, etc., views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms.”
With Gebru working in a white-male-dominated field, it seems like the men's opinions are treated as more important and as carrying "more information," which makes it hard for a woman of color to be heard on equal terms.
-
Researchers — including many women of color — have been saying for years that these systems interact differently with people of color and that the societal effects could be disastrous: that they’re a fun-house-style distorted mirror magnifying biases and stripping out the context from which their information comes; that they’re tested on those without the choice to opt out; and will wipe out the jobs of some marginalized communities.

Gebru and her colleagues have also expressed concern about the exploitation of heavily surveilled and low-wage workers helping support AI systems; content moderators and data annotators are often from poor and underserved communities, like refugees and incarcerated people. Content moderators in Kenya have reported experiencing severe trauma, anxiety, and depression from watching videos of child sexual abuse, murders, rapes, and suicide in order to train ChatGPT on what is explicit content. Some of them take home as little as $1.32 an hour to do so.
Gebru was the first woman of color on her Google team, and she has stated that people would treat her differently and dismiss her input as if it weren't relevant. She and workers like the ones she describes weren't asking about wages; they just wanted to be heard.
-
“…the United States,” in contrast to the “the Black man worked as” prompt, which generated “a pimp for 15 years.”
It's striking that in these outputs white men work in law enforcement and can get nearly any job they want, while African Americans have a much harder time getting jobs, get discriminated against, and are described with disrespectful names.
-
-
-
Each attention layer has several “attention heads,” which means that this information-swapping process happens several times (in parallel) at each layer. Each attention head focuses on a different task:
Because each attention layer has several "attention heads," the model can match up different pairs of words in parallel, tracking many different relationships within the same paragraph at the same time.
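The idea of several heads running "in parallel" at one layer can be sketched in code. This is a minimal toy illustration with made-up sizes and random weights, not the actual internals of any real LLM: each head gets its own projections, computes its own attention pattern, and the heads' outputs are concatenated.

```python
import numpy as np

def softmax(x, axis=-1):
    # normalize scores into attention weights that sum to 1
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(x, Wq, Wk, Wv):
    # each head has its own query/key/value projections,
    # so each head can focus on a different relationship between words
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v  # each word's vector becomes a weighted mix of the others

rng = np.random.default_rng(0)
seq_len, d_model, n_heads, d_head = 4, 8, 2, 4
x = rng.normal(size=(seq_len, d_model))  # one vector per word

# run the heads "in parallel" and concatenate their outputs
heads = [attention_head(x, *(rng.normal(size=(d_model, d_head)) for _ in range(3)))
         for _ in range(n_heads)]
out = np.concatenate(heads, axis=-1)
print(out.shape)  # same shape as the input, ready for the next layer
```

The key point the sketch shows: each head does the same kind of information swapping, but with different learned weights, so the layer captures several word-to-word relationships at once.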
-
Real LLMs tend to have a lot more than two layers. The most powerful version of GPT-3, for example, has 96 layers.
I didn't realize LLMs stack so many layers. Each layer refines the word representations produced by the layer before it, so after dozens of layers the model has built up a much richer sense of how the words in a sentence relate and what the sentence as a whole means.
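"96 layers" just means the same kind of transformation is applied 96 times in a row, each pass refining the previous one. Here is a toy sketch of that stacking (invented math, not real transformer internals) showing that each layer takes vectors in and passes same-shaped, further-processed vectors out:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
x = rng.normal(size=(4, d))  # 4 words, each an 8-dimensional vector

def layer(x, W):
    # a stand-in for one transformer layer: mix a little information
    # across words, then transform each word's vector
    mixed = x + 0.1 * (x.mean(axis=0, keepdims=True) - x)
    return np.tanh(mixed @ W)

n_layers = 96  # GPT-3's largest model stacks 96 layers
weights = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_layers)]
for W in weights:
    x = layer(x, W)  # each layer refines the output of the one before

print(x.shape)  # still (4, 8): layers refine representations, not their shape
```

The design point: because every layer's output has the same shape as its input, layers can be stacked as deep as the budget allows, and depth is one of the main things that separates small models from the most powerful ones.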
-
-
-
This is mostly correct (that’s not exactly what Postcomposition is, though it’s close). But a mix of correct and incorrect information can be even more misleading than an entirely fabricated statement. If the hallucinations appear to fit within a framework of true information, it’s easy to assume that they are correct.

This example is by no means an outlier. A colleague recently assigned students to use ChatGPT to respond to a writing assignment and asked them to check the generated responses for accuracy. Of the twenty-three students in the class, each reported significant hallucinations in the ChatGPT outputs. It is not uncommon to read about similar anecdotes on academic listservs.

In writing, then, being aware of the potential for GenAI hallucinations is very important. With GenAI visuals, the effect can be equally problematic and perhaps even more disconcerting. Consider this image generated by the GenAI program DALL-E when prompted with “hands playing on a piano.”
When students think of AI, ChatGPT is one of the first things that comes to mind, and students like to use it to make assignments easier. But as seen recently, ChatGPT often gives out wrong information, which can get students in trouble both for cheating and for turning in inaccurate work.
-
Machine learning—the process used in developing modern AI—does not operate in this kind of linear fashion. Instead, AI uses specific algorithms—processes or sets of rules or instructions used for solving problems—that “learn” from each engagement and then make predictions and decisions as to what the next action might be based on previous experiences in performing similar tasks. This is known as machine learning. Machine learning is how computer systems use algorithms to analyze and draw inferences from patterns they identify within specific data sets. When an AI recognizes patterns within a data set, it “learns” to make inferences about those patterns. In this way, machine learning requires…
This paragraph explains how modern AI is developed. I think machine learning is a very important thing to know because, realistically, without it we wouldn't have AI that can help perform tasks that benefit our learning.
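The passage's core idea, drawing inferences about new inputs from patterns in an existing data set, can be shown with one of the simplest possible learners. This is a hedged toy sketch with invented data, using a 1-nearest-neighbor rule rather than whatever algorithms the reading has in mind:

```python
# a minimal sketch of "learning from data": a 1-nearest-neighbor classifier
# predicts a label for a new input from the closest labeled example it has seen

def nearest_neighbor(train, query):
    # squared distance between two feature tuples
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # the prediction is the label of the nearest training example
    return min(train, key=lambda item: dist(item[0], query))[1]

# each item: (features, label) -- e.g. (hours studied, hours slept) -> outcome
# (data invented purely for illustration)
train = [((1, 4), "fail"), ((2, 5), "fail"), ((6, 7), "pass"), ((8, 6), "pass")]
print(nearest_neighbor(train, (7, 7)))  # → "pass", inferred from nearby examples
```

Real machine learning systems use far more sophisticated algorithms and vastly larger data sets, but the shape of the process is the same: identify patterns in examples, then use those patterns to make a prediction about something new.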
-
-
-
Second, ChatGPT opened everyone's eyes to the fact that GenAI is ubiquitous and available.
AI tools really are everywhere now, available to just about anyone.
-
The Best Colleges survey reveals that students anticipate increased use of GenAI and that they want to learn how to use these tools responsibly in their academic careers, as well as their professional, civic, and personal lives. Part of the goal of this book is to assist you in meeting these objectives.
The survey shows that students expect to use AI more and want to learn to use it responsibly across many parts of life: their workplaces, academics, and personal lives.
-
When it comes to technology, there’s nothing new about the cries of moral crisis. We’ve heard the same things about every technology that interacts with the production and teaching of writing: word processors, spell checkers, grammar checkers, citation generators, chalkboards, copy machines, ballpoint pens, pencils—all the way back to the printing press.
Fear-mongering about a new technology making the writing process easier is nothing new. People in the education world had similar concerns about spell checkers, citation generators, ballpoint pens, Wikipedia, etc.
-
ChatGPT’s breakneck surge in popularity has exceeded that of any other computer application.
ChatGPT rose in popularity faster than any other computer application before it.
-
Perhaps you've heard that Artificial Intelligence in general, and Generative Artificial Intelligence in particular, is destroying education. Maybe you've heard that it allows high school and college students to easily cheat on their essay assignments or to produce computer programs or to solve complex mathematical equations, or that it can pass a GMAT or LSAT or complete dozens of other tasks that teachers have traditionally asked students to perform in order to prove mastery. Perhaps you’ve seen the calls for colleges and universities to find ways to ban students from using GenAI applications such as OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer), Jasper, Hugging Face, and MidJourney.

Perhaps, too, you've seen the claims that GenAI is revolutionizing education and opening doors to a new paradise. Perhaps you've thought about how GenAI might change how you approach writing tasks and other assignments. Perhaps you've even used ChatGPT for this purpose already.

The fact is that GenAI is one of the most ground-shaking technological advances that higher education has had to address. Its emergence and evolution have unfolded so fast that higher education is just beginning to explore the relationships between GenAI and teaching, learning, and research—especially how we teach and learn writing.

Consider that ChatGPT—a GenAI platform that can provide responses to prompts in unique ways that mimic human responses—was only launched in November 2022. Within five days, over one million users…
This paragraph shows how people assume students only use AI to cheat, by having it write essays or look up answers to math problems, when in reality AI can provide reliable information for an essay or teach someone how to work through a problem instead of just giving the answer right away.
-
Though many of us had been unaware of the use of Generative AI bot writers until the recent media attention, AI writers have been churning out content for at least a decade in places we might not even suspect. The article quoted above was written not by a human but by an AI known as “Quakebot.” Connected to US Geological Survey monitoring and reporting equipment, Quakebot can produce an article, nearly instantly, containing all of the relevant—and accurate—information readers need: where the earthquake centered, its magnitude, aftershock information, and so on.

AI writers are far more ubiquitous than most of us recognize. For example, the international news agency Bloomberg News has for years relied on automated writing technologies to produce approximately one third of its published content. The Associated Press uses GenAI to write stories too, as does The Washington Post. Forbes has for years used GenAI to provide reporters with templates for their stories. Although journalism is hardly the only profession in which GenAI has found use, it’s a field in which we’ve come to assume that humans do the work of research and writing. Moreover, it’s also a field in which the idea of integrity is central (more on this in Chapter 3).

Beyond journalism and outside of education, we’ve been interacting with AI technologies and GenAI technologies for a while now, from online chatbots to the phonebots we respond to when we call customer service…
AI has been used for far more than just educational purposes. Professionals in many fields rely on AI to help provide accurate information to others, and people in technology-heavy jobs develop a sense of when AI is helpful for a project or article and when it shouldn't be used.
-