41 Matching Annotations
  1. Dec 2023
    1. To me, that's actually quite exciting  from a democratization of technology perspective.

      From my personal experience, I agree with this. The idea that more people means more change is something I have definitely seen in my life; one good example is the COVID-19 pandemic.

    2. It's really critically important that we have  as many diverse perspectives as possible, 00:02:58 influencing the development of AI. We need the participation of  more women, more people of color, to provide a different perspective and a different  lens on which problems matter and how we should approach these problems.

      All true; the more voices the better. This is what we need to do to try to combat bias in AI.

    3. Every time when you're looking at a new problem,  you have an opportunity to change the world. Sometimes we succeed, sometimes  we don't, but we always try.

      I like this way of thinking.

    4. We should 00:02:22 strive to make sure that things that provide  value for society can be reached to anybody.

      Unfortunately, I don't think this is fully possible. There will always be people who are excluded, but it is important to work toward a world like this no matter how out of reach it seems.

    5. It means that AI, powerful as it is, could theoretically be in everybody's  pockets, benefiting everybody.

      This is a nice way to look at it, but it could also work the opposite way: with negative things delivered to everyone's phone, people could pick up negative ideas just as easily.

    6. Most people in the world  just have AI applied to them, 00:01:52 rather than playing an active role  in guiding what AI gets applied to.

      Very true; the large majority is just watching AI evolve without being able to do anything about it.

    7. really systems that are including the perspectives of those that are most vulnerable  or most marginalized, most likely to be hurt by the deployment of that system.

      Very true; the more inclusive the data, the better!

    8. So if you're building an AI to  determine who gets a home loan, or who should be charged with  a crime, it could definitely bubble up the racial biases that humans  and our current society already does.

      These are real-life examples of the harm these systems can create. It's scary to think about the consequences, but it's also important for people to know about them so they can work toward a better future.

    9. Machine learning depends entirely  on the information that you feed it. The problem is that with real world data, there's  often information in there that you didn't intend to be in there, but is captured because of  the bias in the data collection process.

      Over the course of researching this project, it has become clear that this is the main issue. Perhaps the solution is simply to moderate the data more closely, but I doubt it's that simple; a rough sketch of what such a check could look like is below.
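
      As a hedged sketch of what "moderating the data more closely" could mean in practice, the snippet below checks whether a sensitive attribute (here a zip code acting as a stand-in for race) is correlated with the label before any model is trained. The column names and rows are hypothetical, purely to illustrate the idea.

      ```python
      # Hypothetical sketch: before training, check whether a sensitive attribute
      # (or a proxy for one, like zip code) is correlated with the label.
      import pandas as pd

      df = pd.DataFrame({
          "zip_code": ["10001", "10001", "60601", "60601", "60601", "10001"],
          "approved": [1, 1, 0, 0, 1, 1],
      })

      # Approval rate per group: a large gap hints that a model trained on this
      # data will pick up the proxy, i.e. the "information you didn't intend".
      rates = df.groupby("zip_code")["approved"].mean()
      print(rates)
      print("max gap between groups:", rates.max() - rates.min())
      ```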

    10. Like any technology its application  will depend on how it is utilized. At the same time, we need to think about  the risks that are associated with doing that. The consequences are huge!

      This is a great hook that leads the video into its main idea. It is hinted earlier that AI has problems, but confirming it in such a dramatic way works really well, and I love it.

    11. The potential for AI to help society is enormous!

      This is true; AI can do so much that we almost have to use it to move forward as a species. It sucks that AI has these problems, because with its potential it could be useful in so many ways.

    12. No matter what field you end up going into, it's quite likely that AI is going to  have some impact on what you're doing.

      Very strong opening sentence; right away it tells you why you need to be concerned while hooking you in so you watch the rest of the video. The music is intense to let you know this is a serious issue, and the visuals are striking representations of AI, presented by a person who is clearly qualified to inform you about it. I love this beginning so much (:

    13. It could be used in education

      I thought it was interesting that they showed this because we learned about it in English class: China implementing headband-style devices that track how engaged students are in class and report that information to parents and teachers. In English we went into the negative effects of this and how harmful it can be, so it's a good example to put into a video like this.

    1. When I was an undergraduate at Georgia Tech studying computer science, I used to work on social robots, and one of my tasks was to get a robot to play peek-a-boo, a simple turn-taking game where partners cover their face and then uncover it saying, "Peek-a-boo!" The problem is, peek-a-boo doesn't really work if I can't see you, and my robot couldn't see me.

      A tragic real-life story that really brings out her point through personal experience.

    2. I've launched the Algorithmic Justice League, 00:07:54 where anyone who cares about fairness can help fight the coded gaze. On codedgaze.com, you can report bias, request audits, become a tester and join the ongoing conversation, #codedgaze. So I invite you to join me in creating a world where technology works for all of us, not just some of us, 00:08:20 a world where we value inclusion and center social change. Thank you. (Applause) But I have one question: Will you join me in the fight? (Laughter) (Applause)

      In this part she brings up the program she started and then concludes. Overall, I thought this was a very informative TED Talk that brings up an issue that is definitely worth talking about. Thank you, Ms. Buolamwini (her last name).

    3. we can start thinking about building platforms that can identify bias by collecting people's experiences like the ones I shared, but also auditing existing software.

      On a psychological level, the practice of using exclusionary systems like the ones shown can bring feelings of dread and discontent. Being excluded from a group so directly could surface thoughts like "do I belong?" and "is this for me?", thoughts that wouldn't come up if things were as they should be. The negative narratives these systems create draw lines between groups and divide them. None of this should be possible in a perfect world, but unfortunately in our flawed society it is common.

    4. So what can we do about it? Well, we can start thinking about how we create more inclusive code and employ inclusive coding practices. It really starts with people.

      100% true; the real impact that can be made is on a people scale. With people putting better policies and practices in place, this CAN be prevented.

    5. So we really have to think about these decisions. Are they fair? And we've seen that algorithmic bias doesn't necessarily always lead to fair outcomes.

      The main thing this video hammers in is how the facial recognition software didn't recognize her. Even the fact that systems like this can be built from biased data sets is problematic from a technical side, because it sets a standard for how future systems operate. In this specific example it is clear that Black people in particular are the ones not being recognized. That puts people of color at a great disadvantage, whether it was intentional or not, while white people get a great advantage because they can be recognized easily.

    6. mysterious and destructive algorithms that are increasingly being used to make decisions that impact more aspects of our lives. So who gets hired or fired? 00:05:44 Do you get that loan? Do you get insurance? Are you admitted into the college you wanted to get into? Do you and I pay the same price for the same product purchased on the same platform? Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison.

      These are all decisions that should 100% stay human; it's kind of despicable that they are even being turned over to machines.

    7. My friends and I laugh all the time when we see other people mislabeled in our photos. But misidentifying a suspected criminal is no laughing matter, nor is breaching civil liberties.

      This is true. The underlying idea here is that facial recognition software can clearly be biased, and that bias shows up even in something as simple as a phone's photo-recognition feature. Because the bias in these systems is so strong, it is clearly not the time to put them into roles where they could decide people's entire lives.

    8. Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal.

      Given the issues we have already gone over, it seems clear why this is a bad idea, but it might not seem that way to them.

    9. So how this works is, you create a training set with examples of faces. This is a face. This is a face. This is not a face. And over time, you can teach a computer how to recognize other faces. 00:03:38 However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.

      The people who made the face detectors probably didn't mean to be racist (maybe they did), but the effect of their data set ended up the same, which shows the danger of lapses in judgment like this; a small sketch of how a data-set audit could catch it early is below.
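
      As a hedged sketch of how that kind of lapse could be caught before deployment, assuming the training images carry (hypothetical) skin-tone annotations like the Fitzpatrick categories used in the later Gender Shades audit, one could simply count how the training set is distributed:

      ```python
      # Hypothetical sketch: audit how a face-detection training set is spread
      # across skin-tone categories before training. The labels are toy data.
      from collections import Counter

      labels = ["I", "II", "II", "I", "III", "II", "I", "VI"]  # one label per image

      counts = Counter(labels)
      total = sum(counts.values())
      for group, n in sorted(counts.items()):
          print(f"type {group}: {n} images ({n / total:.0%})")
      # A group sitting at a tiny percentage is exactly the "deviates too much
      # from the established norm" case the talk describes.
      ```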

    10. I asked the developers what was going on, and it turned out we had used the same generic facial recognition software.

      I wonder what this code is and how easily it could be fixed to detect everyone's face. Also, what a coincidence.

    11. (Video) Joy Buolamwini: Hi, camera. I've got a face. Can you see my face? No-glasses face? You can see her face. What about my face? I've got a mask. Can you see my mask?

      This is interesting because it means the program is effectively keying not only on facial features to decide whether it's looking at a face, but also on skin color. That is quite an issue because it is just plain exclusionary.

    12. However, algorithms, like viruses, can spread bias on a massive scale 00:00:37 at a rapid pace.

      This is an interesting comparison; I hadn't thought of algorithmic bias spreading the way a virus does, so this framing is new to me :D

    13. a force that I called "the coded gaze," my term for algorithmi

      I think this is a good name for the AI bias issue, but I believe it might bring more attention to the issue if the name were more extreme. Names like "AI Racism" or "AI Cataclysm" would really get people's attention, but her name is definitely better fundamentally.

    14. Hello, I'm Joy, a poet of code, on a mission to stop an unseen force that's rising

      A big metaphor for the implications of AI bias, which is what this video is about. It's interesting that her very first sentence is a metaphor like this, and I wonder if there is a reason for it, like serving as a hook. Additionally, it is kind of funny that her name is "Joy" and throughout the whole video she stays upbeat even though she is talking about a serious issue.

    1. what about a fast food worker

      My race is Mexican, so I could be discriminated against by these generators if, for example, someone asked an image generator for a picture of a McDonald's worker and it showed a Mexican person. Unfortunately, Mexicans have some pretty bad stereotypes that image generators could easily reinforce with their results. Overall, I care about this topic not because of myself but because of all the problems it could cause for people in the future. If these stereotypes get reinforced in people's heads, the world is bound to be a less safe, more unjust place.

    2. Bloomberg Technology generated and analyzed more than 5,000 images created by Stable Diffusion they prompted it to generate portraits of workers in different professions and sorted the images according to skin tone and perceived gender they find that higher 00:01:04 paying professions like CEO lawyer and politician were dominated by lighter skin tones while subjects with darker skin tones were more commonly associated with lower income jobs like dishwasher janitor and fast food worker a similar 00:01:18 story emerged when categorizing by gender with higher income jobs such as doctors CEOs and Engineers being predominantly represented by men whilst professions like cashier social worker and housekeeper were mostly represented 00:01:30 by women

      These tests are interesting because everyone is involved in one way or another: everyone has a race, some sort of gender, beliefs, and an age, so the AI can be biased against anyone. Even though everyone is involved, many groups have it worse than others, groups that historically have had it worse off and have had bad things associated with them over the years. I don't feel comfortable giving an example, but it is easy to think of harmful ways that certain groups could be discriminated against by these generators; a quick sketch of how an audit like Bloomberg's can be tallied is below.
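
      As a rough, hedged sketch of the kind of tally Bloomberg describes: label each generated image with the profession it was prompted with and a perceived attribute, then cross-tabulate. The records below are invented purely to show the mechanics, not real audit data.

      ```python
      # Hypothetical sketch of a Bloomberg-style audit tally: cross-tabulate
      # generated images by profession and a perceived attribute. Toy records.
      import pandas as pd

      records = [
          {"profession": "CEO",           "perceived_gender": "man"},
          {"profession": "CEO",           "perceived_gender": "man"},
          {"profession": "CEO",           "perceived_gender": "woman"},
          {"profession": "social worker", "perceived_gender": "woman"},
          {"profession": "social worker", "perceived_gender": "woman"},
          {"profession": "social worker", "perceived_gender": "man"},
      ]
      df = pd.DataFrame(records)

      # Share of each perceived gender within each profession.
      table = pd.crosstab(df["profession"], df["perceived_gender"], normalize="index")
      print(table.round(2))
      ```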

    3. these are all images created by an artificial intelligence image generator called Midjourney which creates unique images based off of simple word prompts

      This is the main problem: these image generators reinforcing stereotypes with harmful images. For example, if someone asks an image generator to generate an image of a terrorist and it shows someone who is clearly Muslim, it could reinforce a subconscious negative picture of Muslim people. This ties into social justice because social justice at its best means people getting equal opportunities, and with harmful stereotypes being spread by image generators and other means, that goal gets pushed further away.

    4. if lawmakers wait too long to understand a Technology's impacts and harms by the time they act it may be too late for them to control the technology will have 00:07:15 been widely adopted and now be too deeply integrated into people's lives for Meaningful change to happen

      I disagree with this. I don't think there will ever be a point where it's "too late" for change to happen. That doesn't mean I think we should just do nothing right now, but it does mean our future isn't completely doomed if we act later than we should.

  2. Nov 2023
    1. Mr. Chew, does TikTok access the home Wi-Fi network? Only if the user turns on the Wi-Fi. I'm sorry, I may not understand that. So if I have the TikTok app on my phone and my 00:07:03 phone is on my home Wi-Fi network, does TikTok access that network?

      Uhh

    2. Fortune 500 CEO what should a fair gender split look like should it accurately reflect the current statistics which are roughly nine to one in one sense this can be seen as a fair representation of reality but others 00:04:42 might see this as unfair for perpetuating unequal and unjust power structures and for discouraging women from applying for c-suite roles perhaps the distribution should be a 50 50 split this would achieve demographic parity as 00:04:55 it matches the gender split of the population we could consider applying this across all jobs and roles but this assumes that there are no intrinsic differences between genders would it be fair to depict prisoners at a 50 50 00:05:07 Split For example when men currently make up 93 percent of the global prison population perhaps the only fair distribution is to make the output completely random but even then defining the gender category with a binary value 00:05:19 has already introduced bias into the system

      In my opinion, if you ask an AI generator to do something like "Make a photo of a CEO," the generator should ask something like "What do you want the CEO to look like? Race, gender, age?" A toy comparison of the competing "fair" baselines from the quote is sketched below.
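
      To make the quote's point concrete, the snippet below compares one made-up set of generated "CEO" images against two different fairness baselines; whether the result looks fair depends entirely on which baseline you pick. All numbers are invented for illustration.

      ```python
      # Hypothetical sketch: the same generated output can look fair or unfair
      # depending on the reference distribution chosen. Numbers are made up.
      generated_share_women = 0.25   # share of women in the generated CEO images

      baselines = {
          "real-world Fortune 500 CEOs (~1 in 10)": 0.10,
          "demographic parity (50/50)":             0.50,
      }

      for name, target in baselines.items():
          gap = generated_share_women - target
          print(f"vs {name}: gap of {gap:+.0%}")
      # Neither gap is zero, which is the point: "fair" shifts with the target.
      ```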

    3. how AI image generators work one of the most popular and powerful models is called a generative adversarial network or GAN for short they have two parts the 00:02:35 generator which acts like a forger that makes fake images and tries to pass them off as real and the discriminator which acts like a detective trying to figure out if the generator's images are real or fake the discriminator has been 00:02:47 trained on data sets of lots of real images so it has an idea of what to look out for when it identifies a fake image it tells the generator where it went wrong the generator then tries again to fool the discriminator they both play 00:02:59 this game over and over and over again trying to compete against each other until eventually the generator gets really good at fooling the discriminator a feature common amongst all AI image generators is that the quality of the 00:03:12 outputs will depend on the quality of the data sets the millions of labeled images that the AI has been trained on if there are biases in the data set the AI will acquire and replicate those biases but there is no such thing as a 00:03:25 neutral data

      This is how the video says these image generators work. I find it fascinating, but it is also where the problems come from (biased data sets); a toy sketch of the generator-versus-discriminator loop is below.
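
      A minimal, hedged sketch of the forger-versus-detective game described in the quote, using PyTorch on toy one-dimensional data rather than images, just to show the training loop; the layer sizes, learning rates, and step count are arbitrary choices for illustration.

      ```python
      # Toy GAN sketch: a generator learns to mimic "real" samples drawn from a
      # shifted Gaussian while a discriminator tries to tell real from fake.
      import torch
      import torch.nn as nn

      def real_data(n):                      # "real" samples the forger must imitate
          return torch.randn(n, 1) * 0.5 + 2.0

      def noise(n):                          # random input for the generator
          return torch.randn(n, 1)

      G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                  # forger
      D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # detective
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCELoss()

      for step in range(2000):
          # 1) Train the discriminator to label real samples 1 and fakes 0.
          real, fake = real_data(64), G(noise(64)).detach()
          loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
          opt_d.zero_grad(); loss_d.backward(); opt_d.step()

          # 2) Train the generator to make the discriminator call its fakes real.
          fake = G(noise(64))
          loss_g = bce(D(fake), torch.ones(64, 1))
          opt_g.zero_grad(); loss_g.backward(); opt_g.step()

      # The generated mean should drift toward 2.0 as the forger improves.
      print("generated mean:", G(noise(1000)).mean().item())
      ```

      The same dynamic is why biased training sets matter: the discriminator's idea of "real" comes entirely from the data it was shown, so whatever is over- or under-represented there is what the generator learns to reproduce.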

    4. the Barbie from Germany was dress and clothes reminiscent of an SS Nazi uniform and the Barbie from South Sudan was depicted holding a rifle by 00:02:21 her side

      Even though the other Barbies were stereotypical, these specific Barbies take it to a different level. It really sucks that these are the things that AI comes up with.

    5. categorizing by gender with higher income jobs such as doctors CEOs and Engineers being predominantly represented by men whilst professions like cashier social worker and housekeeper were mostly represented 00:01:30 by women

      Wow, AI is BAD

    6. higher 00:01:04 paying professions like CEO lawyer and politician were dominated by lighter skin tones while subjects with darker skin tones were more commonly associated with lower income jobs like dishwasher janitor and fast food worker

      This is very bad

    7. the reality it presents can often be distorted where harmful biases relating to gender race age and skin color can be more exaggerated and more extreme than in the real world

      True, AI image generators often exaggerate these biases in their images.

    8. what does a typical prisoner look like what about a lawyer a nurse a drug dealer what about a fast food worker a 00:00:11 cleaner a terrorist or a CEO

      I wonder if all of these images are what came up the first time they put in the prompts