32 Matching Annotations
  1. May 2023
    1. The Director said “We should know when users leave their house, their commute to work, and everywhere they go throughout the day. Anything less is useless. We get a lot more than that from other tech companies.”

      This is definitely an example of capital being treated as more important than the customers. Invasion of privacy like this can do people more harm than good. There may be rare occasions where giving these companies access to our every move could be lifesaving, but those are exceptions. Some companies are so focused on how to make money that they don't consider how their actions affect their customers.

    1. Inventors ignoring the ethical consequences of their creations is nothing new as well, and gets critiqued regularly:

      I think this is true. We are so busy creating technologically advanced things that we never stop to ask whether an invention is good or whether it may lead to something negative. For example, opinions on ChatGPT vary widely: it can be very helpful to us, but in some ways, or in further applications, it may actually be quite harmful.

    1. Then, much of tech is dependent on exploiting cheap labor, often in dangerous conditions, in other countries (thus extracting the resource of cheap labor, from places with “inferior” governments and economies). This labor might be physical labor, or dealing with dangerous chemicals, or the content moderators who deal with viewing horrific online content.

      Apple's products are made in China; however, we often perceive items made in China as lower quality than those made elsewhere because of the cheap cost of labor. Apple counters this by branding its products as designed in the US, which makes it seem that its items, although made in China, should still be held to the standards of American-made products.

    1. Monopoly

      Personally, I feel that some social media platforms have their own views and agendas, and that there can be a monopoly on views within social media. For example, Twitter holds some form of monopoly in social media, and some report that the platform leans heavily right wing, which could mean that right-wing views hold a monopoly on Twitter.

    1. The Nazi crimes, it seems to me, explode the limits of the law; and that is precisely what constitutes their monstrousness. For these crimes, no punishment is severe enough. It may well be essential to hang Göring, but it is totally inadequate.

      Comparing these crimes to the 'unforgivable' things that lead to cancelling today really puts everything into perspective. The Nazis did something objectively unforgivable, whereas the public today will cancel over anything that generates clicks. The two are in completely different worlds: one is actually unforgivable, while the public shaming of celebrities often seems designed to milk a problem that, most of the time, is not as bad as it is portrayed.

    1. Media experts have been warning for months that American consumers will face starvation if Hollywood does not provide someone for them to put on a pedestal, worship, envy, download sex tapes of, and then topple and completely destroy.

      I find this to be a really good, if satirical, take. In today's culture it does seem true that the world constantly needs someone to talk about, someone made relevant in a bad, publicly shamed light. Perhaps society views this as a necessary way to do media, because it seems to be a never-ending cycle.

    1. As a more recent event on internet-based social media, we find Twitter users trying to identify participants at a white supremacist rally:

      Although this is done to reduce hate and racism, I think it is immoral. We should condemn hate and racism, but doxxing or harassing someone in this way still doesn't eliminate the problem. Perhaps there should be measures to reduce racism and these rallies, but by doxxing participants we are just cyberstalking them. I do agree that the people who attend these events should be fired, but I feel it should be an internal process within their company rather than a mass of people doxxing each person.

    1. partially bot-driven

      Sometimes this harassment could be carried out entirely by bots: a single person can instruct a number of bots to target the same victim. This aggravates the problem because, due to the anonymity of social media, the victim may feel they are being targeted by a large group of people when in reality it is only one person, and that can have big effects on someone's mental health.

    1. This small percentage of people doing most of the work in some areas is not a new phenomenon. In many aspects of our lives, some tasks have been done by a small group of people with specialization or resources. Their work is then shared with others. This goes back many thousands of years with activities such as collecting obsidian and making jewelry, to more modern activities like writing books, building cars, reporting on news, and making movies.

      I think this is normal. Most of the time we use Wikipedia or Stack Overflow to solve our problems because we lack the expertise... We don't go to those sites to find answers we already know. So we rely on people with more skill than us to answer our questions, and it becomes a kind of "ladder" where most users are looking up to another, more skilled user.

    1. In the case of Twitter tracks down mystery couple in viral proposal photos, the problem was “Who is the couple in the photo?” and the solution was again to basically dox them, though in the article they seemed ok with it.

      I think that when people come together to solve something like finding the couple in the proposal photos, wish them well, and send them the pictures, crowdsourcing can be a useful thing. However, nothing stops internet trolls from trolling them too, and the couple's privacy is still breached. Personally, I feel crowdsourcing is too much of a 50-50 for me to be comfortable with it.

    1. Facebook has a suicide detection algorithm, where they try to intervene if they think a user is suicidal (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways of getting around this by inventing new word uses, like “unalive.”

      I find this very interesting. I suspected there might be some way for social media platforms to control what is posted so their platform is safer. However, people will always find ways around it, like the "unalive" example, or swapping letters for special characters that resemble them. Personally, I don't think this helps much... throughout the day we already see plenty of negative things, and this isn't really an exception. I can't speak for people experiencing this, and these things don't really affect me, but I have seen these algorithms at work before; I just don't think they're effective yet. A rough sketch of why keyword-based filtering is so easy to evade is below.
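
      As a minimal illustration (my own sketch, not Facebook's or any platform's actual system, and with an invented word list), simple keyword matching misses even obvious substitutions, which is why terms like "unalive" slip through:

      ```python
      # Naive keyword filter (hypothetical word list, for illustration only).
      FLAGGED_WORDS = {"suicide", "kill myself"}

      def is_flagged(post_text):
          """Return True if the post contains any flagged keyword."""
          text = post_text.lower()
          return any(word in text for word in FLAGGED_WORDS)

      print(is_flagged("I want to kill myself"))             # True: exact keyword match
      print(is_flagged("thinking about unaliving myself"))   # False: "unalive" slips past
      print(is_flagged("su1cide"))                            # False: character swap also evades it
      ```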

    1. People historically came to cosmetic surgeons with photos of celebrities whose features they hoped to emulate. Now, they’re coming with edited selfies. They want to bring to life the version of themselves that they curate through apps like FaceTune and Snapchat.

      I agree with this statement a lot. I know a lot of people from home who got plastic surgery, and they no longer use sample pictures of celebrities. Even some surgeons use apps to show patients what they would look like after the operation. When it's this easy for people to preview what they would look like afterward, it's hard not to be tempted by a "more refined" version of themselves.

    1. Clickbait

      I think I see this most often out of all the ways of 'gaming' these algorithms. I believe it's the easiest way to do it because you don't need any real information; the point is just to promote interaction with whatever is behind the link.

    1. The guidelines suggested harsher sentences on the version of cocaine more commonly used by Black people, and lighter sentences on the version of cocaine more commonly used by white people.

      I feel like this can also be true for search engines. Given the way they profile us by the different criteria shown in Google's settings, I wouldn't be surprised if these algorithms take race into account as well.

  2. Apr 2023
    1. The user might be blind or low-vision. Their device or internet connection might not support images. Or perhaps all the images got deleted (like what happened to The Onion).

      The idea of alt-text is a good step forward, since it promotes inclusivity for people who have disabilities. I think it would help a lot if programs could recognize pictures accurately rather than relying on people to label them, because right now posting a picture still requires a fair amount of extra work in this case.

    1. When the person with dark skin takes off the white paper towel, the soap dispenser won’t work for them anymore.

      Shouldn't the dispenser use a proximity sensor? Wouldn't that be better, since it would detect that an object is near regardless of its colour? The fact that these things can happen shows that we need to put more emphasis on design and on how people interact with these objects.

    1. Therefore if someone had access to the database, the only way to figure out the right password is to use “brute force,” that is, keep guessing passwords until they guess the right one (and each guess takes a lot of time).

      The only issue is that many people tend to reuse the same passwords and usernames across their social media sites, which makes them vulnerable. Even if the hacker would need to brute force it, wouldn't there be a program that could do that for them? And if they got lucky, they might gain access to ALL of a user's accounts: social media, or even something more important like banking. A rough sketch of what brute forcing a hashed password looks like is below.
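
      As a toy sketch (assuming a fast, unsalted hash, which real sites should not use; slow, salted schemes like bcrypt are standard), this is roughly what "keep guessing until a hash matches" means, and why a reused password cracked once can unlock many accounts:

      ```python
      import hashlib

      # What a leaked database might hold: a hash, not the password itself.
      # (Toy example: one round of SHA-256; real systems use slow, salted hashes.)
      stolen_hash = hashlib.sha256("hunter2".encode()).hexdigest()

      # Brute force: hash each guess and compare it against the stolen hash.
      common_guesses = ["password", "123456", "qwerty", "hunter2"]  # assumed wordlist
      for guess in common_guesses:
          if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash:
              print("Cracked:", guess)  # if reused, this one guess opens many accounts
              break
      ```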

    1. Deanonymizing Data: Sometimes companies or researchers release datasets that have been “anonymized,” meaning that things like names have been removed, so you can’t directly see who the data is about. But sometimes people can still deduce who the anonymized data is about. This happened when Netflix released anonymized movie ratings data sets, but at least some users’ data could be traced back to them.

      This is a very interesting point. As users, we want our information to be anonymous and protected, but we also enjoy how convenient some apps' features are, and that isn't possible without feedback from datasets. However, how do we know the data can be used to generalize about the population accurately? And if some of the data can be traced back to a user, how is it really anonymized? A small sketch of how such re-identification can happen is below.
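
      As a made-up illustration (not the actual method used against the Netflix dataset, which linked ratings and dates to public IMDb reviews), records with the names removed can still be re-identified by joining on "quasi-identifiers" that also appear in some public dataset:

      ```python
      # "Anonymized" table: names removed, but zip code and birth year remain.
      anonymized_ratings = [
          {"zip": "98105", "birth_year": 1988, "movie": "Titanic", "rating": 1},
          {"zip": "60614", "birth_year": 1975, "movie": "Heat",    "rating": 5},
      ]

      # Public data (e.g. a profile page) that includes the same quasi-identifiers.
      public_profiles = [
          {"name": "A. Example", "zip": "98105", "birth_year": 1988},
      ]

      # Joining on (zip, birth_year) re-identifies the "anonymous" record.
      for row in anonymized_ratings:
          for person in public_profiles:
              if (row["zip"], row["birth_year"]) == (person["zip"], person["birth_year"]):
                  print(person["name"], "probably rated", row["movie"], row["rating"], "star(s)")
      ```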

    1. But targeted advertising can be used in less ethical ways, such as targeting gambling ads at children, or at users who are addicted to gambling, or the 2016 Trump campaign ‘target[ing] 3.5m black Americans to deter them from voting’.

      I've seen many examples of these and honestly believed they might be coincidences. When I was younger, I would always see gambling sites on sports pages. I knew there was a relationship between gambling and sports results, so I thought it was just coincidence that I was seeing these things. However, the same ads soon followed me around, and knowing now that they were being targeted at a younger me is a pretty scary thought.

    1. One particularly striking example of an attempt to infer information from seemingly unconnected data was someone noticing that the number of people sick with COVID-19 correlated with how many people were leaving bad reviews of Yankee Candles saying “they don’t have any scent” (note: COVID-19 can cause a loss of the ability to smell):

      I honestly think this is something that could happen. When I had COVID I really could not smell anything, and there were pretty much no other symptoms. You could say these reviewers are just clueless about the symptoms of COVID, or that they might not even know they had it -- which would mean there is a correlation between bad candle reviews and COVID cases. However, that doesn't mean COVID causes candles to lose their scent, or justify other forms of misinformation. A small sketch of what that correlation looks like is below.
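
      With invented numbers (just to show what "correlated" means here, not real review or case counts), a correlation coefficient close to 1 is exactly the pattern the tweet describes, and it still says nothing about causation:

      ```python
      # Made-up weekly figures: COVID case counts vs. "no scent" candle reviews.
      covid_cases      = [100, 250, 400, 900, 1500, 2200]
      no_scent_reviews = [  3,   5,   9,  14,   22,   31]

      # Pearson correlation, computed by hand.
      n = len(covid_cases)
      mean_x = sum(covid_cases) / n
      mean_y = sum(no_scent_reviews) / n
      cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(covid_cases, no_scent_reviews))
      var_x = sum((x - mean_x) ** 2 for x in covid_cases)
      var_y = sum((y - mean_y) ** 2 for y in no_scent_reviews)

      r = cov / (var_x * var_y) ** 0.5
      print(f"Pearson correlation: {r:.2f}")  # close to 1, yet correlation is not causation
      ```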

    1. Fig. 7.5 Various comments on the fake pronunciation video

      Trolling can be fun and games, but when it reaches this extent it becomes problematic. We can clearly see from the comments that this trolling video pops up first when someone searches for the real video, which can end up spreading misinformation.

    1. Going with the gatekeeping role above, trolling can make a troll or observer feel smarter than others, since they are able to see that it is trolling while others don't realize it.

      Personally, the worst part of trolling is this, along with being able to feel powerful. As mentioned here, people who troll often have a condescending outlook towards others. I don't know what it stems from, but from seeing trolls online, they often look down on those who disagree with their views or who interact with their posts.

    1. Sockpuppet (or a “burner” account): Creating a fake profile in order to argue a position (sometimes intentionally argued poorly to make the position look bad)

      I think the post made by the candidate is unforgivable. Since, for the most part, we can remain anonymous in the social media world, we can create profiles or narratives that help or harm a cause, just like this candidate did. Burner accounts can be abused to relay any message the user wants without disclosing who they are.

    2. Dr. McLaughlin pretended to be a person (@Sciencing_Bi) who didn’t exist.

      These things can definitely become a problem in the social media world. Our social media accounts are representations of ourselves, and what we show on them is sometimes inauthentic... However, inventing a new persona doesn't necessarily cross into the realm of impersonation, so I wonder where the line is when creating personas online.

    1. Another example of intentionally adding friction was a design change Twitter made in an attempt to reduce misinformation: When you try to retweet an article, if you haven’t clicked on the link to read the article, it stops you to ask if you want to read it first before retweeting.

      I never knew this was a thing. I feel it can be really useful, but if the tweet is from a reputable source -- say you are trying to retweet a warning that a site is a scam or steals IP addresses -- adding friction just makes the person trying to help others go through a bit more work. Even though it's very minimal, I still think these small things can make someone not want to retweet anymore.

    1. In the 1980s and 1990s, Bulletin board system (BBS) provided more communal ways of communicating and sharing messages. In these systems, someone would start a “thread” by posting an initial message. Others could reply to the previous set of messages in the thread.

      I heard there were also sites a while ago that allowed you to illustrate or post anything on a page, and over time that page became very populated. I'm thinking it's a similar concept to this, or maybe was even inspired by it and turned into an art form.

    1. Can you think of an example of pernicious ignorance in social media interaction? What’s something that we might often prefer to overlook when deciding what is important?

      I think these definitely happen a lot. For example, celebrities would post about how miserable their lives were in the pandemic while living in mansions, whereas ordinary people were losing their jobs and health just trying to get by. I think we often overlook the broader situation around a tweet: when we post something online, we give ourselves most of the focus and disregard other people who might be affected as well.

    1. Data collection and storage can go wrong in other ways as well, with incorrect or erroneous options. Here are some screenshots from a thread of people collecting strange gender selection forms:

      I don't really understand how a mistake like this can happen. From a coding perspective, I find it hard to see how these dropdowns end up with unrelated options, since they should be written at around the same time. I feel this is most likely due to human error, or just not being careful enough while making the page. A guess at how such a slip might look in code is sketched below.
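
      Purely as a hypothetical guess (the field and option names here are invented, not from any real form), one common way unrelated choices end up in a dropdown is a form field being wired to the wrong shared options list:

      ```python
      # Shared option lists that several pages reuse.
      FORM_OPTIONS = {
          "gender":    ["Woman", "Man", "Non-binary", "Prefer not to say"],
          "age_range": ["18-24", "25-34", "35-44", "45+"],
      }

      # The page template maps each visible field label to an options key.
      page_fields = {
          "Age range": "age_range",
          "Gender":    "age_range",   # bug: copy-pasted key, should be "gender"
      }

      for label, options_key in page_fields.items():
          print(label, "->", FORM_OPTIONS[options_key])
      # The "Gender" dropdown now shows age ranges: an unrelated list, purely from human error.
      ```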

    1. This allows the protesters to remain anonymous and the donkey unaware of its political mission.

      There is no real difference between the use of donkeys here and the use of bots. Both let the people behind the idea remain anonymous while still spreading their message. However, the responsibility stays with the humans, because they are the ones who planned it and ultimately made it happen.

    1. We will not consider those to be bots

      This is very interesting. I always thought any account posting automated-seeming responses was considered a bot account, so learning that some of these are actually humans is curious. In that case there is really not much difference between a piece of code and a human paid to post the same things.

  3. Mar 2023
    1. TickTock

      I think the growth of TikTok pretty much aligns with people's shortening attention spans: everyone wants information in the shortest amount of time possible, and TikTok allows for that, even though what they get may be misinformation.

    1. Like how water (soft and yielding), can, over time, cut through rock.

      This is a very good point, that over time water can erode things. In addition, I believe Taoism also teaches a karma-like system where every action is tracked, and your wellbeing can depend on it.