6 Matching Annotations
  1. Jan 2026
    1. Given all that can be at stake in making decisions on how data will be stored and constrained, choose one type of data a social media site might collect (e.g., name, age, location, gender, posts you liked, etc.), and then choose two different ethics frameworks and consider what each framework would mean for someone choosing how that data will be stored and constrained.

      INFO 103 Winter 2026 The group I want to focus on is the LGBTQ community. Under an ethics framework concerned with group discrimination, it matters greatly whether identity information about members of these groups is stored at all, and whether others' access to that information is constrained, because social media can easily enable bad actors to do harm. Therefore, LGBTQ users should have the right to decide whether or not to share their identity.

    1. As you can see, TurboTax has a limit on how long last names are allowed to be, and people with too long of names have different strategies with how to deal with not fitting in the system.

      INFO 103 Winter 2026 It is extremely important to ensure that the experience is consistent across different users. Otherwise, some users will feel discriminated against, which can be fatal to a software design.

    1. What bots do you find surprising?

      INFO 103 Winter 2026 What surprises me the most, or rather worries me, are the bots with a degree of intelligence. They can make teasing, humorous comments in the comment sections of people's posts just like humans do. If they have that kind of intelligence and are not properly regulated, will they make inappropriate remarks and push public opinion in a harmful direction?

    1. There is no way in which police can maintain dignity in seizing and destroying a donkey on whose flank a political message has been inscribed.

      INFO 103 Winter 2026 I think this is absolutely right. It reminds me of the rapid development of AI in recent years. One thing people are extremely worried about is what happens if AI becomes so powerful that even humans cannot control it, and it invades media platforms like a hacker and changes the direction of public opinion; that would be terrifying. But in that case, who could be convicted, and how?

    1. Actions are judged on the sum total of their consequences (utility calculus)

      I quite agree with the core idea of consequentialism. In this era of rapid technological advancement, the driving factor for entrepreneurs launching a product is often profit, so they think only about how to make more money rather than about the problems the product might bring to society after launch. I think this is wrong: they should fully consider the impact the product will have on society before releasing it.

    1. “We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast.

      I think this point reflects the rapid development of AI today. People often worry that if AI advances far enough, it will replace a large number of existing jobs and raise moral and ethical questions, such as whether an AI that develops something like perception might do harm to humans.