- Feb 2020
-
-
Although he’ll be busy, Superintendent Prewitt says he doesn’t mind having to take time each day to read through flagged posts that are sometimes irrelevant, even if it can be distracting
In my opinion, this ties to the Utilitarian approach. The superintendent recognizes that these methods are not perfect and may make errors, but they still do the greatest good for the greatest number: when the system is right, it saves many innocent lives. I think he makes a good point that some sacrifices are necessary to keep the safety of the school as high as possible.
-
Patton worries that the technology being proffered to schools may be more likely to misfire on language used by black youth,
I believe this comment touches heavily on the Justice dimension of ethics. Since many algorithms have built-in racial bias, certain races will be flagged more often than others, which violates justice ethics because it does not treat each child equally. I think this bias should be ironed out before further implementation.
-
ethically extracted by software
I personally believe public information should be used to protect the safety of the school and its students, but there is a fine line between personal data and data relevant to the school. I agree that if the information is related to the school, administrators should act, but I believe it is outside the school's jurisdiction to monitor parts of students' lives that do not take place at school. Extracting information through software and algorithms worries me because it could significantly reduce students' privacy.
-
-
colig.github.io
-
The book Ethics of Emerging Technologies: Scientific Facts and Moral Challenges explains the different types of ethical principles and uses concrete situations to analyze them. The principle that intrigued me the most was Utilitarianism, because of the way it competes against every other theory in some respect. When I originally learned about this theory, I thought it simply meant doing the greatest good for the greatest number, but I learned there was more behind it. A big part of Utilitarianism revolves around consequences and understanding what they will be based on your action. Utilitarianism differs from many theories because, in some circumstances, it is better for the greatest number of people to take a specific action than to do what your particular duty demands, or what justice says is fair for everyone. I believe Utilitarianism introduces a quality of realism, because it requires understanding the consequences and knowing that no action will make everyone happy.
-
- Jan 2020
-
learningfutures.github.io
-
Technology can’t be value neutral because people aren’t value neutral
This idea ties directly to Dash's comment about social media. He explains that social media sites take credit for positive outcomes when the technology is working well, yet claim to be "neutral" when it is not. I agree that as long as humans are the ones programming and creating new technologies, there will be inherent bias behind every machine or piece of technology. I believe finding a way to make technology more value neutral will be a very important piece of the puzzle for enhancing the accuracy and reliability of technology in the future.
-
moral questions raised by technology. The philosophers here are less concerned with the effects, risks, and consequences of particular technologies
This ties directly to the podcast, where Dash explains that the question is not what happens if a new technology or app fails, but what happens if it succeeds: what would the world look like if everyone adopted it? These few sentences wrap up Dash's main point. He does not want to limit technology, and he is not scared of it; rather, he wants to ensure that technology is ethical and humane.
Personally, I think we should be cautious about the speed at which we implement new technology, but I fully agree with Dash's emphasis on the moral side of technology. I see some of the negative effects firsthand every day and think they will be important to keep in mind moving forward.
-