40 Matching Annotations
  1. Apr 2022
    1. Similarly, while it’s a mystery exactly how NarxCare may incorporate criminal justice data into its algorithm, it’s clear that Black people are arrested far more often than whites.

      The algorithm contains a certain amount of discrimination, and its opacity allows that discrimination to persist uncorrected. This opacity also makes people increasingly question the algorithm's rationality.

    2. Appriss says that it is “very rare” for pets’ prescriptions to drive up a patient’s NarxCare scores.

      Clearly, a simple score-based judgment here departs from the facts. Some pet owners may abuse drugs under the pretext of treating their pets, but others genuinely fill prescriptions for their pets. Doctors and hospitals cannot rely on the result of a simple data analysis alone; they should make professional judgments about each patient's situation.

    1. The FTC should establish (or require the scoring industry to establish) a mandatory public registry of consumer scores because secret consumer scoring is inherently an unfair and deceptive trade practice that harms consumers.

      Third-party regulatory agencies are necessary: they need to monitor the fairness and accuracy of the scoring model, and they should also monitor how the scores are used.

    2. Secret scores can be wrong, but no one may be able to find out that they are wrong or what the truth is. 

      We cannot judge whether the score is correct, yet the score still affects us. Only the people who develop the score have the opportunity to correct it, and even then the correction reflects the views of an individual or a team, so the fairness of the score cannot be guaranteed.

    1. Not only do problematic search results seem “normal,” but they seem completely unavoidable as well, even though these ideas have been thoroughly debunked by scholars.

      As it stands, algorithms cannot solve the problem of discrimination on their own. Search engines simply output results based on what people search for. Perhaps we need broader social education efforts to eliminate such discrimination.

    2. What is even more crucial is an exploration of how people living as minority groups under the influence of a majority culture, such as people of color and sexual minorities in the United States, are often subject to the whims of the majority and other commercial influences such as advertising when trying to affect the kinds of results that search engines offer about them and their identities.

      In the Internet age information should be more diverse, but in reality the views of minority groups are ignored and discrimination becomes more serious. We need fairer algorithms to help eliminate such discrimination.

    1. Search engine design is not just a technical matter, but a political one.

      In today's highly informatized and networked era, search engine results exert a real influence on people's values and worldviews. In that sense, search engine design carries a political dimension.

    2. In this sense, the internet is full of “information borders” that users cannot

      Information borders are hard to cross, but in some ways that is not a bad thing. Language is one example: if search results appear in another language, the searcher may not be able to understand them, and such results are obviously not what they need.

    1. Mobile traces are an important new resource in tracking human mobility, but there is a tension between using these data as an engineering tool for policy-relevant research and understanding its contextual, ethical and political dimensions.

      More data helps us better understand people's behavior, supports sociological research, and provides an empirical basis for making good policy. But collecting it also violates people's privacy, and its misuse can even contribute to social instability. We need to use these data rationally and find a balance.

    2. In the US such data were at first only available to emergency services in case of personal danger. Soon, however, access was also extended to law enforcement and other authorities (Pell and Soghoian, 2012), and in 2013 the revelations of Edward Snowden made it clear that many governments were collaborating to use mobile data for mass surveillance.

      As technology advances, these data are accessed by more and more agencies, gradually shifting from a protective measure to a degree of surveillance.

    1. But it is in this period, when the Census Bureau was asked to produce immigration quotas, that the political dimension of its activity came to the fore.

      With immigration on the rise, the census was no longer just a tool for social research but a data basis for policy making. As a result, the need for a reasonable way of classifying the population also grew.

    2. The stages of development of the census translate into the creation of more abundant and better quality archives.

      The progress of the census also reflects the progress of society: as immigration continued to increase, the census continued to improve.

    1. In the nineteenth century statistics on national origin began gradually to take into account the perceptions of inhabitants.

      As the era of mass immigration arrived, population classification became fairer, more objective, and more humane. But discrimination between races still existed. Is there a more equitable way of classifying the population that could help eliminate this discrimination?

    2. One important consequence of this decision was that the federal immigration service likewise had to accept that Mexicans were white— this, at a time when the law restricted immigration and naturalization to whites.

      Some policies, such as immigration policy, are clearly biased along racial lines. This discriminatory treatment further exacerbates discrimination between races.

    3. Their answers reflect local opinion, and that opinion probably is based more upon social position and manner of life than upon the relative amounts of blood.

      Before ancestry was used, people were classified according to social position and way of life rather than racial distinctions. Was that fairer?

  2. Mar 2022
    1. The company responded to her email, she said, by saying it could not meet her demands and that her resignation was accepted immediately. Her access to company email and other services was immediately revoked.

      It turns out that companies are profit-oriented: when it comes to confronting algorithmic discrimination and social ethics, most of the time they choose to walk away.

    2. Because this text includes biased and sometimes hateful language, the technology may end up generating biased and hateful language.

      When the input data already contains bias and hate speech, the algorithm cannot make the final output fair and benign. The question the researcher raised was an ethical one, but Google did not accept her view.

    1. This is exemplified by media outlets that tend to ignore peaceful protest activity and instead focus on dramatic or violent events that make for good television but nearly always result in critical coverage [81].

      In most cases, extreme events are more likely to attract attention, and media outlets will even dramatize facts to gain it. As a result, much of the information reported and amplified on the Internet skews toward the extreme, even though extreme events are not actually that frequent in reality.

    2. Starting with who is contributing to these Internet text collections, we see that Internet access itself is not evenly distributed, resulting in Internet data overrepresenting younger users and those from developed countries [100, 143].

      Internet data samples are uneven: poor or remote areas contribute very few users, so conclusions drawn from Internet data are biased toward the societal majority and fail to account for minority groups.

    1. So after Te Hiku rejected the offer from Lion Bridge, Mahelona and Jones published their rejection along with a video explaining why and the risk in selling their language to an American corporation.

      Te Hiku acted with clear-eyed selflessness, firmly defending the future interests of the Māori and their right to control their own data.

    2. But with just its initial 320 hours of data, Te Hiku was able to build a speech-to-text engine with an initial word error rate of 14 per cent

      This is an impressive achievement, and it also reminds us that saving a language requires the effort of its own speakers. A rough sketch of how word error rate is computed follows below.
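
      The figure quoted is a word error rate (WER), the standard speech-to-text metric. As a rough sketch only (this is the generic word-level Levenshtein definition, not Te Hiku's actual evaluation code), WER can be computed like this:

      ```python
      # Minimal sketch of word error rate (WER): edit distance over words,
      # divided by the length of the reference transcript.
      def word_error_rate(reference: str, hypothesis: str) -> float:
          ref, hyp = reference.split(), hypothesis.split()
          # dp[i][j] = edit distance between the first i reference words
          # and the first j hypothesis words
          dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              dp[i][0] = i
          for j in range(len(hyp) + 1):
              dp[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                                 dp[i][j - 1] + 1,        # insertion
                                 dp[i - 1][j - 1] + cost) # substitution
          return dp[len(ref)][len(hyp)] / len(ref)

      # Two errors over six reference words: WER of about 0.33.
      # A 14 per cent WER means roughly one word in seven comes out wrong.
      print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
      ```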

    1. Consider that the data generated by users in developed contexts like the U.S. is far more monetizable today than data generated by the company’s huge and growing user base in the linguistically-diverse developing world.

      Private companies are usually profit-oriented, and the many languages of developing regions do not bring them enough revenue. International organizations and policy incentives are therefore needed to support these smaller languages.

    2. This takes place, for example, when data first collected to identify, serve, and give voice to refugees, later, through “function creep” (Ajana, 2013), is used to monitor refugees’ activities and limit their movements as their status shifts from objects of pity to national security threats (Madianou, 2019).

      Refugee data is collected to better serve and house refugees, but it becomes subject to discriminatory reuse and ultimately turns into a means of monitoring and controlling them.

    1. This prediction is known as the defendant’s “risk score,” and it’s meant as a recommendation: “high risk” defendants should be jailed to prevent them from causing potential harm to society; “low risk” defendants should be released before their trial. (In reality, judges don’t always follow these recommendations, but the risk assessments remain influential.)

      Is it really fair to decide a defendant's treatment on the basis of a risk score? Two people indicted for violating the same law can receive different outcomes because of different risk scores. Those judged high-risk are effectively punished in advance for things they have not yet done. Isn't that an implicit form of discrimination? A minimal sketch of the cutoff logic the passage describes follows below.
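
      As a minimal sketch only (the 1-10 scale and the cutoff of 7 are illustrative assumptions, not the values of any real risk-assessment tool), the recommendation logic the passage describes boils down to a single threshold:

      ```python
      # Minimal sketch of a cutoff-based pretrial recommendation.
      # The score scale and threshold are illustrative assumptions.
      def recommendation(risk_score: int, threshold: int = 7) -> str:
          """Map a risk score to a detain/release recommendation."""
          return "detain before trial" if risk_score >= threshold else "release before trial"

      # Two defendants charged under the same law but given different scores
      # receive different recommendations, which is the fairness concern above.
      print(recommendation(8))  # detain before trial
      print(recommendation(4))  # release before trial
      ```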

    2. No matter how much data we collect, two people who look the same to the algorithm can always end up making different choices. 

      Even if two people's backgrounds are nearly identical today, they may make different choices in the future. Could we continuously monitor them and keep updating their risk status? Someone may truly be low risk at the moment, yet events over time could raise that risk. Should a single, static risk score really serve as the final judgment?

    1. Productivity used to grow in tandem with labor compensation; however, that has changed dramatically since the 1970s. Productivity has continued to grow, but wages stagnated.

      Productivity rose, but workers' wages did not. From this point of view, workers' interests are harmed less by the application of artificial intelligence than by compensation and welfare policy.

    2. The Terminator narrative of AI and automation very often depicts “low-skill” or “blue collar” workers as the most likely victims of automation.

      In fact, the application of artificial intelligence can free people from boring, repetitive work so that they can take on more creative tasks, with robots assisting in the simple and trivial ones.

    1. Pervasive surveillance in the workplace (Ajunwa et al., 2017) and the nontransparent collection and deployment of data (Ajunwa, 2018a) raise new legal questions, but most alarming is their encroachment on workers’ bodily autonomy and personhood.

      Wearable workplace devices collect ever more data about workers on the job and use it to quantify their work. But judging workers only by metrics such as working hours is biased, and the constant monitoring these devices impose infringes on workers' legitimate rights and interests. The data should be collected more appropriately, and analysis results should serve only as one reference for the final assessment.

    2. automated hiring system—which can operate as an end-run around established employment antidiscrimination laws to deny some workers equal opportunity for employment—is worker domination.

      The lack of openness and transparency in the “black box at work” raises questions about its fairness. Because its procedures are hidden from the outside world, it is also easier for insiders to manipulate them toward desired results. This works against candidates' interests, yet there are currently no laws or guidelines regulating such black boxes. Is there a good way to monitor how they operate?

  3. Feb 2022
    1. In contrast to critical race studies analyses of the dystopian digital divide and cybertyping, another stream of criticism focuses on utopian notions of a “race-free future” in which technologies would purportedly render obsolete social differences that are divisive now. The idea that, “[o]n the Internet, nobody knows you’re a dog” (a line from Peter Steiner’s famous 1993 cartoon, featuring a New Yorker-typing canine) exemplifies this vision. However, this idea relies on a text-only web, which has been complicated by the rise of visual culture on the Internet.

      The Internet can indeed transcend the limits of human difference. But precisely because no one knows whether you are a dog, people are also more likely to behave in extreme ways online.

    2. found that the algorithm associated White-sounding names with “pleasant” words and Black-sounding names with “unpleasant” ones.

      If preferences are associated with names alone, perhaps it is the data itself that discriminates and the algorithm merely perpetuates that discrimination. Shouldn't we also find ways to remove the discrimination embedded in the data? A rough sketch of how such name-word associations are measured appears below.
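
      As a rough sketch (in the spirit of the word-embedding association tests such studies use; the vectors and word lists below are toy illustrative values, not real embeddings), the measurement compares a name's similarity to “pleasant” versus “unpleasant” words:

      ```python
      # Minimal sketch of measuring name-word associations in word embeddings.
      # The 3-d vectors are toy stand-ins for what a real model learns from text.
      import numpy as np

      def cosine(u, v):
          return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

      def association(name_vec, pleasant_vecs, unpleasant_vecs):
          """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
          return (np.mean([cosine(name_vec, w) for w in pleasant_vecs])
                  - np.mean([cosine(name_vec, w) for w in unpleasant_vecs]))

      pleasant = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])]    # e.g. "joy", "love"
      unpleasant = [np.array([0.1, 0.9, 0.0]), np.array([0.0, 0.8, 0.2])]  # e.g. "agony", "evil"
      name_a = np.array([0.85, 0.15, 0.05])  # a name the model has placed near pleasant words
      name_b = np.array([0.15, 0.85, 0.05])  # a name the model has placed near unpleasant words

      print(association(name_a, pleasant, unpleasant))  # positive: leans "pleasant"
      print(association(name_b, pleasant, unpleasant))  # negative: leans "unpleasant"
      # A systematic gap between groups of names is the bias such studies report.
      ```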

  4. data-ethics.jonreeve.com
    1. the academy is by no means the sole driver behind the computational turn. There is a deep government and industrial drive toward gathering and extracting maximal value from data, be it information that will lead to more targeted advertising, product design, traffic planning, or criminal policing

      Big data technology also plays a major role in the government sector. At the same time, problems such as discrimination and privacy leakage cannot be solved by improving algorithms alone; they also require government regulation of these practices.

    2. In reality, working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth – particularly when considering messages from social media sites.

      Selecting and processing big data, and choosing the models applied to it, all involve human decisions, which inevitably reflect the subjective views of the people doing the work. This can make the results less objective and fair, and can further lead to discrimination.

    3. It is not enough to simply ask, as Anderson has suggested 'what can science learn from Google?', but to ask how the harvesters of Big Data might change the meaning of learning, and what new possibilities and new limitations may come with these systems of knowing.

      We should pay attention to the negative impacts of technological innovation. Big data has already brought certain problems, such as algorithmic discrimination and leaks of personal privacy.

    1. even when we return to the fundamentals of moral philosophy: asking whether something is 'done ethically' does not question who defines and enforces what a good life is, and for whom, and from what position of power, or not, that decision is being made.

      As the author says, 'ethics' is a very broad and still undefined notion. What is good and what is bad, who gets to define it, and does society accept that definition? Everyone's worldview and values differ, so society today has no unified definition of 'ethics.'

    2. This paper has argued that the current focus on and enactment of ‘ethics’ will not facilitate social justice in algorithmic technology.

      'Ethics' is indeed a very broad framing, and the author hopes to break these moral questions down in more detail and address them step by step. Inequality in data, for example, is something we can already start to address by improving the technology.

    1. Americans nervous about everything from automation to data privacy to catastrophic accidents with advanced AI systems.

      AI systems collect people's data and may use it for research on user behavior, which makes people feel their privacy has been violated; this is a pressing issue for internet companies.

    2. AI ethics boards like Google’s, which are in vogue in Silicon Valley, largely appear not to be equipped to solve, or even make progress on, hard questions about ethical AI progress.

      Establishing committees alone will not solve the problem of AI ethics; it requires cooperation and effort across the company's systems, culture, and engineers. A committee can only play a monitoring and adjudicating role.

    1. Human nature is by default free and at liberty to choose good or wrong. The individual must use their liberty to act in harmony with the rest of society.

      Individual liberty should be guaranteed, but unlimited freedom will inevitably harm the liberty of others. Therefore, we need to pursue a balance between individual freedom and social harmony.

    2. If the essence of personhood is rationality, and no individual can achieve complete rationality through self-means, then no one is a person, or at best no one is a full person. Rationality and dehumanization are thus linked: personhood based on rationality is a reduction of personhood.

      This is a very interesting point of view. Personhood is complex and multi-faceted, and rationality is only one of its aspects. Since artificial intelligence can currently be built only on rationality, it is a simplified form of that complex personhood.