39 Matching Annotations
  1. Apr 2022
    1. Clearly then, reclaiming this critical point of view requires a framework for conceptualizing the harm and discrimination that can be caused through algorithmic technologies as an issue of social inequality, rather than ‘ethics’.

      A correct understanding of why certain behaviors are harmful can help restrain us from engaging in them.

    2. Unsurprisingly, the way in which ‘ethics’ is currently enacted and deployed is increasingly criticized.

      Our views on ethics are constantly changing.

    1. But the most troubling thing, according to researchers, is simply how opaque and unaccountable these quasi-medical tools are. None of the algorithms that are widely used to guide physicians’ clinical decisions—including NarxCare—have been validated as safe and effective by peer-reviewed research. And because Appriss’ risk assessment algorithms are proprietary, there's no way to look under the hood to inspect them for errors or biases.

      This opacity also makes it hard to know whether a model is actually working, especially for an algorithm deployed across society, because we have little access to data about its effectiveness.

    2. In essence, Kathryn found, nearly all Americans have the equivalent of a secret credit score that rates the risk of prescribing controlled substances to them. And doctors have authorities looking over their shoulders as they weigh their own responses to those scores.

      No matter how mature a model is, it has an error rate. If every institution relies only on this score, how difficult will life become for the people the model misjudges?

    1. Consumers who are victims of identity theft can have their credit or consumer scores affected thereby and may have little recourse even though errors may have major consequences for their ability to function in the economic marketplace. Other consumers can also have their lives affected by the use of consumer scores to determine eligibility for important opportunities in the marketplace. Some consequences may be less significant.

      If the scoring criteria are not made public, it is difficult to know how much any given action will affect us. Publicly stated penalties, by contrast, make people aware of the consequences of a behavior and can, to a large extent, deter it.

    2. Consumer scores abound today. Credit scores based on credit files receive much public attention, but many more types of consumer scores exist. They are used widely to predict behaviors like spending, health, fraud, profitability, and much more. These scores rely on petabytes of information coming from newly available data streams. The information can be derived from many data sources and can contain financial, demographic, ethnic, racial, health, social, and other data.

      I wonder whether such a rating system will make some people's accounts more valuable than others; for example, some people may always be recommended low-priced but high-quality items when shopping online. In the future, will we need to manage our own behavior to deliberately "train" our accounts into giving us a better experience?

    1. This change provides a window of opportunity for reconfiguring how we think about society, technology and the economy. Now is a good moment to draw out strategies for change. We need to stop talking about large-scale work replacements caused by robots, and remind ourselves that technological innovation and change follows policy and investment decisions. The state, not just the private sector, plays a central role here, as economist Mariana Mazzucato has reminded us. 

      Some companies change their lines of business and working patterns in response to tax incentives. If tax-incentive policies were designed to reward improvements in employee benefits and working conditions, they might raise workers' quality of life and achieve a partial redistribution of income.

    2.  This is how automation and the rise of inequality are linked: not through technological change, per se, but political and economic decisions made upstream. Not seeing this relationship clearly pits certain humans — not all humans — against machines in ways that have us focus too much on the machinery and make the wrong decisions around workers’ rights and well-being.

      Perhaps in the future computing power will be a very important resource, and applying AI to highly skilled work will be the more economical choice. In that case, blue-collar labor in some countries may be more cost-effective than resource-intensive AI. This is just an idea and may seem a little implausible at the moment.

    1. SEO is the process of “using a range of techniques, including augmenting HTML code, webpage copy editing, site navigation, linking campaigns and more, in order to improve how well a site or page gets listed in search engines for particular search topics,”49 in contrast to “paid search,” in which the company pays Google for its ads to be displayed when specific terms are searched.

      I have used a simple SEO website before: I could upload my articles and target keywords, and it would analyze the keywords and article structure, for example telling me how many times a keyword should appear in the article. A rough sketch of that kind of keyword check is shown below.
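      As a minimal sketch (not the actual tool described above, whose internals are unknown), the kind of keyword check such a site might run can be approximated in a few lines of Python; the function name, the sample article, and the keywords are all hypothetical.

      ```python
      # Hypothetical sketch of a simple SEO-style keyword check:
      # count how often each target keyword appears and compute its density.
      import re

      def keyword_report(text, keywords):
          words = re.findall(r"[a-z0-9']+", text.lower())
          total = max(len(words), 1)
          report = {}
          for kw in keywords:
              # Naive substring count; a real tool would be more careful.
              count = len(re.findall(re.escape(kw.lower()), text.lower()))
              report[kw] = {"count": count, "density_pct": 100 * count / total}
          return report

      article = "Search engines rank pages. Good search content repeats key phrases naturally."
      print(keyword_report(article, ["search", "rank"]))
      # e.g. {'search': {'count': 2, ...}, 'rank': {'count': 1, ...}}
      ```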

    2. What each of these searches represents are Google’s algorithmic conceptualizations of a variety of people and ideas. Whether looking for autosuggestions or answers to various questions or looking for notions about what is beautiful or what a professor may look like (which does not account for people who look like me who are part of the professoriate—so much for “personalization”), Google’s dominant narratives reflect the kinds of hegemonic frameworks and notions that are often resisted by women and people of color. Interrogating what advertising companies serve up as credible information must happen, rather than have a public instantly gratified with stereotypes in three-hundredths of a second or less

      As technology advances, it may become harder and harder to tell whether our ideas are really our own or have been generated and handed to us by some algorithm.

    1. Search engines respond both to users' immediate interests and to corporations' financial imperatives. The design of Google's search engine is inseparable from the priorities of its advertising business [41]. Search engines also respond to political pressures and legal regulations. China-based search engine Baidu favors results that align with the views of Chinese government authorities

      Indeed, a search engine that was completely neutral and served no one's purposes would probably not be chosen by anyone. It may be that we, collectively, have chosen these existing search engines precisely because they are not entirely objective.

    2. The concentration of power in technological infrastructures has become a matter of public concern. Such infrastructures, including search engines, seem to play a key role in the spread of false information and hate speech, including the white supremacist and Islamophobic content that has fueled such disastrous incidents as the U.S. Capitol riot and the Rohingya genocide in Myanmar. Scholars at the intersection of science and technology studies and critical race theory have paved the way for understanding the role of technology in these incidents [4, 18, 19]. Our design work is guided by their critiques, as well as by several lines of work that incorporate critical concerns into artistic and technical interventions

      When I use websites that recommend content based on my interests, I notice that the longer I use them, the more relevant, and more similar, the recommended content becomes. The more similar what you see, the harder it is to encounter different viewpoints; perhaps this makes some extreme ideas even more extreme, and that is without any intentional steering. The toy simulation below shows how such a feedback loop can narrow exposure on its own.
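      As a purely illustrative sketch (the topics, weights, and number of rounds are all made up), the feedback loop can be simulated in a few lines: the system recommends in proportion to current interest weights, each click reinforces the recommended topic, and exposure gradually narrows without any deliberate steering.

      ```python
      # Toy simulation of a recommendation feedback loop with hypothetical numbers.
      import random

      random.seed(0)
      topics = ["politics", "sports", "cooking", "science"]
      interest = {t: 1.0 for t in topics}          # start with equal interest

      for _ in range(50):
          # Recommend a topic in proportion to current interest weights.
          shown = random.choices(topics, weights=[interest[t] for t in topics])[0]
          interest[shown] += 0.5                   # clicking reinforces that interest

      total = sum(interest.values())
      for t in topics:
          print(f"{t:>8}: {interest[t] / total:.0%} of what the user now sees")
      # One topic tends to dominate after enough rounds, even though nothing
      # intentionally pushed the user toward it.
      ```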

    1. included a professor from Bouaké as a consultant to the project, but of the 150 research teams which received and analysed the data, one was based in Africa (a team from Cameroon), and only one of the other researchers said that they had visited Côte d'Ivoire and conducted interviews to understand the data's potential biases and limitations. There were thus limited opportunities for researchers to understand or be influenced by local understandings of what might constitute development or privacy, or to be sensitive to what kinds of research ethics might be appropriate for a fragile, post-conflict state

      In different cultures, people's understandings of privacy and related matters can differ greatly. If researchers ignore such differences when studying the data, behavior they regard as innocuous may end up harming the people the data describes.

    2. Having checked the rules in its contract the company then made the Ivorian Communications minister aware that they would be making the release. The firm, however, was not legally constrained to any particular course of action other than the very broad definition of permissible data reuse contained in its operating license, and the ethical constraints were thus defined by industry standards alone

      When we use mobile phones and the services on them, companies present us with pre-written standard terms that consumers have no power to change, and the right to interpret those terms usually rests with the company. The clauses themselves are likely to give companies too much power, especially where the relevant regulation is not particularly robust.

    1. The US census was created by the American Constitution to divide up taxes and congressional seats among the federated states, and for that reason it was organized around a fundamental division of Americans into three groups: free residents, counted in full; slaves, whose numbers were adjusted by a coefficient that reduced their numerical importance; and Indians, who were excluded from the count. As

      The racial divisions of that time consolidated privilege to a certain extent. Today we recognize this as unfair: racial categories should not be tied to privilege.

    2. absence of investigation of the white race occurs in the census of 1850, the first to separately identify each individual: color only needed to be noted by the name of each resident if he was not white; when the space was left blank, it meant that the person was white. This, at a time when legislators regularly expressed their concern over the fact that the appropriate instruments to identify all the members of other groups were not available. Pushing this line of reasoning further, one might almost say that the whites distinguish themselves precisely by the fact that they are not racial subjects and that the aim of the census is not to identify all residents, but only those who differ from the implicit norm

      Can this census be understood as a means of identifying people of non-white ethnicity? If so, its purpose deviates from simply counting the population and carries stronger political implications.

    1. Many persons living along the Mexican boundary, speaking the Spanish language and wearing European clothes, but largely, perhaps predominantly, of Indian blood, have probably been returned by the enumerators as whites, the word Indian being reserved by local usage for descendants of the wild hunting and nomadic Indians

      Classifying these people as Indian may not match their actual ways of life and culture. As understanding increases, classification standards become more complex and take more issues into account.

    2. The instructions given to enumerators for making this classification were to the effect that “all persons born in Mexico, who are not definitely white, Negro, Indian, Chinese or Japanese, should be returned as Mexican.” Under these instructions 1,422,533 persons were returned as Mexican in 1930, and 65,958 persons of Mexican birth or parentage were returned as white

      This number is larger than I expected, and under a new racial taxonomy each change of category can involve very large groups. I don't know whether this change was a good thing or a bad thing for the people concerned.

  2. Mar 2022
    1. ct plans. In this way, researchers can consider the risks and limitations of their LMs in a guided way while also considering fixes to current designs o

      Could the judgment of a few individuals or organizations really be wiser than the collective decisions of society as a whole? We know the model may carry risks, but should adjusting it be left to particular people or organizations? Is fairness as they understand it really fair? To whom should the decision-making power be given?

    2. guage strategically to destabilize dominant narratives and call attention to underrepresented social perspectives. Social movements produce new norms, language, and ways of communicating. This adds challenges to the deployment of LMs, as methodologies reliant on LMs run the risk of ‘value-lock’, where the LM-reliant technology reifies older, less-inclusive understandings

      Our ethics and understandings of fairness are constantly being updated, but the datasets used to train models are drawn largely from the past. The trained model may therefore diverge from current moral concepts, and since moral perceptions will keep changing, the model will fall further behind and under-represent newer views.

    1. Google’s battles with its workers, who have spoken out in recent years about the company’s handling of sexual harassment and its work with the Defense Department and federal border agencies, have diminished its reputation as a utopia for tech workers with generous salaries, perks and workplace freedom.

      I think it is unfair, and a violation of liberty, to "solve" a problem by attacking the people who raise it rather than addressing the problem itself. Since the problem exists, it will inevitably be discovered; fighting whoever points it out will hardly prevent more people from finding it.

    2. Many Google employees have bristled at the new restrictions and have argued that the company has broken from a tradition of transparency and free debate.

      I think freedom of speech is necessary, at Google and at other organizations alike. Speech that is legitimate and does not endanger the freedom and rights of others should not be cracked down on.

    1. Digital surveillance is no longer the exclusive purview of traditional agents of surveillance, such as governments. On the contrary, digital technologies make surveillance, or “big data analysis” as it is often euphemistically termed, an activity available to almost any actor that can pay.

      As technology evolves, "surveillance capitalism" will become accessible to more and more actors. If minority and Indigenous languages have very complete corpora, it becomes easy for whoever holds those corpora to practice surveillance capitalism on the communities and countries that speak those languages.

    2. Many advocates and designers of such digital tools for under-resourced languages are motivated by the hopes of keeping their language and language community vibrant in the face of linguists’ predictions that 50-90% of languages face extinction this century (Harrison, 2007; Kraus, 1992). If a language can achieve a digital foothold, the hope is that young “digital natives” will not forego their mother tongue under the impression that other more dominant languages are cooler, more modern, and more convenient for the digital sphere and wider life (Rehm, 2014).

      It is indeed more convenient for speakers of minority and Indigenous languages to be exposed to dominant world languages such as English from an early age. Even though Chinese is spoken by a huge number of people, most of us still had to learn English from elementary school, and for speakers of minority and Indigenous languages this pressure is probably even stronger. The disappearance of a minority or Indigenous language means the loss not only of the language itself but also of the distinctive expressions and cultural content carried in that language.

    1. “While we recognise the value of open-source, we also realise the majority of [our] people don’t have the resources to take advantage of it,” Jones says.

      If this data is readily available to others with no restrictions on how it can be used, some organizations may use it to monitor speakers of the language or to act against their interests.

    2. Well into the 20th century, Māori children were often punished with shame or physical beatings when they spoke their native language in schools. As a result, when that generation reached adulthood, many chose not to pass on the language to their own children to protect them from the same types of persecution.

      When society as a whole offers fewer and fewer opportunities to speak the local language, fewer and fewer children will learn it, which may eventually lead to the disappearance of the language and of the expressions and cultural content built upon it.

    1. They also can replace the practice of posting bail in the US, which requires defendants to pay a sum of money for their release. Bail discriminates against poor Americans and disproportionately affects black defendants, who are overrepresented in the criminal legal system.

      I think cash bail is genuinely unfair to poor people. Shouldn't the decision to release someone before trial be based on factors such as the risk of harm to society, rather than on whether bail can be afforded? A very wealthy person might be quite likely to harm society once released, yet simply because he can pay bail, should he be considered more worthy of release than someone who poses less risk to society but cannot afford it?

    2. This prediction is known as the defendant’s “risk score,” and it’s meant as a recommendation: “high risk” defendants should be jailed to prevent them from causing potential harm to society; “low risk” defendants should be released before their trial.

      No matter how good a model is, it has some error rate. The error rate may look small, but it exists; a model can only reduce it, never eliminate it entirely. Over a large enough population, some people will inevitably be misjudged, and since no current model is anywhere near perfect, the number of people affected can be substantial, which itself produces unfairness. The back-of-the-envelope calculation below shows how quickly a small error rate turns into a large absolute number of people.
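      As a back-of-the-envelope illustration (both numbers below are assumptions for the sake of the example, not figures from the article), even a small error rate becomes a large absolute number of misjudged people once the population is large.

      ```python
      # Hypothetical figures: a 1% error rate applied to 10 million scored people.
      error_rate = 0.01                 # assumed misclassification rate
      people_scored = 10_000_000        # assumed number of people scored per year

      misjudged = error_rate * people_scored
      print(f"{misjudged:,.0f} people misjudged per year at a {error_rate:.0%} error rate")
      # -> 100,000 people misjudged per year at a 1% error rate
      ```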

    1. At the same time, said employer would continue to enjoy a public perception of fair hiring inspired by its use of a nonhuman hiring system

      It is easy to assume that an algorithm with no human involvement would be fairer. But even if no one has adjusted the training data, that data consists of records such as previous employees and past hiring decisions; if those past decisions were themselves unfair, it is very hard to train a fair model from them. The sketch below illustrates how historical bias can pass straight through into a model.
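      The following is a minimal sketch with synthetic data (the features, thresholds, and groups are all invented for illustration) of how a model trained on biased historical hiring decisions reproduces that bias, even though no one edited the data by hand.

      ```python
      # Synthetic example: a model trained on biased past hiring decisions
      # learns to penalize one group at equal skill levels.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 10_000
      skill = rng.normal(size=n)               # applicant skill (standardized)
      group = rng.integers(0, 2, size=n)       # 0 = majority, 1 = marginalized group

      # Hypothetical history: group 1 was held to a stricter hiring bar.
      hired = np.where(group == 0, skill > 0.0, skill > 1.0).astype(int)

      model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

      # Predicted hiring probability for two equally skilled applicants.
      probs = model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1]
      print(f"P(hire | skill=0.5, group 0) = {probs[0]:.2f}")
      print(f"P(hire | skill=0.5, group 1) = {probs[1]:.2f}")
      # The model reproduces the historical double standard.
      ```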

    2. Yet, many organizations are embracing “black box” automated hiring without fully understanding their limitations or even critically evaluating how they work. Without due care, the automated hiring system may become the worst type of broker, a “tertius bifrons,” which seeks to indefinitely and authoritatively maintain itself as intermediary between employer and employee while being the biggest benefactor of the benefits of that position (Ajunwa, 2020)

      Maybe some companies simply don't care about these issues; they only think about automation and cost savings. Large enterprises in particular receive enormous numbers of résumés, and screening them manually takes a great deal of labor. They may even be aware that the algorithm has problems and filters out some candidates who actually meet their requirements, but the cost savings matter more to them than those problems.

  3. Feb 2022
    1. What happens when this kind of cultural coding gets embedded into the technical coding of software programs? In a now classic study, computer scientist Latanya Sweeney examined how online search results associated Black names with arrest records at a much higher rate than White names, a phenomenon that she first noticed when Google-searching her own name; and results suggested she had a criminal record. The lesson?

      The design of many algorithms may initially prioritize convenience and profit over fairness, and the resulting unfairness is sometimes unintentional: the data collected simply reflects the bias of society as a whole rather than a bias deliberately built into the algorithm. But how to eliminate bias from search results through human intervention will be a very difficult question.

    2. Math Destruction. It is powered by haphazard data gathering and spurious correlations, reinforced by institutional inequities, and polluted by confirmation bias.”9 Racial codes are born from the goal of, and facilitate, social control. For instance, in a recent audit of California’s gang database, not only do Blacks and Latinxs constitute 87 percent of those listed, but many of the names turned out to be babies under the age of 1, some of whom were supposedly “self-described gang members.” So far, no one ventures to explain how this could have happened, except by saying that some combination of zip codes and racially coded names constitute a risk.10 Once someone is added to the database, whether they know they are listed or not, they undergo even

      Such biased perceptions may also affect the people being discriminated against. When the message they receive from childhood constantly tells them what their race is and how people like them are supposed to behave, their own behavior may be shaped by it as well.

    1. The panel was supposed to add outside perspectives to ongoing AI ethics work by Google engineers, all of which will continue. Hopefully, the cancellation of the board doesn’t represent a retreat from Google’s AI ethics work, but a chance to consider how to more constructively engage outside stakeholders.

      The cancellation of this board should not mean the end of the effort; it should prompt its improvement. If technology leaders like Google are forced to give up their emphasis on AI ethics, AI technology is likely to develop in a disorderly and chaotic way.

    2. Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change.

      How the members of such a board are selected may itself be a problem. It is difficult to fully investigate candidates' backgrounds and views before they join, and even with full knowledge it is inevitable that some appointments will not serve the board's purpose. For a company, the pool of possible candidates is also quite limited, so some problems are unavoidable, and the right degree of diversity among these members is hard to judge.

    1. It is essential that these frameworks do not, through a predominantly Western view, ironically reproduce the core problem of algorithmic decision making systems and ignore the adequate inclusion of marginalized communities in their design and application.

      Most current technology, and the power to shape the discourse around it, is in the hands of the mainstream. Marginalized groups may face discrimination in every respect, including having no voice in how the rules are made, while those with vested interests are not necessarily aware of the dominant position they occupy. AI created with this mindset may aggravate discrimination and push currently marginalized groups further to the margins.

    2. The pressing remedies must take into account the reality of the interconnectedness of society, and the increased intertwinement afforded by artificial intelligence, such as the models that aggregate individual behaviors and generalize them to unknown and future data subjects.

      Society is always progressing and our thinking is always changing; what is right now may not be right in the future. AI built to encode today's thinking may therefore not remain applicable later. It should grow along with us and conform to future norms of thought and conduct.

    3. The most popular introductory undergraduate computer science book on artificial intelligence defines a “rational agent [as] one that does the right thing.”19 However, the reproduction of power asymmetries through automated decision-making systems shows that the rationality of computers, or of humans programming the machines, does not always result in the right thing and is limited without proper context (relationality). ADMS are being used to perpetuate racism and gender stereotypes in part because computers cannot understand or take into account social contexts, in particular the racial attitudes and gender norms that exist. This is not a problem of not having enough data, it is simply that data does not interpret itself. It does not tell us how to respond or act in a moral dilemma or how to avoid moral dilemmas.

      If we want computers to do the right thing, we first have to know what the right thing is, and that question may not have a single, universally correct answer. We have a great deal of data from which to extract options, but they are not necessarily the right options. Our society has not yet achieved equality, and people's moral baselines differ, so the "right things" a computer can summarize from this data may not be so right after all.

  4. data-ethics.jonreeve.com
    1. Do numbers speak for themselves? We believe the answer is ‘no’. Significantly, Anderson’s sweeping dismissal of all other theories and disciplines is a tell: it reveals an arrogant undercurrent in many Big Data debates where other forms of analysis are too easily sidelined. Other methods for ascertaining why people do things, write things, or make things are lost in the sheer volume of numbers. This is not a space that has been welcoming to older forms of intellectual craft. As Berry (2011, p. 8) writes, Big Data provides ‘destablising amounts of knowledge and information that lack the regulating force of philosophy’. Instead of philosophy – which Kant saw as the rational basis for all institutions – ‘computationality might then be understood as an ontotheology, creating a new ontological “epoch” as a new historical constellation of intelligibility’ (Berry 2011, p. 12)

      Big data can provide a great deal of information, and analyzing it will always produce some result. But does more data necessarily give us the right result? I don't think so. Excessively large datasets not only increase the computational burden and the difficulty of analysis, they also bring repetitive, convoluted, and useless information, which can lead us away from correct conclusions or toward results that depend too heavily on one particular context. To reach general conclusions we cannot rely on the numbers alone; we also need prior knowledge or more effective methods of data processing.

    2. All researchers are interpreters of data. As Gitelman (2011) observes, data need to be imagined as data in the first instance, and this process of the imagination of data entails an interpretative base: ‘every discipline and disciplinary institution has its own norms and standards for the imagination of data’. As computational scientists have started engaging in acts of social science, there is a tendency to claim their work as the business of facts and not interpretation. A model may be mathematically sound, an experiment may seem valid, but as soon as a researcher seeks to understand what it means, the process of interpretation has begun. This is not to say that all interpretations are created equal, but rather that not all numbers are neutral. The design decisions that determine what will be measured also stem from interpretation. For example, in the case of social media data, there is a ‘data cleaning’ process: making decisions about what attributes and variables will be counted, and which will be ignored. This process is inherently subjective. As Bollier explains

      When processing and analyzing data, researchers often bring presuppositions about the results, as well as fixed conventions for how the analysis should be structured and presented. Even though the numbers themselves are neutral, the results of the researcher's processing and analysis are closely tied to the researcher's assumptions, and the final interpretation of the results is often subjective. The small sketch below shows how one routine "data cleaning" choice can change what the numbers appear to say.
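      As a small, hypothetical illustration (the toy survey data below is entirely invented), even a routine cleaning decision such as how to handle missing values changes what the numbers appear to say.

      ```python
      # Two 'data cleaning' choices for the same toy dataset give different answers.
      import numpy as np
      import pandas as pd

      # Toy survey: income is missing more often for group B.
      df = pd.DataFrame({
          "group":  ["A"] * 6 + ["B"] * 6,
          "income": [30, 32, 35, 31, 33, 34, 60, 62, np.nan, np.nan, np.nan, 61],
      })

      # Choice 1: drop rows with missing income.
      dropped = df.dropna(subset=["income"])
      # Choice 2: fill missing income with the overall median.
      filled = df.fillna({"income": df["income"].median()})

      print(dropped.groupby("group")["income"].mean())   # group B looks well-off
      print(filled.groupby("group")["income"].mean())    # group B looks much less so
      # Neither choice is 'the' neutral one; each encodes an assumption about
      # what the missing values would have been.
      ```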