56 Matching Annotations
  1. Dec 2023
    1. It seems to lack in-depth solutions: many of the proposed solutions are general, and although they give an idea of how to approach each problem, they still require further research

    2. Issue 7: Beneficence * Act in the best interest of others, promoting welfare and well-being in healthcare * Possibility of deception

      Strategies:

      Improve communication by exhibiting caring behavior and informing patients about their best interests. Encourage AI developers to design for friendliness.

      Issue 8: Responsibility * Responsibility for decisions made by AI systems in healthcare * Unclear accountability for AI-related patient harm

      Strategies: * Define clear guidelines for ethical and legal decision-making based on AI outputs. * Require both doctors and AI developers to follow the "do no harm" standard. * Involve AI developers and engineers in moral accountability assessments.

      Issue 9: Solidarity * Concerns about justice and equality in healthcare due to AI * Inequality in the distribution of resources and potential discrimination

      Strategies: * Establish a solidarity-based model for applying AI solutions in society. * Consider interpersonal justice in the design of care bots to decrease inequality. * Foster communication, trust, and empathy in patient–doctor relationships.

      Issue 10: Sustainability * Sustainable development, deployment, and implementation of AI in healthcare * Conflicting goals, unequal contexts, risk and uncertainty, opportunity cost

      Strategies: * Support the establishment of trustworthy global AI through shared rules and cooperation. * Develop sustainable ML decision-making tools within occupational healthcare. * Implement a systematic approach to establish digital care.

      Issue 11: Dignity * respect for human rights and freedoms * Concerns about human dignity through AI models

      Strategies: * Design and operate AI systems considerate of human dignity. * Focus on dignity and privacy to respect human rights.

      Issue 12: Conflicts * AI implementation results in conflicting goals and decision-making disparities * Conflicts among patients, medical staff, and AI models

      Strategies: * Share decision-making with patients to ensure autonomy. * Seek human perspectives through surveys.

    3. Issue 6: Trust * trust between humans and AI systems in healthcare * Factors affecting trust include data usage, data-driven technology, data confidentiality, and bias in AI systems

      Strategies: * Inform patients * Improve data privacy and confidentiality to maintain patient trust. * Educate healthcare personnel on AI to establish trust in AI healthcare providers.

    4. Issue 5: Patient Safety and Cyber Security

      Sub-issue 1: Patient Safety * Unnecessary or potential harm caused by AI tools * Difficulty in assigning responsibility for harm caused by AI

      Sub-issue 2: Cyber Security * Data security and the possibility of hacking

      Strategies: * Develop AI with input from clinicians and computer scientists. * Review AI tools through legally selected regulatory committees. * Continuously update regulations, codes of conduct, and standards. * Anticipate problems and take proper action for cyber security.

    5. Issue 4: Transparency * Lack of transparency due to the black-box nature of ML * Lack of explainability means the credibility and reasoning of AI are in question

    6. Strategies for Privacy * Create strict rules and codes for data access and security * Secure data transfer and storage * Optimize patient consent * Incorporate legal rules and healthcare practices

    7. Strategies for Addressing Freedom and Autonomy * Ensure humans are in control of decisions * Allow clinicians control of the tech * Universal code for patient-clinician relationship * Make AI Info comprehensible to patients

    8. Ethical Issue 2: Freedom and Autonomy * 22 of 45 sources * Sub-issues of control, respect, and informed consent * Control (12 sources): The inability to control AI decisions is a concern * Respecting Human Autonomy (9 sources): AI lacks the ability to respect a patient's choice and may override it instead * Informed Consent (9 sources): AI must be able to obtain consent while respecting privacy and sensitivity

    9. Strategies for Addressing Ethical Issues of Justice and * Understand the difference between training and input data * Evaluate ecological validity of algorithms * Validate algorithms for different groups (ethnic, socio-economic)

    10. Included sources: * English language * Published between January 1, 2010 and September 6, 2020 * Sources with 10+ citations * Specific to healthcare * Addresses ethical issues
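      Taken together, these inclusion criteria amount to a simple screening filter: a source is kept only if every criterion holds. A minimal sketch of that screening step, with hypothetical field names (`language`, `published`, `citations`, and so on) standing in for whatever metadata a reviewer actually records:

      ```python
      from datetime import date

      # Each inclusion criterion from the review becomes a predicate;
      # a source is included only if all predicates hold.
      CRITERIA = [
          lambda s: s["language"] == "English",
          lambda s: date(2010, 1, 1) <= s["published"] <= date(2020, 9, 6),
          lambda s: s["citations"] >= 10,
          lambda s: s["healthcare_specific"],
          lambda s: s["addresses_ethics"],
      ]

      def include(source):
          """Return True only if the source passes every criterion."""
          return all(check(source) for check in CRITERIA)

      # Illustrative source record (hypothetical values).
      paper = {
          "language": "English",
          "published": date(2016, 5, 1),
          "citations": 42,
          "healthcare_specific": True,
          "addresses_ethics": True,
      }
      # include(paper) → True; dropping any criterion's condition excludes it.
      ```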

    11. Criteria to Assess Sources: 1. Does the adopted research method address the research questions? 2. Does the study have a clear research objective? 3. Does the study have a specific description of each ethical issue? 4. Does the study have a specific description of strategies related to the ethical issue? 5. Do the results of the study add value to the area of research?

    12. Academic publications discussing the ethical issues concerning AI in healthcare do exist, such as “The ethics of AI in health care: A mapping review” by Morley’s research group [41], “Ethical and legal challenges of artificial intelligence-driven healthcare” by Gerke’s group [42], and “A governance model for the application of AI in healthcare” by Reddy’s group [43]. Morley’s group focused on mapping the ethical issues based on epistemic, normative, and overarching perspectives [41]. Gerke’s group explored ethical issues from the perspective of legal challenges, but did not present a systematic review of how AI can influence them in healthcare applications [42]. Reddy’s group addressed the introduction and implementation of a proposed governance model in healthcare. However, their specification of the ethical issues only focused on the general governance model for ethical issues related to the essential elements of safety and the responsible use of AI.

    13. “The global landscape of AI ethics guidelines” by Jobin’s group, which presents an overview of existing ethical guidelines and strategies [38]; “The Ethics of AI Ethics: An Evaluation of Guidelines” by Hagendorff, which analyzes 22 ethical guidelines for AI and provides recommendations for overcoming their relative ineffectiveness [39]; and “Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications” by Ryan and Carsten Stahl, which provides an elaborate explanation of 11 normative implications of current AI ethical guidelines directed to AI developers and organizational users [40]. Although these three documents present very useful discussions of ethical AI issues in a general domain, none of them specifically addresses the ethics of AI in healthcare.

    14. Examples of rules and regulations are the “Ethics Guidelines for Trustworthy AI” from the European Commission [33], “Report on the Future of Artificial Intelligence” from the US [34], and the “Beijing AI Principles” from the Chinese government [35].

    15. AI with ML algorithms that use DL and other techniques leads to black-box models. * Black-box models arrive at conclusions without providing an explanation * The black-box nature of ML typically clashes with laws * Can lead to consequences for stakeholders if a bad decision is made

    16. Ethical decision-making by AI evaluates social, ethical, and legal requirements; however, acceptance and development of AI depend on AI complying with the law, regulations, and privacy principles.

    17. Machine Learning (ML) now allows AI to solve problems without specific programming. Deep Learning (DL), a subset of ML, can now be used to solve unstructured-data problems in a way similar to the human brain.

    18. Source follows PRISMA guidelines: * PRISMA stands for Preferred Reporting Items for Systematic Reviews and Meta-Analyses * A 27-item checklist used to improve transparency in reviews

    Annotators

    1. Medical AI offers great promise for improving care, decreasing expenditures, and reaching underserved populations. However, the growing field of medical AI is extensive and applications are far-ranging; some cutting-edge applications never reach clinical application due to regulatory concerns, while others move from bench to bedside quickly and with unresolved or unanticipated ethical or social concerns with their use. While various suggestions have emerged from ethicists as well as practitioners at technology firms and research labs, as of yet there is no cohesive approach to address these concerns in order to more fully capitalize on the potential of the field.

      In order for medical AI applications to meet their goals, there must be a more systematic process for addressing and anticipating ethical concerns as they arise, before products are in clinical trials or in clinical use. While we believe the embedded ethics approach could most easily be implemented in academic institutions and as part of public–private development, it is suited to many different settings, including industry development of medical AI technology and applications. Doing so will help to enable medical AI to realize its potential to transform medicine for the better, in an equitable and safe fashion.

      The development of the embedded ethics approach is one step amongst many that will be necessary to tackle the critical ethical, social and political issues that are emerging with the burgeoning application of medical AI. Importantly, the embedded ethics approach can be combined with other specific methodologies such as ethical forecast analysis, as well as with existing proposals in universities for training more AI developers and engineers. Concrete laws and regulations can provide important governance for tech companies and research labs, and ‘softer’ approaches such as AI ethics ‘pledges’ can harness community-level commitments to develop AI only for pro-social intentions [63].

      The advantage of embedded ethics, while working in conjunction with these various initiatives, is the establishment of a more systematic, integrated, and iterative approach to ethics in AI healthcare innovation. All of these approaches will be necessary as AI becomes an increasingly commonplace element of our daily lives and health. However, one of the clear benefits of embedded ethics in relation to existing calls is that it is more systematic, has a broader scope of application, and could begin immediately. Highly fluid, embedded ethics can work in a variety of settings, and can be adapted further in light of the specific needs of a development team, product, or process.

      Nonetheless, several unresolved issues remain with this proposal. First, even within publicly funded research settings, AI development primarily happens in a highly competitive environment which values efficiency and speed and, in more commercial settings, also profit. Ethical considerations might be ignored when they conflict directly with commercial incentives, and no doubt, ethicists and developers are bound to disagree on numerous substantive issues—consider the tension between transparency and intellectual property. As Metcalf and colleagues have noted, the process of taking ethical considerations seriously is often in tension with industry agendas, and runs the risk of being absorbed into broader corporate commitments to meritocracy, technological solutionism, and market fundamentalism [12]. Ethicists will sometimes work in contexts with extreme power differentials, particularly where corporate or financial interests are involved, as seen in the recent case of Timnit Gebru’s departure from Google. At times, it is likely that some form of enforcement measures will prove necessary, whether through hard regulation, certification, or voluntary measures, in order to counter any tendency for embedded ethics to become merely a form of “ethics washing” or ethical lip service to industry [50]. There are examples from other industries of “ethics seals”, certificates, and compliance programs that could potentially be borrowed and applied in embedded ethics. In our view, it is essential that ethics not serve as a new form of ‘industry self-regulation,’ but rather as an integral part of technological development for healthcare [64].

      Secondly, it remains undetermined how embedded ethicists would be paid for their work. We can imagine the possibility of initial public funding to pilot programs within academic research. In order for embedded ethics to be deployed in commercial medical AI development, however, it is possible that there may be industry push-back to funding such programs in the beginning. However, the hiring of ethicists by major tech companies already indicates that company buy-in may not prove to be a significant hurdle [12]. Given the many existing ethics ‘scandals’ that have emerged in relation to the use of AI technologies, it is likely that there is also a strong financial incentive to prevent the development of poorly informed technologies that have the real possibility of causing harm. Thus, there could be paths for our proposal also to be adopted successfully in industry settings, once the value that embedded ethics brings to the development process and to the bottom line has been established.

      Third, there is a clear need for more training for both ethicists and developers and engineers in order to facilitate the kinds of exchange that will be necessary for embedded ethics to work. While existing proposals at leading universities are being developed, it is likely that other models for this training—in particular for professionals already working in the field—will prove necessary. Additionally, training, particularly in interdisciplinary and multi-cultural settings, could help to raise awareness of biases on behalf of both the ethicists and developers involved. By fostering awareness of biases and an environment where diverse perspectives can be openly discussed, we envision the embedded ethics approach working to combat any potentially harmful influences of individual biases concerning the technology in development.

      Finally, in order for embedded ethics to succeed, it is necessary to develop clear standards of practice. An established methodological process will help to establish embedded ethics as a distinct community of practice with referenceable standards, case studies, and theoretical infrastructure. This will prove beneficial for all those involved in medical AI, including individuals involved in the creation of training programs, those already working in the medical AI field, ethicists trained in other areas looking to transition to medical AI, as well as other researchers, ethicists and concerned members of the public engaged with the social, ethical and political issues surrounding the use of AI in healthcare.

      Challenges: current regulation of AI development for medicine presents gaps - lack of specific regulations - less rigorous testing for medical AI technologies - ethical considerations delayed until clinical trials.

      • It emphasizes the importance of timely assessments and notes that existing regulations, like the EU's General Data Protection Regulation and the Artificial Intelligence Act, may not fully address these issues.

      • The solution proposed is the embedded ethics approach, seen as adept at filling regulatory gaps by integrating ethical considerations early in AI development.

      • The approach aligns with interdisciplinary education trends, capitalizing on corporate willingness to engage in ethical practices and offering a swift response to the growing need for ethical considerations, particularly in medical AI development.

  2. Oct 2023
    1. Plans, as was mentioned earlier, include criteria to determine successful goal-attainment and, as well, include "feedback" processes: ways to incorporate and use information gained from "tests" of potential solutions against desired goals.

    2. Rules are usually "smaller," more discrete cognitive capabilities; plans can become quite large and complex, composed of a series of ordered algorithms, heuristics, and further planning "sub-routines."

      • A plan is a process that controls the order of operations
      • The fundamental plan in humans = TOTE
      • T (test): matches possible solutions against the goal
      • O (operate): proceeds if a solution is sensible
      • T (post-operation test): compares the solution with the goal
      • E (exit): exits once the goal is reached
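      The TOTE unit described above is essentially a feedback loop, so it can be sketched directly in code. A minimal illustration, assuming generic `goal_test` and `operate` functions (both hypothetical stand-ins for whatever the plan actually tests and does):

      ```python
      def tote(state, goal_test, operate, max_steps=100):
          """Test-Operate-Test-Exit: repeat an operation until the
          current state passes the goal test."""
          for _ in range(max_steps):
              if goal_test(state):      # T: test the state against the goal
                  return state          # E: exit once the goal is reached
              state = operate(state)    # O: operate to move toward the goal
              # the next loop iteration is the post-operation test (second T)
          return state                  # give up after max_steps operations

      # Example: a trivial plan whose goal is a counter of at least 5.
      result = tote(0, lambda s: s >= 5, lambda s: s + 1)
      # result is 5
      ```

      The same loop also captures the "sub-routine" idea: `operate` can itself be another TOTE unit, which is how plans grow into ordered hierarchies of smaller plans.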
    3. Rules are: - an inferred capability that allows someone to respond to a stimulus or class of stimuli with a performance - can be learned directly or inferred from experiences - the primary factor in organizing intellectual functioning

    4. In the experiment, writers were videotaped and asked about their rules, plans, and beliefs during their writing to examine the composing process without interfering

      • Experiment involved 1-3 interviews with each student.
      • The experiment is more clinical than scientific. It led to the following revelations:
      • composing is a complex problem-solving process
      • disruptions of the process can be explained by the cognitive psychology framework
    5. The difference observed between blockers and non-blockers was the rules they set for themselves. - Blockers had rigid, impeding strategies - Non-blockers used less strict rules - Non-blockers were more functional, flexible, and open to outside info

    1. The American welfare state is a leaky bucket - Welfare used to use all its funds to provide single-parent families with cash assistance - When Clinton reformed welfare in 1996, he replaced it with TANF, a block grant giving states leeway on how to use the money - States don't spend the money properly and instead use it for other means - Arizona spends welfare funds on sex ed - Pennsylvania used TANF funds for anti-abortion crisis pregnancy centers - Maine used the money for a Christian summer camp - For every dollar toward TANF in 2020, poor families received 22 cents

    1. Matsuda's "Myth of Linguistic Homogeneity" says: - writers or writing teachers assume that all writers share the same knowledge of language structures and functions - sociolinguistic context affects writing style, etc.