8 Matching Annotations
    1. In this respect, the deployment of AI technologies certainly implies the emergence of new professions, which must be properly understood. For example, new technical professions such as health data analysts, experts in knowledge translation, quality engineers in ehealth, and telemedicine coordinators, as well as professionals in social and human sciences such as ethicists of algorithms and robots are to be imagined [141, 142]. The construction of the organization’s ethical culture will depend in particular on its ability to identify areas of ethical risk, deploy its ethical values, and engage all its members in its mission [143].

      While AI is disrupting the workplace, it is also creating new opportunities for employment. Whenever AI introduces a problem, someone is needed to solve it. However, it's easy to assume these new jobs could be done by anyone. In reality, AI already has access to a huge amount of information, while humans still need to develop the skills these new roles require. Therefore, society might struggle for a time to adapt.

    2. However, if these kinds of tasks become more widespread, might AI endanger jobs or even replace health professionals, as is often feared in technological transitions [130]?

      While AI is definitely making work harder, I doubt it could replace healthcare professionals. Who would feel safe going to a machine that has no degree, just a mixture of true knowledge and made-up ideas from the internet? People would much rather go to someone who can understand them and who truly knows what they're doing.

    3. Healthcare systems, professionals, and administrators will all be impacted by the implementation of AI systems. The first impact consists in the transformation of tasks. The integration of AI is transforming professional tasks, creating new forms of work [131], and forcing a readjustment of jobs (e.g., changing roles and tasks, modifying professional identities, evolving professional accountability). For the WHO, readjusting to workplace disruption appears to be a necessary consequence of the ethical principle of “sustainability” identified by the committee of experts on the deployment of AI. In particular, governments and companies should consider “potential job losses due to the use of automated systems for routine healthcare functions and administrative tasks” [27]. Image recognition, for example, makes radiology one of the most advanced specialties in AI system integration [132]. AI is now able to “automate part of conventional radiology” [133], reducing the diagnostic tasks usually assigned to the radiologist. The authors of the French strategy report believe that this profession could then “evolve towards increased specialization in interventional radiology for diagnostic purposes (punctures, biopsies, etc.) for complex cases or therapeutic purposes guided by medical imaging” [133]. The practice of electrocardiography in cardiology [133] and the routine, laborious tasks of dentists [134] are already undergoing upheaval. The field of general medicine is also being impacted by applications available to the public, such as “medical assistant” chatbots that can analyze users’ symptoms and direct them to a specialist or pharmacist. In the case of minor ailments, such technologies de facto diminish the role of the general practitioner.

      AI is disrupting healthcare work and reshaping what the career looks like. The WHO says this readjustment is part of keeping healthcare “sustainable,” but it also warns that it could lead to job losses. Radiology is one of the most affected specialties because it is less hands-on than most healthcare careers, so much of its image-based diagnostic work can be automated.

    4. In the medical context, increasing importance is placed on patients’ co-participation in their care [54] and their ability to refuse care or request additional medical advice

      How could relying on AI change the way doctors and patients make decisions together?

    5. Considering the intimacy and sensitivity of health data and the many actors potentially involved, AI highlights the question of individual privacy.

      If AI makes a mistake, who should be held accountable? Should the hospital take the blame?