46 Matching Annotations
  1. Apr 2024
    1. an empathic chatbot is not an alternative of a mental health professional or a therapist

      hmm, I wish there were a more robust conclusion. I guess this could be one side of the lit: they give many positives but ultimately say that the current limitations mean it cannot replace a human. I'd like to find a paper that argues otherwise? I suspect most will have this take

    2. Surveys conducted on Woebot users by Stanford University indicate a significant improvement with feelings of depression and anxiety [20]

      something to look into

    3. The therapy aims to turn the patient's negative thoughts into positive ones

      how does this differ from human therapy? I feel like it's very different

    4. Lexical speech features: These features are bound with the vocabulary used by the mental health patient. Lexical features need the text extraction from the speech to predict the patient's emotions so it can be used on the recorded audio files.
       • Acoustic speech features: These features are bound with the pitch, jitter and tone of the mental health patient. Acoustic features need the audio data for understanding the emotions in the patient's conversation so it can be used for voice calls with the patient. The acoustic model will be trained to extract the spectral features from speech signals

      using both word choice and variance in the voice itself (rough sketch of the two feature types below)
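
      A minimal sketch of the two feature families, assuming librosa for the acoustic side and NLTK's VADER for the lexical side (both tool choices are mine, not the paper's; VADER needs a one-time nltk.download("vader_lexicon")):

      ```python
      import librosa
      import numpy as np
      from nltk.sentiment import SentimentIntensityAnalyzer

      def lexical_features(transcript: str) -> dict:
          """Word-choice features: sentiment scores over the transcribed speech."""
          return SentimentIntensityAnalyzer().polarity_scores(transcript)

      def acoustic_features(wav_path: str) -> dict:
          """Voice features: rough pitch statistics and a spectral descriptor."""
          y, sr = librosa.load(wav_path, sr=16000)
          f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)  # frame-level pitch track
          centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
          return {
              "pitch_mean": float(np.nanmean(f0)),
              "pitch_var": float(np.nanvar(f0)),  # crude stand-in for jitter
              "spectral_centroid_mean": float(centroid.mean()),
          }
      ```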

    5. [T]he data is gathered from various physical features such as body movements, facial expressions, eye contact and other physical, biological signals

      a lot of this can be used for context?

    6. Chatbots can identify emergencies such as suicidal thoughts and escalate or redirect them to appropriate professionals

      how? (toy sketch of one possible mechanism below)
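
      The paper doesn't specify a mechanism, so this is purely a hypothetical sketch: a phrase-list trigger plus an escalation hook. The phrase list and both helpers are my assumptions; a real system would use a trained classifier and clinical protocols.

      ```python
      # Hypothetical crisis-escalation sketch; not the paper's method.
      CRISIS_PHRASES = {"want to die", "kill myself", "end it all", "no reason to live"}

      def is_crisis(message: str) -> bool:
          text = message.lower()
          return any(phrase in text for phrase in CRISIS_PHRASES)

      def escalate_to_professional(message: str) -> None:
          # Stub: a real system would page an on-call clinician or hotline.
          print("ESCALATION:", message)

      def handle_message(message: str) -> str:
          if is_crisis(message):
              escalate_to_professional(message)
              return "I'm connecting you with a human counselor right now."
          return "…normal chatbot reply…"  # placeholder for the usual dialogue path
      ```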

    7. emojis/emoticons

      how accurate is the use of emoticons in application though?

    8. Sentiment Analysis

      similar to the other paper I read about how to build an emotional AI

    9. Natural Language Processing (NLP). NLP depicts how chatbots translate and understand the patient's language. Using NLP, chatbots can make sense of the spoken or written text and accomplish the tasks like keyword extraction, translation, and topic classification. NLP processes the content expressed in natural human language with the help of the techniques such as sentiment analysis, facial recognition and voice recognition [10].

      might be useful info for context/key terminology (quick sketch of two of these tasks below)
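
      A quick sketch of two of the listed tasks, keyword extraction and topic classification. The naive word-count approach, the stopword list, and the zero-shot labels are illustrative choices of mine, not anything from the paper:

      ```python
      from collections import Counter
      from transformers import pipeline

      text = "I haven't been sleeping well and work stress makes everything feel worse."

      # Keyword extraction (naive): most frequent non-trivial words.
      STOPWORDS = {"i", "and", "the", "a", "well", "makes", "been", "feel"}
      words = [w.strip(".,!?").lower() for w in text.split()]
      keywords = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
      print(keywords.most_common(3))

      # Topic classification via zero-shot classification.
      classifier = pipeline("zero-shot-classification")
      print(classifier(text, candidate_labels=["sleep", "work stress", "mood", "relationships"]))
      ```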

    10. 1. Understand and manage the patient's psychological state and connect them with a health professional during unfavourable events.
        2. 24/7 Instant chat support.
        3. Smart with reactive behaviour such as prompt answering of a question and engage the patients with illness prevention and care tips.
        4. Easy to install, configure and maintain and is compatible with various operating systems such as Android, iOS and Linux.
        5. For sensitive health care issues, patients might feel less shame and feel more private.
        6. Security of personal data is enhanced using different authentication techniques such as login using facial recognition, biometrics or with a passcode.
        7. Cost-effective for a few mental conditions, such as stress release.
        8. Provide reminders such as taking medication, do exercise, slots for jogging.

      solid list of advantages: maybe find a source with the disadvantages to understand the discourse/each side better

    11. but three therapeutic mental health chatbots [Woebot, Wysa and Tess

      look into these?

    12. The main advantage of these bots is to provide a practical, evidence-based, and an attractive digital solution to help fill the gap of the professional instantly [10].

      explaining the main use

    13. [T]he mental health professionals have adopted the use of technology specifically Artificial Intelligence-based chatbots to address the needs of individuals affected by mental health problems as the first line of defence

      emphasize "first line of defence": a helping tool, not a replacement tool?

    14. social stigma and hesitation [8]

      okay ... maybe this is why an AI would be easier to access than a human

    15. 0 per 100,000 mental health professionals are available in high-income nations and 2 per 100,000 in low-income nations [5]

      describing a need

    16. According to the World Health Organization (WHO), 1 in 10 people need mental healthcare worldwide, and different mental disorders are portrayed by a combination of perceptions, feelings, and relationships with others [1].

      introducing the topic with mental health information rather than emotional AI info

    1. On the other hand, EAI can help health practitioners to increase the emotional understanding of their patients, thus being able to deliver diagnoses and treatment faster and with more accuracy.

      interested to look more into this; mentioned briefly by both this source and my first peer-reviewed source

    2. speech to text, where the audio signal is converted into words, and text sentiment analysis, where text is provided with emotional meaning.

      skeleton for detecting emotion in voice input (rough sketch below)
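
      A minimal sketch of this two-stage skeleton using Hugging Face pipelines; the Whisper model choice and the sample filename are my assumptions, not the paper's architecture:

      ```python
      from transformers import pipeline

      asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
      sentiment = pipeline("sentiment-analysis")

      def emotion_from_audio(wav_path: str) -> dict:
          transcript = asr(wav_path)["text"]  # stage 1: audio signal -> words
          return sentiment(transcript)[0]     # stage 2: words -> emotional meaning

      # e.g. emotion_from_audio("patient_message.wav")
      # -> {"label": "NEGATIVE", "score": 0.97}
      ```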

    3. One of the most important challenges is to integrate the technology into the health provision process.

      this is the next topic I want to look into; this paper itself is very technical and would do well in the background/context portion of the proposal/paper.

    4. Figure 2: Emotional recognition system overview

      proposed integration of emotional recognition for AI

    5. Figure 1: Russell’s Circumplex model of emotions

      good figure to refer back to when referencing how emotional AIs work and are able to recognize user emotions

    6. Simple discrete models associate facial expressions to the basic core of 6 emotions recognized universally (namely: anger, fear, surprise, disgust, joy, and sadness), whereas multidimensional approaches parameterize emotions as a linear combination of different psychological dimensions.

      are these technically more accurate than a human analyzing a patient's emotions? (toy sketch of the multidimensional idea below)
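
      A toy sketch of the multidimensional idea: represent an emotion as a point in Russell's valence/arousal plane (Figure 1) and label it by quadrant. The coarse quadrant labels are my own simplification, not the paper's:

      ```python
      def circumplex_label(valence: float, arousal: float) -> str:
          """valence/arousal in [-1, 1]; returns a coarse circumplex quadrant label."""
          if valence >= 0 and arousal >= 0:
              return "excited/happy"   # high arousal, pleasant
          if valence < 0 and arousal >= 0:
              return "angry/afraid"    # high arousal, unpleasant
          if valence < 0:
              return "sad/depressed"   # low arousal, unpleasant
          return "calm/relaxed"        # low arousal, pleasant

      # e.g. circumplex_label(-0.7, 0.8) -> "angry/afraid"
      ```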

    7. Different codification approaches have been proposed in the literature, but one of the most widely used is the Facial Action Coding System (FACS) developed by Ekman and Friesen [9]. As a final goal, FACS looks to recognize and categorize action units (AUs) which represent the minimum units of muscular activity that can produce momentary changes in facial appearance. Automatic AU recognition provides as output one or more detected micro-movements of facial muscles along with the intensity (from a neutral state) of each AU.

      facial expression detection software (toy AU-to-emotion sketch below)
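
      A toy sketch of the FACS idea: map already-detected action units to an emotion label. The AU combinations below are the classic textbook examples (e.g. AU6 cheek raiser + AU12 lip corner puller = joy); the upstream AU detector, which is the hard part, is assumed to exist:

      ```python
      EMOTION_RULES = {
          frozenset({6, 12}): "joy",          # cheek raiser + lip corner puller
          frozenset({1, 4, 15}): "sadness",   # inner brow raiser + brow lowerer + lip corner depressor
          frozenset({1, 2, 5, 26}): "surprise",
      }

      def emotion_from_aus(detected_aus: set[int]) -> str:
          for aus, emotion in EMOTION_RULES.items():
              if aus <= detected_aus:  # all AUs in the rule were detected
                  return emotion
          return "neutral/unknown"

      # e.g. emotion_from_aus({6, 12, 25}) -> "joy"
      ```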

    8. The aim of the work presented here is to provide an architecture that allows the integration of these technologies into the digital health ecosystems previously developed by the authors in [1,2]

      purpose of paper

    9. Virtual therapists may also encourage patients to share their thoughts and feelings more deeply than a human therapist, as they may be perceived as safer environments for sharing personal information.

      curious to see the studies/evidence for this

    10. personalized therapy.

      how do patients feel about talking to a chatbot vs a real human?

    11. It can also help doctors to ensure patients are following a certain treatment, as well as assessing the treatment evolution.

      this seems to be a common application brought up by many professionals

    12. EAI can be used to help patients understand their emotional state under stressful situations, so they can manage their emotions and handle difficult situations.

      in terms of treatment for mental health, like what is being suggested? Is an AI actually more accurate than a human diagnosis? Especially since mental health diagnoses are already hard to pinpoint.

    1. emotional AI-based outreach initiatives at community gathering facilities can help elders more comfortably familiarize with the technology, while indirectly having a positive impact on their attitude toward the technology when later used in the diagnosis and treatment process.

      a solution!

    2. This result implies that policy-wise, an increasing awareness of AI through education and media can produce a positive impact on emotional AI perception in healthcare.

      what I was thinking :)))

    3. This finding suggests that Japanese elderly patients have negative perceptions of emotional AI-based tools in healthcare in both private and public settings, which is problematic as the target demographic of AI-based healthcare utilities is the older generation in Japan.

      hmm, this does make sense, as they mentioned earlier that it's easier for people to accept when they are more educated about it, and the newer gen has grown up with this tech while the older has not

    4. Discriminatory concern is also found to have no significant correlation with the attitude toward AI in healthcare.

      we see in many cases here that the concerns raised in the AI literature often don't hold up when people come face to face with what they're actually concerned about. I will say that it might be different here because it takes place in Japan. Since Japan is largely homogenous, why would they fear bias as much? Other possible ethical concerns like class, gender, etc. probably have less grounds in medical malpractice or ethics to begin with. (oh, they do kind of explain these thoughts in the next paragraph)

    5. Kitano (2006) argues the traditional belief of Shintoism predisposes Japanese people to the natural propensity of ascribing the human quality of a heart (心-Kokoro) or spirit to inanimate objects, thus, explaining the embrace of Japan toward robots.

      really, really interesting to have robots be so centered in a culture that they become a comfortable thing. I wonder if other cultures are similar or if it is really mostly Japan, due to the embedding of inanimate objects and robots in daily life, pop culture, and historical culture

    6. traditionally high degree of trust the Japanese have for authority figures and public healthcare institutions.

      culture and technology intersect here in agreement rather than contradiction, vs. earlier when we saw that traditional conservative beliefs might breed distrust of AI.

    7. In contrast, higher familiarity with AI application in healthcare and medicine was associated with subjects’ positive perception of emotional AI usage in public health facilities

      ...maybe we need to educate better on AI to help people understand it rather than fear it

    8. Whereas familiarity with AI in healthcare presents positive correlation with the attitude of private AI use, concern over losing control to AI presents a negative correlation. In other words, those with higher concerns over losing control to AI perceived AI private usage in healthcare more negatively. Surprisingly, concerns for privacy violations and discrimination implicated in the use of AI in a private setting are not significant

      important results! privacy seems to be on the back burner among potential sources of hesitancy about AI in healthcare, while the threat of losing control remains dominant.

    9. robots having emotion is what blurs the line between robots, being perceived as an object or a being, being perceived as artificial or humane, and being perceived with a soul or without a soul.

      I wonder if this is true in other contexts. would showing strong emotional intelligence relieve an uncanny valley effect or discomfort around robots? I feel like an emotionally intelligent robot might disturb me more, in terms of the "robots could take over" sentience issue.

    10. [In] James Wright's ethnographic study on "Hug", a mechanical care robot designed to help lift and transfer the elderly on behalf of carers, the author discussed the element of "安心" (anshin—peace of mind) to explain the Japanese caregivers' perception of the mechanical lifting device [21].

      possible study to look into as well

    11. The researcher rejected the common argument that empathetic robots should not be considered an alternative to caregivers because their emotions and interactions are artificial and thus not authentic, with two counterpoints. First, based on interviews with the patients and observations of their interactions with the care robot, Aronsson revealed that many patients were factually aware of the differences between care robots and caregivers but chose to interact with them regardless. Second, there were circumstances where the caregivers themselves had to act professionally and provide care for the elderly regardless of their empathy or emotions. In such a context, Aronsson argues that the authenticity of Palro's emotional labor was as valid as the human professional's. Thus, in the context of care facilities, the question of authenticity doesn't translate equally in a philosophical sense, given that empathetic robots can act as effectively in their caregiver's role as their human counterpart.

      similar to the reading in class: how do we define emotions if we perceive and receive emotion from robots in the same way as from humans? this could be another kind of authenticity

    12. social robots and humans.

      I've done some research into social robots & humans in media psych! so another topic of interest here... how can I tie in all these branches together, or focus more clearly on one?

    13. empathetic AI-based chatbots are attracting increasing research attention as they are believed to be beneficial in supporting patients with mental health issues or emotional distress

      examples of empathetic AI --> possible direction to look more into compassionate, empathetic AI

    14. As mentioned in the previously mentioned, emotional AI is a narrow branch of AI to identify and respond to human emotions. The technology is based on the work of Rosalind Picard, a professor at the MIT Media Lab, who coined the term “affective computing”. Picard defined the concept as “computing that relates to, arises from, and deliberately influences emotion” [12].

      emotional AI definition

    15. Sometimes contradictions result from a conflict between two forces.

      this could be a good topic to focus on for my presentation? culture vs technology, and how the love of both pulls people in two directions when it comes to AI, even on a topic of necessity given the shrinking healthcare workforce.

    16. For instance, in 1999, Sony introduced the world's first robot dog companion, AIBO, and in 2018, reintroduced AIBO with an AI upgrade that infused it with a “lovable quality”. Studies on the mental health impacts of PALRO, Fujisoft's humanoid robot which communicates with humans through voice and remembers the faces of over 100 people, have demonstrated its ability to reduce anxiety and stress in dementia patients. More importantly, PALRO is found to encourage people with dementia to interact with others in senior care facilities [8].

      interesting cultural context: like I mentioned earlier, AI and robots have less of a stigma and are already well loved.

    17. a shortage of nearly half a million healthcare workers by 2025

      interesting to look at it in this cultural and geographical context. Although the shortage of medical professionals is felt globally, this seems to be a significant number for Japan specifically. While the US might want to incorporate AIs for the sake of innovation and technological advancement, here there is somewhat of a necessity to make up for the loss of human labor (and not the other way around where AIs are kicking out human labor).

    18. emotional AI technologies, i.e., deep learning systems trained to read, classify, and respond to human emotions.

      interested in focusing on emotional AIs in healthcare, possibly using this particular case study as a jumping-off point