    1. Finally, the ASRT system can help to reduce costs in clinical trials by enriching recruited samples. To obtain a pre-specified sample size of Aβ+ individuals, pre-screening using the ASRT system would require recruitment of a higher number of participants (+53.8% in MCI, and +35.1% in CU participants), but reduce the volume of costly PET scans needed (−35.3% in MCI, and −35.1% in CU individuals).

      Use as a research tool rather than for clinical application
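The enrichment arithmetic in the quote above can be sketched as follows. The prevalence, screen sensitivity, and specificity below are hypothetical placeholders, not the study's values, so the printed percentages are illustrative only.

```python
# Illustrative sketch (not the paper's actual model) of how pre-screening
# enriches a PET-confirmed sample. All parameters are hypothetical.

def enrichment(target_positive, prevalence, sens, spec):
    # Without pre-screening: everyone recruited gets a PET scan, so the
    # number of scans equals the recruitment needed to find the positives.
    scans_without = target_positive / prevalence
    # With pre-screening: only screen-positives get a PET scan. Recruit
    # enough people that sens * prevalence * recruited = target_positive.
    recruited = target_positive / (prevalence * sens)
    screen_pos_rate = prevalence * sens + (1 - prevalence) * (1 - spec)
    scans_with = recruited * screen_pos_rate
    extra_recruitment = recruited / scans_without - 1   # fraction more people
    scan_reduction = 1 - scans_with / scans_without     # fraction of scans saved
    return extra_recruitment, scan_reduction

extra, saved = enrichment(target_positive=100, prevalence=0.4, sens=0.8, spec=0.7)
print(f"extra recruitment: {extra:+.1%}, PET scans saved: {saved:.1%}")
```

With these toy numbers, a screen that misses 20% of positives forces extra recruitment, but filtering out most negatives still cuts the scan count substantially, which is the trade-off the study quantifies.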

    2. In the current study, the ASRT system is superseded by the PACC5 cognitive composite for detecting MCI. In the current study, the ASRT system is superior to the PACC5 for detecting Aβ positivity.

      Isn't better than PACC5 (current methods) for detecting MCI but works for the presence of amyloid plaques

    3. Alzheimer’s disease is not routinely screened for in clinical practice.1 Instead it is most commonly tested for when patients present with cognitive complaints, or after cognitive impairment interferes with daily functioning. Research indicates that half of individuals aged 65+ with dementia are missed from primary care dementia registers, which suggests that around 50% of cases remain undiagnosed even at the more advanced stages of Alzheimer’s disease.

      No current screening, AD currently massively underdiagnosed

    4. Simulation analyses indicated that in primary care, speech-based screening could modestly improve detection of mild cognitive impairment (+8.5%), while reducing false positives (−59.1%). Furthermore, speech-based amyloid pre-screening was estimated to reduce the number of PET scans required by 35.3% and 35.5%

      Okay, pretty significant percentages, but what are these based on?

    5. amyloid beta status (primary endpoint)

      Okay, so amyloid beta status is a stand-in for AD diagnosis. The study measures the AI's performance by how well it can predict amyloid status. If positivity looks likely, send the patient for a scan.

      So...does it work? And does it work better than current NPTs?

    6. The automatic story recall task was administered during supervised in-person or telemedicine assessments, where participants were asked to recall stories immediately and after a brief delay

      Data they obtained to feed the AI

    7. Early detection of Alzheimer’s disease is required to identify patients suitable for disease-modifying medications and to improve access to non-pharmacological preventative interventions

      Value

    1. fourth row of the table reports the performance of adding APOE data to the model using demographic features, resulting in an AUC and F1 score of 71.7% and 75.7%

      Did adding APOE lower the accuracy? No, that's APOE used with demographics alone, without the text features.

    2. As depicted in Figure 5, our analysis revealed that subtests related to demographic questions (DEMO), BNT, similarity tests (OTHER), and WAIS emerged as the top features driving the performance of our model

      This seems contradictory: they said demographics don't add predictive value, yet here demographic questions are among the most important features.

    3. The Results section indicates that adding demographic features to text features does not enhance the model’s ability to predict the progression from MCI to AD. This contrasts with previous assumptions about the predictive power of age and other demographics in relation to degenerative diseases over extended periods.

      Does this hold for other studies too?

    4. AD with an accuracy of 78.2% and a sensitivity of 81.1% in the held-out test data, demonstrating strong predictive power over a 6-year span

      Accuracy encompasses both sensitivity and specificity.

      So the model is accurate overall but weaker on specificity: with sensitivity (81.1%) above accuracy (78.2%), specificity must fall below accuracy, i.e. the model tends to produce more false positives.
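A minimal sketch of how accuracy, sensitivity, and specificity relate, using a hypothetical confusion matrix (not the study's actual counts):

```python
# Toy confusion matrix: 100 true progressors, 100 non-progressors.
# These counts are hypothetical, chosen only to illustrate the relationship.
tp, fn = 81, 19   # sensitivity = 81%
tn, fp = 75, 25   # specificity = 75%

sensitivity = tp / (tp + fn)                 # true-positive rate
specificity = tn / (tn + fp)                 # true-negative rate
accuracy = (tp + tn) / (tp + fn + tn + fp)   # blends both rates

print(sensitivity, specificity, accuracy)    # 0.81 0.75 0.78
```

With balanced classes, accuracy is just the average of sensitivity and specificity, which is why a sensitivity above the accuracy implies a specificity below it.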

    5. Demographics

      Text + Demographics + APOE + Health: 78.5/79.9, the highest score, as expected

      Text alone: 77.8/79.4 (so adding in the other info doesn't look like it helped a lot)

      Traditional neuropsychological tests: 71.3/75.5

      Demographics alone: 68.8/71.1
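For reference, the AUC/F1 pairs compared above can be computed like this; the labels and scores here are toy values, not data from the paper.

```python
# Stdlib-only sketch of the two metrics reported for each feature set.

def auc(labels, scores):
    # Probability that a random positive is scored above a random negative
    # (Mann-Whitney formulation; ties count as half a win).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, preds):
    # Harmonic mean of precision and recall, via confusion-matrix counts.
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, preds))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, preds))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, preds))
    return 2 * tp / (2 * tp + fp + fn)

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # model's predicted probabilities
preds = [int(s >= 0.5) for s in scores]   # thresholded for F1
print(auc(labels, scores), f1(labels, preds))
```

AUC is threshold-free (it ranks scores), while F1 depends on the chosen cutoff, which is why the paper reports both.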

    6. subsequent two rows highlight models that leverage text features along with readily available demographic data such as age, sex, and education, also demonstrating strong predictive capabilities with an AUC and F1 score of 77.8% and 79.4% for our NLP model using only text features

      So here they used less detailed demographic information (or text features alone) and still got a pretty accurate result

    7. The first row showcases the model’s performance, incorporating text, demographics, APOE, and health factors, achieving an AUC of 78.5% and an F1 score of 79.9%, marking the highest effectiveness observed.

      How much of the predictability is coming from the speech recognition, and how much is coming from the other factors it is also accounting for (demographics, health factors, apolipoprotein E)? Now that is a specific question...

    8. 10% for testing

      They drew the test set from the same population, so it's not really independent testing. It would be interesting to see how it held up with a completely unrelated group of people; it potentially wouldn't hold up as well.

    9. By leveraging transformer-based language models, we aim to capture semantic nuances potentially missed by conventional scoring, enriching the assessment with comprehensive text features

      A Transformer is a neural network architecture introduced by Google researchers in 2017 ("Attention Is All You Need"). It revolutionized how machines understand and generate sequences.

      Instead of processing words one at a time, transformers look at all words in a sequence simultaneously and use a mechanism called self-attention to understand how each word relates to every other word.

      Self-attention mechanism: each token “looks” at other tokens in the sentence and assigns attention weights, numbers that represent how important each word is to understanding the current one.

      Example: In the sentence “The patient who had pneumonia was discharged.”, the word “was” should pay more attention to “patient” than to “pneumonia.” The self-attention mechanism captures this context automatically.

      Stacked layers: many layers of self-attention and feed-forward networks are stacked.

      Each layer learns increasingly abstract relationships: syntax, semantics, and even reasoning patterns.
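The self-attention step described above can be sketched in a few lines. The token vectors here are tiny hand-picked examples, not learned embeddings, and queries, keys, and values are set equal for brevity (a real Transformer learns separate projections for each).

```python
import math

# Toy single-head scaled dot-product self-attention, stdlib only.

def softmax(xs):
    m = max(xs)                                # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Each token's query is compared with every token's key...
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        # ...and softmax turns the scores into weights that sum to 1.
        weights = softmax(scores)
        # Output is the weight-averaged mix of all value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three "tokens" with 2-dimensional embeddings; the first two are similar,
# so they attend strongly to each other and their outputs blend together.
x = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(self_attention(x, x, x))
```

Because the weights for each token sum to 1, every output row is a convex combination of the value vectors, i.e. each token's new representation is a context-weighted mix of the whole sequence.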

    10. Our findings, derived from the neuropsychological test interviews conducted by the Framingham Heart Study, demonstrate strong performance, achieving an accuracy rate of 78.5% and a sensitivity of 81.1% in predicting progression to AD within 6 years.

      Okay, so it works pretty well... what are some next steps? Application to a different disease that affects speech?

    11. The proposed method offers a fully automated procedure, providing an opportunity to develop an inexpensive, broadly accessible, and easy-to-administer screening tool for MCI-to-AD progression prediction, facilitating development of remote assessment.

      Okay, there is value in this because it's another way to help differentiate between MCI and Alzheimer's: risk stratification, get them an MRI, then potentially some of the drugs that can prevent plaques and tangles from forming.

    12. We applied natural language processing techniques along with machine learning methods to develop a method for automated prediction of progression to AD within 6 years using speech

      OKAY a primary research article

    1. Generative classifiers and “other” methods, mainly consisting of studies which combined multiple AI algorithms to generate novel or complex classification tools, were difficult to categorize; each constituted 10% of the literature. Most of these studies focused heavily on computational methods which are not easily accessible to a clinical audience

      Generative: creating new categories/classifications in which to place data points (finding new patterns in complex data sets). Not used as much. Potential for a question here?

    2. Most studies (71%) relied on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, with no other individual dataset used more than five times

      Lack of a diverse dataset is again a common theme.

    3. lack of sufficient algorithm development descriptions and standard definitions

      Is there any potential research question here? What should the standard definitions be? What details are needed?
