62 Matching Annotations
  1. Last 7 days
    1. In conclusion, using programs that report plagiarism percentages (based on the concept of plagiarism), similarity scores, or AI use, without prior analysis of the manuscript's quality, represents an editorial attitude driven by new technology guidelines. Such scores do not determine the quality of manuscripts and should not be a condition that prevents research papers from reaching the reviewers who analyze the quality of the report (a sketch of such a similarity score follows this list). Jesus A. Mosquera
    2. AI can assist researchers in the manuscript writing process, improving writing style and language, but not in the interpretation, analysis, and logical conclusions of the results found, which must be carried out by the researchers.
    3. The correct use of AI can be beneficial for scientific publications, provided that natural intelligence establishes the order of what the AI reports.
    4. "Plagiarism is defined as presenting another person's work as your own, whether it's text, ideas, data, or images, without proper attribution. This includes self-plagiarism, where an author reuses their own previously published work without proper citation"
    5. The editorial committees of various scientific journals have established plagiarism, similarities to other works, and the use of artificial intelligence (AI) as rejection criteria for research papers, without prior expert review.
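The "plagiarism percentage" these annotations criticize is typically a raw text-overlap score. As a minimal illustration of what such a number measures, the sketch below computes the Jaccard overlap of word 5-grams between two passages; the function names, the 5-gram window, and the example texts are illustrative assumptions, not the algorithm of any specific screening tool.

```python
# Minimal sketch of how a similarity checker might score overlap.
# The word 5-gram window and Jaccard measure are illustrative
# assumptions, not the method of any particular commercial tool.

def word_ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_percent(submission: str, source: str, n: int = 5) -> float:
    """Jaccard overlap of word n-grams, reported as a percentage."""
    a, b = word_ngrams(submission, n), word_ngrams(source, n)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)

doc_a = "plagiarism is presenting another person's work as your own without proper attribution"
doc_b = "presenting another person's work as your own without proper attribution is plagiarism"
print(f"{similarity_percent(doc_a, doc_b):.1f}% overlap")  # 50.0% overlap
```

Note what such a score captures: shared phrasing only. It cannot distinguish quoted, cited, or boilerplate text from misappropriated text, which is the annotations' point about why a percentage alone should not gate peer review.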
    1. By using Artificial Intelligence based technology in an effective and ethical way, supported by teachers and policymakers, students at the Faculty of Economic Sciences, University of Oradea can acquire the necessary communication skills in Business English in order to be successful in today's interconnected world.
    2. Our research, conducted through semi-structured interviews used as the data-collection instrument in its qualitative phase, revealed that 42% of the interviewed students use Artificial Intelligence to improve their Business English communication skills.
    3. In order to examine and get a clear picture of the use of Artificial Intelligence in learning foreign languages, with specific reference to Business English, we chose key informants who could provide relevant information, selected according to the following inclusion criteria: 1. Economics students with English selection exam scores above 80 points; 2. Economics students who took the Introduction to Business English course during the first semester.
    4. The advent of Artificial Intelligence has revolutionized not only students' traditional methods of learning but also teachers' teaching methods, bringing new ideas and new opportunities to all aspects of the teaching/learning process.
    5. Therefore, this study offers a helpful perspective on students' opinions of using Artificial Intelligence based technology and on how such tools can help develop their communication skills.
    6. Lack of confidence and anxiety about making errors are only two of the main factors to take into consideration when talking about communication in Business English.
    7. Business English is a complex field that requires a lot of practice to be mastered. The focus of the course is communication. Since today's students have proved to be digitally literate from an early age, the use of artificial intelligence-based tools in teaching has generated increased interest in the learning process.
    1. Another notable advancement in this review is the incorporation of research addressing the ethical and fairness concerns surrounding AI in MH care. The earlier review identified these as key areas of concern, particularly regarding algorithmic bias and transparency.
    2. The findings continue to highlight the promise of AI technologies in addressing missed care, alleviating clinicians' workloads, improving diagnostic accuracy, and addressing workforce challenges. However, the newly reviewed studies bring further insights and raise further complexities, particularly concerning the real-world implementation, ethical considerations, and the need for more rigorous evaluations of AI systems in diverse MH contexts.
    3. Ultimately, the research articles convey a sense of cautious optimism about CDSS's potential to transform MH care. They highlight the opportunities presented by data-driven insights while acknowledging the importance of addressing ethical concerns and prioritising patient well-being. The articles suggest that, by carefully navigating these complexities, the field can harness the power of technology to deliver more personalised, effective, and equitable MH care for all.
    4. Beyond technical challenges, the articles also underscore the importance of addressing ethical considerations, such as data privacy and the potential for bias.
    5. While the potential benefits of CDSS are widely acknowledged, the research articles also address the challenges and ethical considerations associated with their development and implementation. A key concern is ensuring the accuracy, fairness, and clinical utility of these systems.
    6. These studies collectively emphasise that, while AI has significant potential in MH care, its integration must be carefully managed through ongoing research, collaboration, and a balanced approach that combines technological innovation with workforce development.
    7. In another study, a novel multimodal, multitask learning model was directly applied as an intervention to predict rehabilitation outcomes for individuals with severe mental illness.
    8. One study utilised this model to identify high-risk patients and recommended preventative interventions for clinicians, with a specific emphasis on refining decision thresholds through decision curve analysis to optimise sensitivity while minimising the risk of overtreatment (Liu et al. 2024); a minimal sketch of that threshold analysis appears after this list.
    9. Clinical decision support systems (CDSS) for MH, including those incorporating AI, show promise in improving patient care. Overall, the results of this subsequent literature review highlight the potential of CDSS to improve MH care but emphasise the importance of rigorous evaluation, debiasing efforts, and ongoing monitoring to ensure fairness, accuracy, and clinical utility.
    10. However, the importance of maintaining clinician oversight remains a central theme, ensuring that AI tools enhance, rather than replace, human judgement.
    11. This updated review revisits the original research aims (Higgins et al. 2023), with a particular focus on how AI can complement and augment clinicians' decision-making processes. The findings highlight the potential of AI-driven clinical decision support systems (CDSS) to enable clinicians to make more informed, accurate, and timely decisions, ultimately reducing instances of missed care.
    12. Healthcare systems now face the challenge of integrating these powerful tools into clinical workflows while maintaining the highest standards of care, medical ethics, and community benefit.
    13. The rapid advancements in artificial intelligence (AI) and machine learning (ML) have significantly expanded the technological capabilities available to healthcare systems, particularly with the emergence of large language models (LLMs) such as ChatGPT.
    14. While AI-driven CDSS holds significant promise for optimising MH care, sustainable improvements require the integration of AI solutions with systemic workforce enhancements.
    15. New evidence highlights the importance of clinician trust, system transparency, and ethical concerns, including algorithmic bias and equity, particularly for vulnerable populations. Advancements in AI model complexity, such as multimodal learning systems, demonstrate improved predictive capacity but underscore the ongoing challenge of balancing interpretability with innovation.
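On the threshold-tuning point in item 8 above (Liu et al. 2024), decision curve analysis compares treatment policies by their net benefit at a chosen risk threshold. The sketch below implements the standard net-benefit formula on synthetic data; the data and numbers are fabricated for illustration and are not the model or results of the reviewed study.

```python
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, threshold: float) -> float:
    """Net benefit of treating every patient whose predicted risk meets
    `threshold` (standard decision curve analysis): TP/N - FP/N * pt/(1 - pt)."""
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Synthetic labels and risk scores, for illustration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.6 * y_true + rng.normal(0.2, 0.15, size=500), 0.01, 0.99)

# Sweeping thresholds exposes the trade-off the study describes: lower
# thresholds buy sensitivity at the price of overtreatment (more FPs).
for pt in (0.1, 0.2, 0.3, 0.4):
    print(f"threshold {pt:.1f}: net benefit {net_benefit(y_true, y_prob, pt):+.3f}")
```

Choosing the threshold that maximises net benefit on a validation set is one way to optimise sensitivity while minimising overtreatment, as the annotation puts it.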
    1. Following the intuitive distinction between knowledge and understanding, we argue that understanding is the superior epistemic aim for higher education because it constitutes a cognitive achievement of the sort we seek in education — proof an individual is developing as a cognitive agent, able to draw upon cognitive effort and skill to solve problems.
    2. In both cases, the cognitive skills on display are memorization and recall. Both students might have knowledge (or the appearance of it) at that moment, but the assessment does little to gauge whether they understand the material.
    3. Given the ease with which students can acquire, or appear to acquire, knowledge via ChatGPT, higher education needs a revised epistemic aim, specifically one that constitutes a cognitive achievement — signaling an agent's cognitive development via effort and skill — and that is not easily undermined or acquired by using generative AI tools.
    4. AI panic: we aim for students to develop as cognitive agents who can demonstrate their understanding, but what we demand of them via assessments is only knowledge, something we now fear they can feign (but never acquire) with the use of AI. Simply put, if ChatGPT can complete our assessments, they are poor assessments to begin with, i.e., they are not an adequate gauge of the cognitive development we aim for in education.
    5. Consider two students taking a math quiz. They encounter the problem, “What is the square root of 9?” For whatever reason, one student opted to memorize the answers to specific math problems in case they came up in a test, while another focused on learning the mathematical operation. Both give the same correct answer to the question, but one has mere knowledge, insofar as they know the answer to the problem — the square root of nine is three — while the other has understanding: they arrive at the answer by completing the operation. Now, we ask, which student has a better grasp of the square root of nine, or of math in general?
    6. All things equal, the student who understands the mathematical operation could complete the square root function for other numbers. Their understanding spans beyond the knowledge of a particular fact (a code sketch of this contrast appears at the end of this list).
    7. The cognitive success in one case is the result of memory and luck, the other a result of cognitive effort and agency in combining what one knows with one's understanding of the game of chess, the other player's moves, the timing, flow, and strategy of play, and one's objective. Drawing on the network of relationships between these pieces of knowledge and skills to successfully win a chess game does not merely demonstrate that one knows the rules and strategy needed to play, but that one understands these and can apply them to the game.
    8. Given the inevitable use of generative AI, like ChatGPT, on campus, we maintain that understanding is the superior epistemic aim for education.
    9. Rather than immediately reaching for practical solutions to address current fears of ChatGPT on campus, we argue that campus leaders, instructors, and taskforce members should first pause and revisit what it is they hope to achieve through education, lest their responses lead further away from guarding what is actually at stake in generative AI use in higher education. We hope to have provided a roadmap for examining the educational aims at the heart of the debate over tackling ChatGPT in education that is insightful and will be instructive for campus leaders in their pursuit of a valuable and viable response to generative AI on campus.
    10. The threat of students using ChatGPT highlights the primary pitfalls of knowledge as an epistemic aim of education, and by extension the object of assessment, and bolsters the case for pursuing understanding. And, in reviewing the advantages of understanding as an epistemic aim, we have revealed a pathway for responding to the threat of generative AI on campus. Adopting understanding as higher education's epistemic aim over knowledge both mandates and allows for curriculum revision in favor of assessments that can be completed with the assistance of ChatGPT, but not via ChatGPT alone.
    11. AI tools cannot give you understanding primarily because it is not something that is simply received or acquired without some effort from the agent. Thus, an essay prompt that can easily be answered by AI is assessing for knowledge, not understanding. In other words, if generative AI can complete the assessment for the student, without the required cognitive effort on the part of the student, then that is a bad assessment that is aimed at assessing knowledge, not understanding. In the case of the essay, it is a bad essay prompt if ChatGPT can convincingly complete it without any input, fine tuning, or correction from the student.
    12. Educators naturally fear that generative AI tools like ChatGPT put student effort and cognitive achievement at risk because they can be used to pass assessments;
    13. While understanding may be more difficult to gauge and to confirm whether students have it, we can design assessments for students to demonstrate it — and often, in doing so, ensure that they acquire it.
    14. Generative AI tools like ChatGPT, we suggest, merely bring to the fore what has likely been an issue for some time: that existing assessments do not gauge students' cognitive achievements.
    15. This is the sort of cognitive ability we aim to develop through education. We seek cognitive effort, development, and achievement. In theory, educational assessments gauge our progress toward this aim, i.e., whether students are developing as cognitive agents. Too often, though, assessments measure mere knowledge, not understanding.
    16. If you can draw upon the knowledge that you do have, and make connections between these pieces of information, you can generally make up for the gap in your knowledge, i.e., that you do not know where the train station is in this particular city. Solving this problem by drawing on the network of things you do know is a cognitive achievement: you do not simply have knowledge in this case, rather, you display cognitive effort and skill to infer and make connections between what you do know to make up for what you do not. You draw upon and demonstrate your understanding.
    17. Certainly, one used ChatGPT to generate their summary. Still, any student could just as easily have watched a YouTube video, found a study guide with content summaries online, or read a Wikipedia page on photosynthesis to acquire theirs. Both the ChatGPT summary and lecture-slide summary will likely pass the assessment. The students have both demonstrated knowledge of photosynthesis. It is not knowledge they compiled through effort or skill, or that we can guarantee will persist beyond the context of the assessment, but they demonstrate at that moment that they possess knowledge.
    18. This renders our assessments less able to gauge students' development as cognitive agents. Even when students retain information acquired from ChatGPT after the assessment, simply possessing knowledge of a course subject does not confirm that their education was successful, regardless of whether the knowledge was acquired in class or through ChatGPT.
    19. In passing the assessment, it appears the student learned something, i.e., that they acquired and retained information through cognitive effort and intellectual skill. But, if they used generative AI, or if it is possible that they did, our assessments are no longer reliable indicators of students' cognitive effort and development as cognitive agents. We essentially get a false positive that they have developed.
    20. Accordingly, a reasonable institutional response would be to revise academic honor codes to include specific clauses on using generative AI like ChatGPT. Higher education institutions might also lean into educating students in intellectual virtues, stressing the importance of academic integrity. Both responses are valuable and justifiable reactions to the threat of pervasive AI use on campus. However, these responses are only sufficient if student integrity is the sole concern.
    21. While discourse in higher education surrounding generative AI focuses on the need for an adequate response to the tool's use on campus, we argue that an adequate response requires examining what exactly in education is at stake in AI use in higher education.
    22. While some argue that the tool essentially is the new calculator — inevitable and necessarily worth incorporating — others classify it as an existential threat to the future of higher education.
    23. Its ability to synthesize and summarize information in approachable prose can make prepping for an exam easier and less time-consuming than reviewing multiple weeks' worth of course material.
    24. They can answer questions, produce outlines, compose poems, write computer code, and generate argumentative essays that pass for college-level writing. The generated responses are unique: they are not plagiarized — copied directly from a human-written text — nor are they recycled or boilerplate (e.g., using the same prompt can produce distinct responses). This makes detecting ChatGPT-generated text (products of generative AI) difficult, though some have found its sentence patterns to be less complex and varied than human prose (a rough sketch of one such sentence-pattern measure appears at the end of this list).
    25. We conclude that the advent and continued advancement of AI tools bolster the case for understanding over knowledge as an aim of education. We argue that the epistemological distinction between the two is informative and directive in articulating what is at stake in AI use in education and what might constitute a strategic response.
    26. We propose that generative AI tools like ChatGPT threaten higher education's commitment to pursuing cognitive achievement. Then, we argue that generative AI tools undercut knowledge-based learning goals and corresponding assessments because knowledge is easily acquired and demonstrated without the corresponding cognitive effort and achievements.
    27. This new type of generative artificial intelligence technology (generative AI) can produce written prose that passes for basic college-level writing. Since then, workshops, institutional task forces, and committees have been created across the higher education system to address generative AI use on campus.
    28. Although AI can enhance and aid students in developing understanding, it can neither provide them with understanding nor give the appearance of understanding without student effort.
    29. They propose that ChatGPT, rather than threatening student cognitive development and effort, reveals a serious flaw in higher education's current aims and assessments: they are directed at knowledge, not understanding.
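The square-root example in item 5 of the list above maps neatly onto code. In this sketch the names and the use of Newton's method are illustrative assumptions; recall answers only the one memorized problem, while carrying out the operation generalizes to unseen inputs.

```python
# Contrast from the square-root quiz example: a memorized fact versus
# carrying out the operation. Names and method are illustrative.

MEMORIZED_ANSWERS = {9: 3}  # the first student's stored fact

def answer_by_recall(n: int) -> int:
    """Mere knowledge: correct only for problems seen before."""
    return MEMORIZED_ANSWERS[n]  # raises KeyError for any other number

def answer_by_method(n: float) -> float:
    """Understanding the operation: Newton's method generalizes."""
    x = max(n, 1.0)
    for _ in range(25):
        x = (x + n / x) / 2
    return x

print(answer_by_recall(9))            # 3 -> both students pass this quiz item
print(round(answer_by_method(9), 6))  # 3.0
print(round(answer_by_method(2), 6))  # 1.414214 -> only the method transfers
# answer_by_recall(2) would raise KeyError: that fact was never memorized.
```

An assessment that only ever asks about 9 cannot tell the two functions apart; one that varies the input can, which is precisely the articles' argument for assessing understanding rather than knowledge.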
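On the detection point in item 24 above: the "sentence patterns" observation is often operationalized as burstiness, roughly the variation in sentence length across a text. The sketch below computes that crude statistic; it is a toy proxy assumed for illustration, not the actual algorithm of any detector.

```python
import re
import statistics

def sentence_length_stats(text: str) -> tuple[float, float]:
    """Mean and standard deviation of sentence length in words: a crude
    proxy for the 'burstiness' some AI-text detectors report."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    sd = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
    return mean, sd

sample = ("Photosynthesis converts light into chemical energy. "
          "It happens in chloroplasts. Plants, algae, and some bacteria "
          "use it to build sugars from carbon dioxide and water.")
mean, sd = sentence_length_stats(sample)
print(f"mean {mean:.1f} words, sd {sd:.1f}")  # mean 8.3 words, sd 5.9

# A low sd (uniformly sized sentences) is one weak signal of generated
# text; it is easily gamed and nowhere near conclusive on its own.
```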