- Nov 2024
-
ali-alkhatib.com
-
gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm
The author has moved from mitigating the harm of algorithmic systems to the moral standpoint that actively resisting, sabotaging, and ending AI and its attached political projects are valid reactions to harm. So he's moving from monster adaptation / cultural category adaptation to monster slaying, cf. [[Monstertheorie 20030725114320]]. I empathise, but because of the mention of the attached political projects / structures I also wonder about polarisation in response, with monster embracers (there are plenty) shifting the [[Overton window 20201024155353]] towards them.
-
-
untoldmag.org
-
Decolonizing AI is a multilayered endeavor, requiring a reaction against the philosophy of ‘universal computing’—an approach that is broad, universalistic, and often overrides the local. We must counteract this with varied and localized approaches, focusing on labor, ecological impact, bodies and embodiment, feminist frameworks of consent, and the inherent violence of the digital divide. This holistic thinking should connect the military use of AI-powered technologies with their seemingly innocent, everyday applications in apps and platforms. By exploring and unveiling the inner bond between these uses, we can understand how the normalization of day-to-day AI applications sometimes legitimizes more extreme and military employment of these technologies. There are normalized paths and routine ways to violence embedded in the very infrastructure of AI, such as the way prompts (text inputs, N.d.R.) are rendered into actual imagery. This process can contribute to dehumanizing people, making them legitimate targets by rendering them invisible.
Ameera Kawash's (artist, researcher) definition of decolonizing AI.
-
- Feb 2024
-
arxiv.org
-
T. Herlau, "Moral Reinforcement Learning Using Actual Causation," 2022 2nd International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 2022, pp. 179-185, doi: 10.1109/ICCCR54399.2022.9790262. keywords: {Digital control;Ethics;Costs;Philosophical considerations;Toy manufacturing industry;Reinforcement learning;Forestry;Causality;Reinforcement learning;Actual Causation;Ethical reinforcement learning}
-
-
pdf.sciencedirectassets.com
-
Can model-free reinforcement learning explain deontological moral judgments? Alisabeth Ayars, University of Arizona, Dept. of Psychology, Tucson, AZ, USA
-
- Jan 2024
-
www.linkedin.com
-
the canonical unit, the NCU supports natural capital accounting, currency source, calculating and accounting for ecosystem services, and influences how a variety of governance issues are resolved
-
for: canonical unit, collaborative commons - missing part - open learning commons, question - progress trap - natural capital
-
comment
- in this context, Indyweb and Indranet are not the canonical unit; but then, the model seems to be fundamentally missing the functionality provided by the Indyweb and Indranet, which is an open learning system.
- without such an open learning system that captures the essence of how humans learn, the activity of problem-solving cannot be properly contextualised, along with all of its limitations that lead to progress traps.
- The entire approach of posing a problem and then solving it is inherently limited, due to the fractal intertwingularity of reality.
-
question: progress trap - natural capital
- There is real potential for a progress trap to emerge here, as any metric is liable to be abused.
-
-
-
www.technologyreview.com
-
it didn’t mention more recent work on how to make large language models more energy efficient and mitigate problems of bias.
-
for: AI ethics controversy - citations from Dean please!
-
comment
- Can Dean please provide the missing citations he is referring to?
-
-
- for: progress trap - AI, carbon footprint - AI, progress trap - AI - bias, progress trap - AI - situatedness
-
- Oct 2023
-
www.careful.industries
-
https://web.archive.org/web/20231019053547/https://www.careful.industries/a-thousand-cassandras
"Despite being written 18 months ago, it lays out many of the patterns and behaviours that have led to industry capture of "AI Safety"", co-author Rachel Coldicutt ( et Anna Williams, and Mallory Knodel for Open Society Foundations. )
For Open Society Foundations by 'careful industries' which is a research/consultancy, founded 2019, all UK based. Subscribed 2 authors on M, and blog.
A Thousand Cassandras in Zotero.
-
-
epochemagazine.org
-
- summary
- can we program AI to give a damn?
-
- Sep 2023
-
-
- for: bio-buddhism, buddhism - AI, care as the driver of intelligence, Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane, care drive, care light cone, multiscale competency architecture of life, nonduality, no-self, self - illusion, self - constructed, self - deconstruction, Bodhisattva vow
- title: Biology, Buddhism, and AI: Care as the Driver of Intelligence
- author: Michael Levin, Thomas Doctor, Olaf Witkowski, Elizaveta Solomonova, Bill Duane
- date: May 16, 2022
-
summary
- a trans-disciplinary attempt to develop a framework for dealing with the diversity of emerging non-traditional intelligences, from new bio-engineered species to AI, based on the Buddhist conception of care and compassion for the other.
- very thought-provoking and some of the explanations and comparisons to evolution actually help to cast a new light on old Buddhist ideas.
- this is a trans-disciplinary paper synthesizing Buddhist concepts with evolutionary biology
Tags
- Thomas Doctor
- self - constructed
- no-self
- self - illusion
- multiscale competency architecture of life
- bodhisattva vow
- Bill Duane
- bio-buddhism
- care drive
- nonduality
- Michael Levin
- Buddhism - AI
- self - deconstruction
- AI - ethics
- care light cone
- Elizaveta Solomonova
- Care as the Driver of Intelligence
- emptiness
- Olaf Witkowski
- cognitive light cone
-
- Aug 2023
-
www.semanticscholar.org
-
One of the most common examples was in the field of criminal justice, where recent revelations have shown that an algorithm used by the United States criminal justice system had falsely predicted future criminality among African-Americans at twice the rate as it predicted for white people
holy shit....bad!!!!!
-
automated decisions
What are all the automated decisions currently being made by AI systems globally? How to get a database/list of these?
-
The idea that AI algorithms are free from biases is wrong since the assumption that the data injected into the models are unbiased is wrong
Computational != objective! This common idea rests on a lot of assumptions.
-
- May 2023
-
www.lesswrong.com
-
must have an alignment property
It is unclear what form the "alignment property" would take and, most importantly, how such a property would be evaluated, especially if there's an arbitrary divide between "dangerous" and "pre-dangerous" levels of capabilities and the alignment of the "dangerous" levels cannot actually be measured.
-
- Apr 2023
-
sotonye.substack.com
-
just than the State
I think this is yet to be seen. Although it is true that a computer always gives the same output given the same input and code, a biased network with oppressive ideologies could simply transform, rather than change, our current human judicial enforcement of the law.
-
-
howtosavetheworld.ca
-
In other words, the currently popular AI bots are ‘transparent’ intellectually and morally — they provide the “wisdom of crowds” of the humans whose data they were trained with, as well as the biases and dangers of human individuals and groups, including, among other things, a tendency to oversimplify, a tendency for groupthink, and a confirmation bias that resists novel and controversial explanations
Not just trained with, also trained by. Is it fully transparent though? Perhaps from the trainers'/tools' standpoint, but users are likely to fall for the tool abstracting its origins away, ELIZA-style, and to project agency, and thus morality, onto it.
-
- Dec 2022
-
www.axios.com
-
"If you don’t know, you should just say you don’t know rather than make something up," says Stanford researcher Percy Liang, who spoke at a Stanford event Thursday.
Love this response
-
- Jun 2021
-
www.technologyreview.com
-
many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs
And the problem is that even human beings are not very good at handling these trade-offs well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.
-
- Mar 2020
-
www.lastampa.it
-
New technologies are present in everyone's life, both at work and in daily life. Often we do not even realise that we are interacting with automated systems, or that we are scattering data about our personal identity across the network. This produces a serious asymmetry between those who extract that data (for their own interests) and those who supply it (without knowing). To obtain certain services, some sites ask us to confirm that we are not a robot, but really the question ought to be reversed.
-
"Ethics must accompany the entire cycle of technology development: from the choice of research directions through design, production, distribution, and on to the end user. In this sense Pope Francis has spoken of 'algoretica' (algor-ethics)."
-
- Jan 2020
-
outline.com
-
The underlying guiding idea of a “trustworthy AI” is, first and foremost, conceptual nonsense. Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour.
yikes
-
- Jan 2019
-
wallstreetcn.com
-
Imagine another scenario, which I call the "transport plane problem". Suppose you are the commander-in-chief of a disaster relief operation, flying with a small team aboard a transport plane full of supplies. It is the only transport plane: if it does not arrive on time, tens of thousands of disaster victims will die of starvation and disease; if it never arrives at all, hundreds of thousands will not survive. But bad weather suddenly damages the plane, and it can no longer carry so much weight. Half of the people aboard must jump (assume the supplies cannot be dropped), or the plane may crash and kill everyone. Should you order half the team to jump? The transport plane is Bitmain; the disaster victims are today's crypto holders. If Bitmain collapses, the shock to the industry will bankrupt a large number of them. Imagine that one day Bitmain really does go under: mining rigs would be sold off at tearful fire-sale prices, miners would dump their BTC and BCH, BCH would be left on its last legs, and BTC would fall to a new bottom. There would still be "post-disaster reconstruction", but a great many people would fall in this disaster, never to see tomorrow's sun. As a disaster victim, would you rather starve and sicken to death out of pity for the half of the team that jumped? As a crypto holder, would you rather watch Bitmain collapse, and endure even a short-term bankruptcy of your own, out of pity for the hundreds or thousands of people being laid off?
Comment: The "trolley problem" sparked a long-running debate, and its sibling, the "transport plane problem", will likely prove just as hard to resolve. Faced with such moral dilemmas, people tend to judge from their own experience: a protagonist who "pulls the lever so the trolley kills one person" receives far less public condemnation than one who "stands on the footbridge above the tracks and deliberately pushes another person off it to stop the trolley and save five". So what about the "transport plane problem"? Is there a better solution than "half the crew jumps off the plane"?

These discussions also recall the split among technologists in the AI field: one camp believes AI's ultimate purpose is to replace humans, while the other insists AI is meant to augment them (augmentation). Which camp has the louder voice? The answer does not matter. What matters is: be nice.
-