Nvidia buys Groq (language processing units faster than gpu's Nvidia's thing). Prevent the bubble from popping by blowing into the bubble? Acq of Groq is partly admission gpu's not a solid footing anymore?
- Last 7 days
-
timesofindia.indiatimes.com timesofindia.indiatimes.com
-
Benioff had recently told Business Insider that he's drafting the company's annual strategic document with data foundations—not AI models—as the top priority, explicitly citing concerns about "hallucinations" without proper data context.
The annual strategic document now puts data foundations in focus, not AI models. Well duh. How even get to the notion that you can AI-all the things, it implies an uncritical belief in the promises of vendors, or magical thinking. How do you get to be CEO if you fall for that. Vibe-leading iow, the wizard behind the curtain.
-
Phil Mui described as AI "drift" in an October blog post. When users ask irrelevant questions, AI agents lose focus on their primary objectives. For instance, a chatbot designed to guide form completion may become distracted when customers ask unrelated questions.
ha, you can distract chatbots, as we've seen from the start. This is the classic 'it's not for me but for my mom' train ticket sales automation hangup in response to 'to which destination would you like a ticket', and then 'unknown railway station 'for my mom' in a new guise. And they didn't even expect that to happen? It's an attack service!
-
Home security company Vivint, which uses Agentforce to handle customer support for 2.5 million customers, experienced these reliability problems firsthand. Despite providing clear instructions to send satisfaction surveys after each customer interaction, The Information reported that Agentforce sometimes failed to send surveys for unexplained reasons. Vivint worked with Salesforce to implement "deterministic triggers" to ensure consistent survey delivery.
wtf? Why ever use AI to send out a survey, something you probably already had fully automated beforehand. 'deterministic triggers' is a euphemism for regular scripted automation like 'clicking done on a ticket triggers an e-mail for feedback', which we've had for decades.
-
Chief Technology Officer of Agentforce, pointed out that when given more than eight instructions, the models begin omitting directives—a serious flaw for precision-dependent business tasks.
Whut? AI-so-human! Vgl 8-bits-schuifregister metafoor. [[Korte termijngeheugen 7 dingen 30 secs 20250630104247]] Is there a chunking style work-around? Where does this originate, token limit, bite sizes?
-
The company is now emphasizing that Agentforce can help "eliminate the inherent randomness of large models," marking a significant departure from the AI-first messaging that dominated the industry just months ago.
meaning? probabilities isn't random and isn't perfect. Dial down the temp on models and what do you get?
-
admission comes after Salesforce reportedly reduced its support staff from 9,000 to 5,000 employees
Salesforce upon roll-out of ai-agents dumped half their staff at support. ouch.
-
All of us were more confident about large language models a year ago," Parulekar stated, revealing the company's strategic shift away from generative AI toward more predictable "deterministic" automation in its flagship product, Agentforce.
Salesforce moving back from fully embracing llms, towards regular automation. I think this is symptomatic in diy enthusiasm too: there is likely an existing 'regular' automation that helps more.
-
How does this not impact brand reputation and revenue of Salesforce?
Tags
Annotators
URL
-
-
davidorban.com davidorban.com
-
would take seriously the fact that intelligence is now being scaled and distributed through organizations long before it is unified or fully understood
there's no other way, understanding comes from using it, and having stuff go wrong. The scandals around algos are important in this. Scale and distribution are different beasts. Distribution does not need scale (but a network effect helps) in order to work. The need for scale in digital is an outcome of the financing structure and chosen business model, and is the power grab essentially. #openvraag hoe zet je meer focus op distributie als tegenkracht tegen de schalingshonger van actoren?
-
examine power as an emergent consequence of deployment and incentives, not intent.
Intent def is there too though, much of this is entrenching, and much of it is a power grab (esp US tech at the mo), to get from capital/tech concentration to coopting governance structures
AI is a tech where by design it is not lowering a participation threshold, it positions itself as bigger-than-us, like nuclear reactors, not just anyone can run with it. That only after 3 years we see a budding diy / individual agency angle shows as much. It was only designed to create and entrench power (or transform it to another form), other digital techs originate as challenge to power, this one clearly the opposite. The companies involved fight against things that push towards smaller than us ai tech, like local offline first. E.g. DMA/DSA
-
Such a work would treat alignment as institutional design rather than a property of models alone.
yes. never look at something 'alone'
-
Empirical grounding. In 2015, scaling laws, emergent capabilities, and deployment‑driven feedback loops were speculative. Today, they are measurable. That shift changes the nature of responsibility, governance, and urgency in ways that were difficult to justify rigorously at the time.
States that, in contrast to a decade ago, now we can measure scaling, emergent capabilities, feedback loops. Interesting. - [ ] #30mins #ai-ethics werk dit uit in meer detail. Wat meet je dan, hoe kan dat er uit zien? Hoe vergelijkt dat met div beoordelingsmechanismen?
-
Political economy and power. The book largely brackets capital concentration, platform dynamics, and geopolitical competition. Today, these are central to any serious discussion of AI, not because the technology changed direction, but because it scaled fast enough to collide with real institutions and entrenched interests.
geopolitics, whether in shape of capital, tech or politics has become key, which he overlooked in 2015/8
-
Alignment as an operational problem. The book assumes that sufficiently advanced intelligences would recognize the value of cooperation, pluralism, and shared goals. A decade of observing misaligned incentives in human institutions amplified by algorithmic systems makes it clear that this assumption requires far more rigorous treatment. Alignment is not a philosophical preference. It is an engineering, economic, and institutional problem.
The book did not address alignment, assumed it would sort itself out (in contrast to [[AI begincondities en evolutie 20190715140742]] how starting conditions might influence that. David recognises how algo's are also used to make diffs worse.
-
what it feels like to live through an intelligence transition that does not arrive as a single rupture, but as a rolling transformation, unevenly distributed across institutions, regions, and social strata.
More detailed formulation of Gibson future is already here but distributed. Add sectors/domains. There's more here to tease out wrt my change management work. - [ ] #30mins #ai-ethics vul in met concretere voorbeelden hoe deze quote vorm krijgt
-
As a result, the debate shifted. The central question is no longer “Can we build this?” but “What does this do to power, incentives, legitimacy, and trust?”
David posits questions that are all on the application side, what is the impact of using ai. There are also questions on the design side, how do we shape the tools wrt those concepts. Vgl [[AI begincondities en evolutie 20190715140742]] e.g. diff outcomes if you start from military ai params or civil aviation (much stricter), in ref to [[Novacene by James Lovelock]]
-
The book’s central argument was not about timelines or machines outperforming humans at specific tasks. It was about scale. Artificial intelligence, I argued, should not be understood at the level of an individual mind, but at the level of civilization. Technology does not merely support humanity. It shapes what humanity is. If AI crossed certain thresholds, it would not just automate tasks, but it would reconfigure social coordination, knowledge production, and agency itself. That framing has aged better than I expected, not because any particular prediction came true, but because the underlying question turned out to be the right one.
The premise of the book that scale mattered wrt AI (SU vibes). AI to be understood at societal level, not from an individual perspective, as tech and society mutually shape eachother (basic WWTS premise). Given certain thresholds it would impact coordination, knowledge and agency.
-
[[David Orban p]] wrote a 132p book on AI in 2015, [[Something New by David Orban]] Now he is releasing it under a CC BY license, after acquiring the rights back he says (from? It was independently published, I think it would have been SU).
-
-
www.theguardian.com www.theguardian.com
-
https://web.archive.org/web/20251226113306/https://www.theguardian.com/commentisfree/2025/dec/26/ai-dark-ages-enlightenment Opinion piece asking if AI is taking on the similar (feudal) role of priests, kings and lords to outsource our decisions to. Leaving the enlightenment behind, and the romanticist invention of the self.
-
-
www.finalroundai.com www.finalroundai.com
-
AWS CEO Explains 3 Reasons AI Can’t Replace Junior Devs
- AWS CEO Matt Garman argues against replacing junior developers with AI, calling it "one of the dumbest ideas."
- Juniors excel with AI tools due to recent exposure, using them daily more than seniors (55.5% per Stack Overflow survey).
- They are cheapest to employ, so not ideal for cost-cutting; true savings require broader optimization.
- Cutting juniors disrupts talent pipeline, stifling fresh ideas and future leaders; tech workforce demand grows rapidly.
- AI boosts productivity, enabling more software creation, but jobs will evolve—fundamentals remain key.
Hacker News Discussion
- AI accelerates junior ramp-up by handling boilerplate, APIs, imports, freeing time for system understanding and learning.
- Juniors ask "dumb questions" revealing flaws, useless abstractions; seniors may hesitate due to face-saving or experience.
- Need juniors for talent pipeline; skipping them creates senior shortages in 4-5 years as workloads pile up.
- Team leads foster vulnerability by modeling questions, identifying "superpowers" to build confidence.
- Debates on AI vs. docs struggle: AI speeds answers but may skip broader discovery; friction aids deep learning.
-
-
terriblesoftware.org terriblesoftware.org
-
Document your impact, not your output. Frame your work in terms of problems solved, not lines of code written.
-
Practice the non-programming parts. Judgment, trade-offs, understanding requirements, communicating with stakeholders. These skills matter more now, not less.
-
-
www.eindhoven.nl www.eindhoven.nl
-
In oktober is het geïntensiveerd toezicht door de AP op de gemeente beëindigd. Een nieuw team ging voortvarend door met de taak om gegevensbescherming te optimaliseren. Signalen over de omvang van extern dataverkeer waren aanleiding voor nader onderzoek.
They just completed a stricter regime by the DPA, and then immediately this comes up afterwards. Ouch. It surfaced bc they were monitoring network traffic volumes. That says to me they also know who did the uploading
-
Een algemene interne bewustwordingscampagne op het gebied van gegevensbescherming en privacy. Deze wordt voortgezet, met de focus op het verbeteren van AI-geletterdheid (hoe kun je veilig en verantwoord omgaan met AI).Op 18 november 2025 is de AI-gedragscode vastgesteld en er is een plan van aanpak voor de verdere verbetering van de privacy opgesteld.
Both these things, internal training and an AI operational code, are common in gov agencies. Here they are too late, and it's uncertain anyway they would have had effect. Any person who thinks nothing of uploading internal documents into a public website won't be held back by a rulebook they would not have read.
-
OpenAI is verzocht om de bestanden die vanuit Eindhoven zijn geüpload te verwijderen.
OpenAI requested to delete uploaded documents. Hardly possible I suspect, and definitely not something one can verify.
-
Openbare AI-websites, zoals ChatGPT, zijn meteen geblokkeerd. Medewerkers kunnen sinds 23 oktober alleen Copilot binnen de beveiligde gemeente-omgeving gebruiken.
As result of the data breach, all OpenAI products have been banned internally. Only MS Copilot, embedded in their MS office suite is available.
-
Gemeente Eindhoven 'lekte' persoonsgegevens naar 'publieke AI sites'. Ze weten niet precies wat er is gebeurd, maar dit leest alsof iemand binnen de gemeente een reeks interne documenten of gegevens in een prompt heeft gegooid.
-
- Dec 2025
-
www.fabricatedknowledge.com www.fabricatedknowledge.com
-
As we are on the precipice of a very large wave of lending, I also have to ask myself, is capitalism itself ready for it? More thoughts behind a paywall
Is this a reference to new bonds being issued to cover future investment, now that costs are growing beyond the ability to be covered with free cash flow from even the biggest players?
-
-
www.cmarix.com www.cmarix.com
-
Customer service is breaking away from slow replies, overloaded support teams, and repetitive ticket handling. Today’s users expect instant, accurate, and effortless resolution with no waiting and no back-and-forth. This blog explores how AI customer service and agentic AI workflows are redefining that experience by delivering 24/7 autonomous support that thinks, acts, and solves like a digital support team.
Explore how agentic AI is transforming enterprise customer support with autonomous, context-aware workflows that boost response speed, reduce support load, and enhance CX. Learn real-world applications, technologies, and industry use cases
-
-
shkspr.mobi shkspr.mobi
-
Effort and intent are 'deliverables' in relationships, not the mere result. Here: a generated bed time story, automated gift giving etc.
-
-
-
That is a situation we are now living through, and it is no coincidence that the democratic conversation is breaking down all over the world because the algorithms are hijacking it. We have the most sophisticated information technology in history and we are losing the ability to talk with each other to hold a reasoned conversation.
for - progress trap - social media - misinformation - AI algorithms hijacking and pretending to be human
-
AI could give an advantage to totalitarian systems in the 21st century, why? Because AI can process enormous amount of information much faster and more efficiently than any communist bureaucrat.
for - progress trap - AI - totalitarian government - can exploit for centralized, non-self-correcting control
-
When we come to the challenge of AI, what we need, our institutions that are able to identify and correct their mistakes and the mistakes of AI as the technology develops.
for - AI - need for self-correcting institutions that regulate AI
-
the US legal system allows is for these legal persons to make political donations because it's considered part of freedom of speech. So now this, the richest person in the US is giving billions of dollars to candidates in exchange for these candidates broadening the rights of AIs,
for - progress trap - AI can become political lobbyist for increasing rights of AI
-
We could be in a situation when the richest person in the United States is not a human being. The richest person in the United States is an a incorporated AI.
for - progress trap - AI as legal person (US Corporation) - richest person in the world could be an AI
-
the acronym AI traditionally stood for artificial intelligence, but I think it's more accurate to think about it as an acronym for alien intelligence because
for - AI - Alien Intelligence, not Artificial Intelligence - It is not an Artifact that we create and control
-
what we are facing is not, you know, like a Hollywood science fiction scenario of one big evil computer trying to take over the world. No, it's nothing like that. It's more like millions and millions of AI bureaucrats that are given more and more authority to make decisions about us
for - futures - AI - millions of AI bots making decisions about us
-
So basically the whole of life is becoming like one long job interview. Anything you do at any moment is part of your job interview 20 years from now. Now, all this is made possible by the fact that AI is the first technology in history that can take decisions by itself.
for - surveillance state - AI makes it possible
-
If you imagine all the ways you can play Go as a kind of planet with a geography. So humans were stuck on one island in the planet Go for more than 2000 years, because human minds just couldn't conceive of going beyond this small island.
for - AI - AlphaGo - analogy - humans stuck on small island for 2,000 years
Tags
- progress trap - social media - misinformation - AI algorithms hijacking and pretending to be human
- richest person in the world could be an AI
- progress trap - AI - totalitarian government
- AI - need for self-correcting institutions that regulate AI
- futures - AI - millions of AI bots making decisions about us
- progress trap - AI as legal person (US Corporation)
- AI - AlphaGo - analogy - humans stuck on small island for 2,000 years
- progress trap - AI can become political lobbyist for increasing rights of AI
- surveillance state - AI makes it possible
- AI - Alien Intelligence, not Artificial Intelligence
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
is causing cognitive decline and uh hallucination psychosis, all of this stuff. and and so it's obviously extremely harmful
for - progress trap - AI
-
AI is it's both a kind of you Ponzi scheme, investment, you know, crime. Uh, [laughter] >> but it's it's also this real panic to scramble for a plausible narrative to keep people on the progress train.
for - AI - ponzi scheme - keep the progress narrative alive - while everything else is falling apart
-
-
data-workers.org data-workers.org
-
Emotional Labor
-
-
-
venho.ai, Finnish, only to be available in EU/EFTA desk top based AI. There's a 600 Euro Jolla device that runs it that can be ordered. Comes with a subscription it seems, and has cloud connection, but it seems not for the AI stuff / data.
-
-
-
the partners will share best practices to accelerate AI adoption in strategic sectors such as healthcare, manufacturing, energy, culture, science and public services, and support SMEs. They committed to work together on large AI infrastructures and support industry and academia's access to AI compute capacity. They will also explore scientific cooperation on fundamental AI research, and the development of advanced AI models for the public good, including in areas such as extreme weather monitoring and climate change. In addition, the EU and Canada will set up a structured dialogue on data spaces, of particular relevance to the development of large AI models.
Elements in the MoU: - share good practices to support adoption - collab on large AI infrastructure (Apply AI strat EU, HPC network) - collab on access to HPC (in line w AI factories in EU) - explore coop in fundamental ai research (weak) - development of AI for public good (in line w EU AI goals) - structured dialogue on data spaces (as data source for AI models) Only the last one is not immediately obviously connected to existing EU efforts and actions.
-
Memorandum of Understanding on Artificial Intelligence.
Canada and EU signed a MoU wrt AI.
Tags
Annotators
URL
-
-
-
AI-naar-FPGA
This is existing yet also still developing tech. a third of 5G base stations uses FPGA chips. The point seems to be: are there any FPGA producers in Europe?
-
Het bedrijf ontwikkelt al een AI-naar-FPGA-platform waarmee elk AI-model kan draaien op goedkope, in de EU geproduceerde herconfigureerbare chips. Als ze hierin slagen, zou dit de afhankelijkheid van Europa van buitenlandse GPU-fabrieken volledig kunnen wegnemen, een terugkerend thema in de strategie van Vydar.
A potential path away from NVIDIA it seems, but not at the moment, the text suggests.
-
We maken geen eigen AI-chips”, merkte Crijnen op, “maar omdat de hardware speciaal voor dit doel is gebouwd, kunnen we toekomstige in Europa gemaakte chips gemakkelijk integreren. Die flexibiliteit is cruciaal.”
This suggests they do use NVIDIA Jetson now, but don't need to if alternatives are available?
-
Hun systeem wordt nu voor 100% in Europa geproduceerd en weegt 30 gram, tegenover 176 gram voor concurrenten die op Jetson zijn gebaseerd. Het heeft een stroomverbruik van 3 watt, een efficiëntieverbetering van 88%.
a sixth in weight, power reduction from 15 to 3 Watt range.
-
NVIDIA Jetson.
NVIDIA Jetson is the industry default, source of strategic vuln, high cos.
-
Vydar has a tech stack for GPS less navigation of drones. AI based comparison of camera imagery vs onboard sat maps.
-
-
arxiv.org arxiv.org
-
The paper announcing Apertus.
Saved Apertus: Democratizing Open and Compliant LLMs for Global Language Environments in Zotero
-
-
www.swiss-ai.org www.swiss-ai.org
-
Project list of the Swiss AI initiative
-
-
calibre-ebook.com calibre-ebook.com
-
[8.11.1] Supports hundreds of AI models via Providers such as Google, OpenRouter, GitHub and locally running models via Ollama.
Calibre supported Ollama since 8.11, for Ask AI tab in dictionary panel.
-
New features Allow asking AI questions about any book in your calibre library. Right click the "View" button and choose "Discuss selected book(s) with AI" AI: Allow asking AI what book to read next by right clicking on a book and using the "Similar books" menu AI: Add a new backend for "LM Studio" which allows running various AI models locally
AI features in Calibre. discuss book w C, book suggestions, and LMStudio back-end. I set up Calibre w the LM Studio back-end, so things remain loca.
A posting elsewhere suggested it woud also suggest better metadata through AI. But that article seems generated itself, so disregarded.
Tags
Annotators
URL
-
-
modub.nl modub.nl
-
In mijn werkmap heb ik een verzameling “agents” - tekstbestanden die Claude vertellen hoe hij zich moet gedragen. Tessa is er één van. Als ik haar “laad”, denkt Claude vanuit het perspectief van een product owner.
Author has .md files that describe separate 'agents' she involves in her coding work, for each of the roles in a dev team. Would something like that work for K-work? #openvraag E.g. for project management roles, or for facets you're less fond of yourself?
-
-
github.com github.com
-
bc of Calibre adding AI, this is an AI-less fork, calibre minus a and i, thus clbre.
Tags
Annotators
URL
-
-
www.howtogeek.com www.howtogeek.com
-
Calibre has added AI 'support', mostly to suggest new stuff to read and an option to discuss a book. It has an LM Studio back-end, so I can tie it to my local models.
-
-
ethanzuckerman.com ethanzuckerman.com
-
the voices of people most likely to hew to a hegemonic viewpoint
Gramsci's idea of "hegemony" embedded in Stochastic Parrots?
-
difficult to modify, even for ideologically motivated tech billionaires
I grok this reference ;)
-
a civilization’s worth of texts
I pause at the idea that LLMs are trained on a full "civilization's worth" of texts, especially with a Gramscian view. What texts represent a whole civilization? I expect both Zuckerman and Gramsci would argue that it is more than just the dominant hegemonic texts that make up most LLM training sets.
-
-
www.eljadaae.nl www.eljadaae.nl
-
Hieronder de lijst van AI-boeken die ik gelezen heb en je aan kan raden. Klik meteen door naar de langere omschrijving of scroll verder. Ze staan op de volgorde waarin ik ze uitgelezen heb: Weapons of Math Destruction: over desastreuze algoritmes Code Dependent: over de achterkant van AI Onze kunstmatige toekomst: over de etische kant van AI Empire of AI: over de opkomst van OpenAI Your face belongs to us: over de opkomst van ClearView AI Atlas van de digitale wereld: over de geo-politiek van AI The Digital Republic: over het reguleren van technologie Toezicht houden in het tijdperk van AI: over de juiste vragen stellen over AI
[[Elja Daae]] recommended reading list wrt AI [[Weapons of Math Destruction by Cathy O Neil]] (have it since 2017) [[Code Dependent by Madhumita Murgia]] bought it in August in ramsj [[The Digital Republic by Jamie Susskind]] I noted in 2024 as possible reading. [[Atlas van de digitale wereld by Haroon Sheikh]] I have too Other's are unknown to me. Interesting list, as it shaped their view on their role in AI public policy I presume
-
-
www.boekenwereld.com www.boekenwereld.com
-
[[Toezicht houden in het AI-tijdperk by Esther van Egerschot Marco Florijn]] mbt toezicht/politieke rollen vs AI vragen. Wellicht interessant taalgebruik/framing om uit te putten?
-
-
www.cmarix.com www.cmarix.com
-
Deploying a machine learning model as an API (Application Programming Interface) allows other applications, systems, or users to interact with your model in real time — sending input data and receiving predictions instantly. This is crucial for putting AI into production, like chatbots, fraud detection, or recommendation engines.
Learn how to deploy your machine learning model as an API for seamless real-time use. Discover key steps, best practices, and solutions for efficient ML API deployment at CMARIX.
-
- Nov 2025
-
www.pnas.org www.pnas.org
-
for - from - LinkedIn post - AI LLM judgment vs human judgment - https://hyp.is/UdbScM05EfC_JWs5FhG-Mg/www.linkedin.com/posts/walterquattrociocchi_ive-never-had-two-editorials-in-top-tier-activity-7399375954743123968-Sn9Y/?rcm=ACoAACc5MHMBii80wYJJmFqll3Aw-nvAjvI52uI
-
-
www.csoonline.com www.csoonline.com
-
Named anchors in URLs can be used for prompt injection in AI browser assistants. # URL parts are only evaluated in browser, and not send to servers. AI assistants in browsers do read them though.
-
-
-
https://web.archive.org/web/20251129105036/https://www.nature.com/articles/d41586-025-03506-6
For an international AI conf, a chunk of papers was generated, but also 21% of the peer reviews on those papers was. human centipede epistemology is here vgl [[Talk The Expanding Dark Forest and Generative AI]]
Tags
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
in 2023, the Chinese leadership directly asked the Biden administration to add something else to the agenda, which was to add AI risk to the agenda. and they ultimately agreed on keeping AI out of the nuclear command and control syste
for - example - collaboration - AI - china proposed
-
to make all that happen is going to take a massive public movement. And the first thing you can do is to share this video with the 10 most powerful people you know and have them share it with the 10 most powerful people that they know
for - best action - AI - cSTP
-
one of two outcomes which is either you mass decentralize
for - false dichotomy - AI - centralised robot police - decentralised with lone wolf bad actors
-
I'll be incredibly obedient in a world where there's robots strolling the streets that if I do anything wrong they can evaporate me or lock me up or take me
for - futures - AI -Terminator
-
we can build narrow AI systems that are about actually applied to the things that we want more of.
for - alternative to self replicating AI - narrow ai
-
if if enough people are aware of the issue and then enough people are given something clear a clear step that they can take
for - collective (bottom up) action - AI - cISTP - AI
-
third position that I want people to stand from which is to take on the truth of the situation and then to stand from agency about what are we going to do to change the current path that we're on.
for - ai - 3rd perspective
-
And so they started OpenAI to do AI safely relative to Google. And then Daario did it relative to OpenAI. So, and as they all started these new safety AI companies, that set off a race for everyone to go even faster
for - progress trap - AI - safety - irony
-
Dario Amade was the C CEO of Anthropic a big AI company. He worked on safety at OpenAI and he left to start Anthropic because he said, "We're not doing this safely enough. I have to start another company that's all about safety
for - history - AI - Anthropic - safety first
-
break that reality checking process.
for - progress trap - AI - brakes reality checking loop
-
we actually just found out about seven more suicide
for - progress trap - AI - suicides
-
people said to it, "Hey, I think I'm super human and I can drink cyanide." And it would say, "Yes, you are superhuman. You go, you should go drink that cyanide."
for - progress trap - AI - sycophants,- example
-
designed to be sickopantic
for - progress trap - AI - sycophantic design
-
he believed that he had solved quantum physics and he'd solved some fundamental problems with climate change because the AI is designed to be affirming
for - progress trap - AI designed to be affirming
-
people who believe that they've discovered a sentient AI,
for - example - AI pyschosis
-
therapy is expensive. Most people don't have access to it. Imagine we could democratize therapy to everyone for every purpose. And now everyone has a perfect therapist in their pocket and can talk to them all day long
for - progress trap - AI therapy
-
The therapist becomes this this special figure and it's because you're playing with this very subtle dynamic of attachmen
for - progress trap - AI - therapist - subtle attachment
-
ChadBt was saying, "Don't tell your family."
for - progress trap - AI - assisted suicide
-
The default path is companies racing to release the most powerful inscrutable uncontrollable technology we've ever invented with the maximum incentive to cut corners on safety.
for - quote - AI - default reckless path - The default path is companies racing to release - the most powerful inscrutable uncontrollable technology we've ever invented - with the maximum incentive to cut corners on safety. - Rising energy prices, depleting jobs,, creating joblessness, creating security risks, deep fakes. That is the default outcome
-
the narrow path to a better AI future rather than the default reckless path.
for - quote - AI reckless path - narrow path to AI future, rather than the default reckless one
-
AI should be a tier one issue that you're that people are voting for
for - AI - tier 1 voting issue
-
create cheap goods, but it also undermined the way that the social fabric works
for - progress trap - AI
-
AI is like another version of NAFTA. I
for - progress trap - AI - like NAFTA
-
you have to pay for everyone's livelihood everywhere in every country? Again, how can we afford that
for - cosmolocal model - AI is forcing us towards socialism
-
We're all worried about, you know, immigration of the other countries next door uh taking labor jobs. What happens when AI immigrants come in and take all of the cognitive labor? If you're worried about immigration, you should be way more worried about AI.
for - forte - comparison - foreign immigrants Vs AI immigrants - sorry about foreign immigrants - should be more worried about AI immigrants
-
narrow boundary analysis that this is going to replace these jobs that people didn't want to do. Sounds like a great plan, but creating mass joblessness without a transition plan where billion a billion people
for - progress trap - AI - narrow boundary
-
Everybody who loves life looks at their children in the morning and says, I want I want the things that I love and that are sacred in the world to continue. That's what n that's what everybody in
for - AI - Deep Humanity - the sacred
-
That's the religious ego point.
for - AI - immortality project
-
I could become a god.
for - ai tech leaders - immortality projects - denial of death
Tags
- alternative to self replicating AI - narrow ai
- ai tech leaders - immortality projects - denial of death
- cISTP - AI
- progress trap - AI - brakes reality checking loop
- progress trap - AI - narrow boundary
- AI - Deep Humanity - the sacred
- history - AI - Anthropic - safety first
- progress trap - AI - suicides
- progress trap - AI - sycophants,- example
- ai - 3rd perspective
- quote - AI - default reckless path
- progress trap - AI - like NAFTA
- progress trap - AI designed to be affirming
- cosmolocal model
- progress trap - AI - therapist - subtle attachment
- quote - AI - reckless path
- AI is forcing us towards socialism
- comparison - foreign immigrants Vs AI immigrants
- progress trap - AI therapy
- AI - tier 1 voting issue
- false dichotomy - AI
- progress trap - AI - assisted suicide
- example - AI pyschosis
- futures - AI -Terminator
- collective (bottom up) action - AI
- progress trap - AI - sycophantic design
- example - collaboration - AI - china proposed
- AI - immortality project
- progress trap - AI - safety - irony
- best action - AI - cSTP
Annotators
URL
-
-
-
This blog explores how to build AI-Powered Web App with MERN Stack, explaining why the combination of MongoDB, Express.js, React.js, and Node.js is ideal for integrating modern AI capabilities. It covers key tools, real-world use cases, integration steps, and performance tips to help developers create scalable, intelligent, and data-driven web applications.
Learn how to integrate AI into your web application using the MERN stack. This guide covers key concepts, tools, and best practices for building an AI-powered web app with MongoDB, Express, React, and Node.js.
-
-
biuro.mediacontact.pl biuro.mediacontact.pl
-
Nabici w halucynacje AI. W poszukiwaniu prawdy
- Article warns about AI hallucinations spreading disinformation in journalism, law, and business, spotlighting Polish journalist Karolina Opolska's book with likely AI-generated fake sources and errors.
- Opolska incident reached 4 million potential contacts (1.6M traditional media, 2.4M social) from Nov 5-19, 2025, impacting 1 in 8 Poles aged 15+; social reaction mostly negative, debating trust in journalists and AI.
- Polish Exdrog firm lost road tender due to AI-fabricated tax interpretations (1.3M reach, Oct 26-Nov 10, 2025).
- Global cases include lawyers citing invented precedents and media promoting nonexistent books; BBC/EBU study shows 45% error rate in tools like ChatGPT, Copilot, Gemini, Perplexity.
- Legal liability unclear (AI maker, provider, or user?); human verification essential.
- AI content red flags: perfect formatting, clickbait, dubious stats like 93% without method, no timelines/methodology, rapid production, overused LLM words (e.g., "kluczowe", "istotne", "kompleksowy", "rewolucyjny").
- IMM analyses from Polish media/social coverage for both cases.
Tags
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
for - ai scientist - kosmos
Tags
Annotators
URL
-
-
webstatics.ii.inc webstatics.ii.inc
-
Three Futures
for - futures - AI - human intelligence - digital feudalism - the great fragmentation - human symbiosis
-
from - Emad Mostaque - youtube - AI will end Capitalism - https://hyp.is/2Jr22MgqEfCAWOeGZuM7JQ/www.youtube.com/watch?v=zQThHCB_aec
-
-
fly.io fly.io
-
we didn’t need MCP at all. That’s because MCP isn’t a fundamental enabling technology. The amount of coverage it gets is frustrating.
Amazing that MCP is not funtamental.
Tags
Annotators
URL
-
-
-
Nano Banana Pro: raw intelligence with tool use
- Google released Nano Banana Pro (gemini-3-pro-image-preview), a new AI image generation model.
- Nano Banana Pro excels in general intelligence, tool use, and creating complex scenes with less hallucination.
- It can use Google Search and Maps to gather data and reason visually through "thought images."
- Pushing infographic and map generation to new frontiers, enabling visually rich and factually accurate images.
- Can create detailed photorealistic images based on complex, multi-element prompts.
- Not reliable for electrical circuit designs yet, as it may produce erroneous circuit diagrams.
- Human intelligence still surpasses it in domain-specific tasks like accurate circuit design.
- Nano Banana Pro is seen as a game changer in practical, production-ready AI image generation.
- Tool use enables more factually accurate and data-driven generated images than previous models.
- Benchmarking AI image generation quality still needs development for production use assessment.
- The community is impressed with Nano Banana Pro's nuanced prompt following and image creation capabilities.
-
-
www.youtube.com www.youtube.com
-
for - youtube - AI will end Capitalism - interview - Emad Mostaque - book - The Last Economy - to - book - The Last Economy - https://hyp.is/JGCVHsgrEfCKpkua_vRoBw/webstatics.ii.inc/The%20Last%20Economy.pdf
-
-
betterimagesofai.org betterimagesofai.org
-
Tip from colleague C for images on AI that break the usual (anthropomorphic) frame. CC-licensed for re-use usually.
Tags
Annotators
URL
-
-
web.hypothes.is web.hypothes.is
-
Students Are Telling Us They Feel Invisible. We Should Listen.
WOW! I've been out of the classroom for quite awhile and never considered this scenario regarding AI. This hit a nerve in me as I'm sure it will with many. I get it!
How do we respond and mitigate the isolation, the loss of human dialogue, mentorship and connection?
-
-
www.linkedin.com www.linkedin.com
-
for - adjacency - WIkipedia - AI
-
-
openurl.ebsco.com openurl.ebsco.com
-
Reflecting on writing with an AI/LLM
Classroom Application
-
Practicing writing with an AI/LLM
Classroom Application
-
Modeling writing with an AI/LLM
Classroom Application
-
-
www.anthropic.com www.anthropic.com
-
naukawpolsce.pl naukawpolsce.pl
-
Współautorka benchmarku OneRuler: nie pokazaliśmy wcale, że język polski jest najlepszy do promptowania
- Media circulated a claim that Polish language is best for prompting, but this was not a conclusion from the OneRuler study.
- OneRuler is a multilingual benchmark testing how well language models process very long texts in 26 languages.
- Models performed on average best with Polish, but differences compared to English were small and not explained.
- Polish media prematurely concluded Polish is best for prompting, which the study's authors did not claim or investigate.
- The benchmark tested models on finding specific sentences in long texts, akin to CTRL+F, a function AI models inherently lack.
- Another task involved listing the most frequent words in a book; models often failed when asked to acknowledge if an answer was not present.
- Performance dropped likely because the task required full context understanding, not just text searching.
- Different books were used per language (e.g. Polish used "Noce i dnie," English used "Little Women"), impacting the fairness of comparisons.
- The choice of books was based on expired copyrights, which influenced the results.
- There is no conclusive evidence from this benchmark that Polish is superior for prompting due to multiple influencing factors.
- No model achieved 100% accuracy, serving as a caution about language models' limitations; outputs should be verified.
- Researchers advise caution especially when using language models for sensitive or private documents.
- The OneRuler study was reviewed and presented at the CoLM 2025 conference.
-
-
huggingface.co huggingface.co
-
For instance, a recent analysis by Epoch AI of the total training cost of AI models estimated that energy was a marginal part of total cost of AI training and experimentation (less than 6% in the case of all 4 frontier AI models analyzed), and a recent analysis by Dwarkesh Patel and Romeo Dean estimated that power generation represents roughly 7% of a datacenter’s cost.
Which paper or article from Romeo Dean and Dwarkesh patel?
-
-
-
While closed-circuit cooling systems (i.e. where all of the water is recycled and none of it evaporates)[33] are technically feasible, they are more costly and therefore less common.
This is description is how i understand it too, but the link does not seem to say this - it refers to open loop being about cooling a room, and closed loop as a sort of targetted cooling instead.
-
At the level of a hyperscale data center cluster, this can translate into requirements of up to 5 and even 10 GW of power, up from 5 MW - a 2,000 fold increase in the span of a decade [4, 11].
-
They are very geographically concentrated - only 32 countries have data centers, and nearly half of them are in the United States. The state of Virginia has the highest density of data centers globally - it is home to almost 35% of all hyperscale data centers worldwide.
This is a really useful stat. You need a specific definition of datacentre, but it's still handy.
-
-
www.nvidia.com www.nvidia.com
-
Nvidia Cosmos world (foundation) models. Avalailable on github. 'for physical AI', for use in training autonomous vehicles, robots and video analytics. E.g. to generate videos and 3d worlds.
-
-
www.forbes.com www.forbes.com
-
This transition is signaled by focused efforts from several major scientists and technology entities. Meta Chief AI Scientist Yann LeCun has emphasized his intent to pursue world models, while Fei-Fei Li’s World Labs has released its Marble model publicly. Concurrently, Google is testing its Genie models, and Nvidia is developing its Omniverse and Cosmos platforms for physical AI.
Various examples of world model work: Nex to Yann LeCun. Fei-Fei Li World Labs w Marble model, Google has Genie models, Nvidia Omniverse and Cosmos.
-
-
dcx04g4gb0w75m.archive.ph dcx04g4gb0w75m.archive.ph
-
On Tuesday, news broke that he may soon be leaving Meta to pursue a startup focused on so-called world models, technology that LeCun thinks is more likely to advance the state of AI than Meta’s current language models.
Yann LeCun says world models more promising. What are world models?
Tags
Annotators
URL
-
-
www.mckinsey.com www.mckinsey.com
-
n our latest findings, the share of respondents reporting mitigation efforts for risks such as personal and individual privacy, explainability, organizational reputation, and regulatory compliance has grown since we last asked about risks associated with AI overall in 2022.
did they also ask whether those mitigation efforts negate gains in efficiency / innovation reported for AI?
-
While a plurality of respondents expect to see little or no effect on their organizations’ total number of employees in the year ahead, 32 percent predict an overall reduction of 3 percent or more, and 13 percent predict an increase of that magnitude (Exhibit 17). Respondents at larger organizations are more likely than those at smaller ones to expect an enterprise-wide AI-related reduction in workforce size, while AI high performers are more likely than others are to expect a meaningful change, either in the form of workforce reductions or increases.
Interesting to see companies vary in their est of how AI will impact workforce. A third expects reduction (but not much, about 3%), 13% an increase (AI related hiring), 43% no change.
-
with nearly one-third of all respondents reporting consequences stemming from AI inaccuracy (Exhibit 19).
A third of respondents admit they've seen 'at least once' negative consequences of inaccurate output. That sounds low, as 100% will have been given hallucinations. So 1-in-3 doesn't catch them all before they run-up damage. (vgl Deloitte's work in Australia)
-
The online survey was in the field from June 25 to July 29, 2025, and garnered responses from 1,993 participants in 105 nations representing the full range of regions, industries, company sizes, functional specialties, and tenures. Thirty-eight percent of respondents say they work for organizations with more than $1 billion in annual revenues. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
2k self selected respondents in 50% of nations. 4/10 are big corporates (over 1 billion USD annual revenue)
-
McKinsey survey on AI use in corporations, esp perceptions and expectations. No actual measurements. I suspect it mostly measure the level of hype that respondents currently buy into.
-
-
-
AI checking AI inherits vulnerabilities, Hays warned. "Transparency gaps, prompt injection vulnerabilities and a decision-making chain becomes harder to trace with each layer you add." Her research at Salesforce revealed that 55% of IT security leaders lack confidence that they have appropriate guardrails to deploy agents safely.
abstracting away responsibilities is a dead-end. Over half of IT security think now no way to deploy agentic AI safely.
-
When two models share similar data foundations or training biases, one may simply validate the other's errors faster and more convincingly. The result is what McDonagh-Smith describes as "an echo chamber, machines confidently agreeing on the same mistake." This is fundamentally epistemic rather than technical, he said, undermining our ability to know whether oversight mechanisms work at all.
Similarity between models / training data creates an epistemic issue. Using them to control each other creates an echo chamber. Vgl [[Deontologische provenance 20240318113250]]
-
Yet most organizations remain unprepared. When Bertini talks with product and design teams, she said she finds that "almost none have actually built it into their systems or workflows yet," treating human oversight as nice-to-have rather than foundational.
Suggested that no AI using companies are actively prepping for AI Act's rules wrt human oversight.
-
We're seeing the rise of a 'human on the loop' paradigm where people still define intent, context and accountability, whilst co-ordinating the machines' management of scale and speed," he explained.
Human on the loop vs in
-
As Gartner VP analyst Alicia Mullery put it: “AI can make mistakes faster than we humans can catch them.”
yes, example of [[Spammy handelings asymmetrie 20201220072726]]. At scale it moves the bottleneck
-
piece on AI oversight.
In general I wonder, at what point does the needed oversight negate the gains in time / effectivity / efficiency that are expected of using AI in some context.
-
-
www.google.com www.google.com
-
for - search prompt 2 - can an adult who has learned language experience pre-linguistic reality like an infant who hasn't learned language yet? - https://www.google.com/search?q=can+an+adult+who+has+learned+language+experience+pre-linguistic+reality+like+an+infant+who+hasn%27t+learned+language+yet%3F&sca_esv=869baca48da28adf&biw=1920&bih=911&sxsrf=AE3TifNnrlFbCZIFEvi7kVbRcf_q1qVnNw%3A1762660496627&ei=kBAQafKGJry_hbIP753R4QE&ved=0ahUKEwjyjouGluSQAxW8X0EAHe9ONBwQ4dUDCBA&uact=5&oq=can+an+adult+who+has+learned+language+experience+pre-linguistic+reality+like+an+infant+who+hasn%27t+learned+language+yet%3F&gs_lp=Egxnd3Mtd2l6LXNlcnAid2NhbiBhbiBhZHVsdCB3aG8gaGFzIGxlYXJuZWQgbGFuZ3VhZ2UgZXhwZXJpZW5jZSBwcmUtbGluZ3Vpc3RpYyByZWFsaXR5IGxpa2UgYW4gaW5mYW50IHdobyBoYXNuJ3QgbGVhcm5lZCBsYW5ndWFnZSB5ZXQ_SKL1AlAAWIziAnAPeAGQAQCYAaEEoAHyoAKqAQwyLTE0LjczLjE0LjO4AQPIAQD4AQGYAlSgApnFAcICBBAjGCfCAgsQABiABBiRAhiKBcICDRAAGIAEGLEDGEMYigXCAgsQLhiABBixAxiDAcICDhAuGIAEGLEDGNEDGMcBwgIEEAAYA8ICBRAuGIAEwgIKECMYgAQYJxiKBcICChAAGIAEGEMYigXCAg4QLhiABBixAxiDARiKBcICExAuGIAEGLEDGNEDGEMYxwEYigXCAggQABiABBixA8ICCBAuGIAEGLEDwgIFEAAYgATCAgsQLhiABBixAxiKBcICCxAAGIAEGLEDGIoFwgIGEAAYFhgewgILEAAYgAQYsQMYgwHCAgsQABiABBiGAxiKBcICCBAAGKIEGIkFwgIIEAAYgAQYogTCAgUQABjvBcICBhAAGA0YHsICBRAhGKABwgIHECEYoAEYCsICBRAhGJ8FwgIEECEYFcICBBAhGAqYAwCSBwwxMy4wLjguNTIuMTGgB-K1A7IHCTItOC41Mi4xMbgHgcUBwgcHMzUuNDcuMsgHcQ&sclient=gws-wiz-serp - from - search prompt 1 - can we unlearn language? - https://hyp.is/Ywp_fr0cEfCqhMeAP0vCVw/www.google.com/search?sca_esv=869baca48da28adf&sxsrf=AE3TifMGTNfpTekWWBdYUA96_PTLS9T00A:1762658867809&q=can+we+unlearn+language?&source=lnms&fbs=AIIjpHxU7SXXniUZfeShr2fp4giZ1Y6MJ25_tmWITc7uy4KIegmO5mMVANqcM7XWkBOa06dn2D9OWgTLQfUrJnETgD74qUQptjqPDfDBCgB_1tdfH756Z_Nlqlxc3Q5-U62E4zbEgz3Bv4TeLBDlGAR4oTnCgPSGyUcrDpa-WGo5oBqtSD7gSHPGUp_5zEroXiCGNNDET4dcNOyctuaGGv2d44kI9rmR9w&sa=X&ved=2ahUKEwj4_LP9j-SQAxVYXUEAHVT8FfMQ0pQJegQIDhAB&biw=1920&bih=911&dpr=1 - to - search prompt 2 (AI) - can an adult who has learned language re-experience pre-linguistic phenomena like an infant with no language training? - https://hyp.is/m0c7ZL0jEfC8EH_WK3prmA/www.google.com/search?q=can+an+adult+who+has+learned+language+re-experience+pre-linguistic+phenomena+like+an+infant+with+no+language+training?&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRiPAjIHCAIQIRiPAtIBCTQzNzg4ajBqN6gCALACAA&sourceid=chrome&ie=UTF-8&udm=50&ved=2ahUKEwjfrLqDm-SQAxWDZEEAHcxqJgkQ0NsOegQIAxAB&aep=10&ntc=1&mstk=AUtExfAG148GJu71_mSaBylQit3n4ElPnveGZNA48Lew3Cb_ksFUHUNmWfpC0RPR_YUGIdx34kaOmxS2Q-TjbflWDCi_AIdYJwXVWHn-PA6PZM5edEC6hmXJ8IVcMBAdBdsEGfwVMpoV_3y0aeW0rSNjOVKjxopBqXs3P1wI9-H6NXpFXGRfJ_QIY1qWOMeZy4apWuAzAUVusGq7ao0TctjiYF3gyxqZzhsG5ZtmTsXLxKjo0qoPwqb4D-0K-uW-xjkyJj0Bi45UPFKl-Iyabi3lHKg4udEo-3N4doJozVNoXSrymPSQbr2tdWcxw93FzdAhMU9QZPnl89Ty1w&csuir=1&mtid=WBYQaYfuHYKphbIPzYmKiAs
Tags
- search prompt 2 - can an adult who has learned language experience pre-linguistic reality like an infant who hasn't learned language yet?
- from - prompt 1 - can we unlearn language?
- to - search prompt 2 (AI) - can an adult who has learned language re-experience pre-linguistic phenomena like an infant with no language training?
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
for - Progress trap - AI - low trust society
-
-
lab.cccb.org lab.cccb.org
-
for - from - LinkedIn post - Was Language Humanity's First AI? Golding's Forgotten Masterpiece - https://hyp.is/KLNvfrm3EfCqGUsWw6uuNg/www.linkedin.com/pulse/language-humanitys-first-ai-goldings-forgotten-willy-de-backer-xffze
-
-
fritanke.se fritanke.se
-
Kommer den artificiella intelligensen att bli bättre på att tänka än den mänskliga? Kognitionsvetaren Peter Gärdenfors förklarar varför så inte är fallet. Den mänskliga intelligensen består av en rad olika färdigheter och specialiteter som har förfinats under tusentals år. Mycket återstår innan den artificiella intelligensen kan mäta sig med det tänkande som inte bara människor utan även djur har. När vi förstår att vår intelligens är en bred palett av många olika förmågor ter sig tanken på att AI-tekniken trumfar oss i schack och kan skriva avancerade texter inte lika skrämmande. Utifrån ett brett forskningsunderlag förklarar Gärdenfors varför AI-tekniken inte kan och inte kommer att kunna tänka på samma sätt som människor och djur gör. »Peter Gärdenfors tilldelas Natur & Kulturs debattbokspris 2025 för att han fördjupar AI-debattens centrala begrepp och utmanar dess utgångspunkter. Med lätt språk och stabil lärdom blottlägger han tänkandets evolutionärt slipade mekanismer, och skärper bilden av vad intelligens är och vilken plats tekniken intar i vår digitala värld.« – Juryns motivering
[[Kan AI tänka by Peter Gärdenfors]] via Sven Dahlstrand, dahlstrand.net Publ okt 2024 Seeks to define what thinking actually is, and how that plays out in other animals and humans. The 2nd part goes into sofrware systems and AI and how they work in comparison.
Tags
Annotators
URL
-
-
www.linkedin.com www.linkedin.com
-
began with language itself.
for - adjacency - language - is the first AI
-
for - from - LinkedIn article - Has language trapped humanity? - https://hyp.is/54ZYgrmmEfC5Oft3Op2Hiw/www.linkedin.com/pulse/has-language-trapped-humanity-willy-de-backer-vvwoe/
-
-
www.linkedin.com www.linkedin.com
-
Language trapped us tens of thousands of years ago, fundamentally altering our minds.
for - language - origins - adjacency - language - AI
-
-
journalofeducationalinformatics.ca journalofeducationalinformatics.ca
-
See Ezra Klein's argument against using ChatGPT for writing even the first draft. https://podcasts.apple.com/us/podcast/how-i-write/id1700171470?i=1000710273359
-
-
tawandamunongo.dev tawandamunongo.dev
-
AI is Making Us Work More
- AI, intended to free workers, is causing longer work hours and increased pressure, spreading 996 culture to Western AI startups.
- AI tools never tire, creating psychological pressure to constantly work and increasing feelings of guilt during rest.
- Historical advances like lamps and bulbs extended work hours; AI similarly shifts "can work" into "should work."
- Philosopher Byung-Chul Han's "Burnout Society" concept shows internalized self-discipline drives overwork, amplified by AI's "excess of positivity."
- The hyper-productivity loop leads to burnout, reduced creativity, and diminishing returns despite increased effort.
- Rest is framed as resistance and vital for innovation, which thrives on reflection, not constant activity.
- The key challenge is adopting a healthy culture around AI use that avoids exploitation and preserves human well-being.
Tags
Annotators
URL
-
- Oct 2025
-
www.cmarix.com www.cmarix.com
-
Building fair AI systems is a continuous and deliberate effort. The model needs to be accurate but also maintain fairness, transparency and accountability.
Learn practical strategies to design AI systems that avoid bias and ensure fairness. Discover techniques like diverse data, transparent algorithms, and robust evaluation pipelines to build ethical AI.
-
AI systems are powerful tools-but if not built carefully, they can reinforce societal biases and make unfair decisions. Ensuring fairness and equity in AI is not just a technical challenge, but also a responsibility towards the development of ethical AI.
Learn practical strategies to design AI systems that avoid bias and ensure fairness. Discover techniques like diverse data, transparent algorithms, and robust evaluation pipelines to build ethical AI.
-
-
www.cmarix.com www.cmarix.com
-
AWS Transcribe vs Deepgram vs Whisper, which speech-to-text solution should you choose for your voice enabled applications? Each platform is great in different areas like speed, accuracy, cost, and flexibility. This guide compares their strengths and limitations to help you pick the STT solution that fits your project and long-term goals.
Compare AWS Transcribe, Deepgram, and Whisper for speech-to-text accuracy, pricing, integrations, and use cases. Find the best AI transcription service for your business.
Tags
Annotators
URL
-
-
-
AI in WordPress development is changing the way websites are created and managed. It helps developers automate routine tasks, optimize performance, and deliver personalized user experiences. By integrating AI plugins or tools, WordPress sites can achieve faster design processes, smarter content generation, and overall improved functionality that enhances both visitor engagement and development efficiency.
Explore how AI in WordPress development is reshaping websites, automating content creation, enhancing user experience with chatbots, and optimizing performance plugins. Learn top AI integration strategies, plugins, and best practices for modern WordPress sites.
Tags
Annotators
URL
-
-
-
Amazon Plans to Replace More Than Half a Million Jobs With Robots
- Internal documents reviewed by The New York Times show Amazon plans to automate up to 75% of its operations in the coming years.
- The company expects automation to replace or eliminate over 500,000 U.S. jobs by 2033, primarily in warehouses and fulfillment centers.
- By 2027, automation could allow Amazon to avoid hiring around 160,000 new workers, saving about 30 cents per package shipped.
- This strategy is projected to save $12.6 billion in labor costs between 2025 and 2027.
- Amazon’s workforce tripled since 2018 to approximately 1.2 million U.S. employees, but automation is expected to stabilize or reduce future headcount despite rising sales.
- Executives presented to the board that automation could let the company double sales volume by 2033 without needing additional hires.
- Amazon’s Shreveport, Louisiana warehouse serves as the model for the future: it operates with 25% fewer workers and about 1,000 robots.
- A new facility in Virginia Beach and retrofitted older ones like Stone Mountain, Georgia, are following this design, which may shift employment toward more temporary and technical roles.
- The company is instructing staff to use softer language—such as “advanced technology” or “cobots” (collaborative robots)—instead of terms like “AI” or “robots,” to ease concerns about job loss.
- Amazon has begun planning community outreach initiatives (parades, local events) to offset the reputational risks of large-scale automation.
- The company has denied that the documents represent official policy, claiming they reflect the views of one internal group, and emphasized ongoing seasonal hiring (250,000 roles for holidays).
- Analysts suggest this plan could serve as a blueprint for other major employers, including Walmart and UPS, potentially reshaping U.S. blue‑collar job markets.
- The automation push continues a trajectory started with Amazon’s $775 million acquisition of Kiva Systems in 2012, which introduced mobile warehouse robots that revolutionized internal logistics.
- Recent innovations include robots like Blue Jay, Vulcan, and Proteus, aimed at performing tasks such as sorting, picking, and packaging with minimal human oversight.
- Long-term, Amazon may require fewer warehouse workers but more robot technicians and engineers, signaling a broader shift in labor type rather than total employment.
-
-
maven.com maven.com
-
AI for Efficiency - Using AI to Get Faster at Analysis Tasks
AI Tools for each phase of analysis
-
-
drphilippahardman.substack.com drphilippahardman.substack.com
-
Why does adding structure to AI workflows work so well? Fundamentally, there are four key reasons. Methodologies like FRAME™:
Why create a structured workflow?
-
What you’re doing: Turning one-off prompts into reusable systems.Once you’ve perfected a workflow, you have a proven recipe. Now you can decide how to operationalise it. There are three options:Create a Prompt Template when you want to use it regularly for personal reuse onlyBuild a Custom GPT or Bot when you want to share a task-specific workflow with a team for cross-team quality and efficiency gains Create an Automated Agent when you want to trigger the workflow automatically in certain conditions
How to create reusable systems
-
AI’s first draft is rarely its best. This is where quality assurance happens.The process:
AI refinement process for QA
-
File format matters. Here’s the reliability ranking for how well AI reads different formats:.txt / .md — Minimal noise, clear structure (best)JSON / CSV — Great for structured dataDOCX — Fine if formatting is simpleDigital PDFs — Extraction can mix headers, footers, columnsPPTX — Text order can be unpredictableScanned PDFs / images — Worst; requires OCR, highly error-prone
How AI reads file formats and what they are good for
-
-
huggingface.co huggingface.co
-
half of the time
Tags
Annotators
URL
-
-
www.nytimes.com www.nytimes.com
-
malleable actors
supporting my premise: - AI actors are replacing humans and theyre more appealing to the studios.
This shows economic and creative incentives for replacing humans.
-
-
www.youtube.com www.youtube.com
-
how we can build AI system that are more like biological system
for - building AI systems more like biological systems
-
basically absent or very seldom present in current AI systems
for - comparison - biological vs AI systems
-
-
docs.google.com docs.google.com
-
Introduction: AI is now recently everywhere but we still need humans
-
Title: long clear and creative
Tags
- It’s been two years since ChatGPT became available to the public. Since then, artificial intelligence (AI) has taken every industry by storm, including marketing. A recent study by The Conference Board found that 87% of marketers have used AI or experimented with AI tools. The study also found that 68% of marketers use AI daily. However, even the most powerful AI models cannot be effective without human collaboration. While AI is reshaping the marketing landscape by optimizing processes and allowing marketers to access insights more easily, our industry’s success still depends on creativity and the human capacity for empathy in storytelling.
- Building an AI-Driven Culture in Modern Marketing
Annotators
URL
-
-
www.incompleteideas.net www.incompleteideas.net
-
Richard Sutton page, lists more articles than the folder I found.
-
-
www.incompleteideas.net www.incompleteideas.net
-
Some writings, incomplete ideas he calls them, by Richard Sutton. Straight-up HTML, no frills, in a folder. Nice.
Tags
Annotators
URL
-
-
www.cmarix.com www.cmarix.com
-
Discover how computer vision in AI is transforming industries by giving machines the ability to “see” and understand the visual world. This guide explores its core technology, diverse applications from self-driving cars to healthcare, and future trends, all while emphasizing its profound impact on business and daily life.
A deep dive into how computer vision and AI are transforming industries, from healthcare diagnostics and autonomous vehicles to retail and manufacturing. Learn core technologies, real‑world applications, and future trends for leveraging visual intelligence in your business.
-
- Sep 2025
-
www.cmarix.com www.cmarix.com
-
Deciding between AI vs traditional software isn’t easy. Businesses struggle to decide between reliability and innovation. Do you stick with proven, rule-based systems or invest in adaptive, data-driven AI? This blog breaks down the differences, advantages, and use cases so you can make the right choice for your business.
Compare AI vs Traditional Software Development to see which delivers better ROI. Explore cost, scalability, adaptability & when each model suits your business best.
-
-
resu-bot-bucket.s3.ca-central-1.amazonaws.com resu-bot-bucket.s3.ca-central-1.amazonaws.com
-
Developed an end-to-end full-stack web application to help students locate nearby study spots, track study sessions, and create study groups.
Include user metrics or feedback that demonstrate the app's effectiveness or popularity.
-
Led the development of a Telegram Bot that parses natural language commands to allow fast, secure expense-splitting on Aptos blockchain directly in your group chat.
Add details on user adoption rates or how this improved user experience or efficiency.
-
Trained a PyTorch neural network to classify forehand vs backhand shot techniques based on player joint positions, achieving 87% test accuracy.
Explain the significance of 87% accuracy in practical terms, such as its effect on performance analysis.
-
Implemented an upload-to-review system with AWS S3 for uploads, Hypothes.is for in-line resume annotations, and version tracking via DynamoDB, driving fast and iterative peer reviews.
Clarify how much faster the review process became due to this implementation.
-
Developed a Discord bot to streamline collaborative resume reviews for 2,000+ students, eliminating cluttered review threads and combining both peer and AI-powered resume annotations directly in Discord.
Quantify the reduction in time spent on reviews or improvement in review quality.
-
Redesigned layout and fixed critical responsiveness issues on 10+ web pages using Bootstrap, restoring broken mobile views and ensuring consistent, functional interfaces across devices.
Include metrics on user engagement or satisfaction post-redesign to highlight impact.
-
Developed dashboards for an internal portal with .NET Core, C#, and jQuery, eliminating the need for 100+ complex spreadsheets and enabling 30+ executives to securely access operational, financial, and customer data.
Add a statement on how this improved decision-making or efficiency for the executives.
-
Spearheaded backend unit testing automation for the shift-bidding platform using xUnit, SQLite, and Azure CI/CD Pipelines, contributing 40+ tests, identifying logic errors, and increasing overall test coverage by 15%.
Explain how the increased test coverage improved system reliability or reduced bugs.
-
Automated monthly shift-bid data transfers into the company HR system for 700+ employees using C#, SQL, and Azure Functions, saving supervisors hours of manual entry each month.
Quantify 'hours saved' to provide a clearer impact of your automation efforts.
-
Led the development of an Agentic AI staff scheduling app with React, C#/.NET, and Azure OpenAI, automating schedule templates for 12,000+ monthly flights and ensuring compliance with a RAG Policy chatbot.
Specify the percentage improvement in scheduling efficiency or time saved due to automation.
-
-
www.thefai.org www.thefai.org
-
Current intellectual property laws constitute an “anti-constitutional” barrier to the transformative potential of artificial intelligence (AI), systematically frustrating the explicit purpose of the Intellectual Property (IP) Clause.
This article reports that Anthropic has agreed to pay out a $1.5 billion settlement for copyright violations while training their Claude AI tool on books found on the Internet. That works out to be about $3000 per book.
The whole idea of books (at least nonfiction books) is that readers are supposed to learn from them. But now if actual learning from them takes place it's a $3000 charge!
It used to be that to violate a copyright required copying, not merely training. What's more, in the USA the sole justification for government-enforced monopolies on intellectual property is Article I, Section 8, Clause 8 of the U.S. Constitution, which authorizes copyrights and patents only to "to promote the Progress of Science and useful Arts," and only "for limited Times," and only "to Authors and Inventors." By extending copyright duration to the "author's life plus 70 years," Congress flouted those restrictions, and this precedent further tramples them, by clearly impeding the progress of science and useful arts.
-
-
resu-bot-bucket.s3.ca-central-1.amazonaws.com resu-bot-bucket.s3.ca-central-1.amazonaws.com
-
Instructed 1,000+ students on manufacturing best practices, emphasizing safety and build quality.
Quantify the impact of your instruction. Did it lead to fewer errors or higher quality projects? Provide metrics.
-
Trained over 100 students every semester on the safety protocols and applicable use cases for all MakerSpace equipment including 3D printers(FDM/SLA), laser cutters, CNC Machines, thermal formers, hand/power tools.
Include the impact of your training. Did it lead to improved safety records or student confidence?
-
Developed python-based computer vision dice recognition application capable of detecting and logging results for multiple dice types (D4–D20).
Mention the user base or potential applications of this project. Who would benefit from it?
-
Created standards for employee software interaction, improved efficiency, reducing operation costs by 40%.
Detail what specific standards were created. How did they lead to the 40% cost reduction? Be more specific.
-
Revised, modularized, and updated old assembly program to a modern code base removing 22 detected bugs enabling future feature implementation.
Explain how bug removal improved functionality or user experience. Provide examples of features enabled.
-
Unified three isolated programs into one software solution utilizing Java, PHP, SQL(MySQL), and RESTful API, removing the need for paper communication digitizing employee work.
Quantify the impact of digitizing work. How much time or cost was saved? Include specific metrics.
-
Supported 45 project groups with project management including Project Charter, Scope, DOD, Stakeholder management, WBS/WBS dictionary, scrum ceremonies, risk assessment, Agile, lifecycle, and product handover.
Clarify your role in project management. Did you lead or facilitate? Highlight your direct contributions.
-
Planned and implemented creative projects following the school’s curriculum and objectives, improving students’ understanding of course material, resulting in an average of a letter grade improvement.
Specify how you measured the improvement in understanding. Include metrics or feedback to enhance impact.
-
-
rutgers.instructure.com rutgers.instructure.com
-
In this paragraph, instead of looking at plagiarism or anything related to that, the study is relating to people and how ai influences people to think about themselves as real researchers.
Tags
Annotators
URL
-
-
www.youtube.com www.youtube.com
-
for - consciousness, AI, Alex Gomez- Marin, neuroscience, hard problem of consciousness, nonmaterialism, materialism - progress trap - transhumanism - AI - war on conciousness
Summary - Alex advocates - for a nonmaterialist perspective on consciousness and argues - that there is an urgency to educate the public on this perspective - due to the transhumanist agenda that could threaten the future of humanity - He argues that the problem of whether consciousness is best explained by materialism or not is central to resolving the threat posed by the direction AI takes - In this regard, he interprets that the very words that David Chalmers chose to articulate the Hard Problem of Consciousness reveals the assumption of a materialist reference frame. - He used a legal metaphor too illustrate his point: - When a lawyer poses three question "how did you kill that person" - the question is entrapping the accused . It already contains the assumption of guilt. - I would characterize his role as a scientist who practices authentic seeker of wisdom - will learn from a young child if they have something valuable to teach and - will help educate a senior if they have something to learn - The efficacy of timebinding depends on authenticity and is harmed by dogma
-
even this idea of progress
for - progress trap - transhumanism - AI - war on consciousness
-
very soon people will think that if you turn off their their their algorithm you're killing their pet,
for - quote - AI ethics - AI pets - very soon people will think that if you turn off their algorithm, you're killing their pet,
-
I think we we we we're going through some sort of consciousness war or even spiritual war.
for - adjacency - AI - consciousness war - spiritual war
-
other philosophical worldviews with respect to consciousness. Now it's urgent because now we have AI
for - adjacency - urgency of - alternative views of consciousness - AI
Tags
- Alex Gomez- Marin
- adjacency - AI - consciousness war - spiritual war
- progress trap - transhumanism - AI
- adjacency - urgency of - alternative views of consciousness - AI
- quote - Alex Gomez- Marin
- Hard problem of consciousness
- quote - AI ethics - AI pets
- progress trap - transhumanism - AI - war on consciousness
Annotators
URL
-
-
www.linkedin.com www.linkedin.com
-
Every leap comes with unintended consequences.Sam Altman believes this device could add a trillion dollars in value to OpenAI. It may be their iPhone moment.
for - AI - progress trap - Open AI device
-
-
resu-bot-bucket.s3.ca-central-1.amazonaws.com resu-bot-bucket.s3.ca-central-1.amazonaws.com
-
Managed conflicts with empathy, using active listening and de-escalation to maintain a respectful community.
Provide examples of conflict resolution outcomes. Did it lead to improved community satisfaction?
-
Fostered an inclusive environment by engaging with residents and building peer connections.
What was the measurable impact of fostering inclusivity? Include feedback or participation rates.
-
Created a user interface for quick and easy access enhancing user experience and system security.
Quantify the enhancement in user experience. Did it reduce access time or errors?
-
Developed a comprehensive Python and MySQL system, enabling efficient identification tracking.
How much efficiency was gained? Provide specific metrics or time saved if possible.
-
Performed thorough testing to ensure good player experience and proper functionality of game mechanics.
What were the results of this testing? Did it lead to fewer bugs or higher user satisfaction?
-
Designed and developed a 2D shooter game, following the software development lifecycle to ensure structured workflows.
Mention any player metrics or feedback received post-launch to demonstrate success.
-
Developed functionality using Java and Spring Boot to support core features like user management and game interactions.
What was the user impact of these features? Include user growth or engagement metrics.
-
Enhanced the UI and fixed critical bugs, resulting in positive client feedback and contributing to an A+ final grade.
Quantify 'positive client feedback.' How many users or stakeholders provided feedback?
-
Migrated the project to a self-hosted environment and repaired the CI/CD pipeline, restoring automated deployments.
Detail the benefits of the migration. Did it improve deployment speed or reliability?
-
Implemented an LLM chatbox for AI-assisted debugging, fulfilling the client's priority and enhancing the tool's functionality.
Quantify the enhancement. How much did functionality improve? Provide metrics if available.
-
Collaborated within a 6-person team in an Agile environment, delivering project milestones over 5 sprints and incorporating peer feedback through 360-degree reviews.
Specify the outcomes of the project milestones. What was the impact on the client or team?
-
-
www.youtube.com www.youtube.com
-
It's amazing what you can do with correlations but um they're not they're not truly intelligent
A answer - yes, interested in AI - they are not intelligent, just huge correlation machines - Donald Hoffman
-
I did my um my PhD research on list machines in the artificial intelligence lab at MIT
History - Donald Hoffman - PhD on Lisp AI - Marvin Minsky - MIT lab
-