1,634 Matching Annotations
  1. Last 7 days
    1. For us personally, this means that we no longer use generative AI – neither for private nor professional purposes.

      The authors avoid the use of generative AI, but realise that this is difficult for most people to do, and as such reflects a privileged, tech-capable position.

    2. To Gen or Not To Gen: The Ethical Use of Generative AI
This blog entry started out as a translation of an article that my colleague Jakob and I wrote for a German magazine. After that we added more material and enriched it with additional references and sources. We aim to give an overview of many - but not all - aspects of GenAI that we have learned about and consider relevant for an informed ethical opinion. As for the depth of information, we are just scratching the surface; hopefully, the many references can lead you to dive in deeper wherever you want. Since we are both software developers, our views are biased and distorted. Keep in mind, too, that any writing about a “hot” topic like this is nothing but a snapshot of what we think we know today. By the time you read it, the authors’ knowledge and opinions will already have changed. Last Update: December 8, 2025.
Table of Contents: Abstract; About us; Johannes Link; Jakob Schnell; Introduction; Ethics, what does that even mean?; Clarification of terms; Basics; Can LLMs think?; What LLMs are good at; GenAI as a knowledge source; GenAI in software development; Actual vs. promised benefits; Harmful aspects of GenAI; GenAI is an ecological disaster; Power; Water; Electronic Waste; GenAI threatens education and science; GenAI is destroying the free internet; GenAI is a danger to democracy; GenAI versus human creativity; Digital colonialism; Political aspects; Conclusion; Can there be ethical GenAI?; How to act ethically
Abstract
ChatGPT, Gemini, Copilot. The number of generative AI applications (GenAI) and models is growing every day. In the field of software development in particular, code generation, coding assistants and vibe coding are on everyone’s lips. Like any technology, GenAI has two sides. The great promises are offset by numerous disadvantages: immense energy consumption, mountains of electronic waste, the proliferation of misinformation on the internet and the dubious handling of intellectual property are just a few of the many negative aspects. Ethically responsible behaviour requires us to look at all the advantages, disadvantages and collateral damage of a technology before we use it or recommend its use to others. In this article, we examine both sides and eventually arrive at our personal and naturally subjective answer to whether and how GenAI can be used in an ethical manner.
About us
Johannes Link
… has been programming for over 40 years, 30 of them professionally. Since the end of the last century, extreme programming and other human-centred software development approaches have been at the heart of his work. The meaningful and ethical implementation of his private and professional life has been his driving force for years. He has been involved with GenAI since the early days of OpenAI’s GPT language models. More about Johannes can be found at https://johanneslink.net.
Jakob Schnell
… studied mathematics and computer science and has been working as a software developer for 5 years. He works as a lecturer and course director in university and non-university settings. As a youth leader, he also comes into regular contact with the lives of children and young people. In all these environments, he observes the growing use of GenAI and its impact on people.
Introduction
Ethics, what does that even mean?
Ethical behaviour sounds like the title of a boring university seminar.
However, if you look at the Wikipedia article on the term 1, you will find that ‘how individuals behave when confronted with ethical dilemmas’ is at the heart of the definition. So it’s about us as humans taking responsibility and weighing up whether and how we do or don’t do certain things based on our values. We have to consider ethical questions in our work because all the technologies we use and promote have an impact on us and on others. Therefore, they are neither neutral nor without alternative. It is about weighing up the advantages and potential against the damage and risks; and that applies to everyone, not just us personally, because often those who benefit from a development are different from those who suffer the consequences. As individuals and as a society, we have the right to decide whether and how we want to use technologies. Ideally, this should be in a way that benefits us all; but under no circumstances should it be in a way that benefits a small group and harms the majority. The crux of the matter is that ethical behaviour does not come for free. Ethics are neither efficient nor do they enhance your economic profit. That means that by acting according to your values you will, at some point, have to give something up. If you’re not willing to do that, you don’t have values - just opinions.
Clarification of terms
When we write ‘generative AI’ (GenAI), we are referring to a very specific subset of the many techniques and approaches that fall under the term ‘artificial intelligence’. Strictly speaking, these are a variety of very different approaches that range from symbolic logic through automated planning to the broad field of machine learning (ML). Nowadays most of the effort, hype and money goes into deep learning (DL): a subfield of ML that uses multi-layered artificial neural networks to discover statistical correlations (aka patterns) based on very large amounts of training data in order to reproduce those patterns later. Large language models (LLMs) and related methods for generating images, videos and speech now make it possible to apply this idea to completely unstructured data. While traditional ML methods often managed with a few dozen parameters, these models now work with several trillion (10^12) parameters. In order for this to produce the desired results, both the amount of training data and the training duration must be increased by several orders of magnitude. This brings us to the definition of what we mean by ‘GenAI’ in this article: hyperscaled models that can only be developed, trained and deployed by a handful of companies in the world. These are primarily the GenAI services provided by OpenAI, Anthropic, Google and Microsoft, or services based on them. We also focus primarily on language models; the generation of images, videos, speech and music plays only a minor role in this article. Our focus on hyperscale services does not mean that other ML methods are free of ethical problems; however, we are dealing with a completely different order of magnitude of damage and risk here. For example, there do exist variations of GenAI that use the same or similar techniques, but on a much smaller scale and in restricted domains (e.g. AlphaFold 2). These approaches tend to bring more value with fewer downsides.
Basics
GenAI models are designed to interpolate and extrapolate 3, i.e. to fill in the gaps between training data and speculate beyond the limits of the training data.
Together with the stochastic nature of the training data, this results in some interesting properties:
- GenAI models ‘invent’ answers; with LLMs, we like to refer to this as ‘hallucinations’.
- GenAI models do not know what is true or false, good or bad, efficient or effective, only what is statistically probable or improbable in relation to training data, context and query (aka prompt).
- GenAI models cannot explain their output; they have no capability of introspection. What is sold as introspection is just more output, with the previous output re-injected.
- GenAI models do not learn from you; they only draw from their training material. The learning experience is faked by reinjecting prior input into a conversation’s context 4.
- The context, i.e. the set of input parameters provided, is decisive for the accuracy of the generated result, but can also steer the model in the wrong direction. Increasing the context window makes a query much more computation-intensive - likely in a quadratic way. Therefore, the promised increase of the “maximum context window” in many models is mostly fake 5.
- The reliability of LLMs cannot be fundamentally increased by even greater scaling 6.
Can LLMs think?
Proponents of the language-of-thought hypothesis 7 believe it is possible for purely language-based models to acquire the capabilities of the human brain – reasoning, modelling, abstraction and much more. Some enthusiasts even claim that today’s models have already acquired this capability. However, recent studies 8 9 show that today’s models are neither capable of genuine reasoning nor do they build internal models of the world. Moreover, “…according to current neuroscience, human thinking is largely independent of human language 10”, and there is fundamental scientific doubt that achieving human cognition through computation is achievable in practice, let alone by scaling up the training of deep networks 11. An example of this lack of understanding of the world is the prompt ‘Give me a random number between 0 and 50’. The typical GenAI response to this is ‘27’, and it appears far more often than true randomness would allow. (If you don’t believe it, just try it out!) This is because 27 is the most likely answer in the GenAI training data – and not because the model understands what ‘random’ means. ‘Chain of Thought (CoT)’ approaches and ‘reasoning models’ attempt to improve reasoning by breaking down a prompt, the query to the model, into individual (logical) steps and then delegating these individual steps back to the LLM. This allows some well-known reasoning benchmarks to be met, but it also multiplies the necessary computational effort by a factor of between 30 and 700 12. In addition, multi-step reasoning lets individual errors chain together to form large errors. And yet, CoT models do not seem to possess any real reasoning abilities 13 14 and improve the overall accuracy of LLMs only marginally 15. The following thought experiment from 16 underscores the lack of real “thinking” capabilities: LLMs have simultaneous access to significantly more knowledge than humans. Together with the postulated ability of LLMs to think logically and draw conclusions, new insights should just fall from the sky. But they don’t. Getting new insights from LLMs would require these insights to already be encoded in the existing training material, and to be decoded and extracted by pure statistical means.
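A minimal sketch of how one might run the random-number experiment mentioned above, assuming an OpenAI-compatible chat endpoint; the URL, model name and sample size are placeholders to adapt to whatever service is actually used (a hosted API, or a local server such as LM Studio or Ollama):

```python
# Rough sketch of the "random number" experiment, assuming an OpenAI-compatible
# chat endpoint. URL, model name and sample size are placeholders; point them
# at whatever service you actually use.
import collections
import json
import urllib.request

URL = "http://localhost:1234/v1/chat/completions"  # assumed local endpoint
MODEL = "local-model"                               # assumed model name

def ask_for_random_number() -> str:
    payload = {
        "model": MODEL,
        "temperature": 1.0,
        "messages": [{
            "role": "user",
            "content": "Give me a random number between 0 and 50. Reply with the number only.",
        }],
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"].strip()

# A uniformly random source would spread its answers out across 0-50;
# LLMs tend to over-produce a handful of "favourite" numbers such as 27.
counts = collections.Counter(ask_for_random_number() for _ in range(30))
print(counts.most_common())
```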
What LLMs are good at
Undoubtedly, LLMs represent a major qualitative advance when it comes to extracting information from texts, generating texts in natural and artificial languages, and machine translation. But even here, the error rate, and above all the type of error (‘hallucinations’), is so high that autonomous, unsupervised use in serious applications must be considered highly negligent.
GenAI as a knowledge source
As we have pointed out above, LLMs cannot differentiate between true and false - regardless of the training material. An LLM does not answer the question “What is XYZ?” but rather the question “What would an answer to the question ‘What is XYZ?’ look like?”. Nevertheless, many people claim that the answers that ChatGPT and the like provide for the typical what-how-when-who queries are good enough and often better than what a “normal” web search would have given us. Arguably, this is the most prevalent use case for “AI” bots today. The problem is that most of the time we will never learn about the inaccuracies, omissions, distortions and biases that the answer contained - unless we re-check everything, which defeats the whole purpose of speeding up knowledge retrieval. The less we already know, the better the “AI’s” answer looks to us, but the less equipped we are to spot the problems. A recent study by the BBC and 22 Public Service Media organizations shows that 45% of all “AI” assistants’ answers to questions about news and current affairs have significant errors 17. Moreover, LLMs are easy prey for manipulation - either by the organization providing the service or by third parties. A recent study claims that even multi-billion-parameter models can be “poisoned” by injecting just a few corrupted documents 18. So, if anything is at stake, all output from LLMs must be carefully validated. Doing that, however, would contradict the whole point of using “AI” to speed up knowledge acquisition.
GenAI in software development
The creation and modification of computer programmes is considered a prime domain for the use of LLMs. This is partly because programming languages have less linguistic variance and ambiguity than natural languages. Moreover, there are many methods for automatically checking generated source code, such as compiling, static code analysis and automated testing. This simplifies the validation of generated code and thereby gives an additional feeling of trust. Nevertheless, individual reports on the success of coding assistants such as Copilot, Cursor, etc. vary greatly. They range from ‘completely replacing me as a developer’ to ‘significantly hindering my work’. Some argue that coding agents considerably reduce the time they have to invest in “boilerplate” work, like writing tests, creating data transfer objects or connecting your domain code to external libraries. Others counter by pointing out that delegating these drudgeries to GenAI makes you miss opportunities to get rid of them, e.g. by introducing a new abstraction or automating parts of your pipeline, and to learn about the intricacies and failure modes of the external library. Unlike old-school code generation or code libraries, prompting a coding agent is not “just another layer of abstraction”. It misses out on several crucial aspects of a useful abstraction:
- Its output is not deterministic. You cannot rely on any agent producing the same code the next time you feed it the same prompt.
- The agent does not hide the implementation details, nor does it allow you to reliably change those details if the previous implementation turns out to be inadequate. Code that is output by an LLM, even if it is generated “for free”, has to be considered and maintained each time you touch the related logic or feature.
- The agent does not tell you whether the amount of detail you give in your prompt is sufficient for figuring out an adequate implementation. On the contrary, the LLM will always fill the specification holes with some statistically derived assumptions.
Sadly, serious studies on the actual benefits of GenAI in software development are rare. The randomised trial by Metr 19 provides an initial indication, measuring a decline in development speed for experienced developers. An informal study by ThoughtWorks estimates the potential productivity gain from using GenAI in software development at around 5-15% 20. If “AI coding” were increasing programmers’ productivity by any significant margin, we would see a measurable growth of new software in app stores and OSS repositories. But we don’t; the numbers are flat at best 21 22. But even if we assume a productivity increase in coding through GenAI, there are still two points that further diminish this postulated efficiency gain: Firstly, the results of the generation must still be cross-checked by human developers. However, it is well known that humans are poor checkers and lose both attention and enjoyment in the process. Secondly, software development is only to a small extent about writing and changing code. The most important part is discovering solutions and learning about the use of these solutions in their context. Peter Naur calls this ‘programming as theory building’ 23. Even the perfect coding assistant can therefore only take over the coding part of software development. For the essential rest, we still need humans. If we now also consider the finding that using AI can relatively quickly lead to a loss of problem-solving skills 24, or that these skills are not acquired at all, then the overall benefit of using GenAI in professional software development is more than questionable. As long as programming - and every technicality that comes with it - is not fully replaced by some kind of AI, we will still need expert developers who can program, maintain and debug code down to the finest level of detail. Where, we wonder, will those senior developers come from when companies replace their junior staff with coding agents?
Actual vs. promised benefits
If you read testimonials about the use of GenAI that people perceive as successful, you will mostly encounter scenarios in which ‘AI’ helps to make tasks that are perceived as boring, unnecessarily time-consuming or actually pointless faster or more pleasant. So it’s mainly about personal convenience and perceived efficiency. Entertainment also plays a major role: the poem for Grandma’s birthday, the funny song for the company anniversary or the humorous image for the presentation are quickly and supposedly inexpensively generated by ‘AI’. However, the promises made by the dominant GenAI companies are quite different: solving the climate crisis, providing the best medical advice for everyone, revolutionising science, ‘democratising’ education and much more.
GPT-5, for example, is touted by Sam Altman, CEO of OpenAI, as follows: ‘With GPT-5, it’s now like talking to an expert — a legitimate PhD-level expert in any area you need […] they can help you with whatever your goals are.’ 25 However, to date, there is still no actual use case that provides a real qualitative benefit for humanity or at least for larger groups. The question ‘What significant problem (for us as a society) does GenAI solve?’ remains unanswered. On the contrary: while machine learning and deep learning methods certainly have useful applications, the most profitable area of application for ‘AI’ at present is the discovery and development of new oil and gas fields 26.
Harmful aspects of GenAI
But regardless of how one assesses the benefits of this technology, we must also consider the downsides, because only then can we ultimately make an informed and fair assessment. In fact, the range of negative effects of hyperscaled generative AI that can already be observed is vast. Added to this are numerous risks that have the potential to cause great social harm. Let’s take a look at what we consider to be the biggest threats:
GenAI is an ecological disaster
Power
The data centres required for training and operating large generative models 27 far exceed today’s dimensions in terms of both number and size. The projected data centre energy demand in the USA is predicted to grow from 4.4% of total electricity in 2023 to 22% in 2028 28. In addition, the typical data centre electricity mix is more CO2-intensive than the average mix. There is an estimated rise of ~11 percent in coal-generated electricity in the US, as well as tripled emissions of greenhouse gases worldwide by 2030 - compared to the scenario without GenAI technology 29. Just recently, Sam Altman of OpenAI blogged some numbers about the energy and water usage of ChatGPT for “the average query” 30. On the one hand, an average is rather meaningless when a distribution is heavily asymmetric; the numbers for queries with large contexts or “chain of reasoning” computations would be orders of magnitude higher. Thus, the potential efficiency gains from more economical language models are more than offset by the proliferation of use, e.g. through CoT approaches and ‘agent systems’. On the other hand, big tech’s disclosure of energy consumption (e.g. by Google 31) is intentionally selective. Ketan Joshi goes into some detail about why experts think that the AI industry is hiding the full picture 32. Since building new power plants - even coal- or gas-fuelled ones - takes a lot of time, data centre companies are even reviving old jet engines to power their new hyperscalers 33. You have to be aware that those engines are not only much noisier than other power plants but also pump out nitrogen oxides, among the main chemicals responsible for acid rain 34.
Water
Another problem is the immensely high water consumption of these data centres 35. After all, cooling requires clean water of drinking quality in order not to contaminate or clog the cooling pipes and pumps. Already today, new data centre locations are competing with human consumption of drinking water. According to Bloomberg News, about two-thirds of the data centres built or developed in 2022 are located in areas that are already under “water stress” 36. In the US alone, “AI servers […] could generate an annual water footprint ranging from 731 to 1,125 million m3” 37. It’s not only an American problem, though.
In other areas of the world, the water-thirsty data centres also compete with the drinking water supply for humans 38.
Electronic Waste
Another ecological problem is being noticeably exacerbated by ‘AI’: the amount of electronic waste (e-waste) that we ship mainly to “Third World” countries and which is responsible for soil contamination there. Efficient training and querying of very large neural networks requires very large quantities of specialised chips (GPUs). These chips often have to be replaced and disposed of within two years. The typical data centre might not last longer than 3 to 5 years before large parts of it have to be rebuilt 39. In summary, it can be said that GenAI is at least an accelerator of the ecological catastrophe that threatens the earth. And it is the argument Google, Amazon and Microsoft use to completely abandon their zero-CO2 targets 40 and replace them with investments of several hundred billion dollars in new data centres.
GenAI threatens education and science
People often try to use GenAI in areas where they feel overloaded and overwhelmed: training, studying, nursing, psychotherapeutic care, etc. The fields of application for ‘AI’ are therefore a good indication of socially neglected and underfunded areas. The fact that LLMs are very good at conveying the impression of genuine knowledge and competence makes their use particularly attractive in these areas. A teacher under the simultaneous pressure of lesson preparation, corrections and covering for sick colleagues turns to ChatGPT to quickly create an exercise sheet. A student under pressure to get good grades has their English essay corrected by ‘AI’. The researcher under pressure to publish will ‘save’ research time by reading the AI-generated summary of relevant papers – even if they are completely wrong in terms of content 41. Tech companies like OpenAI and Microsoft play on that situation by offering their ‘AI’ for free or for little money to students and universities. The goal is obvious: students who get hooked on outsourcing some of their “tedious” tasks to a service will continue to use - and eventually buy - this service after graduation. What falls by the wayside are problem-solving skills, engagement with complex sources, and the generation of knowledge through understanding and supplementing existing knowledge. Some even argue that AI is destroying critical education and learning itself 42: students aren’t just learning less; their brains are learning not to learn. The training cycle of schools and universities is fast: teachers are already reporting that pupils and students have acquired noticeably less competence in recent years, but have instead become dependent on unreliable ‘tools’ 43. The real problem with using GenAI to do assignments is not cheating, but that students “are not just undermining their ability to learn, but to someday lead.” 44
GenAI is destroying the free internet
The fight against bots on the internet is almost as old as the internet itself – and has been quite successful so far. Multi-factor authentication, reCAPTCHA, honeypots and browser fingerprinting are just a few of the tools that help protect against automated abuse. However, GenAI takes this problem to a new level – in two ways. To make ‘the internet’ usable as the main source for training LLMs, AI companies use so-called ‘crawlers’. These essentially behave like DDoS attackers: they send tens of thousands of requests at once, from several hundred IPs, in a very short time.
robots.txt files are ignored; instead, the source IP and user agent are obscured 45. These practices have massive disadvantages for providers of genuine content:
- Costs for additional bandwidth.
- Lost advertising revenue, as search engines now offer LLM-generated summaries instead of links to the sources. This threatens the existence of the remaining independent journalism in particular 46.
- Misuse of their own content by AI-supported competition.
If the place where knowledge is generated is separated from the place where it is consumed, and if this makes the act of generation even more opaque than before, the motivation to continue generating knowledge also declines. For projects such as Wikipedia, this means fewer donors and fewer contributors. Open communities often have no other option but to shut themselves off. Another aspect is the flooding of the internet with generated content that cannot be automatically distinguished from non-generated content. This content overwhelms the maintainers of open source software or portals such as Wikipedia 47. If this content is then also entered by humans – often in the belief that they are doing good – it is no longer possible to take action against the methodology. In the long run, this means that less and less authentic training material will lead to increasingly poor results from the models. Last but not least, autonomously acting agents make the already dire state of internet security much worse 48. Think of handing all your personal data and credentials to a robot that:
- distributes and uses that data across the web, wherever and whenever it deems it necessary for reaching some goal;
- is controlled by LLMs, which are vulnerable to all kinds of prompt injection attacks 49;
- is controlled by and reports to companies that do not have your best interests in mind;
- has no awareness or knowledge of the implications of its actions;
- is acting on your behalf and thereby making you accountable.
GenAI is a danger to democracy
The manipulation of public opinion through social media precedes the arrival of LLMs. However, this technology gives the manipulators much more leverage. By flooding the web with fake news, fake videos and fake everything, undemocratic (or just criminal) parties make it harder and harder for any serious media and journalism to get the attention of the public. People no longer have a common factual basis, which is necessary for all social negotiations. If you don’t agree on at least some basic facts, arguing about policies and measures to take is pointless. Without negotiation, democracy will die; in many parts of the world it is already dying.
GenAI versus human creativity
Art and creativity are also threatened by generative AI. The impact on artists’ incomes of logos, images and illustrations now being easily and quickly created by AI prompts is obvious. A similar effect can also be observed in other areas. Studies show that poems written by LLMs are indistinguishable from those written by humans and that generative AI products are often rated more highly 50. This can be explained by a trend towards the middle and the average, which can also be observed in the music and film scenes: due to its basic function, GenAI cannot create anything fundamentally new, but replicates familiar patterns, which is precisely why it is so well received by the public. Ironically, ‘AI’ draws its ‘creativity’ from the content of those it seeks to replace.
Much of this content was used as training material against the will of the rights holders. Whether this constitutes copyright infringement has not yet been decided; morally, the situation seems clear. The creative community is the first whose livelihood is seriously threatened by GenAI 51. It’s not a coincidence that a big part of GenAI efforts is targeted at “democratizing art”. This framing is completely upside down. Art has been one of the most democratic activities for a very long time. Everybody can do it; but not everybody wants to put in the effort, the practice time and the soul. Real art is not about the product but about the process, which requires real humans. Generating art without the friction is about getting rid of the humans in the loop - and still making money.
Digital colonialism
The huge amount of data required by hyperscaled AI approaches makes it impossible to completely curate the learning content. And yet, one would like to avoid the reproduction of racist, inhuman and criminal content. Attempts are being made to get the problem under control by subsequently adapting the models to human preferences and local laws through additional ‘reinforcement learning from human feedback (RLHF)’ 52. The cheap labour for this very costly process can be found in the Global South. There, people in poorly paid jobs are exposed to hours of hate speech, child abuse, domestic violence and other horrific scenarios in order to filter them out of the training material of large AI companies 53. Many emerge from these activities traumatised. However, it is not only people who are exploited in the less developed regions of the world, but also nature: the poisoning of the soil with chemicals during the extraction of raw materials for digital chips, as well as the contamination caused by our electronic waste and its improper disposal, are collateral damage that we willingly accept and whose long-term consequences are currently extremely difficult to assess. Here, too, the “developed” world profits, whereas the negative aspects are outsourced to the former colonies and other poor regions of the world.
Political aspects
As software developers, we would like to ‘leave politics out of it’ and instead focus entirely on the cool tech. However, this is impossible when the advocates of this technology pursue strong political and ideological goals. In the case of GenAI, we can clearly see that the US corporations behind it (OpenAI, Google, Meta, Microsoft, etc.) have no problem with the current authoritarian – some say fascist – US government 54. In concrete terms, this means, among other things, that the models are explicitly manipulated to be less liberal, or simply not to generate any output that could upset the CEO or the president 55. Even more serious is the fact that many of the leading minds behind these corporations and their financiers adhere to beliefs that can be broadly described as digital fascism. These include Peter Thiel, Marc Andreessen, Alex Karp, JD Vance, Elon Musk and many others on “The Authoritarian Stack” 56. Their ideologies, disguised as rational theories, are called longtermism and effective altruism. What they have in common is that they consider democracy and the state to be obsolete models, compassion to be ‘woke’, and the current problems of humanity to be insignificant, as our future lies in the colonisation of space and the merging of humans with artificial superintelligence 57.
Do we want to give people who adhere to these ideologies (even) more power, money and influence by using and paying for their products? Do we want to feed their computer systems with our data? Do we really want to expose ourselves and our children to the answers from chatbots which they have manipulated? Not quite as abstruse, but similarly misanthropic, is the imminent displacement of many jobs by AI, as postulated by the same corporations in order to put pressure on employees with this claim. Demanding a large salary? Insisting on your legal rights? Complaining about too heavy a workload? Doubting the company’s goals? Then we’ll just replace you with cheap and uncomplaining AI! Whichever way you look at it, AI and GenAI are already being used politically. If we go along without resistance, we are endorsing this approach and supporting it with our time, our attention and our money.
Conclusion
Ideally, we would like to quantify our assessment by adding up the advantages, adding up the disadvantages and finally checking whether the balance is positive or negative. Unfortunately, in our specific case, neither the benefits nor the harm are easily quantifiable; we must therefore consult our social and personal values. Discussions about GenAI usually revolve purely around its benefits. Often, the capabilities of all ‘AI’ technologies (e.g. protein folding with AlphaFold 2) are lumped together, even though they have little in common with hyperscaled GenAI. However, if we consider the consequences and do not ignore the problems this technology entails – i.e. if we consider both sides in terms of ethics – the assessment changes. Convenience, speed and entertainment are then weighed against numerous damages and risks to the environment, the state and humanity. In this sense, the ethical use and further expansion of GenAI in its current form is not possible.
Can there be ethical GenAI?
If the use of GenAI is not ethical today, what would have to change? Which negative effects of GenAI would have to disappear, or at least be greatly reduced, in order to tip the balance between benefits and harms in the other direction?
- The models would have to be trained exclusively on publicly known content whose original creators consent to its use in training AI models.
- The environmental damage would have to be reduced to such an extent that it does not further fuel the climate crisis.
- Society would have to get full access to the training and operation of the models in order to rule out manipulation by third parties and restrict their use to beneficial purposes. This would require democratic processes, good regulation and oversight by judges and courts.
- The misuse and harming of others, e.g. through copyright theft or digital colonialism, would have to be prevented.
Is such a change conceivable? Perhaps. Is it likely, given the interest groups and political aspects involved? Probably not.

      All these factors are achievable, I think, or will be soon-ish: smaller models, better-sourced data sets, niche models, etc. But not with the current actors, as mentioned at the end.

    1. Benioff had recently told Business Insider that he's drafting the company's annual strategic document with data foundations—not AI models—as the top priority, explicitly citing concerns about "hallucinations" without proper data context.

      The annual strategic document now puts data foundations in focus, not AI models. Well, duh. How do you even get to the notion that you can AI-all-the-things? It implies an uncritical belief in the promises of vendors, or magical thinking. How do you get to be CEO if you fall for that? Vibe-leading, in other words: the wizard behind the curtain.

    2. Phil Mui described as AI "drift" in an October blog post. When users ask irrelevant questions, AI agents lose focus on their primary objectives. For instance, a chatbot designed to guide form completion may become distracted when customers ask unrelated questions.

      Ha, you can distract chatbots, as we've seen from the start. This is the classic train-ticket-automation hang-up in a new guise: 'to which destination would you like a ticket?', 'it's not for me but for my mom', 'unknown railway station: for my mom'. And they didn't even expect that to happen? It's an attack surface!

    3. Home security company Vivint, which uses Agentforce to handle customer support for 2.5 million customers, experienced these reliability problems firsthand. Despite providing clear instructions to send satisfaction surveys after each customer interaction, The Information reported that Agentforce sometimes failed to send surveys for unexplained reasons. Vivint worked with Salesforce to implement "deterministic triggers" to ensure consistent survey delivery.

      WTF? Why ever use AI to send out a survey, something you probably already had fully automated beforehand? 'Deterministic triggers' is a euphemism for regular scripted automation, like 'clicking done on a ticket triggers an e-mail asking for feedback', which we've had for decades; a minimal sketch of that follows below.
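      A minimal sketch of such a deterministic trigger, i.e. plain scripted automation with no model in the loop; all names here (close_ticket, send_email, the ticket fields) are invented for illustration and are not Salesforce APIs:

```python
# Minimal sketch of a "deterministic trigger": closing a ticket always fires
# the survey e-mail, every time, with no model in the loop. All names are
# illustrative placeholders only.
def send_email(to: str, subject: str, body: str) -> None:
    print(f"sending '{subject}' to {to}")  # stand-in for a real mail gateway call

def close_ticket(ticket: dict) -> None:
    ticket["status"] = "closed"
    # The trigger itself: one event, one guaranteed action.
    send_email(
        to=ticket["customer_email"],
        subject="How did we do?",
        body=f"Please rate your experience with ticket {ticket['id']}.",
    )

close_ticket({"id": 4711, "status": "open", "customer_email": "customer@example.com"})
```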

    4. Chief Technology Officer of Agentforce, pointed out that when given more than eight instructions, the models begin omitting directives—a serious flaw for precision-dependent business tasks.

      Whut? AI-so-human! Cf. the 8-bit shift-register metaphor. [[Korte termijngeheugen 7 dingen 30 secs 20250630104247]] Is there a chunking-style work-around? Where does this originate: token limit, bite sizes?

    5. The company is now emphasizing that Agentforce can help "eliminate the inherent randomness of large models," marking a significant departure from the AI-first messaging that dominated the industry just months ago.

      Meaning what? Probabilistic isn't the same as random, and it isn't perfect either. Dial down the temperature on the models and what do you get?

    6. All of us were more confident about large language models a year ago," Parulekar stated, revealing the company's strategic shift away from generative AI toward more predictable "deterministic" automation in its flagship product, Agentforce.

      Salesforce is moving back from fully embracing LLMs, towards regular automation. I think this is symptomatic of DIY enthusiasm too: there is likely an existing 'regular' automation that would help more.

    1. would take seriously the fact that intelligence is now being scaled and distributed through organizations long before it is unified or fully understood

      There's no other way; understanding comes from using it, and from having things go wrong. The scandals around algorithms are important in this. Scale and distribution are different beasts. Distribution does not need scale (though a network effect helps) in order to work. The need for scale in digital is an outcome of the financing structure and the chosen business model, and is essentially the power grab. #openvraag: how do you put more focus on distribution as a counter-force against actors' hunger for scale?

    2. examine power as an emergent consequence of deployment and incentives, not intent.

      Intent definitely is there too, though: much of this is entrenchment, and much of it is a power grab (especially US tech at the moment), to get from capital/tech concentration to co-opting governance structures.

      AI is a tech that by design does not lower a participation threshold; it positions itself as bigger-than-us, like nuclear reactors: not just anyone can run with it. That only after three years are we seeing a budding DIY / individual-agency angle shows as much. It was designed to create and entrench power (or transform it into another form); other digital techs originate as a challenge to power, this one is clearly the opposite. The companies involved fight against things that push towards smaller-than-us AI tech, like local, offline-first approaches, e.g. the DMA/DSA.

    3. Empirical grounding. In 2015, scaling laws, emergent capabilities, and deployment‑driven feedback loops were speculative. Today, they are measurable. That shift changes the nature of responsibility, governance, and urgency in ways that were difficult to justify rigorously at the time.

      States that, in contrast to a decade ago, we can now measure scaling, emergent capabilities and feedback loops. Interesting. - [ ] #30mins #ai-ethics work this out in more detail. What would you measure, and what could that look like? How does it compare with the various assessment mechanisms?

    4. Political economy and power. The book largely brackets capital concentration, platform dynamics, and geopolitical competition. Today, these are central to any serious discussion of AI, not because the technology changed direction, but because it scaled fast enough to collide with real institutions and entrenched interests.

      Geopolitics, whether in the shape of capital, tech or politics, has become key, which he overlooked in 2015/18.

    5. Alignment as an operational problem. The book assumes that sufficiently advanced intelligences would recognize the value of cooperation, pluralism, and shared goals. A decade of observing misaligned incentives in human institutions amplified by algorithmic systems makes it clear that this assumption requires far more rigorous treatment. Alignment is not a philosophical preference. It is an engineering, economic, and institutional problem.

      The book did not address alignment; it assumed it would sort itself out (in contrast to [[AI begincondities en evolutie 20190715140742]] on how starting conditions might influence that). David recognises how algorithms are also used to make differences worse.

    6. what it feels like to live through an intelligence transition that does not arrive as a single rupture, but as a rolling transformation, unevenly distributed across institutions, regions, and social strata.

      A more detailed formulation of Gibson's 'the future is already here, just unevenly distributed'. Add sectors/domains. There's more here to tease out w.r.t. my change management work. - [ ] #30mins #ai-ethics fill this in with more concrete examples of how this quote takes shape.

    7. As a result, the debate shifted. The central question is no longer “Can we build this?” but “What does this do to power, incentives, legitimacy, and trust?”

      David posits questions that are all on the application side: what is the impact of using AI? There are also questions on the design side: how do we shape the tools with respect to those concepts? Cf. [[AI begincondities en evolutie 20190715140742]], e.g. different outcomes if you start from military AI parameters or from civil aviation (much stricter), in reference to [[Novacene by James Lovelock]].

    8. The book’s central argument was not about timelines or machines outperforming humans at specific tasks. It was about scale. Artificial intelligence, I argued, should not be understood at the level of an individual mind, but at the level of civilization. Technology does not merely support humanity. It shapes what humanity is. If AI crossed certain thresholds, it would not just automate tasks, but it would reconfigure social coordination, knowledge production, and agency itself. That framing has aged better than I expected, not because any particular prediction came true, but because the underlying question turned out to be the right one.

      The premise of the book is that scale matters w.r.t. AI (SU vibes). AI is to be understood at the societal level, not from an individual perspective, since tech and society mutually shape each other (the basic WWTS premise). Given certain thresholds, it would impact coordination, knowledge and agency.

    1. AWS CEO Explains 3 Reasons AI Can’t Replace Junior Devs
      • AWS CEO Matt Garman argues against replacing junior developers with AI, calling it "one of the dumbest ideas."
      • Juniors excel with AI tools due to recent exposure, using them daily more than seniors (55.5% per Stack Overflow survey).
      • They are cheapest to employ, so not ideal for cost-cutting; true savings require broader optimization.
      • Cutting juniors disrupts talent pipeline, stifling fresh ideas and future leaders; tech workforce demand grows rapidly.
      • AI boosts productivity, enabling more software creation, but jobs will evolve—fundamentals remain key.

      Hacker News Discussion

      • AI accelerates junior ramp-up by handling boilerplate, APIs, imports, freeing time for system understanding and learning.
      • Juniors ask "dumb questions" revealing flaws, useless abstractions; seniors may hesitate due to face-saving or experience.
      • Need juniors for talent pipeline; skipping them creates senior shortages in 4-5 years as workloads pile up.
      • Team leads foster vulnerability by modeling questions, identifying "superpowers" to build confidence.
      • Debates on AI vs. docs struggle: AI speeds answers but may skip broader discovery; friction aids deep learning.
  2. Dec 2025
    1. In October, the intensified supervision of the municipality by the AP ended. A new team continued energetically with the task of optimising data protection. Signals about the volume of external data traffic prompted further investigation.

      They had just completed a stricter supervision regime by the DPA, and then this comes up immediately afterwards. Ouch. It surfaced because they were monitoring network traffic volumes. That says to me they also know who did the uploading.

    2. A general internal awareness campaign in the area of data protection and privacy. This is being continued, with a focus on improving AI literacy (how to handle AI safely and responsibly). On 18 November 2025 the AI code of conduct was adopted, and a plan of approach for further improving privacy has been drawn up.

      Both these things, internal training and an AI code of conduct, are common in government agencies. Here they come too late, and it is uncertain anyway whether they would have had any effect. Anyone who thinks nothing of uploading internal documents to a public website won't be held back by a rulebook they would not have read anyway.

    3. Public AI websites, such as ChatGPT, were blocked immediately. Since 23 October, employees can only use Copilot within the secured municipal environment.

      As a result of the data breach, public AI websites such as ChatGPT have been blocked internally. Only MS Copilot, embedded in their MS Office suite, is available.

    1. As we are on the precipice of a very large wave of lending, I also have to ask myself, is capitalism itself ready for it? More thoughts behind a paywall

      Is this a reference to new bonds being issued to cover future investment, now that costs are growing beyond what even the biggest players can cover from free cash flow?

    1. Customer service is breaking away from slow replies, overloaded support teams, and repetitive ticket handling. Today’s users expect instant, accurate, and effortless resolution with no waiting and no back-and-forth. This blog explores how AI customer service and agentic AI workflows are redefining that experience by delivering 24/7 autonomous support that thinks, acts, and solves like a digital support team.

      Explore how agentic AI is transforming enterprise customer support with autonomous, context-aware workflows that boost response speed, reduce support load, and enhance CX. Learn real-world applications, technologies, and industry use cases

    1. That is a situation we are now living through, and it is no coincidence that the democratic conversation is breaking down all over the world because the algorithms are hijacking it. We have the most sophisticated information technology in history and we are losing the ability to talk with each other to hold a reasoned conversation.

      for - progress trap - social media - misinformation - AI algorithms hijacking and pretending to be human


    1. the partners will share  best practices to accelerate AI adoption in strategic sectors such as healthcare, manufacturing, energy, culture, science and public services, and support SMEs. They committed to work together on large AI infrastructures and support industry and academia's access to AI compute capacity. They will also explore scientific cooperation on fundamental AI research, and the development of advanced AI models for the public good, including in areas such as extreme weather monitoring and climate change. In addition, the EU and Canada will set up a structured dialogue on data spaces, of particular relevance to the development of large AI models.

      Elements in the MoU:
      - share good practices to support adoption
      - collaborate on large AI infrastructure (Apply AI strategy EU, HPC network)
      - collaborate on access to HPC (in line with AI factories in the EU)
      - explore cooperation in fundamental AI research (weak)
      - development of AI for the public good (in line with EU AI goals)
      - structured dialogue on data spaces (as a data source for AI models)
      Only the last one is not immediately and obviously connected to existing EU efforts and actions.

    1. The company is already developing an AI-to-FPGA platform with which any AI model can run on cheap, EU-produced reconfigurable chips. If they succeed, this could completely remove Europe's dependence on foreign GPU factories, a recurring theme in Vydar's strategy.

      A potential path away from NVIDIA it seems, but not at the moment, the text suggests.

    2. “We don't make our own AI chips,” Crijnen noted, “but because the hardware is purpose-built, we can easily integrate future chips made in Europe. That flexibility is crucial.”

      This suggests they do use NVIDIA Jetson now, but don't need to if alternatives are available?

    3. Their system is now produced 100% in Europe and weighs 30 grams, compared to 176 grams for Jetson-based competitors. It has a power consumption of 3 watts, an efficiency improvement of 88%.

      About a sixth of the weight; and if 3 W represents an 88% efficiency improvement, the implied baseline is roughly 25 W.

    1. [8.11.1] Supports hundreds of AI models via Providers such as Google, OpenRouter, GitHub and locally running models via Ollama.

      Calibre has supported Ollama since 8.11, for the Ask AI tab in the dictionary panel.

    2. New features Allow asking AI questions about any book in your calibre library. Right click the "View" button and choose "Discuss selected book(s) with AI" AI: Allow asking AI what book to read next by right clicking on a book and using the "Similar books" menu AI: Add a new backend for "LM Studio" which allows running various AI models locally

      AI features in Calibre: discuss a book with Calibre, book suggestions, and an LM Studio back-end. I set up Calibre with the LM Studio back-end, so things remain local.

      A posting elsewhere suggested it would also propose better metadata through AI, but that article seems generated itself, so disregarded.

    1. In my working folder I have a collection of “agents”: text files that tell Claude how to behave. Tessa is one of them. When I “load” her, Claude thinks from the perspective of a product owner.

      The author has .md files describing separate 'agents' she involves in her coding work, one for each of the roles in a dev team. Would something like that work for K-work? #openvraag E.g. for project management roles, or for facets you're less fond of yourself?
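      A hypothetical sketch of what such a persona file might look like; only the name Tessa and the product-owner role come from the quoted article, the rules themselves are invented for illustration:

```
# Agent: Tessa (product owner)
When this file is loaded, respond from the perspective of an experienced
product owner on this team.

- Ask what user problem a proposed change solves before discussing implementation.
- Push back on scope creep; prefer the smallest shippable slice.
- Phrase feedback as acceptance criteria (Given / When / Then).
- Do not write code; hand implementation questions back to the developer.
```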

    1. Below is the list of AI books I have read and can recommend. Click straight through to the longer description or scroll on. They are listed in the order in which I finished them: Weapons of Math Destruction: on disastrous algorithms; Code Dependent: on the hidden side of AI; Onze kunstmatige toekomst: on the ethical side of AI; Empire of AI: on the rise of OpenAI; Your face belongs to us: on the rise of ClearView AI; Atlas van de digitale wereld: on the geopolitics of AI; The Digital Republic: on regulating technology; Toezicht houden in het tijdperk van AI: on asking the right questions about AI

      [[Elja Daae]]'s recommended reading list w.r.t. AI. [[Weapons of Math Destruction by Cathy O Neil]] (have it since 2017); [[Code Dependent by Madhumita Murgia]] bought in August at a remainder sale; [[The Digital Republic by Jamie Susskind]] I noted in 2024 as possible reading; [[Atlas van de digitale wereld by Haroon Sheikh]] I have too. The others are unknown to me. Interesting list, as it shaped their view on their role in AI public policy, I presume.

    1. Deploying a machine learning model as an API (Application Programming Interface) allows other applications, systems, or users to interact with your model in real time — sending input data and receiving predictions instantly. This is crucial for putting AI into production, like chatbots, fraud detection, or recommendation engines.

      Learn how to deploy your machine learning model as an API for seamless real-time use. Discover key steps, best practices, and solutions for efficient ML API deployment at CMARIX.
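      A minimal sketch of such a deployment, assuming FastAPI and a scikit-learn regressor that has already been trained and pickled; the file name model.pkl and the flat feature vector are placeholders, not anything specific to the linked article:

```python
# Minimal sketch: serving a pickled regression model over HTTP with FastAPI.
# model.pkl and the flat feature vector are placeholder assumptions.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # a previously trained scikit-learn regressor
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]  # one flat feature vector per request

@app.post("/predict")
def predict(features: Features) -> dict:
    # The API boundary: callers send input data and get a prediction back in real time.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn main:app --reload, then POST {"values": [...]} to /predict
```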

  3. Nov 2025
    1. to make all that happen is going to take a massive public movement. And the first thing you can do is to share this video with the 10 most powerful people you know and have them share it with the 10 most powerful people that they know

      for - best action - AI - cSTP

    2. The default path is companies racing to release the most powerful inscrutable uncontrollable technology we've ever invented with the maximum incentive to cut corners on safety.

      for - quote - AI - default reckless path - The default path is companies racing to release - the most powerful inscrutable uncontrollable technology we've ever invented - with the maximum incentive to cut corners on safety. - Rising energy prices, depleting jobs, creating joblessness, creating security risks, deep fakes. That is the default outcome

    3. We're all worried about, you know, immigration of the other countries next door uh taking labor jobs. What happens when AI immigrants come in and take all of the cognitive labor? If you're worried about immigration, you should be way more worried about AI.

      for - forte - comparison - foreign immigrants Vs AI immigrants - worried about foreign immigrants - should be more worried about AI immigrants

    1. This blog explores how to build AI-Powered Web App with MERN Stack, explaining why the combination of MongoDB, Express.js, React.js, and Node.js is ideal for integrating modern AI capabilities. It covers key tools, real-world use cases, integration steps, and performance tips to help developers create scalable, intelligent, and data-driven web applications.

      Learn how to integrate AI into your web application using the MERN stack. This guide covers key concepts, tools, and best practices for building an AI-powered web app with MongoDB, Express, React, and Node.js.

    1. Nabici w halucynacje AI. W poszukiwaniu prawdy (Taken in by AI hallucinations: in search of the truth)
      • Article warns about AI hallucinations spreading disinformation in journalism, law, and business, spotlighting Polish journalist Karolina Opolska's book with likely AI-generated fake sources and errors.
      • Opolska incident reached 4 million potential contacts (1.6M traditional media, 2.4M social) from Nov 5-19, 2025, impacting 1 in 8 Poles aged 15+; social reaction mostly negative, debating trust in journalists and AI.
      • Polish Exdrog firm lost road tender due to AI-fabricated tax interpretations (1.3M reach, Oct 26-Nov 10, 2025).
      • Global cases include lawyers citing invented precedents and media promoting nonexistent books; BBC/EBU study shows 45% error rate in tools like ChatGPT, Copilot, Gemini, Perplexity.
      • Legal liability unclear (AI maker, provider, or user?); human verification essential.
      • AI content red flags: perfect formatting, clickbait, dubious stats like 93% without method, no timelines/methodology, rapid production, overused LLM words (e.g., "kluczowe" [key], "istotne" [significant], "kompleksowy" [comprehensive], "rewolucyjny" [revolutionary]).
      • IMM analyses from Polish media/social coverage for both cases.
    1. Nano Banana Pro: raw intelligence with tool use
      • Google released Nano Banana Pro (gemini-3-pro-image-preview), a new AI image generation model.
      • Nano Banana Pro excels in general intelligence, tool use, and creating complex scenes with less hallucination.
      • It can use Google Search and Maps to gather data and reason visually through "thought images."
      • Pushing infographic and map generation to new frontiers, enabling visually rich and factually accurate images.
      • Can create detailed photorealistic images based on complex, multi-element prompts.
      • Not reliable for electrical circuit designs yet, as it may produce erroneous circuit diagrams.
      • Human intelligence still surpasses it in domain-specific tasks like accurate circuit design.
      • Nano Banana Pro is seen as a game changer in practical, production-ready AI image generation.
      • Tool use enables more factually accurate and data-driven generated images than previous models.
      • Benchmarking AI image generation quality still needs development for production use assessment.
      • The community is impressed with Nano Banana Pro's nuanced prompt following and image creation capabilities.
    1. Współautorka benchmarku OneRuler: nie pokazaliśmy wcale, że język polski jest najlepszy do promptowania (Co-author of the OneRuler benchmark: we did not show at all that Polish is the best language for prompting)
      • Media circulated a claim that Polish language is best for prompting, but this was not a conclusion from the OneRuler study.
      • OneRuler is a multilingual benchmark testing how well language models process very long texts in 26 languages.
      • Models performed on average best with Polish, but differences compared to English were small and not explained.
      • Polish media prematurely concluded Polish is best for prompting, which the study's authors did not claim or investigate.
      • The benchmark tested models on finding specific sentences in long texts, akin to CTRL+F, a function AI models inherently lack.
      • Another task involved listing the most frequent words in a book; models often failed when asked to acknowledge if an answer was not present.
      • Performance dropped likely because the task required full context understanding, not just text searching.
      • Different books were used per language (e.g. Polish used "Noce i dnie" ["Nights and Days"], English used "Little Women"), impacting the fairness of comparisons.
      • The choice of books was based on expired copyrights, which influenced the results.
      • There is no conclusive evidence from this benchmark that Polish is superior for prompting due to multiple influencing factors.
      • No model achieved 100% accuracy, serving as a caution about language models' limitations; outputs should be verified.
      • Researchers advise caution especially when using language models for sensitive or private documents.
      • The OneRuler study was reviewed and presented at the CoLM 2025 conference.
    1. For instance, a recent analysis by Epoch AI of the total training cost of AI models estimated that energy was a marginal part of total cost of AI training and experimentation (less than 6% in the case of all 4 frontier AI models analyzed), and a recent analysis by Dwarkesh Patel and Romeo Dean estimated that power generation represents roughly 7% of a datacenter’s cost.

      Which paper or article from Romeo Dean and Dwarkesh Patel?

    1. While closed-circuit cooling systems (i.e. where all of the water is recycled and none of it evaporates)[33] are technically feasible, they are more costly and therefore less common.

      This description matches how I understand it too, but the link does not seem to say this - it refers to open loop as cooling a room, and closed loop as a sort of targeted cooling instead.

    2. They are very geographically concentrated - only 32 countries have data centers, and nearly half of them are in the United States. The state of Virginia has the highest density of data centers globally - it is home to almost 35% of all hyperscale data centers worldwide.

      This is a really useful stat. You need a specific definition of datacentre, but it's still handy.

    1. This transition is signaled by focused efforts from several major scientists and technology entities. Meta Chief AI Scientist Yann LeCun has emphasized his intent to pursue world models, while Fei-Fei Li’s World Labs has released its Marble model publicly. Concurrently, Google is testing its Genie models, and Nvidia is developing its Omniverse and Cosmos platforms for physical AI.

      Various examples of world model work: next to Yann LeCun, Fei-Fei Li's World Labs with the Marble model, Google has Genie models, Nvidia Omniverse and Cosmos.

    1. On Tuesday, news broke that he may soon be leaving Meta to pursue a startup focused on so-called world models, technology that LeCun thinks is more likely to advance the state of AI than Meta’s current language models.

      Yann LeCun says world models are more promising. What are world models?

    1. In our latest findings, the share of respondents reporting mitigation efforts for risks such as personal and individual privacy, explainability, organizational reputation, and regulatory compliance has grown since we last asked about risks associated with AI overall in 2022.

      Did they also ask whether those mitigation efforts negate the gains in efficiency / innovation reported for AI?

    2. While a plurality of respondents expect to see little or no effect on their organizations’ total number of employees in the year ahead, 32 percent predict an overall reduction of 3 percent or more, and 13 percent predict an increase of that magnitude (Exhibit 17). Respondents at larger organizations are more likely than those at smaller ones to expect an enterprise-wide AI-related reduction in workforce size, while AI high performers are more likely than others are to expect a meaningful change, either in the form of workforce reductions or increases.

      Interesting to see companies vary in their estimates of how AI will impact the workforce. A third expects a reduction (but not much, about 3%), 13% an increase (AI-related hiring), 43% no change.

    3. with nearly one-third of all respondents reporting consequences stemming from AI inaccuracy (Exhibit 19).

      A third of respondents admit they've seen 'at least once' negative consequences of inaccurate output. That sounds low, as 100% will have been given hallucinations. So 1-in-3 doesn't catch them all before they run up damage. (cf. Deloitte's work in Australia)

    4. The online survey was in the field from June 25 to July 29, 2025, and garnered responses from 1,993 participants in 105 nations representing the full range of regions, industries, company sizes, functional specialties, and tenures. Thirty-eight percent of respondents say they work for organizations with more than $1 billion in annual revenues. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.

      2k self-selected respondents in about 50% of nations. 4/10 are big corporates (over 1 billion USD annual revenue).

    5. McKinsey survey on AI use in corporations, esp. perceptions and expectations. No actual measurements. I suspect it mostly measures the level of hype that respondents currently buy into.

    1. AI checking AI inherits vulnerabilities, Hays warned. "Transparency gaps, prompt injection vulnerabilities and a decision-making chain becomes harder to trace with each layer you add." Her research at Salesforce revealed that 55% of IT security leaders lack confidence that they have appropriate guardrails to deploy agents safely.

      Abstracting away responsibilities is a dead end. Over half of IT security leaders now think there is no way to deploy agentic AI safely.

    2. When two models share similar data foundations or training biases, one may simply validate the other's errors faster and more convincingly. The result is what McDonagh-Smith describes as "an echo chamber, machines confidently agreeing on the same mistake." This is fundamentally epistemic rather than technical, he said, undermining our ability to know whether oversight mechanisms work at all.

      Similarity between models / training data creates an epistemic issue. Using them to control each other creates an echo chamber. Cf. [[Deontologische provenance 20240318113250]]

    3. Yet most organizations remain unprepared. When Bertini talks with product and design teams, she said she finds that "almost none have actually built it into their systems or workflows yet," treating human oversight as nice-to-have rather than foundational.

      Suggests that hardly any AI-using companies are actively prepping for the AI Act's rules wrt human oversight.

    4. We're seeing the rise of a 'human on the loop' paradigm where people still define intent, context and accountability, whilst co-ordinating the machines' management of scale and speed," he explained.

      'Human on the loop' vs 'human in the loop'.

    1. for - search prompt 2 - can an adult who has learned language experience pre-linguistic reality like an infant who hasn't learned language yet? - https://www.google.com/search?q=can+an+adult+who+has+learned+language+experience+pre-linguistic+reality+like+an+infant+who+hasn%27t+learned+language+yet%3F&sca_esv=869baca48da28adf&biw=1920&bih=911&sxsrf=AE3TifNnrlFbCZIFEvi7kVbRcf_q1qVnNw%3A1762660496627&ei=kBAQafKGJry_hbIP753R4QE&ved=0ahUKEwjyjouGluSQAxW8X0EAHe9ONBwQ4dUDCBA&uact=5&oq=can+an+adult+who+has+learned+language+experience+pre-linguistic+reality+like+an+infant+who+hasn%27t+learned+language+yet%3F&gs_lp=Egxnd3Mtd2l6LXNlcnAid2NhbiBhbiBhZHVsdCB3aG8gaGFzIGxlYXJuZWQgbGFuZ3VhZ2UgZXhwZXJpZW5jZSBwcmUtbGluZ3Vpc3RpYyByZWFsaXR5IGxpa2UgYW4gaW5mYW50IHdobyBoYXNuJ3QgbGVhcm5lZCBsYW5ndWFnZSB5ZXQ_SKL1AlAAWIziAnAPeAGQAQCYAaEEoAHyoAKqAQwyLTE0LjczLjE0LjO4AQPIAQD4AQGYAlSgApnFAcICBBAjGCfCAgsQABiABBiRAhiKBcICDRAAGIAEGLEDGEMYigXCAgsQLhiABBixAxiDAcICDhAuGIAEGLEDGNEDGMcBwgIEEAAYA8ICBRAuGIAEwgIKECMYgAQYJxiKBcICChAAGIAEGEMYigXCAg4QLhiABBixAxiDARiKBcICExAuGIAEGLEDGNEDGEMYxwEYigXCAggQABiABBixA8ICCBAuGIAEGLEDwgIFEAAYgATCAgsQLhiABBixAxiKBcICCxAAGIAEGLEDGIoFwgIGEAAYFhgewgILEAAYgAQYsQMYgwHCAgsQABiABBiGAxiKBcICCBAAGKIEGIkFwgIIEAAYgAQYogTCAgUQABjvBcICBhAAGA0YHsICBRAhGKABwgIHECEYoAEYCsICBRAhGJ8FwgIEECEYFcICBBAhGAqYAwCSBwwxMy4wLjguNTIuMTGgB-K1A7IHCTItOC41Mi4xMbgHgcUBwgcHMzUuNDcuMsgHcQ&sclient=gws-wiz-serp - from - search prompt 1 - can we unlearn language? - https://hyp.is/Ywp_fr0cEfCqhMeAP0vCVw/www.google.com/search?sca_esv=869baca48da28adf&sxsrf=AE3TifMGTNfpTekWWBdYUA96_PTLS9T00A:1762658867809&q=can+we+unlearn+language?&source=lnms&fbs=AIIjpHxU7SXXniUZfeShr2fp4giZ1Y6MJ25_tmWITc7uy4KIegmO5mMVANqcM7XWkBOa06dn2D9OWgTLQfUrJnETgD74qUQptjqPDfDBCgB_1tdfH756Z_Nlqlxc3Q5-U62E4zbEgz3Bv4TeLBDlGAR4oTnCgPSGyUcrDpa-WGo5oBqtSD7gSHPGUp_5zEroXiCGNNDET4dcNOyctuaGGv2d44kI9rmR9w&sa=X&ved=2ahUKEwj4_LP9j-SQAxVYXUEAHVT8FfMQ0pQJegQIDhAB&biw=1920&bih=911&dpr=1 - to - search prompt 2 (AI) - can an adult who has learned language re-experience pre-linguistic phenomena like an infant with no language training? - https://hyp.is/m0c7ZL0jEfC8EH_WK3prmA/www.google.com/search?q=can+an+adult+who+has+learned+language+re-experience+pre-linguistic+phenomena+like+an+infant+with+no+language+training?&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRiPAjIHCAIQIRiPAtIBCTQzNzg4ajBqN6gCALACAA&sourceid=chrome&ie=UTF-8&udm=50&ved=2ahUKEwjfrLqDm-SQAxWDZEEAHcxqJgkQ0NsOegQIAxAB&aep=10&ntc=1&mstk=AUtExfAG148GJu71_mSaBylQit3n4ElPnveGZNA48Lew3Cb_ksFUHUNmWfpC0RPR_YUGIdx34kaOmxS2Q-TjbflWDCi_AIdYJwXVWHn-PA6PZM5edEC6hmXJ8IVcMBAdBdsEGfwVMpoV_3y0aeW0rSNjOVKjxopBqXs3P1wI9-H6NXpFXGRfJ_QIY1qWOMeZy4apWuAzAUVusGq7ao0TctjiYF3gyxqZzhsG5ZtmTsXLxKjo0qoPwqb4D-0K-uW-xjkyJj0Bi45UPFKl-Iyabi3lHKg4udEo-3N4doJozVNoXSrymPSQbr2tdWcxw93FzdAhMU9QZPnl89Ty1w&csuir=1&mtid=WBYQaYfuHYKphbIPzYmKiAs

    1. Will artificial intelligence become better at thinking than humans? Cognitive scientist Peter Gärdenfors explains why this is not the case.  Human intelligence consists of a range of different skills and specialities that have been refined over thousands of years. Much remains before artificial intelligence can measure up to the kind of thinking that not only humans but also animals possess. Once we understand that our intelligence is a broad palette of many different abilities, the thought of AI technology beating us at chess and writing advanced texts no longer seems as frightening. Drawing on a broad body of research, Gärdenfors explains why AI technology cannot, and will not be able to, think the way humans and animals do. »Peter Gärdenfors is awarded Natur & Kultur's debate book prize 2025 for deepening the central concepts of the AI debate and challenging its premises. With accessible language and solid scholarship he lays bare the evolutionarily honed mechanisms of thinking, and sharpens the picture of what intelligence is and what place technology occupies in our digital world.« – The jury's citation

      [[Kan AI tänka by Peter Gärdenfors]] via Sven Dahlstrand, dahlstrand.net. Published Oct 2024. Seeks to define what thinking actually is, and how that plays out in other animals and humans. The 2nd part goes into software systems and AI and how they work in comparison.

    1. AI is Making Us Work More
      • AI, intended to free workers, is causing longer work hours and increased pressure, spreading 996 culture to Western AI startups.
      • AI tools never tire, creating psychological pressure to constantly work and increasing feelings of guilt during rest.
      • Historical advances like lamps and bulbs extended work hours; AI similarly shifts "can work" into "should work."
      • Philosopher Byung-Chul Han's "Burnout Society" concept shows internalized self-discipline drives overwork, amplified by AI's "excess of positivity."
      • The hyper-productivity loop leads to burnout, reduced creativity, and diminishing returns despite increased effort.
      • Rest is framed as resistance and vital for innovation, which thrives on reflection, not constant activity.
      • The key challenge is adopting a healthy culture around AI use that avoids exploitation and preserves human well-being.
  4. Oct 2025
    1. Building fair AI systems is a continuous and deliberate effort. The model needs to be accurate but also maintain fairness, transparency and accountability.

      Learn practical strategies to design AI systems that avoid bias and ensure fairness. Discover techniques like diverse data, transparent algorithms, and robust evaluation pipelines to build ethical AI.

    2. AI systems are powerful tools, but if not built carefully, they can reinforce societal biases and make unfair decisions. Ensuring fairness and equity in AI is not just a technical challenge, but also a responsibility towards the development of ethical AI.

      Learn practical strategies to design AI systems that avoid bias and ensure fairness. Discover techniques like diverse data, transparent algorithms, and robust evaluation pipelines to build ethical AI.

    1. AWS Transcribe vs Deepgram vs Whisper, which speech-to-text solution should you choose for your voice enabled applications? Each platform is great in different areas like speed, accuracy, cost, and flexibility. This guide compares their strengths and limitations to help you pick the STT solution that fits your project and long-term goals.

      Compare AWS Transcribe, Deepgram, and Whisper for speech-to-text accuracy, pricing, integrations, and use cases. Find the best AI transcription service for your business.
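
      A minimal sketch of the local, open-source Whisper route (one of the three options being compared); the file name "meeting.mp3" and the "base" model size are placeholder assumptions:

        # pip install openai-whisper
        import whisper

        # Load a small local model; larger sizes ("medium", "large") trade speed for accuracy.
        model = whisper.load_model("base")

        # Transcribe a local audio file; the result is a dict with the full text plus timed segments.
        result = model.transcribe("meeting.mp3")
        print(result["text"])

      AWS Transcribe and Deepgram are hosted APIs instead, so the practical choice is often local control and per-minute cost vs. managed accuracy, streaming support, and scale.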

    1. AI in WordPress development is changing the way websites are created and managed. It helps developers automate routine tasks, optimize performance, and deliver personalized user experiences. By integrating AI plugins or tools, WordPress sites can achieve faster design processes, smarter content generation, and overall improved functionality that enhances both visitor engagement and development efficiency.

      Explore how AI in WordPress development is reshaping websites, automating content creation, enhancing user experience with chatbots, and optimizing performance plugins. Learn top AI integration strategies, plugins, and best practices for modern WordPress sites.

    1. Amazon Plans to Replace More Than Half a Million Jobs With Robots
      • Internal documents reviewed by The New York Times show Amazon plans to automate up to 75% of its operations in the coming years.
      • The company expects automation to replace or eliminate over 500,000 U.S. jobs by 2033, primarily in warehouses and fulfillment centers.
      • By 2027, automation could allow Amazon to avoid hiring around 160,000 new workers, saving about 30 cents per package shipped.
      • This strategy is projected to save $12.6 billion in labor costs between 2025 and 2027.
      • Amazon’s workforce tripled since 2018 to approximately 1.2 million U.S. employees, but automation is expected to stabilize or reduce future headcount despite rising sales.
      • Executives presented to the board that automation could let the company double sales volume by 2033 without needing additional hires.
      • Amazon’s Shreveport, Louisiana warehouse serves as the model for the future: it operates with 25% fewer workers and about 1,000 robots.
      • A new facility in Virginia Beach and retrofitted older ones like Stone Mountain, Georgia, are following this design, which may shift employment toward more temporary and technical roles.
      • The company is instructing staff to use softer language—such as “advanced technology” or “cobots” (collaborative robots)—instead of terms like “AI” or “robots,” to ease concerns about job loss.
      • Amazon has begun planning community outreach initiatives (parades, local events) to offset the reputational risks of large-scale automation.
      • The company has denied that the documents represent official policy, claiming they reflect the views of one internal group, and emphasized ongoing seasonal hiring (250,000 roles for holidays).
      • Analysts suggest this plan could serve as a blueprint for other major employers, including Walmart and UPS, potentially reshaping U.S. blue‑collar job markets.
      • The automation push continues a trajectory started with Amazon’s $775 million acquisition of Kiva Systems in 2012, which introduced mobile warehouse robots that revolutionized internal logistics.
      • Recent innovations include robots like Blue Jay, Vulcan, and Proteus, aimed at performing tasks such as sorting, picking, and packaging with minimal human oversight.
      • Long-term, Amazon may require fewer warehouse workers but more robot technicians and engineers, signaling a broader shift in labor type rather than total employment.
    1. What you’re doing: Turning one-off prompts into reusable systems. Once you’ve perfected a workflow, you have a proven recipe. Now you can decide how to operationalise it. There are three options:
      • Create a Prompt Template when you want to use it regularly, for personal reuse only
      • Build a Custom GPT or Bot when you want to share a task-specific workflow with a team, for cross-team quality and efficiency gains
      • Create an Automated Agent when you want to trigger the workflow automatically in certain conditions

      How to create reusable systems
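
      A minimal sketch of the first option, a personal prompt template in Python; the template wording and field names are invented for illustration:

        # A reusable prompt "recipe": the structure stays fixed, only the inputs change per run.
        MEETING_SUMMARY_PROMPT = (
            "You are a precise note-taker.\n"
            "Summarise the following meeting transcript in {bullet_count} bullet points,\n"
            "then list open action items with their owners.\n\n"
            "Transcript:\n{transcript}"
        )

        def build_prompt(transcript: str, bullet_count: int = 5) -> str:
            """Fill the template so the same proven workflow can be reused on any transcript."""
            return MEETING_SUMMARY_PROMPT.format(transcript=transcript, bullet_count=bullet_count)

        print(build_prompt("Alice: we ship Friday. Bob: docs are still missing.", bullet_count=3))

      The same template text is what you would paste into a Custom GPT's instructions (option two) or wire behind an automatic trigger (option three).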

    2. File format matters. Here’s the reliability ranking for how well AI reads different formats:
      • .txt / .md — Minimal noise, clear structure (best)
      • JSON / CSV — Great for structured data
      • DOCX — Fine if formatting is simple
      • Digital PDFs — Extraction can mix headers, footers, columns
      • PPTX — Text order can be unpredictable
      • Scanned PDFs / images — Worst; requires OCR, highly error-prone

      How AI reads file formats and what they are good for
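
      A small sketch of that difference in practice: reading Markdown is a plain file read, while PDF text has to be reconstructed page by page (shown here with the pypdf package; the file names are placeholders):

        from pathlib import Path

        from pypdf import PdfReader  # pip install pypdf

        # .txt / .md: the text arrives exactly as written, so the model sees clean structure.
        notes = Path("notes.md").read_text(encoding="utf-8")

        # Digital PDF: text is re-assembled from the page layout, which is where headers,
        # footers and multi-column reading order can leak into the extracted string.
        reader = PdfReader("report.pdf")
        report = "\n".join(page.extract_text() or "" for page in reader.pages)

        print(len(notes), len(report))

      Scanned PDFs and images would additionally need an OCR pass before any text exists at all, which is why they sit at the bottom of the ranking.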

    1. Introduction: AI is now everywhere, but we still need humans

    1. Discover how computer vision in AI is transforming industries by giving machines the ability to “see” and understand the visual world. This guide explores its core technology, diverse applications from self-driving cars to healthcare, and future trends, all while emphasizing its profound impact on business and daily life.

      A deep dive into how computer vision and AI are transforming industries, from healthcare diagnostics and autonomous vehicles to retail and manufacturing. Learn core technologies, real‑world applications, and future trends for leveraging visual intelligence in your business.

  5. Sep 2025
    1. Deciding between AI vs traditional software isn’t easy. Businesses struggle to decide between reliability and innovation. Do you stick with proven, rule-based systems or invest in adaptive, data-driven AI? This blog breaks down the differences, advantages, and use cases so you can make the right choice for your business.

      Compare AI vs Traditional Software Development to see which delivers better ROI. Explore cost, scalability, adaptability & when each model suits your business best.

  6. resu-bot-bucket.s3.ca-central-1.amazonaws.com
    1. Developed an end-to-end full-stack web application to help students locate nearby study spots, track study sessions, and create study groups.

      Include user metrics or feedback that demonstrate the app's effectiveness or popularity.

    2. Led the development of a Telegram Bot that parses natural language commands to allow fast, secure expense-splitting on Aptos blockchain directly in your group chat.

      Add details on user adoption rates or how this improved user experience or efficiency.

    3. Trained a PyTorch neural network to classify forehand vs backhand shot techniques based on player joint positions, achieving 87% test accuracy.

      Explain the significance of 87% accuracy in practical terms, such as its effect on performance analysis.

    4. Implemented an upload-to-review system with AWS S3 for uploads, Hypothes.is for in-line resume annotations, and version tracking via DynamoDB, driving fast and iterative peer reviews.

      Clarify how much faster the review process became due to this implementation.

    5. Developed a Discord bot to streamline collaborative resume reviews for 2,000+ students, eliminating cluttered review threads and combining both peer and AI-powered resume annotations directly in Discord.

      Quantify the reduction in time spent on reviews or improvement in review quality.

    6. Redesigned layout and fixed critical responsiveness issues on 10+ web pages using Bootstrap, restoring broken mobile views and ensuring consistent, functional interfaces across devices.

      Include metrics on user engagement or satisfaction post-redesign to highlight impact.

    7. Developed dashboards for an internal portal with .NET Core, C#, and jQuery, eliminating the need for 100+ complex spreadsheets and enabling 30+ executives to securely access operational, financial, and customer data.

      Add a statement on how this improved decision-making or efficiency for the executives.

    8. Spearheaded backend unit testing automation for the shift-bidding platform using xUnit, SQLite, and Azure CI/CD Pipelines, contributing 40+ tests, identifying logic errors, and increasing overall test coverage by 15%.

      Explain how the increased test coverage improved system reliability or reduced bugs.

    9. Automated monthly shift-bid data transfers into the company HR system for 700+ employees using C#, SQL, and Azure Functions, saving supervisors hours of manual entry each month.

      Quantify 'hours saved' to provide a clearer impact of your automation efforts.

    10. Led the development of an Agentic AI staff scheduling app with React, C#/.NET, and Azure OpenAI, automating schedule templates for 12,000+ monthly flights and ensuring compliance with a RAG Policy chatbot.

      Specify the percentage improvement in scheduling efficiency or time saved due to automation.

    1. Current intellectual property laws constitute an “anti-constitutional” barrier to the transformative potential of artificial intelligence (AI), systematically frustrating the explicit purpose of the Intellectual Property (IP) Clause.

      This article reports that Anthropic has agreed to pay out a $1.5 billion settlement for copyright violations while training their Claude AI tool on books found on the Internet. That works out to be about $3000 per book.

      The whole idea of books (at least nonfiction books) is that readers are supposed to learn from them. But now if actual learning from them takes place it's a $3000 charge!

      It used to be that violating a copyright required copying, not merely training. What's more, in the USA the sole justification for government-enforced monopolies on intellectual property is Article I, Section 8, Clause 8 of the U.S. Constitution, which authorizes copyrights and patents only "to promote the Progress of Science and useful Arts," and only "for limited Times," and only "to Authors and Inventors." By extending copyright duration to the "author's life plus 70 years," Congress flouted those restrictions, and this precedent tramples them further by clearly impeding the progress of science and useful arts.

  7. resu-bot-bucket.s3.ca-central-1.amazonaws.com
    1. Instructed 1,000+ students on manufacturing best practices, emphasizing safety and build quality.

      Quantify the impact of your instruction. Did it lead to fewer errors or higher quality projects? Provide metrics.

    2. Trained over 100 students every semester on the safety protocols and applicable use cases for all MakerSpace equipment including 3D printers(FDM/SLA), laser cutters, CNC Machines, thermal formers, hand/power tools.

      Include the impact of your training. Did it lead to improved safety records or student confidence?

    3. Developed python-based computer vision dice recognition application capable of detecting and logging results for multiple dice types (D4–D20).

      Mention the user base or potential applications of this project. Who would benefit from it?

    4. Created standards for employee software interaction, improved efficiency, reducing operation costs by 40%.

      Detail what specific standards were created. How did they lead to the 40% cost reduction? Be more specific.

    5. Revised, modularized, and updated old assembly program to a modern code base removing 22 detected bugs enabling future feature implementation.

      Explain how bug removal improved functionality or user experience. Provide examples of features enabled.

    6. Unified three isolated programs into one software solution utilizing Java, PHP, SQL(MySQL), and RESTful API, removing the need for paper communication digitizing employee work.

      Quantify the impact of digitizing work. How much time or cost was saved? Include specific metrics.

    7. Supported 45 project groups with project management including Project Charter, Scope, DOD, Stakeholder management, WBS/WBS dictionary, scrum ceremonies, risk assessment, Agile, lifecycle, and product handover.

      Clarify your role in project management. Did you lead or facilitate? Highlight your direct contributions.

    8. Planned and implemented creative projects following the school’s curriculum and objectives, improving students’ understanding of course material, resulting in an average of a letter grade improvement.

      Specify how you measured the improvement in understanding. Include metrics or feedback to enhance impact.

    1. for - consciousness, AI, Alex Gomez-Marin, neuroscience, hard problem of consciousness, nonmaterialism, materialism - progress trap - transhumanism - AI - war on consciousness

      Summary
      - Alex advocates for a nonmaterialist perspective on consciousness and argues that there is an urgency to educate the public on this perspective, due to the transhumanist agenda that could threaten the future of humanity.
      - He argues that whether consciousness is best explained by materialism or not is central to resolving the threat posed by the direction AI takes.
      - In this regard, he interprets the very words that David Chalmers chose to articulate the Hard Problem of Consciousness as revealing the assumption of a materialist reference frame.
      - He used a legal metaphor to illustrate his point: when a lawyer poses the question "how did you kill that person", the question entraps the accused. It already contains the assumption of guilt.
      - I would characterize his role as a scientist who practices as an authentic seeker of wisdom: he will learn from a young child if they have something valuable to teach, and will help educate a senior if they have something to learn.
      - The efficacy of timebinding depends on authenticity and is harmed by dogma.

  8. resu-bot-bucket.s3.ca-central-1.amazonaws.com