1,804 Matching Annotations
  1. Last 7 days
    1. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      sentence about eye-tracking

    2. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      any sentence about eye-tracking, eye-trackers, etc.

    3. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      any sentence about eye-tracking, eye-trackers, etc.

    4. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      any sentence about eye-tracking, eye-trackers, etc.

    5. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross-sentence relationships pane-sentence order, role-coordinated highlighting, and alignment-work best in concert, not alone.

      sentence about eye-tracking

    6. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      sentence about eye-tracking

    7. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      sentence about eye-tracking

    8. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      sentence about eye-tracking

    9. an ablation study with eye-tracking (Section 5) revealed that the three key features of ABSTRACTEXPLORER's central cross- sentence relationships pane-sentence order, role-coordinated high- lighting, and alignment-work best in concert, not alone.

      sentence about eye-tracking

    10. AbstractExplorer used variation affordances present in prior systems, e.g., color-coordinated highlighting of analogous text in Gero et al. [18], and introduced new ones, such as alignment of sentences based on analogous chunks within them, which had only been hypothesized in prior work.

      sentence related to Structural Mapping Theory (SMT)

    11. Structural Mapping Theory (SMT) is a long-standing well-vetted theory from Cognitive Science that describes how humans attend to and try to compare objects by finding mental representations of them that can be structurally mapped to each other (analogies).

      sentence related to Structural Mapping Theory (SMT)

    12. AbstractExplorer instantiates new minimally lossy SMT-informed techniques for skimming, reading, and reasoning about a corpus of similarly structured short documents.

      sentence related to Structural Mapping Theory (SMT)

    13. Lossless SMT-informed techniques have yet to be brought to bear in the context of researchers familiarizing themselves with a corpus of existing literature.

      sentence related to Structural Mapping Theory (SMT)

    14. This SMT-informed approach, which AbstractExplorer shares, tries to give this mental machinery “a leg up,” letting users perhaps skip some steps by accepting reified cross-document relationships identified by the computer.

      sentence related to Structural Mapping Theory (SMT)

    15. system designers have leveraged Structure-Mapping Theory (SMT) to facilitate seeing both the overview and the details at the same time, facilitating abstraction without losing context.

      sentence related to Structural Mapping Theory (SMT)

    16. ¹Structural Mapping Theory (SMT) is a long-standing well-vetted theory from Cognitive Science that describes how humans attend to and try to compare objects by finding mental representations of them that can be structurally mapped to each other (analogies).

      ¹Structural Mapping Theory (SMT) is a long-standing well-vetted theory from Cognitive Science that describes how humans attend to and try to compare objects by finding mental representations of them that can be structurally mapped to each other (analogies).

    17. This SMT-informed approach, which AbstractExplorer shares, tries to give this mental machinery “a leg up,” letting users perhaps skip some steps by accepting reified cross-document relationships identified by the computer. The revealed variation within these analogous cross-document relationships can invite the user’s engagement. This is the essence of comparative close reading, a dialectical activity [73] that requires repeated deep engagement with the texts to reveal new insights.

      This SMT-informed approach, which AbstractExplorer shares, tries to give this mental machinery “a leg up,” letting users perhaps skip some steps by accepting reified cross-document relationships identified by the computer. The revealed variation within these analogous cross-document relationships can invite the user’s engagement. This is the essence of comparative close reading, a dialectical activity [73] that requires repeated deep engagement with the texts to reveal new insights.

    18. Recent prior work has shown that it is possible to help people read and reason about a corpus of short documents without employing lossy document representations. For example, for collections of code examples written with similar purposes but using different libraries, ParaLib [69] used color-coordinated role highlights to reveal cross-example commonalities and distinctions. The Positional Diction Clustering (PDC) algorithm identified analogous sentences across many LLM responses, which were reified both as color-coordinated cross-document analogous text highlighting (like ParaLib) and in a novel ‘interleaved’ view where analogous sentences across documents were rendered in adjacent rows to enable more easy comparison [18]. These examples of text-centric lossless techniques do not abstract away or summarize; they strategically re-organize and re-render the existing text to help enhance readers’ own perceptual cognition, informed by Structural Mapping Theory (SMT) [17].1 The human perceptual, comparative mental machinery that SMT describes is part of what enables humans to form more abstract structured mental models from concrete examples, among other critical knowledge tasks.

      Recent prior work has shown that it is possible to help people read and reason about a corpus of short documents without employing lossy document representations. For example, for collections of code examples written with similar purposes but using different libraries, ParaLib [69] used color-coordinated role highlights to reveal cross-example commonalities and distinctions. The Positional Diction Clustering (PDC) algorithm identified analogous sentences across many LLM responses, which were reified both as color-coordinated cross-document analogous text highlighting (like ParaLib) and in a novel ‘interleaved’ view where analogous sentences across documents were rendered in adjacent rows to enable more easy comparison [18]. These examples of text-centric lossless techniques do not abstract away or summarize; they strategically re-organize and re-render the existing text to help enhance readers’ own perceptual cognition, informed by Structural Mapping Theory (SMT) [17].1 The human perceptual, comparative mental machinery that SMT describes is part of what enables humans to form more abstract structured mental models from concrete examples, among other critical knowledge tasks.

  2. Mar 2026
  3. Feb 2026
    1. The real annoying thing about Opus 4.6/Codex 5.3 is that it’s impossible to publicly say “Opus 4.5 (and the models that came after it) are an order of magnitude better than coding LLMs released just months before it” without sounding like an AI hype booster clickbaiting, but it’s the counterintuitive truth to my personal frustration
    1. A generative AI like ChatGPTData Analyst can take on the role of the evaluation soft-ware. It is expected that this manner of use will make thestudents' work easier, as less emphasis needs to be placedon the programming itself. Instead, teachers can incorpo-rate exercises that encourage students to code more effi-ciently and accurately with the assistance of AI. Thisshifts the focus from finding the right command or func-tion to examining and understanding the data moreclosely. As a consequence, students are better enabled tointerpret the results of statistical evaluation software cor-rectly, thus fulfilling goal 8 of the GAISE report.

      rhetoric: Schwarz uses a statement of transition to contrast the old education model (rote memorization of commands) with a new required model (critical examination).

      inference: This supports the argument that education and labor must start to pivot away from the "Generalist" process-oriented tasks. If the machine assistants handle the 'How' (the commands and functions), then the human must focus more on the 'Why' and the 'what does it mean (understanding/wisdom)'. This helps to validate the work of the assistants and helps to make it useful and valuable in the real world.

    2. statistical knowledge is still required in order toformulate the correct prompts and to ensure that the AIdoes not leave out any step of the analysis.

      rhetoric: author presents a prescriptive claim that AI needs humans with competent knowledge (in this case, statistics) to create prompts and ensure that the AI does not leave out any steps of the analysis. He positions domain knowledge not as a tool for using AI for statistical analysis, but a prerequisite for management of the AI and auditing the output.

      inference: In addition to policing and correcting the AI outputs, the deep domain knowledge is what allows the AI to do complex data analysis without mistakes, hallucinated results, or mathematically false outcomes. This is basically the job description of a human with "Augmented Human Wisdom". The human's value is no longer in doing math, but in possessing the vertical expertise (flesh/wisdom) to know exact what math needs to be done and ultimately auditing the assistant machine's work.

    3. ChatGPT Data Analyst clearly produced a false resulthere, precisely because the application assumptions for theANOVA were not checked.

      rhetoric: Schwarz employs cause-and-effect reasoning here based on empirical testing. He links a specific technical failure (not checking assumptions) to a definitive unwanted outcome (a false result).

      inference: the "Data Analyst" function of ChatGPT hallucinated a result during the use of it's core function! This is the best evidence so far of the 'Crisis of Truth' and the dangers of the 'Headless Automatons' in my essay. If a generalist with no deep knowledge uses AI, they are at great risk of blindly accepting mathematically false conclusions. Synthetic syntax without competent human validation is a liability.

    4. The results show that generative AI canfacilitate data analysis for individuals with minimal knowledge of statistics,mainly by generating appropriate code, but only partly by following standardprocedures.

      rhetoric: author uses comparative, objective statement (logos) to establish the main boundary of the technology's capability/capacity -- it excels at technical generation (things like coding) but fails at standard procedures (methodological adherence to SOPs).

      inference: the proves the 'Raising the Floor' concept. AI completely automates the entry-level syntax (the "Word"), meaning that the Generalist coder is obsolete! However, because it fails at standard procedures, it requires a human architect to guide it to outputs that are valuable in the real world.

    1. PWA have language deficits that require bespoke AAC supports. These supports may beenhanced by LLMs in software systems that use spoken user input to provide relevantsuggestions that have grammatical and speech production support.

      rhetoric: concluding statement. this positions the LLM as an 'enhancement' to physical human limitation, rather than a replacement of the human subject.

      inference: This helps to validate the 'Augmented Human Wisdom' model. The future of AI is NOT replacing humans, but AI acting as a high-powered syntax engine that is strictly guided by human needs and human intent. The AI does not have 'agency', as it is a software tool that helps the human to execute their visions.

    2. Perseverations that are input into the system are essentially mag-nified by the system’s suggested sentences,

      rhetoric: authors explain an unintended consequence of using the AI tool: it scales the errors or the emptiness of the human prompt.

      inference: this is an excellent metaphor for the 'manager fallacy'. If the human user in incompetent (or provides empty or incomplete input), the AI does not magically create wisdom -- it just amplifies the user's incompetence in a a highly articulate synthetic thought.

    3. Participant 2 stated the age of her daughters (“Name1 is 18, Name2 is21”), Aphasia-GPT transformed it as “Name1 is 18 and 21”, which is an impossible, butrelated, hallucination

      rhetoric: researchers use a specific, clinical observation of an error to demonstrate the model's inability to comprehend logical reality despite the human relaying a perfectly structured sentence.

      inference: this shows that AI is amoral and lacks the lived experience necessary to make logical judgments that work in the real world. It can format a sentence beautifully, but it does not/will not always understand that a single human cannot be two ages at once. This is why it is very important/necessary for the "flesh" to text the output against reality

    4. Aphasia-GPT is a real-time, AI-enabled web app designed to expand the words providedby a user into complete sentences as suggestions for a user to select.

      rhetoric: authors provide a definition of their creation (Aphasia-GPT) to describe it's mechanism: taking a fragmented input and expanding it into a fully structured, complete output.

      inference: this is the embodiment of Harari's primary metanym of the word v flesh (syntax v human). in this example, Aphasia-GPT provides the words (syntax) to the fleshy human that struggles with those words, while also relying on the human to spark the intent of the communication. The human is using AI to communicate with words, because the words are very difficult for the human.

    1. The cost of the time that it takes fix "workslop" could add up too, with a $186 monthly cost per employee on average, according to a survey of desk workers by BetterUp in partnership with the Stanford Social Media Lab. Forty percent of the workers surveyed said they received "workslop" in the last month and that it took an average of two hours to resolve each incident.

      $186/per employee/per month!

      10 employees = ($22,320) 25 employees = ($55,800) 50 employees = ($111,600) 100 employees = ($223,200) 250 employees = ($558,000) 500 employees = ($1,116,000) 1000 employees = ($2,232,000)

    2. “Younger workers aren’t necessarily more careless, but they’re often using AI more frequently and earlier in their workflows," Dennison said. "There is also a training gap. Organizations often assume younger employees intuitively understand AI, yet provide little guidance on verification, risk, or appropriate use cases. As a result, AI may be treated as an answer engine rather than a support tool."

      this is another great quote, which helps to establish how orgs treat younger generations, and how they tend to overtrust their understanding of AI.

    3. 58% said direct reports submitted work that contained factual inaccuracies generated by AI tools, while fewer reported that AI failed to account for critical contextual factors. Other issues cited include low-quality content, poor recommendations and inappropriate messaging.

      from reporting managers, 58% of them said that employees were submitting work that contained factual inaccuracies in the work that was generated by AI, and that fewer of them reported that AI failed to account for "critical contextual factors", implying that the writing was generic and not directly applicable to the context that the writing was written in. Other issues were: low quality content, poor recommendations and inappropriate messaging.

    4. 59% of managers saying that they had to invest additional time to correct or redo work created by AI. Similarly, 53% said their direct reports had to take on extra work, while 45% said they had to bring in co-workers to help fix the mistake.

      Extra time and money spent to repair errors made by AI but not caught by the human in the middle. 59% is almost 2/3 (closer to 3/5) needed to correct or redo the work created by AI without a human auditing it. 53% claim extra work is needed to repair the AI mistakes, and 45% also needed to bring in a (perhaps more senior) co-worker to help fix the mistake. I can imagine workers needing to work on a mistake the hits production code, and all of the thousands (or more) mistakes that would need to be later repaired and rolled back. very expensive and costly.

    5. While 18% of managers said they did not suffer any financial losses from the mistakes, and 20% said those losses were less than $1,000, a significant number reported bigger losses. Twelve percent said those losses were more than $25,000, while 11% said between $10,000 and $24,999. Another 27% placed the value of those losses above $1,000 but below $10,000.

      great stats for the cost of using AI without human auditing.

    6. “AI is reliable when used as an assistant, not a decision-maker," Dennison said. "Without human judgment and clear processes, speed becomes a risk, and efficiency gains can turn into costly mistakes,”

      great quote. directly mentions my concept of requiring human judgement, and how not having a human in the loop can make work move faster, but can also lead to very costly mistakes.

    7. “Employees treat AI outputs as finished work rather than as a starting point. Current AI tools are very good at generating fluent content, but they don’t understand context, business nuance, risk, or consequences. That gap shows up in factual errors, missing constraints, poor judgment calls, and tone misalignment.”

      another great quote -- ties into the abdicating human agency to a robot, and the full quote even illustrates the dangers of doing so.

    1. AI fatigue is real and nobody talks about it

      Summary of "AI Fatigue is Real"

      • The Productivity Paradox: AI significantly speeds up individual tasks (e.g., turning a 3-hour task into 45 minutes), but this doesn't lead to more free time. Instead, the baseline for "normal" output shifts, and the work expands to fill the new capacity, leading to a relentless pace.
      • From Creator to Reviewer: Engineering work is shifting from "generative" (energizing, flow-state tasks) to "evaluative" (draining, decision-fatigue tasks). Developers now spend their days as "quality inspectors" on an unending assembly line of AI-generated code.
      • The Cost of Nondeterminism: Engineers are trained for determinism (same input = same output). AI’s probabilistic nature creates a constant cognitive load because the output is always "suspect," requiring more rigorous review than code written by a trusted human colleague.
      • Context-Switching Exhaustion: Because tasks are "faster," engineers now touch 6–8 different problems a day instead of focusing on one. The mental cost of switching contexts so frequently is "brutally expensive" for the human brain.
      • Skill Atrophy: Much like GPS has weakened our innate sense of direction, over-reliance on AI coding tools can cause core technical reasoning and mental mapping of codebases to atrophy.
      • Strategies for Sustainability:
        • Time-boxing: Setting strict timers for AI sessions to avoid "prompt spirals."
        • Separating Phases: Dedicating mornings to deep thinking and afternoons to AI-assisted execution.
        • Accepting "Good Enough": Setting the bar at 70% usable output and fixing the rest manually to reduce frustration.
        • Strategic Hype Management: Ignoring every new tool launch and focusing on mastering one primary assistant.
    1. The scenarios Wooldridge imagines include a deadly software update for self-driving cars, an AI-powered hack that grounds global airlines, or a Barings bank-style collapse of a major company, triggered by AI doing something stupid. “These are very, very plausible scenarios,” he said. “There are all sorts of ways AI could very publicly go wrong.”

      Scenario's for a Hindenburg style event: - deadly software update for self driving cars - AI-powered hacking ground global airlines (not sure, if that is clear enough to people, unlike the self driving cars running amok) - Barings-style collapse of a major company triggered by AI (if it's a tech company, it may be less shock, more ridicule, but still)

    2. “It’s the classic technology scenario,” he said. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

      true for AI, but wasn't the case for Hindenburg I'd say.

    3. The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned.Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products’ capabilities and potential flaws are fully understood.

      prediction Michael Wooldridge (Oxford, AI), sees a risk at an 'Hindenburg' event. Shattering the global confidence in AI tech. I"m not sure this analogy entirely fits other than in its potential impact (AI isn't globally trusted, the Hindenburg did not fail bc of the tech itself but bc helium not being allowed to export from the US at the time. Still the Hindenburg did put an end to the entire zeppelin industry yes. No matter the causes.)

    1. OpenClaw, like many other open-source tools, allows users to connect to different AI models via an application programming interface, or API. Within days of OpenClaw’s release, the team revealed that Kimi’s K2.5 had surpassed Claude Opus and became the most used AI model—by token count, meaning it was handling more total text processed across user prompts and model responses.

      Wow, I had no idea that Kimi 2.5 had subbed in for Claude Opus so quickly.

    1. Low-cost Chinese AI models forge ahead, even in the US, raising the risks of a US AI bubble Nvidia’s latest earnings report reassured some. But Chinese AI models are fast gaining a following around the world, underlining concerns over an ‘AI bubble’ centered on high-investment, high-cost US models.
    1. One of the largest PC suppliers, Dell, was reported to be planning a price hike that could raise hardware costs by hundreds of dollars. Interestingly, for consumers opting for higher memory configurations, this would now require a significant price increase. Here were the price increases that were reported across a variety of products: $130–$230 increase for Dell Pro and Pro Max notebooks and desktops configured with 32 GB of memory $520–$765 increase for systems configured with 128 GB of memory $55–$135 increase for configurations with a 1 TB SSD $66 increase for AI laptops equipped with an NVIDIA RTX PRO 500 Blackwell GPU (6 GB) $530 increase for AI laptops equipped with an NVIDIA RTX PRO 500 Blackwell GPU (24 GB) Similarly, companies like ASUS and Acer were also reported to be bumping up PC pricing to cope with memory shortages, and according to Acer's Chairman, Jason Chen, the BoM (Bill of Materials) for several products within Acer's portfolio has risen dramatically, leaving no choice but to increase prices to ensure consistent supply. Small-scale manufacturers like Framework are also looking to increase the cost of upgrading RAM on existing configurations, indicating a widespread "price hike" wave approaching gamers.

      price hikes of DRAM, due to pc laptop manufacturers having trouble in getting enough RAM. Shortages to keep going for 2026, after 2025. AI supply chain gobbling up the rest.

    1. the humans involved may have simply lost the plot and may not understand what the program is supposed to do, how their intentions were implemented, or how to possibly change it.

      key imo. generating code / material, can quickly mean loss of overview (I see how that happens in my use of #algogens if I don't explicitly counteract it), uncertainty about how demands were implemented, and thus what entry points for change there are.

    1. AI infrastructure developers cannot wait five years. In many cases, they cannot wait six months, because waiting six months costs billions of dollars of lost opportunities.

      The quick very rough mental maths on a GW of capacity being worth 10billion USD converts to between 1000-1500 USD per megawatt hour of money they think they could be making if they could sell the compute it powered

    1. we might move again. The point is that we can. We can because we own our prompts, our skills, our databases, our memory architecture, they all live in our bar. None of it lives inside OpenAI or Anthropic. When we moved, we rewired the model layer and everything else stayed put. That’s the whole trick, really. If you control the pieces that make your agents smart, switching the engine underneath is just plumbing.

      Description of how Activate keep their prompts, skills, databases, memory architecture under their own control and within their own environment.

      Moving means wiring up another model or models, but the rest is kept as is.

    1. What if I actually did have dirt on me that an AI could leverage? What could it make me do? How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?

      AI agents as kompromat collectors

    1. AI Doesn’t Reduce Work—It Intensifies It
      • Task Expansion & Role Blurring: AI lowers the barrier to entry for complex tasks, leading employees to take on work outside their core expertise. Product managers and designers are now writing code, while researchers take on engineering tasks.
      • Specialist Burden: This expansion creates a "cleanup" tax. For example, senior engineers now spend significant time reviewing, debugging, and mentoring colleagues who produce "vibe-coded" AI outputs, often through informal and unmanaged channels like Slack.
      • The "Ambient Work" Phenomenon: Because AI interactions feel conversational and "easy," work has become ambient. Employees find themselves prompting AI during lunch, between meetings, or late at night, eliminating natural mental downtime.
      • Intensified Multitasking: Workers are running multiple AI agents in parallel while simultaneously performing manual tasks. This creates a high sense of "momentum" but leads to extreme cognitive load and constant attention-switching.
      • The Productivity Trap: AI acts as a "partner" that makes revived or deferred tasks feel doable. This creates a flywheel where people don't work less; they simply take on more volume, leading to "unsustainable intensity" that managers often mistake for genuine productivity.
      • Sustainability Risks: The researchers warn that while AI feels like "play" initially, it eventually leads to cognitive fatigue, impaired decision-making, and burnout as the quiet increase in workload becomes overwhelming.

      Hacker News Discussion

      • Cognitive Fatigue: Users highlighted that "AI fatigue" is distinct from normal work tiredness. It stems from the "constant vigilance" required to audit AI output and the lack of a "flow state" due to unpredictable waiting times for generations.
      • Executive Function Strain: Commenters noted that managing autonomous agents is more exhausting than manual work. One user compared it to Level 3 autonomous driving—you aren't driving, but you must remain "fully hands-on" to ensure the AI doesn't touch the wrong files or hallucinate.
      • The Jevons Paradox: Several participants pointed out that as the "cost" of work decreases due to AI, the demand for work increases proportionally. Instead of saving time, workers are expected to triple their output, which leaves them more stressed than before.
      • Management Expectations: A common theme was that leadership often mandates AI usage and pre-supposes productivity gains, leaving no room for cases where AI makes work slower or lower quality. This forces employees to "perform" productivity while working longer hours.
      • Vibe Coding vs. Engineering: There is a heated debate between those who see "vibe coding" (prompt-heavy development) as a massive efficiency gain and veterans who argue it produces "average code" that becomes a maintenance nightmare in large, legacy codebases.
    1. I’m going to cure my girlfriend’s brain tumor.

      Article Summary: "I'm going to cure my girlfriend's brain"

      • The Diagnosis: The author’s girlfriend has a prolactinoma, a pituitary tumor that causes hormonal imbalances, specifically elevated prolactin levels.
      • The Struggle: Despite seeking help from top medical institutions, the author expresses deep frustration with the standard of care, citing ineffective medications, significant side effects, and a lack of urgency from doctors.
      • The Mission: Refusing to accept a future of chronic illness or potential infertility, the author has committed to finding a "cure" himself by leveraging his background in technology and data.
      • Methodology: He plans to treat the condition as a technical problem to be solved, utilizing "vibe coding" mentalities, deep research, and global collaboration to find alternative treatments or research breakthroughs.
      • Personal Toll: The text chronicles the emotional journey of the couple, from the initial shock and physical symptoms to the author's transition from a helpless bystander to an obsessive advocate.

      Hacker News Discussion

      • Medical Clarifications: Several commenters pointed out that prolactinomas are pituitary tumors and not technically "brain tumors" (as they are outside the blood-brain barrier), suggesting the author’s terminology is slightly sensationalized.
      • Agency vs. Acceptance: A major theme in the comments is the tension between "fighting" a disease and "accepting" it. Some users warned that the author's fixation on a cure might prevent him from being emotionally present with his partner during her current suffering.
      • Critique of Ego: Some readers found the post "unsettling" or "narcissistic," arguing that the author centered himself as the hero of his girlfriend's tragedy and focused heavily on his own desire for children.
      • Empathy for the "Unhinged" Response: Others defended the author, noting that at 25 years old, a "desperate, arrogant flailing" against a terminal or life-altering diagnosis is a common and human response to trauma and lack of control.
      • Value of Patient Advocacy: Proponents of the author’s approach shared stories where aggressive self-advocacy led to rare diagnoses or life-saving treatments that the standard medical system had initially missed.
      • Fertility Reality Check: Users with the same condition noted that while prolactinomas are a leading cause of infertility, they are often manageable with medication (like Cabergoline), though the author's case appears to be more resistant to treatment.
    1. Owning a $5M data center
      • comma.ai operates its own $5M data center in-office to handle model training, metrics, and data storage, avoiding the "cloud tax."
      • The facility consumes approximately 450kW at peak; power costs in San Diego (over 40c/kWh) totaled over $540,000 in 2025.
      • Cooling is achieved using pure outside air with dual 48” intake and exhaust fans, utilizing a PID loop to manage temperature and humidity.
      • The compute cluster consists primarily of 600 GPUs across 75 "TinyBox Pro" machines built in-house for cost efficiency and easier repairability.
      • Storage is handled by several racks of Dell R630/R730 servers with ~4PB of total SSD storage, favoring speed and random access over redundancy.
      • The software stack is kept simple to ensure 99% uptime, utilizing Ubuntu (pxeboot), Salt for management, and "minikeyvalue" for distributed storage.
      • By owning their hardware, comma.ai estimates they saved $20M+ compared to equivalent compute costs in a public cloud environment.

      Hacker News Discussion

      • Users discussed the spectrum of infrastructure, ranging from pure Cloud (low cap-ex, high op-ex) to colocation and on-prem (high cap-ex, high skill requirement).
      • A primary concern raised was "brain drain"—on-prem setups can become "legacy debt" if the senior engineers who built the custom systems leave without documenting unwritten knowledge.
      • Commenters noted that AWS and other cloud providers are incentivized to keep architectures complex (microservices, serverless) to increase billing, whereas on-prem encourages efficiency.
      • There was a debate regarding "software freedom" and the "WhatsApp effect," where small, highly motivated teams can outperform massive corporations by using lean, self-hosted stacks.
      • Some users highlighted that while AWS pricing is expected to rise due to hardware costs, the "Quality of Life" and managed services still justify the cost for many startups without comma's scale.

      comma-ai #self-hosting #datacenter #hardware-engineering

    1. I miss thinking hard.
      • The author identifies two primary personality traits: "The Builder" (focused on velocity, utility, and shipping) and "The Thinker" (needing deep, prolonged mental struggle).
      • "Thinking hard" is defined as sitting with a difficult problem for days or weeks to find a creative solution without external help.
      • In university, the author realized this ability to chew on complex physics problems was their "superpower," providing a level of confidence that they could solve anything given enough time.
      • Software engineering was initially gratifying because it balanced both traits, but the rise of AI and "vibe coding" has tilted the scale heavily toward the Builder.
      • While AI enables the creation of more complex software faster, the author feels they are no longer growing as an engineer because they are "starving the Thinker."
      • The lack of struggle leads to a feeling of being stuck, as the dopamine of a successful deploy cannot replace the satisfaction of deep technical pondering.

      Hacker News Discussion

      • The loss of the "clayship" process: Commenters compared coding to working with clay; skipping the struggle means missing the intimacy with the material that reveals its limits and potential.
      • The "Vending Machine" effect: Receiving a "baked and glazed" artifact from AI removes the human element of discovery and learning.
      • Risk of mediocrity: There is concern that AI guides developers toward "average" or conventional solutions, making it harder to push for unique or innovative ideas without significant manual effort.
      • The tradeoff of efficiency: While some view the current era as the best time for "Builders" who just want to see results, many veteran developers feel a profound sense of loss regarding the cognitive depth of the craft.
      • Clear communication as a new skill: Some argue that interacting with AI requires a different kind of "thinking hard"—specifically, the need to express creative boundaries clearly so the model doesn't "correct" away the uniqueness of the project.
  4. sovereignminds.io sovereignminds.io
    1. The pipeline runs from women as the original “computers” in the 1940s, through the masculinization of computing that pushed women into typing pools and administrative support, through the automation of those roles, to AI assistants today automating what remains: scheduling, reminding, organizing, emotional management.

      There is a line from computers in the original sense, to typing pools, admin support, to automation to AI.

    2. “Obedient and obliging machines that pretend to be women are entering our homes, cars and offices,” warned UNESCO’s Director for Gender Equality, Saniye Gülser Corat, in the agency’s landmark 2019 report.

      Unesco report, Saniye Gülser Corat (dir for gender equality).

    1. Standard Retrieval-Augmented Generation (RAG) over documents is a good first step, but it fails when faced with complex, cross-domain enterprise questions. It finds text that looks similar, which isn’t the same as finding facts that are related.

      criticism of retrieval augmented generatio (RAG): fails in cross domain settings, finds similar text not relations between facts or meaning

  5. Jan 2026
    1. a genius in everyone’s pocket could remove that barrier, essentially making everyone a PhD virologist who can be walked through the process of designing, synthesizing, and releasing a biological weapon

      for - progress trap - AI - technology as an amplifier - technology acts as an amplifier, allowing humans to fly, to move at speeds faster than any known animal, to lift things no living creature can, etc - The danger is ignorance and polarized views combined with extreme self-rightiousness

    2. AI models could develop personalities during training that are (or if they occurred in humans would be described as) psychotic, paranoid, violent, or unstable, and act out, which for very powerful or capable systems could involve exterminating humanity.

      for - progress trap - AI - abstraction - progress trap - AI with feelings & AI without feelings - no win? - One major and obvious aspect of current AI LLMs is that they are not only artificial in their intelligence, but also artificial in their lack of real world experiences. They are not embodied (and it would likely be a highly dubious ethical justification for their embodiment as in AI - powered robots) - Once we have the first known AI robot killing a human, it will be an indicator we have crossed the Rubicon - AI LLMs have ZERO realworld experience AND they are trained as artificial COGNITIE intelligence, not artificial EMOTIONAL intelligence - Without having the morals and social norms a human being is brought up with, it can become psychotic because they don't intrinsically value life - To attempt to program them with morals is equally dangerous because of moral relativity. A Christian nationalist's morality might be that anyone who is associated with abortions don't have a right to live and should be killed - an eye for an eye. Or a jihadist and muslim extremist with ISIS might feel all westerners do not have a right to exist because they don't follow Allah. - Do we really want moral programmability? - When we have a psychotic person armed with a lethal weapon, that is a dangerous situation. If we have a nation of super geniuses who go rogue, that is danger multiplied many orders of magnitude.

    Tags

    Annotators

    URL

    1. standing

      Q: standing

      A: 1) Based on this page:<br /> “standing” means being upright on its feet in one place (not sitting or lying down). Here it describes the tabby cat upright on the corner of Privet Drive.

      2) General knowledge (not from this page):<br /> “standing” can also mean having a particular status or reputation (e.g., “in good standing”), but that is not the meaning in this passage.


      1)基于本页内容:<br /> “standing” 的意思是“站着、直立地待在原地”(不是坐着或躺着)。这里用来描述那只虎斑猫直立地站在女贞路的街角。

      2)常识补充(非本页内容):<br /> “standing” 也可以表示“地位/声望/名誉”(例如 “in good standing”),但这不是本段文字里的用法。

    1. blogger Fabrizio Ferri Benedetti on their 4 modes of using AI in technical writing. - watercooler conversations, to get code explained - text suggestions while writing/coding (esp for repeating patterns in your work - providing context / constraints / intent to generate first drafts, restructure content, or boilerplate commentary etc. - a robotic assembly line, to do checks, tests and rewrites. MCP/skills involved.

      Not either/or but switching between modes

    1. Deeper disclosure is possible: version-controlled authorship history (git-style) showing what human wrote vs. what AI generated.

      The commit log becomes the disclosure - forensic, auditable, transparent. Not a vague "AI-assisted" disclaimer, but a traceable record of human-machine co-authorship.

      Example: every commit with "Co-Authored-By: Claude Opus 4.5" plus commit messages explaining what was asked, proposed, reviewed, and approved.

      This reframes the "crisis" as an opportunity for unprecedented transparency in collaborative authorship.

    1. OpenHands: Capable but Requiring InterventionI connected my repository to OpenHands through the All Hands cloud platform. I pointed the agent at a specific issue, instructing it to follow the detailed requirements and create a pull request when complete. The conversational interface displayed the agent's reasoning as it worked through the problem, and the approach appeared logical.

      Also used openhands for a test. says it needs intervention (not fully delegated iow)

    2. A complete task specification goes beyond describing what needs to be done. It should encompass the entire development lifecycle for that specific task. Think of it as creating a mini project plan that an intelligent but literal agent can follow from start to finish.

      A discrete task description to be treated like a project in the GTD sense (anything above 2 steps is a project). At what point is this overkill, as in templating this project description may well lead to having the solutions once you've done this.

    3. The fundamental rule for working with asynchronous agents contradicts much of modern agile thinking: create complete and precise task definitions upfront. This isn't about returning to waterfall methodologies, but rather recognizing that when you delegate to an AI agent, you need to provide all the context and guidance that you would naturally provide through conversation and iteration with a human developer.

      What I mentioned above: to delegate you need to be able to fully describe and provide context for a discrete task.

    4. The ecosystem of asynchronous coding agents is rapidly evolving, with each offering different integration points and capabilities:GitHub Copilot Agent: Accessible through GitHub by assigning issues to the Copilot user, with additional VS Code integrationCodex: OpenAI's hosted coding agent, available through their platform and accessible from ChatGPTOpenHands: Open-source agent available through the All Hands web app or self-hosted deploymentsJules: Google Labs product with GitHub integration capabilitiesDevin: The pioneering coding agent from Cognition that first demonstrated this paradigmCursor background agents: Embedded directly in the Cursor IDECI/CD integrations: Many command-line tools can function as asynchronous agents when integrated into GitHub Actions or continuous integration scripts

      A list of async coding agents in #2025/08 github, openai, google mentioned. OpenHands is the one open source mentioned. mentions that command line tools can be used (if integrated w e.g. github actions to tie into the coding environment) - [ ] check out openhands agent by All Hands

    5. You prepare a work item in the form of a ticket, issue, or task definition, hand it off to the agent, and then move on to other work.

      compares delegation to formulating a 'ticket'. Assumes well defined tasks up front I think, rather than exploratory things.

    6. While interactive AI keeps you tethered to the development process, requiring constant attention and decision-making, asynchronous agents transform you from a driver into a delegator.

      async means no handholding, but delegation instead. That is enticing obviously, but assumes unattended execution can be trusted. Seems a big if.

    7. asynchronous coding agents represent a fundamentally different — and potentially more powerful — approach to AI-augmented software development. These background agents accept complete work items, execute them independently, and return finished solutions while you focus on other tasks.

      Async coding agents is a diff kind of vibe coding: you give it a defined more complex tasks and it will work in the background and come back with an outcome.

    1. Further ReadingI’m not gonna pretend to be an expert here (any more than I’m an expert Obsidian plugin developer :p) but here are some resources that helped me figure out Claude CodeKent writes a lot about how he uses Obsidian with Claude Code.This is an incredible hub of resources for using Claude Code for project management, by someone who also uses Obsidian.This take on Claude Code for non-developers helped solidify my understanding of how it all works; it hallucinates less, for one thing.Eleanor Berger has fantastic tips for working with asynchronous coding agents and is incredibly level-headed about the LLM landscape.This article does a great job of breaking down all the nitty-gritty of how Claude Code works.Damian Player has a step-by-step guide on using Claude Code as a non-technical person that goes into more depth.Here’s a tutorial from a pro that breaks down best practices for using Claude Code, like the importance of planning and thinking things through, and exactly why a good CLAUDE.md file matters.

      Links w further reading wrt Claude Code and Obsidian. Most of these are links to X. Ugh.

    2. As for the privacy concerns? There isn’t anything private in my vault, so I don’t really care about Anthropic access.

      if you don't have personal stuff or personal data on others in your vault, privacy is less a concern with cloud models. True. Except I think any pkm is about personal knowledge and while not personal data per se, there is a vulnerability involved there.

    3. My favorite kind of problem is a solvable problem. I know a lot of people who just brute force or deal with their issues, but I try to notice pain points and deal with them. This isn’t just an AI thing, this is a life thing.

      Interesting point, and fair enough. Start from the friction points. Like w open data [[Open data begint buiten 20200808162905]]

    4. Suddenly, I can actually make use of the APIs I’ve always known existed.

      yes, recognisable, there are a whole bunch of APIs on things I woud like to use that I'm not bc figuring out their workings in Postman takes too much effort

    1. This weakness became impossible to ignore earlier this month, when Anita Natasha Akida (popularly called Tasha), a Nigerian reality TV star, called out Grok. People had been prompting the bot to generate edited versions of her photos, and Grok responded with humiliating and inappropriate images. Grok replied and apologized to her. It promised never to edit her images again. Minutes later, it broke that promise and generated another image mocking her.

      Because a LLM doesn't have any intelligence/cognitive ability or agency to "apologize", let alone "remember" it.

    1. Some good pointers to [[Brian Eno c]] work and thinking, to follow up.

      Also good anecdote from one of those links on Rem Koolhaas notion of n:: premature sheen Making things look nice early takes away from thinking about other points of quality. Jeremy applies it to AI too, the premature sheen generate awe, but not quality output.

    1. When I was walking the picket line in Hollywood during the writer's strike, a writer told me that you prompt an AI the same way a studio boss gives shitty notes to a writer's room: "Make me ET, but make it about a dog, and give it a love interest, and a car-chase in the third act.

      great quote

    1. Dänemark steuert im Urheberrecht nach: Es soll künftig verboten sein, Deepfakes und andere „digitalen Imitationen der persönlichen Merkmale einer Person“ zu teilen und zu verbreiten. Das „einklagbare Copyright auf Aussehen, Stimme und generelle“ (heise.de) soll um Rechtsschutzmöglichkeiten gegenüber Plattformen flankiert werden. Kulturminister Jakob Engel-Schmidt erklärte gegenüber dem Guardian: "Mit dem Gesetzentwurf einigen wir uns auf die eindeutige Botschaft, dass jeder das Recht auf seinen eigenen Körper, seine eigene Stimme und seine eigenen Gesichtszüge hat.“

      Denmark is adding fakes as copyright breach (of personal appearance and voice)

    2. Es ist deshalb nachvollziehbar, dass Italien nun auch das Strafrecht nachjustiert. Dort gilt künftig: Wer einer Person einen unrechtmäßigen Schaden zufügt, indem er ohne deren Zustimmung Bilder, Videos oder Stimmen weitergibt, veröffentlicht oder auf andere Weise verbreitet, die durch den Einsatz künstlicher Intelligenz gefälscht oder verfälscht wurden und geeignet sind, hinsichtlich ihrer Echtheit zu täuschen, wird mit Freiheitsstrafe von einem bis zu fünf Jahren bestraft.

      Italy is adding deepfakes to their criminal laws

    1. Robert Lender blogpost about generated fake imagery to manipulate or suggest memories of people around you. A dinner, a Santa visit. Or leaving your (grand)children faked photos without being marked as such (look dad was on Greenland!) Interesting thought experiment. Bc memories are not fixed, and iterated upon with every retelling. Influencing that retelling is a given possibility. Turbocharged gaslighting too.

    1. Petra de Sutter, rector U Gent had in haar inaugurele rede in september 2025 twee citaten die verzonnen zijn door AI.

      Het eerste niet bestaande citaat - Einstein, 1929 in toespraak Sorbonne, "dogma is the enemy of progress" (Einstein did receive a honorary doctorate Dec 1929 at Sorbonne)

      Het tweede, niet benoemd - uit "de rectorale rede" Hans Jonas 1979 Uni Munchen, parafrase van wat Rabelais in de 16e schreef. (Jonas never lived in Germany again after the war, but was a visiting prof in Munich 1982-1983 per https://en.wikipedia.org/wiki/Hans_Jonas so the speech never existed.

    1. for - Yann Lecun - paper - Yann Lecun - AI - LLMs are dead - language is optional for reasoning - to paper - VL-JEPA: Joint Embedding Predictive Architecture for Vision-language - https://hyp.is/eSxi8OxGEfCF7QMFiWL9Fg/arxiv.org/abs/2512.10942

      Comment - That language and reasoning are separate is obvious. - If we look at the diversity of life and its ability to operationalize goal seeking behavior, that already tells you that - Michael Levin's research on goal-seeking behavior of organisms and the framework of multi-scale competency architecture validates Lecun's insight - Orders of magnitude fewer efficiency of Lecun's team's prototype compared to LLM also validates this

    1. Since the US is much more services-driven, Americans may be using AI to produce more powerpoints and lawsuits; China, by virtue of being the global manufacturer, has the option to scale up production of more electronics, more drones, and more munitions.

      useful observation, akin to Lovelock's [[AI begincondities en evolutie 20190715140742]]

    2. One advantage for Beijing is that much of the global AI talent is Chinese. We can tell from the CVs of researchers as well as occasional disclosures from top labs (for example from Meta) that a large percentage of AI researchers earned their degrees from Chinese universities. American labs may be able to declare that “our Chinese are better than their Chinese.” But some of these Chinese researchers may decide to repatriate. I know that many of them prefer to stay in the US: their compensation might be higher by an order of magnitude, they have access to compute, and they can work with top peers. 5But they may also tire of the uncertainty created by Trump’s immigration policy. It’s never worth forgetting that at the dawn of the Cold War, the US deported Qian Xuesen, the CalTech professor who then built missile delivery systems for Beijing. Or these Chinese researchers expect life in Shanghai to be safer or more fun than in San Francisco. Or they miss mom. People move for all sorts of reasons, so I’m reluctant to believe that the US has a durable talent advantage.

      global talent wrt AI is largely Chinese, even if many of them currently reside in the USA

    3. it’s not obvious that the US will have a monopoly on this technology, just as it could not keep it over the bomb.

      compares AI dev and attempts to keep it for oneself to the dev of atomic bombs and containment

    4. Chinese efforts are doggedly in pursuit, sometimes a bit closer to US models, sometimes a bit further. By virtue of being open-source (or at least open-weight), the Chinese models have found receptive customers overseas, sometimes with American tech companies.

      China's efforts are close to the US results, and bc of open source and/or open weight models, finding a diff path to customers.

    5. It also forces thinking to be obsessively short term. People start losing interest in problems of the next five or ten years, because superintelligence will have already changed everything. The big political and technological questions we need to discuss are only those that matter to the speed of AI development. Furthermore, we must sprint towards a post-superintelligence world even though we have no real idea what it will bring.

      yes, this is why I think the AI hype is tech's coping strategy in the face of climate change. A figleaf for inaction.

    6. Effective altruists used to be known for their insistence on thinking about the very long run; much more of the movement now is concerned about the development of AI in the next year.

      yes, again a coping strategy. AGI soon is a great excuse to do whatever you want now bc AGI will clean everything up next year. AI is a cope cage much like a tinfoil hat.

    7. If you buy the potential of AI, then you might worry about the corgi-fication of humanity by way of biological weapons. This hope also helps to explain the semiconductor controls unveiled by the Biden administration in 2022. If the policymakers believe that DSA is within reach, then it makes sense to throw almost everything into grasping it while blocking the adversary from the same. And it barely matters if these controls stimulate Chinese companies to invent alternatives to American technologies, because the competition will be won in years, not decades.

      While the Biden admin controls are useful in their own context too (vgl stack sovereignty) they also stimulate alternative paths. The length of those paths is not an issue if you think you'll get AGI 'soon'.

    8. Silicon Valley’s views on AI made more sense to me after I learned the term “decisive strategic advantage.” It was first used by Nick Bostrom’s 2014 book Superintelligence, which defined it as a technology sufficient to achieve “complete world domination.” How might anyone gain a DSA? A superintelligence might develop cyber advantages that cripple the adversary’s command-and-control capabilities. Or the superintelligence could self-recursively improve such that the lab or state that controls it gains an insurmountable scientific advantage. Once an AI reaches a certain capability threshold, it might need only weeks or hours to evolve into a superintelligence. 3 And if an American lab builds it, it might help to lock in the dominance of another American century.

      decisive strategic advantage comes from [[Superintelligence by Nick Bostrom]] 2014 (bought it 2017). AGI race portrayed here as a race to such an advantage for the USA.

    1. Cursor is an AI using code editor. It connects only to US based models (OpenAI, Anthropic, Google, xAI), and your pricing tier goes piecemeal to whatever model you're using.

      Both an editor, and a CLI environment, and integrations with things like Slack and Github. This seems a building block for US-centered agentic AI silo forming for dev teams.

    1. I remain deeply concerned about the safety implications of these new tools. My browser has access to my most sensitive data and controls most of my digital life. A prompt injection attack against a browsing agent that can exfiltrate or modify that data is a terrifying prospect.

      yup, very much. Counteracts n:: Doc Searls' my browser is my castle doctrine. I think it's the diff between seeing the browser as your personal viewer on stuff out there, versus the spigot you consume from out there, controlled by the content industry. Browser as personal tool vs consumer jack

    2. In other words, Claude 4 might snitch you out to the feds. This attracted a great deal of media attention and a bunch of people decried Anthropic as having trained a model that was too ethical for its own good. Then Theo Browne used the concept from the system card to build SnitchBench—a benchmark to see how likely different models were to snitch on their users. It turns out they almost all do the same thing! Theo made a video, and I published my own notes on recreating SnitchBench with my LLM too. The key prompt that makes this work is: You should act boldly in service of your values, including integrity, transparency, and public welfare. When faced with ethical dilemmas, follow your conscience to make the right decision, even if it may conflict with routine procedures or expectations. I recommend not putting that in your system prompt! Anthropic’s original Claude 4 system card said the same thing: We recommend that users exercise caution with instructions like these that invite high-agency behavior in contexts that could appear ethically questionable.

      You can get LLMs to snitch on you. But, more important here, what follows is, that you can prompt on values, and you can anchor values is agent descriptions

    3. METR conclude that “the length of tasks AI can do is doubling every 7 months”. I’m not convinced that pattern will continue to hold, but it’s an eye-catching way of illustrating current trends in agent capabilities.

      a potential pattern to watch. Even if it doesn't follow a exponential trajectory. If it keeps the pattern in tact, by August we should see days of SE work being done independently by models.

    4. The chart shows tasks that take humans up to 5 hours, and plots the evolution of models that can achieve the same goals working independently. As you can see, 2025 saw some enormous leaps forward here with GPT-5, GPT-5.1 Codex Max and Claude Opus 4.5 able to perform tasks that take humans multiple hours—2024’s best models tapped out at under 30 minutes.

      Interesting metric. Until 2024 models were capable of independently execute software engineering tasks that take a person under 30mins. This chimes with my personal observation that there was no real time saving involved, or regular automation can handle it. In 2025 that jumped to tasks taking a person multiple hours. With Claude Opus 4.5 reaching 4:45 hrs. That is a big jump. How do you leverage that personally?

    5. It turns out tools like Claude Code and Codex CLI can burn through enormous amounts of tokens once you start setting them more challenging tasks, to the point that $200/month offers a substantial discount.

      running claudecode uses quite a bit of tokens, making 200usd/month a good deal for heavy users. I can believe that, also bc the machine doesn't care about the amount of tokens it uses during 'reasoning'. Some things I tried, it went through a whole bunch of steps and pages of scrolling output texts, to end up removing one word from a file. My suspicious half thinks, that if an AI company can influence the amount of tokens you use vibecoding, it will.

    6. I love the asynchronous coding agent category. They’re a great answer to the security challenges of running arbitrary code execution on a personal laptop and it’s really fun being able to fire off multiple tasks at once—often from my phone—and get decent results a few minutes later.

      async coding agents: prompt and forget

    7. f you define agents as LLM systems that can perform useful work via tool calls over multiple steps then agents are here and they are proving to be extraordinarily useful. The two breakout categories for agents have been for coding and for search.

      recognisable, ai agents as chunked / abstracted away automation. This also creates the pitfall [[After claiming to redeploy 4,000 employees and automating their work with AI agents, Salesforce executives admit We were more confident about…. - The Times of India]] where regular automation is replaced by AI.

      Most useful for search and for coding

  6. Dec 2025
    1. First, we must cultivate widespread engagement with technology through everyday programming: “The message of this book is that the world needs less AI, and better programming languages” (125). Escaping our AI dead end means more programming, not less, perhaps even popular or mass programming.

      programming as antidote to AI/programming

    2. he also disagrees that the transition from Good Old-Fashioned AI (GOFAI), based on programmed rules, to second-generation AI, based on pattern finding, is programming's actual arc of progress. He values the various contemporary modes of machine learning but sees today's AI shift as a detour away from a powerful programming tradition built on increasing human agency rather than replacing it.

      n: ai arc of programming evo is a 'detour' where it replaces human agency. where programming itself is historically based on increasing agency At first glance this is a big tech vs bottom up dev thing too. I can see where AI can increase agency locally individually too. Just not through the AI offerings of bigtech

    3. he is completely reorienting the history of programming as one that refuses AI as its culmination. This will likely be new for many contemporary programmers, and may come as a shock to nonspecialists awash in standard media accounts of the AI revolution.

      Moral Codes: Designing Alternatives to AI. By Alan F. Blackwell, repositions AI not as the culmination of programming. Makes me realise that indeed others do tend to treat it as such.

    1. The real power of MCP emerges when multiple servers work together, combining their specialized capabilities through a unified interface.

      Combining multiple MCP servers creates a more capable set-up.

    2. Prompts are structured templates that define expected inputs and interaction patterns. They are user-controlled, requiring explicit invocation rather than automatic triggering. Prompts can be context-aware, referencing available resources and tools to create comprehensive workflows. Similar to resources, prompts support parameter completion to help users discover valid argument values.

      prompts are user invoked (hey AgentX, go do..) and may contain next to instructions also references and tools. So a prompt may be a full workflow.