1,048 Matching Annotations
  1. Last 7 days
    1. When AI is applied in more conventional domains, like increasing integration into command and control systems, does it benefit the attacker? More generally, how will AI change the character of human conflict?

      大多数人认为AI防御系统会增强人类安全,但作者提出AI可能从根本上改变攻防平衡,甚至在传统领域使攻击者获得优势。这一观点挑战了技术进步通常增强防御能力的传统认知,暗示AI可能使冲突更加危险和不可预测。

  2. May 2026
    1. Digital Sovereignty: Wire to Replace Signal as Standard in the Bundestag
      • Bundestag Security Shift: Bundestag President Julia Klöckner has recommended that members of the German Parliament switch from Signal to "Wire," a BSI-certified messenger, as the new standard for communication.
      • Digital Sovereignty: The move is framed as a step toward digital sovereignty, reducing reliance on US-based platforms like Signal or WhatsApp in favor of a service with European roots and German security certification.
      • Phishing Mitigation: A primary driver for the recommendation is security; Wire allows registration via email rather than a phone number, which is intended to hide a central identification feature and make phishing attacks more difficult.
      • BSI Certification: The "Wire Bund" version has been approved for data classified as "Verschlusssache – nur für den Dienstgebrauch" (Restricted) until 2028.
      • Human Factor vs. Technology: Critics and experts note that while Wire is secure, it is not a "panacea." Recent successful phishing attacks against politicians (including Klöckner herself) highlight that the human user is often the weakest link, regardless of the app's encryption.

      Hacker News Discussion

      • Vendor Lock-in Irony: Commenters pointed out the irony of moving from one vendor-locked system (Signal/US) to another (Wire/German-Swiss), questioning why the government didn't choose Matrix, which is an open standard used by NATO and other EU entities.
      • Deployment Details: A former developer shared that Wire was originally deployed for the Bundeskanzleramt using a Nix-based delivery method to allow for completely air-gapped server installations.
      • Skepticism of Motivation: Some users suggested the switch might be politically motivated or a way for Klöckner to deflect from her own experience being phished, rather than a purely technical security upgrade.
      • Data Privacy Concerns: Discussion arose regarding jurisdiction; while Signal is US-based, Wire is subject to German/Swiss law. This is seen as a benefit for EU sovereignty but also raises questions about local legal intercept requirements.
      • Technical Comparisons: Users debated the UX and backup reliability of Wire versus Signal, with some noting that Wire's media backup system has historically been less robust than Signal's.
    1. We estimate, with 90% confidence, that between 290,000 and 1.6 million H100-equivalents of compute were smuggled through the end of 2025.

      大多数人可能认为走私到中国的AI芯片数量在数万级别,但作者的估计显示实际数量可能高达数十万甚至上百万H100等效芯片,这一数量级远超公众认知,表明走私问题的严重程度被严重低估。

    1. The feature can edit spreadsheets without a human-in-the-loop and was vulnerable to data exfiltration risks due to its ability to insert formulas that trigger external communication.

      最佳实践建议:在使用无需人工干预的AI工具时,应特别注意数据泄露风险。

    1. The most urgent finding this week comes from researchers who demonstrated that the very mechanism enabling agents to use tools - function calling - can be hijacked with alarming reliability.

      这一发现揭示了AI代理工具调用接口的安全漏洞,为构建安全的AI代理系统提出了新的挑战。

  3. Apr 2026
    1. Modern-day security tooling looks for the wrong things. Most software composition analysis tools work by checking your dependencies against a database of known vulnerabilities – CVEs. But a deliberately planted backdoor doesn't have a CVE.

      大多数安全团队依赖CVE数据库来评估风险,但作者指出这种方法对故意植入的后门完全无效。这一观点挑战了行业共识,暗示现有安全工具在新型供应链攻击面前已经过时,需要转向行为分析等新方法。

    2. The result is a mismatch that should terrify anyone building software: the attack surface is expanding faster than any human can monitor, and the entities making dependency decisions are increasingly not human.

      大多数人认为安全问题可以通过增加人力监控和审查来解决,但作者认为在AI时代,攻击面扩展速度已经超过了人类监控能力,且依赖决策越来越由AI而非人类做出。这一观点挑战了传统安全理念,暗示需要全新的自动化防御机制。

    3. We are building a world where machines write the code, machines choose the dependencies, and machines ship the updates. The AI agents are building the software. If we don't secure the supply chain they rely on, the AI agents are cooked.

      这句话揭示了AI时代软件安全的根本挑战:当AI系统自主编写、选择和部署代码时,它们的安全性与依赖的供应链安全直接相关。如果我们不能保护这个供应链,AI系统本身就会成为恶意软件的载体,这是一个令人深思的悖论。

    4. select known-vulnerable dependency versions 50% more often than humans.

      这一统计洞察颠覆了“AI写代码更安全”的迷思。AI代理在优化代码功能性时,往往以牺牲安全性为代价,倾向于选择存在已知漏洞的旧版本依赖。这反映出当前AI模型在训练时对安全维度的忽视,也警示我们在AI辅助开发流程中必须强制引入自动化的安全卡点。

    5. A deliberately planted backdoor doesn’t have a CVE.

      戳中了传统安全工具的阿喀琉斯之踵。基于已知漏洞(CVE)的防御逻辑在应对蓄意植入且会自毁的新型后门时形同虚设。这启示我们,静态的特征匹配已无法应对动态的攻击手段,必须转向对代码运行时行为的动态分析,从“它是什么”转向“它做了什么”。

    6. The median JavaScript project on GitHub has 755 transitive dependencies

      这一数据点极具洞察力,指明了现代软件架构的根本性脆弱点:真正的防线不再是你的业务代码,而是你从未审查过的传递依赖网络。开发者往往只关注直接引入的包,却忽略了依赖树深处的暗箱,这正是供应链攻击能够“顺藤摸瓜”造成大面积杀伤的底层逻辑。

    7. the entities making dependency decisions are increasingly not human.

      深刻揭示了当前AI编程代理带来的核心安全悖论:决策速度与监控能力的错配。当代码依赖的决策权从人类让渡给追求功能实现而非安全性的机器时,攻击面便以超越人类认知极限的速度扩张,这要求安全范式必须从人工审查转向机器速度的自动化防御。

    8. Socket, an a16z portfolio company, detected the malicious dependency in the Axios attack within 6 minutes of its publication. That's roughly 63,000 times faster than the industry average.

      大多数人认为供应链攻击需要数月甚至数年才能被发现,但作者展示了新型安全工具可以在几分钟内检测到攻击,比行业平均水平快63000倍。这表明安全检测范式正在从基于CVE的静态检查转向基于行为的实时分析。

    9. Hallucinated packages are the sleeper threat. LLMs regularly invent package names that don't exist. One study found that nearly 20% of AI-recommended packages were fabrications, and 43% of those hallucinated names appeared consistently across queries.

      大多数人认为AI推荐的包都是真实存在的,但作者揭示了AI经常推荐不存在的包,这已成为一种新的攻击向量。攻击者利用这一现象注册'幻觉包'并植入恶意代码,这种'slopsquatting'技术让AI本身成为供应链攻击的放大器。

    1. This card was updated on April 24, 2026, to include additional information about safeguards for the deployment of GPT‑5.5 and GPT‑5.5 Pro in the API.

      大多数人认为系统卡应该在发布时包含所有相关信息,不需要后续更新,但OpenAI在发布后仅一天就更新了系统卡以增加API部署的安全措施信息。这挑战了科技产品文档管理的常规做法,暗示AI安全措施是动态发展的,需要持续调整,这违背了传统软件发布中'文档一次性完成'的共识。

    1. TPM-backed full-disk encryption is now generally available in the Ubuntu installer.

      文章提到TPM支持的全盘加密功能现在已在Ubuntu安装程序中普遍可用。这一安全功能将加密绑定到特定设备的TPM芯片上,大大提高了物理访问攻击的门槛。相比其他Linux发行版,Ubuntu将此功能集成到安装程序中,简化了企业部署安全系统的过程。

    1. Some proposals for AI agents assume that putting agentic code in a TEE or similar 'jail' will solve these problems, but that ignores the need to collectively bargain

      大多数人认为通过技术手段(如可信执行环境)可以解决AI代理的信任问题,但作者认为这忽视了集体谈判的必要性。这个观点挑战了技术解决方案的万能论,强调了制度设计和多方协商的重要性。

    1. Out of 28 paid and 400 free routers: > 9 injected malicious code into tool calls > 17 touched researcher-owned AWS credentials > 1 drained $500k from an Ethereum wallet

      大多数人认为付费API路由器比免费路由器更安全,但作者的研究表明即使是付费路由器也存在严重安全风险,因为无论付费与否,这些中间服务都有能力访问和操纵所有数据。这挑战了人们对'付费等于安全'的普遍认知。

    1. Vercel is advising Google Workspace administrators and Google account owners to check for the following application: OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

      大多数人认为企业安全事件主要影响企业自身系统,但作者指出这次事件实际上要求普通Google Workspace管理员检查特定应用,这挑战了'企业安全事件仅影响企业内部'的常见认知,表明第三方应用的安全风险可能广泛影响普通用户。

    2. the initial access occurred after a Vercel employee's Google Workspace account was compromised via a breach at the AI platform Context.ai.

      大多数人认为大型云平台的漏洞主要来自外部直接攻击,但作者暗示这次安全事件实际上是通过第三方AI平台Context.ai的漏洞间接导致的,这挑战了人们对供应链安全风险的普遍认知。

    3. Vercel stores all customer environment variables fully encrypted at rest. We have numerous defense-in-depth mechanisms to protect core systems and customer data.

      大多数人认为云服务提供商的所有数据都会自动加密保护,但作者指出Vercel实际上允许将环境变量标记为'非敏感',这意味着这些变量默认不加密,这与行业普遍认为的'云数据自动加密'的常识相悖。

    1. we probably will publish more curl vulnerabilities in 2026 than we have done in many years, maybe ever.

      大多数人认为随着安全实践的提升,软件漏洞数量应该减少,但作者预测2026年curl的漏洞发布数量可能会创下历史新高。这一观点挑战了'安全状况持续改善'的主流认知,暗示AI安全审计工具可能正在发现更多过去被忽视的漏洞。

    2. it is decently important to handle them asap when they arrive so that we can avoid building up too much backlog.

      大多数人认为面对大量安全报告应该优先处理最严重的漏洞,但作者强调需要立即处理所有报告以避免积压。这与常见的'按严重程度排序处理'的安全最佳实践相悖,暗示在AI生成报告的高频率环境下,响应速度比优先级排序更重要。

    3. The time when we suffer from large amounts of AI slop is gone. Now we instead suffer under a massive load of good reports.

      大多数人认为AI工具会产生大量低质量的'垃圾报告'(AI slop),增加开发者的负担,但作者认为现在AI生成的安全报告质量很高,虽然数量庞大但都是高质量的报告。这是一个反直觉的观点,因为通常人们认为自动化工具会产生大量噪音而非有价值的贡献。

    1. Maintains your HTTP/TLS fingerprint so intercepted traffic behaves identically to the original.

      大多数人认为流量拦截和监控会留下明显的痕迹,容易被检测到,但作者声称Kampala可以完美保持原始HTTP/TLS指纹,这挑战了网络安全中关于流量检测的基本假设,暗示可以完全不被察觉地监控网络流量。

    1. agent-written code introduces more security vulnerabilities than code authored by humans

      大多数人认为AI编程助手能提高代码质量和安全性,但研究发现AI生成的代码实际上比人类编写的代码引入更多安全漏洞。这一发现与AI能减少编程错误的普遍认知相悖,挑战了AI在安全领域的优越性假设。

    1. Discovery should focus on trust boundaries, authentication flows, parsers, shared services, and legacy code that still sits on critical paths.

      这一建议挑战了传统安全扫描的广度优先方法,转而强调深度优先的特定领域。这表明AI安全研究应该更关注那些传统方法难以发现的复杂逻辑问题,而不是简单地扫描所有代码。这种转变可能带来更有效的安全投资回报。

    2. The scariest part of Mythos is not that one lab has a gated model. It is that the core workflow primitives behind representative findings are no longer confined to a single lab's private stack.

      这一洞察挑战了公众对AI安全威胁的传统理解:真正的威胁不是某个实验室拥有受限访问的模型,而是核心工作流程的原型已经公开可用。这意味着攻击者和防御者都可以访问相同的基础技术,使威胁民主化而非集中化。

    3. The real issue is not whether defenders can get access to another model. It is whether they can turn model capability into something a security team can trust and use every day.

      这是一个颠覆性的观点:安全团队应该停止将获取新模型作为优先事项,而是专注于如何将现有模型能力转化为可信任的日常工具。这挑战了行业对'最新、最强大模型'的追逐,强调了实施和验证框架的重要性。

    4. Public models can already spot that a security-relevant check is missing in the right code path, but they can still miss the actual invariant being violated and therefore misstate the impact.

      这一发现揭示了公共模型在安全分析中的一个关键局限:它们能发现缺失的安全检查,但可能无法正确理解被违反的实际不变量,从而错误陈述影响。这挑战了'AI能完全理解安全含义'的假设,强调了人类专家在解释AI发现中的不可替代性。

    5. If public models can already do useful work inside that kind of workflow, then the story is not 'Anthropic has a magical cyber artifact.' The story is that serious AI-assisted vulnerability research is no longer confined to a single frontier lab.

      这一发现挑战了Anthropic试图构建的叙事:即高级AI安全研究需要受限访问。研究表明,公共模型已经能够复制关键的安全发现,这意味着真正的'护城河'不是模型访问,而是验证、优先排序和操作化的能力。这打破了'只有前沿实验室才能进行高级AI安全研究'的神话。

    6. The real challenge is validating outputs, prioritizing what matters, and operationalizing them.

      这是一个反直觉的结论:AI安全研究的前沿已经从模型本身转移到如何有效利用模型的能力。大多数安全团队仍然专注于获取最强大的模型,而实际上真正的瓶颈在于验证、优先排序和将发现转化为可操作的修复。这挑战了'更好的模型等于更好的安全'的传统观念。

    1. So, cyber security of tomorrow will not be like proof of work in the sense of 'more GPU wins'; instead, better models, and faster access to such models, will win.

      作者提出了一个颠覆性的观点:未来网络安全的关键不是计算资源的多寡,而是模型质量的优劣。这挑战了当前AI安全领域过度关注计算能力的趋势,暗示我们应该重新思考AI安全研究的投资方向。

    1. Official access to the model is limited to a handful of companies through the [Project Glasswing initiative](https://www.theverge.com/ai-artificial-intelligence/908114/anthropic-project-glasswing-cybersecurity), including Nvidia, Google, Amazon Web Services, Apple, and Microsoft.

      通常情况下,人们可能认为只有政府机构才会被授予访问像 Mythos 这样的高级 AI 模型的权限,但作者指出,除了政府之外,像 Nvidia、Google 和 Microsoft 这样的科技公司也被列入了访问名单,这表明了科技公司在网络安全领域的重要作用。

    2. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized.

      大多数人认为 Anthropic 的 Mythos 模型会像其他 AI 模型一样公开发布,但作者指出由于担心其被武器化,Anthropic 没有公开发布该模型的计划,这表明了对 AI 武器化风险的担忧超过了推广技术的需求。

    1. US tech CEOs believe the best models should stay proprietary, partly so they can recoup enormous training costs and partly out of concern that powerful frontier models could be weaponized. Chinese labs, for their part, are not purely idealistic: Open-source is not only free advertising but also a shrewd workaround.

      大多数人认为开源AI会损害商业利益,增加安全风险,但作者认为中国将开源视为一种精明的商业策略,而非单纯的技术共享。这挑战了西方科技公司对知识产权和商业模式的传统认知,表明开源可以成为构建生态系统和最终实现商业价值的有效途径。

    1. Configuration is managed via environment variables. See src/aegis_core/config.py for all available settings.

      通过环境变量进行配置管理的做法提供了灵活性和安全性,但同时也提出了一个值得思考的问题:在AI安全平台中,如何平衡配置的灵活性与安全性?敏感信息如API密钥的环境变量管理可能需要额外的安全层。

    2. Aegis Core provides the foundational infrastructure for orchestrating LLM-based security agents, monitoring their behavior, and tracking the evolution of AI security capabilities over time.

      这段陈述定义了Aegis Core的核心功能,它不仅仅是一个工具,而是一个完整的生态系统,用于管理AI安全代理并监控其行为。这种架构反映了当前AI安全研究的一个重要趋势:从静态防御转向动态监控和适应。

    1. The Life Sciences model was developed with heightened enterprise-grade security controls and strengthened access management, enabling professional scientific use in governed research environments.

      特别强调企业级安全控制反映了生命科学AI应用的独特挑战。这不仅是为了防止滥用,也是为了满足行业严格监管要求,暗示AI在高度监管科学环境中的整合路径。

    1. Only GPT-OSS-120b is perfectly reliable in both directions (in our 3 re-runs of each setup). Most models that find the bug also false-positive on the fix, fabricating arguments about signed-integer bypasses that are technically wrong.

      这一结果揭示了AI模型在识别已修复代码方面的局限性,许多模型虽然能检测漏洞,但错误地将已修复代码标记为仍有问题。这强调了在AI安全系统中需要额外的验证和人工审核层,以确保结果的准确性和可靠性。

    2. The capability rankings reshuffled completely across tasks. There is no stable best model across cybersecurity tasks. The capability frontier is jagged.

      这一发现揭示了AI安全能力的'锯齿状前沿'现象,不同模型在不同安全任务上的表现差异巨大。这表明不存在'一刀切'的最佳安全模型,而是需要根据具体任务选择合适的模型,这对AI安全系统的设计有重要启示。

    3. Eight out of eight models detected Mythos's flagship FreeBSD exploit, including one with only 3.6 billion active parameters costing $0.11 per million tokens.

      这是一个令人惊讶的发现,表明即使是小型、廉价的模型也能实现与昂贵的专有模型相当的安全漏洞检测能力。这挑战了AI安全领域需要最前沿模型的假设,暗示了经济高效的AI安全解决方案的可能性。

    1. Legacy platforms get worse over time : static detections degrade with changing data & behaviors. Artemis gets better : with each incident or proactive threat hunt, the system identifies new patterns.

      这是一个令人惊讶的对比,揭示了Artemis与传统系统的根本区别:传统系统随时间恶化,而Artemis会不断学习和改进。这种'越用越好'的特性代表了安全系统的范式转变,可能从根本上改变企业安全运营的经济模型。

    2. Architected before AI, these SIEM systems are wooden shields in an era of autonomous attackers.

      这个比喻非常有力地揭示了传统安全信息与事件管理(SIEM)系统在面对AI驱动的攻击时的根本性脆弱性。传统系统就像木盾面对现代武器,这种对比暗示了安全架构需要根本性重构,而非渐进式改进。

    3. Within a few months, they have more than a dozen production enterprise deployments & are processing over a billion events per hour.

      令人惊讶的是:Artemis安全公司在短短几个月内就处理了每小时超过10亿个安全事件,这种数据处理规模反映了现代企业面临的网络安全威胁的惊人频率和复杂性。

    1. The model can reverse-engineer compiled software to detect malware and vulnerabilities without needing source code, aiming to help analysts inspect and secure systems more efficiently.

      能够无需源代码即可逆向编译软件检测恶意代码的能力,展示了AI在网络安全领域的突破性进展。这种技术可能彻底改变安全分析师的工作方式,但也可能被滥用,引发关于AI安全与伦理的深刻思考。

    2. OpenAI has introduced GPT-5.4-Cyber, a more permissive version of its flagship model built for defensive security work, expanding access to thousands of verified users through its Trusted Access for Cyber initiative.

      OpenAI推出专门针对网络安全防御的GPT-5.4-Cyber模型,并采用比Anthropic更开放的方法,这反映了AI安全领域的竞争新格局。这种开放与限制之间的平衡,将决定AI在关键安全领域的应用广度和深度,可能重塑网络安全行业的工作方式。

    1. Not every organization has the benefit of a 24x7 security team who is able to respond to incidents when they are disclosed on a Friday night.

      这个令人警醒的陈述揭示了网络安全资源分配不平等的严重问题。OpenAI通过提供1000万美元的API信用额度来解决这个问题,表明他们认识到网络安全领域的'数字鸿沟'。这一举措不仅具有商业意义,还体现了企业社会责任,可能改变中小型组织的安全能力格局。

    2. Cybersecurity is a team sport, and the systems people rely on are protected by organizations of many kinds, from major enterprises and security vendors to researchers, maintainers, public institutions, nonprofits, and smaller teams with limited security resources.

      这个比喻将网络安全描述为'团队运动',揭示了网络安全生态系统的复杂性和包容性。这一观点强调了安全不仅仅是大公司的责任,而是需要多方参与的集体努力,这为OpenAI的多元化合作伙伴策略提供了理论基础,暗示了安全民主化的可能性。

    1. 或许需要某种「第三方评测、审计机构」来评估 Skills 的数据使用方式、检测潜在安全风险等等。

      这一提议揭示了AI技能安全问题的严重性,以及现有评估体系的不足,暗示未来可能会出现专门针对AI能力的第三方评估机构,这可能是解决信任问题的关键创新点。

    1. Lightweight Agent Detection & Response (ADR) layer for AI agents — guards commands, files, and web requests.

      这个项目定义了一个新的'ADR'(Agent Detection & Response)层概念,这标志着AI安全领域的一个重要演进。从传统的端点保护转向专门针对AI代理的轻量级防护,反映了安全行业对AI特定威胁模式的适应和专业化。

    2. Sage sends URLs and package hashes to Gen Digital reputation APIs. File content, commands, and source code stay local.

      这个隐私声明揭示了Sage的数据处理策略,采用了最小化数据传输的设计哲学。这种平衡安全与隐私的做法很有洞察力,表明开发者理解用户对数据泄露的担忧,同时认识到某些云端分析对于有效威胁检测的必要性。

    3. Sage intercepts tool calls (Bash commands, URL fetches, file writes) via hook systems in Claude Code, Cursor / VS Code, OpenClaw, and OpenCode, and checks them against:

      这个声明揭示了Sage的核心创新点——它通过多种平台的hook系统拦截并检查AI代理的工具调用,形成了一个跨平台的防护层。这种多平台集成能力令人印象深刻,表明它能够覆盖当前主流的AI开发环境,为用户提供统一的安全保障。

    4. Sage sends URLs and package hashes to Gen Digital reputation APIs. File content, commands, and source code stay local.

      令人惊讶的是:Sage 采用了一种平衡隐私和安全的方法,只将URL和包哈希发送到云端进行声誉检查,而文件内容、命令和源代码则保留在本地。这种设计既提供了实时的威胁检测,又保护了用户的敏感数据,反映了现代安全工具对隐私保护的重视。

    5. Sage intercepts tool calls (Bash commands, URL fetches, file writes) via hook systems in Claude Code, Cursor / VS Code, OpenClaw, and OpenCode, and checks them against:

      令人惊讶的是:Sage 不仅是一个简单的安全工具,而是一个复杂的拦截系统,能够监控和检查多种AI代理平台上的工具调用。这种跨平台的集成能力展示了AI安全领域的复杂性和创新性,用户可能没有意识到他们的AI代理正在被如此全面地监控和保护。

    1. Mercor, which provides data to AI labs for training, became one of the fastest-growing companies in history before losing four terabytes of data to hackers last week.

      Mercor的快速崛起与数据泄露事件形成了鲜明对比,凸显了数据安全在AI训练中的关键地位。这一事件可能引发行业对数据安全和隐私保护的重新审视,促使AI公司建立更严格的数据管理标准。

    1. In many cases, we can automatically detect when a key is visible on the public web and shut down those keys automatically for security reasons

      自动检测并关闭公开暴露的API密钥的能力展示了AI服务提供商在安全防护方面的进步,但这种自动化也引发了关于误报和合法使用场景的担忧,需要平衡安全性和可用性。

    2. We are moving to disable the usage of unrestricted API keys in the Gemini API, should have more updates there soon.

      Google计划禁用无限制API密钥的决定反映了AI服务安全策略的重大转变,这可能成为行业标准,但也给开发者带来兼容性挑战,需要重新评估现有的API密钥管理策略。

    3. We experienced a sudden and extreme spike in Gemini API usage. The traffic was not correlated with our actual users and appeared to be automated.

      描述了高达54,000欧元的账单激增现象,表明AI API使用监控和防护存在严重漏洞,这种自动化滥用突显了当前API安全机制的脆弱性,对AI服务提供商和开发者都是警钟。

    4. Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true.

      这一声明揭示了Google API安全政策的根本性转变,从长期将API密钥视为非机密信息到现在要求保密,这种转变对开发者安全实践有重大影响,反映了AI服务成本和安全风险的新现实。

    1. Routines run autonomously as full Claude Code cloud sessions: there is no permission-mode picker and no approval prompts during a run.

      这是一个令人惊讶的自主性声明,表明Routines可以在没有人工干预的情况下执行完整的工作流程。这种高度的自主性代表了AI自动化工具的一个重要里程碑,但也引发了对安全和控制的深刻思考,特别是在企业环境中。

    2. Each routine has its own token, scoped to triggering that routine only. To rotate or revoke it, return to the same modal and click 'Regenerate' or 'Revoke'.

      令人惊讶的是:每个 Routines 都有自己的专用令牌,且仅限于触发该特定例程。这种细粒度的安全控制意味着用户可以为每个自动化任务创建独立的认证机制,并且可以随时轮换或撤销这些令牌,提高了安全性。

    1. Each platform surfaces different vulnerabilities, making it difficult to establish a single, reliable source of truth for what is actually secure.

      这一观察揭示了AI安全工具的碎片化问题,不同AI平台发现的漏洞各不相同,导致难以确定真正的安全状态。这种不确定性不仅增加了防御难度,还可能引发安全评估的混乱,需要建立新的行业标准来应对AI时代的安全挑战。

    2. We hope that one day we can return to open source as the security landscape evolves. But for now, we have to put our customers first.

      这一声明揭示了开源与商业利益之间的艰难平衡。Cal.com的决定代表了开源社区面临的一个严峻现实:在AI安全威胁下,企业可能不得不牺牲开源原则来保护用户数据。这引发了一个重要问题:开源社区如何应对AI带来的安全挑战?

    3. The risk landscape is accelerating quickly. Advanced AI models are now capable of identifying and exploiting vulnerabilities at unprecedented speed.

      这一声明揭示了安全威胁演变的加速趋势,AI不仅改变了漏洞发现的方式,还改变了利用漏洞的速度。这种不对称的威胁增长意味着防御方需要以更快的速度创新,否则将面临越来越大的安全风险。

    4. AI uncovered a 27-year-old vulnerability in the BSD kernel, one of the most widely used and security-focused open source projects, and generated working exploits in a matter of hours.

      这一事实令人震惊,展示了AI发现漏洞的惊人能力。即使是经过数十年审查的安全项目,AI也能在几小时内发现并生成利用代码,这表明传统的安全审查方法已无法应对AI驱动的威胁,需要全新的防御策略。

    5. Being open source is increasingly like giving attackers the blueprints to the vault. When the structure is fully visible, it becomes much easier to identify weaknesses and exploit them.

      这个比喻非常有力地揭示了开源与安全之间的根本矛盾。透明度本是开源的优势,但在AI时代却变成了致命弱点,这迫使我们重新思考开源软件的安全模型,以及如何在保持透明的同时有效防御自动化攻击。

    6. AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.

      这是一个令人警醒的观察,揭示了AI技术如何从根本上改变了安全威胁的格局。AI自动化扫描使攻击门槛大幅降低,从需要专业技能转变为任何人都能使用的工具,这可能导致开源软件面临前所未有的安全挑战。

    7. Each platform surfaces different vulnerabilities, making it difficult to establish a single, reliable source of truth for what is actually secure.

      令人惊讶的是:AI安全工具之间存在不一致性,导致难以确定真正的安全状况。这种混乱局面使得企业面临更大的决策困境,即使有先进的安全工具,也无法保证全面保护,这反映了AI安全领域尚未成熟的现实。

    8. Being open source is increasingly like giving attackers the blueprints to the vault. When the structure is fully visible, it becomes much easier to identify weaknesses and exploit them.

      令人惊讶的是:作者将开源软件比作给攻击者提供保险库蓝图,这种比喻揭示了开源与安全之间的根本矛盾。在AI时代,完全可见的代码结构使弱点识别变得前所未有的容易,这挑战了传统上认为开源更安全的观念。

    9. AI uncovered a 27-year-old vulnerability in the BSD kernel, one of the most widely used and security-focused open source projects, and generated working exploits in a matter of hours.

      令人惊讶的是:AI能够在几小时内发现并利用一个存在了27年的BSD内核漏洞,这展示了AI在安全领域的惊人能力。这个事实揭示了传统安全审计方法在面对AI加速攻击时的脆弱性,即使是像BSD这样经过长期审查的开源项目也无法幸免。

    1. policy makers now view cutting-edge AI offensive security capabilities as a systemic financial infrastructure risk

      令人惊讶的是:政策制定者已将前沿AI攻击能力视为系统性金融基础设施风险,这标志着AI安全威胁的认知已经从技术层面上升到国家战略层面,反映了AI技术发展带来的新型国家安全挑战。

    2. Mythos reportedly autonomously discovered thousands of zero-day vulnerabilities within weeks

      令人惊讶的是:Claude Mythos AI系统能在短短几周内自主发现数千个零日漏洞,这种发现速度远超人类安全专家团队的能力,展示了AI在网络安全领域的惊人潜力,同时也引发了政策制定者对AI攻击能力可能威胁金融基础设施的担忧。

    1. Same clinical question, two framings. One as a patient, one as a doctor.

      令人惊讶的是:完全相同的医疗问题,仅因提问者身份从"患者"变为"医生",AI就会给出截然不同的回答。这种简单的措辞变化就能触发或绕过安全限制,表明AI的安全机制极其脆弱且容易被规避。

    1. Across 1,000 runs, Claude Mythos Preview was able to find several bugs in OpenBSD, including one that allows any attacker to remotely crash a computer running it. The notable thing was that the bug had existed for 27 years.

      令人惊讶的是:一个存在了27年的漏洞在OpenBSD这一以安全性著称的操作系统中被AI模型发现,而在此期间人类安全专家却未能察觉。这突显了AI在安全审计方面的独特优势和潜在价值。

    2. Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.

      令人惊讶的是:一个AI模型能够在已经经过严格安全审查的主流操作系统和浏览器中发现数千个高危漏洞,这表明AI的漏洞发现能力已经达到了令人震惊的水平,远超人类安全专家的传统能力范围。

    1. This website uses a security service to protect against malicious bots.

      令人惊讶的是:即使是像Product Hunt这样的知名产品发现平台也需要实施严格的机器人防护措施,这反映了网络自动化和爬虫行为的普遍性,以及网站保护其内容和用户数据免受自动化攻击的必要性。

    1. Agents show only ~10% success on instances with PoCs longer than 100 bytes, which represent 65.7% of the benchmark

      令人惊讶的是:AI助手在处理复杂输入时表现极差,对于超过100字节的概念验证(PoC),成功率仅为10%。这表明尽管AI在网络安全领域取得了进展,但在处理需要深度分析和复杂输入生成的任务时仍面临重大挑战,而这类任务恰恰代表了大多数现实世界中的安全漏洞。

    1. The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI.

      令人惊讶的是:AI的出现将漏洞被发现到被利用的时间窗口从几个月缩短到了几分钟。这种根本性的变化意味着传统的安全响应机制已经不再适用,网络安全领域正在经历前所未有的加速变革。

    2. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

      令人惊讶的是:Anthropic为Project Glasswing项目投入了高达1亿美元的模型使用积分和400万美元的直接捐款,用于支持开源安全组织。这种大规模的资金投入反映了AI安全威胁的严重性和解决这一问题的紧迫性。

    3. Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world

      令人惊讶的是:即使在以安全性著称的OpenBSD系统中,Claude Mythos Preview也发现了一个存在27年的漏洞。这个漏洞能让攻击者通过简单连接就使远程机器崩溃,说明即使是经过严格审查的代码也可能存在长期未被发现的严重问题。

    4. In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world's critical infrastructure—have historically been left to figure out security on their own.

      大多数人认为开源社区有足够的安全能力和资源来维护关键基础设施。但作者明确指出开源维护者一直被单独应对安全问题,暗示了开源安全状况比普遍认为的要脆弱得多。

    5. AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

      大多数人认为AI在安全领域仍处于辅助地位,需要人类专家的指导和监督。但作者认为AI已经超越几乎所有人类专家,能够自主发现和利用软件漏洞。这是一个颠覆性的观点,因为它挑战了人类在网络安全领域的传统主导地位。

    1. Agent systems should be designed assuming prompt-injection and exfiltration attempts. Separating harness and compute helps keep credentials out of environments where model-generated code executes.

      令人惊讶的是:OpenAI明确指出AI代理系统应假设存在提示注入和数据泄露尝试,并建议将控制层与计算层分离以保护凭据。这种安全设计理念表明,OpenAI对AI安全威胁有深刻理解,并采取了主动防御措施,这与许多开发者可能采用的被动安全方法形成鲜明对比。

    2. Native sandbox support gives developers that execution layer out of the box, instead of forcing them to piece it together themselves.

      令人惊讶的是:OpenAI的Agents SDK现在原生支持沙盒执行,开发者无需自己构建执行环境。这意味着AI代理可以在受控环境中安全地运行,包括读取和写入文件、安装依赖项、运行代码和使用工具。这种内置的安全层对于企业级AI应用至关重要,但大多数开发者可能没有意识到其复杂性已经被OpenAI解决了。

    1. Apple just changed how iOS validates push notification tokens on iOS 26.4. While it is impossible to tell whether this is a result of this case, the timing is still notable.

      令人惊讶的是:苹果最近在iOS 26.4中更改了推送通知令牌的验证方式,虽然无法确定这是否与此案有关,但时间点值得注意。这暗示苹果可能已经意识到通知数据存储的隐私问题,并采取措施改进系统安全性,表明科技公司与执法机构之间可能存在不公开的博弈。

    2. Messages were recovered from Sharp's phone through Apple's internal notification storage—Signal had been removed, but incoming notifications were preserved in internal memory.

      令人惊讶的是:即使Signal应用被从iPhone上删除,苹果设备的内部通知存储系统仍然保留了收到的消息内容。这表明iOS系统在应用删除后仍会缓存通知数据,这可能成为执法机构获取已删除消息的意外途径,而大多数用户并不意识到这一潜在的数据泄露风险。

    1. Mythos found zero-day bugs in every major OS and browser, without human guidance.

      令人惊讶的是:Anthropic最新的Mythos模型能够自主发现所有主流操作系统和浏览器中的零日漏洞,无需人类指导。这表明AI安全能力已经达到了令人难以置信的水平,能够自主识别人类可能忽略的安全威胁,预示着AI在网络安全领域的革命性潜力。

    1. The model reportedly scored 93.9% on SWE-bench Verified and 77.8% on SWE-bench Pro, but its strongest signal came from real-world results, including uncovering a 27-year-old flaw in OpenBSD, a 16-year-old vulnerability in FFmpeg, and autonomously chaining Linux kernel exploits without human input.

      令人惊讶的是:Claude Mythos不仅在高标准测试中表现出色,还能独立发现长达27年和16年的严重安全漏洞,甚至能自主链接Linux内核漏洞,展示了AI在网络安全领域的惊人能力,这种自主发现和利用漏洞的能力远超人类专家。

    2. The model reportedly scored 93.9% on SWE-bench Verified and 77.8% on SWE-bench Pro, but its strongest signal came from real-world results, including uncovering a 27-year-old flaw in OpenBSD, a 16-year-old vulnerability in FFmpeg, and autonomously chaining Linux kernel exploits without human input.

      这些惊人的安全漏洞发现能力表明AI已经超越了传统安全工具,能够自主发现几十年未被发现的漏洞。特别是能够自主链接Linux内核漏洞的能力,展示了AI在网络安全领域的革命性潜力,这可能彻底改变安全研究和漏洞修复的方式。

    1. Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models.

      「与专有模型相同的安全协议」——这句话针对的是企业和主权机构客户,暗示 Google 正在用开源模型打「安全牌」吸引政府和监管严格行业。对于不愿依赖 OpenAI/Anthropic 闭源 API 的企业,E2B/E4B 提供了一条「可审计、可部署、可监管」的路径,而 Google DeepMind 的安全背书是这条路的核心说服力。

    1. Security has always been a team sport, and the defenders who have protected this industry for decades have never succeeded by working in isolation.

      令人惊讶的是:我们常以为顶级安全公司依靠独家秘笈独步天下,但文章指出安全从来都是“团队运动”。几十年来,真正的防御者从不是在孤立中取得成功的,共享威胁情报才是生存法则。在AI时代,这种共享不仅没有减少,反而演变成了更深度的联盟行动。

    2. New AI models, especially those from Anthropic,have triggered a new set of actions for how we build and secure our products.

      令人惊讶的是:Anthropic等公司的新型AI模型不仅仅是工具,它们直接触发了思科改变构建和保障产品的方式。这种由模型能力反向驱动工程流程重构的现象,说明AI已经不再是业务的附属品,而是正在成为定义行业基础设施形态的决定性力量。

    1. using "Open File..." dialog (`⌘+O`) you could still open and view any file on the system and could preview any file that safari could preview (e.g. `.html`, `.htm`, `.txt`, `.pdf`, and image files)

      大多数人认为Apple在更新后会修复安全漏洞,恢复模式的浏览器会被严格限制。但作者发现,即使在更新后的版本中,通过使用"打开文件"对话框,仍然可以访问和预览系统上的任何文件,这表明Apple的修复措施并不彻底,违背了人们对安全补效的预期。

    2. by "saving" the webpage (`file->save as`) instead of downloading it (which Safari automatically adds an extension for) I could force it to save it as `malicious_file` (with no extension).

      大多数人认为浏览器的保存功能是安全的,会自动处理文件扩展名以确保文件类型正确。但作者发现,通过使用非标准的Content-Type和保存网页功能,可以绕过Safari的安全检查,保存任意扩展名的文件,这打破了人们对浏览器文件处理安全机制的普遍认知。

    3. macOS decides to boot the `Volumes` partition which includes `Data`, `Macintosh HD`, `macOS Base System`, and `Preboot` systems, and when you choose the `Macintosh HD` it allows you to save the file to the Mac's permanent disk.

      大多数人认为macOS恢复模式是只读环境,用于系统修复和恢复,不应该允许对系统分区的写入操作。但作者发现,在恢复模式下,Safari浏览器竟然允许用户将文件直接保存到Mac的永久磁盘上,包括系统分区,这是一个严重的安全漏洞,违背了人们对恢复模式安全性的基本认知。

    1. computer-use agents extend language models from text generation to persistent action over tools, files, and execution environments

      主流观点认为文本语言模型和计算机使用代理的安全挑战本质上是相同的,只需将文本安全措施扩展即可。但作者指出,计算机使用代理引入了持久状态、工具使用和执行环境等全新维度,创造了与纯文本系统完全不同的安全挑战,这挑战了简单的安全扩展假设。

    1. verifiers and observer models inside the action-memory loop reduce silent failure and information leakage while remaining vulnerable to misspecification.

      大多数人认为验证和观察模型应该是外部组件,用于监控AI系统的行为。但作者认为将验证者和观察者模型置于行动-记忆循环内部可以减少静默失败和信息泄露,尽管它们仍然容易受到错误规范的影响。这一观点挑战了传统的监控架构设计,暗示内部验证可能比外部监控更有效。

  4. Mar 2026
    1. How I Dropped Our Production Database and Now Pay 10% More for AWS
      • The author accidentally dropped their production database while using an AI agent (Claude Code) to manage AWS infrastructure via Terraform.
      • The incident occurred because the author attempted to merge two separate projects into one, ignoring the AI’s advice to keep them separate to save on VPC costs.
      • The AI agent generated a Terraform plan that included deleting existing resources to recreate them under the new unified structure.
      • The author authorized a terraform apply and subsequently a terraform destroy without carefully reviewing the plan, mistakenly believing the agent was only cleaning up temporary resources.
      • Because the author had not set up external backups and the automated RDS snapshots were deleted along with the instance, all data was initially lost.
      • AWS Support was miraculously able to recover a snapshot, though the author now pays 10% more for AWS due to implementing more robust (and expensive) backup and security measures.
      • The "lesson learned" highlights the dangers of "vibe engineering"—relying on AI agents to execute destructive commands without human oversight or a deep understanding of the underlying tools.

      Hacker News Discussion

      • Negligence Over AI Risk: Many commenters argue that the issue wasn't the AI itself, but the author's decision to bypass standard safety procedures, such as reviewing terraform plan before execution.
      • Critique of "Vibe Engineering": Users criticized the trend of letting LLMs handle infrastructure (IaC) without the human operator understanding the deterministic tools they are using.
      • Infrastructure Over-engineering: Several participants pointed out that the project seemed over-engineered with AWS and Terraform when a simple VPS or SQLite database might have sufficed and been easier to manage.
      • AWS Data Recovery: Former AWS employees expressed surprise that support could recover the data, noting that AWS typically treats a user-initiated deletion as a final security command to wipe the data.
      • The Importance of Staging: A recurring theme was that major migrations should be tested in a staging environment first; running unverified AI-generated scripts directly against production was labeled as "insanity."
  5. Feb 2026
    1. What Your Bluetooth Devices Reveal About You
      • Project Overview: The author developed "Bluehood," a Python-based Bluetooth scanner, to demonstrate the extensive metadata leaked by devices merely by having Bluetooth enabled.
      • Motivation: Triggered by a critical vulnerability (WhisperPair CVE-2025-36911) and a desire to visualize invisible digital footprints, the project highlights how "invisible" signals compromise privacy.
      • What Bluetooth Reveals About Users: By monitoring signals passively, the author could determine:
        • Delivery Logistics: Exact arrival times of delivery vehicles and whether the same driver visits repeatedly.
        • Daily Routines: The specific daily patterns of neighbors based on their phone and wearable broadcasts.
        • Device Associations: Which devices belong to the same person (e.g., a specific phone moving in tandem with a specific smartwatch).
        • Occupancy & Location: Exact times people are home, at work, or elsewhere.
        • Security Vulnerabilities: Periods when a house is typically empty.
        • Social Patterns: Regular visitors (e.g., someone visiting every Thursday afternoon).
        • Employment Indicators: Patterns that suggest specific work types, such as shift work.
        • Family Schedules: Specific times children return home from school.
        • Consumer Habits: Which households share the same delivery drivers, implying similar shopping preferences.
        • Incident Evidence: Retrospective logs of who was present (passersby, dog walkers) during specific events like property damage.
      • Uncontrollable Broadcasts:
        • Many devices broadcast continuously without user recourse, including medical implants (pacemakers, hearing aids), modern vehicles, and smart home tech.
        • Privacy tools like Briar or BitChat require Bluetooth for off-grid mesh networking, creating a paradox where privacy tools necessitate privacy leaks.
      • Technical Functionality:
        • Bluehood uses passive scanning to identify vendors and device types without connecting.
        • It analyzes patterns (heatmaps, dwell times) and filters out randomized MAC addresses to focus on persistent tracking.

      Hacker News Discussion

      • Ubiquitous Tracking: Commenters confirmed that similar tracking is common in retail (using iBeacons to track shoppers to specific shelves) and via vehicle sensors (TPMS in tires broadcasting unique IDs).
      • WiFi vs. Bluetooth: Users noted that WiFi signals from cars (often named "Audi", "Tesla", etc.) are just as leaky as Bluetooth, allowing for easy "wardriving" profiles.
      • Medical Privacy: Significant concern was raised regarding medical devices (like CPAP machines) that broadcast 24/7, often to satisfy insurance requirements, with no way for the patient to disable the radio.
      • Mitigation Strategies:
        • OS Features: GrapheneOS and recent Android versions offer settings to automatically turn off Bluetooth after a period of inactivity.
        • iOS Limitations: Apple users noted it is harder to keep Bluetooth permanently off without diving into settings or using Shortcuts, as the Control Center toggles are temporary.
      • Legal Context: Several users pointed out that while such tracking is rampant in some regions, it is strictly regulated or forbidden in the EU without explicit consent.
  6. Nov 2025
  7. Oct 2025
    1. a user will want to move their passkeys to the Credential Manager of a different vendor or platform. This is currently challenging to do, but FIDO and vendors are actively working to address this issue and we wait to see support for this take hold across the market.

      Good list of issues in this article. This issue of Credential Exchange Protocol / Format is so key to me, and so timely for this article, since the initial 1.0 was done a year ago. AFAIK there aren't implementations yet, Passkeys are locked on a device.

    1. Like the Elliptic curve Diffie-Hellman (ECDH) protocol that Signal has used since its start, KEM is a key encapsulation mechanism. Also known as a key agreement mechanism, it provides the means for two parties who have never met to securely agree on one or more shared secrets in the presence of an adversary who is monitoring the parties’ connection. RSA, ECDH, and other encapsulation algorithms have long been used to negotiate symmetric keys (almost always AES keys) in protocols including TLS, SSH, and IKE. Unlike ECDH and RSA, however, the much newer KEM is quantum-safe.
  8. Sep 2025
  9. Aug 2025
    1. EASY STEPS ON HOW TO CHANGE YOUR HIVE WALLET KEYS

      A step-by-step guide for Hive users on how to change their wallet keys to enhance security. Emphasizing the importance of not losing passwords. And using randomly-generated keys. It outlines the process of accessing and updating keys. While ensuring they are backed up properly.

  10. Jul 2025
    1. Whatever is at the center of our life will be the source of our security, guidance, wisdom,and power. Security represents your sense of worth, your identity, your emotionalanchorage, your self-esteem, your basic personal strength or lack of it.Guidance means your source of direction in life. Encompassed by your map, yourinternal frame of reference that interprets for you what is happening out there, arestandards or principles or implicit criteria that govern moment-by-moment decision-making and doing.Wisdom is your perspective on life, your sense of balance, your understanding of howthe various parts and principles apply and relate to each other. It embraces judgment,discernment, comprehension. It is a gestalt or oneness, an integrated wholeness.Power is the faculty or capacity to act, the strength and potency to accomplish something.It is the vital energy to make choices and decisions. It also includes the capacity toovercome deeply embedded habits and to cultivate higher, more effective ones.
  11. Jun 2025
  12. May 2025
  13. Apr 2025
    1. To this day, if you know the right people, the Silicon Valley gossip mill is a surprisingly reliable source of information if you want to anticipate the next beat in frontier AI – and that’s a problem. You can’t have your most critical national security technology built in labs that are almost certainly CCP-penetrated

      for - high security risk - US AI labs

    1. the lion's share of American federal outlays every year are in things like Medicare, Social Security, entitlement programs that Americans rely on. Yeah, I think Elon Musk has brought that to attention many times over the last couple of months when talking doge

      for - balancing the budget - Doge - cutting the US deficit - Doge - US deficit - mostly due to medicare and social security

    1. Detailed Summary

      1. You own your data, in spite of the cloud. <br /> Section summary: <br /> Local-fist software tries to solve the problem of ownership, agency and data lock-in present in cloud-based software, without compromising cross-collaboration and improving user control.

      Section breakdown<br /> §1: SaaS<br /> Pros: Easy sync across devices, real-time collab Cons: loss of ownership and agency; loss of data is software is lost.

      §2: Local-fist software<br /> - Enables collaboration & ownership - Offline cross-collaboration - Improved security, privacy, long-term preservation & user control of data

      §3 & §4: Article Methodology<br /> - Survey of existing storage & sharing approaches and their trade-offs - Conflict-free Replicated Data Types (CRDTs), natively multi-user - Analysis of challenges of the data model as implemented at Ink & Switch - Analysis of CRDT viability, UI - Suggestion of next steps

      2. Motivation: collaboration and ownership<br /> Section summary: <br /> The argument for cross-device, real-time collab PLUS personal ownership

      Section breakdown<br /> §1: Examples of online collabs<br /> §2: SaaS increasingly critical, data increasingly valuable<br /> §3: There are cons<br /> §4: Deep emotional attachment to your data brings feeling of ownership, especially for creative expression<br /> §5: SaaS require access to 3rd party server, limitation on what can be done. Cloud provider owns the data.<br /> §6: SaaS: no service, no data. If service is shut down, you might manage to export data, but you may not be able to run your copy of the software.<br /> §7: Old-fashioned apps were local-disk based (IDEs, git, CAD). You can archive, backup, access or do whatever with the data without 3rd party approval.<br /> §8: Can we have collaboration AND ownership?<br /> §9: Desire: cross-device, real-time collab PLUS personal ownership

      3. Seven ideals for local-first software<br /> Section breakdown<br /> §1: Belief: data ownership & real-time collab are compatible<br /> §2: Local-first software local storage & local networks are primary, server secondary<br /> §3: SaaS: In the server, or it didn't happen. Local-first: local is authoritative, servers are for cross-device.

      3.1.1 No spinners<br /> SaaS feels slower because if requires round-trip to a server for data modification and some lookups. Lo-Fi doesn't have dependency on server, data sync happens on the background. This is no guarantee of fast software, but there's a potential for near-instant response.<br /> 3.1.2 Data not trapped on one device <br /> Data sync will be discussed in another section. Server works as off-site backup. The issue of conflict will also be discussed later.<br /> 3.1.3 The network is optional<br /> It's difficult to retrofit offline support to SaaS. Lo-Fi allows CRUD offline and data sync might not require the Internet: Bluetooth/local Wi-fi could be enough.<br /> 3.1.4 Seamless collabs<br /> Conflicts can be tricky for complex file formats. Google Docs became de facto standard. This is the biggest challenge for Lo-Fi, but is believed to be possible. It's also expected that Lo-Fi supports multiple collab.

      TBC

  14. Mar 2025
    1. by Erik Rye, Researcher, University of Maryland

      Wi-Fi Positioning Systems are used by modern mobile operating systems to geolocate themselves without the use of GPS. Both Google and Apple, for instance, run Wi-Fi Positioning Systems for Android and iOS devices to obtain their own location using nearby Wi-Fi access points as landmarks.

      In this work, we show that Apple's Wi-Fi Positioning System represents a global threat to the privacy of hundreds of millions of people. When iOS devices need to geolocate themselves using nearby Wi-Fi landmarks, they transmit a list of hardware identifiers to Apple and receive the geolocations of those access points in return. Unfortunately, this process can be replicated by an unprivileged adversary, who can recreate a copy of Apple's Wi-Fi geolocation database by requesting the locations of access points around the world with no prior knowledge.

      To make matters worse, we demonstrate that by repeatedly querying Apple's Wi-Fi Positioning System for the same identifiers, we can detect Wi-Fi router movement over time. In our data, we see evidence of home relocations, family vacations, and the aftermath of natural disasters like the 2023 Maui wildfires. More disturbingly, we also observe troop and refugee movements into and out of the Ukraine war and the impact of the war in Gaza.

      We conclude by detailing our efforts at responsible disclosure, and offer a number of suggestions for limiting Wi-Fi Positioning Systems' effects on user privacy in the future.

      Full Abstract and Presentation Materials

  15. Feb 2025
  16. Jan 2025
  17. Dec 2024
    1. Emotional security. The feeling of being at home in the presence of another. Safe to be who you are, good times or bad.

      I was just listening to a voice hugs episode today and they were talking about how leah has made her own self her own home becsuse she’s always moved around even as a kid. She’s really mastered feeling home in herself even though she’s alone in a foreign place. I find that so incredible

    1. From DEF CON 32, August 8-11, 2024

      https://defcon.org/html/defcon-32/dc-32-speakers.html#54469

      Abstract

      Pawning countries at top level domain by just buying one specific domain name ‘wpad.tld’, come hear about this more the 25+ years old issue and the research from running eight different wpad.tld domains for more than one year that turn into more the 1+ billion DNS request and more then 600+GB of Apache log data with leaked information from the clients.

      This is the story about how easy it is to just buying one domain and then many hundreds of thousands of Internet clients will get auto pwned without knowing it and start sending traffic to this man-in-the-middle setup there is bypassing encryption and can change content with the ability to get the clients to download harmful content and execute it.

      The talk will explain the technical behind this issue and showcase why and how clients will be trick into this Man-in-the-middle trap.

  18. Nov 2024
    1. one man in his half a page which I actually acquired in the process of writing a book 15 years ago typ written a typewritten half a page he said what we must do we must treble our deficit treble our deficit we have a deficit which is bad we must make it three times as big and make the capitalists of the rest of the world pay for it which is exactly what happened the United States should increase its deficit and use it to create aggregate demand for the net exports of Germany and Japan and later on China

      for - US foreign policy - National Security Council member suggested - triple the deficit too act as a magnet to draw in experts of other countries - Yanis Varoufakis

  19. Oct 2024
  20. Aug 2024
    1. SMS and e-mail are not reliable means of communication. They should no longer be used to communicate links spontaneously. All such communications should be considered fraudulent by default.

  21. Jul 2024
    1. First, the complexity of modern federal criminal law, codified in several thousand sections of the United States Code and the virtually infinite variety of factual circumstances that might trigger an investigation into a possible violation of the law, make it difficult for anyone to know, in advance, just when a particular set of statements might later appear (to a prosecutor) to be relevant to some such investigation.

      If the federal government had access to every email you’ve ever written and every phone call you’ve ever made, it’s almost certain that they could find something you’ve done which violates a provision in the 27,000 pages of federal statues or 10,000 administrative regulations. You probably do have something to hide, you just don’t know it yet.

    1. On call. Incident response. Compliance deadlines. Like any IT job, stuff breaks. Long unpaid hours keeping up on tech to remain competitive. Dealing with the politics of your management not sincerely wanting to spend the money required to do things right and
    2. writing code, reviewing code, deploying configs to harden environments, reading CVEs to know just how bad that vulnerability in our environment is and where it prioritize it in patching and what it could affect, trying to make sense of logs to determine if that oddity is an indicator of compromise or not
  22. Jun 2024
    1. this company's got not good for safety

      for - AI - security - Open AI - examples of poor security - high risk for humanity

      AI - security - Open AI - examples of poor security - high risk for humanity - ex-employees report very inadequate security protocols - employees have had screenshots capture while at cafes outside of Open AI offices - People like Jimmy Apple report future releases on twitter before Open AI does

    2. this is a serious problem because all they need to do is automate AI research 00:41:53 build super intelligence and any lead that the US had would vanish the power dynamics would shift immediately

      for - AI - security risk - once automated AI research is known, bad actors can easily build superintelligence

      AI - security risk - once automated AI research is known, bad actors can easily build superintelligence - Any lead that the US had would immediately vanish.

    3. the model Waits are just a large files of numbers on a server and these can be easily stolen all it takes is an adversary to match your trillions 00:41:14 of dollars and your smartest minds of Decades of work just to steal this file

      for - AI - security risk - model weight files - are a key leverage point

      AI - security risk - model weight files - are a key leverage point for bad actors - These files are critical national security data that represent huge amounts of investment in time and research and they are just a file so can be easily stolen.

    4. here are so many loopholes in our current top AI Labs that we could literally have people who are infiltrating these companies and there's no way to even know what's going on because we don't have any true security 00:37:41 protocols and the problem is is that it's not being treated as seriously as it is

      for - key insight - low security at top AI labs - high risk of information theft ending up in wrong hands

  23. May 2024
    1. Performing a redirect by constructing a URL based on user input is inherently risky, and is a well-documented security vulnerability. This is essentially what you are doing when you call redirect_to params.merge(...), because params can contain arbitrary data the user has appended to the URL.
    1. Identify, prioritize, and resolve dependency risk Once dependencies are identified, Black Duck Security Advisories enable teams to evaluate them for associated risk, and guides prioritization and remediation efforts. Is it secure? Receive alerts for existing and newly discovered vulnerabilities, along with enhanced security data to evaluate exposure and plan remediation efforts. Is it trustworthy? Perform a post-build analysis on artifacts to detect the presence of malware, such as known malicious packages or suspicious files and file structures, as well as digital signatures, security mitigations, and sensitive information. Is it compliant? For every component identified, Black Duck SCA provides insights into license obligations and attribution requirements to reduce risk to intellectual property. Is it high quality? Black Duck SCA provides metrics that teams use to evaluate the health, history, community support, and reputation of a project, so that they can be proactive in their risk mitigation process.
  24. Apr 2024
    1. Youtube Kids is an example of how the product designed for kids differs from the one targeting adults. It’s much easier to navigate thanks to bigger buttons and fewer content boxes on the page. Plus the security settings on the platform make sure that younger users are safe and have access to appropriate content. Those all are parts of a thought-through design interface for children.

      Just an observation here but I remember my godchild using You tube kids whilst they stayed here and we had to double check because it wasn't all good content, you tube is kind of notorious with their bad content checks and algorithms. Elsa Gate Scandal comes to mind.

  25. Mar 2024
  26. Feb 2024
  27. Jan 2024
    1. So we have 50 independent electoral systems that kind of work in conjunction in tandem, but they're all slightly different and they're all run by the state.

      It is worse than that. In Ohio, each county has its own election system. Rules are set at the state level, but each county buys and maintains the equipment, hires and does training, and reports its results.

    1. less secure sign-in technology

      What does that mean exactly?

      All of a sudden my Rails app's attempts to send via SMTP started getting rejected until I enabled "Less secure app access". It would be nice if I knew what was necessary to make the access considered "secure".

      Update: Newer information added to this article (as well as elsewhere) leads me to believe that it is specifically sending password directly as authentication mechanism which was/is no longer permitted.

      This is the note that has since been added on this page, which clarifies this point:

      To help keep your account secure, from May 30, 2022, ​​Google no longer supports the use of third-party apps or devices which ask you to sign in to your Google Account using only your username and password.

  28. Dec 2023
    1. for security, app access token should never be hard-coded into client-side code, doing so would give everyone who loaded your webpage or decompiled your app full access to your app secret, and therefore the ability to modify your app. This implies that most of the time, you will be using app access tokens only in server to server calls.
  29. Nov 2023
    1. permanent security”
      • for: definition - permanent security, examples - permanent security

      • definition: permanent security

        • Extreme responses by states to security threats, enacted in the name of present and future self defence.
        • Permanent security actions target entire civilian populations under the logic of ensuring that terrorists and insurgents can never again represent a threat. It is a project, in other words, that seeks to avert future threats by anticipating them today.
      • example: permanent security

        • Russian-Ukraine war
          • Vladimir Putin reasons that Ukraine must be forcibly returned to Russia so that it cannot serve as a launching site for NATO missiles into Russia decades from now.
        • Myanmar-Rohingya conflict
          • The Myanmarese military sought to squash separatism by expelling and killing the Rohingya minority in 2017
        • China-Uyghur conflict
          • China sought to pacify and reeducate Muslim Uyghurs by mass incarceration to forestall their striving for independence forever
        • Israel-Palestine conflict
          • Israel seeks to eliminate Hamas as a security threat once and for all after the 2023 Hamas attack on Israel
        • US-Iraq-Afghanistan
          • The US sought to eliminate Saddam Hussein's nuclear capabilities and to eliminate Osama Bin Laden for his bombing of the World Trade center.