427 Matching Annotations
  1. Last 7 days
    1. Reasoning models show both a one-off jump in performance and a roughly 2-3x faster trend compared to non-reasoning models.

      Most people assume performance differences between AI models accumulate gradually, but the author finds that reasoning models not only made a one-off jump in performance but are also improving 2-3x faster than non-reasoning models. This challenges the conventional understanding of how AI model performance advances.

    2. Reasoning models show both a one-off jump in performance and a roughly 2-3x faster trend compared to non-reasoning models.

      Most people would expect different classes of AI models to improve at roughly the same rate, but the study finds that reasoning models show both a one-off jump in performance and a rate of improvement 2-3x that of non-reasoning models, upending expectations about how quickly different model types progress.

    3. Reasoning models show both a one-off jump in performance and a roughly 2-3x faster trend compared to non-reasoning models.

      A 2-3x difference in trend is a striking number, pointing to a clear gap between reasoning and non-reasoning models. A multiple of that size suggests an architectural change delivering a step jump rather than a simple linear improvement, and it supports the hypothesis that reasoning ability may be a key driver of AI progress.

    4. The best-performing model across these three metrics was a pair of independent linear trends: one for reasoning models and one for non-reasoning models.

      This finding shows that reasoning and non-reasoning models really do follow distinct trajectories. The pair of separate linear trends fit best on all three metrics, beating the alternative models in every case, which provides strong statistical evidence for the argument that AI capabilities are accelerating.

    1. We spent days loading the system with hundreds of threads, refining rough edges and polishing corners that developers may never see.

      The article says the team spent days stress-testing the system with "hundreds of threads," a concrete indicator of effort. "Hundreds" is not a precise figure, but it shows the design anticipates large-scale concurrency. Testing at this scale signals how seriously the team takes stability, though no hard numbers are given for the thread ceiling or performance under load.

    2. All of this runs at Zed's famously buttery-smooth 120 fps

      The article claims Zed runs at 120 fps, a very specific performance figure. 120 fps is well above the 60 fps typical of most editors, suggesting Zed sustains very high rendering performance even while handling multi-agent tasks. This matters for judging Zed's responsiveness as a development tool, but the article provides no benchmark data to back the claim.

    1. run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025

      Run-rate revenue grew from roughly $9 billion at the end of 2025 to over $30 billion, an increase of more than 233% and a startling pace. It points to explosive growth in the AI services market and to Anthropic's progress in commercialization. Whether growth that fast is sustainable is open to question, and $30 billion in run-rate revenue is remarkable for such a young AI company; more financial detail would be needed to verify it.

    2. run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025

      Run-rate revenue surged from roughly $9 billion at the end of 2025 to $30 billion, growth of more than 230%. The pace reflects explosive growth in the AI market. Given the company's scale, though, the figure should be read with care; it may include prepayments or revenue recognized from long-term contracts.

    3. run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025

      Revenue jumping from $9 billion to $30 billion, growth of more than 233%, is explosive, far beyond what most technology companies have achieved historically, and it reflects the potential of the AI-as-a-service market. Growth that fast also puts pressure on infrastructure expansion and has to be matched by compute investment.
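      As a quick sanity check on the growth figures quoted in these annotations (only the $9B and $30B run-rate numbers come from the source; the rest is arithmetic):

```python
# Sanity-check the quoted run-rate growth: ~$9B at end of 2025 -> $30B+ now.
start_run_rate = 9.0      # billions USD (from the quote)
current_run_rate = 30.0   # billions USD, "surpassed" so this is a lower bound

growth_pct = (current_run_rate - start_run_rate) / start_run_rate * 100
multiple = current_run_rate / start_run_rate

print(f"growth: {growth_pct:.0f}%")   # ~233%, matching the annotations
print(f"multiple: {multiple:.2f}x")   # ~3.33x
```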

    1. 🔹 **Rich World Knowledge:** Leads all current open models, trailing only Gemini-3.1-Pro.

      This gives a relative ranking of the model's knowledge: ahead of all current open models, behind only Gemini-3.1-Pro. It is a positioning claim rather than an absolute performance figure. The phrasing implies DeepSeek-V4-Pro approaches top closed models in breadth of knowledge, which matters for knowledge-heavy applications, but without concrete metrics and scores the size of the gap is hard to quantify.

    2. 🔹 **Enhanced Agentic Capabilities:** Open-source SOTA in Agentic Coding benchmarks.

      No specific benchmark numbers are given, but the claim is open-source SOTA on agentic coding benchmarks. That is a significant assertion with no quantification behind it. If true, it would mark a major step for DeepSeek in agentic capability, especially code generation and execution; the benchmark data in the technical report would be needed to verify it.

    1. The depth of recursion becomes a tunable compute axis at inference time, requiring no retraining. A small model, by reading itself, can iterate toward answers that neither it nor any of its workers could reach in a single pass.

      The article describes a recursive reasoning mechanism in which a small model, by iterating on its own output, reaches answers unattainable in a single pass, but it offers no performance numbers or experimental evidence. The claim lacks quantitative support and needs more data.

    2. Two variants are available: **Sakana Fugu Mini 🐟**, optimized with latency in mind, and **Sakana Fugu Ultra 🐡**, the full orchestration system, optimized for performance for demanding tasks.

      Two variants are mentioned, Mini (latency-optimized) and Ultra (performance-optimized), but no concrete metrics separate them, such as percentage latency reduction or throughput gains. Without quantified parameters it is hard to judge how the two variants actually differ in practice.

    3. GPQAD | 94.4 | 90.9 | 92.7 | 92.4 | **95.1**
       LCBv6 | 90.3 | 92.1 | 92.4 | 90.4 | **93.2**
       SWEPro | 48.4 | 51.2 | _53.4_ | 51.3 | **54.2**

      The comparison table shows Sakana Fugu Ultra leading on all three benchmarks: 95.1% on GPQAD (vs. Gemini 3.1's 94.4%), 93.2% on LCBv6 (vs. GPT 5.4's 92.1%), and 54.2% on SWEPro (vs. Opus 4.6's 53.4%). The numbers suggest its multi-model orchestration strategy really does deliver gains, with the clearest edge on scientific reasoning.
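      A minimal sketch of the margins behind this annotation. The per-row score lists are copied from the quoted table; "best other" simply means the highest non-Fugu number in each row, without relying on which column belongs to which competitor:

```python
# Fugu Ultra's margin over the best competing score in each benchmark row.
rows = {
    "GPQAD":  ([94.4, 90.9, 92.7, 92.4], 95.1),
    "LCBv6":  ([90.3, 92.1, 92.4, 90.4], 93.2),
    "SWEPro": ([48.4, 51.2, 53.4, 51.3], 54.2),
}
margins = {name: round(ultra - max(others), 1)
           for name, (others, ultra) in rows.items()}
print(margins)  # {'GPQAD': 0.7, 'LCBv6': 0.8, 'SWEPro': 0.8}
```

      The margins are all under one point, so the lead is consistent but narrow.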

    1. The Prompt API for the web is still being developed. While we build this API, refer to our best practices on session management for optimal performance.

      Most people expect browser AI features to be mature and production-ready, but the author states plainly that the API is still in development. This cuts against the assumption that Chrome, as a mature browser, ships only stable, reliable features, and implies the AI functionality may not yet be stable, so developers need to pay extra attention to performance.

    1. Kimi K2.6 demonstrates significant improvements over Kimi K2.5 in internal evaluations conducted by CodeBuddy: code generation accuracy increased by 12%, long-context stability improved by 18%, and tool invocation success rate reached 96.60%.

      Most people expect model iterations to be incremental, with each release gaining perhaps 5-10%. The data shows Kimi K2.6 making a far larger leap, with tool-invocation success near 97%, which challenges the usual assumptions about how quickly model capabilities improve and hints at some technical breakthrough or architectural innovation.

    1. The median US buyout fund returns 13% to 16% net.

      The median net return for US buyout funds is 13-16%, so OpenAI's promised 17% sits above that range, roughly 1.06-1.3x the industry benchmark. The gap suggests OpenAI is willing to pay a premium for distribution, but it also implies the PE partners may be taking on extra risk, or that OpenAI's business model has to deliver exceptional growth.
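      The premium ratios in this note follow directly from the two quoted numbers (17% promised vs. a 13-16% median range):

```python
# Compare the promised return with the quoted median range for US buyout funds.
promised = 0.17
median_low, median_high = 0.13, 0.16

premium_vs_high = promised / median_high   # ~1.06x the top of the range
premium_vs_low = promised / median_low     # ~1.31x the bottom of the range
print(f"{premium_vs_high:.2f}x to {premium_vs_low:.2f}x the median range")
```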

    1. DeepSeek V4 exceeds them all on coding, math, and STEM problems, making it one of the strongest open-source models ever released.

      Most people assume open-source AI models cannot match closed commercial ones, but the author argues DeepSeek V4 surpasses all other open models in several key areas and rivals top closed models. This challenges the industry consensus that open source necessarily means a performance compromise, and suggests open models are closing the gap fast.

    1. GPT‑5.5 delivers this step up in intelligence without compromising on speed: larger, more capable models are often slower to serve, but GPT‑5.5 matches GPT‑5.4 per-token latency in real-world serving, while performing at a much higher level of intelligence.

      Most people assume a more capable model must sacrifice speed and efficiency, but the author argues GPT-5.5 breaks that trade-off, delivering higher intelligence at the same latency. This challenges the "bigger models are necessarily slower" consensus and suggests architectural optimization may matter more than raw scale.

    1. The results demonstrate consistent improvements over strong baselines, supporting the effectiveness of agent resource management and closed loop self evolution.

      Most researchers regard self-evolving systems as hard to evaluate and unstable in effect, but the authors claim theirs shows consistent improvement across several challenging benchmarks. That conclusion pushes back on the field's general skepticism and hints at a more reliable, effective approach to self-evolution.

    1. V3.3 achieves 70.4% in Mode A (zero-LLM), with +23.8pp on multi-hop and +12.7pp on adversarial. V3.2 achieved 74.8% Mode A and 87.7% Mode C; the 4.4pp gap reflects a deliberate architectural trade-off.

      Scoring only 17.3 percentage points below the LLM-assisted mode while running with zero LLM is startling. It suggests the bio-inspired memory architecture is more capable than expected, retaining most of its performance without a large language model, and it challenges the mainstream view that strong AI must depend on large models.

    1. GPT-4o operates at roughly 200 billion parameters and outperforms the original 1.8 trillion-parameter GPT-4

      This runs counter to the industry consensus that bigger models are necessarily better, and suggests model quality and architecture may matter more than scale. It may be one of the most surprising efficiency gains in AI's history, and it challenges our understanding of how the field progresses.

    1. Our most complex pages, which took 20+ prompts to recreate in other tools, only required 2 prompts in Claude Design.

      The claim implies Claude Design improves design efficiency by more than 10x, a startling leap. A counterintuitive gain of that size defies the usual expectation of incremental improvement in AI tools, and its real performance and applicable scenarios deserve independent verification.

  2. Apr 2026
    1. It is full of real people who have decided that being present is too time-consuming, so they have automated the performance of presence instead.

      Presence as performance. I think this is a key observation, because we do a lot of presence-as-performance socially too: people making sure they're seen at a do, then leaving once their presence has been acknowledged; people attending meetings but doing other work during them, nodding along at the right moments. Only the automation is new.

    1. When evaluated directly in the Codex app, best-of-ten model submissions ranked above the 95th percentile of human experts on the prediction task and around the 84th percentile of human experts on the sequence generation task.

      This performance figure is striking: AI now exceeds 95% of human experts on some tasks. It marks real technical progress, and it raises serious questions about the role of professional scientists and the future job market.

    1. Opus 4.7 introduces a new `xhigh` ('extra high') effort level between `high` and `max`, giving users finer control over the tradeoff between reasoning and latency on hard problems.

      The new 'xhigh' effort level shows models offering finer control over the trade-off between reasoning depth and response speed. It reflects growing demand for performance tuning, and suggests AI systems are becoming more customizable and specialized.

    2. On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including four tasks neither Opus 4.6 nor Sonnet 4.6 could solve.

      A 13% lift is a significant jump in this field, especially since it includes tasks the previous generation could not solve at all. It suggests capability growth may be turning nonlinear rather than simply incremental.

    1. Gemma 4 E4B matches or exceeds GPT-4o across multiple benchmarks including MATH, GSM8K, GPQA Diamond & HumanEval

      This comparison is surprising: it suggests open models can now match closed ones, which could break up the closed ecosystem in AI, encourage broader research collaboration and innovation, and lower the barrier for enterprise adoption.

    1. Multiple community tests show llama.cpp running 1.8x faster than Ollama on the same hardware with the same model, 161 tokens per second versus 89.

      The performance gap is striking, and it suggests Ollama's wrapper layer adds significant overhead. That directly challenges its core value proposition as a "simplifying tool": if performance drops this much, why not use the underlying tool directly?

    1. The 66.6% medal rate on MLE Bench Lite, achieved autonomously over 24 hour windows, tells you something real about how this model behaves when you give it a hard problem and step back.

      The 66.6% medal rate was achieved fully autonomously over 24-hour windows, an impressive data point. It shows M2.7 can stay on task for long stretches and keep refining its problem-solving strategy. This kind of autonomous problem-solving may be the key measure of an agentic model's practical value, beyond what conventional benchmarks capture.

    2. MiniMax claims it has reduced live production incident recovery time to under three minutes on multiple occasions using M2.7.

      The claim implies M2.7 can cut production incident recovery from hours to minutes. If true, that would be a revolutionary step for operations, sharply improving availability and enterprise resilience. It deserves verification in an independent environment, since it could change how enterprises view AI's role in critical infrastructure.

    1. Contemplating mode provides significant capability improvements in challenging tasks, achieving 58% in Humanity's Last Exam and 38% in FrontierScience Research.

      The concrete numbers show the striking effect of multi-agent parallel reasoning, a capability lift approaching human level, and suggest that collaboration between agents, rather than simply scaling models, may be the key path to solving complex problems.

    1. A small model trained on fewer than 2,000 examples from real lawyers, bankers, and consultants recently beat all but the best frontier models on corporate legal work, at a fraction of the price.

      This finding challenges the "scale and compute beat everything" paradigm of AI development. A small model trained on high-quality specialist data outperforming general-purpose large models in its domain suggests the field may be shifting from "bigger is better" toward "more specialized and efficient."

    1. We see continued gains from inference scaling on larger projects, suggesting they may be solvable given enough tokens.

      This reveals a positive relationship between AI performance and inference compute, suggesting harder programming tasks may yield to larger compute budgets. It offers an important clue about where AI capability boundaries lie, and raises deeper questions about the relationship between compute investment and capability gains.

    1. The variance is also worth noting: baseline+FA TG has ±19 t/s of noise, while optimized+FA has ±0.59 t/s on x86. The fusions eliminate intermediate writes that pollute the cache, making the hot paths more predictable.

      The data reveals an unexpected but important benefit of the optimization: not just higher performance but much lower variance. By reducing cache pollution and nondeterminism in memory access patterns, the optimization makes system behavior more predictable. That matters for building reliable high-performance systems; consistency counts, not just peak numbers.

    2. Coding agents working from code alone generate shallow hypotheses. Adding a research phase — arxiv papers, competing forks, other backends — produced 5 kernel fusions that made llama.cpp CPU inference 15% faster.

      This exposes a key limitation of coding agents: working from code alone, they produce shallow hypotheses. Adding a research phase, reading papers, studying competing forks and other backend implementations, let the agent find deeper optimization opportunities and deliver a significant speedup. Agents need broader context to innovate meaningfully.

    3. The variance is also worth noting: baseline+FA TG has ±19 t/s of noise, while optimized+FA has ±0.59 t/s on x86.

      Surprisingly, the optimized code not only runs faster but also shows much lower variance (from ±19 t/s down to ±0.59 t/s). The agent's optimization addressed not just speed but the predictability of memory access patterns, an impressively holistic result.

    1. A healthcare LLM might be highly accurate for queries in English, but perform abominably when those same questions are presented in Spanish.

      The example exposes the cultural and linguistic sensitivity of AI performance, a surprising but important observation. An AI system's "accuracy" can be highly context-dependent, which challenges assumptions about general applicability. Such gaps could reinforce existing digital divides and call for more culturally aware AI evaluation frameworks.

    1. Performance: dev-browser: 3m53s, $0.88, 100% success rate — beats MCP configs, Chrome extensions, 'browser skill' stacks.

      Surprisingly, the new approach not only beats the traditional methods functionally but also wins on the metrics: a 100% success rate at comparatively low cost signals maturity and practicality, and it could make existing browser-automation stacks obsolete quickly.

    1. GLM-5.1 pushes this frontier further, delivering 3.6× speedup and continuing to make progress well into the run. While its rate of improvement also slows over time, it sustains useful optimization for substantially longer than GLM-5.

      Surprisingly, GLM-5.1 achieves a 3.6x speedup on ML workload optimization and keeps improving well into the run, where other models plateau quickly. That ability to sustain optimization matters for solving complex problems in practice.

    1. GLM-5V-Turbo scored 94.8; Claude Opus 4.6, 77.3. Not a small gap.

      Surprisingly, in the test of turning UI mockups into code, GLM-5V-Turbo's score (94.8) leads Claude Opus 4.6's (77.3) by about 17.5 points, a striking advantage in visual coding and a rare margin in model-to-model comparisons.
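      The gap in this note comes straight from the two quoted scores:

```python
# Absolute and relative gap between the two quoted benchmark scores.
glm, opus = 94.8, 77.3

gap_pp = glm - opus              # 17.5 points
gap_rel = (glm - opus) / opus    # ~22.6% relative advantage
print(f"{gap_pp:.1f} points, {gap_rel:.1%} relative")
```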

    1. Where training a language model took 167 minutes on eight GPUs in 2020, it now takes under four minutes on equivalent modern hardware. To put this in perspective: Moore's Law would predict only about a 5x improvement over this period. We saw 50x.

      Surprisingly, AI training speed improved roughly 50x in six years, far beyond the ~5x Moore's Law would predict. The gains come not only from hardware but from software optimization and algorithmic innovation. This breaks conventional expectations about the pace of technical progress and shows the AI field's distinctive acceleration.
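      The implied speedup is easy to reconstruct from the two quoted training times (167 minutes in 2020, "under four minutes" now). The exact new time is not given, so the quoted 50x implies a time of roughly 3.3 minutes:

```python
# Speedup implied by the quoted training times.
old_minutes = 167.0
speedup_at_4_min = old_minutes / 4.0          # ~41.8x: the floor for "under four minutes"
implied_minutes_for_50x = old_minutes / 50.0  # ~3.3 min would yield the quoted 50x
print(f"{speedup_at_4_min:.1f}x, {implied_minutes_for_50x:.1f} min")
```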

    1. 70% of alerts resolved in under 5 minutes

      Surprisingly, Relvy claims to resolve 70% of alerts within 5 minutes, far faster than human response. It shows AI's potential in ops automation and could fundamentally change how enterprises handle system failures.

    1. Using these ability scores, the method predicts performance on new tasks with ~88% accuracy, including for models such as GPT-4o and Llama-3.1.

      Surprisingly, the ADeLe method predicts model performance on new tasks with about 88% accuracy, including for advanced models like GPT-4o and Llama-3.1. Predictive power of that kind goes well beyond traditional evaluation and would let researchers anticipate model behavior on unseen tasks far more reliably.

    1. I-DLM-8B is the first DLM to match the quality of its same-scale AR counterpart, outperforming LLaDA-2.1-mini (16B) by +26 on AIME-24 and +15 on LiveCodeBench-v6 with half the parameters

      Surprisingly, the 8-billion-parameter I-DLM-8B beats the 16-billion-parameter LLaDA-2.1-mini by 26 points on AIME-24 and 15 on LiveCodeBench-v6. A diffusion model matching the quality of its same-scale autoregressive counterpart at half the parameters breaks the common belief that DLMs trail AR models in quality.

    1. experiments on WildClawBench show that with limited interaction and feedback, it significantly improves the performance of Qwen3-Max in real-world agent scenarios.

      Surprisingly, even with limited interaction and feedback, SkillClaw significantly improves Qwen3-Max in real agent scenarios. The system can apparently gather enough data to improve its skill library even with low user engagement, addressing the pain point of traditional systems needing heavy annotation to evolve.

    1. We projected that, given 13 GB300 GPUs, FP8 precision, physical error rate of 0.003, 1000 rounds, Surface code d=13, the fast model can achieve 0.11 μs / round.

      Surprisingly, quantum error-correction decoding can reach a remarkable 0.11 microseconds per round, orders of magnitude faster than human neurons fire. That ultra-fast processing is key to practical quantum computing and out of reach for conventional approaches.

    2. Ising-Calibration-1 repeatedly outperforms state-of-the-art open and closed models of a range of parameters. As shown in Figure 1, Ising Calibration 1 scores 3.27% better on average than Gemini 3.1 Pro, 9.68% better than Claude Opus 4.6, and 14.5% better than GPT 5.4.

      Surprisingly, Ising-Calibration-1, an AI model built specifically for quantum calibration, beats state-of-the-art general models including GPT-5.4 and Gemini 3.1 Pro at that task. It suggests specialized models can outperform general ones on specific scientific tasks, against the "general AI does everything" assumption.

    1. Artificial Analysis has also positioned Gemini 3.1 Flash TTS within its 'most attractive quadrant' for its ideal blend of high-quality speech generation and low cost.

      Surprisingly, the model is not just high quality but also highly cost-effective, earning a place in the "most attractive quadrant." Google appears to have made a notable breakthrough in balancing AI performance with commercial viability, which most users would not expect.

    1. Cost (USD) to run the evaluation: GPT-5.4 (xhigh): $1,110, Claude Opus 4.6 (max): $1,055

      Running the 452-task evaluation cost $1,110 for GPT-5.4 and $1,055 for Claude Opus 4.6, roughly $2.3-2.5 per task. Gemini 3 Flash needed only $596 for a 27.7% score (vs. 33.3% for the top models). That cost-effectiveness data is critical for model selection: if a business case can accept a 27% rather than 33% success rate, Gemini 3 Flash saves nearly half the cost, and in large-scale financial-services deployments the difference is multiplied thousands of times.
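      A quick sketch of the per-task economics; the GPT-5.4 and Opus figures are from the quote, while the Gemini 3 Flash cost and scores come from this annotation's surrounding context and should be treated as that source's numbers:

```python
# Per-task cost and the relative saving from choosing the cheaper model.
tasks = 452
cost = {"GPT-5.4 (xhigh)": 1110, "Claude Opus 4.6 (max)": 1055, "Gemini 3 Flash": 596}

per_task = {m: c / tasks for m, c in cost.items()}
saving = 1 - cost["Gemini 3 Flash"] / cost["GPT-5.4 (xhigh)"]
print({m: round(v, 2) for m, v in per_task.items()})  # ~2.46 / ~2.33 / ~1.32 per task
print(f"saving vs GPT-5.4: {saving:.1%}")             # ~46%, i.e. nearly half
```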

    1. Uni-1 ranks first in human preference Elo for Overall, Style & Editing, and Reference-Based Generation, and second in Text-to-Image.

      Surprisingly, UNI-1 performs this well on human preference: first in Overall, Style & Editing, and Reference-Based Generation, and second even in text-to-image, a foundational task. It looks like a genuinely general-purpose model rather than a specialist in one domain.

    1. Cai et al. [117] interviewed 21 pathologists who used a deep neural network to aid in the diagnosis of prostate cancer. The interviews showed that pathologists needed to learn more about the network's strengths and limitations to use it effectively. They also wanted to know the design objective of the network and the kind of data on which it was trained.
    1. TriAttention matches Full Attention reasoning accuracy while achieving 2.5x higher throughput or 10.7x KV memory reduction

      Most people assume an unavoidable accuracy-efficiency trade-off in KV-cache compression, but the proposed TriAttention matches full-attention reasoning accuracy while delivering 2.5x higher throughput or a 10.7x KV memory reduction. The result challenges the field's efficiency-accuracy paradigm and shows the traditional limit can be broken with the right method.

    1. We've seen customers go from 10-20% field accuracy with a frontier model to 99-100% just by switching to using Reducto's Deep Extract.

      Most people assume that going from frontier-model accuracy to near-perfect requires a fundamental breakthrough or large-scale training data. The author claims that simply switching to Deep Extract lifts field accuracy from 10-20% to 99-100%, a jump so far outside the usual improvement curve that it hints the existing methods may be fundamentally flawed.

    2. For the documents that matter most, it gets to 99–100% field accuracy, even out-performing expert human labelers on extraction tasks.

      Most people assume AI will always trail human experts on document extraction, especially for complex documents. The author claims Deep Extract reaches or exceeds expert accuracy (99-100%), a bold assertion that challenges the consensus that AI cannot surpass humans in document processing.

    1. Experimental results show the best model, Gemini3-pro, achieves 56.3% overall accuracy, which falls significantly to 23.0% on Level-3 tasks

      Most people assume today's frontier multimodal models approach or exceed human performance on complex tasks. The authors' data shows otherwise: even the best model falls far short on complex real-world tasks, with accuracy dropping from 56.3% overall to 23.0% on Level-3 tasks. That challenges the field's optimistic self-assessment and reveals how extremely hard real-world multimodal agent tasks are.

    1. our approach improves Qwen3.5-4B from 63.8 percent to 66.7 percent (+2.9pp) and Qwen3-30B-A3B from 58.0 percent to 69.5 percent (+11.5pp)

      Most people assume only large language models make significant gains from reinforcement learning on complex multi-turn tasks, but the authors show even a 4B model improves substantially with their method, and the 30B model gains a remarkable 11.5 percentage points, challenging the "bigger is better" assumption.

    2. the trained 4B model exceeding GPT-4.1 (49.4 percent) and GPT-4o (42.8 percent) despite being 50 times smaller

      Most people assume model size correlates directly with performance: bigger must be better. The authors show a mere 4-billion-parameter model, after RL training, beating GPT-4.1 and GPT-4o, models 50 times its size, challenging the mainstream view that parameter count decides everything.

    1. VMs provision in under 700ms from API request to ready machine.

      Most people assume booting a full VM takes seconds or even minutes, which is unsuitable for latency-sensitive AI workloads. Freestyle's claim of full VMs ready in under 700 ms challenges conventional virtualization wisdom and suggests their stack may redefine infrastructure startup speed.

    1. NVIDIA yields unmatched inference throughput across the broadest range of workloads, from massive LLMs to advanced vision language models, to generative recommender systems and more, on industry-standard benchmarks.

      Most people assume multiple competing platforms each excel in different niches, but the author claims NVIDIA excels across all workloads, challenging the consensus of diversified competition and implying NVIDIA may be more dominant than generally believed.

    2. Co-designed hardware, software, and models are key to delivering the highest AI factory throughput and lowest token cost. Measuring this goes far beyond peak chip specifications.

      Most people assume AI performance is decided mainly by chip specs, but the author argues that co-designing hardware, software, and models is what matters, challenging the chip-centric view of the industry and implying full-stack optimization beats chasing raw silicon performance.

    1. Byte for byte, the most capable open models

      Most people assume open models cannot compete with closed, proprietary ones, but the author calls Gemma 4 the most capable open model "byte for byte," challenging that industry consensus. It implies open models have overtaken commercial closed ones on certain metrics, a contrarian position.

  3. Mar 2026
    1. When the sudden drop to a pianissimo occurred towards the ending of the piece, the perceived arousal responses of CHM and WM dropped slightly but rose again immediately to end on a high arousal. These two groups of listeners appear to have anticipated a return to a loud and majestic close and therefore kept their arousal responses higher than those of the NM.

      please highlight anything related to music performance practice

    2. CHM, who are more experienced with the instruments and compositional techniques used in Chinese orchestral music, might have had an idea of which features figure more prominently in the communication of particular intentions, and therefore would have more information available for their judgments.

      please highlight anything related to music performance practice

    3. The perception of affective intentions in music is influenced by the degree of familiarity listeners have with a musical tradition, the content implicated in the music, and the complex sonic environment created by the composer's creation and the musicians' interpretation.

      please highlight anything related to music performance practice

    1. actos de extrañeza

      Where do the "actos de extrañeza" come from? I don't remember. I work a lot with acts of estrangement, starting from creativity training for improvisation. There is a text called "danzas privadas" (private dances), which are bodily acts of estrangement. I recommend it.

  4. Feb 2026
    1. Lua is more dynamic than Wren which makes its job harder. Lua also tries very hard to be compatible across a wide range of hardware and compilers. If you have a C89 compiler for it, odds are very good that you can run Lua on it. Wren cares about compatibility, but it requires C99 or C++98 and IEEE double precision floats. That may exclude some edge case hardware, but makes things like NaN tagging, computed gotos, and some other tricks possible.

      With these words, you might expect that programs that target the Wren reference implementation are faster than those written in Lua. But (again), languages are not language implementations, and the language implementation matters; while Wren programs written for the reference implementation available here are generally faster than programs that run on lua.org's Lua implementation, they're not faster than programs that run on LuaJIT.

  5. Jan 2026
    1. This makes questions like “how fast is WebAssembly” a bit hard to answer. You don’t ask how fast algebraic notation is—it’s not a very sensible question. Taken in the context of something like JavaScript, the language is only as fast as the engine running it. JavaScript the language has no speed, but you can benchmark JS engines like V8, SpiderMonkey, and JavaScriptCore. You can benchmark the IO libraries of JS runtimes like Bun, Deno, and Node. What people actually mean is “how useful are the constructs of this language to efficient mappings of modern hardware” and “what is the current landscape of systems taking advantage of these constructs”.
  6. Jul 2025
  7. Apr 2025
  8. Mar 2025
  9. Feb 2025
  10. Jan 2025
    1. for - article - Medium - The truth of San Vicente in the voice of Milton Nascimento Mosaic Institute - Eduardo Campos - 2017, Oct 27 - from - music - review Milton Nascimento. Lo Borges - Clube Da Esquina - Classic Music Review - San Vicente - altrochchick - 2021, April 11 - https://hyp.is/krcU1suaEe-s5zcLEaXR3Q/altrockchick.com/2021/04/11/milton-nascimiento-lo-borges-clube-da-esquina-classic-music-review/ - from - youtube - music - San Vicente - Milton Nascimento - Live at Montreal Jazz Festival - moving performance - https://hyp.is/oElbPsucEe-nqit3PkZ2Bg/www.youtube.com/watch?v=H0BLHm7uyO0 - Investigate possibility - Deep Humanity BEing journey - San Vicente - Milton Nascimento

    1. for - music - review Milton Nascimento. Lo Borges - Clube Da Esquina - Classic Music Review - San Vicente - altrochchick - 2021, April 11 - to article - Medium - The truth of San Vicente in the voice of Milton Nascimento Mosaic Institute - Eduardo Campos - 2017, Oct 27 - https://hyp.is/V6DIJMuaEe-hQ1OPLsWsTw/medium.com/instituto-mosaico/a-verdade-de-san-vicente-na-voz-de-milton-nascimento-3ca69d241c53 - from - youtube - music - San Vicente - Milton Nascimento - Live at Montreal Jazz Festival - moving performance - https://hyp.is/oElbPsucEe-nqit3PkZ2Bg/www.youtube.com/watch?v=H0BLHm7uyO0

  11. Nov 2024
    1. In my brag document, I like to do this by making a section for areas that I’ve been focused on (like “security”) and listing all the work I’ve done in that area there. This is especially good if you’re working on something fuzzy like “building a stronger culture of code review” where all the individual actions you do towards that might be relatively small and there isn’t a big shiny ship.

      This is such a clever way to create a container that otherwise might not have existed for that work. I wonder if this would be a good way to highlight glue work?

  12. Oct 2024
  13. Sep 2024
    1. Recommended to take caffeine about 30 minutes before you want peak performance (effects start 5 minutes beforehand). Peak performance ends after roughly 60 minutes, but effects stay in the system for far longer.

      Assumed conditions: blood glucose is not high and the stomach is not very full. Also assumes the entire caffeinated drink is consumed in a short period of time.

      (~18:00)

      Because of effects related to caffeine and sleep, maybe recommended to do the most mentally or physically intensive tasks earlier in the day depending on sleep schedule.

  14. Aug 2024
  15. Jun 2024
  16. May 2024
    1. I have done several studies on these questions, and this slide summarizes a 150-page report 00:19:33 on what the education systems in the top-right quadrant I showed you at the start actually do. These so-called best education systems, both high-performing and equitable: we went and looked at what 00:19:45 they do with their teaching profession, what they do with their teachers, what policies target teachers. There is a very wide variety of approaches, but we found three common elements. 00:19:57 First, in these countries teachers get long, mandatory practical experience during their initial training, so they face a classroom from their initial 00:20:10 training onward, with a mentor, and they progress; and this practical experience is long. I insist on this: it is not a matter of three weeks or a month in a classroom. It is at 00:20:24 minimum a semester; there is variety here too, but yes, minimum a semester, a year, ideally perhaps two years, I no longer have the details in mind. Second, teachers can take 00:20:38 continuing training that meets their needs, not training offered at the national level, but training that answers the needs of a particular school, the school they are in. And the last point: 00:20:52 teacher evaluation mechanisms are linked to continuing training and to what is offered to them in continuing training. The connection is made: we have seen you can improve in such-and-such an area, 00:21:03 so we will offer you continuing training that meets that need.
    1. We are all 01:02:17 caught in performance anxiety and in an anxiety of social competition, and so the only way to get through is truly not to care about 01:02:31 that competition. But that means we will never organize to change things, hence the danger of personal development. Well, actually no, because the better 01:02:46 we feel, the less we want to organize; it is partly why activists are angry, but precisely because it is important to hold on to that emotion in order to organize.
  17. livejaverianaedu-my.sharepoint.com
    1. The performance of testimonios as a potential form of resistance (digital storytelling) can offer a way to express, and also to bear witness to, how human rights are denied in the detention and deportation of immigrants.

      In this kind of performance, individuals can share their first-hand experiences, recounting the details of their situation, the conditions in which they are detained, and the abuses they face during the deportation process. In doing so, they not only express their own voice and resistance but also shed light on systematic practices that are unjust and violate rights.

      Digital storytelling further amplifies the impact of these testimonies by allowing them to be shared on online platforms, reaching wider audiences and enabling solidarity and support from global communities. Moreover, the use of different digital media, such as video, photos, text, and social networks, enriches the narrative experience and makes it more compelling and accessible to those seeking to understand and advocate for change.

      Although Brysk (2013) asks whether social media can be used as a performative tool (or as a mask) to claim rights, since it can generate an emotional response even when that moment of care may evaporate quickly without lasting effect, I propose that the performance of personhood, in its character as a mask that allows rights to be claimed, is not the only speech act through which to affirm one's own humanity.

      We must consider both the limits and the possibilities of social media in affirming our humanity, and not rely solely on online "performance"; there are many ways to express our existence and struggle. The key claim: the "performance of personhood" refers to how we present ourselves online to claim rights and recognition, yet other speech acts also affirm our humanity, such as resistance, solidarity, and the telling of experience. Brysk poses an interesting question: social media can be a platform for expressing our identities and struggles, but it can also become a "mask" behind which we hide our real situation. Social media can generate immediate emotional responses, but these can fade quickly; momentary attention does not always translate into lasting change.


  18. Mar 2024
    1. Practice, learning, getting better at what you do is hard work and not “fun” or “flow”. “Flow” is for performance, rather than practice, for when you’re on stage, rather than in the rehearsal studio.

      Comment: Flow as being performance rather than practice

    1. The only issue left to tackle is the performance issue. In many cases it actually turns out to be a non-issue because of the clustered index on AgreementStatus (AgreementId, EffectiveDate) - there's very little I/O seeking going on there. But if it is ever an issue, there are ways to solve that, using triggers, indexed/materialized views, application-level events, etc.
  19. Jan 2024
  20. Dec 2023
    1. Interpreting accuracy is one of the most commonly used indicators of cognitive demands in experimental interpreting studies. One possibility to assess interpreting performance is to analyse interpreting accuracy based on meaning units. The methodological approaches used thus far, however, have some drawbacks: (a) they are limited to an assessment of sense consistency with no indication of the logical cohesion of the rendition, (b) they do not take into account the difference between unintended and strategic omissions or, more generally, the prioritization of source speech information as an interpreting strategy, and (c) they do not allow for the observation of fluctuations of cognitive load or effects of fatigue. In this article, we will present a refined approach to unit-based accuracy analysis that may contribute to solving the issues mentioned above.

      This piques my interest, especially (b).

      Omission of interpreted information: deliberate (an interpreting strategy), or inadvertent (from cognitive overload)?

      Weighting of source-speech information: each meaning unit surely carries a different weight, and assigning those weights is highly subjective.

      Semantic coherence, logical cohesion, and transitions across the whole discourse are another major challenge. How should they be judged? Is a connective merely one semantic unit to be given some weight, or does it stand on its own, requiring a separately designed assessment method?

    1. While social media emphasizes the show-off stuff — the vacation in Puerto Vallarta, the full kitchen remodel, the night out on the town — on blogs it still seems that people are sharing more than signalling.

      Social media as performance, blogs as voice. Especially over longer periods of time, blogs become a qualitatively different thing, where the social media timelines remain the same. Vgl [[Blogs als avatar 20030731084659]] https://www.zylstra.org/blog/2020/08/your-blog-is-your-avatar/ Personal relationships are the stuff of our lives.

  21. Nov 2023
    1. All about Caching: Strategies, Challenges and Optimization

      Caching is a crucial technique in modern #webservices, helping to improve #performance, reduce #latency, and minimize the load on servers and networks. The principle is simple: storing copies of data in a more quickly accessible location to enable faster access to that data. But, as with many technologies, the devil is in the details. In fact, creating a good configuration is very difficult and the smallest mistake can be very detrimental to the performance of our #website.

      caching

    1. A more efficient but more complicated way to simulate perfect guessing is to guess both options simultaneously

      NB: Russ talking here about flattening the NFA into a DFA that has enough synthesized states to represent e.g. in either state A or state B. He's not talking about CPU-level concurrency. But what if he were?

  22. Sep 2023
    1. TurboWish is a framework for profiling Rust programs, focused on illuminating the performance and resource usage of task-oriented code written with async/await.

      TurboWish is a framework for profiling Rust programs, focused on illuminating the performance and resource usage of task-oriented code written with async/await.

  23. Aug 2023
    1. Ten minutes before sleep, do the following: PRAY

      It's a combination of visualization, commitment, and meditation

      Request the subconscious through this act of prayer.

      Also visualize the outcome and process of that which you aspire to do the following day, and even that which you want to achieve the following month(s). Thus, visualize the following: Big Picture, Milestones, and yourself the next day.

  24. Jul 2023
    1. only a small fraction of the features of each component, and your program consumes 10 or 100 times the hardware resources of a fully custom program, but you write 10% or 1% of the code you would have written 30 years ago.

      You use only a small fraction of the features of each component, and your program consumes 10 or 100 times the hardware resources of a fully custom program, but you write 10% or 1% of the code you would have written 30 years ago.

      • Caffeine as backbone of civilization
      • caffeine archetype (I am the mindful master)
      • high correlation between flow & caffeine
      • associate caffeine with flow (I also do this with flow music)
      • shortcut struggle phase with caffeine
      • caffeine timing (wait 1 to 1.5 hours after waking; no caffeine within 10 hours of sleep)
      • proper dosage (test what works); higher dosage when short on sleep
      • what caffeine synergizes with most (for me, probably coffee, in particular espresso); double water intake when drinking caffeine (I always try to do this)
      • keep caffeine sensitivity high (1 day per week off, 1 week per quarter off)

  25. Jun 2023
    1. The 4 (behavioral) keypoints for great physical and mental as well as cognitive health:

      One) (2:00-4:05) View sunlight early in the day. The light needs to reach the eyes--increasing alertness, mood, and focus, through certain receptors. Also increases sleep quality at night, according to Huberman. Ideally five to ten minutes on a clear day, and ten to twenty minutes on an overcast day. No sunglasses, and certainly not through windows and windshields. If no sun is out yet, use artificial bright light. Do this daily.

      Two) (4:05-6:10) Do physical exercise each and every day. Doesn't have to be super intense. Huberman recommends zone two cardiovascular exercise. Walking very fast, running, cycling, rowing, swimming are examples. He says to get at least between 150 and 200 minutes of this exercise per week. Some resistance training as well for longevity and wellbeing, increases metabolism as well. Do this at least every other day, according to Huberman. Huberman alternates each day between cardiovascular exercise and resistance training.

      Three) (6:20-9:10) People should have access to a rapid de-stress protocol or tool, something one can do quickly and instantly, without friction. Even a single breath can de-stress: a deep, long inhale through the nose, one quick extra inhale through the nose to completely fill the lungs, and then a long exhale through the mouth.

      Four) (9:12-14:00) To have a deliberate rewiring nervous system protocol to use. A thing that can be done is NSDR (Non-Sleep Deep Rest protocol), this is specifically to increase energy.

      Ideally the NSDR should be done after each learning session as well to imitate deep sleep (REM) and therefore accelerate neuroplasticity and thus rewire the nervous system; increasing the strength of connections between neurons and therefore increase retention significantly.

      NSDR is also a process of autonomity and control, it allows one to find that they are in control of their body and brain. It makes one realize that external factors don't necessarily have influence. According to Huberman, NSDR even replenishes dopamine when it is depleted, making it also suitable for increasing motivation.

    1. The musical depiction of the lyrics from Figure 9.2 illustrates an additional aspect of blues performance practice—the use of call and response. Originally practiced by a large group of people, this improvisational technique involves sharing ideas between the leader and her/his followers. Mastering the call and response technique is especially important at the beginning of our encounter with jazz improvisation. It engages us in a meaningful dialogue that includes exchanging and communicating musical ideas. The communicative aspect of call and response is relatively straightforward in the context of verbal conversation.

      In a musical setting, however, when spoken words and sentences are replaced with motifs and melodic phrases, the structure of the call and response might not be as obvious. To be a good communicator, we have to know how to listen, pay close attention to what the other musicians are playing, and try to be receptive to their ideas. In certain scenarios, however, the use of call and response technique might create less than desirable effects. For instance, when the call and response takes the form of exact and immediate repetition, it might be impressive but not necessarily in keeping with the surrounding musical context. A much more subtle way of thinking about the call and response technique involves musical interaction at the level of the entire performance in which non-adjacent sections relate to one another, and where the flow of the performance is regulated by logically introduced musical ideas. In creating a musical narrative, then, we can also respond to each other’s playing, but these responses are not as obvious as simple repetitions tend to be. We can demonstrate our listening skills, for instance, by incorporating an idea that we have previously heard (i.e. a rhythmic motive from the drummer, or a melodic gesture from the guitarist) and develop it in such a way that leads to a more satisfying musical discourse. The call and response aspect of improvisation means that musicians understand each other’s intentions, have an unspoken agreement, so to speak, and project them with a high level of personal expression and musical commitment.

  26. May 2023
  27. Apr 2023

    1. Immediately before stepping on stage, he suggests using the tip of your right pinky finger to find the upper end of your trousers zipper. If your fingernail clicks against the zipper’s metal pull-tab, then you are safe and ready to make your entrance. If your pinky slides in up to your knuckle, however, then you have to XYZ PDQ (eXamine Your Zipper, Pretty Darn Quick)!

      Harry Lorayne used a pinky check, clicking the fingernail of his pinky finger against the pull-tab of his zipper to ensure his fly was closed, every night before appearing on stage, to prevent embarrassment and to maintain credibility as a memory expert.

      MAGIC MENTOR MONDAY: Harry Lorayne - Chamber Magic, by Steve Cohen

  28. Mar 2023
  29. Feb 2023
  30. Jan 2023
    1. Because endpoints are URLs, you can – and should – monitor them to ensure they stay online. When talking about online services and websites, you’ll often hear the word “uptime”. This is the percentage of time your application stays up – in other words, the percentage of time your app is accessible and functioning. Outages and performance errors will lower your overall percentage.

      Monitoring your endpoints also gives you metrics on which endpoints are being accessed and what types of API calls developers are making. This can help you track user behavior, and gain insight into which endpoints are highly trafficked so you can maintain your performance.
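      The uptime figure described here is just the success rate of periodic health-check probes. A minimal sketch of the arithmetic (the probe data and the one-minute interval are invented for illustration):

```python
def uptime_percentage(probe_results):
    """Uptime = percentage of periodic health checks that succeeded.
    probe_results: list of booleans, one per probe."""
    if not probe_results:
        return None  # no probes yet: uptime is undefined
    return 100.0 * sum(probe_results) / len(probe_results)

# 1440 one-minute probes over a day, of which 3 failed:
probes = [True] * 1437 + [False] * 3
print(round(uptime_percentage(probes), 2))  # 99.79
```

      In practice each boolean would come from an HTTP request to the endpoint, and the same probe log doubles as the per-endpoint traffic metric the passage mentions.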
  31. Dec 2022
  32. Nov 2022
    1. Scaffolding is the act of providing learners with assistance or support to perform a task beyond their own reach if pursued independently when “unassisted.”

      Wood, Bruner, & Ross (1976) define scaffolding as what? (Metiri Group, Cisco Systems, 2008) The act of providing learners with assistance or support to perform a task beyond their own reach if pursued independently when "unassisted."

      What term do Wood, Bruner, & Ross (1976) define as "The act of providing learners with assistance or support to perform a task beyond their own reach if pursued independently when 'unassisted.'"? (Metiri Group, Cisco Systems, 2008) Scaffolding

    1. It's not entirely the Twitter people's fault. They've been taught to behave in certain ways. To chase likes and retweets/boosts. To promote themselves. To perform.

      Twitter trains users to behave a certain way. It rewards a specific type of performance. In contrast, until now at least, M is focused on conversation (and the functionality of the apps reinforce that, with how boosts and likes work differently)

    1. The most intriguing result in the present study is the positive effect of white noise on performance for the ADHD children. This noise effect was present in both the non-medicated and medicated children. This supports the MBA (Moderate Brain Arousal) model (Sikström & Söderlund, 2007), suggesting that the endogenous (neural) noise level in children with ADHD is sub-optimal. MBA accounts for the noise-enhancing phenomenon by stochastic resonance (SR). The model suggests that noise in the environment introduces internal noise into the neural system through the perceptual system. Of particular importance, the MBA model suggests that the peak of the SR curve depends on the dopamine level, so that participants with low dopamine levels (ADHD) require more noise for optimal cognitive performance compared to controls.

      Author's self-described "most intriguing result"

    2. Results: Noise exerted a positive effect on cognitive performance for the ADHD group and deteriorated performance for the control group, indicating that ADHD subjects need more noise than controls for optimal cognitive performance

      Explains why studies on music in the general population have conflicting results (i.e., a general decrease in capacity to focus in noisy environments). Wonder if this relates to atonal or discordant and dissonant music (free jazz, avant-garde, etc.) or polyrhythmic and odd-metered time signatures.

    1. Rust lets us explicitly state our desires to the compiler

      This is the key. It follows that the same results, then, could be seen if we devised a way to communicate the same desires to the machine when we're dealing with JS. (My preferred thought experiment: imagine a docs/ directory in the repo where these sorts of things are documented for the benefit of other programmers—alongside any other rationale that you would naturally hope to communicate as well—and that the computer itself were made to be able to read and act upon the very same documentation to guide its behavior.) See http://cr.yp.to/qhasm/literature.html

  33. Sep 2022
  34. Aug 2022
    1. The lack of CPU power in those days also meant there was deep skepticism about the performance of interpreters in general, and in the user interface in particular. Mention "interpreted language" and what sprung to mind was BASIC or UCSD Pascal, neither regarded as fast.

      still widely held today

    1. The more I think about it, the less I think there is a meaningful definition of the one true run time. I have put significant effort into making sure that runtimes are consistent but, however we do this, it makes our experiments less realistic. With rare exceptions (perhaps some safety critical systems) our algorithms will not be used in highly controlled situations, moderated to reduce variability between runs or to maximise speed of this particular process. Algorithms will be run as part of a larger system, in a computer sometimes with many other processes and sometimes with few and, because of the way that computing has developed, sometimes in raw silicon and sometimes in virtual machines. So the more I think about it, the more I think that what matters is the distribution of run times. For this, your experiment on your hardware is important, but so are future recomputations on a wide variety of VMs running on a variety of different hardware.

      The truth of this has consequences not just for the meta-issue of recomputation, but for the entire raison d'être of doing performance studies to begin with.
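      The "distribution of run times" view is easy to make concrete: time many runs and report a spread, not a single number. A sketch under that framing (the workload and function names are arbitrary stand-ins):

```python
import statistics
import time

def runtime_distribution(fn, runs=50):
    """Time repeated runs of fn and summarize the distribution,
    rather than reporting a single 'one true run time'."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "min": samples[0],
        "median": statistics.median(samples),
        "p90": samples[int(0.9 * (len(samples) - 1))],
        "max": samples[-1],
    }

stats = runtime_distribution(lambda: sorted(range(1000), reverse=True))
assert stats["min"] <= stats["median"] <= stats["p90"] <= stats["max"]
```

      Rerunning this on different hardware and VMs widens the sample, which is exactly the point: the min is a property of one controlled setup, the distribution is a property of the algorithm in the world.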

    1. This is what I observed in hyper productive people: some of them have a unique, novel system of organizing their knowledge, but many of them don't. So, having such a system is probably not that important.

      I see these sorts of statements often, and never taken into account is the diversity of ways of thought, general intelligence, quality of memory, or many other factors that make individuals different as well as their outcomes different.

      Different people are going to use different tools differently and have different outcomes.

  35. Jul 2022
    1. Another key idea here is to separate meaning from tactics. E.g. the meaning of sorting is much simpler and more compact than the dozens of most useful sorting algorithms, each one of which uses different strategies and tactics to achieve the same goal. If the “meanings” of a program could be given in a way that the system could run them as programs, then a very large part of the difficulties of program design would be solved in a very compact fashion. The resulting “meaning code” would constitute a debuggable, runnable specification that allows practical testing. If we can then annotate the meanings with optimizations and keep them separate, then we have also created a much more controllable practical system.

      See also http://cr.yp.to/qhasm/literature.html
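      The separation of meaning from tactics can be made runnable. Below is a sketch of the "meaning" of sorting as a checkable specification, which every one of the dozens of sorting tactics must satisfy (the names are my own, not Kay's):

```python
from collections import Counter

def meets_sorting_spec(inp, out):
    """The 'meaning' of sorting, stated as a runnable check:
    the output is ordered and is a permutation of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

# Any tactic (quicksort, mergesort, the library sort) must satisfy it:
data = [3, 1, 2, 1]
assert meets_sorting_spec(data, sorted(data))      # a correct tactic
assert not meets_sorting_spec(data, [1, 1, 2])     # lost an element
assert not meets_sorting_spec(data, [3, 2, 1, 1])  # not ordered
```

      The spec is itself a (very slow) debuggable program in the sense of the quote; an optimizing annotation would swap in a real algorithm while keeping this check as the testable ground truth.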

  36. Jun 2022
    1. there is clear evidence that explicitly teaching reading strategies to students improves their overall academic performance, such instruction is often limited to developmental reading or study skills courses (Saxby 2017, 37-38).

      Teaching reading strategies to students improves their overall academic performance, but this instruction is often limited to developmental reading or study skills courses.

      ref: Saxby, Lori Eggers. “Efficacy of a College Reading Strategy Course: Comparative Study.” Journal of Developmental Education 40, no. 3 (2017): 36-38.

      Using Hypothes.is as a tool in a variety of courses can help to teach reading strategies and thereby improve students' overall academic performance.

  37. May 2022
    1. therefore the gases should be removed from the condenser. This can be achieved by installing vacuum pumps, compressors, or steam ejectors. The condenser heat removal is done either by using a cooling tower or through cold air circulation in the condenser. The condensate forms a small fraction of the cooling water circuit, a large portion of which is then evaporated and dispersed into the atmosphere by the cooling tower. The cooling water surplus (blow down) is disposed of in shallow injection wells. In the single flash condensation system, the condensate does have direct contact with the cooling water.
    2. Single flash power plants are classified according to their steam turbine types, i.e., the turbine exit conditions. Two such basic types are the single flash with a condensation system and the single flash back pressure system. In the first type, a condenser operating at very low pressure is used to condense the steam leaving the steam turbine. The condenser should operate at low vacuum pressure to maintain a large enthalpy difference across the expansion process of the steam turbine, hence resulting in a higher power output. The geothermal fluid usually contains non-condensable gases which are collected at the condenser. Such a collection of gases may raise the condenser pressure,
    1. Some people have expressed surprise and even doubt that it could be faster to read the files twice than reading them just once. Perhaps I didn't manage to explain very clearly what I was doing. I am talking about cache pre-loading, in order to have the files in disk cache when later accessing them in a way that would be slow to do on the physical disk drive. Here is a web page where I have tried to explain more in detail, with pictures, C code and measurements.
    2. Remember the caching. Reading two files sequentially into memory from the physical disk can be faster than reading them both in parallel, alternating between them (moving the read head back and forth). Everything you do later, with all the data cached in memory, is relatively much faster. But yes, it depends on the data, and this is an average. Two files that actually do differ in the beginning will be faster to compare byte by byte.
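      The read-twice shape being described: pull each file sequentially and fully into memory, so the slow physical reads happen in one contiguous pass per file, then do all comparison work against the in-memory (cached) copies instead of alternating reads between the two files on disk. A minimal sketch of the shape, not a benchmark:

```python
import os
import tempfile

def files_equal(path_a, path_b):
    """Read each file sequentially and fully into memory first
    (warming the cache in one pass per file), then compare the
    in-memory bytes rather than seeking back and forth on disk."""
    with open(path_a, "rb") as f:
        data_a = f.read()  # one sequential pass over file A
    with open(path_b, "rb") as f:
        data_b = f.read()  # one sequential pass over file B
    return data_a == data_b

# Tiny self-contained demo with two identical temp files:
with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "a"), os.path.join(d, "b")
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(b"same contents")
    assert files_equal(a, b)
```

      As the note says, this wins on average for a spinning disk; files that differ early are still faster to compare chunk by chunk with early exit.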
  38. Apr 2022
    1. (7) ReconfigBehSci on Twitter: “@ToddHorowitz3 probably- and I think there are many interesting questions around why he is there and whether he should be there. But to answer those properly, looking at the performance of the model seems important and interesting to me- that is all I am saying” / Twitter. (n.d.). Retrieved March 6, 2021, from https://twitter.com/SciBeh/status/1324389147050569734

    1. The amount of resistance and prejudices which the farsighted originators of FORTRAN had to overcome to gain acceptance of their product is a memorable indication of the degree to which programmers were preoccupied with efficiency, and to which trickology had already become an addiction
  39. Mar 2022
    1. I just dislike how non-native it feels and looks.

      I suspect VS Code's non-native look has actually contributed to the uptake from a subset of its users. A fullscreen terminal-based editor like Vim is also non-native as far as widget sets are concerned, but pleasantly so in at least one respect. When Chrome was released, when you asked people what they liked about it, sure they'd say it was fast and whatnot, but for a non-trivial segment of the world, they just liked how visually slim it was. They appreciated how little chrome was actually in Chrome. VS Code managed to reap dividends on its imports of the same approach to its "shell", even while not being particularly fast. (People point to VS Code as an example of a snappy Electron app, and they're not wrong insofar as the comparison goes to other typical Electron apps; on an absolute, non-weight-class-adjusted scale, though, VS Code is still pretty clunky—people just ignore it because of how slim it appears in comparison to IDEs like MonoDevelop[1].)

      1. https://upload.wikimedia.org/wikipedia/commons/0/01/Monodevelop-main-window.png
  40. Feb 2022