46 Matching Annotations
  1. Apr 2026
    1. Most of the conversation around MCP is about what it enables. If you flip it - and ask what agents actually need to run efficiently - the math looks pretty broken.

      MCP looks great if you start from the tool looking for a problem to solve. If you start from the problem, there are more efficient tools.

    1. A local MCP server based on FastMCP that automatically turns the 𝕏 API's OpenAPI spec into MCP tools.

      Surprising: 𝕏 officially supports the MCP protocol directly, automatically converting its OpenAPI spec into MCP tools, which greatly lowers the barrier for integrating AI agents with the 𝕏 platform. This kind of standardisation may become the future trend for AI tool integration, letting different AI systems work together more seamlessly.
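The OpenAPI-to-tools mapping the note describes can be reduced to a small sketch. This is not FastMCP's actual implementation, just an illustration of the idea: walk the spec's `paths` object and emit one MCP-style tool definition per operation. The spec fragment, `openapi_to_tools`, and field choices below are all hypothetical.

```python
# Illustrative sketch only (not FastMCP internals): walk an OpenAPI
# spec's paths and emit one MCP-style tool definition per operation.

def openapi_to_tools(spec: dict) -> list[dict]:
    """Map each OpenAPI operation to a tool definition."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        p["name"]: {"type": p["schema"]["type"]}
                        for p in op.get("parameters", [])
                    },
                },
            })
    return tools

# A tiny made-up fragment in the shape of the 𝕏 API's OpenAPI spec.
spec = {
    "paths": {
        "/2/tweets/search/recent": {
            "get": {
                "operationId": "searchRecentTweets",
                "summary": "Search recent posts",
                "parameters": [{"name": "query", "schema": {"type": "string"}}],
            },
        },
    },
}

tools = openapi_to_tools(spec)  # one tool per OpenAPI operation
```

Because the tool name, description, and input schema all come straight from the spec, regenerating the tools after an API change is automatic, which is what makes the approach attractive.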

    1. A connector solves this by packaging an integration into a single, reusable entity using the MCP protocol.

      Surprising: Mistral uses MCP (the Model Context Protocol) to package complex integrations into a single reusable entity. This standardised approach greatly simplifies the development of enterprise AI applications, removes the need to re-implement the same integration logic repeatedly, and improves security and maintainability.

    1. Most skills require you to install a dedicated CLI. But what if you aren't in a local terminal? ChatGPT can't run CLIs. Neither can Perplexity or the standard web version of Claude.

      Surprising: many skill-based AI tools depend on a local CLI, but mainstream AI platforms like ChatGPT and Perplexity actually cannot execute CLI commands. This limitation means many skills fail completely outside a terminal environment, severely fragmenting AI tool functionality.

    2. When a remote MCP server is updated with new tools or resources, every client instantly gets the latest version. No need to push updates, upgrade packages, or reinstall binaries.

      Surprising: after an MCP server is updated, all clients automatically get the latest version with no manual updates. Such instant updating is rare in software distribution; it removes version-management complexity and ensures users always have the latest features, an advantage traditional distribution models can't match.

    3. For remote MCP servers, you don't need to install anything locally. You just point your client to the MCP server URL, and it works.

      Surprising: the MCP protocol lets remote servers be used without any local installation, which greatly simplifies integrating AI tools. Users just point at the server URL to get the functionality, without installing software on every device; this zero-install model is quite distinctive in AI tool integration.

  2. Feb 2026
  3. Jan 2026
    1. Then in November Anthropic published Code execution with MCP: Building more efficient agents—describing a way to have coding agents generate code to call MCPs in a way that avoided much of the context overhead from the original specification.

      Still, Anthropic made MCP more approachable at the end of the year with Code execution with MCP. Meaning?
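A toy reduction of the pattern the post describes: instead of loading every tool schema into the model's context, the agent writes a short script against a generic entry point, so only the script and its final result consume context. `call_tool` and the registry below are hypothetical stand-ins, not the MCP SDK.

```python
# Toy reduction of the "code execution with MCP" idea. The agent
# generates code; intermediate tool output stays in variables instead
# of flowing through the model's context window.

def call_tool(name: str, **args):
    """Stand-in for an MCP client dispatching a call to a server."""
    registry = {
        "list_issues": lambda repo: [
            {"id": 1, "title": "crash on startup", "open": True},
            {"id": 2, "title": "typo in docs", "open": False},
        ],
    }
    return registry[name](**args)

# Agent-generated code: only the one-line summary needs to flow back
# into the model's context, not the full issue list.
issues = call_tool("list_issues", repo="example/repo")
open_count = sum(1 for issue in issues if issue["open"])
result = f"{open_count} open issue(s)"
```

The saving comes from the shape of the interaction: one generic dispatch function in context instead of N tool schemas, and code-side filtering instead of round-tripping raw data through the model.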

    2. Anthropic themselves appeared to acknowledge this later in the year with their release of the brilliant Skills mechanism—see my October post Claude Skills are awesome, maybe a bigger deal than MCP. MCP involves web servers and complex JSON payloads. A Skill is a Markdown file in a folder, optionally accompanied by some executable scripts.

      suggestion that Anthropic's own Skills (a markdown file w perhaps some scripts) may be bigger than their MCP

    3. The reason I think MCP may be a one-year wonder is the stratospheric growth of coding agents. It appears that the best possible tool for any situation is Bash—if your agent can run arbitrary shell commands, it can do anything that can be done by typing commands into a terminal. Since leaning heavily into Claude Code and friends myself I’ve hardly used MCP at all—I’ve found CLI tools like gh and libraries like Playwright to be better alternatives to the GitHub and Playwright MCPs.

      Author thinks MCP may be a temporary phenomenon as a protocol, mostly bc coding agents like Claude Code don't need it. The last sentence, that CLI tools already exist that are better than the corresponding MCP servers for those tools, goes back to: why vibecode/AI-the-things if there's perfectly good automation already around? I think that MCP may still be useful locally for personal tools though. It helps structure what you want your AI to do.

    4. If you define agents as LLM systems that can perform useful work via tool calls over multiple steps then agents are here and they are proving to be extraordinarily useful. The two breakout categories for agents have been for coding and for search.

      Recognisable: AI agents as chunked / abstracted-away automation. This also creates the pitfall [[After claiming to redeploy 4,000 employees and automating their work with AI agents, Salesforce executives admit We were more confident about…. - The Times of India]] where regular automation is replaced by AI.

      Most useful for search and for coding

    5. It turned out that the real unlock of reasoning was in driving tools. Reasoning models with access to tools can plan out multi-step tasks, execute on them and continue to reason about the results such that they can update their plans to better achieve the desired goal. A notable result is that AI assisted search actually works now. Hooking up search engines to LLMs had questionable results before, but now I find even my more complex research questions can often be answered by GPT-5 Thinking in ChatGPT. Reasoning models are also exceptional at producing and debugging code. The reasoning trick means they can start with an error and step through many different layers of the codebase to find the root cause. I’ve found even the gnarliest of bugs can be diagnosed by a good reasoner with the ability to read and execute code against even large and complex codebases.

      Reasoning models are useful for:
      - driving tools (MCP)
      - search (it actually works now)
      - debugging/writing code

  4. Dec 2025
    1. some practices that can make those discussions easier, by starting with constraints that even skeptical developers can see the value in:
       - Build tools around verbs, not nouns. Create checkEligibility() or getRecentTickets() instead of getCustomer(). Verbs force you to think about specific actions and naturally limit scope.
       - Talk about minimizing data needs. Before anyone creates an MCP tool, have a discussion about what the smallest piece of data they need to provide for the AI to do its job is and what experiments they can run to figure out what the AI truly needs.
       - Break reads apart from reasoning. Separate data fetching from decision-making when you design your MCP tools. A simple findCustomerId() tool that returns just an ID uses minimal tokens—and might not even need to be an MCP tool at all, if a simple API call will do. Then getCustomerDetailsForRefund(id) pulls only the specific fields needed for that decision. This pattern keeps context focused and makes it obvious when someone’s trying to fetch everything.
       - Dashboard the waste. The best argument against data hoarding is showing the waste. Track the ratio of tokens fetched versus tokens used and display them in an “information radiator” style dashboard that everyone can see. When a tool pulls 5,000 tokens but the AI only references 200 in its answer, everyone can see the problem. Once developers see they’re paying for tokens they never use, they get very interested in fixing it.

      Some useful tips to keep MCP tools straightforward and prevent data blobs that are too big:
      - use verbs, not nouns, for MCP tool names (focuses on the action, not the object you act upon)
      - think/talk about n:: data minimisation
      - break it up: keep reads separate from reasoning steps; keeps everything focused on the specific context
      - dashboard the ratio of tokens fetched versus tokens used in answers; lopsided ratios indicate you're overfeeding the system
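The "dashboard the waste" tip is easy to make concrete. A minimal sketch, with illustrative tool names and token counts (real code would use a tokenizer and the team's own logging, not hardcoded numbers):

```python
# Track tokens fetched by each MCP tool versus tokens the model
# actually referenced in its answer, then report the per-tool ratio.

from collections import defaultdict

class WasteTracker:
    def __init__(self):
        self.fetched = defaultdict(int)
        self.used = defaultdict(int)

    def record(self, tool: str, fetched_tokens: int, used_tokens: int):
        self.fetched[tool] += fetched_tokens
        self.used[tool] += used_tokens

    def report(self) -> dict:
        """Per-tool fraction of fetched tokens the model actually used."""
        return {
            tool: round(self.used[tool] / self.fetched[tool], 2)
            for tool in self.fetched
        }

tracker = WasteTracker()
# The 5,000-fetched / 200-used case from the quoted text:
tracker.record("getCustomerDetailsForRefund", fetched_tokens=5000, used_tokens=200)
tracker.record("findCustomerId", fetched_tokens=20, used_tokens=20)
ratios = tracker.report()
```

A tool sitting at a ratio of 0.04 on the dashboard is the "information radiator" argument made visible: 96% of what it fetches is never used.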

    2. In an extreme case of data hoarding infecting an entire company, you might discover that every team in your organization is building their own blob. Support has one version of customer data, sales has another, product has a third. The same customer looks completely different depending on which AI assistant you ask. New teams come along, see what appears to be working, and copy the pattern. Now you’ve got data hoarding as organizational culture.

      MCP data hoarding leads to parallel data households (silos), exactly the type of thing we spent a lot of energy on reducing

    3. data hoarding trap find themselves violating the principle of least privilege: Applications should have access to the data they need, but no more

      n:: Principle of least privilege: applications should only have access to the data they need, and never more. Data hoarding in MCPs goes beyond that.

    4. There’s also a security dimension to data hoarding that teams often miss. Every piece of data you expose through an MCP tool is a potential vulnerability. If an attacker finds an unprotected endpoint, they can pull everything that tool provides. If you’re hoarding data, that’s your entire customer database instead of just the three fields actually needed for the task.

      MCPs that are overloaded w data are new attack surfaces

    5. MCP can remove the friction that comes from those trade-offs by letting us avoid having to make those decisions at all.

      MCP is meant to abstract away how access to resources is created. In practice it gets used to abstract away any decision on which data to provide or not. That's the trap.

    6. The team ended up with a data architecture that buried the signal in noise. That additional load put stress on the AI to dig out that signal, leading to serious potential long-term problems. But they didn’t realize it yet, because the AI kept producing reasonable-looking answers. As they added more data sources over the following weeks, the AI started taking longer to respond. Hallucinations crept in that they couldn’t track down to any specific data source. What had been a really valuable tool became a bear to maintain.

      Having a clear data architecture for your use case is needed. Cf. [[Eindelijk weet ik wat ThetaOS is een Life Lens System (LLS)]] wrt the number of data tables (152 now I think), and how it grew over time, deciding on each table added.

    7. I’ve been watching teams adopt MCP over the past year, and I’m seeing a disturbing pattern. Developers are using MCP to quickly connect their AI assistants to every data source they can find—customer databases, support tickets, internal APIs, document stores—and dumping it all into the AI’s context.

      Dev Andrew Stallman warns against dumping all-the-data into an AI application through MCP. Calls it hoarding.

    1. The real power of MCP emerges when multiple servers work together, combining their specialized capabilities through a unified interface.

      Combining multiple MCP servers creates a more capable set-up.

    2. Prompts are structured templates that define expected inputs and interaction patterns. They are user-controlled, requiring explicit invocation rather than automatic triggering. Prompts can be context-aware, referencing available resources and tools to create comprehensive workflows. Similar to resources, prompts support parameter completion to help users discover valid argument values.

      Prompts are user-invoked (hey AgentX, go do..) and may contain, besides instructions, also references to resources and tools. So a prompt may be a full workflow.

    3. Prompts provide reusable templates. They allow MCP server authors to provide parameterized prompts for a domain, or showcase how to best use the MCP server.

      mcp prompts are templates for interaction

    4. Resources support two discovery patterns:
       - Direct Resources - fixed URIs that point to specific data. Example: calendar://events/2024 - returns calendar availability for 2024
       - Resource Templates - dynamic URIs with parameters for flexible queries. Example: travel://activities/{city}/{category} - returns activities by city and category; travel://activities/barcelona/museums - returns all museums in Barcelona
       Resource Templates include metadata such as title, description, and expected MIME type, making them discoverable and self-documenting.

      Resources can be invoked w fixed and dynamic URIs
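How a client could resolve a dynamic URI against a template can be sketched with a regex. The URIs come from the quoted example; the matching code itself is an assumption about one possible implementation, not the MCP SDK.

```python
# Turn a URI template like travel://activities/{city}/{category} into
# a regex with named groups and extract the parameters from a concrete
# resource URI.

import re

def match_template(template: str, uri: str):
    """Return the template's parameters if the URI matches, else None."""
    # {name} placeholders become named capture groups.
    regex = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    m = re.fullmatch(regex, uri)
    return m.groupdict() if m else None

params = match_template(
    "travel://activities/{city}/{category}",
    "travel://activities/barcelona/museums",
)  # {"city": "barcelona", "category": "museums"}
```

A direct resource like calendar://events/2024 is just the degenerate case: a template with no placeholders that only matches itself.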

    5. Resources expose data from files, APIs, databases, or any other source that an AI needs to understand context. Applications can access this information directly and decide how to use it - whether that’s selecting relevant portions, searching with embeddings, or passing it all to the model.

      resources are just that, read only material to invoke. API, filesystem, databases etc.

    6. Each tool performs a single operation with clearly defined inputs and outputs. Tools may require user consent prior to execution, helping to ensure users maintain control over actions taken by a model.

      Almost like a function call.

    7. Tools are model-controlled, meaning AI models can discover and invoke them automatically. However, MCP emphasizes human oversight through several mechanisms. For trust and safety, applications can implement user control through various mechanisms, such as:
       - Displaying available tools in the UI, enabling users to define whether a tool should be made available in specific interactions
       - Approval dialogs for individual tool executions
       - Permission settings for pre-approving certain safe operations
       - Activity logs that show all tool executions with their results

      Tools are available to models, but human-in-the-loop options exist: approval, permission settings, logs
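Those oversight mechanisms can be sketched in one small wrapper: approval before execution, a pre-approved allowlist, and an activity log. All names below are illustrative, not part of the MCP spec.

```python
# A wrapper that requires user consent before a tool runs, unless the
# tool is on a pre-approved allowlist, and logs every outcome.

def guarded(tool, name, pre_approved=frozenset(), ask=input, log=None):
    """Wrap a tool so it needs approval unless pre-approved as safe."""
    log = log if log is not None else []

    def wrapper(*args, **kwargs):
        if name not in pre_approved and ask(f"Run {name}? [y/N] ") != "y":
            log.append((name, "denied"))  # activity log entry
            return None
        result = tool(*args, **kwargs)
        log.append((name, "ran"))
        return result

    wrapper.log = log
    return wrapper

# Simulated approval dialog that always answers "y".
log: list = []
delete_file = guarded(
    lambda path: f"deleted {path}", "delete_file",
    ask=lambda prompt: "y", log=log,
)
outcome = delete_file("/tmp/report.txt")
```

Passing `pre_approved=frozenset({"read_file"})` would correspond to the "permission settings for pre-approving certain safe operations" mechanism: those tools skip the dialog entirely.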

    8. Servers provide functionality through three building blocks:

      n:: MCP servers typically provide three types of building blocks: a) tools that an LLM can call, b) resources that are read-only to an LLM, c) prompts, prewritten instruction templates, i.e. agent descriptions, that outline specific tools and resources to use. So for agentic stuff you'd have an MCP server providing templates which in turn list tools and resources.
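A stdlib-only toy (explicitly not the real MCP SDK; every name is made up) can make the three building blocks and their relationship concrete:

```python
# Toy registry for the three MCP building blocks: tools an LLM can
# call, read-only resources, and prompt templates that tie the two
# together into a workflow.

class ToyServer:
    def __init__(self):
        self.tools: dict = {}      # a) model-invocable actions
        self.resources: dict = {}  # b) read-only context
        self.prompts: dict = {}    # c) prewritten instruction templates

    def tool(self, fn):
        self.tools[fn.__name__] = fn
        return fn

    def resource(self, uri, data):
        self.resources[uri] = data

    def prompt(self, name, template):
        self.prompts[name] = template

server = ToyServer()

@server.tool
def add_event(title: str) -> str:
    return f"added {title}"

server.resource("calendar://events/2024", ["standup", "review"])
# The prompt references both a resource and a tool, sketching the
# "template that lists tools and resources" idea from the note.
server.prompt("plan_week", "Read calendar://events/2024, then call add_event for gaps.")

result = server.tools["add_event"]("retro")
```

The asymmetry matches the note: tools are callable, resources are only readable, and prompts are inert text that directs the model toward both.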

    9. Visual Studio Code acts as an MCP host. When Visual Studio Code establishes a connection to an MCP server, such as the Sentry MCP server, the Visual Studio Code runtime instantiates an MCP client object that maintains the connection to the Sentry MCP server.

      VS Code acts as MCP Host (in their AI toolkit extension I think). You could connect it to the Obsidian MCP server plugin then?

    10. The key participants in the MCP architecture are:
        - MCP Host: The AI application that coordinates and manages one or multiple MCP clients
        - MCP Client: A component that maintains a connection to an MCP server and obtains context from an MCP server for the MCP host to use
        - MCP Server: A program that provides context to MCP clients

      The MCP architecture has three pieces: the host (the application, AI or not, that coordinates the interaction with MCP clients); the MCP client, which interacts with a single server; and the MCP server, which provides the context, i.e. abstracts the access to other sources (filesystem, database, API etc). A server can have one or multiple clients it serves.

  5. Nov 2025
  6. Oct 2025
  7. Dec 2024
    1. https://web.archive.org/web/20241202060131/https://www.forbes.com/sites/janakirammsv/2024/11/30/why-anthropics-model-context-protocol-is-a-big-step-in-the-evolution-of-ai-agents/

      Anthropic proposes the 'Model Context Protocol' (MCP) as a standard for connecting local/external info sources to LLMs and agents, to make AI tools more context-aware. Article says MCP is open source. Idea is to attach an MCP server to every source and have that interact over MCP with the MCP client attached to a model and/or tools.

      Anthropic is the org of Claude model.

  8. Feb 2021