MCP was donated to the new Agentic AI Foundation at the start of December. Skills were promoted to an “open format” on December 18th.
MCP as protocol now housed at 'agentic ai foundation' and Skills made into open format.
Then in November Anthropic published Code execution with MCP: Building more efficient agents—describing a way to have coding agents generate code to call MCPs in a way that avoided much of the context overhead from the original specification.
Still, Anthropic made MCP more approachable at the end of the year with Code execution with MCP. Meaning?
Anthropic themselves appeared to acknowledge this later in the year with their release of the brilliant Skills mechanism—see my October post Claude Skills are awesome, maybe a bigger deal than MCP. MCP involves web servers and complex JSON payloads. A Skill is a Markdown file in a folder, optionally accompanied by some executable scripts.
Suggestion that Anthropic's own Skills (a Markdown file, perhaps with some scripts) may be a bigger deal than their MCP.
The reason I think MCP may be a one-year wonder is the stratospheric growth of coding agents. It appears that the best possible tool for any situation is Bash—if your agent can run arbitrary shell commands, it can do anything that can be done by typing commands into a terminal. Since leaning heavily into Claude Code and friends myself I’ve hardly used MCP at all—I’ve found CLI tools like gh and libraries like Playwright to be better alternatives to the GitHub and Playwright MCPs.
Author thinks MCP may be a temporary phenomenon as a protocol, mostly because CLI tools like Claude Code don't need it. The last sentence, that CLI tools already exist that are better than the corresponding MCP servers, goes back to: why vibe-code / AI-the-things if there's perfectly good automation already around? I think MCP may still be useful locally for personal tools though. It helps structure what you want your AI to do.
If you define agents as LLM systems that can perform useful work via tool calls over multiple steps then agents are here and they are proving to be extraordinarily useful. The two breakout categories for agents have been for coding and for search.
Recognisable: AI agents as chunked / abstracted-away automation. This also creates the pitfall [[After claiming to redeploy 4,000 employees and automating their work with AI agents, Salesforce executives admit We were more confident about…. - The Times of India]] where regular automation is replaced by AI.
Most useful for search and for coding
It turned out that the real unlock of reasoning was in driving tools. Reasoning models with access to tools can plan out multi-step tasks, execute on them and continue to reason about the results such that they can update their plans to better achieve the desired goal. A notable result is that AI assisted search actually works now. Hooking up search engines to LLMs had questionable results before, but now I find even my more complex research questions can often be answered by GPT-5 Thinking in ChatGPT. Reasoning models are also exceptional at producing and debugging code. The reasoning trick means they can start with an error and step through many different layers of the codebase to find the root cause. I’ve found even the gnarliest of bugs can be diagnosed by a good reasoner with the ability to read and execute code against even large and complex codebases.
Reasoning models are useful for:
- running tools (MCP)
- search, which now actually works
- debugging/writing code
https://web.archive.org/web/20251230193244/https://www.docker.com/blog/private-mcp-catalogs-oci-composable-enterprise-ai/ via [[Lee Bryant p]] This article focuses on 'playlist'-style remixing of MCP servers for the enterprise. In light of the discussion within Digitale Fitheid, around [[Eindelijk weet ik wat ThetaOS is een Life Lens System (LLS)]] etc., I'm more interested in shareware-style distribution of MCP servers, p2p and in/between communities.
some practices that can make those discussions easier, by starting with constraints that even skeptical developers can see the value in:
- Build tools around verbs, not nouns. Create checkEligibility() or getRecentTickets() instead of getCustomer(). Verbs force you to think about specific actions and naturally limit scope.
- Talk about minimizing data needs. Before anyone creates an MCP tool, have a discussion about what the smallest piece of data they need to provide for the AI to do its job is and what experiments they can run to figure out what the AI truly needs.
- Break reads apart from reasoning. Separate data fetching from decision-making when you design your MCP tools. A simple findCustomerId() tool that returns just an ID uses minimal tokens—and might not even need to be an MCP tool at all, if a simple API call will do. Then getCustomerDetailsForRefund(id) pulls only the specific fields needed for that decision. This pattern keeps context focused and makes it obvious when someone’s trying to fetch everything.
- Dashboard the waste. The best argument against data hoarding is showing the waste. Track the ratio of tokens fetched versus tokens used and display them in an “information radiator” style dashboard that everyone can see. When a tool pulls 5,000 tokens but the AI only references 200 in its answer, everyone can see the problem. Once developers see they’re paying for tokens they never use, they get very interested in fixing it.
Some useful tips to keep MCPs straightforward and prevent data blobs that are too big (see the sketch below):
- use verbs, not nouns, for MCP tool names (focuses on the action, not the object you act upon)
- think/talk about n:: data minimisation
- break it up: keep reads separate from reasoning steps. Keeps everything focused on the specific context.
- dashboard the ratio of tokens fetched versus tokens used in answers. Lopsided ratios indicate you're overfeeding the system.
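A minimal sketch of what those tips could look like as MCP tools, using FastMCP from the official MCP Python SDK. The server name, the customer store and the lookup functions are hypothetical stand-ins, not taken from the quoted article.

```python
# Sketch: narrow, verb-oriented MCP tools for a hypothetical refund-support server.
# FastMCP is from the official MCP Python SDK; the customer store is a stand-in dict.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("refund-support")

# Hypothetical data source; the real one would be a database or API.
CUSTOMERS = {
    "ada@example.com": {
        "id": "C-1001",
        "account_status": "active",
        "refund_history": ["2025-03-02"],
        # ...many more fields the AI never needs to see
    },
}

@mcp.tool()
def find_customer_id(email: str) -> str:
    """Return only the customer ID for an email address."""
    customer = CUSTOMERS.get(email)
    return customer["id"] if customer else ""

@mcp.tool()
def get_customer_details_for_refund(customer_id: str) -> dict:
    """Return just the fields needed for a refund decision, not the whole record."""
    for record in CUSTOMERS.values():
        if record["id"] == customer_id:
            return {
                "account_status": record["account_status"],
                "refund_history": record["refund_history"],
            }
    return {}

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default
```

The point of splitting find_customer_id from get_customer_details_for_refund is that the first call stays tiny, and the second only ever returns the fields the refund decision actually needs.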
In an extreme case of data hoarding infecting an entire company, you might discover that every team in your organization is building their own blob. Support has one version of customer data, sales has another, product has a third. The same customer looks completely different depending on which AI assistant you ask. New teams come along, see what appears to be working, and copy the pattern. Now you’ve got data hoarding as organizational culture.
MCP data hoarding leads to parallel data silos, exactly the type of thing we spent a lot of energy on reducing.
[Teams that fall into the] data hoarding trap find themselves violating the principle of least privilege: Applications should have access to the data they need, but no more
n:: Principle of least privilege: applications should only have access to the data they need, and never more. Data hoarding in MCPs goes directly against that.
There’s also a security dimension to data hoarding that teams often miss. Every piece of data you expose through an MCP tool is a potential vulnerability. If an attacker finds an unprotected endpoint, they can pull everything that tool provides. If you’re hoarding data, that’s your entire customer database instead of just the three fields actually needed for the task.
MCPs that are overloaded with data are new attack surfaces.
MCP can remove the friction that comes from those trade-offs by letting us avoid having to make those decisions at all.
MCP is meant to abstract how access to resources is provided. In practice it gets used to abstract away any decision about which data to provide or not. That's the trap.
The team ended up with a data architecture that buried the signal in noise. That additional load put stress on the AI to dig out that signal, leading to serious potential long-term problems. But they didn’t realize it yet, because the AI kept producing reasonable-looking answers. As they added more data sources over the following weeks, the AI started taking longer to respond. Hallucinations crept in that they couldn’t track down to any specific data source. What had been a really valuable tool became a bear to maintain.
Having a clear data architecture for your use case is needed. Compare [[Eindelijk weet ik wat ThetaOS is een Life Lens System (LLS)]] w.r.t. the number of data tables (152 now I think), and how it grew over time, deciding on each table added.
I’ve been watching teams adopt MCP over the past year, and I’m seeing a disturbing pattern. Developers are using MCP to quickly connect their AI assistants to every data source they can find—customer databases, support tickets, internal APIs, document stores—and dumping it all into the AI’s context.
Developer Andrew Stellman warns against dumping all the data into an AI application through MCP. Calls it hoarding.
Configure the Extension
1. Install this extension from the VS Code marketplace
2. Open VS Code Settings (Cmd/Ctrl + ,)
3. Search for "Obsidian MCP"
Obsidian MCP not found in the VS Code marketplace.
The real power of MCP emerges when multiple servers work together, combining their specialized capabilities through a unified interface.
Combining multiple MCP servers creates a more capable set-up.
Prompts are structured templates that define expected inputs and interaction patterns. They are user-controlled, requiring explicit invocation rather than automatic triggering. Prompts can be context-aware, referencing available resources and tools to create comprehensive workflows. Similar to resources, prompts support parameter completion to help users discover valid argument values.
Prompts are user-invoked (hey AgentX, go do..) and may contain, in addition to instructions, references to resources and tools. So a prompt may be a full workflow.
Prompts Prompts provide reusable templates. They allow MCP server authors to provide parameterized prompts for a domain, or showcase how to best use the MCP server.
mcp prompts are templates for interaction
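A minimal sketch of a parameterized prompt, using FastMCP from the official MCP Python SDK; the server name, prompt name and wording are hypothetical.

```python
# Sketch: a reusable, user-invoked MCP prompt template (hypothetical trip-planner server).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("trip-planner")

@mcp.prompt()
def plan_city_day(city: str, interests: str) -> str:
    """Template the user invokes explicitly; the arguments are the prompt parameters."""
    return (
        f"Plan a one-day itinerary for {city} focused on {interests}. "
        "Use the available activity resources and booking tools, "
        "and explain the trade-offs of each suggestion."
    )

if __name__ == "__main__":
    mcp.run()
```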
Resources support two discovery patterns:
- Direct Resources - fixed URIs that point to specific data. Example: calendar://events/2024 - returns calendar availability for 2024
- Resource Templates - dynamic URIs with parameters for flexible queries. Example: travel://activities/{city}/{category} - returns activities by city and category; travel://activities/barcelona/museums - returns all museums in Barcelona
Resource Templates include metadata such as title, description, and expected MIME type, making them discoverable and self-documenting.
Resources can be addressed with fixed and dynamic URIs.
Resources expose data from files, APIs, databases, or any other source that an AI needs to understand context. Applications can access this information directly and decide how to use it - whether that’s selecting relevant portions, searching with embeddings, or passing it all to the model.
Resources are just that: read-only material to pull in as context. API, filesystem, databases etc.
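A minimal sketch of a direct resource and a resource template, using FastMCP from the official MCP Python SDK; the URIs mirror the calendar/travel examples quoted above, and the payloads are stand-ins.

```python
# Sketch: direct resource (fixed URI) and resource template (parameterized URI).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("travel-data")

@mcp.resource("calendar://events/2024")
def events_2024() -> str:
    """Direct resource: one fixed URI returning one specific piece of data."""
    return "2024 availability: ..."  # stand-in payload

@mcp.resource("travel://activities/{city}/{category}")
def activities(city: str, category: str) -> str:
    """Resource template: the URI parameters map onto the function arguments."""
    return f"{category} in {city}: ..."  # stand-in payload

if __name__ == "__main__":
    mcp.run()
```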
Each tool performs a single operation with clearly defined inputs and outputs. Tools may require user consent prior to execution, helping to ensure users maintain control over actions taken by a model.
Almost like a function call.
Tools are model-controlled, meaning AI models can discover and invoke them automatically. However, MCP emphasizes human oversight through several mechanisms. For trust and safety, applications can implement user control through various mechanisms, such as: Displaying available tools in the UI, enabling users to define whether a tool should be made available in specific interactions Approval dialogs for individual tool executions Permission settings for pre-approving certain safe operations Activity logs that show all tool executions with their results
Tools are available to models, but human-in-the-loop options exist: approval, permission settings, logs.
Servers provide functionality through three building blocks:
n:: MCP servers typically provide three types of building blocks: a) tools that an LLM can call, b) resources that are read-only to an LLM, c) prompts, prewritten instruction templates, i.e. agent descriptions, that outline specific tools and resources to use. So for agentic stuff you'd have an MCP server providing templates which in turn list tools and resources.
Visual Studio Code acts as an MCP host. When Visual Studio Code establishes a connection to an MCP server, such as the Sentry MCP server, the Visual Studio Code runtime instantiates an MCP client object that maintains the connection to the Sentry MCP server.
VS Code acts as MCP Host (in their AI toolkit extension I think). You could connect it to the Obsidian MCP server plugin then?
The key participants in the MCP architecture are: MCP Host: The AI application that coordinates and manages one or multiple MCP clients MCP Client: A component that maintains a connection to an MCP server and obtains context from an MCP server for the MCP host to use MCP Server: A program that provides context to MCP clients
The MCP architecture has 3 pieces: the host (the application, AI or not, that coordinates the interaction with MCP clients), the MCP client, which interacts with a single server, and the MCP server, which provides the context, i.e. abstracts the access to other sources (filesystem, database, API etc.). A server can serve one or multiple clients.
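A minimal sketch of the host/client side, using the client classes from the official MCP Python SDK; "python server.py" is a hypothetical stand-in for whatever local MCP server the host spawns.

```python
# Sketch: a host spawning one MCP server over stdio and listing what it offers.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local server command; swap in the server you actually run.
server_params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()          # discover tools
            resources = await session.list_resources()  # discover resources
            print([tool.name for tool in tools.tools])
            print([str(res.uri) for res in resources.resources])

asyncio.run(main())
```

One client object per server connection; a host that talks to several servers holds several of these sessions.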
MCP is an open-source protocol to connect AI applications to external systems.
MCP plugin for Obsidian, that works with Claude Code
I use an Obsidian vault and also the Obsidian MCP server from a chat client. With VS Code I could use MCP to get content into the vault more easily, but for refactoring notes Obsidian has the better UX.
MCP in Obsidian?
we didn’t need MCP at all. That’s because MCP isn’t a fundamental enabling technology. The amount of coverage it gets is frustrating.
Amazing that MCP is not fundamental.
Using MCP, AI applications like Claude or ChatGPT can connect to data sources (e.g. local files, databases), tools (e.g. search engines, calculators) and workflows (e.g. specialized prompts)
model context protocol (mcp)
https://web.archive.org/web/20241202062809/https://modelcontextprotocol.io/introduction
Anthropic's Model Context Protocol MCP documentation. Includes basic server exercises. The spec doesn't say much at first glance about how resources would actually be connected to an MCP server to serve as context.
https://web.archive.org/web/20241202062707/https://github.com/modelcontextprotocol
GitHub repositories for MCP by Anthropic. MIT licences at first glance.
Anthropic proposes the 'Model Context Protocol' MCP as a standard for how to connect local/external info sources to LLMs and agents, to make AI tools more context-aware. The article says MCP is open source. The idea is to attach an MCP server to every source and have that interact over MCP with the MCP client attached to a model and/or tools.
Anthropic is the org behind the Claude models.