Hey lobsters. @jspahrsummers and I created this at Anthropic. I hope I didn’t cross any lines of self-promotion here, but I am super excited about this and wanted to share it.
We will work on remote connectivity. The current “implementation” is focused on local-only, but we hope the underlying primitives will hold true for remote connections.
My understanding is that Cohere Connectors and others aim to solve something very different. MCP is trying to solve the NxM problem of applications to context providers and build in the open. Anybody is free to implement it both on the client as well as on the server side. It is also focused on general interaction primitives rather than pure data providers (hence the separation of prompts and resources for example). As such it’s fundamentally different from proprietary APIs that are bound to a specific product surface. I personally feel LSP is a much closer related concept and something that inspired us.
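To make that separation concrete, here is roughly what the two primitives look like on the wire. The method pair (`resources/list` vs. `prompts/list`) comes from the spec, but the payload contents below are invented examples, not output from a real server:

```python
import json

# Illustrative MCP responses (payload contents are made up).
# A resource is addressable data the server can hand to the client.
resources_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "resources": [
            {
                "uri": "file:///logs/app.log",
                "name": "Application log",
                "mimeType": "text/plain",
            }
        ]
    },
}

# A prompt is a reusable interaction template the user can invoke --
# not just raw data, hence the separate primitive.
prompts_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "prompts": [
            {
                "name": "summarize_logs",
                "description": "Summarize recent entries from a log resource",
            }
        ]
    },
}

# Both are ordinary JSON-RPC 2.0 responses; only the result shape differs.
for reply in (resources_result, prompts_result):
    print(json.dumps(reply, indent=2))
```

The point is that a pure data-provider API only needs the first shape; modeling prompts as their own primitive is what lets the protocol describe interactions rather than just content.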
Why JSON Schema, JSON-RPC, and stdio? I understand that JSON is the lowest common denominator (somewhat pejorative!) for data interchange with all kinds of clients. Was making the transport work over both HTTP and stdio that important? Why couldn’t local agents just use HTTP? Surely there must be a schema-based approach to specifying the entire protocol.
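For reference, the stdio transport frames messages as one JSON-RPC 2.0 object per line, so a minimal client needs little more than serialization and a pipe. A sketch (`initialize` is a real method in the protocol; the params here are abbreviated):

```python
import json

def encode_message(method: str, params: dict, msg_id: int) -> str:
    """Frame a JSON-RPC 2.0 request for the stdio transport:
    one JSON object per line."""
    request = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    return json.dumps(request) + "\n"

def decode_message(line: str) -> dict:
    """Parse one newline-delimited JSON-RPC message."""
    return json.loads(line)

# Round-trip an initialize request. A real client would write `wire`
# to the spawned server process's stdin and read replies from its stdout.
wire = encode_message("initialize", {"protocolVersion": "2024-11-05"}, 1)
message = decode_message(wire)
print(message["method"])  # -> initialize
```

The same framing works unchanged over any byte stream, which is presumably part of the appeal: the primitives don't have to change when the transport does.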
I work at Whimsical. We have a GPT that people use to create diagrams and flowcharts, and I’d love to build something similar for Claude. The Model Context Protocol is very cool but it seems like it currently requires every context provider to write a server that will run locally on a user’s machine.
It would be great if there was a server-side protocol that we could implement in our existing API which wouldn’t require writing a server. There are two reasons for this:
Writing, maintaining, and distributing an MCP server is more work than implementing new API endpoints in our existing app.
More importantly, it takes much less trust and effort for users to configure their LLM to point at a web service than to download and run a server that executes code locally.
Do you have plans for something like this in the future?
We are working on thinking through “remote MCP”, which would solve this (as I understand it). We started with local MCP because we wanted to see if the concepts hold up while we work on hard parts of remote transports, particularly getting authentication right. We will tackle that aspect next and certainly recognize that this would unlock a lot of use cases.
(Also a good part of MCP comes from our own internal need and usage, where we can easily distribute servers)
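For context on how local servers are wired up today: the client is told how to spawn a server process, e.g. in Claude Desktop's `claude_desktop_config.json` (the server name and script path below are hypothetical). A remote variant would presumably replace the command with a URL plus credentials:

```json
{
  "mcpServers": {
    "whimsical": {
      "command": "node",
      "args": ["./whimsical-mcp-server.js"]
    }
  }
}
```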
Let us know if you have any questions.
Congrats on launching. There are a couple of weird things about this protocol launch:
When faced with a similar need in my hobby project, https://chatcraft.org, I found WebRTC to be a much more suitable transport layer for LLMs, as it can work both for local native apps and over the network securely. https://gist.github.com/tarasglek/ff3353169d94e82cbd91218ac43188d6 might be sufficient to grasp my approach.
Were you inspired by LSP?
Yes.
I wrote a related post about how Zed incorporates this, which includes a little demo video of how it works: https://zed.dev/blog/mcp