
Had an almost identical experience. Even if you don’t need anything with auth, no one has yet made an MCP server that wasn’t ultimately worse than, or the same as, a CLI, but with a lot more song and dance. Security is also a bit of a joke when half the time it’s installing Docker and phoning home. I wanted to like MCP and vend out remote MCP, but this spec is not ready.




I'm still confused as to how MCP is better than, say, a FastAPI endpoint and its generated Swagger docs.

In the cases I’ve tried building/integrating they are the same thing…

I think the only difference is the statefulness of the request. HTTP is stateless, but MCP has state? Is this right?

I haven’t seen many use cases for how to use the state effectively, but I thought that was the main difference over a plain REST API.
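One way to picture the statefulness question above is the MCP initialize handshake. The sketch below is a hedged illustration, loosely following the JSON-RPC message shapes in the spec (the version string, field names, and error code are assumptions for illustration): the server keeps per-session state after `initialize`, whereas a stateless REST endpoint would have no equivalent of that check.

```python
def initialize_request(request_id: int) -> dict:
    # The first message a client sends; the reply pins down the protocol
    # version and capabilities for the rest of the session.
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # example version string
            "capabilities": {},
            "clientInfo": {"name": "demo-client", "version": "0.1"},
        },
    }

class Session:
    """Minimal illustration of the state an MCP server holds per client."""

    def __init__(self):
        self.initialized = False
        self.client_info = None

    def handle(self, msg: dict) -> dict:
        if msg["method"] == "initialize":
            # The state lives on the server, not in each request.
            self.initialized = True
            self.client_info = msg["params"]["clientInfo"]
            return {"jsonrpc": "2.0", "id": msg["id"],
                    "result": {"protocolVersion": "2025-03-26",
                               "capabilities": {"tools": {}}}}
        if not self.initialized:
            # A plain REST endpoint has no equivalent of this gate.
            return {"jsonrpc": "2.0", "id": msg.get("id"),
                    "error": {"code": -32002, "message": "not initialized"}}
        return {"jsonrpc": "2.0", "id": msg.get("id"), "result": {}}

session = Session()
reply = session.handle(initialize_request(1))
```

Whether that session state buys you anything over a stateless API is exactly the question.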


My understanding is that it can upgrade to an SSE connection, i.e. a persistent stream. Also, for interprocess communication you usually prefer a persistent connection. All of that reduces communication overhead. The rationale is also that an AI agent may trigger more fine-grained calls than a normal program or a UI, as it needs to collect information to observe the situation and decide its next move (a lot more GET requests than usual, for instance).
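For concreteness, SSE is just a long-lived response body carrying `data:` lines separated by blank lines, so many events arrive over one connection. A minimal sketch of that framing (simplified: it ignores `event:`, `id:`, and retry fields):

```python
def parse_sse(stream: str) -> list[str]:
    """Collect the data payloads from a raw text/event-stream body."""
    events, buf = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            buf.append(line[len("data:"):].strip())
        elif line == "" and buf:
            # A blank line terminates one event.
            events.append("\n".join(buf))
            buf = []
    return events

body = 'data: {"msg": 1}\n\ndata: {"msg": 2}\n\n'
events = parse_sse(body)  # two separate events from one connection
```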

This seems like the solution getting ahead of the problem. A series of API requests over HTTP can easily use a persistent connection and will practically default to that with modern client and server implementations. A claim that a more complex approach is needed for efficiency should be accompanied by evidence that the simple approach was problematic.

MCP can use SSE to support notifications (since the protocol embeds a lot of state, you need to be able to tell the client that the state has changed), elicitation (the MCP server asking the user to provide some additional information to complete a tool call) and will likely use it to support long-running tool calls.
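As a hedged sketch of the two server-to-client flows mentioned above, using method names from my reading of the MCP spec (treat the exact names and fields as assumptions): a notification has no `id` and expects no reply, while an elicitation is a request the server makes of the client mid-tool-call.

```python
import json

def resource_updated_notification(uri: str) -> str:
    # No "id" field: the client must not reply. This rides the SSE stream
    # so the server can push it whenever its state changes.
    msg = {
        "jsonrpc": "2.0",
        "method": "notifications/resources/updated",
        "params": {"uri": uri},
    }
    return json.dumps(msg)

def elicitation_request(request_id: int, prompt: str) -> str:
    # Elicitation inverts the usual flow: the server issues a request and
    # the client answers after asking the user.
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "elicitation/create",
        "params": {
            "message": prompt,
            "requestedSchema": {"type": "object",
                                "properties": {"value": {"type": "string"}}},
        },
    }
    return json.dumps(msg)
```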

Many of these features have unfortunately been specified in the protocol before clear needs for them have been described in detail, and before other alternative approaches to solving the same problems were considered.


I couldn't agree more: downloading the OpenAPI doc for an API and parsing it is more than enough to implement the core of MCP. But sadly the buzzword completely took off, and, for instance, all participants in my trainings systematically ask for MCP.
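The OpenAPI-to-tools idea above can be sketched in a few lines. This is a hedged illustration, not a complete converter: the spec dict is a made-up two-operation API, and real OpenAPI documents also carry request bodies, responses, and `$ref`s that this skips.

```python
openapi = {
    "paths": {
        "/users/{id}": {
            "get": {
                "operationId": "getUser",
                "summary": "Fetch a user by id",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
            }
        },
        "/users": {
            "post": {"operationId": "createUser", "summary": "Create a user"}
        },
    }
}

def tools_from_openapi(spec: dict) -> list[dict]:
    """Turn each OpenAPI operation into an MCP-style tool description."""
    tools = []
    for path, ops in spec["paths"].items():
        for verb, op in ops.items():
            props = {p["name"]: p.get("schema", {})
                     for p in op.get("parameters", [])}
            tools.append({
                "name": op["operationId"],
                "description": f"{verb.upper()} {path}: {op.get('summary', '')}",
                "inputSchema": {"type": "object", "properties": props},
            })
    return tools

tools = tools_from_openapi(openapi)
```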

Using SSE was deemed far too inconvenient in theory, despite that being how nearly all of the MCP servers that gained traction actually worked, so the spec was switched to something better in theory but very inconvenient in practice:

https://blog.fka.dev/blog/2025-06-06-why-mcp-deprecated-sse-...

There are a million "why don't you _just_ X?" hypothetical responses to all the real issues people have with streamable HTTP as implemented in the spec, but you can't argue your way into a level of ecosystem support that doesn't exist. The exact same screwup happened with OAuth, so we can see who is running the show and how they think.

It's hard to tell if there is some material business plan Anthropic has with these changes or if the people in charge of defining the spec are just kind of out of touch, have non-technical bosses, and have managed to politically disincentivize other engineers from pointing out basic realities.


I use it basically as a cache: I create local artifacts that are fast to filter/query and easy to paginate on the client (which is to say, in the MCP server).

MCP doesn't serve any technical purpose. It exists for business reasons.

MCP provides a convenient packaging for tools, and generally workflows, for LLM clients.

You can debate all day whether bringing your own tools is a good thing vs giving the LLM a generic shell tool and an API doc and letting it run curls. I like tools because it brings reproducibility.

MCP is really just a JSON-RPC spec. JSON-RPC can take place over a variety of transports and under a variety of auth mechanisms; MCP doesn't need to spec a transport or auth mechanism.
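A quick sketch of the transport-agnosticism point: the same JSON-RPC payload works anywhere. Over stdio the common local convention is one JSON object per line on the child's stdin; over HTTP it is simply the POST body. (Method and field names below follow the MCP tool-call shape as I understand it; treat them as assumptions.)

```python
import json

call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "search", "arguments": {"query": "hello"}},
}

stdio_frame = json.dumps(call) + "\n"   # write to a local server's stdin
http_body = json.dumps(call).encode()   # POST to a remote server's endpoint

# Either way, the receiving side decodes the identical message.
assert json.loads(stdio_frame) == json.loads(http_body)
```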

I totally agree with everybody that most MCP clients are half-assed and remote MCP is not well supported, but that's a business problem.

Every LLM tool today either runs locally (Cursor, Zed, IDEs, etc.), so it can run MCP servers as local processes with no auth, or is run by an LLM provider where interoperability is not a business priority. So the remote MCP story has not been fleshed out.



