This morning, my timeline was buzzing with discussions about "MCP." While the name sounded vaguely familiar, I'd never actually dug into what it was. So, in the time it takes to boil an egg, I had a quick chat with Kimi.
Q1: What exactly is MCP?
Kimi told me what it stood for, and then went through its core functionality, workflow, advantages, and applications.
=> My thinking: Sounded about right, roughly aligned with my prior (vague) understanding. So, naturally, I kept digging.
Q2: So, am I right in thinking MCP is a protocol? If I'm building a product and want it to support MCP, it means it can be invoked by AI, right?
Kimi confirmed I was basically on the right track, then explained why supporting the MCP protocol matters, gave a brief rundown of how to make a product MCP-compatible, and offered a simple example: a file management tool that accepts MCP requests to read a specific file's contents, letting AI analyze or summarize it and then save the result as a new file.
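Kimi's file-tool example can be sketched as two pieces: a machine-readable tool declaration the AI can inspect, and a handler the application implements behind it. Everything below — the tool name `read_file`, the schema shape, the handler — is my own hypothetical illustration of the idea, not the actual MCP wire format or SDK:

```python
from pathlib import Path

# Hypothetical declaration of an MCP-style tool: a structured description
# the AI reads to learn what the tool does and what arguments it takes.
# (Illustrative shape only, not the real MCP spec.)
READ_FILE_TOOL = {
    "name": "read_file",
    "description": "Read a file's text so the model can analyze or summarize it.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string", "description": "Path to the file"}},
        "required": ["path"],
    },
}

def handle_read_file(args: dict) -> str:
    """Handler the application implements behind the declared tool."""
    return Path(args["path"]).read_text(encoding="utf-8")

# The AI sends a structured call; the app routes it to the handler.
Path("demo.txt").write_text("hello MCP", encoding="utf-8")
result = handle_read_file({"path": "demo.txt"})
print(result)  # hello MCP
```

The point of the declaration half is that the model never sees your code, only the schema, which is what makes the interface "standardized" from its side.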
=> My thinking: But is every product suitable for this? Hence, I kept probing.
Q3: What are some applications that are a good fit for integrating with MCP?
Kimi gave me some examples, like AI-driven development tools (IDEs, etc.), enterprise automation management tools (project management, CRM systems, etc.), data processing and analysis tools, smart customer service and chatbots, content generation and creative tools, research and academic tools, personal productivity tools, and more.
=> My thinking: These examples felt pretty high-level and generic. They just listed categories of products that could use MCP, without really explaining why they'd be a good fit, or how they'd integrate.
=> But then it suddenly hit me – isn't this integration basically just turning existing product features into standardized interfaces for AI to call? How is that different from providing an API? So, naturally, I had to ask…
Q4: But from another angle, isn't this just exposing your application's API to external services?
Kimi responded that, in essence, yes, you could see it as a special kind of API. However, MCP is more than just a simple extension of traditional APIs; it's specifically tailored for the interaction needs of AI models. Then, Kimi broke down the differences between MCP and traditional APIs, covering aspects like standardization and universality, context management, security and trust mechanisms, dynamic interaction and real-time feedback, and more. Kimi then summarized MCP's core value: a protocol purpose-built for the AI era.
=> My thinking: These explanations were still quite theoretical. I wasn't really feeling the tangible difference from traditional APIs. So, time for examples.
Q5: Could you give me a few examples to help me understand the contrast between MCP and traditional APIs?
Kimi started giving examples, comparing the limitations of traditional APIs in several scenarios with what could be achieved using the MCP protocol.
=> My thinking: From these simple examples, it basically boils down to "understanding" user input and then translating that into requests similar to "traditional API" calls. This sounded very similar to another concept I'd come across: "Function Calling." So, the key question became: who's responsible for implementing this "understanding" step?
Q6: Is this "understanding" part something the application itself needs to provide?
Kimi confirmed that yes, this capability needs to be provided by the application itself and isn't handled by the MCP protocol. Kimi elaborated on the relationship between MCP and the application, and suggested some approaches for implementing this "understanding" at the application level.
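The division of labor Kimi described can be sketched as follows: the model handles the "understanding" step by turning free-form user text into a structured call, and the application only has to dispatch that call to the right function. The registry, function names, and call format here are made up for illustration:

```python
import json

# Application-side tool registry: plain functions, keyed by name.
def create_task(title: str, due: str) -> dict:
    return {"created": title, "due": due}

TOOLS = {"create_task": create_task}

def dispatch(tool_call_json: str) -> dict:
    """Route a structured call (produced by the model's 'understanding'
    of the user's request) to the matching application function."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# The model, not the app, converts "remind me to ship v2 by Friday"
# into this structured form; the app just executes it.
out = dispatch('{"name": "create_task", "arguments": {"title": "ship v2", "due": "Friday"}}')
print(out)  # {'created': 'ship v2', 'due': 'Friday'}
```

Seen this way, the app's obligation is modest: describe its functions clearly and execute well-formed calls, which is also why this feels so close to function calling.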
=> My thinking: At this point, I had a pretty decent grasp of the MCP concept. Not mastery by any means, but if I were to start working on an MCP-compatible application, I’d at least have a foundation. By now, the eggs were just about boiled. I turned off the heat, let them sit for a bit, and figured I’d squeeze in one more question about implementation.
Q7: If I already have an application with standard REST APIs, how would I go about making it compatible with MCP? Could you answer with concrete examples?
Kimi provided these actionable suggestions: (1) Create an MCP server for your application to act as a communication bridge between your app and AI models; (2) Wrap your REST APIs as MCP tool functions; (3) Handle context information.
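Steps (1) and (2) can be sketched in miniature. The endpoint, tool name, and decorator below are hypothetical stand-ins; a real integration would build on an actual MCP SDK and make real HTTP requests rather than the stub shown here:

```python
def rest_get_order(order_id: str) -> dict:
    """Stand-in for GET /orders/{id} on the existing REST API."""
    return {"id": order_id, "status": "shipped"}

TOOL_REGISTRY = {}

def mcp_tool(name: str, description: str):
    """Hypothetical decorator: register a function as an AI-callable
    tool, with a description the model reads when deciding to invoke it."""
    def wrap(fn):
        TOOL_REGISTRY[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@mcp_tool("get_order", "Look up an order's current status by its ID.")
def get_order(order_id: str) -> dict:
    # Step (3): a real MCP server would also thread conversation context
    # (auth, prior results) through to the underlying API call.
    return rest_get_order(order_id)

result = TOOL_REGISTRY["get_order"]["fn"](order_id="A-42")
print(result)  # {'id': 'A-42', 'status': 'shipped'}
```

The wrapping layer is thin on purpose: the REST API keeps doing the actual work, and the MCP server's job is mostly translation and bookkeeping.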
==========
The whole process took about 6 minutes, using Kimi's standard model (not the long-thinking one). What surprised me wasn't really the content of the answers, but how my learning behavior is subtly changing.
Each of these questions could, in principle, be answered by searching for the right keywords online. But some are hard to get direct answers to through search alone.
- For example, questions like Q2 and Q4, where you're trying to use analogy for quick conceptual understanding. Only AI can give you a clear "yes" or "no," and then explain why.
- And then there are follow-up questions like Q6, which are heavily context-dependent. Certain words only make sense within the specific context of the conversation.
- Also, without the information intake from Q1 to Q6, I wouldn't have been able to directly ask Q7 about implementation details. Even if AI had answered, I wouldn't have had the context to validate the answer or develop new questions from it.
Learning is a never-ending journey, and AI is really becoming quite the teacher!