How to Use MCP Prompts
In the Model Context Protocol (MCP), a prompt is a named, reusable, parameterized template that an MCP server exposes to a client. MCP defines three core server-side primitives: prompts, resources, and tools. Prompts help structure model interactions, resources provide context data, and tools let the model perform actions such as API calls or computations. MCP itself uses JSON-RPC 2.0 for these interactions.
The most important idea is this: an MCP prompt is not the same thing as “some text you send to the model.” In MCP, a prompt is a discoverable protocol object. The client can list available prompts, retrieve one by name, and pass arguments into it. The server then returns a structured set of messages for the client to use. In other words, MCP prompts are closer to workflow templates than to ad hoc strings.
Another important distinction is control flow. MCP prompts are designed to be user-controlled: users are expected to explicitly choose them, often through UI elements such as slash commands or menu items. By contrast, MCP tools are designed to be model-controlled: the model can discover and invoke them automatically when appropriate, with the client ideally keeping a human in the loop for safety.
The basic API shape
At initialization, a server declares which capabilities it supports: the prompts capability for prompts, completions for argument suggestions, and tools for executable functions. MCP also defines change notifications, such as notifications/prompts/list_changed and notifications/tools/list_changed, which tell the client when those lists change.
A simplified capability declaration looks like this:
{
  "capabilities": {
    "prompts": { "listChanged": true },
    "completions": {},
    "tools": { "listChanged": true }
  }
}
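When a capability advertises listChanged, the server can push an update at runtime. For example, a prompt-list change is announced with a JSON-RPC notification, which carries no id and expects no response:

{
  "jsonrpc": "2.0",
  "method": "notifications/prompts/list_changed"
}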
For prompts, the core requests are prompts/list and prompts/get. The first discovers what exists; the second retrieves one prompt and fills in its arguments. MCP also provides completion/complete so the server can suggest argument values while the user is typing.
Here is a simplified prompt flow:
// 1) Discover prompts
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "prompts/list"
}

// 2) Example response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "prompts": [
      {
        "name": "incident_triage",
        "title": "Triage an Incident",
        "description": "Guide the model through production incident analysis",
        "arguments": [
          { "name": "service", "required": true },
          { "name": "severity", "required": true }
        ]
      }
    ]
  }
}

// 3) Retrieve one prompt with arguments
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "prompts/get",
  "params": {
    "name": "incident_triage",
    "arguments": {
      "service": "payments",
      "severity": "high"
    }
  }
}

// 4) Example response
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "description": "Prompt for incident triage",
    "messages": [
      {
        "role": "assistant",
        "content": {
          "type": "text",
          "text": "You are helping investigate a production incident. Be concise and evidence-driven."
        }
      },
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Investigate the payments service. Severity is high."
        }
      }
    ]
  }
}
And if the client wants autocomplete while the user enters arguments:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "completion/complete",
  "params": {
    "ref": {
      "type": "ref/prompt",
      "name": "incident_triage"
    },
    "argument": {
      "name": "service",
      "value": "pay"
    }
  }
}
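A typical response carries candidate values for the argument; the suggestions below are illustrative:

{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "completion": {
      "values": ["payments", "payouts"],
      "total": 2,
      "hasMore": false
    }
  }
}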
The key thing to notice is that the prompt API stops at retrieval. MCP defines prompts/list and prompts/get, but not prompts/call. Execution belongs to tools, which are invoked through tools/call. That is the clearest protocol-level difference between a prompt and a tool.
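For contrast, here is a sketch of what execution looks like on the wire. The fetch_logs tool, its arguments, and the log text are invented for illustration; only the tools/call shape comes from the protocol:

// Invoke a tool
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "fetch_logs",
    "arguments": { "service": "payments", "window": "1h" }
  }
}

// Example result
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      { "type": "text", "text": "03:12:45 ERROR payment-gateway: upstream timeout" }
    ],
    "isError": false
  }
}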
What comes back from prompts/get?
What you get back is a messages array. Those messages can contain text, image, audio, or embedded resources. The roles inside prompt messages are user or assistant, which makes prompts suitable for plain instructions, few-shot examples, multimodal guidance, or templates that bundle instructions together with server-managed reference material.
That means a retrieved prompt can do several useful things. It can give the model a repeatable instruction pattern, provide examples of the style or structure you want, include domain context directly inside the prompt payload, and standardize how users start common workflows. A client can also expose these prompts as slash commands or other UI actions, since the protocol is designed for discoverability but does not force a specific interface.
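As one illustration of the embedded-resource case, a single prompt message can carry a server-managed document directly in the payload. The URI and runbook contents here are hypothetical:

{
  "role": "user",
  "content": {
    "type": "resource",
    "resource": {
      "uri": "file:///runbooks/payments.md",
      "mimeType": "text/markdown",
      "text": "# Payments runbook\n1. Check gateway health\n2. Check recent deploys"
    }
  }
}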
Just as importantly, MCP does not dictate exactly how an AI application must use the returned prompt with its own LLM pipeline. The architecture docs explicitly say MCP focuses on context exchange and does not dictate how AI applications use LLMs or manage the context they receive. So the client may insert the returned messages directly into the conversation, show them to the user first, combine them with other context, or feed them into a larger orchestration flow.
A concrete example
Imagine an MCP server for developer operations. It exposes:
- a prompt called incident_triage
- a resource containing the service runbook
- tools such as fetch_logs, check_deployments, and create_ticket
The user selects “Triage an Incident” from the UI, enters service=payments and severity=high, and the client calls prompts/get. The server returns structured messages such as: investigate carefully, use evidence, summarize likely causes, and include next steps. It may also embed the runbook as a resource inside the prompt. Then the model starts reasoning over that prompt and, if needed, calls tools like fetch_logs or check_deployments. This is very close to the pattern described in the MCP docs: a server can expose tools, a resource, and a prompt that demonstrates or structures how those tools should be used together.
That is why a prompt is so useful: it can encode how to begin a workflow, while tools provide what actions are available inside that workflow. The prompt gives the model a playbook; the tools give it hands and eyes.
What kinds of scenarios are prompts good for?
Prompts are especially good for repeatable, structured, user-initiated workflows. Good examples include code review, weekly report generation, travel planning, drafting standard documents, guided analysis, and domain-specific assistants where you want the user to start from a known template instead of free-form chatting. MCP’s own docs describe prompts as reusable templates, and the official blog shows them being used for workflow automation such as meal planning.
They are also useful when argument entry should feel polished. Since MCP supports completion/complete, the server can suggest valid values while the user types. That makes prompts feel more like a command palette or IDE action than a plain text box.
Prompts are less compelling when the task is highly open-ended and the model can already decide what to do from the user’s plain language request. In those cases, tools alone may be enough. But when you want consistency, discoverability, parameterization, and workflow guidance, prompts are a strong fit. That is exactly the niche they occupy in MCP.
What is the relationship between prompts and tools?
The cleanest way to think about it is:
Prompt = guidance and structure
Tool = execution and side effects
A prompt tells the model how to approach a task. A tool lets the model fetch data, compute something, or change something in the outside world. At the protocol level, prompts are listed and retrieved; tools are listed and called. Tools also define an inputSchema, may define an outputSchema, and return results through content and optionally structuredContent.
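For example, a tools/list entry for a hypothetical fetch_logs tool would declare its parameters as JSON Schema in inputSchema (the name, description, and fields are invented for illustration):

{
  "name": "fetch_logs",
  "description": "Fetch recent logs for a service",
  "inputSchema": {
    "type": "object",
    "properties": {
      "service": { "type": "string" },
      "window": { "type": "string", "description": "Time window, e.g. \"1h\"" }
    },
    "required": ["service"]
  }
}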
So a prompt is not a weaker version of a tool, and a tool is not a better version of a prompt. They solve different problems. Prompts help with intent framing, workflow standardization, and UX discoverability. Tools help with real actions, fresh data, and system integration. In a mature MCP app, you usually want both.
Should you use the prompt first, the tool first, or both?
MCP does not require one universal order. The protocol separates capabilities, but leaves orchestration decisions to the client application. In practice, there are four common patterns.
1. Prompt first, then tools
This is the most common guided-workflow pattern. The user explicitly selects a prompt, the client retrieves its messages, and then the model may call tools while following that template. This fits things like code review, incident response, or travel planning. It matches the “user-controlled prompt, model-controlled tool” design very well.
2. Tools only
This works when the model already understands the task from normal conversation and just needs capabilities. For example, if the user says “check the weather in Tokyo,” the model may only need a weather tool. No prompt template is necessary.
3. Prompt plus resources, with no tool call
Sometimes the goal is pure generation or analysis, not action. In that case a prompt can include embedded resources and ask the model to produce an answer without calling any tools.
4. Tool first, then a client-constructed prompt
This is also possible, but it is more of an application design choice than a special MCP prompt pattern. Since MCP does not dictate how clients manage LLM context, a client can fetch some data through tools or resources and then compose its own downstream prompting strategy.
My practical recommendation is simple: use prompts first when you want a reusable, guided entry point for a workflow; use tools first when the task is ad hoc and action-oriented; use both together when you want the best user experience and the strongest execution capability.
Summary
MCP prompts are discoverable, parameterized templates exposed by a server. You find them with prompts/list, fetch them with prompts/get, and optionally support user-friendly argument entry with completion/complete. The result is a structured messages payload that can contain text, examples, multimodal content, and embedded resources.
What retrieved prompts can do is shape the model’s behavior in a repeatable way: they can start workflows, standardize outputs, carry domain context, and make advanced flows easy for users to launch from a UI. But they do not execute actions by themselves, because MCP execution belongs to tools through tools/call.
So the short mental model is: prompts tell the model how to work; tools let the model do work. In real MCP systems, prompts and tools are complementary, not competing features. The best systems usually use prompts to give structure and tools to provide power.