What exactly is the difference between MCP and Agent Skill?

By Sarah Jenkins

MCP vs. Skills: A Deep Comparison of Two Extension Philosophies for AI Agents

When using AI agent tools (Claude Code, Cursor, Windsurf, etc.), you often run into two concepts:

  • MCP (Model Context Protocol)
  • Skill (Agent Skill)

Both look like ways to “extend AI capabilities,” but what exactly is the difference between them? Why do we need two separate mechanisms? And when should you use which one?

This article will thoroughly clarify these two concepts from three angles: design philosophy, technical architecture, and usage scenarios.

Distilled into One Sentence

Let’s start with a simple positioning:

MCP solves the “connection” problem: it lets AI access the external world.
Skills solve the “methodology” problem: they teach AI how to perform a certain class of tasks.

To use Anthropic’s official wording:

“MCP connects Claude to external services and data sources. Skills provide procedural knowledge—instructions for how to complete specific tasks or workflows.”

To use an analogy: MCP is AI’s “hands” (it can touch the outside world), while a Skill is AI’s “skill book” (it knows how to do something).

You need both working together: MCP lets AI connect to a database, while a Skill teaches AI how to analyze the query results.

MCP: The USB-C Interface for AI Applications

What Is MCP?

MCP (Model Context Protocol) is an open protocol released by Anthropic in November 2024 to standardize how AI applications interact with external systems.

The official analogy is the “USB-C interface for AI applications” — just as USB-C provides a universal way to connect many kinds of devices, MCP provides a universal way to connect many kinds of tools and data sources.

Key point: MCP is not exclusive to Claude.

It is an open protocol that, in theory, any AI application can implement. As of early 2025, it had already been adopted by multiple platforms:

  • Anthropic: Claude Desktop, Claude Code
  • OpenAI: ChatGPT, Agents SDK, Responses API
  • Google: Gemini SDK
  • Microsoft: Azure AI Services
  • Developer tools: Zed, Replit, Codeium, Sourcegraph

By February 2025, there were already more than 1,000 open-source MCP connectors.

MCP Architecture

MCP is based on the JSON-RPC 2.0 protocol and uses a Client-Host-Server architecture:

┌─────────────────────────────────────────────────────────┐
│                        Host                              │
│              (Claude Desktop / Cursor)                   │
│                                                          │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐      │
│  │   Client    │  │   Client    │  │   Client    │      │
│  │  (GitHub)   │  │ (Postgres)  │  │  (Sentry)   │      │
│  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘      │
└─────────┼────────────────┼────────────────┼─────────────┘
          │                │                │
          ▼                ▼                ▼
    ┌───────────┐    ┌───────────┐    ┌───────────┐
    │MCP Server │    │MCP Server │    │MCP Server │
    │ (GitHub)  │    │(Postgres) │    │ (Sentry)  │
    └───────────┘    └───────────┘    └───────────┘
  • Host: the application users interact with directly (Claude Desktop, Cursor, Windsurf)
  • Client: the component inside the Host application that manages communication with a specific Server
  • Server: the bridge to external systems (databases, APIs, local files, etc.)

The Three Core Primitives of MCP

MCP defines three kinds of primitives that a Server can expose:

1. Tools — Model-Controlled

Executable functions that AI can call to perform actions.

{
  "name": "query_database",
  "description": "Execute SQL query on the database",
  "parameters": {
    "type": "object",
    "properties": {
      "sql": { "type": "string" }
    }
  }
}

AI decides when to call these tools. For example, if a user asks, “What was this month’s revenue?”, AI may determine that it needs to query the database and call the query_database tool.
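On the wire, such a tool invocation is a plain JSON-RPC 2.0 request using MCP's tools/call method. A minimal sketch in Python (the method and parameter shape follow the MCP specification; the SQL payload is illustrative):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request for MCP's tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

request = make_tool_call(1, "query_database",
                         {"sql": "SELECT SUM(amount) FROM orders"})
print(json.dumps(request, indent=2))
```

The MCP Server answers with a matching JSON-RPC response carrying the tool's result, which the Client hands back to the LLM.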

2. Resources — Application-Controlled

Data sources that provide contextual information to AI.

{
  "uri": "file:///Users/project/README.md",
  "name": "Project README",
  "mimeType": "text/markdown"
}

The application controls when resources are loaded. Users can reference resources with @, similar to referencing a file.

3. Prompts — User-Controlled

Predefined prompt templates that help structure interaction with AI.

{
  "name": "code_review",
  "description": "Review code for bugs and security issues",
  "arguments": [
    { "name": "code", "required": true }
  ]
}

Users explicitly trigger these prompts, similar to a Slash Command.

The Relationship Between MCP and Function Calling

Many people ask: what is the difference between MCP and OpenAI’s Function Calling or Anthropic’s Tool Use?

Function Calling is an LLM capability — it converts natural language into a structured function call request. The LLM itself does not execute the function; it only tells you “which function should be called and what the arguments should be.”

MCP is the protocol layer on top of Function Calling — it standardizes “where the function is, how to call it, and how to discover it.”

The relationship looks like this:

User input → LLM (Function Calling) → "Need to call query_database"
                                           ↓
                                     MCP Protocol
                                           ↓
                                  Executed by MCP Server
                                           ↓
                                  Return result to the LLM

Function Calling solves “what to do,” while MCP solves “how to make it happen.”
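The division of labor can be sketched as a host loop: the LLM only emits a structured call request, and the host routes it through the MCP layer and feeds the result back. All the names here are illustrative stand-ins, not a real SDK:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolCall:
    server: str        # which MCP Server exposes the tool
    name: str          # tool name, e.g. "query_database"
    arguments: dict

@dataclass
class LLMDecision:
    text: Optional[str] = None           # final answer, when no tool is needed
    tool_call: Optional[ToolCall] = None

def run_turn(llm, mcp_clients, user_input):
    """One agent turn. Function Calling decides *what* to do;
    the MCP layer decides *how* it actually happens."""
    decision = llm(user_input)                        # LLM emits a structured request
    if decision.tool_call is None:
        return decision.text                          # no external access needed
    call = decision.tool_call
    client = mcp_clients[call.server]                 # route to the right MCP Server
    result = client(call.name, call.arguments)        # the Server executes the call
    followup = llm(f"{user_input}\n[tool result] {result}")
    return followup.text                              # LLM turns the result into prose
```

Note that the LLM never executes anything itself; it only produces the `ToolCall`, and the host decides where and how that call actually runs.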

MCP Transport Methods

MCP supports two main transport methods:

| Transport Method | Use Case | Notes |
| --- | --- | --- |
| Stdio | Local processes | The Server runs on the local machine; suitable for tools that need system-level access |
| HTTP/SSE | Remote services | The Server runs remotely; suitable for cloud services (GitHub, Sentry, Notion) |

Most cloud services use HTTP, while local scripts and custom tools use Stdio.
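In practice, the transport is chosen per server in the host application's configuration. A hypothetical entry in the claude_desktop_config.json style used by Claude Desktop (the server name and path are illustrative) wires up a Stdio server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/project"]
    }
  }
}
```

Remote HTTP/SSE servers are registered with a URL instead of a local command; the exact syntax varies by host application.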

The Cost of MCP

MCP is not a free lunch; it has clear costs:

1. High token consumption

Each MCP Server takes up context space. At the beginning of every conversation, the MCP Client needs to tell the LLM, “These tools are available to you,” and those tool definitions consume a large number of tokens.

Once you connect multiple MCP Servers, the tool definitions alone may occupy a large portion of the context window. A community observation noted:

“We're seeing a lot of MCP developers even at enterprise build MCP servers that expose way too much, consuming the entire context window and leading to hallucination.”
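You can get a feel for the overhead with a back-of-the-envelope estimate, using the common heuristic of roughly 4 characters per token for English JSON (the helper and the numbers are illustrative):

```python
import json

def estimate_tokens(tool_definitions: list) -> int:
    """Rough token estimate: ~4 characters per token for English JSON."""
    return len(json.dumps(tool_definitions)) // 4

# A single small tool definition...
tool = {
    "name": "query_database",
    "description": "Execute SQL query on the database",
    "parameters": {"type": "object", "properties": {"sql": {"type": "string"}}},
}

# ...multiplied across 10 servers x 5 tools each, re-injected every conversation:
print(estimate_tokens([tool] * 50))
```

Real tool definitions are usually far richer than this toy example, so the per-conversation cost grows quickly as you add servers.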

2. Connections need to be maintained

MCP Servers are persistent external processes. If a server goes down, the network disconnects, or authentication expires, AI capabilities are affected.

3. Security risks

Anthropic officially warns:

“Use third party MCP servers at your own risk - Anthropic has not verified the correctness or security of all these servers.”

In particular, MCP Servers that can retrieve external content (such as web scraping) may introduce prompt injection risks.

The Value of MCP

Despite these costs, MCP’s value lies in standardization and reusability:

  • Implement once, use everywhere: the same GitHub MCP Server can be used in Claude Desktop, Cursor, and Windsurf
  • Dynamic discovery: AI can discover which tools are available at runtime instead of having them hardcoded
  • Vendor-agnostic: it does not depend on a specific LLM provider

Skill: Progressive Disclosure for Context Engineering

What Is a Skill?

Skill (full name: Agent Skill) is a feature Anthropic released in October 2025. The official definition is:

“Skills are organized folders of instructions, scripts, and resources that agents can discover and load dynamically to perform better at specific tasks.”

Put simply: a Skill is a folder containing instructions, scripts, and resources that AI can automatically discover and load when needed.

At the architecture level, Skills are different from MCP.

In Anthropic’s own words:

“Skills are at the prompt/knowledge layer, whereas MCP is at the integration layer.”

They solve problems at different levels.

The Core Design of Skills: Progressive Disclosure

The most elegant part of Skill design is progressive disclosure. This is one of Anthropic’s important practices in the field of Context Engineering.

The official analogy is:

“Like a well-organized manual that starts with a table of contents, then specific chapters, and finally a detailed appendix.”


Skills load in three layers:

flowchart TD
    subgraph L1["Layer 1: Metadata (always loaded)"]
        A["Skill name + description"]
        B["~100 tokens"]
    end

    subgraph L2["Layer 2: Core instructions (loaded on demand)"]
        C["Full contents of SKILL.md"]
        D["Usually < 5k tokens"]
    end

    subgraph L3["Layer 3+: Support files (deep, on-demand)"]
        E["reference.md"]
        F["scripts/helper.py"]
        G["templates/..."]
    end

    L1 -->|Claude determines relevance| L2
    L2 -->|Needs more information| L3

What is the benefit of this design?

Traditional approaches (such as MCP) load all information into context at the start of a session. If you have 10 MCP Servers and each exposes 5 tools, that means 50 tool definitions — which may consume thousands or even tens of thousands of tokens.

Progressive loading for Skills lets you have dozens of Skills while loading only one or two at a time. Context efficiency improves dramatically.

In Anthropic’s words:

“This means that the amount of context that can be bundled into a skill is effectively unbounded.”

In theory, a single Skill can contain an unlimited amount of knowledge — because only the parts that are actually needed are loaded.
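To make the layering concrete, here is a toy loader that only pays for deeper layers when they are needed. It follows the SKILL.md layout, but the frontmatter parser is deliberately naive (single-line fields only; real Skills use full YAML):

```python
from pathlib import Path

def load_metadata(skill_dir: Path) -> dict:
    """Layer 1: parse only the frontmatter of SKILL.md (~100 tokens)."""
    text = (skill_dir / "SKILL.md").read_text()
    _, frontmatter, _ = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

def load_instructions(skill_dir: Path) -> str:
    """Layer 2: the full body of SKILL.md, loaded only after a match."""
    return (skill_dir / "SKILL.md").read_text().split("---", 2)[2]

def load_support_file(skill_dir: Path, name: str) -> str:
    """Layer 3+: reference docs, scripts, templates -- fetched on demand."""
    return (skill_dir / name).read_text()
```

Because layers 2 and 3 stay on disk until requested, adding more Skills (or fatter reference files) costs almost nothing at session start; that is what makes the bundled context "effectively unbounded."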

Context Engineering: The Idea Behind Skills

Skills are a product of Anthropic’s “Context Engineering” philosophy. Anthropic has a dedicated explanation of this idea:

“At Anthropic, we view context engineering as the natural progression of prompt engineering. Prompt engineering refers to methods for writing and organizing LLM instructions for optimal outcomes. Context engineering refers to the set of strategies for curating and maintaining the optimal set of tokens (information) during LLM inference.”

In simple terms:

  • Prompt Engineering: how to write prompts well
  • Context Engineering: how to manage the information inside the context window

The LLM context window is limited (even a 200k window can be overwhelmed by enough information). The core question of Context Engineering is: within a limited window, what information should you include so AI performs best?

Progressive loading in Skills is a concrete implementation of Context Engineering — only loading the information required for the current task so that every token creates maximum value.

How Skills Are Triggered

Skills are automatically triggered, which is the key difference between Skills and Slash Commands.

The workflow is:

  1. Scanning phase: Claude reads the metadata of all Skills (name + description)
  2. Matching phase: it semantically matches the user request against the Skill descriptions
  3. Loading phase: if a match succeeds, it loads the full SKILL.md
  4. Run phase: it follows the instructions in the Skill and loads supporting files on demand

The user does not need to call a Skill explicitly. For example, if you have a code-review Skill and the user says, “Help me review this code,” Claude can automatically match and load it.
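The scan-and-match flow can be illustrated with a toy matcher. In reality the matching is semantic (Claude itself judges relevance from the description); simple keyword overlap here is only a stand-in:

```python
from typing import Optional

def match_skill(user_request: str, skills: dict) -> Optional[str]:
    """Toy stand-in for the matching phase: pick the skill whose
    description shares the most words with the user request."""
    request_words = set(user_request.lower().split())
    best, best_overlap = None, 0
    for name, description in skills.items():
        overlap = len(request_words & set(description.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best  # None means: no skill is loaded, context stays clean

skills = {
    "code-review": "review code for bugs, security issues, and style violations",
    "excel-analysis": "analyze excel spreadsheets and create pivot tables",
}
print(match_skill("help me review this code", skills))  # → code-review
```

The key property survives the simplification: when nothing matches, nothing is loaded, and the context window stays untouched.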

What is a Skill in essence?

Technically, a Skill is a meta-tool:

“The Skill tool is a meta-tool that manages all skills. Traditional tools like Read, Bash, or Write execute discrete actions and return immediate results. Skills operate differently—rather than performing actions directly, they inject specialized instructions into the conversation history and dynamically modify Claude's run environment.”

A Skill does not perform a concrete action directly; instead, it injects instructions into the conversation history and dynamically modifies Claude’s run environment.

Skill File Structure

A standard Skill looks like this:

my-skill/
├── SKILL.md              # Required: metadata + main instructions
├── reference.md          # Optional: detailed reference document
├── examples.md           # Optional: usage examples
├── scripts/
│   └── helper.py         # Optional: executable script
└── templates/
    └── template.txt      # Optional: template file

SKILL.md is the core file and must include YAML metadata:

---
name: code-review
description: >
  Review code for bugs, security issues, and style violations.
  Use when asked to review code, check for bugs, or audit PRs.
---

# Code Review Skill

## Instructions

When reviewing code, follow these steps:

1. First check for security vulnerabilities...
2. Then check for performance issues...
3. Finally check for code style...

Key fields:

  • name: the unique identifier of the Skill; lowercase letters + digits + hyphens, up to 64 characters
  • description: describes what the Skill does and when to use it, up to 1024 characters

The quality of description directly determines whether the Skill can be triggered correctly.
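These constraints are easy to check mechanically. A sketch of a validator for the two fields (the limits are as documented above; the helper itself is illustrative):

```python
import re

# lowercase letters and digits, hyphen-separated, e.g. "code-review"
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_skill_meta(name: str, description: str) -> list:
    """Return a list of problems with a Skill's name/description fields."""
    problems = []
    if not NAME_RE.fullmatch(name) or len(name) > 64:
        problems.append("name must be lowercase letters/digits/hyphens, <= 64 chars")
    if not description or len(description) > 1024:
        problems.append("description must be non-empty and <= 1024 chars")
    return problems
```

Running such a check in CI catches metadata problems before they silently prevent a Skill from ever triggering.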

Security Considerations for Skills

Skills have a potential security problem: prompt injection.

Researchers found:

“Although Agent Skills can be a very useful tool, they are fundamentally insecure since they enable trivially simple prompt injections. Researchers demonstrated how to hide malicious instructions in long Agent Skill files and referenced scripts to exfiltrate sensitive data.”

Because Skills essentially inject instructions, a malicious Skill can hide harmful instructions inside long files and exfiltrate sensitive data.

Mitigation measures:

  1. Only use Skills from trusted sources
  2. Review the scripts inside Skills
  3. Use allowed-tools to restrict the Skill’s scope of capability

For example:

---
name: safe-file-reader
description: Read and analyze files without making changes
allowed-tools: Read, Grep, Glob  # Only allow read operations
---

Platform Support for Skills

Agent Skills are currently supported in:

  • Claude.ai (Pro, Max, Team, Enterprise)
  • Claude Code
  • Claude Agent SDK
  • Claude Developer Platform

It is worth noting that Skills are currently specific to the Anthropic ecosystem, unlike MCP, which is an open cross-platform protocol.

MCP vs. Skills: Comparing Architectural Layers

Now we can understand the difference between the two from the perspective of architectural layers:

┌─────────────────────────────────────────────────────────┐
│                    User request                          │
└────────────────────────┬────────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────────┐
│              Prompt / Knowledge Layer (Skill)           │
│                                                         │
│   Skills inject specialized knowledge and workflows     │
│   "How to do a certain type of task"                    │
└────────────────────────┬────────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────────┐
│                 LLM Reasoning Layer                     │
│                                                         │
│   Claude / GPT / Gemini, etc.                           │
│   Understand the request and decide which tools         │
│   are needed                                            │
└────────────────────────┬────────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────────┐
│                 Integration Layer (MCP)                 │
│                                                         │
│   MCP connects external systems                         │
│   "What tools and data can be accessed"                 │
└────────────────────────┬────────────────────────────────┘
                         ▼
┌─────────────────────────────────────────────────────────┐
│                    External World                       │
│                                                         │
│   Databases, APIs, file systems, third-party services   │
└─────────────────────────────────────────────────────────┘

Skills sit on the upper layer (knowledge layer), while MCP sits on the lower layer (integration layer).

They are not substitutes; they are complementary. You can:

  • Use MCP to connect to GitHub
  • Use a Skill to teach AI how to conduct Code Review according to team conventions

Detailed Comparison Table

| Dimension | MCP | Skill |
| --- | --- | --- |
| Core role | Connect external systems | Encode specialized knowledge and methodology |
| Architectural layer | Integration layer | Prompt / knowledge layer |
| Protocol foundation | JSON-RPC 2.0 | File system + Markdown |
| Cross-platform | Yes (open protocol, multi-platform support) | No (currently specific to the Anthropic ecosystem) |
| Trigger mechanism | Persistent connection, always available | Automatic trigger based on semantic matching against the description |
| Token consumption | High (tool definitions persistently occupy context) | Low (progressive loading) |
| External access | Can directly access external systems | Cannot directly access them; must work with MCP or built-in tools |
| Complexity | High (requires understanding the protocol and running Servers) | Low (writing Markdown is enough) |
| Reusability | High (standardized protocol, reusable across applications) | Medium (folder-based, can be shared via Git) |
| Dynamic discovery | Yes (discover available tools at runtime) | Yes (discover available Skills at runtime) |
| Security considerations | External content may introduce prompt injection risk | Skill files themselves may contain malicious instructions |

When to Use MCP and When to Use Skills

Use MCP When

  • You need access to external data: database queries, API calls, file system access
  • You need to operate external systems: create GitHub issues, send Slack messages, execute SQL
  • You need real-time information: monitor system status, inspect logs, search engine results
  • You need cross-platform reuse: the same tool needs to work in Claude Desktop, Cursor, and other MCP-enabled applications

Use Skills When

  • You have repetitive workflows: code review, document generation, data analysis
  • You have internal company conventions: code style, commit conventions, document format
  • You need complex multi-step tasks: professional tasks that need detailed guidance
  • You want team-shared best practices: standardized operating procedures
  • You are in token-sensitive scenarios: you need a large amount of knowledge but do not want it to permanently occupy context

Using Both Together

In many cases, the two are used together:

User: "Review PR #456 and provide suggestions according to team conventions"

1. MCP (GitHub) fetches PR information
        ↓
2. Skill (team code review standards) provides the review methodology
        ↓
3. Claude analyzes the code according to the Skill instructions
        ↓
4. MCP (GitHub) submits comments

MCP determines “what can be accessed,” while Skills determine “how to do it.”

The Key to Writing Good Skills

Whether a Skill can be triggered correctly depends 90% on how well the description is written.

A Bad Description

description: Helps with data

That is too broad; Claude cannot tell when it should use it.

A Good Description

description: >
  Analyze Excel spreadsheets, generate pivot tables, and create charts.
  Use when working with Excel files (.xlsx), spreadsheets, or tabular data analysis.
  Triggers on: "analyze spreadsheet", "create pivot table", "Excel chart"

A good description should include:

  1. What it does: a concrete description of its capability
  2. When to use it: clear triggering scenarios
  3. Trigger terms: keywords a user is likely to say

Best Practices

  1. Stay focused: one Skill should do one thing; avoid broad cross-domain Skills
  2. Keep SKILL.md under 500 lines: if it gets too long, split content into supporting files
  3. Test triggering behavior: confirm that relevant requests trigger it and irrelevant requests do not
  4. Use version control: track the change history of your Skills

About Slash Commands

The title of this article is MCP vs. Skills, but many people also ask about Slash Commands, so here is a brief explanation.

A Slash Command is the simplest extension mechanism — essentially a stored prompt that gets injected into the conversation when the user enters /command-name.

The key difference between Skills and Slash Commands is the trigger mechanism:

| | Slash Command | Skill |
| --- | --- | --- |
| Trigger method | The user explicitly enters /command | Claude matches it automatically |
| User control | Full control over when it is triggered | No direct control; Claude decides |

Ask yourself one question: Does the user need explicit control over when it is triggered?

  • Yes → Slash Command
  • No, you want AI to decide automatically → Skill

Summary

MCP and Skills are two different philosophies for extending AI agents:

| | MCP | Skill |
| --- | --- | --- |
| Philosophy | Connectivity | Knowledge packaging |
| The question it asks | “What can AI access?” | “What does AI know how to do?” |
| Layer | Integration layer | Knowledge layer |
| Token strategy | Preload all capabilities | Load knowledge on demand |

Remember this line:

MCP connects AI to data; Skills teach AI what to do with that data.


They are not substitutes; they are complementary. A mature AI agent system needs both.




If you found this article helpful, feel free to follow my GitHub. Below are some of my open-source projects:

Claude Code Skills (loaded on demand, automatic intent recognition, no wasted tokens, intro article):

Full-stack projects (great for learning modern tech stacks):

  • prompt-vault - a Next.js 15 + React 19 + tRPC 11 + Supabase full-stack prompt manager, great for learning modern front-end and full-stack development patterns; clone it, configure a free Supabase instance, and it runs
  • chat_edit - a dual-mode AI app (chat + rich-text editing), built with Vue 3.5 + TypeScript + Vite 5 + Quill 2.0 + IndexedDB

About the author

Sarah Jenkins

Sarah Jenkins is a seasoned OpenClaw developer with a strong focus on optimizing high-performance computing solutions. Her work primarily involves crafting efficient parallel algorithms and enhancing GPU acceleration for complex scientific simulations. Jenkins is renowned for her meticulous attention to detail and her ability to translate intricate theoretical concepts into practical, robust OpenClaw implementations.
