From Getting to Know Agent Skill to Building a Crypto Research Skill
Author: @BlazingKevin_, BlockBooster Researcher
In 2025, the AI Agent track is at a critical turning point from “technological concept” to “engineering implementation.” During this process, Anthropic’s exploration of capability encapsulation unexpectedly facilitated an industry-wide paradigm shift.
On October 16, 2025, Anthropic officially launched Agent Skill. Initially, the official positioning was very restrained—viewing it merely as an auxiliary module to enhance Claude’s performance on specific vertical tasks (such as complex coding logic or targeted data analysis).
However, market and developer feedback far exceeded expectations. It was quickly discovered that this design, which modularizes “capabilities,” demonstrated extremely high decoupling and flexibility in practical engineering. It not only reduced redundancy in prompt tuning but also significantly improved the stability of Agents performing specific tasks. This experience rapidly triggered a chain reaction within the developer community. In a short time, leading productivity tools and IDEs such as VS Code, Codex, and Cursor followed suit, adding underlying support for the Agent Skill architecture.
Faced with spontaneous ecosystem expansion, Anthropic recognized the fundamental universal value of this mechanism. On December 18, 2025, Anthropic made a milestone decision: officially releasing Agent Skill as an open standard.
Subsequently, on January 29, 2026, Anthropic released a comprehensive user manual for Skills, thoroughly breaking down cross-platform, cross-product reusability at the protocol level. This series of actions marked that Agent Skill had shed the label of “Claude-specific accessory” and evolved into a universal underlying design pattern in the entire AI Agent field.
At this point, a question naturally arises: what core pain points does this Agent Skill, embraced by major companies and core developers, fundamentally solve? How does it differ from and relate to the currently popular MCP?
To thoroughly clarify these questions and ultimately apply them to practical crypto industry research and development, this article will progressively explore the following:
Concept Analysis: The essence of Agent Skill and its foundational architecture.
Basic Workflow: Revealing its underlying operational logic and execution flow.
Advanced Mechanisms: Deep dive into the two major advanced usages—Reference and Script.
Practical Cases: Analyzing the fundamental differences between Agent Skill and MCP, and demonstrating their combined application in crypto research scenarios.
What exactly is Agent Skill? In simple terms, it is like a “dedicated manual” that a large model can consult at any time.
When using AI daily, we often encounter a pain point: every time we start a new conversation, we have to copy and paste a long set of instructions. Agent Skill was created to solve this hassle.
For example: suppose you want to create an “Intelligent Customer Service” Agent. You can explicitly write rules in Skill: “When encountering user complaints, the first step must be to soothe emotions, and absolutely no promises of compensation should be made.” Or, if you often need to do “Meeting Summaries,” you can set a template directly in Skill: “When outputting a meeting summary, strictly follow the sections: ‘Participants,’ ‘Key Topics,’ and ‘Final Decisions.’”
With this “manual,” you no longer need to repeat that long set of instructions every time. When the large model receives a task, it will automatically consult the corresponding Skill and immediately know what standards to follow.
Of course, the “manual” is just a simplified analogy for easier understanding. In reality, Agent Skill can do much more than enforce output formats. Its “killer” advanced functions will be detailed in later chapters. But at the initial stage, you can treat it as an efficient set of task instructions.
Next, let’s use the familiar scenario of “Meeting Summary” to see how to create an Agent Skill. The entire process does not require complex programming knowledge.
Based on current mainstream tools (like Claude Code), you need to find (or create) a folder named `.claude/skill` in your user directory. This is the “base camp” for storing all Skills.
Step 1: Create a new folder inside this directory. The folder name is the name of your Agent Skill.
Step 2: Inside this new folder, create a text file named `skill.md`.
Every Agent Skill must have this `skill.md` file. Its purpose is to tell the AI: who I am, what I can do, and how you should work according to my requirements. Opening this file, you’ll see it clearly divided into two parts:
At the very top, enclosed between two short horizontal lines (`---`, the front matter), there are only two core attributes: name and description.
`name`: The name of this Skill, which must match the folder name exactly.
`description`: This is extremely important. It explains the specific purpose of this Skill to the large model. The AI will continuously scan all Skills’ descriptions to determine which Skill to use for the current user query. Therefore, writing a precise and comprehensive description is the key to ensuring your Skill can be accurately activated by AI.
Below the horizontal lines, the rest is the detailed rules for the AI. Anthropic calls this part the “instructions.” This is where you specify the logic the model needs to follow. For example, in the meeting summary case, you can state in plain language: “You must extract participants, topics, and final decisions.”
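Put together, a minimal `skill.md` for the meeting-summary example might look like this (the exact name and wording are illustrative):

```markdown
---
name: meeting-summary
description: Summarizes meeting records into participants, key topics, and final decisions. Use whenever the user asks for a meeting summary or recap.
---

When producing a meeting summary, strictly follow these sections:

1. **Participants**: list everyone present.
2. **Key Topics**: bullet the main discussion points.
3. **Final Decisions**: state each decision and, where mentioned, its owner.

Do not invent decisions that were not explicitly stated in the record.
```

Note that the front matter `name` matches the folder name (`meeting-summary`), and the description says both what the Skill does and when to use it, which is what the model scans during routing.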
Completing these steps, a simple yet highly practical Agent Skill is born.
However, a truly useful Skill often starts with meticulous upfront design. Before typing the first line, clearly defining goals, scope, and success criteria will make your development process much more efficient.
The first step in building a Skill is not to think “what tricks can I make AI do,” but to ask yourself: “What repetitive problems in my daily work do I need to solve?” It’s recommended to initially define 2-3 specific scenarios that this Skill should cover.
Next, set success criteria. How will you know whether your Skill is effective? Before implementation, establish measurable criteria. For example, a quantitative criterion could be the time saved per summary, and a qualitative one could be that the extracted meeting decisions are consistently accurate and complete.
After understanding the basics of Agent Skill, we naturally ask: how does this “manual” actually work during operation?
If you’ve recently experienced products like Manus AI, you probably encountered this scenario: when you pose a specific question, the AI does not immediately produce a lengthy answer or hallucinate, but instead quickly recognizes that “this belongs to a specific Agent Skill.” It then prompts you on the interface, asking whether to invoke that Skill.
When you click “Allow,” the AI behaves as if it’s a different person, strictly following the preset rules to produce the output.
This seemingly simple “request-approval-execution” interaction actually hides a very sophisticated underlying workflow. To fully explain this mechanism, we need to clarify the three core roles involved in the interaction:
User: The person initiating the task request.
Client tool (e.g., Claude Code): The dispatcher and coordinator, acting as an “intermediary.”
Large Language Model: The “brain” responsible for understanding intent and generating the final result.
When we input a request (e.g., “Summarize today’s morning project meeting”), these three roles engage in the following four-step precise cooperation:
Step 1: Lightweight scanning (passing metadata)
After the user inputs the request, the client tool (Claude Code) does not dump all the instructions directly into the large model. Instead, it sends only the user request along with the “names” and “descriptions” of the system’s current Agent Skills (the Metadata layer). Even if you have ten or more Skills installed, the model receives only a “lightweight directory.” This design greatly conserves the model’s attention and avoids interference between unrelated instructions.
Step 2: Accurate intent matching
Upon receiving the user request and the “Skill directory,” the model performs a quick semantic analysis. It finds that the user’s demand is a “meeting summary,” and that the directory contains a Skill called “Meeting Summary Assistant” that perfectly matches the task. The model then informs the client tool: “This task can be handled by ‘Meeting Summary Assistant.’”
Step 3: On-demand loading of full instructions
After the model’s response, the client tool (Claude Code) actually goes into the “Meeting Summary Assistant” folder to read the full `skill.md` content. This is a critical design: only at this point is the complete instruction content loaded, and only for the selected Skill. Other Skills remain quietly in the directory, consuming no resources.
Step 4: Strict execution and output
Finally, the client tool sends the “original user request” along with the full `skill.md` content of the “Meeting Summary Assistant” to the large model. This time, the model is no longer choosing but executing: it strictly follows the rules in skill.md (e.g., extract participants, topics, decisions), generating a highly structured response, which the client then displays to the user.
The workflow in the previous chapter introduces the first core underlying mechanism of Agent Skill—on-demand loading.
Although the name and description of every Skill are always visible to the large model, the detailed instructions are only fetched into the context after a precise hit. This greatly saves token resources. Imagine deploying a dozen Skills like “viral copywriting,” “meeting summary,” and “on-chain data analysis”: the model initially performs only a low-cost “directory retrieval.” Only when a target is selected does the system feed the corresponding `skill.md` into the model. This “on-demand loading” is the first secret to keeping Agent Skill lightweight and efficient.
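A back-of-the-envelope calculation makes the saving concrete. The token counts below are illustrative assumptions, not measured values:

```python
NUM_SKILLS = 12
METADATA_TOKENS = 60       # assumed cost of one name + description
INSTRUCTION_TOKENS = 2500  # assumed average length of a full skill.md

# Eager: every skill.md dumped into context on every request.
eager = NUM_SKILLS * (METADATA_TOKENS + INSTRUCTION_TOKENS)
# Lazy: the directory plus only the one selected Skill's instructions.
lazy = NUM_SKILLS * METADATA_TOKENS + INSTRUCTION_TOKENS

print(eager, lazy)  # 30720 vs 3220: roughly a 10x reduction
```

The gap widens further as you add Skills, since the lazy path grows only by one small metadata entry per Skill.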
However, for advanced users seeking ultimate efficiency, just the first layer of on-demand loading is not enough.
As the business deepens, we often want Skills to become smarter. Take “Meeting Summary Assistant” as an example: we want it not only to summarize but also to provide incremental insights—such as flagging whether a financial decision complies with corporate policies when money is involved, or warning about legal risks when external cooperation is discussed. This allows teams to see compliance alerts at a glance, avoiding secondary checks.
But this introduces a critical engineering dilemma: to enable such capabilities, the entire lengthy “Financial Regulations” and “Legal Texts” would have to be embedded into `skill.md`, making the core instruction file extremely bulky. Even if a meeting is purely technical, the model would be forced to load tens of thousands of words of financial and legal jargon, causing severe token waste and risking “attention distraction.”
Can we implement a “layered on-demand” system—only pulling in financial regulations when the discussion truly involves money?
The answer is yes. The Reference mechanism in the Agent Skill system was designed precisely for this.
The essence of Reference is an external knowledge base triggered by conditions. Here’s how it elegantly solves the above pain points:
Create external reference files: First, add a separate file, known as a Reference, in the Skill directory. For example, name it “Group Financial Manual.md,” detailing reimbursement standards (e.g., “Accommodation subsidy: 500 yuan/night,” “Per diem: 300 yuan/day”).
Set trigger conditions: Then, in the core `skill.md`, add a specific “Financial Reminder Rule.” Clearly specify in natural language: “Trigger only when the meeting content mentions ‘money,’ ‘budget,’ ‘procurement,’ ‘expenses,’ etc. When triggered, read the ‘Group Financial Manual.md’ file. Based on its content, identify whether the meeting decision exceeds limits and specify the approver.”
Once set, a dynamic collaboration begins during the next budget review:
The client tool scans and requests to use the “Meeting Summary Assistant” Skill (first layer of on-demand loading).
The model, reading the meeting record, detects keywords related to “budget” and triggers the rule embedded in skill.md.
At this point, the system prompts you: “Allow reading ‘Group Financial Manual.md’?” (second layer of on-demand loading: Reference dynamic trigger).
After authorization, the model cross-references the meeting content with the dynamically loaded financial standards, producing a summary that includes “participants, topics, decisions,” plus “financial compliance alerts.”
Remember: Reference is strictly conditional. If the meeting is about code logic, with no relation to money, the “Group Financial Manual.md” remains quietly stored on disk, never consuming a token.
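Inside `skill.md`, such a conditional rule might read as follows (the file name and thresholds are the illustrative ones from above):

```markdown
## Financial Reminder Rule

Trigger only when the meeting content mentions "money," "budget,"
"procurement," "expenses," or similar terms. When triggered:

1. Read the file `Group Financial Manual.md` in this Skill's folder.
2. Check every financial decision in the meeting against the limits in
   that manual (e.g., accommodation subsidy 500 yuan/night).
3. In the summary, add a "Financial Compliance Alerts" section that
   flags any decision exceeding its limit and names the required approver.

If the meeting never touches money, do not open the manual.
```

The rule is pure natural language; the conditional loading happens because the model reads the trigger sentence, not because of any special syntax.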
Having explained the Reference mechanism for managing information overload, let’s move to another killer feature of Agent Skill: code execution (Script).
For a mature Agent, merely “looking up info” and “summarizing” are not enough. The real automation loop is to directly perform actions—this is where Script shines.
Continuing with the “Meeting Summary Assistant”: after generating the summary, you might want to upload it to your company’s internal system. To do this, create a Python script named `upload.py` inside the Skill folder, containing the logic to connect to your server.
Then, in the core `skill.md`, add a clear instruction: “When the user mentions ‘upload,’ ‘sync,’ or ‘send to server,’ you must run `upload.py` to push the generated summary.”
When you tell the AI, “The summary looks good, please sync it to the server,” the client tool will request execution of `upload.py`. But note a crucial piece of underlying logic: during this process, the AI does not “read” the code; it simply “executes” it.
This means that even if your Python script contains thousands of lines of complex logic, it consumes almost no tokens from the model’s context. The AI acts like using a “black box” tool—it only cares about how to start it and whether it succeeds, not how it works inside.
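A minimal `upload.py` might look like the sketch below. The endpoint URL is a placeholder and the internal API is assumed; the point is that the Agent only runs the script and checks its exit code, never loading the body into context.

```python
#!/usr/bin/env python3
"""Push a generated meeting summary to an internal server.

The Agent treats this file as a black box: it runs it and checks the
exit status, so none of this code enters the model's context.
"""
import json
import sys
import urllib.request

# Placeholder; replace with your company's real internal endpoint.
UPLOAD_URL = "https://internal.example.com/api/summaries"

def build_request(summary_text: str) -> urllib.request.Request:
    """Package the summary as a JSON POST request."""
    payload = json.dumps({"summary": summary_text}).encode("utf-8")
    return urllib.request.Request(
        UPLOAD_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def main(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        req = build_request(f.read())
    with urllib.request.urlopen(req) as resp:  # the actual network call
        return 0 if resp.status == 200 else 1

if __name__ == "__main__" and len(sys.argv) > 1:
    # A non-zero exit code signals failure back to the Agent.
    sys.exit(main(sys.argv[1]))
```

Whether the script is ten lines or a thousand, its token cost to the model is the same: effectively zero.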
This highlights the fundamental difference between Reference and Script:
Reference (read): It “transfers” external file content into the model’s context as reference, consuming tokens.
Script (run): It is triggered externally and runs outside the model’s context, not consuming tokens during execution.
A caution: when writing `skill.md`, you must clearly state the trigger conditions and execution commands for scripts. If the AI encounters ambiguous instructions, it may try to “peek” into the code, wasting tokens. The rule of thumb: define rules as clearly as possible, leaving no ambiguity.
At this point, we’ve assembled all core components of Agent Skill. It’s time to step back and summarize from a global perspective.
If you carefully review the loading process, you’ll find that the design philosophy of Agent Skill is a highly refined progressive disclosure mechanism. To maximize efficiency and minimize computation, the system is strictly divided into three layers, each with increasingly strict trigger conditions:
Layer 1: Metadata Layer (Always loaded)
Contains all Skill names and descriptions. It acts like a “permanent directory” for the large model. Before each task, the model scans this layer for initial routing.
Layer 2: Instruction Layer (On-demand loading)
Corresponds to the detailed rules in skill.md. Only when Layer 1 confirms the task’s attribution does the model “unfold” this layer, loading specific rules into its context.
Layer 3: Resource Layer (On-demand within on-demand)
The deepest and most substantial layer, containing three core components:
Reference: e.g., “Group Financial Manual.md,” loaded only when specific conditions are triggered (e.g., mention of “money”).
Script: e.g., “upload.py,” run only when specific actions are required (e.g., “upload”).
Asset: e.g., company logos, custom fonts, PDF templates used in report generation. They are invoked only at the final production stage.
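The three layers map directly onto a single Skill’s folder on disk (file names are the illustrative ones used throughout this article; the logo is an assumed Asset example):

```
.claude/skill/
└── meeting-summary/
    ├── skill.md                    # Layer 1: front matter metadata + Layer 2: instructions
    ├── Group Financial Manual.md   # Layer 3: Reference
    ├── upload.py                   # Layer 3: Script
    └── logo.png                    # Layer 3: Asset
```

Everything below `skill.md` in this tree stays out of the model’s context until its specific trigger fires.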
Having discussed advanced uses of Agent Skill, many readers familiar with AI protocols might feel a déjà vu: the Script mechanism of Agent Skill seems very similar to the recently popular MCP. Aren’t they both about connecting and operating external systems?
Given the functional overlap, which should be chosen when building crypto research workflows?
Anthropic’s documentation sums up the core difference succinctly:
“MCP connects Claude to data. Skills teach Claude what to do with that data.”
This statement hits the core. MCP is essentially a “data pipeline,” standardizing external information input (e.g., querying latest block height on a chain, fetching real-time exchange K-line data, reading local research PDFs). Agent Skill, on the other hand, is a set of “Standard Operating Procedures (SOPs),” guiding the model on how to act upon these data (e.g., reports must include tokenomics models, conclusions must contain risk warnings).
Some tech enthusiasts might object: “Since Agent Skill can run Python code, why not just write database connection or API calls directly in Script? Agent Skill can do MCP’s job entirely!”
Indeed, in engineering terms, Agent Skill can fetch data. But doing so is clumsy and unprofessional.
This “unprofessionalism” shows up in two critical weaknesses:
Operation mechanism and state persistence: Agent Skill scripts are stateless; each trigger is an independent run, ending immediately. MCP is a long-running service that maintains persistent connections (like WebSocket links), which scripts cannot do.
Security and stability: Allowing AI to run a high-privilege Python script each time poses huge security risks; MCP offers standardized isolation and authentication mechanisms.
Therefore, the most powerful approach in building advanced crypto research systems is not choosing one over the other, but combining “MCP for data supply, Skill for orchestration.”
To illustrate this, let’s analyze how Web3 developer Cryptoxiao built an automated crypto news intelligence center using API-enhanced Skills and MCP.
This Skill architecture encapsulates MCP’s API capabilities into an intelligent agent for research purposes. It grants AI four core modules:
Module 1: News Source Discovery
Entry point for understanding the tool’s capabilities. Through discovery.py, AI can dynamically learn available channels.
Module 2: Multi-dimensional News Retrieval
The core query module, implemented in news.py, offers various retrieval methods from simple to complex.
Module 3: AI-powered Analysis and Insights
Utilizes pre-existing AI analysis results from 6551.io backend, allowing AI to query “opinions” rather than just “facts.”
Key insight: When AI calls these tools, it does not know that MCP servers internally perform “fetch-then-filter” steps. To AI, it’s just calling a “magical” tool that returns “high-quality news” or “positive signals,” greatly simplifying the workflow.
Module 4: Real-time News Stream
The killer feature, implemented in realtime.py, enables AI to listen to live events.
Once these MCP-driven tools are integrated into the Agent Skill instruction flow, your AI transforms from a general chat assistant into a Wall Street-level Web3 analyst. It can fully automate complex workflows that previously required hours of manual research:
Example Workflow 1: Rapid Due Diligence on New Coins
Command: “Deeply research the newly launched @NewCryptoCoin project.”
Basic reconnaissance: Agent automatically calls opentwitter.get_twitter_user for official Twitter data.
Endorsement cross-verification: Calls opentwitter.get_twitter_kol_followers to analyze which top KOLs or VCs are quietly following the project.
Global media search: Calls opennews.search_news_by_coin for media reports and PR.
Signal-to-noise filtering: Calls opennews.get_high_score_news to filter out worthless news, focusing on high-quality articles.
Report output: Agent generates a standard due diligence report including fundamentals, community token holdings, media buzz, and AI overall rating.
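The first workflow can be sketched as a thin orchestrator. The tool names come from the article; their signatures and return shapes below are assumptions for illustration, passed in as plain callables rather than real MCP calls.

```python
def due_diligence(handle: str, tools: dict) -> dict:
    """Assemble a due-diligence report for a token project.

    `tools` maps the article's tool names to callables; a real Agent
    would invoke them through an MCP client instead.
    """
    return {
        "profile": tools["opentwitter.get_twitter_user"](handle),
        "kol_followers": tools["opentwitter.get_twitter_kol_followers"](handle),
        "media": tools["opennews.search_news_by_coin"](handle),
        "high_signal": tools["opennews.get_high_score_news"](handle),
    }

# Stub tools standing in for live MCP servers.
stub_tools = {name: (lambda h, n=name: {"tool": n, "query": h}) for name in [
    "opentwitter.get_twitter_user",
    "opentwitter.get_twitter_kol_followers",
    "opennews.search_news_by_coin",
    "opennews.get_high_score_news",
]}

report = due_diligence("@NewCryptoCoin", stub_tools)
```

The Skill’s instructions decide the order of calls and the report structure; MCP decides only what each call returns, which is exactly the division of labor described above.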
Example Workflow 2: Real-time Event-driven Trading Signal Detection
Command: “Monitor the ZK (Zero-Knowledge Proof) sector for sudden trading opportunities.”
Deploy sentinel: Agent calls opennews.subscribe_latest_news to establish WebSocket connection, listening for news containing “ZK” or “Zero-Knowledge Proof” related to specific tokens.
Capture positive signals: When detecting breakthrough news about a project (e.g., SomeCoin) with high confidence and bullish sentiment, immediately trigger alerts.
Community sentiment resonance: Millisecond-level calls to Twitter tools to check if key ZK KOLs are amplifying the event.
Alarm trigger: If conditions such as “media first release + community resonance” are met, push high-confidence Alpha trading alerts to the user.
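The second workflow reduces to a filter over an event stream. The event fields (`keywords`, `sentiment`, `confidence`, `coin`) and the 0.8 confidence threshold are assumptions for illustration:

```python
def find_alpha_alerts(events, kol_amplified) -> list:
    """Return coins with high-confidence bullish ZK news that key KOLs
    are also amplifying ("media first release + community resonance")."""
    alerts = []
    for e in events:
        bullish_zk = ("ZK" in e["keywords"]
                      and e["sentiment"] == "bullish"
                      and e["confidence"] > 0.8)
        if bullish_zk and kol_amplified(e["coin"]):
            alerts.append(e["coin"])
    return alerts

# Example stream with one qualifying event.
stream = [
    {"coin": "SomeCoin", "keywords": ["ZK", "mainnet"],
     "sentiment": "bullish", "confidence": 0.92},
    {"coin": "OtherCoin", "keywords": ["DeFi"],
     "sentiment": "bullish", "confidence": 0.95},
    {"coin": "QuietCoin", "keywords": ["ZK"],
     "sentiment": "bearish", "confidence": 0.90},
]

alerts = find_alpha_alerts(stream, kol_amplified=lambda coin: coin == "SomeCoin")
```

In production, the event stream would come from the `subscribe_latest_news` WebSocket subscription and the KOL check from the Twitter tools; only the filtering rules live in the Skill.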
Through this combination of Agent Skill behavior regulation and MCP data pipeline, a highly automated, professional crypto research workflow is fully realized.
About BlockBooster
BlockBooster is a next-generation alternative asset management firm focused on the digital age. We leverage blockchain technology to invest in, incubate, and manage core assets—from Web3 native projects to real-world assets (RWA). As value co-creators, we aim to unlock and realize long-term asset potential, capturing exceptional value for our partners and investors amid the wave of the digital economy.
Disclaimer
This article/blog is for informational purposes only, reflecting the author’s personal views and not representing BlockBooster’s stance. It does not intend to provide: (i) investment advice or recommendations; (ii) offers or solicitations to buy, sell, or hold digital assets; or (iii) financial, accounting, legal, or tax advice. Holding digital assets, including stablecoins and NFTs, involves high risks, significant price volatility, and potential total loss. You should carefully consider whether trading or holding digital assets is suitable for your financial situation. For specific questions, consult your legal, tax, or investment advisors. The information provided herein (including market data and statistics, if any) is for general reference only. While reasonable efforts have been made to ensure accuracy, we are not responsible for any factual errors or omissions.